
R2sample

library(R2sample)

This package brings together a number of routines for the two-sample problem for univariate data: there are two data sets x and y, and we want to test whether they were generated by the same probability distribution.

The highlights of the package are illustrated in the examples below.

Example 1

We generate two data sets of sizes 10 and 12, respectively, from standard normal distributions and run the test:

set.seed(123)

Note: all examples are run with the arguments B=500 and maxProcessor=2 in order to pass devtools::check()

x1 = rnorm(10)
y1 = rnorm(12)
twosample_test(x1, y1, B=500, maxProcessor = 2)
#> $statistics
#>       KS   Kuiper      CvM       AD       LR       ZA       ZK       ZC 
#>  0.20000  0.31660  0.05303  0.27040  0.12230 -3.17700  0.30790 -2.78800 
#>   Wassp1 ES large ES small EP large EP small 
#>  0.27970  1.03000  0.13000  0.49000  0.10000 
#> 
#> $p.values
#>       KS   Kuiper      CvM       AD       LR       ZA       ZK       ZC 
#>   0.9320   0.9080   0.9320   0.9320   0.8580   0.9540   0.9540   0.9540 
#>   Wassp1 ES large ES small EP large EP small 
#>   0.9540   0.5985   0.9369   0.7831   0.9490

In this case the null hypothesis is true: both data sets were generated by a standard normal distribution. Indeed, all 13 tests have p values much larger than (say) 0.05 and therefore correctly fail to reject the null hypothesis.

Alternatively one might wish to carry out a select number of tests and then find a correct p value for the combined test. This can be done as follows:

twosample_test_adjusted_pvalue(x1, y1, B=c(500,500))
#> p values of individual tests:
#> ES small :  0.9369
#> ZA :  0.02
#> ZK :  0
#> Wassp1 :  0.05
#> Kuiper :  0.08
#> adjusted p value of combined tests: 0.014

By default for continuous data this runs five tests (a chi square test with a small number of bins, ZA, ZK, Wassp1 and Kuiper). It then finds the smallest p value (here 0) and adjusts it for the multiple testing problem, resulting in an overall p value of 0.014.

New Tests

Say we wish to use two new tests: one based on the difference in standard deviations and one based on the difference in standardized means:

\[ \begin{aligned} &TS_1 = \vert sd(x) - sd(y) \vert \\ &TS_2 = \vert \bar{x}/sd(x) - \bar{y}/sd(y) \vert \\ \end{aligned} \]

myTS1 = function(x, y) {
   out = c(0, 0)
   out[1] = abs(sd(x) - sd(y))
   out[2] = abs(mean(x)/sd(x) - mean(y)/sd(y))
   names(out) = c("std", "std t test")
   out
}

Then we can simply run

twosample_test(x1, y1, TS=myTS1, B=500, maxProcessor = 2)
#> $statistics
#>        std std t test 
#>    0.05832    0.01233 
#> 
#> $p.values
#>        std std t test 
#>      0.846      0.984

Note that the output vector of the routine has to be a named vector. If the routine is written in Rcpp, parallel programming is not available.

If needed one can also supply the argument TSextra. This has to be a list, which will be passed directly to TS. In this way one can add items that might be needed for calculation of the test statistics.
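As an illustration, here is a minimal sketch of how TSextra might be used. The helper myTS3 and the trim item are made up for this example, and it is assumed that twosample_test passes the TSextra list to TS as its third argument:

myTS3 = function(x, y, TSextra) {
   # TSextra is assumed to carry a trimming fraction used by the statistic
   out = abs(mean(x, trim=TSextra$trim) - mean(y, trim=TSextra$trim))
   names(out) = "trimmed means"  # output has to be a named vector
   out
}
# twosample_test(x1, y1, TS=myTS3, TSextra=list(trim=0.1), B=500)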

Example 2

Here we generate two data sets of size 1000 and 1200 respectively from a binomial distribution with 5 trials and success probabilities of 0.5 and 0.55, respectively.

x2 = table(c(0:5, rbinom(1000, 5, 0.5))) - 1  # pad with 0:5 so every value appears, then remove the padding
y2 = table(c(0:5, rbinom(1200, 5, 0.55))) - 1
rbind(x2, y2)
#>     0   1   2   3   4  5
#> x2 22 156 315 320 154 33
#> y2 20 119 303 435 260 63
twosample_test(x2, y2, vals=0:5, B=500, maxProcessor = 2)$p.values
#>     KS Kuiper    CvM     AD     LR     ZA Wassp1  large  small 
#>      0      0      0      0      0      0      0      0      0

In this case the null hypothesis is false, and all nine tests for discrete data have p values much smaller than (say) 0.05, so they correctly reject the null hypothesis.

Notice that it is the presence of the vals argument that tells the twosample_test command that the data is discrete. The vectors x and y are the counts. Note that the lengths of the three vectors have to be the same, and no value of vals may have both x and y counts equal to 0.
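The following small helper (not part of R2sample) illustrates these rules: it tabulates two raw samples on a common grid of values and drops any value where both counts are zero:

make_counts = function(xraw, yraw, vals) {
   x = as.vector(table(factor(xraw, levels=vals)))
   y = as.vector(table(factor(yraw, levels=vals)))
   keep = (x + y) > 0  # drop values where both counts are 0
   list(x=x[keep], y=y[keep], vals=vals[keep])
}
# Example:
# do.call(twosample_test,
#   c(make_counts(rbinom(1000, 5, 0.5), rbinom(1200, 5, 0.55), 0:5), list(B=500)))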

New Test

Again we might want to run a different test, say one based on the difference in standardized means. This time we will implement the new test using Rcpp:

#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector myTS2(IntegerVector x, 
                         IntegerVector y,
                         NumericVector vals) {
  
  Rcpp::CharacterVector methods=CharacterVector::create("std t test");    
  int const nummethods=methods.size();
  int k=x.size(), n=sum(x), m=sum(y), i;
  double meanx=0.0, meany=0.0, sdx=0.0, sdy=0.0;
  NumericVector TS(nummethods);
  TS.names() =  methods;
  for(i=0;i<k;++i) {
    meanx = meanx + x[i]*vals[i]/n;
    meany = meany + y[i]*vals[i]/m;
  }
  for(i=0;i<k;++i) {
    sdx = sdx + x[i]*(vals[i] - meanx)*(vals[i] - meanx);
    sdy = sdy + y[i]*(vals[i] - meany)*(vals[i] - meany);
  }  
  sdx = sqrt(sdx/(n-1));
  sdy = sqrt(sdy/(m-1));
  TS(0) = std::abs(meanx/sdx - meany/sdy);
  return TS;
}

Now

twosample_test(x2, y2, vals=0:5, TS=myTS2, B=500)
#> $statistics
#> std t test 
#>     0.2053 
#> 
#> $p.values
#> std t test 
#>      0.008

Permutation Method

For all the tests except the chi square variants the p values are found via the permutation method. The idea of a permutation test is simple. Say we have data sets \(x_1,..,x_n\) and \(y_1,..,y_m\). They are combined into one large data set, permuted and split again into sets of sizes n and m. Under the null hypothesis these new data sets come from the same distribution as the actual data. Therefore, by calculating the test statistics for them and repeating this many times, one can build up the distributions of the test statistics and find p values from them.
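As a sketch of this idea (not the package's internal implementation, which is done in compiled code), a permutation p value for a single statistic could be found like this, assuming large values of the statistic argue against the null:

perm_pvalue = function(x, y, stat, B=1000) {
   z = c(x, y)
   n = length(x)
   T0 = stat(x, y)  # statistic for the actual data
   Tb = replicate(B, {zp = sample(z); stat(zp[1:n], zp[-(1:n)])})
   mean(Tb >= T0)   # proportion of permuted statistics at least as large
}
# Example:
# perm_pvalue(rnorm(50), rnorm(60), function(x, y) ks.test(x, y)$statistic)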

In the discrete case the situation is somewhat more complicated. Say we have data sets with values \(v_1,..,v_k\) and counts \(x_1,..,x_k\) and \(y_1,..,y_k\). One can then mimic the description above by expanding these to yield a large data set with \(x_1+y_1\) \(v_1\)’s, \(x_2+y_2\) \(v_2\)’s and so on. Then this vector is permuted and split as described above. The drawback of this sampling method is that its computation time increases with the sample size. Alternatively R2sample provides the following option. One can show that the permuted data sets have the following distribution: let \(n=\sum x_i\) and \(m=\sum y_i\); then

\[P(X=a|x,y)=\frac{\prod_{j=1}^k{{x_j+y_j}\choose{a_j}}}{{n+m}\choose{n}}\] for any \(a\) such that \(0\le a_j \le x_j+y_j, j=1,..,k\) and \(\sum a_j=n\). Sampling from this distribution can be done using MCMC. R2sample uses the Metropolis-Hastings algorithm as follows: in each step two coordinates \(i\) and \(j\) with \(1\le i,j \le k, i\ne j\) are chosen randomly. Then the proposal \(a_i=x_i+1, a_j=x_j-1, b_i=y_i-1, b_j=y_j+1\), with all other coordinates unchanged (\(a=x, b=y\)), is accepted with probability

\[\frac{P(X=a|x,y)}{P(X=x|x,y)}=\frac{(x_i+y_i-a_i)a_j}{(x_j+y_j+1-a_j)(a_i+1)}\] As is generally the case with MCMC simulation, one needs a larger number of simulations than with independence sampling. The default is independence sampling, with a simulation size of 5000 for testing and (1000, 1000) for power estimation. If the sample sizes are very large this might run quite slowly, and it might be advisable to change to samplingmethod="MCMC".
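Here is a sketch of one such Metropolis-Hastings step in R (the package does this internally in compiled code; mh_step is a made-up name). The vector a holds the current counts for the first sample and N = x + y the fixed combined counts:

mh_step = function(a, N) {
   k = length(a)
   ij = sample(k, 2)  # pick two distinct coordinates i and j
   i = ij[1]; j = ij[2]
   # the proposal a[i]+1, a[j]-1 has to stay inside the support
   if(a[j] == 0 || a[i] == N[i]) return(a)
   # acceptance probability from the formula above
   ratio = (N[i] - a[i])*a[j] / ((N[j] + 1 - a[j])*(a[i] + 1))
   if(runif(1) < ratio) {a[i] = a[i] + 1; a[j] = a[j] - 1}
   a
}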

Of course MCMC sampling suffers from the usual issues of having to reach the stationary distribution. In the current situation a two-sample test is likely only done if the two data sets are reasonably close, and so the MCMC algorithm should start essentially at the stationary distribution and no burn-in period is required.

On a standard PC (as of 2024) independence sampling of two data sets with 10,000 observations each takes only a few seconds and so is perfectly feasible.

The Testing Methods

In the continuous case we have a sample x of size n and a sample y of size m. In the discussion below both are assumed to be ordered. We denote by z the combined sample. In the discrete case we have a random variable with values v, and x and y are the counts. We denote by k the number of observed values.

We denote the edf’s of the two data sets by \(\widehat{F}\) and \(\widehat{G}\), respectively. Moreover we denote the empirical distribution function of the combined data set by \(\widehat{H}\).

Chi square test with a large number of bins

In the case of continuous data, the data is binned into nbins[1] equal probability bins, with the default nbins[1]=100. In the case of discrete data the number of classes comes from vals, the values observed. Then a standard chi square test is run and the p value is found using the usual chi square approximation.

Chi square test with a small number of bins

Many previous studies have shown that a chi square test with a large number of classes often has very poor power. So in the case of continuous data this test uses nbins[2] equal probability bins, with the default nbins[2]=10. In the case of discrete data, if the number of classes exceeds nbins[2]=10, classes are combined until there are nbins[2]=10 classes.

In either case the p values are found using the usual chi square approximation. The degrees of freedom are the number of bins. For a literature review of chi square tests see (W. Rolke 2021).
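As an illustration of the equal probability binning idea for continuous data, here is a sketch using quantiles of the combined sample; chisq_binned is a made-up helper and the package's own binning may differ in details:

chisq_binned = function(x, y, nbins=10) {
   brk = quantile(c(x, y), probs=seq(0, 1, length=nbins+1))
   brk[1] = -Inf; brk[nbins+1] = Inf  # make the outer bins catch everything
   counts = rbind(table(cut(x, brk)), table(cut(y, brk)))
   chisq.test(counts)$p.value  # the usual chi square approximation
}
# Example: chisq_binned(rnorm(1000), rnorm(1200), nbins=10)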


For all other tests the p values are found using a permutation test. The tests are described next.

Kolmogorov-Smirnov test (KS)

This test is based on the quantity \(\max\{|\widehat{F}(x)-\widehat{G}(x)|:x\in \mathbf{R}\}\). In the continuous case (without ties) the function \(\widehat{F}(x)-\widehat{G}(x)\) either takes a jump of size \(1/n\) up at a point in the x data set, a jump of size \(1/m\) down at a point in the y data set, or is flat. Therefore the test statistic is

\[\max\left\{ \left\vert \sum_{i=1}^{j} \left( \frac1n I\{z_i \in x\}-\frac1m I\{z_i \in y\} \right) \right\vert : j=1,..,n+m\right\}\]

In the discrete case the jumps have sizes \(x_i/n\) and \(y_i/m\), respectively, and the test statistic is

\[\max\left\{ \left\vert \sum_{i=1}^{j} \left( \frac{x_i}{n} -\frac{y_i}{m} \right) \right\vert : j=1,..,k\right\}\]

This test was first proposed in (Kolmogorov 1933) and (Smirnov 1939) and is one of the most widely used tests today.

There is a close connection between the edf’s and the ranks of the x and y in the combined data set. Using this one can also calculate the KS statistic, and many statistics to follow, based on these ranks. This is used in many software routines, for example the ks.test routine in R. However, when applied to discrete data these formulas can fail badly because of the many ties, and so we will not use them in our routine.

Kuiper test (Kuiper)

This test is closely related to Kolmogorov-Smirnov, but it uses the sum of the largest positive and negative differences between the edf’s as a test statistic:

\[T^+=\max\{\widehat{F}(z_j)-\widehat{G}(z_j):j=1,..,n+m\}\] \[T^-=\max\{\widehat{G}(z_j)-\widehat{F}(z_j):j=1,..,n+m\}\] and the test statistic is \(T^+ + T^-\). The discrete case follows directly. This test was first proposed in (Kuiper 1960).

Cramér-von Mises test (CvM)

This test is based on the integrated squared difference between the edf’s:

\[\frac{nm}{(n+m)^2}\sum_{i=1}^{n+m} \left( \widehat{F}(z_i)-\widehat{G}(z_i) \right)^2 \]

The extension of the Cramér-von Mises criterion to the two-sample problem is discussed in (T. W. Anderson 1962).
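Written out in R, this statistic can be computed directly from the edf’s. A minimal sketch (cvm_stat is a made-up name, not the package's implementation):

cvm_stat = function(x, y) {
   z = c(x, y)
   n = length(x); m = length(y)
   Fhat = ecdf(x); Ghat = ecdf(y)  # the two empirical distribution functions
   n*m/(n+m)^2 * sum((Fhat(z) - Ghat(z))^2)
}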

Anderson-Darling test (AD)

This test is based on

\[nm\sum_{i=1}^{n+m-1} \frac{\left( \widehat{F}(z_i)-\widehat{G}(z_i) \right)^2}{i(n+m-i)} \]

It was first proposed in (Theodore W. Anderson, Darling, et al. 1952).

Lehmann-Rosenblatt test (LR)

Let \(r_i\) and \(s_i\) be the ranks of the x’s and y’s in the combined sample; then the test statistic is given by

\[\frac1{nm(n+m)}\left[n\sum_{i=1}^n(r_i-i)^2+m\sum_{i=1}^m(s_i-i)^2\right]\]

This test goes back to (Lehmann 1951) and (Rosenblatt 1952).

Zhang’s tests (ZA, ZK, ZC)

These tests are based on the paper (Zhang 2006). Note that the calculation of ZC is the one exception to the rule: even in the discrete data case the calculation of the test statistic requires loops of lengths n and m. The computation time will therefore increase with the sample sizes, and for very large data sets this test might have to be excluded.

Wasserstein test (Wassp1)

This test is based on the Wasserstein p=1 metric. It compares the quantiles of the x’s and y’s in the combined data set. If n=m the test statistic in the continuous case is very simple:

\[ \frac1n\sum_{i=1}^n |x_i-y_i|\] If \(n\ne m\) it is first necessary to find the respective quantiles of the x’s and y’s in the combined data set. In the discrete case the test statistic is

\[\sum_{i=1}^{k-1} \vert \sum_{j=1}^i\frac{x_j}{n}-\sum_{j=1}^i\frac{y_j}{m} \vert(v_{i+1}-v_i)\]

For a discussion of the Wasserstein distance see (Vaserstein 1969).
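As a sketch, both test statistics can be computed directly from the formulas above (these helpers are illustrations, not the package's implementation):

wassp1_cont = function(x, y) mean(abs(sort(x) - sort(y)))  # continuous case, n = m
wassp1_disc = function(x, y, vals) {
   n = sum(x); m = sum(y); k = length(vals)
   d = abs(cumsum(x)/n - cumsum(y)/m)
   sum(d[-k]*diff(vals))  # sum over i=1,..,k-1 of |.|*(v[i+1]-v[i])
}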

The Arguments

The main arguments of the routine are the data (x and y, plus vals in the discrete case), the number of permutations B, the bin numbers nbins=c(100, 10) for the chi square tests, samplingmethod ("independence" or "MCMC") for discrete data, maxProcessor for parallel computing, TS and TSextra for user-supplied test statistics, and doMethods, a character vector naming the tests to run.

Say for example we want to use only the KS and AD tests:

x=rnorm(10)
y=rnorm(12)
twosample_test(x, y, B=500, maxProcessor = 2, doMethods=c("KS","AD"))
#> $statistics
#>        KS        AD 
#> 0.1833333 0.2442214 
#> 
#> $p.values
#>    KS    AD 
#> 0.928 0.944

The Output

A list with vectors statistics (the test statistics) and p.values.

Shiny App

The tests can also be run via a shiny app. To do so run

run_shiny()

twosample_power and plot_power

The routine twosample_power allows the estimation of the power of the various tests, and plot_power draws the corresponding power graph with the methods sorted by their mean power.

Note that due to the use of permutation tests the power calculations are fairly slow. Because of the time constraints imposed by CRAN the following examples are not run. A full power study with (say) 25 alternatives and fairly large data sets might take several hours to run.

New tests can be used in the same way as above.

Example 3

Say we wish to find the powers of the tests when one data set comes from a standard normal distribution with sample size 100 and the other from a t distribution with 1-10 degrees of freedom and sample size 200.

plot_power(twosample_power(
  f=function(df) list(x=rnorm(100), y=rt(200, df)), df=1:10)
)

The first argument f of twosample_power is a function that generates two data sets and returns them as a list with elements x and y (plus vals in the discrete case). The next argument(s) are the parameter(s) passed to f, with one power estimate computed for each parameter value; further arguments such as B, TS and maxProcessor are passed on to the testing routine.

The arguments of plot_power are a matrix of powers created by twosample_power, (optionally) Smooth=FALSE if no smoothing is desired, and the name of the variable shown on the x axis.

Example 4

We wish to find the powers of the tests when the data set x consists of 100 observations from a binomial distribution with n=10 and p=0.5, and y of 120 observations from a binomial distribution with n=10 and a number of different success probabilities p. Note that in either case not all of the possible values 0-10 will actually appear in every data set, so we need to take some care to ensure that x, y and vals match every time:

plot_power(twosample_power(
  function(p) {
    vals=0:10
    x=table(c(vals, rbinom(100, 10, 0.5))) - 1 # make sure each vector has length 11
    y=table(c(vals, rbinom(120, 10, p))) - 1   # and names 0-10
    vals=vals[x+y>0] # only vals with at least one count
    list(x=x[as.character(vals)],y=y[as.character(vals)], vals=vals)
  },
  p=seq(0.5, 0.6, length=5)
), "p")

Example 5

We want to assess the effect of the sample size when one data set comes from a standard normal and the other from a normal with mean 1 and standard deviation 1:

plot_power(twosample_power(
  f=function(n) list(x=rnorm(n), y=rnorm(n, 1)),
  n=10*1:10), "n")

Adjusted p values for Several Tests

As no single test can be relied upon to consistently have good power, it is reasonable to employ several of them. We would then reject the null hypothesis if any of the tests does so, that is, if the smallest p-value is less than the desired type I error probability \(\alpha\).

This procedure clearly suffers from the problem of simultaneous inference, and the true type I error probability will be much larger than \(\alpha\). It is however possible to adjust the p value so it does achieve the desired \(\alpha\). This can be done as follows:

We generate a number of data sets under the null hypothesis. Generally about 1000 will be sufficient. Then for each simulated data set we apply the tests we wish to include and record the smallest p value. Here is an example.

pvals=matrix(0,1000,13)
for(i in 1:1000) 
  pvals[i, ]=R2sample::twosample_test(runif(100), runif(100), B=1000)$p.values

Next we find the smallest p value in each run for two selections of five methods. One is the selection found to be best above, namely Zhang’s ZC and ZK methods, the methods by Kuiper and Wasserstein, as well as a chi square test with a small number of bins. As a second selection we use the methods by Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Cramér-von Mises and Lehmann-Rosenblatt, which for this null data turn out to be highly correlated.

colnames(pvals)=names(R2sample::twosample_test(runif(100), runif(100), B=1000)$p.values)
p1=apply(pvals[, c("ZK", "ZC", "Wassp1", "Kuiper", "ES small" )], 1, min)
p2=apply(pvals[, c("KS", "Kuiper", "AD", "CvM", "LR")], 1, min)

Next we find the empirical distribution function for the two sets of p values and draw their graphs. We also add the curves for the case of five identical tests and the case of five independent tests, which of course corresponds to the Bonferroni correction. The data for the cdfs is in the inst/extdata directory of the package.

tmp=readRDS("../inst/extdata/pvaluecdf.rds")
Tests=factor(c(rep("Identical Tests", nrow(tmp)),
        rep("Correlated Selection", nrow(tmp)),
        rep("Best Selection", nrow(tmp)),
        rep("Independent Tests", nrow(tmp))),
        levels=c("Identical Tests",  "Correlated Selection", 
                 "Best Selection", "Independent Tests"),
        ordered = TRUE)
dta=data.frame(x=c(tmp[,1],tmp[,1],tmp[,1],tmp[,1]),
          y=c(tmp[,1],tmp[,3],tmp[,2],1-(1-tmp[,1])^5),
          Tests=Tests)
ggplot2::ggplot(data=dta, ggplot2::aes(x=x,y=y,col=Tests))+
  ggplot2::geom_line(linewidth=1.2)+
  ggplot2::labs(x="p value", y="CDF")+
  ggplot2::scale_color_manual(values=c("blue", "red", "orange", "green"))

References

Anderson, T. W. 1962. “On the Distribution of the Two-Sample Cramér-von Mises Criterion.” The Annals of Mathematical Statistics 33 (3): 1148–59.
Anderson, Theodore W., Donald A. Darling, et al. 1952. “Asymptotic Theory of Certain Goodness of Fit Criteria Based on Stochastic Processes.” The Annals of Mathematical Statistics 23 (2): 193–212.
Kolmogorov, A. N. 1933. “Sulla Determinazione Empirica Di Una Legge Di Distribuzione.” Giornale dell’Istituto Italiano degli Attuari 4: 83–91.
Kuiper, N. H. 1960. “Tests Concerning Random Points on a Circle.” Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen 63: 38–47.
Lehmann, E. L. 1951. “Consistency and Unbiasedness of Certain Nonparametric Tests.” Annals of Mathematical Statistics 22 (1): 165–79.
Rosenblatt, M. 1952. “Limit Theorems Associated with Variants of the von Mises Statistic.” Annals of Mathematical Statistics 23: 617–23.
Smirnov, N. V. 1939. “Estimate of Deviation Between Empirical Distribution Functions in Two Independent Samples.” Bull. Moscow Univ. 2: 3–16.
Vaserstein, L. N. 1969. “Markov Processes over Denumerable Products of Spaces, Describing Large Systems of Automata.” Problemy Peredachi Informatsii 5 (3): 64–72.
W. Rolke, C. Gutierrez Gongora. 2021. “A Chi-Square Goodness-of-Fit Test for Continuous Distributions Against a Known Alternative.” Computational Statistics.
Zhang, J. 2006. “Powerful Two-Sample Tests Based on the Likelihood Ratio.” Technometrics 48: 95–103.
