Introduction to the hwep package

Introduction

This vignette covers the basic usage of the hwep package. The methods implemented here are described in detail in Gerard (2022). We will cover:

  1. Functions for calculating segregation probabilities.
  2. Functions for calculating equilibrium genotype frequencies.
  3. Functions for testing for equilibrium and random mating.

Let’s load the package so we can begin.

library(hwep)

The double reduction parameter

Throughout this vignette, we will discuss the “double reduction parameter”, which we should briefly clarify. This parameter is a vector of probabilities of length floor(ploidy / 4), where ploidy is the ploidy of the species. Element i of this vector is the probability that an offspring will have exactly i copies of identical-by-double-reduction (IBDR) alleles. If alpha is this parameter vector, then 1 - sum(alpha) is the probability that an offspring has no IBDR alleles.
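
As a small worked example with hypothetical values (not estimates from data): a hexaploid has floor(6 / 4) = 1 element in alpha, while an octoploid has floor(8 / 4) = 2.

# Hypothetical double reduction parameter for an octoploid (ploidy = 8):
# element 1 is P(exactly 1 IBDR allele), element 2 is P(exactly 2 IBDR alleles).
alpha <- c(0.05, 0.01)
length(alpha)  # floor(8 / 4) = 2
1 - sum(alpha) # probability of no IBDR alleles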

The double reduction parameter has a known upper bound that depends on the model of meiosis assumed. The largest bound typically used in the literature is derived from the “complete equational segregation” model, introduced by Mather (1935) and later generalized by Huang et al. (2019). These bounds can be calculated with the drbounds() function for different ploidies.

drbounds(ploidy = 4)
#> [1] 0.1666667
drbounds(ploidy = 6)
#> [1] 0.3
drbounds(ploidy = 8)
#> [1] 0.38571429 0.02142857
drbounds(ploidy = 10)
#> [1] 0.43650794 0.05952381
drbounds(ploidy = 12)
#> [1] 0.462662338 0.105519481 0.002705628

During our analysis procedures, we typically assume that the double reduction parameter is between 0 and these upper bounds.
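
For example, a candidate value of alpha can be checked element-wise against these bounds (a minimal sketch reusing the hypothetical octoploid alpha from above):

# Check a hypothetical octoploid alpha against its upper bounds element-wise.
alpha <- c(0.05, 0.01)
ub <- drbounds(ploidy = 8)
all(alpha >= 0 & alpha <= ub)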

Segregation probabilities

This package comes with a few functions to calculate the probabilities of gamete and offspring dosages given parental dosages. These generalize classical Mendelian inheritance to include rates of double reduction. We use the specific model derived by Fisher & Mather (1943) and later generalized by Huang et al. (2019).
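
For example, gamete dosage probabilities can be tabulated with gsegmat() and offspring dosage probabilities with zsegarray(). The calls below are a sketch that assumes the interfaces gsegmat(alpha, ploidy) and zsegarray(alpha, ploidy); see their help files for the exact argument names and output layout.

# Gamete dosage probabilities for a tetraploid with double reduction rate 0.1
# (assumed layout: rows index parental dosage 0:4, columns index gamete dosage 0:2).
gsegmat(alpha = 0.1, ploidy = 4)

# Offspring dosage probabilities given both parental dosages
# (assumed to return an array indexed by the two parental dosages and the offspring dosage).
zsegarray(alpha = 0.1, ploidy = 4)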

Equilibrium genotype frequencies

Equilibrium frequencies can be generated with hwefreq() for arbitrary (even) ploidy levels.

hout <- hwefreq(r = 0.1, alpha = 0.1, ploidy = 6)
round(hout, digits = 5)
#> [1] 0.55062 0.32437 0.10232 0.01998 0.00250 0.00019 0.00001

Alternatively, you can control the number of generations of random mating before stopping with the niter argument. The population begins in a state where a proportion r of individuals have genotype ploidy and a proportion 1 - r have genotype 0. Each generation’s genotype frequencies are then updated with freqnext(). E.g., for r = 0.1 and alpha = 0.1 at ploidy 4, after the first generation of random mating, we have:

freqnext(freq = c(0.9, 0, 0, 0, 0.1), alpha = 0.1)
#> [1] 8.100000e-01 6.106227e-17 1.800000e-01 4.440892e-17 1.000000e-02
hwefreq(r = 0.1, alpha = 0.1, niter = 1, ploidy = 4)
#> [1] 8.100000e-01 6.106227e-17 1.800000e-01 4.440892e-17 1.000000e-02

Testing for equilibrium and random mating

The main function in this package is hwefit(), which implements various tests for random mating and equilibrium. It supports parallelization through the future package, which we demonstrate below assuming at least two cores are available.

library(future)
availableCores()
#> system 
#>      8
plan(multisession, workers = 2)

Let’s simulate some data at equilibrium to demonstrate our methods:

geno_freq <- hwefreq(r = 0.5, alpha = 0.1, ploidy = 6)
nmat <- t(rmultinom(n = 1000, size = 100, prob = geno_freq))
head(nmat)
#>      [,1] [,2] [,3] [,4] [,5] [,6] [,7]
#> [1,]    0   10   28   22   23   12    5
#> [2,]    1    7   25   35   15   15    2
#> [3,]    2   10   23   26   22   14    3
#> [4,]    3   12   18   26   19   15    7
#> [5,]    2    8   26   27   25   10    2
#> [6,]    0   11   19   31   26   10    3

hwefit() expects a matrix of genotype counts, where the rows index the loci and the columns index the genotypes. So nmat[i, j] is the number of individuals that have dosage j - 1 at locus i.
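
A quick sanity check on this layout (using the simulated nmat from above): there should be one row per locus, ploidy + 1 = 7 columns, and each row should sum to the 100 sampled individuals.

dim(nmat)
stopifnot(all(rowSums(nmat) == 100))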

You control the type of test via the type argument. Setting type = "ustat" applies the U-statistic approach to test for equilibrium, as implemented in hweustat().

uout <- hwefit(nmat = nmat, type = "ustat")
#> Using 2 worker(s) to run hwefit() on 1000 loci...
#> Done!
#> Don't forget to shut down your workers with:
#>   future::plan(future::sequential)

The output is a list-like object that contains the estimates of double reduction (alpha1), the p-values for the test against the null of equilibrium (p_hwe), as well as the test statistics (chisq_hwe) and degrees of freedom (df_hwe) of this test.
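
For a quick look at these components side by side (a sketch; for this ploidy-6 fit there is a single double reduction element, stored as alpha1):

head(cbind(alpha = uout$alpha1,
           chisq = uout$chisq_hwe,
           df    = uout$df_hwe,
           p_hwe = uout$p_hwe))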

On average, we obtain good estimates of the double reduction rate:

mean(uout$alpha1)
#> [1] 0.1109858

But this estimator is highly variable, even for such a large sample size:

hist(uout$alpha1)

This highlights the difficulty in estimating double reduction using just a single biallelic locus.

The p-values are generally uniformly distributed, as they should be since we generated data under the null of equilibrium.

hist(uout$p_hwe, breaks = 10, xlab = "P-values", main = "")

qqplot(x = ppoints(length(uout$p_hwe)), 
       y = uout$p_hwe, 
       xlab = "Theoretical Quantiles",
       ylab = "Empirical Quantiles",
       main = "QQ-plot")
abline(0, 1, lty = 2, col = 2)

You can view this QQ-plot on the -log10 scale using qqpvalue(). This plot will also show the simultaneous confidence bands for the QQ-plot from Aldor-Noiman et al. (2013), which can also be calculated using ts_bands().

qqpvalue(pvals = uout$p_hwe, method = "base")

Make sure to shut down your workers after you are done:

plan("sequential")

The other values of “type” select different testing procedures; see ?hwefit for the full list of supported tests.
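
As an illustration only, a second fit could request one of these other procedures. The sketch below assumes that type = "rm" requests the random-mating test; check ?hwefit for the supported values and the names of the returned components.

# Assumed usage: fit the random-mating test and inspect the returned fields
# rather than relying on particular component names.
rmout <- hwefit(nmat = nmat, type = "rm")
str(rmout, max.level = 1)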

References
