Package MKdescr includes a collection of functions that I found useful in my daily work. It contains several functions for descriptive statistical data analysis. Most of the functions were extracted from package MKmisc. Due to the growing number of functions, leading to an increasing number of dependencies required for MKmisc, I decided to split the package into smaller packages, where each package offers specific functionality.
We first load the package.
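library(MKdescr)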
The following function should help to understand the meaning and computation of quantiles.
In addition, there is a function to illustrate the meaning and computation of box-and-whisker plots.
I implemented the function IQrange before the standard function IQR gained the type argument. Since 2010 (r53643, r53644), the function has been identical to function IQR.
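For example, the identical results shown below can be obtained with calls of the following form (a minimal sketch; x is assumed to be a numeric sample, e.g. a standard normal sample of size 100):

x <- rnorm(100)
IQrange(x)
IQR(x)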
## [1] 1.501684
## [1] 1.501684
It is also possible to compute a standardized version of the IQR leading to a normal-consistent estimate of the standard deviation.
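A sketch of the corresponding calls (the function name sIQR is an assumption; sd is shown for comparison):

sIQR(x)
sd(x)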
## [1] 1.1132
## [1] 1.08751
The mean absolute deviation under the assumption of symmetry is a robust alternative to the sample standard deviation (see Tukey 1960).
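A sketch of the corresponding call (the function name meanAD is an assumption):

meanAD(x)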
## [1] 1.082176
Huber-type skipped mean and SD are robust alternatives to the arithmetic mean and sample standard deviation (see Hampel 1985).
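A sketch of the corresponding calls (the function names skippedMean and skippedSD are assumptions):

skippedMean(x)
skippedSD(x)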
## [1] 0.002150837
## [1] 1.08751
There is a function that computes a so-called five number summary, which in contrast to function fivenum uses the first and third quartile instead of the lower and upper hinge.
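A sketch of the corresponding call (the function name fiveNS is an assumption):

fiveNS(x)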
## Minimum 25% Median 75% Maximum
## -2.49483544 -0.75313964 -0.03735759 0.74854387 2.70816828
Analogously to the five number summary, quantiles are used instead of the hinges and eighths originally proposed by Tukey.
## Minimum 12.5% 25% Median 75% 87.5%
## -2.49483544 -1.14632584 -0.75313964 -0.03735759 0.74854387 1.15480246
## Maximum
## 2.70816828
There are functions to compute the (classical) coefficient of variation as well as two robust variants. In the case of the robust variants, the mean is replaced by the median and the SD is replaced by the (standardized) MAD and the (standardized) IQR, respectively (see Arachchige et al. 2019).
## [1] 4
## [1] 0.3456849
## [1] 0.2295398
## [1] 0.2252741
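The values above can be obtained with calls of the following form (a sketch; the function names CV, medCV and iqrCV are assumptions, and x denotes the sample under consideration):

CV(x)
medCV(x)
iqrCV(x)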
There are functions to compute the (classical) signal-to-noise ratio as well as two robust variants. In the case of the robust variants, the mean is replaced by the median and the SD is replaced by the (standardized) MAD and the (standardized) IQR, respectively.
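A sketch of the corresponding calls (the function names SNR, medSNR and iqrSNR are assumptions):

SNR(x)
medSNR(x)
iqrSNR(x)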
## [1] 2.892808
## [1] 4.356543
## [1] 4.439036
We compute the standardized mean difference, also known as Cohen's d and Hedges' g. In addition, there is a new proposal for the case of unequal variances.
n1 <- 100
x <- rnorm(100)
n2 <- 150
y <- rnorm(n2, mean = 3, sd = 2)
## true value
(0-3)/sqrt((1 + n1/n2*2^2)/(n1/n2+1))
## [1] -2.0226
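The standardized mean differences shown below are computed with function SMD; a minimal sketch of the calls (the argument names bias.cor and var.equal are assumptions based on the output labels):

SMD(x, y)
SMD(x, y, var.equal = TRUE)
SMD(x, y, bias.cor = FALSE)
SMD(x, y, bias.cor = FALSE, var.equal = TRUE)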
## bias-corrected SMD
## -2.068067
## bias-corrected SMD
## -1.843242
## SMD
## -2.07466
## SMD
## -1.84884
There are functions to compute the z-score as well as robust variants of the z-score based on median and MAD, and on median and IQR, respectively.
## simulate about 5% outliers (the definition of out is not shown above and is assumed here)
out <- rbinom(100, size = 1, prob = 0.05)
sum(out)
## [1] 5
x <- (1-out)*rnorm(100, mean = 10, sd = 2) + out*25
z <- zscore(x)
z.med <- medZscore(x)
z.iqr <- iqrZscore(x)
## mean without outliers (should be close to 0)
mean(z[!out])
## [1] -0.193416
## means of the robust z-scores without outliers (calls assumed from the output)
mean(z.med[!out])
## [1] 0.02841029
mean(z.iqr[!out])
## [1] 0.02682045
## SD without outliers (should be close to 1; calls assumed from the output)
sd(z[!out])
## [1] 0.5450112
sd(z.med[!out])
## [1] 0.9702241
sd(z.iqr[!out])
## [1] 0.9159303
In contrast to the standard function boxplot, which uses the lower and upper hinge for defining the box and the whiskers, the function qboxplot uses the first and third quartile.
x <- rt(10, df = 3)
par(mfrow = c(1,2))
qboxplot(x, main = "1st and 3rd quartile")
boxplot(x, main = "Lower and upper hinge")
The difference between the two versions is often hardly visible.
The generalized logarithm, which corresponds to a shifted area hyperbolic sine, \[ \log(x + \sqrt{x^2 + 1}) - \log(2) \] may be useful as a variance-stabilizing transformation when negative values are also present.
## for comparison, first plot the ordinary logarithm (call assumed; it triggers the warning below)
curve(log, from = -3, to = 5)
## Warning in log(x): NaNs produced
curve(glog, from = -3, to = 5, add = TRUE, col = "orange")
legend("topleft", fill = c("black", "orange"), legend = c("log", "glog"))
As in the case of function log, there are also glog10 and glog2.
## for comparison, first plot log10 (call assumed; it triggers the warning below)
curve(log10(x), from = -3, to = 5)
## Warning in eval(expr, envir = ll, enclos = parent.frame()): NaNs produced
curve(glog10(x), from = -3, to = 5, add = TRUE, col = "orange")
legend("topleft", fill = c("black", "orange"), legend = c("log10", "glog10"))
There are also functions that compute the inverse of the generalized logarithm.
## [1] 10
## [1] 10
## [1] 10
## [1] 10
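For example (a sketch; the function names inv.glog, inv.glog10 and inv.glog2 are assumptions):

inv.glog(glog(10))
inv.glog10(glog10(10))
inv.glog2(glog2(10))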
The thyroid function is usually investigated by determining the values of TSH, fT3 and fT4. The function thyroid can be used to visualize the measured values as relative values with respect to the provided reference ranges.
We can use the generalized logarithm for transforming the axes in ggplot2 plots (Wickham 2016).
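For instance (a minimal sketch; the scale function scale_y_glog is an assumption, and the data are made up for illustration):

library(ggplot2)
## hypothetical data including negative values
d <- data.frame(x = 1:20, y = rnorm(20, mean = 0, sd = 5))
ggplot(d, aes(x = x, y = y)) + geom_point() + scale_y_glog()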
The negative logarithm is, for instance, useful for displaying p values, since the interesting (small) values appear at the top. This is used, for example, in so-called volcano plots.
x <- matrix(rnorm(1000, mean = 10), nrow = 10)
g1 <- rep("control", 10)
y1 <- matrix(rnorm(500, mean = 11.25), nrow = 10)
y2 <- matrix(rnorm(500, mean = 9.75), nrow = 10)
g2 <- rep("treatment", 10)
group <- factor(c(g1, g2))
Data <- rbind(x, cbind(y1, y2))
pvals <- apply(Data, 2, function(x, group) t.test(x ~ group)$p.value,
               group = group)
## compute log-fold change
logfc <- function(x, group){
  res <- tapply(x, group, mean)
  log2(res[1]/res[2])
}
lfcs <- apply(Data, 2, logfc, group = group)
ps <- data.frame(pvals = pvals, logfc = lfcs)
ggplot(ps, aes(x = logfc, y = pvals)) + geom_point() +
  geom_hline(yintercept = 0.05) + scale_y_neglog10() +
  geom_vline(xintercept = c(-0.1, 0.1)) + xlab("log-fold change") +
  ylab("-log10(p value)") + ggtitle("A Volcano Plot")
Often it’s better to have the data in a long format than in a wide format; e.g., when plotting with package ggplot2. The necessary transformation can be done with function melt.long.
## some random data
test <- data.frame(x = rnorm(10), y = rnorm(10), z = rnorm(10))
test.long <- melt.long(test)
test.long
## value variable
## x1 0.70195275 x
## x2 0.33618151 x
## x3 0.74982570 x
## x4 -0.80088234 x
## x5 -0.12274139 x
## x6 0.66428859 x
## x7 0.05495788 x
## x8 0.21269503 x
## x9 0.05086068 x
## x10 0.18291685 x
## y1 -0.02467293 y
## y2 -1.09939100 y
## y3 0.17399933 y
## y4 -0.45536672 y
## y5 1.43577886 y
## y6 -0.52959017 y
## y7 0.15215140 y
## y8 -1.97326132 y
## y9 -0.65887876 y
## y10 -0.82845265 y
## z1 1.23170354 z
## z2 -0.15108595 z
## z3 0.14495627 z
## z4 0.30029691 z
## z5 -0.53111148 z
## z6 0.30071514 z
## z7 1.50662404 z
## z8 0.61699271 z
## z9 0.71440006 z
## z10 -0.83452283 z
## introducing an additional grouping variable
group <- factor(rep(c("a","b"), each = 5))
test.long.gr <- melt.long(test, select = 1:2, group = group)
test.long.gr
## group value variable
## x1 a 0.70195275 x
## x2 a 0.33618151 x
## x3 a 0.74982570 x
## x4 a -0.80088234 x
## x5 a -0.12274139 x
## x6 b 0.66428859 x
## x7 b 0.05495788 x
## x8 b 0.21269503 x
## x9 b 0.05086068 x
## x10 b 0.18291685 x
## y1 a -0.02467293 y
## y2 a -1.09939100 y
## y3 a 0.17399933 y
## y4 a -0.45536672 y
## y5 a 1.43577886 y
## y6 b -0.52959017 y
## y7 b 0.15215140 y
## y8 b -1.97326132 y
## y9 b -0.65887876 y
## y10 b -0.82845265 y
## R version 4.2.2 Patched (2022-11-04 r83277)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Linux Mint 20.3
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/libf77blas.so.3.10.3
## LAPACK: /home/kohlm/RTOP/Rbranch/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=de_DE.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=de_DE.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=de_DE.UTF-8 LC_MESSAGES=de_DE.UTF-8
## [7] LC_PAPER=de_DE.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] ggplot2_3.4.0 MKdescr_0.8
##
## loaded via a namespace (and not attached):
## [1] bslib_0.4.1 compiler_4.2.2 pillar_1.8.1 jquerylib_0.1.4
## [5] highr_0.9 tools_4.2.2 digest_0.6.30 jsonlite_1.8.3
## [9] evaluate_0.17 lifecycle_1.0.3 tibble_3.1.8 gtable_0.3.1
## [13] pkgconfig_2.0.3 rlang_1.0.6 cli_3.4.1 yaml_2.3.6
## [17] xfun_0.34 fastmap_1.1.0 withr_2.5.0 stringr_1.4.1
## [21] knitr_1.40 vctrs_0.5.0 sass_0.4.2 grid_4.2.2
## [25] glue_1.6.2 R6_2.5.1 fansi_1.0.3 rmarkdown_2.17
## [29] farver_2.1.1 magrittr_2.0.3 scales_1.2.1 htmltools_0.5.3
## [33] colorspace_2.0-3 labeling_0.4.2 utf8_1.2.2 stringi_1.7.8
## [37] munsell_0.5.0 cachem_1.0.6
Hampel, Frank R. 1985. “The breakdown points of the mean combined with some rejection rules.” Technometrics 27: 95–107.
Tukey, J. W. 1960. “A survey of sampling from contaminated distributions.” In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling, edited by I. Olkin, 448–85. Stanford University Press.
Wickham, Hadley. 2016. ggplot2: Elegant Graphics for Data Analysis. Springer New York.