RenyiExtropy provides a unified interface for computing entropy and extropy measures for discrete probability distributions. All functions accept a probability vector (or matrix for joint measures) and return a single numeric value measured in nats (natural-logarithm base).
The package covers three families of measures: entropy measures (Shannon, Renyi, and Tsallis entropy), extropy measures (classical and Renyi extropy, with joint and conditional variants), and divergence measures (Kullback-Leibler and Jensen-Shannon).
Every function that accepts a vector p requires it to satisfy: all elements are non-negative, the elements sum to 1 (within a tolerance of 1e-8), and no element is NA or NaN.

The Shannon entropy is defined as \[H(\mathbf{p}) = -\sum_{i=1}^n p_i \log p_i\] and equals zero for a degenerate distribution and \(\log n\) for the uniform distribution.
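As a cross-check of the definition, the same numbers can be reproduced outside the package. This Python/NumPy sketch (shannon_entropy_np is an illustrative name, not part of RenyiExtropy) applies the usual convention 0 log 0 = 0:

```python
import numpy as np

def shannon_entropy_np(p):
    """Shannon entropy in nats, with the convention 0 * log(0) = 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                      # drop zero probabilities
    return float(-np.sum(nz * np.log(nz)))

print(shannon_entropy_np([0.2, 0.3, 0.5]))   # ~1.029653
print(shannon_entropy_np([0.25] * 4))        # log(4) ~ 1.386294
print(shannon_entropy_np([1, 0, 0]))         # degenerate: 0.0
```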
p <- c(0.2, 0.3, 0.5) # example distribution used throughout
shannon_entropy(p) # three-outcome distribution
#> [1] 1.029653
shannon_entropy(rep(0.25, 4)) # uniform: H = log(4)
#> [1] 1.386294
shannon_entropy(c(1, 0, 0)) # degenerate: H = 0
#> [1] 0
normalized_entropy(p) # H(p) / log(n), always in [0, 1]
#> [1] 0.9372306

The Renyi entropy of order \(q > 0\) is \[H_q(\mathbf{p}) = \frac{1}{1-q} \log\!\left(\sum_i p_i^q\right)\]
For \(q \to 1\) it reduces to the Shannon entropy; the function automatically returns the Shannon limit when \(|q - 1| < 10^{-8}\).
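The order-q formula and its Shannon limit can be sketched independently in Python (illustrative code, not the package implementation; the 1e-8 threshold mirrors the behaviour described above):

```python
import numpy as np

def renyi_entropy_np(p, q):
    """Renyi entropy of order q in nats; falls back to the Shannon
    limit when q is numerically close to 1."""
    p = np.asarray(p, dtype=float)
    if abs(q - 1) < 1e-8:              # Shannon limit
        nz = p[p > 0]
        return float(-np.sum(nz * np.log(nz)))
    return float(np.log(np.sum(p ** q)) / (1 - q))

p = [0.2, 0.3, 0.5]
print(renyi_entropy_np(p, 2))      # collision entropy, ~0.967584
print(renyi_entropy_np(p, 1))      # Shannon limit, ~1.029653
```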
renyi_entropy(p, q = 2) # collision entropy
#> [1] 0.967584
renyi_entropy(p, q = 0.5) # Renyi entropy, q = 0.5
#> [1] 1.063659
renyi_entropy(p, q = 1) # limit: equals shannon_entropy(p)
#> [1] 1.029653
shannon_entropy(p) # same value
#> [1] 1.029653

The Tsallis entropy is \[S_q(\mathbf{p}) = \frac{1 - \sum_i p_i^q}{q - 1}\] a non-extensive generalisation of Shannon entropy widely used in statistical physics.
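A quick independent check in Python, including the standard identity linking Tsallis and Renyi entropies of the same order (a known relation, not a package function):

```python
import numpy as np

def tsallis_entropy_np(p, q):
    """Tsallis entropy of order q (q != 1)."""
    p = np.asarray(p, dtype=float)
    return float((1 - np.sum(p ** q)) / (q - 1))

p = [0.2, 0.3, 0.5]
print(tsallis_entropy_np(p, 2))                     # 0.62

# Consistency with the Renyi entropy of the same order:
# S_q = (exp((1 - q) * H_q) - 1) / (1 - q)
H2 = np.log(np.sum(np.asarray(p) ** 2)) / (1 - 2)
print((np.exp((1 - 2) * H2) - 1) / (1 - 2))         # 0.62 again
```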
tsallis_entropy(p, q = 2)
#> [1] 0.62
tsallis_entropy(p, q = 1) # limit: equals shannon_entropy(p)
#> [1] 1.029653

Extropy is the dual complement of entropy, measuring information via complementary probabilities: \[J(\mathbf{p}) = -\sum_{i=1}^n (1-p_i)\log(1-p_i)\]
extropy() and shannon_extropy() compute the
same quantity.
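The duality is easy to verify numerically. In this illustrative Python check, note that for n = 2 the complementary probabilities (1 - p_1, 1 - p_2) are just (p_2, p_1), so extropy coincides with Shannon entropy:

```python
import numpy as np

def extropy_np(p):
    """Extropy in nats: -sum of (1 - p_i) * log(1 - p_i), with a
    term of zero whenever 1 - p_i = 0 (i.e. p_i = 1)."""
    p = np.asarray(p, dtype=float)
    c = 1 - p
    nz = c[c > 0]
    return float(-np.sum(nz * np.log(nz)))

print(extropy_np([0.2, 0.3, 0.5]))   # ~0.774761
# n = 2: complementary probabilities swap the entries,
# so extropy equals the Shannon entropy of the same vector.
print(extropy_np([0.4, 0.6]))
```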
The Renyi extropy of order \(q\) is \[J_q(\mathbf{p}) = \frac{-(n-1)\log(n-1) + (n-1)\log\!\left(\sum_i(1-p_i)^q\right)}{1-q}\]
For \(n = 2\) it equals the Renyi entropy. For \(q \to 1\) it returns the classical extropy.
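Both claims can be confirmed by evaluating the displayed formula directly (illustrative Python, not the package code):

```python
import numpy as np

def renyi_extropy_np(p, q):
    """Renyi extropy of order q (q != 1), per the displayed formula."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    s = np.sum((1 - p) ** q)
    return float((-(n - 1) * np.log(n - 1) + (n - 1) * np.log(s)) / (1 - q))

print(renyi_extropy_np([0.2, 0.3, 0.5], 2))   # ~0.742127
# n = 2: coincides with the Renyi entropy of the same order
print(renyi_extropy_np([0.4, 0.6], 2))
print(np.log(0.4 ** 2 + 0.6 ** 2) / (1 - 2))  # same value
```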
renyi_extropy(p, q = 2)
#> [1] 0.7421274
renyi_extropy(p, q = 1) # limit: equals extropy(p)
#> [1] 0.7747609
# n = 2: Renyi extropy == Renyi entropy
renyi_extropy(c(0.4, 0.6), q = 2)
#> [1] 0.6539265
renyi_entropy(c(0.4, 0.6), q = 2)
#> [1] 0.6539265
# Maximum Renyi extropy over n outcomes (uniform distribution)
max_renyi_extropy(3)
#> [1] 0.8109302
renyi_extropy(rep(1/3, 3), q = 2)
#> [1] 0.8109302

joint_entropy() accepts a matrix of joint probabilities;
rows correspond to outcomes of \(X\),
columns to outcomes of \(Y\).
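The chain rule H(Y | X) = H(X, Y) - H(X) can be replicated from the raw definitions; here the joint entropy is simply the Shannon entropy of the flattened matrix (illustrative Python, independent of the package):

```python
import numpy as np

def shannon_entropy_np(p):
    """Shannon entropy in nats of a vector or matrix of probabilities."""
    p = np.asarray(p, dtype=float).ravel()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)))

# Rows index outcomes of X, columns index outcomes of Y.
Pxy = np.array([[0.2, 0.3],
                [0.1, 0.4]])
H_xy = shannon_entropy_np(Pxy)             # H(X, Y)
H_x = shannon_entropy_np(Pxy.sum(axis=1))  # H(X) from the row marginals
print(H_xy)          # ~1.279854
print(H_xy - H_x)    # H(Y | X) via the chain rule, ~0.586707
```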
Pxy <- matrix(c(0.2, 0.3, 0.1, 0.4), nrow = 2, byrow = TRUE)
joint_entropy(Pxy) # H(X, Y)
#> [1] 1.279854
conditional_entropy(Pxy) # H(Y | X) = H(X,Y) - H(X)
#> [1] 0.586707
# Independent distributions: H(X,Y) = H(X) + H(Y)
px <- c(0.4, 0.6)
py <- c(0.3, 0.7)
Pxy_indep <- outer(px, py)
joint_entropy(Pxy_indep)
#> [1] 1.283876
shannon_entropy(px) + shannon_entropy(py)
#> [1] 1.283876

conditional_renyi_extropy(Pxy, q = 2)
#> [1] 0.1039623
conditional_renyi_extropy(Pxy, q = 1) # limit: conditional Shannon extropy
#> [1] 0.13636

The Kullback-Leibler divergence \(D_\text{KL}(P \| Q)\) is asymmetric and can be infinite when \(q_i = 0\) but \(p_i > 0\) (a warning is emitted in that case).
The Jensen-Shannon divergence \[\mathrm{JSD}(P \| Q) = \tfrac{1}{2} D_\text{KL}(P \| M) + \tfrac{1}{2} D_\text{KL}(Q \| M), \qquad M = \tfrac{1}{2}(P + Q),\] is the symmetrised, always-finite version, bounded in \([0, \log 2]\).
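The stated properties (asymmetry and possible infinity of KL; symmetry and boundedness of JS) can be demonstrated with a small standalone sketch (Python; the function names are illustrative, and the package's kl_divergence() may differ in details such as the warning behaviour):

```python
import numpy as np

def kl_np(p, q):
    """KL divergence in nats; returns inf when some q_i = 0 but p_i > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                           # terms with p_i = 0 contribute 0
    with np.errstate(divide="ignore"):     # log(0) = -inf is intended here
        return float(np.sum(p[mask] * (np.log(p[mask]) - np.log(q[mask]))))

def js_np(p, q):
    """Jensen-Shannon divergence: symmetrised KL against the mixture."""
    m = (np.asarray(p, dtype=float) + np.asarray(q, dtype=float)) / 2
    return 0.5 * kl_np(p, m) + 0.5 * kl_np(q, m)

p, q = [0.2, 0.3, 0.5], [0.5, 0.5, 0.0]
print(kl_np(p, q))        # inf: q has a zero where p does not
print(kl_np(q, p))        # finite, and != kl_np(p, q): asymmetric
print(js_np(p, q))        # finite, symmetric, <= log(2)
```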
All functions produce informative errors when given invalid inputs:
shannon_entropy(c(0.2, 0.3, 0.1)) # does not sum to 1
#> Error in `shannon_entropy()`:
#> ! Elements of 'p' must sum to 1 (tolerance 1e-8).
renyi_entropy(p, q = NA) # NA not allowed
#> Error in `renyi_entropy()`:
#> ! 'q' must not be NA or NaN.
max_renyi_extropy(1) # n must be >= 2
#> Error in `max_renyi_extropy()`:
#> ! 'n' must be at least 2.
kl_divergence(p, c(0.5, 0.5)) # length mismatch
#> Error in `kl_divergence()`:
#> ! 'p' and 'q' must have the same length.