Before calculating diversity, a metacommunity object must be created. This object contains all the information needed to calculate diversity. In the following example, we generate a metacommunity (partition) comprising two species ("cows" and "sheep") partitioned across three subcommunities (a, b, and c).
# Load the package into R
library(rdiversity)
# Initialise data
partition <- data.frame(a=c(1,1),b=c(2,0),c=c(3,1))
row.names(partition) <- c("cows", "sheep")
The metacommunity() function takes two arguments, partition and similarity. When species are considered completely distinct, an identity matrix is required, which is generated automatically if the similarity argument is missing, as below:
# Generate metacommunity object
meta <- metacommunity(partition = partition)
## Metacommunity matrix was normalised to sum to 1.
Note that a warning is displayed when abundances (rather than relative abundances) are entered into the partition argument. Both are acceptable inputs.
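To avoid the message, relative abundances can be supplied instead; a minimal sketch using the data above (partition_rel and meta_rel are just illustrative names):
# Convert raw abundances to relative abundances, which sum to 1
partition_rel <- partition / sum(partition)
meta_rel <- metacommunity(partition = partition_rel)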
When species share some similarity and a similarity matrix is available, then a similarity object (and the metacommunity object) is generated in the following way:
# Initialise similarity matrix
s <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)
row.names(s) <- c("cows", "sheep")
colnames(s) <- c("cows", "sheep")
# Generate similarity object
s <- similarity(similarity = s, dat_id = "my_taxonomic")
# Generate metacommunity object
meta <- metacommunity(partition = partition, similarity = s)
## Metacommunity matrix was normalised to sum to 1.
Alternatively, if a distance matrix is available, then a distance object is generated in the following way:
# Initialise distance matrix
d <- matrix(c(0, 0.7, 0.7, 0), nrow = 2)
row.names(d) <- c("cows", "sheep")
colnames(d) <- c("cows", "sheep")
# Generate distance object
d <- distance(distance = d, dat_id = "my_taxonomic")
# Convert the distance object to a similarity object (using a linear or exponential transform)
s <- dist2sim(dist = d, transform = "linear")
# Generate metacommunity object
meta <- metacommunity(partition = partition, similarity = s)
## Metacommunity matrix was normalised to sum to 1.
Each metacommunity object contains the following slots:
- @type_abundance: the abundance of types within a metacommunity,
- @similarity: the pair-wise similarity of types within a metacommunity,
- @ordinariness: the ordinariness of types within a metacommunity,
- @subcommunity_weights: the relative weights of subcommunities within a metacommunity, and
- @type_weights: the relative weights of types within a metacommunity.
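These slots can be inspected directly; for example, using the meta object generated above:
# Abundance of types within the metacommunity (normalised to sum to 1)
meta@type_abundance
# Relative weights of the subcommunities
meta@subcommunity_weights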
This first method of calculating diversity uses a wrapper function to simplify the pipeline and is recommended if only a few measures are being calculated.
A complete list of these functions is shown below:
- raw_sub_alpha(): estimate of naive-community metacommunity diversity
- norm_sub_alpha(): similarity-sensitive diversity of subcommunity j in isolation
- raw_sub_rho(): redundancy of subcommunity j
- norm_sub_rho(): representativeness of subcommunity j
- raw_sub_beta(): distinctiveness of subcommunity j
- norm_sub_beta(): estimate of effective number of distinct subcommunities
- sub_gamma(): contribution per individual toward metacommunity diversity
- raw_meta_alpha(): naive-community metacommunity diversity
- norm_meta_alpha(): average similarity-sensitive diversity of subcommunities
- raw_meta_rho(): average redundancy of subcommunities
- norm_meta_rho(): average representativeness of subcommunities
- raw_meta_beta(): average distinctiveness of subcommunities
- norm_meta_beta(): effective number of distinct subcommunities
- meta_gamma(): metacommunity similarity-sensitive diversity
Each of these functions takes two arguments, meta (a metacommunity object) and qs (a vector of q values), and outputs the results as an rdiv object. For example, to calculate normalised subcommunity alpha diversity for q = 0, q = 1, and q = 2:
# Initialise data
partition <- data.frame(a=c(1,1),b=c(2,0),c=c(3,1))
row.names(partition) <- c("cows", "sheep")
# Generate a metacommunity object
meta <- metacommunity(partition)
## Metacommunity matrix was normalised to sum to 1.
# Calculate diversity
norm_sub_alpha(meta, 0:2)
## measure q type_level type_name partition_level partition_name
## 1 normalised alpha 0 types subcommunity a
## 2 normalised alpha 0 types subcommunity b
## 3 normalised alpha 0 types subcommunity c
## 4 normalised alpha 1 types subcommunity a
## 5 normalised alpha 1 types subcommunity b
## 6 normalised alpha 1 types subcommunity c
## 7 normalised alpha 2 types subcommunity a
## 8 normalised alpha 2 types subcommunity b
## 9 normalised alpha 2 types subcommunity c
## diversity dat_id transformation normalised k max_d
## 1 2.000000 naive NA NA NA NA
## 2 1.000000 naive NA NA NA NA
## 3 2.000000 naive NA NA NA NA
## 4 2.000000 naive NA NA NA NA
## 5 1.000000 naive NA NA NA NA
## 6 1.754765 naive NA NA NA NA
## 7 2.000000 naive NA NA NA NA
## 8 1.000000 naive NA NA NA NA
## 9 1.600000 naive NA NA NA NA
However, if multiple measures are required and computational efficiency is an issue, then the following method is recommended (the same results are obtained).
This method requires that we first calculate the species-level components by passing a metacommunity object to the appropriate function: raw_alpha(), norm_alpha(), raw_beta(), norm_beta(), raw_rho(), norm_rho(), or raw_gamma(). Subcommunity- and metacommunity-level diversities are then calculated using the functions subdiv() and metadiv(). Since both subcommunity and metacommunity diversity measures are transformations of the same species-level component, this method is computationally more efficient.
# Initialise data
partition <- data.frame(a=c(1,1),b=c(2,0),c=c(3,1))
row.names(partition) <- c("cows", "sheep")
# Generate a metacommunity object
meta <- metacommunity(partition)
## Metacommunity matrix was normalised to sum to 1.
# Calculate the species-level component for normalised alpha
component <- norm_alpha(meta)
# Calculate normalised alpha at the subcommunity-level
subdiv(component, 0:2)
## measure q type_level type_name partition_level partition_name
## 1 normalised alpha 0 types subcommunity a
## 2 normalised alpha 0 types subcommunity b
## 3 normalised alpha 0 types subcommunity c
## 4 normalised alpha 1 types subcommunity a
## 5 normalised alpha 1 types subcommunity b
## 6 normalised alpha 1 types subcommunity c
## 7 normalised alpha 2 types subcommunity a
## 8 normalised alpha 2 types subcommunity b
## 9 normalised alpha 2 types subcommunity c
## diversity dat_id transformation normalised k max_d
## 1 2.000000 naive NA NA NA NA
## 2 1.000000 naive NA NA NA NA
## 3 2.000000 naive NA NA NA NA
## 4 2.000000 naive NA NA NA NA
## 5 1.000000 naive NA NA NA NA
## 6 1.754765 naive NA NA NA NA
## 7 2.000000 naive NA NA NA NA
## 8 1.000000 naive NA NA NA NA
## 9 1.600000 naive NA NA NA NA
# Likewise, calculate normalised alpha at the metacommunity-level
metadiv(component, 0:2)
## measure q type_level type_name partition_level partition_name
## 1 normalised alpha 0 types metacommunity
## 2 normalised alpha 1 types metacommunity
## 3 normalised alpha 2 types metacommunity
## diversity dat_id transformation normalised k max_d
## 1 1.750000 naive NA NA NA NA
## 2 1.575314 naive NA NA NA NA
## 3 1.454545 naive NA NA NA NA
In some instances, it may be useful to calculate all subcommunity (or metacommunity) measures, in which case a metacommunity object may be passed directly to subdiv() or metadiv():
# Calculate all subcommunity diversity measures
subdiv(meta, 0:2)
## measure q type_level type_name partition_level partition_name
## 1 raw alpha 0 types subcommunity a
## 2 raw alpha 0 types subcommunity b
## 3 raw alpha 0 types subcommunity c
## 4 raw alpha 1 types subcommunity a
## 5 raw alpha 1 types subcommunity b
## 6 raw alpha 1 types subcommunity c
## 7 raw alpha 2 types subcommunity a
## 8 raw alpha 2 types subcommunity b
## 9 raw alpha 2 types subcommunity c
## 10 normalised alpha 0 types subcommunity a
## 11 normalised alpha 0 types subcommunity b
## 12 normalised alpha 0 types subcommunity c
## 13 normalised alpha 1 types subcommunity a
## 14 normalised alpha 1 types subcommunity b
## 15 normalised alpha 1 types subcommunity c
## 16 normalised alpha 2 types subcommunity a
## 17 normalised alpha 2 types subcommunity b
## 18 normalised alpha 2 types subcommunity c
## 19 raw beta 0 types subcommunity a
## 20 raw beta 0 types subcommunity b
## 21 raw beta 0 types subcommunity c
## 22 raw beta 1 types subcommunity a
## 23 raw beta 1 types subcommunity b
## 24 raw beta 1 types subcommunity c
## 25 raw beta 2 types subcommunity a
## 26 raw beta 2 types subcommunity b
## 27 raw beta 2 types subcommunity c
## 28 normalised beta 0 types subcommunity a
## 29 normalised beta 0 types subcommunity b
## 30 normalised beta 0 types subcommunity c
## 31 normalised beta 1 types subcommunity a
## 32 normalised beta 1 types subcommunity b
## 33 normalised beta 1 types subcommunity c
## 34 normalised beta 2 types subcommunity a
## 35 normalised beta 2 types subcommunity b
## 36 normalised beta 2 types subcommunity c
## 37 raw rho 0 types subcommunity a
## 38 raw rho 0 types subcommunity b
## 39 raw rho 0 types subcommunity c
## 40 raw rho 1 types subcommunity a
## 41 raw rho 1 types subcommunity b
## 42 raw rho 1 types subcommunity c
## 43 raw rho 2 types subcommunity a
## 44 raw rho 2 types subcommunity b
## 45 raw rho 2 types subcommunity c
## 46 normalised rho 0 types subcommunity a
## 47 normalised rho 0 types subcommunity b
## 48 normalised rho 0 types subcommunity c
## 49 normalised rho 1 types subcommunity a
## 50 normalised rho 1 types subcommunity b
## 51 normalised rho 1 types subcommunity c
## 52 normalised rho 2 types subcommunity a
## 53 normalised rho 2 types subcommunity b
## 54 normalised rho 2 types subcommunity c
## 55 gamma 0 types subcommunity a
## 56 gamma 0 types subcommunity b
## 57 gamma 0 types subcommunity c
## 58 gamma 1 types subcommunity a
## 59 gamma 1 types subcommunity b
## 60 gamma 1 types subcommunity c
## 61 gamma 2 types subcommunity a
## 62 gamma 2 types subcommunity b
## 63 gamma 2 types subcommunity c
## diversity dat_id transformation normalised k max_d
## 1 8.0000000 naive NA NA NA NA
## 2 4.0000000 naive NA NA NA NA
## 3 4.0000000 naive NA NA NA NA
## 4 8.0000000 naive NA NA NA NA
## 5 4.0000000 naive NA NA NA NA
## 6 3.5095307 naive NA NA NA NA
## 7 8.0000000 naive NA NA NA NA
## 8 4.0000000 naive NA NA NA NA
## 9 3.2000000 naive NA NA NA NA
## 10 2.0000000 naive NA NA NA NA
## 11 1.0000000 naive NA NA NA NA
## 12 2.0000000 naive NA NA NA NA
## 13 2.0000000 naive NA NA NA NA
## 14 1.0000000 naive NA NA NA NA
## 15 1.7547654 naive NA NA NA NA
## 16 2.0000000 naive NA NA NA NA
## 17 1.0000000 naive NA NA NA NA
## 18 1.6000000 naive NA NA NA NA
## 19 0.2500000 naive NA NA NA NA
## 20 0.3333333 naive NA NA NA NA
## 21 0.5000000 naive NA NA NA NA
## 22 0.2886751 naive NA NA NA NA
## 23 0.3333333 naive NA NA NA NA
## 24 0.5000000 naive NA NA NA NA
## 25 0.3333333 naive NA NA NA NA
## 26 0.3333333 naive NA NA NA NA
## 27 0.5000000 naive NA NA NA NA
## 28 1.0000000 naive NA NA NA NA
## 29 1.3333333 naive NA NA NA NA
## 30 1.0000000 naive NA NA NA NA
## 31 1.1547005 naive NA NA NA NA
## 32 1.3333333 naive NA NA NA NA
## 33 1.0000000 naive NA NA NA NA
## 34 1.3333333 naive NA NA NA NA
## 35 1.3333333 naive NA NA NA NA
## 36 1.0000000 naive NA NA NA NA
## 37 4.0000000 naive NA NA NA NA
## 38 3.0000000 naive NA NA NA NA
## 39 2.0000000 naive NA NA NA NA
## 40 3.4641016 naive NA NA NA NA
## 41 3.0000000 naive NA NA NA NA
## 42 2.0000000 naive NA NA NA NA
## 43 3.0000000 naive NA NA NA NA
## 44 3.0000000 naive NA NA NA NA
## 45 2.0000000 naive NA NA NA NA
## 46 1.0000000 naive NA NA NA NA
## 47 0.7500000 naive NA NA NA NA
## 48 1.0000000 naive NA NA NA NA
## 49 0.8660254 naive NA NA NA NA
## 50 0.7500000 naive NA NA NA NA
## 51 1.0000000 naive NA NA NA NA
## 52 0.7500000 naive NA NA NA NA
## 53 0.7500000 naive NA NA NA NA
## 54 1.0000000 naive NA NA NA NA
## 55 2.6666667 naive NA NA NA NA
## 56 1.3333333 naive NA NA NA NA
## 57 2.0000000 naive NA NA NA NA
## 58 2.3094011 naive NA NA NA NA
## 59 1.3333333 naive NA NA NA NA
## 60 1.7547654 naive NA NA NA NA
## 61 2.0000000 naive NA NA NA NA
## 62 1.3333333 naive NA NA NA NA
## 63 1.6000000 naive NA NA NA NA
# Calculate all metacommunity diversity measures
metadiv(meta, 0:2)
## measure q type_level type_name partition_level partition_name
## 1 raw alpha 0 types metacommunity
## 2 raw alpha 1 types metacommunity
## 3 raw alpha 2 types metacommunity
## 4 normalised alpha 0 types metacommunity
## 5 normalised alpha 1 types metacommunity
## 6 normalised alpha 2 types metacommunity
## 7 raw beta 0 types metacommunity
## 8 raw beta 1 types metacommunity
## 9 raw beta 2 types metacommunity
## 10 normalised beta 0 types metacommunity
## 11 normalised beta 1 types metacommunity
## 12 normalised beta 2 types metacommunity
## 13 raw rho 0 types metacommunity
## 14 raw rho 1 types metacommunity
## 15 raw rho 2 types metacommunity
## 16 normalised rho 0 types metacommunity
## 17 normalised rho 1 types metacommunity
## 18 normalised rho 2 types metacommunity
## 19 gamma 0 types metacommunity
## 20 gamma 1 types metacommunity
## 21 gamma 2 types metacommunity
## diversity dat_id transformation normalised k max_d
## 1 5.0000000 naive NA NA NA NA
## 2 4.4556597 naive NA NA NA NA
## 3 4.0000000 naive NA NA NA NA
## 4 1.7500000 naive NA NA NA NA
## 5 1.5753136 naive NA NA NA NA
## 6 1.4545455 naive NA NA NA NA
## 7 0.3958333 naive NA NA NA NA
## 8 0.3938284 naive NA NA NA NA
## 9 0.4000000 naive NA NA NA NA
## 10 1.0833333 naive NA NA NA NA
## 11 1.1139149 naive NA NA NA NA
## 12 1.1428571 naive NA NA NA NA
## 13 2.7500000 naive NA NA NA NA
## 14 2.5391770 naive NA NA NA NA
## 15 2.4000000 naive NA NA NA NA
## 16 0.9375000 naive NA NA NA NA
## 17 0.8977346 naive NA NA NA NA
## 18 0.8571429 naive NA NA NA NA
## 19 2.0000000 naive NA NA NA NA
## 20 1.7547654 naive NA NA NA NA
## 21 1.6000000 naive NA NA NA NA
Taxonomic diversity is calculated from a lookup table that assigns each species to higher taxonomic groups. First, initialise the data:
# Taxonomic lookup table
Species <- c("tenuifolium", "asterolepis", "simplex var.grandiflora", "simplex var.ochnacea")
Genus <- c("Protium", "Quararibea", "Swartzia", "Swartzia")
Family <- c("Burseraceae", "Bombacaceae", "Fabaceae", "Fabaceae")
Subclass <- c("Sapindales", "Malvales", "Fabales", "Fabales")
lookup <- cbind.data.frame(Species, Genus, Family, Subclass)
# Partition matrix
partition <- matrix(rep(1, 8), nrow = 4)
colnames(partition) <- LETTERS[1:2]
rownames(partition) <- lookup$Species
Then assign values for each taxonomic level:
values <- c(Species = 0, Genus = 1, Family = 2, Subclass = 3, Other = 4)
Next, generate a distance object using the tax2dist() function:
d <- tax2dist(lookup, values)
By default, the tax2dist() argument precompute_dist is TRUE, such that a pairwise distance matrix is calculated automatically and stored in d@distance. If the taxonomy is too large, precompute_dist can be set to FALSE, which enables pairwise taxonomic similarity to be calculated on the fly when the metacommunity object is generated.
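For a very large taxonomy, the same call might therefore look like this sketch (d_deferred is just an illustrative name):
# Defer the pairwise distance calculation instead of storing the full matrix
d_deferred <- tax2dist(lookup, values, precompute_dist = FALSE)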
Then generate a similarity object using the dist2sim() function:
s <- dist2sim(d, "linear")
and a metacommunity object using the metacommunity() function:
meta <- metacommunity(partition, s)
## Metacommunity matrix was normalised to sum to 1.
Finally, calculate diversity; for example, metacommunity gamma diversity:
meta_gamma(meta, 0:2)
## measure q type_level type_name partition_level partition_name diversity
## 1 gamma 0 types metacommunity 3.142857
## 2 gamma 1 types metacommunity 3.023716
## 3 gamma 2 types metacommunity 2.909091
## dat_id transformation normalised k max_d
## 1 taxonomic linear TRUE 1 4
## 2 taxonomic linear TRUE 1 4
## 3 taxonomic linear TRUE 1 4
Phylogenetic diversity measures can be broadly split into two categories – those that look at the phylogeny as a whole, such as Faith’s (1992) phylogenetic diversity (Faith’s PD), and those that look at pairwise tip distances, such as mean pairwise distance (MPD; Webb, 2000). The framework of measures presented in this package is able to quantify phylogenetic diversity using both of these methods.
To calculate phylogenetic diversity from pairwise tip distances, first initialise the data:
# Example data
tree <- ape::rtree(4)
partition <- matrix(1:12, ncol=3)
partition <- partition/sum(partition)
A distance object is then generated using the phy2dist() function:
d <- phy2dist(tree)
By default, the phy2dist() argument precompute_dist is TRUE, such that a pairwise distance matrix is calculated automatically and stored in d@distance. If the phylogeny is too large, precompute_dist can be set to FALSE, which enables pairwise phylogenetic similarity to be calculated on the fly when the metacommunity object is generated.
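As with tax2dist(), the calculation can be deferred for a large tree (a sketch; d_deferred is an illustrative name):
# Defer the pairwise distance calculation
d_deferred <- phy2dist(tree, precompute_dist = FALSE)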
A similarity object is generated using the dist2sim() function:
s <- dist2sim(d, "linear")
and a metacommunity object using the metacommunity() function:
meta <- metacommunity(partition, s)
Finally, calculate diversity:
meta_gamma(meta, 0:2)
## measure q type_level type_name partition_level partition_name diversity
## 1 gamma 0 types metacommunity 2.506940
## 2 gamma 1 types metacommunity 2.459285
## 3 gamma 2 types metacommunity 2.414723
## dat_id transformation normalised k max_d
## 1 phylogenetic linear TRUE 1 1.796924
## 2 phylogenetic linear TRUE 1 1.796924
## 3 phylogenetic linear TRUE 1 1.796924
Alternatively, to quantify phylogenetic diversity using the structure of the tree as a whole, first initialise the data:
tree <- ape::rtree(4)
partition <- matrix(1:12, ncol=3)
partition <- partition/sum(partition)
colnames(partition) <- letters[1:3]
row.names(partition) <- paste0("sp",1:4)
tree$tip.label <- row.names(partition)
A similarity object is then generated directly from the tree using the phy2branch() function:
s <- phy2branch(tree, partition)
and a metacommunity object using the metacommunity() function:
meta <- metacommunity(partition, s)
Finally, calculate diversity:
meta_gamma(meta, 0:2)
## measure q type_level type_name partition_level partition_name diversity
## 1 gamma 0 types metacommunity 2.483452
## 2 gamma 1 types metacommunity 2.286031
## 3 gamma 2 types metacommunity 2.118287
## dat_id transformation normalised k max_d
## 1 phybranch NA NA NA NA
## 2 phybranch NA NA NA NA
## 3 phybranch NA NA NA NA
Note that a metacommunity generated using this approach contains three additional slots:
- @raw_abundance: the relative abundance of terminal species (where types are then considered to be historical species),
- @raw_structure: the length of evolutionary history of each historical species, and
- @parameters: parameters associated with historical species.
Diversity can also be calculated from genetic data stored in a variant call format (VCF) file; the pinfsc50 package must be installed for this example to work.
# Load the package into R
library(rdiversity)
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
# Read the file in twice: first for the column names, then for the data
tmp_vcf <- readLines(vcf_file)
vcf_data <- read.table(vcf_file, stringsAsFactors = FALSE)
# Filter for the column names
vcf_names <- unlist(strsplit(tmp_vcf[grep("#CHROM",tmp_vcf)],"\t"))
names(vcf_data) <- vcf_names
# Initialise the partition matrix (two subcommunities of nine samples each)
partition <- cbind.data.frame(A = c(rep(1, 9), rep(0, 9)), B = c(rep(0, 9), rep(1, 9)))
partition <- partition/sum(partition)
A distance object is then generated from the VCF data using the gen2dist() function:
d <- gen2dist(vcf_data)
and converted to a similarity object using the dist2sim() function:
s <- dist2sim(d, transform = "linear")
Note that the dist2sim() function has an optional argument, max_d, which defines the distance at which pairs of individuals have a similarity of zero. If not supplied, this is set to the maximum distance observed in the distance matrix. If comparing different windows on a genome, for example, it is necessary to ensure that max_d is the same for each analysis.
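For example, a common cut-off could be fixed across all windows (a sketch; s_fixed is an illustrative name and the value of max_d is arbitrary here, so it would need to be chosen for the data at hand):
# Use the same maximum distance for every genomic window being compared
s_fixed <- dist2sim(d, transform = "linear", max_d = 100)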
Finally, a metacommunity object is generated using the metacommunity() function:
# Match the partition row names to those of the similarity matrix
rownames(partition) <- rownames(s@similarity)
meta <- metacommunity(partition, s)
# Calculate normalised metacommunity beta diversity
norm_meta_beta(meta, 0:2)
A metacommunity can likewise be built from any user-supplied distance matrix. For example:
partition <- matrix(sample(6), nrow = 3)
rownames(partition) <- paste0("sp", 1:3)
partition <- partition / sum(partition)
d <- matrix(c(0,.75,1,.75,0,.3,1,.3,0), nrow = 3)
rownames(d) <- paste0("sp", 1:3)
colnames(d) <- paste0("sp", 1:3)
d <- distance(d, "my_taxonomy")
s <- dist2sim(d, "linear")
meta <- metacommunity(partition, s)
or directly from a user-supplied similarity matrix:
partition <- matrix(sample(6), nrow = 3)
rownames(partition) <- paste0("sp", 1:3)
partition <- partition / sum(partition)
s <- matrix(c(1,.8,0,.8,1,.1,0,.1,1), nrow = 3)
rownames(s) <- paste0("sp", 1:3)
colnames(s) <- paste0("sp", 1:3)
s <- similarity(s, "my_functional")
meta <- metacommunity(partition, s)
An existing metacommunity object can be given a new partition using the repartition() function. For example, first generate a metacommunity:
tree <- ape::rtree(5)
tree$tip.label <- paste0("sp", 1:5)
partition <- matrix(rep(1,10), nrow = 5)
row.names(partition) <- paste0("sp", 1:5)
partition <- partition / sum(partition)
s <- phy2branch(tree, partition)
meta <- metacommunity(partition, s)
new_partition <- matrix(sample(10), nrow = 5)
row.names(new_partition) <- paste0("sp", 1:5)
new_partition <- new_partition / sum(new_partition)
# Update the metacommunity with the new partition
new_meta <- repartition(meta, new_partition)
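The updated object can then be passed to the diversity functions in the usual way; for example (output not shown, since the new partition is randomly sampled):
# Recalculate metacommunity gamma diversity for the new partition
meta_gamma(new_meta, 0:2)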