VariantSpark is a framework based on Scala and Spark for analyzing genomic datasets. It is developed by the CSIRO Bioinformatics team in Australia. VariantSpark has been tested on datasets with 3,000 samples, each containing 80 million features, in both unsupervised clustering approaches and supervised applications such as classification and regression.
Genomic datasets are usually stored in Variant Call Format (VCF), a text file format used in bioinformatics for storing gene sequence variations. VariantSpark is a great tool because it can read VCF files, run analyses, and return the output as a Spark data frame.
This repo is an R package that integrates R and VariantSpark using sparklyr. This way, you can analyze huge genomic datasets without leaving your familiar R environment.
To upgrade to the latest version of variantspark, run the following commands and restart your R session:
install.packages("devtools")
devtools::install_github("r-spark/variantspark")
To use the variantspark R package, you need to create a VariantSpark connection; to do this, pass a Spark connection as an argument.
library(sparklyr)
library(variantspark)
sc <- spark_connect(master = "local")
vsc <- vs_connect(sc)
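If your VCF files are large, you may want to give the Spark driver more memory before connecting. A minimal sketch using sparklyr's spark_config(); the memory value below is only an example, tune it to your machine:

config <- spark_config()
# example setting: raise the driver memory for local mode (value is an assumption)
config$`sparklyr.shell.driver-memory` <- "8G"

sc <- spark_connect(master = "local", config = config)
vsc <- vs_connect(sc)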
VariantSpark can load VCF files as well as other formats, such as CSV.
hipster_vcf <- vs_read_vcf(vsc, "inst/extdata/hipster.vcf.bz2")
hipster_labels <- vs_read_csv(vsc, "inst/extdata/hipster_labels.txt")

# read just the label column
labels <- vs_read_labels(vsc, "inst/extdata/hipster_labels.txt")
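Assuming vs_read_csv() and vs_read_labels() return regular sparklyr tables, you can sanity-check what was loaded with the usual sparklyr verbs, for example:

# quick checks on the loaded data (assumes tbl_spark objects are returned)
sdf_nrow(hipster_labels)   # number of samples
head(labels, 5)            # preview the first few label rows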
Importance analysis is one of VariantSpark's applications. Briefly, VariantSpark uses a Random Forest to assign an "Importance" score to each tested variant, reflecting its association with the phenotype of interest: a variant with a higher "Importance" score is more strongly associated with that phenotype. This is how you can do it in R.
# calculate the "Importance"
importance <- vs_importance_analysis(vsc, hipster_vcf, labels, n_trees = 100)

# transform the output into a Spark tibble
importance_tbl <- importance_tbl(importance)
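If you want to keep the complete importance table rather than only the top hits, you can write it out directly from Spark. This is a sketch assuming importance_tbl is a standard sparklyr table; spark_write_csv() comes from sparklyr and the output path is only an example:

# persist the full importance table as CSV (path is an example)
spark_write_csv(importance_tbl, "output/hipster_importance")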
You can use dplyr and ggplot2 to transform the output and plot!
library(dplyr)
library(ggplot2)
# save an importance sample in memory
importance_df <- importance_tbl %>%
  arrange(-importance) %>%
  head(20) %>%
  collect()
# importance barplot
ggplot(importance_df) +
  aes(x = variable, y = importance) +
  geom_bar(stat = 'identity') +
  scale_x_discrete(limits = importance_df[order(importance_df$importance), 1]$variable) +
  coord_flip()
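If you want to keep the figure, ggplot2's ggsave() writes the most recent plot to disk; the file name and dimensions below are just examples:

# save the importance barplot as a PNG (file name and size are examples)
ggsave("importance_top20.png", width = 6, height = 8)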
Don’t forget to disconnect your session when you finish your work.
spark_disconnect(sc)