The goal of this vignette is to explain the older resamplers, ResamplingVariableSizeTrainCV and ResamplingSameOtherCV, which output data that are useful for visualizing the train/test splits. If you do not want to visualize the train/test splits, then it is recommended to instead use the newer resampler, ResamplingSameOtherSizesCV (see the other vignette).
The goal of this section is to explain how to quantify the extent to which it is possible to train on one data subset and predict on another data subset. This kind of problem occurs frequently in many different domains.
The ideas are similar to my previous blog posts about how to do this in python and R. Below we explain how to use mlr3resampling for this purpose, in simulated regression and classification problems. To use this method on real data, the important sections to read below are named "Benchmark: computing test error," which show how to create these cross-validation experiments using mlr3 code.
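Before diving into the simulation, it may help to sketch the core idea in base R. The code below is a hypothetical illustration (not the package implementation, which handles stratification and instantiation for you): for each test fold within each test subset, the resampler considers train sets drawn from the same subset, the other subset(s), or all subsets, always excluding the test fold.

```r
## Minimal base-R sketch of the same/other/all train set idea,
## for one test fold within one test subset.
person <- rep(1:2, each=6)      # subset ID for each of 12 rows
fold <- rep(rep(1:3, 2), 2)     # 3-fold assignment within each subset
test.fold <- 1
test.subset <- 1
is.test <- person == test.subset & fold == test.fold
train.index.list <- list(
  same  = which(person == test.subset & fold != test.fold),
  other = which(person != test.subset & fold != test.fold),
  all   = which(fold != test.fold))
sapply(train.index.list, length)  # same=4, other=4, all=8
```

Comparing test error over these three kinds of train sets is what lets us quantify whether training on one subset generalizes to another.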
We begin by generating some data which can be used with regression algorithms. Assume there is a data set with some rows from one person and some rows from another,
N <- 300
library(data.table)
set.seed(1)
abs.x <- 2
reg.dt <- data.table(
  x=runif(N, -abs.x, abs.x),
  person=rep(1:2, each=0.5*N))
reg.pattern.list <- list(
  easy=function(x, person)x^2,
  impossible=function(x, person)(x^2+person*3)*(-1)^person)
reg.task.list <- list()
for(task_id in names(reg.pattern.list)){
  f <- reg.pattern.list[[task_id]]
  yname <- paste0("y_", task_id)
  reg.dt[, (yname) := f(x, person)+rnorm(N)][]
  task.dt <- reg.dt[, c("x", "person", yname), with=FALSE]
  reg.task <- mlr3::TaskRegr$new(
    task_id, task.dt, target=yname)
  reg.task$col_roles$subset <- "person"
  reg.task$col_roles$stratum <- "person"
  reg.task$col_roles$feature <- "x"
  reg.task.list[[task_id]] <- reg.task
}
reg.dt
reg.dt
#> x person y_easy y_impossible
#> <num> <int> <num> <num>
#> 1: -0.9379653 1 1.32996609 -2.918082
#> 2: -0.5115044 1 0.24307692 -3.866062
#> 3: 0.2914135 1 -0.23314657 -3.837799
#> 4: 1.6328312 1 1.73677545 -7.221749
#> 5: -1.1932723 1 -0.06356159 -5.877792
#> ---
#> 296: 0.7257701 2 -2.48130642 5.180948
#> 297: -1.6033236 2 1.20453459 9.604312
#> 298: -1.5243898 2 1.89966190 7.511988
#> 299: -1.7982414 2 3.47047566 11.035397
#> 300: 1.7170157 2 0.60541972 10.719685
The table above shows some simulated data for two regression problems, easy and impossible. For each problem, the code above creates a task via the mlr3::TaskRegr$new line, which tells mlr3 what data set to use, what is the target column, and what is the subset/stratum column. First we reshape the data using the code below,
(reg.tall <- nc::capture_melt_single(
  reg.dt,
  task_id="easy|impossible",
  value.name="y"))
#> x person task_id y
#> <num> <int> <char> <num>
#> 1: -0.9379653 1 easy 1.32996609
#> 2: -0.5115044 1 easy 0.24307692
#> 3: 0.2914135 1 easy -0.23314657
#> 4: 1.6328312 1 easy 1.73677545
#> 5: -1.1932723 1 easy -0.06356159
#> ---
#> 596: 0.7257701 2 impossible 5.18094849
#> 597: -1.6033236 2 impossible 9.60431191
#> 598: -1.5243898 2 impossible 7.51198770
#> 599: -1.7982414 2 impossible 11.03539747
#> 600: 1.7170157 2 impossible 10.71968480
The table above is a more convenient form for the visualization which we create using the code below,
if(require(animint2)){
  ggplot()+
    geom_point(aes(
      x, y),
      data=reg.tall)+
    facet_grid(
      task_id ~ person,
      labeller=label_both,
      space="free",
      scales="free")+
    scale_y_continuous(
      breaks=seq(-100, 100, by=2))
}
#> Loading required package: animint2
#> Registered S3 methods overwritten by 'animint2' (methods from 'ggplot2')
#>
#> Attaching package: 'animint2'
#> The following objects are masked from 'package:ggplot2':
#>
#>     (all ggplot2 geoms, stats, scales, coords, facets, themes, and
#>     related functions, e.g. aes, ggplot, geom_point, facet_grid)
In the simulated data above, we can see that for the easy pattern, the relation between x and y is the same for both people, whereas for the impossible pattern, the relation between x and y is different for each person. So we expect that training on one person and predicting on the other should work for the easy task, but not for the impossible task.
In the code below, we define a K-fold cross-validation experiment.
(reg_same_other <- mlr3resampling::ResamplingSameOtherCV$new())
#> <ResamplingSameOtherCV> : Same versus Other Cross-Validation
#> * Iterations:
#> * Instantiated: FALSE
#> * Parameters:
#> List of 1
#> $ folds: int 3
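With 3 folds, 2 subsets (people), and 3 choices of train subsets (same, other, all), each task/learner pair is evaluated on 3 × 2 × 3 = 18 train/test splits, which is the iteration count reported after instantiation. A base-R sketch of this enumeration (an illustration of the combinatorics, not the package internals):

```r
## Enumerate the cross-validation iterations of same/other CV:
## every combination of test fold, test subset, and train-subsets choice.
splits <- expand.grid(
  test.fold=1:3,
  test.subset=1:2,
  train.subsets=c("same", "other", "all"))
nrow(splits)  # 3 folds x 2 subsets x 3 train.subsets = 18
```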
In the code below, we define two learners to compare,
(reg.learner.list <- list(
  if(requireNamespace("rpart"))mlr3::LearnerRegrRpart$new(),
  mlr3::LearnerRegrFeatureless$new()))
#> [[1]]
#> <LearnerRegrRpart:regr.rpart>: Regression Tree
#> * Model: -
#> * Parameters: xval=0
#> * Packages: mlr3, rpart
#> * Predict Types: [response]
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: importance, missings, selected_features, weights
#>
#> [[2]]
#> <LearnerRegrFeatureless:regr.featureless>: Featureless Regression Learner
#> * Model: -
#> * Parameters: robust=FALSE
#> * Packages: mlr3, stats
#> * Predict Types: [response], se, quantiles
#> * Feature Types: logical, integer, numeric, character, factor, ordered,
#> POSIXct
#> * Properties: featureless, importance, missings, selected_features
In the code below, we define the benchmark grid, which is all combinations of tasks (easy and impossible), learners (rpart and featureless), and the one resampling method.
(reg.bench.grid <- mlr3::benchmark_grid(
  reg.task.list,
  reg.learner.list,
  reg_same_other))
#> task learner resampling
#> <char> <char> <char>
#> 1: easy regr.rpart same_other_cv
#> 2: easy regr.featureless same_other_cv
#> 3: impossible regr.rpart same_other_cv
#> 4: impossible regr.featureless same_other_cv
In the code below, we execute the benchmark experiment (in parallel using the multisession future plan).
if(FALSE){#for CRAN.
  if(require(future))plan("multisession")
}
if(require(lgr))get_logger("mlr3")$set_threshold("warn")
#> Loading required package: lgr
#>
#> Attaching package: 'lgr'
#> The following object is masked from 'package:ggplot2':
#>
#> Layout
(reg.bench.result <- mlr3::benchmark(
  reg.bench.grid, store_models = TRUE))
#> <BenchmarkResult> of 72 rows with 4 resampling runs
#> nr task_id learner_id resampling_id iters warnings errors
#> 1 easy regr.rpart same_other_cv 18 0 0
#> 2 easy regr.featureless same_other_cv 18 0 0
#> 3 impossible regr.rpart same_other_cv 18 0 0
#> 4 impossible regr.featureless same_other_cv 18 0 0
The code below computes the test error for each split,
reg.bench.score <- mlr3resampling::score(reg.bench.result)
reg.bench.score[1]
#> train.subsets test.fold test.subset person iteration test
#> <char> <int> <int> <int> <int> <list>
#> 1: all 1 1 1 1 1, 3, 5, 6,12,13,...
#> train uhash nr
#> <list> <char> <int>
#> 1: 4, 7, 9,10,18,20,... 5162c460-c881-4393-8b16-b7297db4d2bd 1
#> task task_id learner learner_id
#> <list> <char> <list> <char>
#> 1: <TaskRegr:easy> easy <LearnerRegrRpart:regr.rpart> regr.rpart
#> resampling resampling_id prediction_test regr.mse algorithm
#> <list> <char> <list> <num> <char>
#> 1: <ResamplingSameOtherCV> same_other_cv <PredictionRegr> 1.638015 rpart
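After scoring, a typical next step is to summarize test error over the splits, for example the mean MSE for each combination of task and train subsets. The sketch below uses a small synthetic stand-in for the score table (the real input would be reg.bench.score from above; column names match, values are invented for illustration):

```r
library(data.table)
## Synthetic stand-in for a score table: one MSE per split.
score.dt <- data.table(
  task_id=rep(c("easy", "impossible"), each=6),
  train.subsets=rep(c("same", "other", "all"), 4),
  regr.mse=c(1, 9, 1, 2, 8, 2, 1, 50, 25, 2, 60, 30))
## Mean test error over splits, by task and train subsets.
(mean.dt <- score.dt[
, .(mean.mse=mean(regr.mse))
, by=.(task_id, train.subsets)])
```

Large mean.mse for train.subsets="other" relative to "same" indicates that training on one subset does not generalize to the other.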
The code below visualizes the resulting test accuracy numbers.
if(require(animint2)){
  ggplot()+
    scale_x_log10()+
    geom_point(aes(
      regr.mse, train.subsets, color=algorithm),
      shape=1,
      data=reg.bench.score)+
    facet_grid(
      task_id ~ person,
      labeller=label_both,
      scales="free")
}
It is clear from the plot above that for the easy task, rpart achieves small test error no matter which subsets are used for training, whereas for the impossible task, rpart only achieves small test error when the test person is included in the train set (train.subsets same or all), and is no better than featureless when training only on the other person.
The code below can be used to create an interactive data visualization which allows exploring how different functions are learned during different splits.
inst <- reg.bench.score$resampling[[1]]$instance
rect.expand <- 0.2
grid.dt <- data.table(x=seq(-abs.x, abs.x, l=101), y=0)
grid.task <- mlr3::TaskRegr$new("grid", grid.dt, target="y")
pred.dt.list <- list()
point.dt.list <- list()
for(score.i in 1:nrow(reg.bench.score)){
  reg.bench.row <- reg.bench.score[score.i]
  task.dt <- data.table(
    reg.bench.row$task[[1]]$data(),
    reg.bench.row$resampling[[1]]$instance$id.dt)
  names(task.dt)[1] <- "y"
  set.ids <- data.table(
    set.name=c("test","train")
  )[
  , data.table(row_id=reg.bench.row[[set.name]][[1]])
  , by=set.name]
  i.points <- set.ids[
    task.dt, on="row_id"
  ][
    is.na(set.name), set.name := "unused"
  ]
  point.dt.list[[score.i]] <- data.table(
    reg.bench.row[, .(task_id, iteration)],
    i.points)
  i.learner <- reg.bench.row$learner[[1]]
  pred.dt.list[[score.i]] <- data.table(
    reg.bench.row[, .(
      task_id, iteration, algorithm
    )],
    as.data.table(
      i.learner$predict(grid.task)
    )[, .(x=grid.dt$x, y=response)]
  )
}
(pred.dt <- rbindlist(pred.dt.list))
#> task_id iteration algorithm x y
#> <char> <int> <char> <num> <num>
#> 1: easy 1 rpart -2.00 3.557968
#> 2: easy 1 rpart -1.96 3.557968
#> 3: easy 1 rpart -1.92 3.557968
#> 4: easy 1 rpart -1.88 3.557968
#> 5: easy 1 rpart -1.84 3.557968
#> ---
#> 7268: impossible 18 featureless 1.84 7.204232
#> 7269: impossible 18 featureless 1.88 7.204232
#> 7270: impossible 18 featureless 1.92 7.204232
#> 7271: impossible 18 featureless 1.96 7.204232
#> 7272: impossible 18 featureless 2.00 7.204232
(point.dt <- rbindlist(point.dt.list))
#> task_id iteration set.name row_id y x fold person
#> <char> <int> <char> <int> <num> <num> <int> <int>
#> 1: easy 1 test 1 1.32996609 -0.9379653 1 1
#> 2: easy 1 train 2 0.24307692 -0.5115044 3 1
#> 3: easy 1 test 3 -0.23314657 0.2914135 1 1
#> 4: easy 1 train 4 1.73677545 1.6328312 2 1
#> 5: easy 1 test 5 -0.06356159 -1.1932723 1 1
#> ---
#> 21596: impossible 18 train 296 5.18094849 0.7257701 1 2
#> 21597: impossible 18 train 297 9.60431191 -1.6033236 1 2
#> 21598: impossible 18 test 298 7.51198770 -1.5243898 3 2
#> 21599: impossible 18 train 299 11.03539747 -1.7982414 1 2
#> 21600: impossible 18 test 300 10.71968480 1.7170157 3 2
#> subset display_row
#> <int> <int>
#> 1: 1 1
#> 2: 1 101
#> 3: 1 2
#> 4: 1 51
#> 5: 1 3
#> ---
#> 21596: 2 198
#> 21597: 2 199
#> 21598: 2 299
#> 21599: 2 200
#> 21600: 2 300
set.colors <- c(
  train="#1B9E77",
  test="#D95F02",
  unused="white")
algo.colors <- c(
  featureless="blue",
  rpart="red")
make_person_subset <- function(DT){
  DT[, "person/subset" := person]
}
make_person_subset(point.dt)
make_person_subset(reg.bench.score)
if(require(animint2)){
  viz <- animint(
    title="Train/predict on subsets, regression",
    pred=ggplot()+
      ggtitle("Predictions for selected train/test split")+
      theme_animint(height=400)+
      scale_fill_manual(values=set.colors)+
      geom_point(aes(
        x, y, fill=set.name),
        showSelected="iteration",
        size=3,
        shape=21,
        data=point.dt)+
      scale_color_manual(values=algo.colors)+
      geom_line(aes(
        x, y, color=algorithm, subset=paste(algorithm, iteration)),
        showSelected="iteration",
        data=pred.dt)+
      facet_grid(
        task_id ~ `person/subset`,
        labeller=label_both,
        space="free",
        scales="free")+
      scale_y_continuous(
        breaks=seq(-100, 100, by=2)),
    err=ggplot()+
      ggtitle("Test error for each split")+
      theme_animint(height=400)+
      scale_y_log10(
        "Mean squared error on test set")+
      scale_fill_manual(values=algo.colors)+
      scale_x_discrete(
        "People/subsets in train set")+
      geom_point(aes(
        train.subsets, regr.mse, fill=algorithm),
        shape=1,
        size=5,
        stroke=2,
        color="black",
        color_off=NA,
        clickSelects="iteration",
        data=reg.bench.score)+
      facet_grid(
        task_id ~ `person/subset`,
        labeller=label_both,
        scales="free"),
    diagram=ggplot()+
      ggtitle("Select train/test split")+
      theme_bw()+
      theme_animint(height=300)+
      facet_grid(
        . ~ train.subsets,
        scales="free",
        space="free")+
      scale_size_manual(values=c(subset=3, fold=1))+
      scale_color_manual(values=c(subset="orange", fold="grey50"))+
      geom_rect(aes(
        xmin=-Inf, xmax=Inf,
        color=rows,
        size=rows,
        ymin=display_row, ymax=display_end),
        fill=NA,
        data=inst$viz.rect.dt)+
      scale_fill_manual(values=set.colors)+
      geom_rect(aes(
        xmin=iteration-rect.expand, ymin=display_row,
        xmax=iteration+rect.expand, ymax=display_end,
        fill=set.name),
        clickSelects="iteration",
        data=inst$viz.set.dt)+
      geom_text(aes(
        ifelse(rows=="subset", Inf, -Inf),
        (display_row+display_end)/2,
        hjust=ifelse(rows=="subset", 1, 0),
        label=paste0(rows, "=", ifelse(rows=="subset", subset, fold))),
        data=data.table(train.name="same", inst$viz.rect.dt))+
      scale_x_continuous(
        "Split number / cross-validation iteration")+
      scale_y_continuous(
        "Row number"),
    source="https://github.com/tdhock/mlr3resampling/blob/main/vignettes/ResamplingSameOtherCV.Rmd")
  viz
}