- b1 %A0% b2 and b1 %Xa0% b2 now also work when lambda is specified for b1 and df is specified for b2 (or vice versa).
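  A minimal sketch of the mixed specification (the variables z and time and the particular lambda/df values are illustrative, not from the package documentation; the fragments go into an FDboost() formula):
      # penalty of the first base-learner set via lambda, of the second via df
      bols(z, lambda = 2) %A0% bbs(time, df = 3)
      # or vice versa
      bols(z, df = 2) %Xa0% bbs(time, lambda = 10)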
- Added clr() to compute the centered-log-ratio transform and its inverse for density-on-scalar regression in Bayes spaces.
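  A minimal sketch of clr() (the grid, the density, and the equal-weight choice w = 1 are illustrative; the arguments f, w, and inverse are assumed as documented in ?clr):
      library(FDboost)
      s <- seq(0.01, 0.99, length.out = 99)
      f <- dbeta(s, 2, 5)                          # a strictly positive density
      f_clr  <- clr(f, w = 1)                      # centered-log-ratio transform
      f_orig <- clr(f_clr, w = 1, inverse = TRUE)  # back-transform to the density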
- New data set birthDistribution.
- factorize() added for tensor-product factorization of estimated effects or models.
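  A minimal sketch (the fitted model m is hypothetical; the exact return structure is documented in ?factorize):
      fac <- factorize(m)   # factorize the estimated effects into tensor components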
- Fixed predict() for bsignal() with newdata and the functional covariate given as a numeric matrix, raised in #17.
- Use of LINPACK in solve() removed.
- Changes to timeformula; this feature is needed for the manifoldboost package.
- stabsel.FDboost() now uses applyFolds() instead of validateFDboost() to do cross-validation with recomputation of the smooth offset. This is only relevant for models with a functional response. This will change results if the model contains base-learners like bbsc() or bolsc(), as applyFolds() also recomputes the Z-matrix for those base-learners.
- Fixed integrationWeights() and integrationWeightsLeft() for unsorted time variables.
- Fixed predict.FDboost() such that interaction effects of two functional covariates like bsignal() %X% bsignal() can be predicted with new data.
- Fixed the check of dots$aggregate (i.e., dots$aggregate[1] != "sum") in predict.FDboost() so that it also works with the default, where aggregate is a vector of length 3 and later only the first argument is used via match.arg().
- Argument corrected in cvrisk() removed.
- cvrisk() has by default adequate folds for a noncyclic fitted FDboostLSS model, see #14.
- Replaced the deprecated cBind() with cbind().
- New function bootstrapCI() to compute bootstrapped coefficients.
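  A minimal sketch (the fitted model m is hypothetical, and the B_outer argument for the number of outer folds is an assumption; see ?bootstrapCI):
      ci <- bootstrapCI(m, B_outer = 100)   # bootstrap on the level of curves
      ci                                    # bootstrapped coefficient estimates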
- New data set emotion containing EEG and EMG measures under different experimental conditions.
- FDboost() now works with the response as a vector (instead of a 1-row matrix); thus, fitted() and predict() return a vector.
- update.FDboost() now works with a scalar response.
- FDboost() works with family Binomial(type = "glm"), see #1.
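  A minimal sketch for a scalar binary response (the data list dat with response y, functional covariate matrix X1, and grid s_grid is hypothetical):
      m <- FDboost(y ~ bsignal(X1, s = s_grid), timeformula = NULL,
                   family = Binomial(type = "glm"), data = dat)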
- applyFolds() works for factor response, see #7.
- cvLong() and cvMA() return a matrix for only one resampling fold with B = 1 (proposed by Almond Stoecker).
- Updated FDboost to mboost 2.8-0, which allows for mstop = 0.
- Updated FDboostLSS() such that it calls mboostLSS_fit() from gamboostLSS 2.0-0.
- In FDboost, set options("mboost_indexmin" = +Inf) to disable the internal use of ties in model fitting, as this breaks some methods for models with responses in long format and for models containing bhistx(), see #10.
- Deprecated validateFDboost(); use applyFolds() and bootstrapCI() instead.
- New function applyFolds() to compute the optimal stopping iteration.
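  A minimal sketch (the fitted model m is hypothetical; folds are set up on the level of curves via m$id, following the package examples):
      set.seed(123)
      cvr <- applyFolds(m, folds = cv(rep(1, length(unique(m$id))), B = 10))
      mstop(cvr)   # optimal stopping iteration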
- Fixed predict() with bbsc().
- Fixed bolsc(): correctly use the index in bolsc()/bbsc(). Previously, each observation was used only once for computing Z.
- New operator %Xa0% that computes a row-tensor product of two base-learners where the penalty in one direction is zero.
- New function reweightData() that computes the data for bootstrap or cross-validation folds.
- New stabsel.FDboost() that refits the smooth offset in each fold.
- Added argument fun to validateFDboost().
- New update.FDboost() that overwrites update.mboost().
- FDboost() works with family = Binomial().
- Fixed oobpred in validateFDboost() for irregular response and resampling at the curve level so that plot.validateFDboost() works for that case.
- FDboost(): now the formula given to mboost() within FDboost() uses the variables in the environment of the formula specified in FDboost().
- plot.FDboost() works for more effects, especially for effects like bolsc() %X% bhistx().
- New operator %A0% for the Kronecker product of two base-learners with an anisotropic penalty, for the special case where lambda1 or lambda2 is zero.
- bbsc() can be used with center = TRUE (derived by Almond Stoecker).
- In FDboostLSS(), a list of one-sided formulas can be specified for timeformula.
- FDboostLSS() works with families = GammaLSS().
- %A% uses weights in the model call. This only works correctly for weights on the level of blg1 and blg2 (same as weights on rows and columns of the response matrix).
- Calls to internal functions of mboost are done using mboost_intern().
- hyper_olsc() is based on hyper_ols() from mboost.
- New operator %Xc% for the row tensor product of two scalar covariates. The design matrix of the interaction effect is constrained such that the interaction is centered around the intercept and around the two main effects of the scalar covariates (experimental!). Use, for example, bols(x1) %Xc% bols(x2).
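  A minimal sketch in a model context (response Y, scalar covariates x1 and x2, time t, and data dat are hypothetical):
      m <- FDboost(Y ~ bolsc(x1) + bolsc(x2) + bols(x1) %Xc% bols(x2),
                   timeformula = ~ bbs(t), data = dat)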
- %Xc% for the row tensor product where the sum-to-zero constraint is applied to the design matrix resulting from the row tensor product (experimental!). Specifically, an intercept column is first added, and then the sum-to-zero constraint is applied. Use, for example, bolsc(x1) %Xc% bolsc(x2).
- s is now used as argvals in the FPCA conducted within bfpc().
- New operator %A% that implies anisotropic penalties for differently specified df in the two base-learners.
- The variable ONEx is used in a smooth intercept specified implicitly by ~1, for example, bols(ONEx, intercept=FALSE, df=1) %A% bbs(time).
- Effects specified with %A% or %O% are not expanded with the timeformula, allowing for different effects over time in the model.
- New function FDboostLSS() to fit GAMLSS models with functional data using the R package gamboostLSS.
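  A minimal sketch (variables and data dat are hypothetical; passing one formula per distribution parameter as a named list is assumed to work as in gamboostLSS):
      library(gamboostLSS)
      m <- FDboostLSS(list(mu = Y ~ bbsc(x), sigma = Y ~ bolsc(x)),
                      timeformula = ~ bbs(t), data = dat)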
- %Xc% for the row tensor product where the sum-to-zero constraint is applied to the design matrix resulting from the row tensor product (experimental!).
- Allow newdata to be a list in predict.FDboost() when used with signal base-learners.
- Fixed coef.FDboost() so that it works for 3-dimensional tensor products of the form bhistx() %X% bolsc() %X% bolsc() (with David Ruegamer).
- For timeformula = NULL, no Kronecker product with 1 is used, which changes the penalty (otherwise, the direction of 1 would also be penalized).
- gamboostLSS.
- MASS.
- prediction in the internal computation of the base-learners (work in progress).
- Fixed the case where timeLab of the hmatrix object in bhistx() is not equal to the time variable in timeformula.
- In FDboost(), the offset is supplied differently. For a scalar offset, use offset = "scalar". The default remains offset = NULL.
- predict.FDboost() has a new argument toFDboost (logical).
- fitted.FDboost() has argument toFDboost explicitly (not only via ...).
- New base-learner bhistx(), especially suited for effects used with %X%, e.g., bhistx() %X% bolsc().
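  A minimal sketch (X1h is assumed to be an hmatrix object, z a scalar covariate; df values are illustrative):
      # historical effect of a functional covariate times a constrained scalar effect
      bhistx(X1h, df = 5) %X% bolsc(z, df = 5)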
- coef.FDboost() and plot.FDboost() now handle effects like bhistx() %X% bolsc().
- For predict.FDboost() with effects bhistx() and newdata, the latest mboostPatch is necessary.
- integrationWeights() now gives equal weights for regular grids.
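  A minimal sketch (the matrix X1 of functional observations and its grid s_grid are hypothetical):
      w <- integrationWeights(X1 = X1, xind = s_grid)   # equal weights on a regular grid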
- New base-learner bfpc() for a functional covariate where both the functional covariate and the coefficient are expanded using FPCA (experimental feature!). Only works for regularly observed functional covariates.
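  A minimal sketch (Y, X1, s_grid, t, and dat are hypothetical; pve sets the proportion of variance explained in the FPCA):
      m <- FDboost(Y ~ bfpc(X1, s = s_grid, pve = 0.99),
                   timeformula = ~ bbs(t), data = dat)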
- coef.FDboost() only works for bhist() if the time variable is the same in the timeformula and in bhist().
- predict.FDboost() now checks that only type = "link" can be predicted for newdata.
- Default penalty changed to first differences (differences = 1), improving identifiability.
- New cvrisk.FDboost() that uses (by default) sampling on the level of curves, which is important for functional responses.
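  A minimal sketch (the fitted model m with functional response is hypothetical):
      cvr <- cvrisk(m)   # resamples on the level of curves by default
      mstop(cvr)         # optimal stopping iteration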
- cvrisk() and validateFDboost().
- In bhist(), an effect can be standardized.
- Added a CITATION file.
- Now based on mboost 2.4-2, which exports all important functions.
- The main argument is always passed in plot.FDboost().
- bhist() and bconcurrent() now work for equal time and s.
- predict.FDboost() works with tensor-product base-learners like bl1 %X% bl2.