Added DOI for JSS 2023 paper and corrected some typos in documentation (nfold -> nfolds) and vignette.
Removed unneeded legacy Fortran code, leaving only coxnet. Fixed up Matrix as() sequences.
Relatively minor changes to bugs in survival functions and bigGlm, and some improved failure messages.
Most of the Fortran code has been replaced by C++ by James Yang, leading to speedups in all cases. The exception is the Cox routine for right censored data, which is still under development.
Some of the Fortran in glmnet has been replaced by C++, written by the newest member of our team, James Yang.

* The wls routines (dense and sparse), which are the engines under the glmnet.path function when we use programmable families, are now written in C++, and lead to speedups of around 8x (see the sketch after this list).
* The family of elnet routines (sparse/dense, covariance/naive) for glmnet(..., family="gaussian") are all in C++, and lead to speedups of around 4x.
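For context, a minimal sketch (simulated data) of the two code paths referred to above: a character-string family uses the built-in elnet routines, while a family object is handled as a programmable family via glmnet.path.

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(500 * 50), 500, 50)
y <- rnorm(500)

# Character-string family: dispatches to the built-in elnet routines
fit_builtin <- glmnet(x, y, family = "gaussian")

# Family object: dispatches to glmnet.path and the wls engines ("programmable" families)
fit_program <- glmnet(x, y, family = gaussian())
```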
A new feature added, as well as some minor fixes to documentation.

* The exclude argument has come to life. Users can now pass a function that can take arguments x, y and weights, or a subset of these, for filtering variables. Details in documentation and vignette (see the sketch after this list).
* Prediction with a single newx observation failed before. This is fixed.
* Labeling of predictions from cv.glmnet improved.
* Fixed a bug in mortran/fortran that caused the program to loop ad infinitum.
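A minimal sketch of the function form of exclude, on simulated data; the variance filter and its cutoff are made up for illustration.

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
x[, 1] <- x[, 1] * 0.01   # make the first column nearly constant

# Hypothetical filter: the function may take x, y and weights (or a subset of these)
# and must return the indices of the variables to exclude.
vfilter <- function(x, ...) which(apply(x, 2, var) < 1e-3)

fit <- glmnet(x, y, exclude = vfilter)
```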
Fixed some bugs in the coxpath function to do with sparse X.

* When some penalty factors are zero, and X is sparse, we should not call GLM to get the start.
* apply does not work as intended with sparse X, so we now use matrix multiplies instead in computing lambda_max.
* Added documentation for cv.glmnet to explain implications of supplying lambda.
Expanded scope for the Cox model.

* We now allow (start, stop) data in addition to the original right-censored, all-start-at-zero option.
* Allow for strata as in survival::coxph.
* Allow for sparse X matrix with Cox models (was not available before).
* Provide method for survival::survfit (see the sketch below).

Vignettes are revised and reorganized. Additional index information is stored on cv.glmnet objects, and included when printed. The C index for Cox models uses the concordance function from package survival.
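A minimal sketch of the expanded Cox interface on simulated (start, stop) data with strata; the variable names and the choice of s are illustrative.

```r
library(glmnet)
library(survival)
set.seed(1)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)
start  <- runif(n, 0, 5)
stop   <- start + rexp(n)
status <- rbinom(n, 1, 0.6)
strat  <- sample(1:2, n, replace = TRUE)

y   <- stratifySurv(Surv(start, stop, status), strat)  # (start, stop) data plus strata
fit <- glmnet(x, y, family = "cox")                    # sparse x is also accepted
sf  <- survfit(fit, s = fit$lambda[5], x = x, y = y)   # survfit method for coxnet fits
```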
Major revision with added functionality. Any GLM family can be used now with glmnet, not just the built-in families. By passing a "family" object as the family argument (rather than a character string), one gets access to all families supported by glm. This development was programmed by the newest member of the glmnet team, Kenneth Tay.
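A minimal sketch on simulated data: any family object accepted by glm can be supplied in place of a character string.

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- rbinom(100, 1, plogis(x[, 1]))

fit_builtin <- glmnet(x, y, family = "binomial")                 # built-in family
fit_probit  <- glmnet(x, y, family = binomial(link = "probit"))  # any glm family object
```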
Bug fixes
* Intercept=FALSE with “Gaussian” is fixed. The dev.ratio comes out correctly now. The mortran code was changed directly in 4 places; look for “standard”. Thanks to Kenneth Tay.

Bug fixes
* confusion.glmnet was sometimes not returning a list because of apply collapsing structure
* cv.mrelnet and cv.multnet dropping dimensions inappropriately
* storePB to avoid segfault. Thanks Tomas Kalibera!
* assess.glmnet and cousins to be more helpful!
* lambda.interp to avoid edge cases (thanks David Keplinger)

Minor fix to correct Depends in the DESCRIPTION to R (>= 3.6.0)
This is a major revision with much added functionality, listed roughly in order of importance. An additional vignette called relax is supplied to describe the usage.

* relax argument added to glmnet. This causes the models in the path to be refit without regularization. The resulting object inherits from class glmnet, and has an additional component, itself a glmnet object, which is the relaxed fit.
* relax argument to cv.glmnet. This allows selection from a mixture of the relaxed fit and the regular fit. The mixture is governed by an argument gamma with a default of 5 values between 0 and 1.
* predict, coef and plot methods for relaxed and cv.relaxed objects.
* print method for relaxed object, and new print methods for cv.glmnet and cv.relaxed objects.
* trace.it=TRUE to glmnet and cv.glmnet. This can also be set for the session via glmnet.control.
* assess.glmnet, roc.glmnet and confusion.glmnet for displaying the performance of models.
* makeX for building the x matrix for input to glmnet. Main functionality is one-hot-encoding of factor variables, treatment of NA and creating sparse inputs.
* bigGlm for fitting the GLMs of glmnet unpenalized.

In addition to these new features, some of the code in glmnet has been tidied up, especially related to CV.
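A brief sketch of a few of these features on simulated data; the data frame passed to makeX is made up for illustration.

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 30), 200, 30)
y <- rnorm(200)

# Relaxed fits along the path, with progress reporting
fit <- glmnet(x, y, relax = TRUE, trace.it = TRUE)
print(fit)

# Cross-validation over a mixture of relaxed and regular fits
cvfit <- cv.glmnet(x, y, relax = TRUE, gamma = c(0, 0.25, 0.5, 0.75, 1))
plot(cvfit)
predict(cvfit, newx = x[1:5, ], s = "lambda.min")

# makeX: one-hot-encode factors, impute NAs, and build (optionally sparse) input
df <- data.frame(a = factor(c("u", "v", "u", "w")), b = c(1, NA, 3, 4))
xm <- makeX(df, na.impute = TRUE, sparse = TRUE)
```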
* coxnet.deviance to do with input pred, as well as saturated loglike (missing) and weights
* coxgrad function for computing the gradient
* cv.glmnet, for cases when weird things happen
* inst/mortran
* -Wall warnings
* newoffset created problems all over - fixed these
* exact=TRUE calls to coef and predict. See help file for more details
* y blows up elnet; error trap included
* lambda.interp which was returning NaN under degenerate circumstances
* Surv object
* predict and coef with exact=TRUE. The user is strongly encouraged to supply the original x and y values, as well as any other data such as weights that were used in the original fit (see the sketch below).
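A minimal sketch of the exact=TRUE behaviour described above, on simulated data: because the model is refit to include the requested s exactly, the original data must be resupplied.

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- rnorm(100)
fit <- glmnet(x, y)

# The original x and y (and any weights) are passed again so the path can be refit at s
coef(fit, s = 0.02, exact = TRUE, x = x, y = y)
predict(fit, newx = x[1:5, ], s = 0.02, exact = TRUE, x = x, y = y)
```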
* lognet when some weights are zero and x is sparse
* predict.glmnet, predict.multnet and predict.coxnet, when the s= argument is used with a vector of values. It was not doing the matrix multiply correctly
* intercept option
* glmnet.control for setting system parameters
* coxnet
* exact=TRUE option for prediction and coef functions
* mgaussian family for multivariate response
* grouped option for multinomial family
* newx and make dgCMatrix if sparse
* lognet added a classnames component to the object
* predict.lognet(type="class") now returns a character vector/matrix
* predict.glmnet: fixed bug with type="nonzero"
* glmnet: now x can inherit from sparseMatrix rather than the very specific dgCMatrix, and this will trigger sparse mode for glmnet (see the sketch below)
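For example (a small sketch using simulated data and the Matrix package):

```r
library(glmnet)
library(Matrix)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
x[sample(length(x), 800)] <- 0
y <- rnorm(100)

xs  <- Matrix(x, sparse = TRUE)  # any matrix inheriting from sparseMatrix now works
fit <- glmnet(xs, y)             # triggers sparse mode
```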
* glmnet.Rd (lambda.min): changed value to 0.01 if nobs < nvars; (lambda): added warnings to avoid single value; (lambda.min): renamed it lambda.min.ratio
* glmnet (lambda.min): changed value to 0.01 if nobs < nvars; (HessianExact): changed the sense (it was wrong); (lambda.min): renamed it lambda.min.ratio. This allows it to be called lambda.min in a call though
* predict.cv.glmnet (new function): makes predictions directly from the saved glmnet object on the cv object (see the sketch after this list)
* coef.cv.glmnet (new function): as above
* predict.cv.glmnet.Rd: help functions for the above
* cv.glmnet: insert drop(y) to avoid 1-column matrices; now include a glmnet.fit object for later predictions
* nonzeroCoef: added a special case for a single variable in x; it was dying on this
* deviance.glmnet: included
* deviance.glmnet.Rd: included
* glmnet_1.4.
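A small sketch of predicting directly from a cv.glmnet object, on simulated data:

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- rnorm(100)

cvfit <- cv.glmnet(x, y)                            # stores a glmnet.fit object
predict(cvfit, newx = x[1:5, ], s = "lambda.min")   # predictions straight from the cv object
coef(cvfit, s = "lambda.1se")
```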