* Compatibility fix for changes to `tibble`. (#55)
* Internal updates for compatibility with `skimr`, `dplyr`, and `purrr`. Users should not notice any change in behavior from these.
* When `long_panel()` is not asked to check whether values are varying, it no longer performs a series of operations that are rendered unnecessary. This will slightly speed up performance when `check.varying = FALSE`. (#44)
* Fixes to the `long_panel()` and `are_varying()` functions.
* Compatibility update for upcoming changes to the `clubSandwich` package. Thanks to James Pustejovsky for submitting the necessary fixes.
Re-release: There are no changes, but `panelr` was removed from CRAN because one of the packages it depended on had also been removed. That package is now back on CRAN, so `panelr` will return as well.
Bugfixes:

* Fixed errors caused by `dplyr` updates. (#28, #29)
* A fix involving the `wave` variable.
* Compatibility updates for the new version of the `broom` package. (#30)

Bugfix:

* `long_panel()` now handles numeric waves correctly when the input data are unbalanced.

* Updates to accommodate changes to the `brms` package's interface for autocorrelated errors.
* Compatibility with the new version of the `tidyr` package.

Bugfixes:

* A fix for `wbm()` (#14; thanks @strengejacke).
* Conversion from `pdata.frame` to `panel_data` has been fixed.
* Added the `interaction.style` argument to `make_wb_data()`.
* `predict.wbm()` and `predict.wbgee()` have been improved. Notably, the DV does not need to be included in `newdata` and the ID variable is only required when necessary.

Lots of new stuff! CRAN coming soon as well.
* `wbgee()` works just like `wbm()`, except it uses GEE (via the `geepack` package) for estimation. This can give you more trustworthy results under some circumstances and is much less likely to have convergence problems (see the sketch below).
* `fdm()` estimates first differences models via GLS (from the `nlme` package).
* `asym()` estimates the linear asymmetric effects model described by Allison (2019) via first differences.
* `asym_gee()` estimates a similar asymmetric effects model to the one using cumulative differences described in Allison (2019), but using GEE rather than conditional logit.
* `heise()` produces stability and reliability estimates via the popular method described in Heise (1969).
* New example datasets (`nlsy` and `teen_poverty`).
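Below is a minimal, hypothetical sketch of the new estimators, using the `WageData` example data that ships with panelr; the particular model specifications are illustrative, not taken from the changelog.

```r
library(panelr)

# Build a panel_data object from the bundled WageData example
wages <- panel_data(WageData, id = id, wave = t)

# Within-between model estimated with GEE instead of multilevel ML
m_gee <- wbgee(lwage ~ wks + union | blk, data = wages)

# First differences via GLS, and the linear asymmetric effects model
m_fd   <- fdm(lwage ~ wks + union, data = wages)
m_asym <- asym(lwage ~ wks + union, data = wages)

summary(m_gee)
```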
New stuff:

* There is now a vignette to walk users through the process of reshaping panel data.
* There is now more sophisticated handling of interactions between time-varying variables in line with the recommendations of Giesselmann and Schmidt-Catran (2018).
* `are_varying()` can now also assess individual-level variation, so using the `type = "individual"` argument you can instead assess variables like age that vary over time but change equally for every case.
* `wbm()` can now handle transformed dependent variables (e.g. `log(y)`). Transformations on the right-hand side of the equation were always supported.
* `panel_data` objects are now quite a bit more difficult to break by accidentally subsetting the ID and wave columns out of existence. Now, subsetting via `data[]`, `select()` and implicitly via `transmute()` will never remove the ID and wave columns. You will also be warned if you `arrange()` a `panel_data` object since it will generally break `lag()` functions.
* `panel_data` objects now store information about what the periods are for the data, which you can access with the `get_periods()` function. For example, if the waves in your data are the numbers 1 through 7, that's what you'll get. This is more useful when the periods are irregular, such as if the waves are the years of a biennial survey. (See the sketch below.)
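As a rough illustration of the new helpers (again using the bundled `WageData` example; the variable choices are mine, not the changelog's):

```r
library(panelr)
wages <- panel_data(WageData, id = id, wave = t)

get_periods(wages)   # the periods (waves) stored with the data, e.g., 1 through 7

# Assess individual-level rather than over-time variation
are_varying(wages, exp, type = "individual")

# Transformed dependent variables now work in wbm()
m_log <- wbm(log(wks) ~ union + lwage, data = wages)
```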
Bugfixes:

* The way lagged predictors are mean-centered is now consistent with the conventional fixed effects estimator. Results may change non-trivially due to this change. Previously, the mean used for mean-centering was based on all waves of data, but now it is based on all waves except the number of lags away from the latest wave.
* Detrending has also been tweaked to work comparably with the changes to the mean-centering.
* You can now add the wave variable to `wbm()` in the formula without running into cryptic errors (see the sketch below).
* Fixed a problem in which transformed variables (like `lag(x)`) could not be included as a user-specified random effect. Pre-0.5.0, these could be included if they were surrounded by backticks, but now that hack is unnecessary and does not work.
* `make_wb_data()` is now updated to work with other internal updates introduced in 0.5.0.
* `long_panel()` was never really working right when the source data's labels were located at the beginning (i.e., `label_location = "beginning"`). It is now much more robust.
* `wbm()`'s `wave.factor` argument had become non-functional for some time but is now fixed.
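A hypothetical sketch of the now-working formula features, with variable names taken from the bundled `WageData` example:

```r
library(panelr)
wages <- panel_data(WageData, id = id, wave = t)

# Lagged predictors and the wave variable ("t" here) can appear directly in the formula
m <- wbm(lwage ~ lag(union) + wks + t, data = wages)
summary(m)
```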
Starting to polish things up for CRAN.

Key changes:

* `panel_data` frames now always place the `id` and `wave` columns first (in that order).
* `wbm()` can now handle time-varying factors appropriately. Do note that it only uses treatment contrasts, however. (#8)
* There is a new function, `line_plot()`, to help you explore trends in data. It's a little rough around the edges for now.
* Summaries of `wbm` objects are now a bit more streamlined and nice-looking.
* Added `tidy()` and `glance()` methods (from the `broom` package) for `wbm` objects. (#4)
* `as_panel_data()` is an alias for `panel_data()` when supplying a data frame and an S3 method otherwise. It can be used to convert `pdata.frame` objects from the `plm` package to `panel_data`.
* Formulas passed to `wbm()` are now converted to `Formula` objects to make working with their multiple parts easier (see the `Formula` package for more info).
* There is now a `summary` method for `panel_data` frames, which works best if you have `skimr` installed. You can use `dplyr::select()`-style syntax to select which variables you want to describe and choose to get descriptives by wave and/or entity. (See the sketch below.)
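A small, hypothetical sketch of the `broom` methods and the new `summary` method (the exact `summary` call pattern is inferred from the description above):

```r
library(panelr)
library(broom)

wages <- panel_data(WageData, id = id, wave = t)
m <- wbm(lwage ~ wks + union | blk, data = wages)

tidy(m)     # coefficient-level summary
glance(m)   # model-level statistics

# dplyr::select()-style choice of which variables to describe
summary(wages, lwage, union)
```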
This version has switched the default degrees of freedom calculation for linear `wbm` models to Satterthwaite, which is more computationally efficient and less prone to breaking R. The degrees of freedom are also calculated on a per-variable basis. Kenward-Roger standard errors and degrees of freedom can be requested with the `t.df = "Kenward-Roger"` argument.
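For instance (a sketch; the model specification is illustrative):

```r
library(panelr)
wages <- panel_data(WageData, id = id, wave = t)

# Request Kenward-Roger rather than the default Satterthwaite inference
m_kr <- wbm(lwage ~ wks + union | blk, data = wages, t.df = "Kenward-Roger")
```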
This version includes some major under-the-hood changes, converting from an S3 object representation to S4. This allows the `wbm` objects to formally be extensions of `merMod` objects, meaning any method that could apply to `wbm` but isn't formally implemented will fall back to the `merMod` implementation.
The `panel_data` class no longer hardcodes the id and wave variables as "id" and "wave". Instead, they keep whatever names they have, and the `panelr` functions will simply know which variables serve these special roles.
A new function, `make_wb_data`, allows users to do the data prepping that `wbm` does internally without having to use all the modeling choices made by `wbm`.
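A hypothetical sketch, assuming `make_wb_data` accepts the same formula and data arguments as `wbm`:

```r
library(panelr)
wages <- panel_data(WageData, id = id, wave = t)

# Within-between prepped data, without fitting any model
wb_dat <- make_wb_data(lwage ~ wks + union | blk, data = wages)
head(wb_dat)
```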
A series of helper functions have been added to make `wbm` objects behave more like regular model objects. Now `update`, `formula`, `terms`, `model.frame`, `coef`, `predict`, and several more are defined for `wbm`.
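For example (a sketch; the model specification is illustrative):

```r
library(panelr)
wages <- panel_data(WageData, id = id, wave = t)
m <- wbm(lwage ~ wks + union | blk, data = wages)

coef(m)              # fixed-effect estimates
formula(m)           # the Formula object used to fit the model
head(model.frame(m)) # the data used in estimation
```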
The `summary` function for `wbm` has been refined and had some minor bugs squished.
More tweaks to `widen_panel`, giving users the option to opt out of the feature introduced in 0.3.2 that stores data about varying and constant variables from `long_panel`. Since poor data labeling in the original wide data can cause those stored attributes to be wrong, users can use `ignore.attributes = TRUE` with `widen_panel` to force checking for varying variables with `are_varying`.

Users can now also supply a vector of varying variables, similar to `reshape` in base R.
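A sketch of both options, using the bundled `WageData` example; the `varying` argument name is an assumption based on the `reshape` comparison above.

```r
library(panelr)
wages <- panel_data(WageData, id = id, wave = t)

# Ignore any cached attributes and re-check which variables vary
wide <- widen_panel(wages, ignore.attributes = TRUE)

# Alternatively, a vector of varying variables can be supplied directly,
# much like reshape() in base R (argument name assumed)
# wide <- widen_panel(wages, varying = c("wks", "union", "lwage"))
```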
This small update adds an enhancement to `long_panel` and `widen_panel`. If you start with wide data, convert it to long format, and then want to convert back to wide, the `panel_data` object in long format will cache information about the variables to drastically speed up `widen_panel` when you run it again.

Additionally, `are_varying` was sped up by about 50%, though it still slows `widen_panel` down for data with many variables.
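A hypothetical round trip illustrating the caching; `wide_df` and its naming pattern (e.g., `income_1` through `income_4`) are made up for the example.

```r
library(panelr)

# wide_df: hypothetical wide data with columns like id, income_1, ..., income_4
long <- long_panel(wide_df, prefix = "_", begin = 1, end = 4)

# Converting back to wide is much faster thanks to the cached variable info
wide_again <- widen_panel(long)
```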
Tiny bugfixes:

* `long_panel` would error when supplied a `tibble` rather than a base `data.frame`.
* A fix related to the `magrittr` operators used internally.

New functions:
* `widen_panel` converts your `panel_data` object to wide format, with one row per entity. This can be useful for SEM analysis and some other things.
* `long_panel` does a much more difficult thing, which is convert wide-formatted data to the more conventional long panel data format. It contains several means for parsing the variable names of the wide-formatted data to produce a sensible long data frame with all the time-variant variables accounted for properly. Unlike `reshape`, it can deal with unbalanced data. (See the sketch below.)
* `are_varying` is a function that can let you check whether variables in long-formatted panel data vary over time or not.
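A hypothetical sketch of the name parsing, for wide data whose wave labels come at the beginning of the variable names (e.g., `W1_income`); `wide_df` and `income` are made up for the example.

```r
library(panelr)

# Parse names like W1_income, W2_income, W3_income into long format
long <- long_panel(wide_df, prefix = "W", suffix = "_",
                   label_location = "beginning", begin = 1, end = 3)

# Check which variables actually vary over time
are_varying(long, income)
```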
New feature:

* `detrend` and `balance_correction` arguments were added to `wbm` to implement the procedures described in Curran and Bauer (2011). These, respectively, account for over-time trends in the predictors and correct between-subject effects when panels are unbalanced. (See the sketch below.)
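A sketch of those arguments in use (again with the bundled `WageData`; the model itself is illustrative):

```r
library(panelr)
wages <- panel_data(WageData, id = id, wave = t)

# Apply the Curran and Bauer (2011) adjustments
m_cb <- wbm(lwage ~ wks + union, data = wages,
            detrend = TRUE, balance_correction = TRUE)
```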
* Added a `NEWS.md` file to track changes to the package.