The pcFactorStan package for R provides convenience functions and pre-programmed Stan models related to analysis of paired comparison data. Its purpose is to make fitting models using Stan easy and easy to understand. pcFactorStan relies on the rstan package, which should be installed first. See here for instructions on installing rstan.
One situation where a factor model might be useful is when there are people that play in tournaments of more than one game. For example, the computer player AlphaZero (Silver et al. 2018) has trained to play chess, shogi, and Go. We can take the tournament match outcome data for each of these games and find rankings among the players. We may also suspect that there is a latent board game skill that accounts for some proportion of the variance in the per-board game rankings. This proportion can be recovered by the factor model.
Our goal may be to fit a factor model, but it is necessary to build up the model step-by-step. There are essentially three models: ‘unidim’, ‘correlation’, and ‘factor’. ‘unidim’ analyzes a single item. ‘correlation’ is suitable for two or more items. Once you have vetted your items with the ‘unidim’ and ‘correlation’ models, then you can try the ‘factor’ model. There is also a special model, ‘unidim_adapt’. All models except this one require scaling constants. To find appropriate scaling constants, we will fit ‘unidim_adapt’ to each item separately.
The R code below first loads rstan and pcFactorStan. We also load loo for extra diagnostics, qgraph and ggplot2 for visualization, and Matrix for a matrix helper used later.
library(rstan)
library(pcFactorStan)
library(loo)
library(qgraph)
library(ggplot2)
library(Matrix)
Next we take a peek at the data.
head(phyActFlowPropensity)
pa1 | pa2 | skill | predict | novelty | creative | complex | goal1 | feedback1 | chatter | waiting | body | control | present | spont | stakes | evaluated | reward |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mountain biking | tennis | 1 | -1 | -2 | 1 | 1 | 1 | 1 | -2 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 0 |
mountain biking | tennis | 1 | 2 | -1 | -1 | -1 | 0 | 2 | 1 | 2 | 0 | 1 | 0 | 0 | 1 | 2 | -1 |
ice skating | running | -2 | 1 | -1 | -2 | -1 | 1 | 1 | -2 | -2 | -1 | 0 | 0 | -1 | -1 | -1 | 0 |
climbing | rowing | -2 | 2 | -2 | -2 | -2 | 0 | -1 | -1 | -1 | -1 | -1 | -1 | 1 | 0 | 0 | 0 |
card game | gardening | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | -2 | 2 | 1 | 0 | 0 | 2 | -2 | 2 |
dance | table tennis | 0 | -2 | -1 | -1 | 0 | -1 | -1 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
These data consist of paired comparisons of 87 physical activities on 16 flow-related facets. Participants submitted two activities using free-form input. These activities were substituted into item templates. For example, Item predict consisted of the prompt, “How predictable is the action?” with response options:
- A1 is much more predictable than A2.
- A1 is somewhat more predictable than A2.
- A2 is somewhat more predictable than A1.
- A2 is much more predictable than A1.

If the participant selected ‘golf’ and ‘running’ for activities then ‘golf’ was substituted into A1 and ‘running’ into A2. Duly prepared, the item was presented and the participant asked to select the most plausible statement.
A somewhat more response is scored 1 or -1 and a much more response is scored 2 or -2. A tie (i.e. roughly equal) is scored as zero. A negative value indicates > (greater than) and a positive value indicates < (less than). For example, if A1 is golf, A2 is running, and the observed response is 2, then the endorsement is “golf is much less predictable than running.”
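As a compact reference, the scoring scheme can be written out in R (sign conventions only; this is not package code):

# Response scores for an item comparing A1 against A2.
# Negative scores favor A1 (A1 is "more"); positive scores favor A2.
scores <- c("A1 much more" = -2, "A1 somewhat more" = -1, "roughly equal" = 0,
            "A2 somewhat more" = 1, "A2 much more" = 2)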
We will need to analyze each item separately before we analyze them together. Therefore, we will start with Item skill. Data must be fed into Stan in a partially digested form. The next block of code demonstrates how a suitable data list may be constructed using the prepData() function. This function automatically determines the number of threshold parameters based on the range observed in your data. One thing it does not do is pick a varCorrection factor. The varCorrection determines the degree of adaptation in the model. Usually some choice between 2.0 and 4.0 will obtain optimal results.
dl <- prepData(phyActFlowPropensity[,c(paste0('pa',1:2), 'skill')])
dl$varCorrection <- 5.0
Next we fit the model using the pcStan() function, which is a wrapper for stan() from rstan. We also choose the number of chains. As is customary with Stan, the first half of each chain is used to adapt the sampler (i.e. warm up) and is excluded from inference.
fit1 <- pcStan("unidim_adapt", data=dl)
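pcStan uses reasonable defaults for the sampler. If you want explicit control over, say, the number of chains or iterations, extra arguments should be forwarded to rstan's sampler (an assumption about argument forwarding; see ?pcStan), e.g.,

# fit1 <- pcStan("unidim_adapt", data=dl, chains=4, iter=2000L)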
A variety of diagnostics are available to check whether the sampler ran into trouble.
check_hmc_diagnostics(fit1)
#>
#> Divergences:
#> 0 of 4000 iterations ended with a divergence.
#>
#> Tree depth:
#> 0 of 4000 iterations saturated the maximum tree depth of 10.
#>
#> Energy:
#> E-BFMI indicated no pathological behavior.
Everything looks good, but there are a few more things to check. We want \(\widehat R\) < 1.015 and effective sample size greater than 100 times the number of chains (Vehtari et al., 2019).
allPars <- summary(fit1, probs=c())$summary
print(min(allPars[,'n_eff']))
#> [1] 593.8
print(max(allPars[,'Rhat']))
#> [1] 1.007
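If you will be repeating this check for several fits, a small helper (hypothetical; not part of pcFactorStan) can encode both criteria:

# Returns the worst Rhat and n_eff and whether both criteria are satisfied.
checkConverged <- function(fit, nChains=4) {
  s <- summary(fit, probs=c())$summary
  c(maxRhat = max(s[,'Rhat'], na.rm=TRUE),
    minNeff = min(s[,'n_eff'], na.rm=TRUE),
    ok = max(s[,'Rhat'], na.rm=TRUE) < 1.015 &&
      min(s[,'n_eff'], na.rm=TRUE) > 100 * nChains)
}
checkConverged(fit1)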
Again, everything looks good. If the target values were not reached then we would sample the model again with more iterations. Time for a plot,
theta <- summary(fit1, pars=c("theta"), probs=c())$summary[,'mean']
ggplot(data.frame(x=theta, activity=dl$nameInfo$pa, y=0.47)) +
geom_point(aes(x=x),y=0) +
geom_text(aes(label=activity, x=x, y=y),
angle=85, hjust=0, size=2,
position = position_jitter(width = 0, height = 0.4)) + ylim(0,1) +
theme(legend.position="none",
axis.title.x=element_blank(),
axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
Intuitively, this seems like a fairly reasonable ranking for skill. As pretty as the plot is, the main reason that we fit this model was to find a scaling constant to produce a score variance close to 1.0,
s50 <- summary(fit1, pars=c("scale"), probs=c(.5))$summary[,'50%']
print(s50)
#> [1] 0.7181
We use the median instead of the mean because scale
is
not likely to have a symmetric marginal posterior distribution. We
obtained 0.7181, but that value is just for one item. We have to perform
the same procedure for every item. Wow, that would be really tedious …
if we did not have a function to do it for us! Fortunately,
calibrateItems
takes care of it and produces a table of the
pertinent data,
result <- calibrateItems(phyActFlowPropensity, iter=1000L)
print(result)
item | iter | divergent | treedepth | low_bfmi | n_eff | Rhat | scale | thetaVar |
---|---|---|---|---|---|---|---|---|
skill | 1500 | 0 | 0 | 0 | 434.3 | 1.011 | 0.7170 | 0.8754 |
predict | 1500 | 0 | 0 | 0 | 545.1 | 1.012 | 0.7030 | 0.8685 |
novelty | 1000 | 0 | 0 | 0 | 418.4 | 1.012 | 0.5647 | 0.7957 |
creative | 1000 | 0 | 0 | 0 | 439.8 | 1.007 | 0.5689 | 0.7980 |
complex | 1500 | 0 | 0 | 0 | 578.7 | 1.010 | 0.6459 | 0.8396 |
goal1 | 1500 | 0 | 0 | 0 | 491.8 | 1.008 | 0.0819 | 0.3676 |
feedback1 | 1500 | 0 | 0 | 0 | 449.8 | 1.012 | 0.1756 | 0.4986 |
chatter | 1000 | 0 | 0 | 0 | 497.1 | 1.003 | 0.2818 | 0.6026 |
waiting | 1500 | 0 | 0 | 0 | 756.0 | 1.009 | 0.5762 | 0.8021 |
body | 1000 | 0 | 0 | 0 | 492.2 | 1.006 | 0.4143 | 0.7030 |
control | 1000 | 0 | 0 | 0 | 430.9 | 1.003 | 0.3453 | 0.6535 |
present | 1000 | 0 | 0 | 0 | 533.0 | 1.004 | 0.2573 | 0.5810 |
spont | 1000 | 0 | 0 | 0 | 567.8 | 1.008 | 0.2894 | 0.6090 |
stakes | 1500 | 0 | 0 | 0 | 600.0 | 1.010 | 0.3002 | 0.6180 |
evaluated | 2250 | 0 | 0 | 0 | 949.6 | 1.006 | 0.5190 | 0.7693 |
reward | 1500 | 0 | 0 | 0 | 656.1 | 1.005 | 0.2137 | 0.5395 |
Items goal1 and feedback1 are prone to failure. This happens when there is no clear ranking between objects. For example, if we observe that A<B, B<C, and C<A then the only sensible interpretation is that A=B=C, which can only have close to zero variance. We exclude these two items with the smallest scale. I requested iter=1000L to demonstrate how calibrateItems will resample the model until n_eff is large enough and Rhat small enough. As demonstrated in the iter column, some items needed more than 1000 samples to converge.
Next we will fit the correlation model. We exclude parameters that start with the prefix raw. These parameters are needed by the model, but should not be interpreted.
pafp <- phyActFlowPropensity
excl <- match(c('goal1','feedback1'), colnames(pafp))
pafp <- pafp[,-excl]
dl <- prepData(pafp)
dl$scale <- result[match(dl$nameInfo$item, result$item), 'scale']
fit2 <- pcStan("correlation", data=dl, include=FALSE, pars=c('rawTheta', 'rawThetaCorChol'))
check_hmc_diagnostics(fit2)
#>
#> Divergences:
#> 0 of 4000 iterations ended with a divergence.
#>
#> Tree depth:
#> 0 of 4000 iterations saturated the maximum tree depth of 10.
#>
#> Energy:
#> E-BFMI indicated no pathological behavior.
allPars <- summary(fit2, probs=0.5)$summary
print(min(allPars[,'n_eff']))
#> [1] NaN
print(max(allPars[,'Rhat']))
#> [1] NaN
The HMC diagnostics look good, but … oh dear! Something is wrong with the n_eff and \(\widehat R\). Let us look more carefully,
head(allPars[order(allPars[,'sd']),])
#> mean se_mean sd 50% n_eff Rhat
#> thetaCor[1,1] 1 NaN 0.000e+00 1 NaN NaN
#> thetaCor[2,2] 1 9.391e-19 5.862e-17 1 3897.1 0.999
#> thetaCor[3,3] 1 1.083e-18 6.177e-17 1 3253.6 0.999
#> thetaCor[4,4] 1 4.465e-18 6.671e-17 1 223.2 0.999
#> thetaCor[5,5] 1 1.329e-18 6.743e-17 1 2574.7 0.999
#> thetaCor[7,7] 1 1.448e-18 7.711e-17 1 2837.5 0.999
Ah ha! It looks like all the entries of the correlation matrix are reported, including the entries that are not stochastic but are fixed to constant values. We need to filter those out to get sensible results.
allPars <- allPars[allPars[,'sd'] > 1e-6,]
print(min(allPars[,'n_eff']))
#> [1] 560.5
print(max(allPars[,'Rhat']))
#> [1] 1.008
Ah, much better. Now we can inspect the correlation matrix. There are many ways to visualize a correlation matrix. One of my favorite ways is to plot it using the qgraph package,
corItemNames <- dl$nameInfo$item
tc <- summary(fit2, pars=c("thetaCor"), probs=c(.1,.5,.9))$summary
tcor <- matrix(tc[,'mean'], length(corItemNames), length(corItemNames))
tcor[sign(tc[,'10%']) != sign(tc[,'90%'])] <- 0 # delete faint edges
dimnames(tcor) <- list(corItemNames, corItemNames)
tcor <- nearPD(tcor, corr=TRUE)$mat
qgraph(tcor, layout = "spring", graph = "cor", labels=colnames(tcor),
legend.cex = 0.3,
cut = 0.3, maximum = 1, minimum = 0, esize = 20,
vsize = 7, repulsion = 0.8, negDashed=TRUE, theme="colorblind")
Based on this plot and theoretical considerations, I decided to exclude spont, control, evaluated, and waiting from the factor model. A detailed rationale for why these items, and not others, are excluded will be presented in a forthcoming article. For now, let us focus on the mechanics of data analysis. Here are item response curves,
df <- responseCurve(dl, fit2,
item=setdiff(dl$nameInfo$item, c('spont','control','evaluated','waiting')),
responseNames=c("much more","somewhat more", 'equal',
"somewhat less", "much less"))
ggplot(df) +
geom_line(aes(x=worthDiff, y=prob, color=response, linetype=response,
group=responseSample), linewidth=.2, alpha=.2) +
xlab("difference in latent worths") + ylab("probability") +
ylim(0,1) + facet_wrap(~item, scales="free_x") +
guides(color=guide_legend(override.aes=list(alpha = 1, linewidth=1)))
We plot response curves from the correlation model and not the factor model because the factor model is expected to report slightly inflated discrimination estimates. These response curves are a function of the thresholds, scale, and alpha parameters. The ability of an item to discriminate amongst objects is partitioned into the scale and alpha parameters. Most of the information is accounted for by the scale parameter, and the alpha parameter should always be estimated near 1.75. The distribution of objects is always standardized to a variance near 1.0; the scale parameter zooms in on the x-axis to account for the ability to make finer and finer distinctions among objects. Notice the variation in x-axis among the plots above. A detailed description of the item response model can be found in the man page for responseCurve.
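To build intuition for how scale and alpha jointly set the effective discrimination, here is a toy logistic curve for the probability of endorsing one direction over the other (an illustration only, not the package's exact category-response likelihood):

worthDiff <- seq(-3, 3, length.out=61)          # difference in latent worths
itemScale <- 0.72                               # per-item scaling constant (example value)
alpha <- 1.749                                  # discrimination, typically near 1.75
pMore <- plogis(alpha * itemScale * worthDiff)  # probability of endorsing "more"
plot(worthDiff, pMore, type='l',
     xlab="difference in latent worths", ylab="probability")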
I will fit model ‘factor1_ll’ instead of ‘factor1’ so I can use the loo package to look for outliers. We also need to take care that the data pafp matches, one-to-one, the data seen by Stan so we can map back from the model to the data. Hence, we update pafp using the usual data cleaning sequence of filterGraph and normalizeData and pass the result to prepCleanData.
Up until version 1.0.2, only a single factor model was available. As of 1.1.0, the factor model supports an arbitrary number of factors and arbitrary factor-to-item structure. In this example, we will stay with the simplest factor model, a single factor that predicts all items.
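For reference, a custom factor-to-item structure could be declared with prepFactorModel instead of prepSingleFactorModel. The sketch below is kept as a comment because we use the single-factor model here; the factor names are hypothetical, and you should see ?prepFactorModel for the exact interface.

# dl <- prepFactorModel(dl,
#                       list(flow   = c('complex','skill','predict','novelty','creative'),
#                            social = c('chatter','body','present','stakes','reward')))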
pafp <- pafp[,c(paste0('pa',1:2),
setdiff(corItemNames, c('spont','control','evaluated','waiting')))]
pafp <- normalizeData(filterGraph(pafp))
dl <- prepCleanData(pafp)
dl <- prepSingleFactorModel(dl)
dl$scale <- result[match(dl$nameInfo$item, result$item), 'scale']
fit3 <- pcStan("factor1_ll", data=dl, include=FALSE,
pars=c('rawUnique', 'rawUniqueTheta', 'rawPerComponentVar',
'rawFactor', 'rawLoadings', 'rawFactorProp', 'rawThreshold',
'rawPathProp', 'rawCumTh'))
To check the fit diagnostics, we have to take care to examine only the parameters of interest. The factor model outputs many parameters that should not be interpreted (those that start with the prefix raw).
check_hmc_diagnostics(fit3)
#>
#> Divergences:
#> 0 of 4000 iterations ended with a divergence.
#>
#> Tree depth:
#> 0 of 4000 iterations saturated the maximum tree depth of 10.
#>
#> Energy:
#> E-BFMI indicated no pathological behavior.
interest <- c("threshold", "alpha", "pathProp", "factor", "residualItemCor", "lp__")
allPars <- summary(fit3, pars=interest)$summary
allPars <- allPars[allPars[,'sd'] > 1e-6,]
print(min(allPars[,'n_eff']))
#> [1] 1147
print(max(allPars[,'Rhat']))
#> [1] 1.005
Looks good!
Let us see which data are the most unexpected by the model. We create
a loo
object and inspect the summary output.
options(mc.cores=1) # otherwise loo consumes too much RAM
kThreshold <- 0.1
l1 <- toLoo(fit3)
print(l1)
#>
#> Computed from 4000 by 9893 log-likelihood matrix
#>
#> Estimate SE
#> elpd_loo -13724.8 61.1
#> p_loo 254.8 3.5
#> looic 27449.6 122.3
#> ------
#> Monte Carlo SE of elpd_loo is 0.2.
#>
#> All Pareto k estimates are good (k < 0.5).
#> See help('pareto-k-diagnostic') for details.
The Pareto \(k\) estimates are particularly noisy due to the many activities with a small sample size. Sometimes all \(k<0.5\) and sometimes not. We can also look at p_loo, the effective number of parameters. In well-behaved cases, p_loo is less than the sample size and the number of parameters. This looks good here; there are 610 parameters just for the unique scores.
To connect \(k\) statistics with
observations, we pass the loo
object to
outlierTable
and use a threshold of 0.1 instead of 0.5 to
ensure that we get enough lines. Activities with small sample sizes are
retained by filterGraph
if they connect other activities
because they contribute information to the model. When we look at
outliers, we can limit ourselves to activities with a sample size of at
least 11.
pa11 <- levels(filterGraph(pafp, minDifferent=11L)$pa1)
ot <- outlierTable(dl, l1, kThreshold)
ot <- subset(ot, pa1 %in% pa11 & pa2 %in% pa11)
print(ot[1:6,])
pa1 | pa2 | item | pick | k | |
---|---|---|---|---|---|
11 | badminton | elliptical | creative | 0 | 0.3334 |
16 | badminton | cricket | creative | -1 | 0.3268 |
18 | cycling | aerobic exercise | present | 0 | 0.3241 |
32 | climbing | snow skiing | predict | -1 | 0.3042 |
38 | dance | swimming | reward | 1 | 0.2975 |
39 | badminton | volleyball | creative | 1 | 0.2937 |
xx <- which(ot[,'pa1'] == 'mountain biking' & ot[,'pa2'] == 'climbing' & ot[,'item'] == 'predict' & ot[,'pick'] == -2)
pa1 | pa2 | item | pick | k | |
---|---|---|---|---|---|
1287 | mountain biking | climbing | predict | -2 | 0.1185 |
We will take a closer look at row 1287. What does a pick of -2 mean? Pick numbers are converted to response categories by adding the number of thresholds plus one. There are two thresholds (much and somewhat), so 3 + -2 = 1. Looking back at our item response curve plot, the legend gives the response category order from top (1) to bottom (5). The first response category is much more. Putting it all together, we obtain an endorsement of mountain biking is much more predictable than climbing.
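In code, the conversion for this row is simply,

pick <- -2
numThresholds <- 2            # 'much' and 'somewhat'
numThresholds + 1 + pick      # response category 1, i.e. "much more"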
Specifically what about that assertion is unexpected? We can examine how
other participants have responded,
pafp[pafp$pa1 == ot[xx,'pa1'] & pafp$pa2 == ot[xx,'pa2'],
c('pa1','pa2', as.character(ot[xx,'item']))]
#> pa1 pa2 predict
#> 563 mountain biking climbing -2
#> 564 mountain biking climbing -2
Hm, both participants agreed. Let us look a little deeper to understand why this response was unexpected.
loc <- sapply(ot[xx,c('pa1','pa2','item')], unfactor)
exam <- summary(fit3, pars=paste0("theta[",loc[paste0('pa',1:2)],
",", loc['item'],"]"))$summary
rownames(exam) <- c(as.character(ot[xx,'pa1']), as.character(ot[xx,'pa2']))
mean | se_mean | sd | 2.5% | 25% | 50% | 75% | 97.5% | n_eff | Rhat | |
---|---|---|---|---|---|---|---|---|---|---|
mountain biking | -0.0588 | 0.0050 | 0.2758 | -0.6005 | -0.2490 | -0.0604 | 0.1373 | 0.4645 | 2991 | 1.000 |
climbing | -0.6752 | 0.0065 | 0.3662 | -1.4165 | -0.9144 | -0.6651 | -0.4290 | 0.0260 | 3150 | 1.001 |
Here we find that mountain biking was estimated 0.6164 units more predictable than climbing. I guess this difference was expected to be larger. What sample sizes are associated with these activities?
sum(c(pafp$pa1 == ot[xx,'pa1'], pafp$pa2 == ot[xx,'pa1']))
#> [1] 25
sum(c(pafp$pa1 == ot[xx,'pa2'], pafp$pa2 == ot[xx,'pa2']))
#> [1] 13
Hm, the predictability 95% uncertainty interval for climbing is from -1.4165 to 0.026. So there is little information. We could continue our investigation by looking at which responses justified these predict estimates. However, let us move on and plot the marginal posterior distributions of the factor proportions. Typical jargon is factor loadings, but proportion is preferable since the scale is arbitrary and standardized.
pi <- parInterval(fit3, 'pathProp', dl$nameInfo$item, label='item')
pi <- pi[order(abs(pi$M)),]
ggplot(pi) +
geom_vline(xintercept=0, color="green") +
geom_jitter(data=parDistributionFor(fit3, pi),
aes(value, item), height = 0.35, alpha=.05) +
geom_segment(aes(y=item, yend=item, x=L, xend=U),
color="yellow", alpha=.5) +
geom_point(aes(x=M, y=item), color="red", size=1) +
theme(axis.title.y=element_blank())
Finally, we can plot the factor scores.
pick <- paste0('factor[',match(pa11, dl$nameInfo$pa),',1]')
pi <- parInterval(fit3, pick, pa11, label='activity')
pi <- pi[order(pi$M),]
ggplot(pi) +
geom_vline(xintercept=0, color="green") +
geom_jitter(data=parDistributionFor(fit3, pi, samples=200),
aes(value, activity), height = 0.35, alpha=.05) +
geom_segment(aes(y=activity, yend=activity, x=L, xend=U),
color="yellow", alpha=.5) +
geom_point(aes(x=M, y=activity), color="red", size=1) +
theme(axis.title.y=element_blank())
If this factor model is a good fit to the data then the residual item activity scores should be uncorrelated. Let us examine the residual item correlation matrix.
m <- matrix(apply(expand.grid(r=1:dl$NITEMS, c=1:dl$NITEMS), 1,
function(x) paste0("residualItemCor[",x['r'],",",x['c'],"]")),
dl$NITEMS, dl$NITEMS)
n <- matrix(apply(expand.grid(r=dl$nameInfo$item, c=dl$nameInfo$item), 1,
function(x) paste0(x['r'],":",x['c'])),
dl$NITEMS, dl$NITEMS)
pi <- parInterval(fit3, m[lower.tri(m)], n[lower.tri(n)], label='cor')
pi <- pi[abs(pi$M) > .08,]
pi <- pi[order(-abs(pi$M)),]
ggplot(pi) +
geom_vline(xintercept=0, color="green") +
geom_jitter(data=parDistributionFor(fit3, pi, samples=800),
aes(value, cor), height = 0.35, alpha=.05) +
geom_segment(aes(y=cor, yend=cor, x=L, xend=U),
color="yellow", alpha=.5) +
geom_point(aes(x=M, y=cor), color="red", size=1) +
theme(axis.title.y=element_blank())
Many survey measures are going to exhibit some faint correlations of this nature. Residual correlations can suggest items that could benefit from refinement. Item chatter is involved in relatively high residual correlations. Thought might be given to splitting this item or rewording it.
And there you have it. If you have not done so already, go find a dojo and commence study of martial arts!
If you read through the Stan models included with this package, you will find some variables prefixed with raw. These are special variables internal to the model. In particular, you should not try to evaluate the \(\widehat R\) or effective sample size of raw parameters. These parameters are best excluded from the sampling output.
Latent worths are estimated by theta parameters. theta is always standard normally distributed. Thresholds are parameterized as a proportion with distribution Beta(1.1, 2.0). The shape of this prior is fairly arbitrary; Uniform(0,1) also works in many cases. There is usually plenty of information available to estimate thresholds accurately. To convert from a proportion to threshold units, the following formula is employed: rawThreshold * (max(theta) - min(theta)).
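For example, with a made-up rawThreshold value and the theta vector from the earlier unidim fit,

rawThreshold <- 0.25                         # illustrative value, not an estimate
rawThreshold * (max(theta) - min(theta))     # threshold expressed in theta units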
This model is fairly robust; priors are unlikely to need tweaking.
The ‘unidim_adapt’ model has a varCorrection constant that is used to calibrate the scale. For all other models, the per-item scale must be supplied as data.
All models except ‘unidim_adapt’ estimate the item discrimination parameter alpha. A normal(1.749, alphaScalePrior) prior is used with alphaScalePrior set to 0.2 by default. alpha must be positive, so the normal distribution is truncated at zero. The distribution is centered at 1.749 because this allows the logistic to approximate the standard normal cumulative distribution (Savalei, 2006). We need to estimate alpha because scale is entered as a constant and we need to account for the stochastic uncertainty in the item’s ability to discriminate objects.
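A quick numerical check of this approximation,

x <- seq(-4, 4, 0.01)
max(abs(plogis(1.749 * x) - pnorm(x)))   # maximum discrepancy is small (under 0.02)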
The correlation matrix uses an lkj_corr(corLKJPrior) prior with corLKJPrior set to 2.5 by default. It may be necessary to increase the prior if divergences are observed.

factor scores are standard normally distributed. pathProp is shaped by two priors that act on different parts of the distribution. rawLoadings are distributed beta(propShape, propShape) with propShape set to 4.0 by default. rawLoadings has an indirect influence on pathProp. The quantity 2*rawLoadings-1 is used to scale the factor scores, but pathProp is computed based on Equation 3 of Gelman et al. (in press). pathProp is a signed proportion bounded between -1 and 1. pathProp is additionally constrained by the prior normal(logit(0.5 + pathProp/2), pathScalePrior), where pathScalePrior is set to 1.2 by default. This prevents extreme factor proportions (i.e. |pathProp| > .95). The purpose of propShape is to nudge rawLoadings toward zero. It may be necessary to increase propShape if divergences are observed.
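To see how propShape nudges rawLoadings toward 0.5 (which maps to a zero loading after the 2*rawLoadings-1 transformation), compare the prior densities (a quick illustration, not package code):

curve(dbeta(x, 4, 4), 0, 1, ylab="prior density")   # default propShape = 4
curve(dbeta(x, 8, 8), 0, 1, add=TRUE, lty=2)        # larger propShape, stronger shrinkage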
If you have more than one factor then Psi is available to estimate correlations among factors. The prior on entries of Psi is normal(logit(0.5 + Psi/2), psiScalePrior). It may be necessary to reduce psiScalePrior toward zero if factors are highly correlated.
The idea of putting a prior on pathProp
was inspired by
Gelman (2019, Aug 23).
Gelman, A. (2019, Aug 23). Yes, you can include prior information on quantities of interest, not just on parameters in your model [Blog post]. Retrieved from https://statmodeling.stat.columbia.edu/2019/08/23/yes-you-can-include-prior-information-on-quantities-of-interest-not-just-on-parameters-in-your-model/.
Gelman, A., Goodrich, B., Gabry, J., & Vehtari, A. (in press). R-squared for Bayesian regression models. The American Statistician.
Savalei, V. (2006). Logistic approximation to the normal: The KL rationale. Psychometrika, 71(4), 763–767.
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., … & Lillicrap, T. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140-1144.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2019). Rank-normalization, folding, and localization: An improved \(\widehat R\) for assessing convergence of MCMC. arXiv preprint arXiv:1903.08008.
sessionInfo()
#> R version 4.2.2 Patched (2022-11-10 r83330)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: Ubuntu 23.04
#>
#> Matrix products: default
#> BLAS: /usr/lib/x86_64-linux-gnu/atlas/libblas.so.3.10.3
#> LAPACK: /usr/lib/x86_64-linux-gnu/atlas/liblapack.so.3.10.3
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [9] LC_ADDRESS=C LC_TELEPHONE=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] Matrix_1.5-3 ggplot2_3.4.0 qgraph_1.9.5
#> [4] loo_2.6.0 pcFactorStan_1.5.4 Rcpp_1.0.9
#> [7] rstan_2.26.23 StanHeaders_2.26.28 knitr_1.42
#>
#> loaded via a namespace (and not attached):
#> [1] sass_0.4.5 jsonlite_1.8.4 QuickJSR_1.0.5 gtools_3.9.4
#> [5] bslib_0.4.2 RcppParallel_5.1.5 Formula_1.2-5 highr_0.10
#> [9] stats4_4.2.2 yaml_2.3.7 pbivnorm_0.6.0 pillar_1.9.0
#> [13] backports_1.4.1 lattice_0.20-45 glue_1.6.2 digest_0.6.31
#> [17] checkmate_2.2.0 colorspace_2.0-3 htmltools_0.5.5 plyr_1.8.8
#> [21] psych_2.3.6 pkgconfig_2.0.3 corpcor_1.6.10 mvtnorm_1.1-3
#> [25] scales_1.2.1 processx_3.8.0 glasso_1.11 jpeg_0.1-10
#> [29] fdrtool_1.2.17 tibble_3.2.1 htmlTable_2.4.1 farver_2.1.1
#> [33] generics_0.1.3 withr_2.5.0 cachem_1.0.7 pbapply_1.7-2
#> [37] nnet_7.3-18 cli_3.6.1 mnormt_2.1.1 magrittr_2.0.3
#> [41] crayon_1.5.2 evaluate_0.20 ps_1.7.3 fansi_1.0.4
#> [45] nlme_3.1-162 foreign_0.8-84 pkgbuild_1.4.0 tools_4.2.2
#> [49] data.table_1.14.8 prettyunits_1.1.1 lifecycle_1.0.3 matrixStats_1.0.0
#> [53] stringr_1.5.0 munsell_0.5.0 cluster_2.1.4 callr_3.7.3
#> [57] compiler_4.2.2 jquerylib_0.1.4 rlang_1.1.0 grid_4.2.2
#> [61] rstudioapi_0.14 htmlwidgets_1.6.2 igraph_1.3.5 labeling_0.4.2
#> [65] lavaan_0.6-13 base64enc_0.1-3 rmarkdown_2.21 gtable_0.3.1
#> [69] codetools_0.2-19 abind_1.4-5 inline_0.3.19 reshape2_1.4.4
#> [73] R6_2.5.1 gridExtra_2.3 dplyr_1.1.1 fastmap_1.1.1
#> [77] utf8_1.2.3 Hmisc_5.1-0 stringi_1.7.12 parallel_4.2.2
#> [81] vctrs_0.6.1 rpart_4.1.19 png_0.1-8 tidyselect_1.2.0
#> [85] xfun_0.38