CRAN Package Check Results for Maintainer ‘Dominique Makowski <officialeasystats at gmail.com>’

Last updated on 2026-01-17 03:52:36 CET.

Package       ERROR  NOTE  OK
bayestestR                 13
insight           1     1  11
modelbased                 13
parameters        1     1  11
performance       1        12

Package bayestestR

Current CRAN status: OK: 13

Package insight

Current CRAN status: ERROR: 1, NOTE: 1, OK: 11

Version: 1.4.4
Check: tests
Result: ERROR
  Running ‘testthat.R’ [9m/10m]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(insight)
    > test_check("insight")
    Starting 2 test processes.
    Saving _problems/test-feis-10.R
    > test-find_transformation.R: boundary (singular) fit: see help('isSingular')
    > test-gamlss.R: GAMLSS-RS iteration 1: Global Deviance = 365.2328
    > test-gamlss.R: GAMLSS-RS iteration 2: Global Deviance = 365.1292
    > test-gamlss.R: GAMLSS-RS iteration 3: Global Deviance = 365.1269
    > test-gamlss.R: GAMLSS-RS iteration 4: Global Deviance = 365.1268
    > test-gamlss.R: GAMLSS-RS iteration 1: Global Deviance = 5779.746
    > test-gamlss.R: GAMLSS-RS iteration 2: Global Deviance = 5779.746
    > test-gamlss.R: GAMLSS-RS iteration 1: Global Deviance = 703.1164
    > test-gamlss.R: GAMLSS-RS iteration 2: Global Deviance = 703.1164
    > test-get_model.R: Loading required namespace: GPArotation
    > test-get_random.R: boundary (singular) fit: see help('isSingular')
    > test-glmmPQL.R: iteration 1
    > test-is_converged.R: boundary (singular) fit: see help('isSingular')
    > test-mmrm.R: mmrm() registered as emmeans extension
    > test-mmrm.R: mmrm() registered as car::Anova extension
    > test-model_info.R: boundary (singular) fit: see help('isSingular')
    > test-nestedLogit.R: list(work = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L,
    > test-nestedLogit.R: 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
    > test-nestedLogit.R: 0L, 1L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L,
    > test-nestedLogit.R: 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 1L,
    > test-nestedLogit.R: 1L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 1L,
    > test-nestedLogit.R: 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L,
    > test-nestedLogit.R: 1L, 1L, 0L, 0L, 1L, 0L, 0L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 0L, 0L
    > test-nestedLogit.R: ), full = c(1L, 1L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L,
    > test-nestedLogit.R: 1L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 1L, 0L, 0L,
    > test-nestedLogit.R: 0L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L))
    > test-polr.R: 
    > test-polr.R: Re-fitting to get Hessian
    > test-polr.R: 
    > test-polr.R: 
    > test-polr.R: Re-fitting to get Hessian
    > test-polr.R: 
    > test-survey_coxph.R: Stratified Independent Sampling design (with replacement)
    > test-survey_coxph.R: dpbc <- survey::svydesign(
    > test-survey_coxph.R:   id = ~1,
    > test-survey_coxph.R:   prob = ~randprob,
    > test-survey_coxph.R:   strata = ~edema,
    > test-survey_coxph.R:   data = subset(pbc, randomized)
    > test-survey_coxph.R: )
    > test-survey_coxph.R: Stratified Independent Sampling design (with replacement)
    > test-survey_coxph.R: dpbc <- survey::svydesign(
    > test-survey_coxph.R:   id = ~1,
    > test-survey_coxph.R:   prob = ~randprob,
    > test-survey_coxph.R:   strata = ~edema,
    > test-survey_coxph.R:   data = subset(pbc, randomized)
    > test-survey_coxph.R: )
    > test-survey_coxph.R: Stratified Independent Sampling design (with replacement)
    > test-survey_coxph.R: dpbc <- survey::svydesign(
    > test-survey_coxph.R:   id = ~1,
    > test-survey_coxph.R:   prob = ~randprob,
    > test-survey_coxph.R:   strata = ~edema,
    > test-survey_coxph.R:   data = subset(pbc, randomized)
    > test-survey_coxph.R: )
    [ FAIL 1 | WARN 2 | SKIP 96 | PASS 3423 ]
    ══ Skipped tests (96) ══════════════════════════════════════════════════════════
    • On CRAN (87): 'test-GLMMadaptive.R:2:1', 'test-averaging.R:1:1',
      'test-bias_correction.R:1:1', 'test-blmer.R:262:3', 'test-brms.R:1:1',
      'test-brms_aterms.R:1:1', 'test-brms_gr_random_effects.R:1:1',
      'test-brms_missing.R:1:1', 'test-brms_mm.R:1:1', 'test-brms_von_mises.R:1:1',
      'test-betareg.R:197:5', 'test-clean_names.R:109:3', 'test-clean_parameters.R:1:1',
      'test-coxme.R:1:1', 'test-clmm.R:170:3', 'test-cpglmm.R:152:3',
      'test-display.R:1:1', 'test-display.R:15:1', 'test-export_table.R:3:1',
      'test-export_table.R:7:1', 'test-export_table.R:134:3', 'test-export_table.R:164:3',
      'test-export_table.R:193:1', 'test-export_table.R:278:1', 'test-export_table.R:296:3',
      'test-export_table.R:328:3', 'test-export_table.R:385:1', 'test-export_table.R:406:3',
      'test-export_table.R:470:3', 'test-find_random.R:43:3', 'test-fixest.R:2:1',
      'test-format_table.R:2:1', 'test-format_table_ci.R:72:1', 'test-gam.R:2:1',
      'test-find_smooth.R:39:3', 'test-get_data.R:507:1', 'test-get_loglikelihood.R:143:3',
      'test-get_loglikelihood.R:223:3', 'test-get_predicted.R:2:1', 'test-get_priors.R:1:1',
      'test-get_varcov.R:43:3', 'test-get_varcov.R:57:3', 'test-get_datagrid.R:1068:3',
      'test-get_datagrid.R:1105:5', 'test-is_converged.R:47:1', 'test-iv_robust.R:120:3',
      'test-lavaan.R:1:1', 'test-lcmm.R:1:1', 'test-lme.R:28:3', 'test-lme.R:212:3',
      'test-glmmTMB.R:67:3', 'test-glmmTMB.R:767:3', 'test-glmmTMB.R:803:3',
      'test-glmmTMB.R:1142:3', 'test-marginaleffects.R:1:1', 'test-mgcv.R:1:1',
      'test-mipo.R:1:1', 'test-mlogit.R:1:1', 'test-model_info.R:106:3',
      'test-modelbased.R:1:1', 'test-mvrstanarm.R:1:1', 'test-null_model.R:85:3',
      'test-phylolm.R:1:1', 'test-print_parameters.R:1:1',
      'test-r2_nakagawa_bernoulli.R:1:1', 'test-r2_nakagawa_beta.R:1:1',
      'test-r2_nakagawa_binomial.R:1:1', 'test-r2_nakagawa_gamma.R:1:1',
      'test-r2_nakagawa_linear.R:1:1', 'test-r2_nakagawa_negbin.R:1:1',
      'test-r2_nakagawa_negbin_zi.R:1:1', 'test-r2_nakagawa_ordered_beta.R:1:1',
      'test-r2_nakagawa_poisson.R:1:1', 'test-r2_nakagawa_poisson_zi.R:1:1',
      'test-r2_nakagawa_truncated_poisson.R:1:1', 'test-r2_nakagawa_tweedie.R:1:1',
      'test-rlmer.R:276:3', 'test-rms.R:1:1', 'test-rqss.R:1:1', 'test-rstanarm.R:1:1',
      'test-sdmTMB.R:1:1', 'test-selection.R:2:1', 'test-spatial.R:2:1',
      'test-svylme.R:1:1', 'test-tidymodels.R:1:1', 'test-vgam.R:2:1',
      'test-weightit.R:1:1'
    • On Linux (3): 'test-BayesFactorBF.R:1:1', 'test-MCMCglmm.R:1:1',
      'test-get_data.R:161:3'
    • Package `logistf` is loaded and breaks `mmrm::mmrm()` (1): 'test-mmrm.R:4:1'
    • works interactively (2): 'test-coxph-panel.R:34:3', 'test-coxph.R:38:3'
    • {bigglm} is not installed (1): 'test-model_info.R:24:3'
    • {panelr} is not installed (2): 'test-panelr-asym.R:1:1', 'test-panelr.R:1:1'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-feis.R:5:1'): (code run outside of `test_that()`) ──────────────
    Error in `list_flatten(dots, fn = is_flattenable)`: `x` must be a list, not a list matrix.
    Backtrace:
        ▆
     1. ├─feisr::feis(...) at test-feis.R:5:1
     2. │ └─dplyr::bind_rows(rbind(dhat), .id = NULL)
     3. │   └─dplyr:::list_flatten(dots, fn = is_flattenable)
     4. │     └─vctrs::obj_check_list(x)
     5. └─vctrs:::stop_non_list_type(x, y, z)
     6.   └─cli::cli_abort(...)
     7.     └─rlang::abort(...)
    [ FAIL 1 | WARN 2 | SKIP 96 | PASS 3423 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
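The `test-feis.R` backtrace pinpoints the failure: `feisr` passes `rbind(dhat)` to `dplyr::bind_rows()`, and `rbind()` applied to a list yields a one-row matrix of mode list, which `vctrs::obj_check_list()` rejects. A minimal base-R sketch of that object shape (`dhat` here is an illustrative stand-in, not feisr's actual data):

```r
# Reproduce the "list matrix" shape that vctrs rejects.
# NOTE: `dhat` is a stand-in for the object built inside feisr::feis().
dhat <- list(a = 1:3, b = 4:6)

m <- rbind(dhat)  # rbind() on a list gives a 1-row matrix of mode "list"
is.list(m)        # TRUE  -- its storage mode is still list...
is.matrix(m)      # TRUE  -- ...but it also carries a dim attribute, which is
                  # what "`x` must be a list, not a list matrix" complains about
```

Since the error is raised inside `feisr::feis()` before insight's code runs, it appears to be an upstream incompatibility between feisr and current dplyr/vctrs input checking rather than a bug in insight itself.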

Version: 1.4.4
Check: package dependencies
Result: NOTE
  Package suggested but not available for checking: 'fungible'
Flavor: r-oldrel-windows-x86_64
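A NOTE of this kind only means a package listed under Suggests is missing on that flavor; CRAN policy requires suggested packages to be used conditionally, so checking can proceed without them. A generic guard of the usual form (an illustrative sketch, not an excerpt from insight's sources):

```r
# Conditional use of a suggested package, as CRAN policy requires.
# 'fungible' here stands in for any Suggests dependency.
if (requireNamespace("fungible", quietly = TRUE)) {
  # ...code path that actually needs fungible...
  result <- "fungible available"
} else {
  result <- "fungible not installed; feature skipped"
}
```

With such a guard in place, an unavailable suggested package downgrades the affected functionality gracefully instead of failing the check.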

Package modelbased

Current CRAN status: OK: 13

Package parameters

Current CRAN status: ERROR: 1, NOTE: 1, OK: 11

Version: 0.28.3
Check: package dependencies
Result: NOTE
  Package suggested but not available for checking: ‘M3C’
Flavor: r-oldrel-macos-arm64

Version: 0.28.3
Check: package dependencies
Result: NOTE
  Package suggested but not available for checking: 'EGAnet'
Flavor: r-oldrel-windows-x86_64

Version: 0.28.3
Check: tests
Result: ERROR
  Running 'testthat.R' [40s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > library(parameters)
    > library(testthat)
    > 
    > test_check("parameters")
    Starting 2 test processes.
    > test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: condition, talk
    > test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: condition, talk
    > test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: treatment, gender
    > test-model_parameters.aov_es_ci.R: Error: ! testthat subprocess exited in file 'test-model_parameters.aov_es_ci.R'.
    Caused by error:
    ! R session crashed with exit code -1073741819
    Backtrace:
        ▆
     1. └─testthat::test_check("parameters")
     2.   └─testthat::test_dir(...)
     3.     └─testthat:::test_files(...)
     4.       └─testthat:::test_files_parallel(...)
     5.         ├─withr::with_dir(...)
     6.         │ └─base::force(code)
     7.         ├─testthat::with_reporter(...)
     8.         │ └─base::tryCatch(...)
     9.         │   └─base (local) tryCatchList(expr, classes, parentenv, handlers)
    10.         │     └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
    11.         │       └─base (local) doTryCatch(return(expr), name, parentenv, handler)
    12.         └─testthat:::parallel_event_loop_chunky(queue, reporters, ".")
    13.           └─queue$poll(Inf)
    14.             └─base::lapply(...)
    15.               └─testthat (local) FUN(X[[i]], ...)
    16.                 └─private$handle_error(msg, i)
    17.                   └─cli::cli_abort(...)
    18.                     └─rlang::abort(...)
    Execution halted
Flavor: r-oldrel-windows-x86_64
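Exit code -1073741819 is the signed 32-bit rendering of the Windows status 0xC0000005 (STATUS_ACCESS_VIOLATION): the oldrel R session segfaulted at the native level rather than raising a catchable R error, which is why testthat reports a crashed subprocess instead of a failed expectation. The decoding can be checked in base R:

```r
# Interpret the Windows exit code as an unsigned 32-bit value.
u <- -1073741819 + 2^32   # 3221225477
# sprintf("%X") needs integer-valued input, and 3221225477 overflows R's
# 32-bit integers, so print the value in two 16-bit halves:
sprintf("0x%04X%04X", u %/% 2^16, u %% 2^16)
# "0xC0000005" -> STATUS_ACCESS_VIOLATION, i.e. a native crash
```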

Package performance

Current CRAN status: ERROR: 1, OK: 12

Version: 0.15.3
Check: tests
Result: ERROR
  Running ‘testthat.R’ [18s/10s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(performance)
    > 
    > test_check("performance")
    Starting 2 test processes.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
    > test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-check_collinearity.R: NOTE: 2 fixed-effect singletons were removed (2 observations).
    Saving _problems/test-check_collinearity-157.R
    Saving _problems/test-check_collinearity-185.R
    > test-check_overdispersion.R: Overdispersion detected.
    > test-check_overdispersion.R: Underdispersion detected.
    > test-check_outliers.R: No outliers were detected (p = 0.238).
    > test-glmmPQL.R: iteration 1
    > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
    > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
    > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
    > test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
    > test-performance_aic.R: Model was not fitted with REML, however, `estimator = "REML"`. Set
    > test-performance_aic.R: `estimator = "ML"` to obtain identical results as from `AIC()`.
    [ FAIL 2 | WARN 0 | SKIP 41 | PASS 443 ]
    ══ Skipped tests (41) ══════════════════════════════════════════════════════════
    • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3',
      'test-binned_residuals.R:163:3', 'test-binned_residuals.R:190:3',
      'test-check_convergence.R:1:1', 'test-check_dag.R:1:1',
      'test-check_distribution.R:1:1', 'test-check_itemscale.R:1:1',
      'test-check_itemscale.R:100:1', 'test-check_model.R:1:1',
      'test-check_collinearity.R:193:1', 'test-check_collinearity.R:226:1',
      'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3',
      'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3',
      'test-check_zeroinflation.R:112:3', 'test-check_outliers.R:115:3',
      'test-check_outliers.R:339:3', 'test-helpers.R:1:1', 'test-item_omega.R:1:1',
      'test-item_omega.R:31:3', 'test-compare_performance.R:1:1', 'test-mclogit.R:56:1',
      'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1',
      'test-model_performance.merMod.R:2:3', 'test-model_performance.merMod.R:37:3',
      'test-model_performance.psych.R:1:1', 'test-model_performance.rma.R:36:1',
      'test-performance_reliability.R:23:3', 'test-pkg-ivreg.R:1:1',
      'test-r2_bayes.R:39:3', 'test-r2_nagelkerke.R:35:3', 'test-rmse.R:39:3',
      'test-test_likelihoodratio.R:55:1'
    • On Mac (4): 'test-check_predictions.R:1:1', 'test-icc.R:1:1',
      'test-nestedLogit.R:1:1', 'test-r2_nakagawa.R:1:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:300:3'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-check_collinearity.R:157:3'): check_collinearity | afex ──────
    Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning.
    ── Failure ('test-check_collinearity.R:185:3'): check_collinearity | afex ──────
    Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning.
    [ FAIL 2 | WARN 0 | SKIP 41 | PASS 443 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-release-macos-arm64
