CRAN Package Check Results for Maintainer ‘Dominique Makowski <dom.makowski at gmail.com>’

Last updated on 2025-03-11 14:55:48 CET.

Package      ERROR   NOTE   OK
bayestestR   2       —      13
modelbased   7       —      8
psycho       —       12     3

Package bayestestR

Current CRAN status: ERROR: 2, OK: 13

Version: 0.15.2
Check: tests
Result: ERROR
  Running ‘testthat.R’ [14m/20m]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(bayestestR)
    >
    > test_check("bayestestR")
    Starting 2 test processes
    [ FAIL 4 | WARN 16 | SKIP 79 | PASS 220 ]

    ══ Skipped tests (79) ══════════════════════════════════════════════════════════
    • On CRAN (73): 'test-bayesfactor_parameters.R:56:3', 'test-bayesfactor_parameters.R:94:3', 'test-bayesfactor_restricted.R:36:3', 'test-bayesian_as_frequentist.R:1:1', 'test-blavaan.R:2:3', 'test-brms.R:2:3', 'test-brms.R:38:3', 'test-brms.R:57:3', 'test-brms.R:81:3', 'test-check_prior.R:4:3', 'test-check_prior.R:66:3', 'test-check_prior.R:104:3', 'test-ci.R:33:3', 'test-ci.R:50:3', 'test-contr.R:2:3', 'test-contr.R:25:3', 'test-contr.R:59:3', 'test-data.frame-with-rvar.R:3:3', 'test-data.frame-with-rvar.R:74:3', 'test-describe_posterior.R:3:3', 'test-describe_posterior.R:249:3', 'test-describe_posterior.R:266:3', 'test-describe_posterior.R:284:3', 'test-describe_posterior.R:321:3', 'test-describe_prior.R:2:3', 'test-bayesfactor_models.R:58:3', 'test-bayesfactor_models.R:85:3', 'test-bayesfactor_models.R:122:3', 'test-effective_sample.R:2:3', 'test-equivalence_test.R:1:1', 'test-hdi.R:3:3', 'test-hdi.R:25:3', 'test-hdi.R:43:3', 'test-hdi.R:61:3', 'test-map_estimate.R:23:3', 'test-map_estimate.R:36:3', 'test-marginaleffects.R:1:1', 'test-p_direction.R:38:3', 'test-p_direction.R:70:3', 'test-p_map.R:24:3', 'test-p_map.R:39:3', 'test-p_rope.R:2:3', 'test-p_significance.R:38:3', 'test-p_significance.R:50:3', 'test-p_significance.R:84:3', 'test-point_estimate.R:2:3', 'test-point_estimate.R:18:3', 'test-posterior.R:2:3', 'test-posterior.R:22:3', 'test-posterior.R:42:3', 'test-posterior.R:62:3', 'test-posterior.R:82:3', 'test-posterior.R:103:3', 'test-print.R:2:3', 'test-emmGrid.R:132:3', 'test-emmGrid.R:142:3', 'test-emmGrid.R:172:3', 'test-emmGrid.R:188:3', 'test-emmGrid.R:224:3', 'test-emmGrid.R:234:3', 'test-rope.R:2:3', 'test-rope.R:40:3', 'test-rope.R:83:3', 'test-rstanarm.R:2:3', 'test-rstanarm.R:50:3', 'test-rstanarm.R:69:3', 'test-rstanarm.R:89:3', 'test-rstanarm.R:109:3', 'test-si.R:33:3', 'test-spi.R:3:3', 'test-spi.R:25:3', 'test-spi.R:42:3', 'test-spi.R:60:3'
    • On Linux (5): 'test-BFBayesFactor.R:1:1', 'test-ci.R:2:3', 'test-describe_posterior.R:111:3', 'test-rope.R:98:3', 'test-weighted_posteriors.R:1:1'
    • TODO: check hard-coded values (1): 'test-check_prior.R:27:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-bayesfactor_models.R:35:3'): bayesfactor_models BIC ──────────
    update(BFM2, reference = 1)$log_BF (`actual`) not equal to c(0, -2.8, -6.2, -57.4) (`expected`).
      `actual`:   0.00 -9.82 -12.10 -57.18
      `expected`: 0.00 -2.80  -6.20 -57.40
    ── Failure ('test-bayesfactor_models.R:208:3'): bayesfactor_inclusion | LMM ────
    bfinc_all$log_BF (`actual`) not equal to c(NaN, 57.651, -2.352, -4.064, -4.788) (`expected`).
      `actual`:   NaN 55.8 -10.0 -10.9 -10.5
      `expected`: NaN 57.7  -2.4  -4.1  -4.8
    ── Failure ('test-bayesfactor_models.R:213:3'): bayesfactor_inclusion | LMM ────
    bfinc_matched$p_posterior (`actual`) not equal to c(1, 0.875, 0.125, 0.009, 0.002) (`expected`).
      `actual`:   1.00 1.00 0.00 0.00 0.00
      `expected`: 1.00 0.88 0.12 0.01 0.00
    ── Failure ('test-bayesfactor_models.R:214:3'): bayesfactor_inclusion | LMM ────
    bfinc_matched$log_BF (`actual`) not equal to c(NaN, 58.904, -3.045, -3.573, -1.493) (`expected`).
      `actual`:   NaN 57.2 -10.7 -11.0  0.2
      `expected`: NaN 58.9  -3.0  -3.6 -1.5

    [ FAIL 4 | WARN 16 | SKIP 79 | PASS 220 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.15.2
Check: tests
Result: ERROR
  Running ‘testthat.R’ [736s/383s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(bayestestR)
    >
    > test_check("bayestestR")
    Starting 2 test processes
    [ FAIL 4 | WARN 16 | SKIP 79 | PASS 220 ]

    ══ Skipped tests (79) ══════════════════════════════════════════════════════════
    • On CRAN (73): 'test-bayesfactor_parameters.R:56:3', 'test-bayesfactor_parameters.R:94:3', 'test-bayesfactor_restricted.R:36:3', 'test-bayesian_as_frequentist.R:1:1', 'test-blavaan.R:2:3', 'test-brms.R:2:3', 'test-brms.R:38:3', 'test-brms.R:57:3', 'test-brms.R:81:3', 'test-check_prior.R:4:3', 'test-check_prior.R:66:3', 'test-check_prior.R:104:3', 'test-ci.R:33:3', 'test-ci.R:50:3', 'test-contr.R:2:3', 'test-contr.R:25:3', 'test-contr.R:59:3', 'test-data.frame-with-rvar.R:3:3', 'test-data.frame-with-rvar.R:74:3', 'test-describe_posterior.R:3:3', 'test-describe_posterior.R:249:3', 'test-describe_posterior.R:266:3', 'test-describe_posterior.R:284:3', 'test-describe_posterior.R:321:3', 'test-describe_prior.R:2:3', 'test-bayesfactor_models.R:58:3', 'test-bayesfactor_models.R:85:3', 'test-bayesfactor_models.R:122:3', 'test-effective_sample.R:2:3', 'test-equivalence_test.R:1:1', 'test-hdi.R:3:3', 'test-hdi.R:25:3', 'test-hdi.R:43:3', 'test-hdi.R:61:3', 'test-map_estimate.R:23:3', 'test-map_estimate.R:36:3', 'test-marginaleffects.R:1:1', 'test-p_direction.R:38:3', 'test-p_direction.R:70:3', 'test-p_map.R:24:3', 'test-p_map.R:39:3', 'test-p_rope.R:2:3', 'test-p_significance.R:38:3', 'test-p_significance.R:50:3', 'test-p_significance.R:84:3', 'test-point_estimate.R:2:3', 'test-point_estimate.R:18:3', 'test-posterior.R:2:3', 'test-posterior.R:22:3', 'test-posterior.R:42:3', 'test-posterior.R:62:3', 'test-posterior.R:82:3', 'test-posterior.R:103:3', 'test-print.R:2:3', 'test-emmGrid.R:132:3', 'test-emmGrid.R:142:3', 'test-emmGrid.R:172:3', 'test-emmGrid.R:188:3', 'test-emmGrid.R:224:3', 'test-emmGrid.R:234:3', 'test-rope.R:2:3', 'test-rope.R:40:3', 'test-rope.R:83:3', 'test-rstanarm.R:2:3', 'test-rstanarm.R:50:3', 'test-rstanarm.R:69:3', 'test-rstanarm.R:89:3', 'test-rstanarm.R:109:3', 'test-si.R:33:3', 'test-spi.R:3:3', 'test-spi.R:25:3', 'test-spi.R:42:3', 'test-spi.R:60:3'
    • On Linux (5): 'test-BFBayesFactor.R:1:1', 'test-ci.R:2:3', 'test-describe_posterior.R:111:3', 'test-rope.R:98:3', 'test-weighted_posteriors.R:1:1'
    • TODO: check hard-coded values (1): 'test-check_prior.R:27:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-bayesfactor_models.R:35:3'): bayesfactor_models BIC ──────────
    update(BFM2, reference = 1)$log_BF (`actual`) not equal to c(0, -2.8, -6.2, -57.4) (`expected`).
      `actual`:   0.00 -9.82 -12.10 -57.18
      `expected`: 0.00 -2.80  -6.20 -57.40
    ── Failure ('test-bayesfactor_models.R:208:3'): bayesfactor_inclusion | LMM ────
    bfinc_all$log_BF (`actual`) not equal to c(NaN, 57.651, -2.352, -4.064, -4.788) (`expected`).
      `actual`:   NaN 55.8 -10.0 -10.9 -10.5
      `expected`: NaN 57.7  -2.4  -4.1  -4.8
    ── Failure ('test-bayesfactor_models.R:213:3'): bayesfactor_inclusion | LMM ────
    bfinc_matched$p_posterior (`actual`) not equal to c(1, 0.875, 0.125, 0.009, 0.002) (`expected`).
      `actual`:   1.00 1.00 0.00 0.00 0.00
      `expected`: 1.00 0.88 0.12 0.01 0.00
    ── Failure ('test-bayesfactor_models.R:214:3'): bayesfactor_inclusion | LMM ────
    bfinc_matched$log_BF (`actual`) not equal to c(NaN, 58.904, -3.045, -3.573, -1.493) (`expected`).
      `actual`:   NaN 57.2 -10.7 -11.0  0.2
      `expected`: NaN 58.9  -3.0  -3.6 -1.5

    [ FAIL 4 | WARN 16 | SKIP 79 | PASS 220 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
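All four failures above are numeric mismatches against hard-coded expected values (the log itself carries a TODO bullet, "check hard-coded values"). As a minimal sketch of the mechanism, not bayestestR's actual test code: testthat's `expect_equal()` compares numbers with a small default tolerance, and a test author can widen that tolerance where minor numerical drift is acceptable. The vectors below are hypothetical; note that a tolerance only absorbs small drift, whereas differences as large as -9.82 vs. -2.80 point to a genuine upstream change that requires updated reference values.

```r
library(testthat)

# Hypothetical log-BF values that drifted slightly from the references:
log_bf_actual   <- c(0.00, -2.81, -6.19, -57.38)
log_bf_expected <- c(0.00, -2.80, -6.20, -57.40)

# expect_equal() uses a tight default tolerance (~1.5e-8), so the
# comparison above would fail without an explicit tolerance:
# expect_equal(log_bf_actual, log_bf_expected)

# Passes once a tolerance appropriate to the quantity is supplied:
expect_equal(log_bf_actual, log_bf_expected, tolerance = 0.01)
```

`expect_identical()`, by contrast, takes no tolerance at all, which is why exact integer expectations (as in the modelbased failures below) break outright when upstream defaults change.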

Package modelbased

Current CRAN status: ERROR: 7, OK: 8

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [55s/28s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
      `actual`:    3  5
      `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [35s/19s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
      `actual`:    3  5
      `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.10.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [106s/155s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 1 | WARN 12 | SKIP 39 | PASS 189 ]

    ══ Skipped tests (39) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:58:3'
    • On CRAN (31): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1', 'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1', 'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_dotargs.R:1:1', 'test-estimate_means_marginalization.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:97:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-keep_iterations.R:1:1', 'test-mice.R:1:1', 'test-ordinal.R:1:1', 'test-plot-grouplevel.R:1:1', 'test-predict-dpar.R:1:1', 'test-standardize.R:1:1', 'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:16:3', 'test-vcov.R:1:1', 'test-zeroinfl.R:1:1'
    • On Linux (6): 'test-plot-facet.R:1:1', 'test-plot-flexible_numeric.R:1:1', 'test-plot-ordinal.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1', 'test-scoping_issues.R:1:1'
    • utils::packageVersion("insight") <= "1.1.0" is TRUE (1): 'test-estimate_grouplevel.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_predicted.R:206:3'): estimate_expectation - predicting RE works ──
    out$Predicted (`actual`) not equal to c(...) (`expected`).
      `actual`:   12.2617 12.0693 11.1560 11.6318 11.1657 10.3811 11.1074 11.0749
      `expected`: 12.2064 12.0631 11.2071 11.6286 11.2327 10.5839 11.2085 11.1229

    [ FAIL 1 | WARN 12 | SKIP 39 | PASS 189 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.10.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [95s/72s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 1 | WARN 11 | SKIP 39 | PASS 189 ]

    ══ Skipped tests (39) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:58:3'
    • On CRAN (31): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1', 'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1', 'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_dotargs.R:1:1', 'test-estimate_means_marginalization.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:97:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-keep_iterations.R:1:1', 'test-mice.R:1:1', 'test-ordinal.R:1:1', 'test-plot-grouplevel.R:1:1', 'test-predict-dpar.R:1:1', 'test-standardize.R:1:1', 'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:16:3', 'test-vcov.R:1:1', 'test-zeroinfl.R:1:1'
    • On Linux (6): 'test-plot-facet.R:1:1', 'test-plot-flexible_numeric.R:1:1', 'test-plot-ordinal.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1', 'test-scoping_issues.R:1:1'
    • utils::packageVersion("insight") <= "1.1.0" is TRUE (1): 'test-estimate_grouplevel.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_predicted.R:206:3'): estimate_expectation - predicting RE works ──
    out$Predicted (`actual`) not equal to c(...) (`expected`).
      `actual`:   12.2617 12.0693 11.1560 11.6318 11.1657 10.3811 11.1074 11.0749
      `expected`: 12.2064 12.0631 11.2071 11.6286 11.2327 10.5839 11.2085 11.1229

    [ FAIL 1 | WARN 11 | SKIP 39 | PASS 189 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 0.9.0
Check: tests
Result: ERROR
  Running 'testthat.R' [28s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]

    ══ Skipped tests (23) ══════════════════════════════════════════════════════════
    • On CRAN (23): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-plot-facet.R:7:1', 'test-plot.R:7:1', 'test-predict-dpar.R:1:1', 'test-print.R:14:3', 'test-print.R:26:3', 'test-print.R:37:3', 'test-print.R:50:3', 'test-print.R:65:3', 'test-print.R:78:5', 'test-print.R:92:3', 'test-print.R:106:3', 'test-vcov.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
      `actual`:    3  5
      `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-windows-x86_64

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [49s/26s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
      `actual`:    3  5
      `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-patched-linux-x86_64

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [47s/25s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1', 'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1', 'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1', 'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1', 'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
      `actual`:    3  5
      `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
      `actual`:    3.0  6.0
      `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-release-linux-x86_64
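The 0.9.0 failures are all dimension mismatches: a reference grid expected to have 10 rows comes back with 3, consistent with a change in the default grid length of whatever builds the grid upstream. As an illustrative sketch of the pattern (the function name `make_grid` and the length default are hypothetical, not modelbased's actual code), pinning the length explicitly in the test makes the expectation robust to such default changes:

```r
# Hypothetical grid builder whose default length might change upstream:
make_grid <- function(x, length_out = 10) {
  data.frame(x = seq(min(x), max(x), length.out = length_out))
}

# A test that hard-codes the default-driven dimension breaks if the
# default moves from 10 to 3:
grid <- make_grid(mtcars$wt)
stopifnot(identical(dim(grid), c(10L, 1L)))

# Requesting the length explicitly keeps the expectation stable:
grid3 <- make_grid(mtcars$wt, length_out = 3)
stopifnot(identical(dim(grid3), c(3L, 1L)))
```

Note also the skip bullet `utils::packageVersion("insight") <= "1.1.0"` in the 0.10.0 logs above: version-conditional skips are the usual way to decouple tests from an upstream dependency while both versions are in circulation.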

Package psycho

Current CRAN status: NOTE: 12, OK: 3

Version: 0.6.1
Check: Rd files
Result: NOTE
  checkRd: (-1) dprime.Rd:37: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:38: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:39: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:40: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:41: Lost braces in \itemize; \value handles \item{}{} directly
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc, r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc, r-devel-macos-arm64, r-devel-macos-x86_64, r-patched-linux-x86_64, r-release-linux-x86_64, r-release-macos-arm64, r-release-macos-x86_64

Version: 0.6.1
Flags: --no-vignettes
Check: Rd files
Result: NOTE
  checkRd: (-1) dprime.Rd:37: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:38: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:39: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:40: Lost braces in \itemize; \value handles \item{}{} directly
  checkRd: (-1) dprime.Rd:41: Lost braces in \itemize; \value handles \item{}{} directly
Flavors: r-devel-windows-x86_64, r-release-windows-x86_64
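This checkRd NOTE arises when the two-argument `\item{term}{description}` form is used inside `\itemize`: within `\itemize`, `\item` takes no arguments, so the Rd parser drops ("loses") the braces. The `\value` section already handles `\item{}{}` directly, so the usual fix is to remove the `\itemize` wrapper. A minimal illustration (the item names are made up, not dprime.Rd's actual content):

```
% Problematic: inside \itemize, \item is braceless, so these braces
% are silently dropped and checkRd reports "Lost braces in \itemize".
\value{
  \itemize{
    \item{dprime}{The d' sensitivity index.}
    \item{beta}{The response bias.}
  }
}

% Fixed: \value handles \item{}{} directly; drop the \itemize wrapper.
\value{
  \item{dprime}{The d' sensitivity index.}
  \item{beta}{The response bias.}
}
```

Packages generated with roxygen2 get the same effect by using `\describe{}` semantics in the `@return` field rather than a markdown bullet list of `term: description` pairs.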
