Package check result: NOTE

Check: CRAN incoming feasibility, Result: NOTE
  Maintainer: ‘Ben Goodrich ’

  Found the following (possibly) invalid URLs:
    URL: https://sites.stat.columbia.edu/gelman/arm/
      From: inst/doc/binomial.html
            inst/doc/continuous.html
            inst/doc/count.html
      Status: Error
      Message: SSL peer certificate or SSH remote key was not OK
        [sites.stat.columbia.edu]: server verification failed: certificate signer not trusted.
        (CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none)

Changes to worse in reverse depends:

Package: BayesERtools
Check: tests
New result: ERROR
  Running ‘testthat.R’ [129s/129s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
    > # * https://testthat.r-lib.org/articles/special-files.html
    >
    > library(testthat)
    > library(BayesERtools)
    >
    > test_check("BayesERtools")
    Loading required namespace: projpred
    Loading required package: bayestestR
    [ FAIL 1 | WARN 0 | SKIP 0 | PASS 175 ]

    ══ Failed tests ════════════════════════════════════════════════════════════
    ── Failure ('test-loo_kfold.R:87:5'): loo ──────────────────────────────────
    loo_ermod_bin$estimates[, 1] (`actual`) not equal to c(elpd_loo = -38.528662, p_loo = 3.325979, looic = 77.057323) (`expected`).

        `actual`: -38.52895 3.32626 77.05789
      `expected`: -38.52866 3.32598 77.05732

    [ FAIL 1 | WARN 0 | SKIP 0 | PASS 175 ]
    Error: Test failures
    Execution halted
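The BayesERtools failure is a precision issue rather than a behavioural change: the loo estimates differ from the hard-coded reference values only around the fourth decimal place, i.e. within Monte Carlo noise. A minimal sketch of that kind of comparison, using the numbers from the log above (the tolerance of 1e-3 is an illustrative assumption, not a value taken from the BayesERtools tests):

  library(testthat)

  expected <- c(elpd_loo = -38.528662, p_loo = 3.325979, looic = 77.057323)
  actual   <- c(elpd_loo = -38.52895,  p_loo = 3.32626,  looic = 77.05789)

  ## Fails under testthat's default (tight) tolerance, as in the log above;
  ## passes once the tolerance is widened to the scale of the sampling noise.
  expect_equal(actual, expected, tolerance = 1e-3)
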
Package: performance
Check: tests
New result: ERROR
  Running ‘testthat.R’ [47s/24s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(performance)
    >
    > test_check("performance")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]

    ══ Skipped tests (40) ═══════════════════════════════════════════════════════
    • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3',
      'test-binned_residuals.R:137:3', 'test-binned_residuals.R:164:3',
      'test-check_dag.R:1:1', 'test-check_distribution.R:35:3',
      'test-check_itemscale.R:31:3', 'test-check_itemscale.R:103:3',
      'test-check_model.R:1:1', 'test-check_collinearity.R:181:3',
      'test-check_collinearity.R:218:3', 'test-check_predictions.R:2:1',
      'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3',
      'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3',
      'test-check_zeroinflation.R:112:3', 'test-compare_performance.R:21:3',
      'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-item_omega.R:10:3',
      'test-item_omega.R:31:3', 'test-mclogit.R:53:3',
      'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1',
      'test-check_outliers.R:110:3', 'test-model_performance.merMod.R:2:3',
      'test-model_performance.merMod.R:25:3', 'test-model_performance.psych.R:1:1',
      'test-model_performance.rma.R:33:3', 'test-performance_reliability.R:23:3',
      'test-pkg-ivreg.R:7:3', 'test-r2_nagelkerke.R:22:3', 'test-r2_nakagawa.R:20:1',
      'test-rmse.R:35:3', 'test-test_likelihoodratio.R:55:1'
    • On Linux (3): 'test-nestedLogit.R:1:1', 'test-r2_bayes.R:1:1', 'test-test_wald.R:1:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'

    ══ Failed tests ═════════════════════════════════════════════════════════════
    ── Failure ('test-check_outliers.R:304:3'): pareto which ────────────────────
    which(check_outliers(model, method = "pareto", threshold = list(pareto = 0.5))) (`actual`) not identical to 17L (`expected`).

      `actual`: 17 18
    `expected`: 17

    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]
    Error: Test failures
    Execution halted
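The performance failure comes down to one extra flagged observation: which() applied to the check_outliers() result now returns 17 and 18 where the test hard-codes 17L. A self-contained illustration of that comparison (the logical vectors below are made up; the real ones come from check_outliers() on a fitted model that the log does not show):

  library(testthat)

  old_flags <- c(rep(FALSE, 16), TRUE, FALSE, FALSE, FALSE)  # only observation 17 flagged
  new_flags <- c(rep(FALSE, 16), TRUE, TRUE,  FALSE, FALSE)  # observations 17 and 18 flagged

  expect_identical(which(old_flags), 17L)       # what the test asserts; passes
  try(expect_identical(which(new_flags), 17L))  # the new result; fails as in the log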