Package check result: OK

Changes to worse in reverse depends:

Package: finetune
Check: tests
New result: ERROR
  Running ‘spelling.R’ [0s/0s]
  Running ‘testthat.R’ [31s/31s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > 
  > suppressPackageStartupMessages(library(finetune))
  > 
  > # CRAN wants packages to be able to be check without the Suggests dependencies
  > if (rlang::is_installed(c("modeldata", "lme4", "testthat"))) {
  +   suppressPackageStartupMessages(library(testthat))
  +   test_check("finetune")
  + }
  Loading required package: scales
  
  Attaching package: 'scales'
  
  The following object is masked from 'package:purrr':
  
      discard
  
  
  Attaching package: 'dials'
  
  The following object is masked from 'package:rpart':
  
      prune
  
  [ FAIL 2 | WARN 0 | SKIP 20 | PASS 95 ]
  
  ══ Skipped tests (20) ══════════════════════════════════════════════════════════
  • On CRAN (20): 'test-anova-filter.R:130:3', 'test-anova-overall.R:3:3',
    'test-anova-overall.R:20:3', 'test-anova-overall.R:37:3',
    'test-anova-overall.R:57:3', 'test-race-control.R:17:3',
    'test-race-control.R:38:3', 'test-sa-control.R:19:3',
    'test-sa-control.R:48:3', 'test-sa-misc.R:5:3', 'test-sa-overall.R:2:3',
    'test-sa-overall.R:19:3', 'test-sa-overall.R:40:3', 'test-sa-overall.R:90:3',
    'test-sa-overall.R:127:3', 'test-win-loss-filter.R:2:3',
    'test-win-loss-overall.R:3:3', 'test-win-loss-overall.R:22:3',
    'test-win-loss-overall.R:39:3', 'test-win-loss-overall.R:56:3'
  
  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Error ('test-race-s3.R:31:3'): racing S3 methods ────────────────────────────
  Error in `collect_metrics(anova_race, all_configs = TRUE)`: `...` must be empty.
  x Problematic argument:
  * all_configs = all_configs
  Backtrace:
      ▆
   1. ├─testthat::expect_equal(...) at test-race-s3.R:31:3
   2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
   3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
   4. ├─base::nrow(collect_metrics(anova_race, all_configs = TRUE))
   5. ├─tune::collect_metrics(anova_race, all_configs = TRUE)
   6. ├─finetune:::collect_metrics.tune_race(anova_race, all_configs = TRUE)
   7. ├─base::NextMethod(summarize = summarize, ...)
   8. └─tune:::collect_metrics.tune_results(anova_race, all_configs = TRUE, summarize = ``)
   9.   └─rlang::check_dots_empty()
  10.     └─rlang:::action_dots(...)
  11.       ├─base (local) try_dots(...)
  12.       └─rlang (local) action(...)
  ── Error ('test-sa-decision.R:16:5'): simulated annealing decisions ────────────
  Error in `tune:::new_tune_results(., parameters = cart_param, outcomes = cart_outcomes, metrics = cart_metrics, rset_info = cart_rset_info)`: argument "eval_time" is missing, with no default
  Backtrace:
      ▆
  1. ├─cart_search %>% filter(.iter == iter_val) %>% ... at test-sa-decision.R:16:5
  2. └─tune:::new_tune_results(...)
  3.   └─tune:::new_bare_tibble(...)
  4.     └─tibble::new_tibble(x, nrow = nrow(x), ..., class = class)
  5.       └─rlang::pairlist2(...)
  
  [ FAIL 2 | WARN 0 | SKIP 20 | PASS 95 ]
  Error: Test failures
  Execution halted
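Both failures point at interface changes in tune's internals rather than bugs in finetune's tests. A minimal sketch of the two mechanisms, using hypothetical stand-in generics and functions (`collect`, `new_results`), not the real tune API:

```r
library(rlang)

# --- Mechanism 1: dots forwarded through NextMethod() ---
# Stand-ins for a parent/child S3 method pair like
# tune:::collect_metrics.tune_results / finetune:::collect_metrics.tune_race.
collect <- function(x, ...) UseMethod("collect")

collect.parent <- function(x, ..., summarize = TRUE) {
  rlang::check_dots_empty()  # parent method now rejects any leftover dots
  "parent result"
}

collect.child <- function(x, all_configs = FALSE, ...) {
  # NextMethod() re-dispatches with the original arguments, so the named
  # `all_configs` lands in the parent's `...` and trips check_dots_empty().
  NextMethod()
}

obj <- structure(list(), class = c("child", "parent"))
try(collect(obj, all_configs = TRUE))  # Error: `...` must be empty.

# --- Mechanism 2: a newly required argument with no default ---
# Stand-in for a constructor gaining a mandatory `eval_time` argument.
new_results <- function(x, eval_time) {
  force(eval_time)  # evaluating the argument errors when it was not supplied
  list(x = x, eval_time = eval_time)
}

try(new_results(list()))  # Error: argument "eval_time" is missing, with no default
```

The usual fixes are for the child method to pass known arguments by name to the parent (rather than through `...`) and for callers of the constructor to supply the new argument explicitly.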