Package: mlexperiments
Check: tests
New result: ERROR
  Running ‘testthat.R’ [51s/135s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/tests.html
  > # * https://testthat.r-lib.org/reference/test_package.html#special-files
  >
  > Sys.setenv("OMP_THREAD_LIMIT" = 2)
  > Sys.setenv("Ncpu" = 2)
  >
  > library(testthat)
  > library(mlexperiments)
  >
  > test_check("mlexperiments")
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold4
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold5
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold1
  CV fold: Fold2
  CV fold: Fold3
  CV fold: Fold4
  CV fold: Fold5
  Testing for identical folds in 2 and 1.
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold4
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold5
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold4
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold5
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold4
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold5
  Parameter 'ncores' is ignored for learner 'LearnerGlm'.
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold4
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold5
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold1
  CV fold: Fold2
  CV fold: Fold3
  Registering parallel backend using 2 cores.
  Running initial scoring function 3 times in 2 thread(s)...  4.727 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.786 seconds
    3) Running FUN 2 times in 2 thread(s)...  1.966 seconds
  Registering parallel backend using 2 cores.
  Running initial scoring function 3 times in 2 thread(s)...  4.838 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.706 seconds
    3) Running FUN 2 times in 2 thread(s)...  1.913 seconds
  Registering parallel backend using 2 cores.
  Running initial scoring function 4 times in 2 thread(s)...  4.847 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.436 seconds
    3) Running FUN 2 times in 2 thread(s)...  1.993 seconds
  CV fold: Fold1
  Registering parallel backend using 2 cores.
  Running initial scoring function 3 times in 2 thread(s)...  2.777 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.604 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.979 seconds
  CV fold: Fold2
  Registering parallel backend using 2 cores.
  Running initial scoring function 3 times in 2 thread(s)...  2.647 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.741 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.892 seconds
  CV fold: Fold3
  Registering parallel backend using 2 cores.
  Running initial scoring function 3 times in 2 thread(s)...  2.793 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.819 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.996 seconds
  CV fold: Fold1
  CV fold: Fold2
  CV fold: Fold3
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold1
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold2
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold3
  Parameter 'ncores' is ignored for learner 'LearnerLm'.
  CV fold: Fold1
  CV fold: Fold2
  CV fold: Fold3
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  9.651 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.407 seconds
    3) Running FUN 2 times in 2 thread(s)...  1.85 seconds
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  CV fold: Fold1
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  5.808 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.418 seconds
    3) Running FUN 2 times in 2 thread(s)...  1.058 seconds
  CV fold: Fold2
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  5.506 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.449 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.992 seconds
  CV fold: Fold3
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  5.695 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.473 seconds
    3) Running FUN 2 times in 2 thread(s)...  1.074 seconds
  CV fold: Fold1
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  CV fold: Fold2
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  CV fold: Fold3
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  Classification: using 'classification error rate' as optimization metric.
  CV fold: Fold1
  CV fold: Fold2
  CV fold: Fold3
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  1.736 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.473 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.237 seconds
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  CV fold: Fold1
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  1.881 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.419 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.227 seconds
  CV fold: Fold2
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  1.717 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.492 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.239 seconds
  CV fold: Fold3
  Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
  ... reducing initialization grid to 10 rows.
  Registering parallel backend using 2 cores.
  Running initial scoring function 10 times in 2 thread(s)...  1.639 seconds
  Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search...  0.4 seconds
    3) Running FUN 2 times in 2 thread(s)...  0.208 seconds
  CV fold: Fold1
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  CV fold: Fold2
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  CV fold: Fold3
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  Regression: using 'mean squared error' as optimization metric.
  [ FAIL 1 | WARN 0 | SKIP 1 | PASS 77 ]

  ══ Skipped tests (1) ═══════════════════════════════════════════════════════
  • On CRAN (1): 'test-lints.R:10:5'

  ══ Failed tests ════════════════════════════════════════════════════════════
  ── Error ('test-glm_predictions.R:209:5'): test predictions, regression - lm ───
  Error in `get(x, envir = ns, inherits = FALSE)`: object 'rsq' not found
  Backtrace:
      ▆
   1. └─mlexperiments::performance(...) at test-glm_predictions.R:209:5
   2.   └─mlexperiments:::.metric_from_char(append_metrics)
   3.     └─base::sapply(...)
   4.       └─base::lapply(X = X, FUN = FUN, ...)
   5.         └─mlexperiments (local) FUN(X[[i]], ...)
   6.           └─mlexperiments::metric(x)
   7.             ├─base::stopifnot(...)
   8.             └─utils::getFromNamespace(x = name, ns = "mlr3measures")
   9.               └─base::get(x, envir = ns, inherits = FALSE)

  [ FAIL 1 | WARN 0 | SKIP 1 | PASS 77 ]
  Error: Test failures
  Execution halted
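For context on the failure: the backtrace shows that mlexperiments::performance() resolves metric names given as character strings by fetching the function of the same name from the mlr3measures namespace via utils::getFromNamespace(), and the lookup for 'rsq' errors because no object of that name exists in the mlr3measures version installed on the check machine. Below is a minimal sketch of that lookup pattern with an explicit guard; the helper name lookup_metric is hypothetical and not part of either package.

# Hypothetical helper illustrating the lookup seen in the backtrace:
# resolve a metric function by name from the mlr3measures namespace,
# failing with an informative message when the metric is not available
# in the installed mlr3measures version (as happened for "rsq" here).
lookup_metric <- function(name) {
  ns <- asNamespace("mlr3measures")
  if (!exists(name, envir = ns, inherits = FALSE)) {
    stop(sprintf("Metric '%s' not found in 'mlr3measures'.", name))
  }
  utils::getFromNamespace(x = name, ns = "mlr3measures")
}

# "mse" resolves to the mean squared error function; "rsq" would trigger
# the guard above if the installed mlr3measures version does not provide it.
mse_fun <- lookup_metric("mse")
mse_fun(truth = c(1, 2, 3), response = c(1.1, 1.9, 3.2))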
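Separately, the repeated "... reducing initialization grid to 10 rows." messages in the log refer to the mlexperiments.bayesian.max_init option they name. A minimal sketch, assuming the option simply caps the number of rows of the Bayesian-optimization initialization grid (the value 25 is arbitrary):

# Assumption: this option caps the size of the initialization grid used
# for Bayesian hyperparameter optimization, as the log message suggests.
options(mlexperiments.bayesian.max_init = 25)
getOption("mlexperiments.bayesian.max_init")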