R Under development (unstable) (2025-03-03 r87871 ucrt) -- "Unsuffered Consequences"
Copyright (C) 2025 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> 
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
> 
> library(testthat)
> library(mlexperiments)
> 
> test_check("mlexperiments")
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold4
CV fold: Fold5
Testing for identical folds in 2 and 1.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 3 times in 2 thread(s)...  5.07 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.55 seconds
  3) Running FUN 2 times in 2 thread(s)...  2.32 seconds
Registering parallel backend using 2 cores.
Running initial scoring function 3 times in 2 thread(s)...  5.27 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.55 seconds
  3) Running FUN 2 times in 2 thread(s)...  2.35 seconds
Registering parallel backend using 2 cores.
Running initial scoring function 4 times in 2 thread(s)...  5.75 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.35 seconds
  3) Running FUN 2 times in 2 thread(s)...  2.46 seconds
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 3 times in 2 thread(s)...  2.53 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.43 seconds
  3) Running FUN 2 times in 2 thread(s)...  0.88 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 3 times in 2 thread(s)...  2.58 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.58 seconds
  3) Running FUN 2 times in 2 thread(s)...  0.85 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 3 times in 2 thread(s)...  2.55 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.61 seconds
  3) Running FUN 2 times in 2 thread(s)...  0.81 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  12.36 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.31 seconds
  3) Running FUN 2 times in 2 thread(s)...  2.31 seconds
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  6.46 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.29 seconds
  3) Running FUN 2 times in 2 thread(s)...  1.12 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  6.05 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.32 seconds
  3) Running FUN 2 times in 2 thread(s)...  1.08 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  6.08 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.31 seconds
  3) Running FUN 2 times in 2 thread(s)...  1.14 seconds
CV fold: Fold1
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
CV fold: Fold2
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
CV fold: Fold3
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
Classification: using 'classification error rate' as optimization metric.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  1.42 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.3 seconds
  3) Running FUN 2 times in 2 thread(s)...  0.12 seconds
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  1.5 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.31 seconds
  3) Running FUN 2 times in 2 thread(s)...  0.11 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  1.39 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.3 seconds
  3) Running FUN 2 times in 2 thread(s)...  0.13 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)...  1.31 seconds
Starting Epoch 1
  1) Fitting Gaussian Process...
  2) Running local optimum search...  0.32 seconds
  3) Running FUN 2 times in 2 thread(s)...  0.13 seconds
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.

[ FAIL 0 | WARN 0 | SKIP 1 | PASS 78 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'

[ FAIL 0 | WARN 0 | SKIP 1 | PASS 78 ]
> 
> proc.time()
   user  system elapsed 
  52.09    2.04  137.71 
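
Note on the repeated "reducing initialization grid to 10 rows" messages: they refer to the R option named in the log, 'mlexperiments.bayesian.max_init'. A minimal sketch, assuming the option is consumed as printed above (the value 20L is illustrative, not taken from this check), of how a local re-run could allow a larger initialization grid while keeping the same thread limits as the test setup:

    # Illustrative values only; not part of the check output above.
    options(mlexperiments.bayesian.max_init = 20L)   # cap on initialization-grid rows
    Sys.setenv("OMP_THREAD_LIMIT" = 2, "Ncpu" = 2)   # same thread limits as testthat.R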