Package: alookr
Check: examples
New result: ERROR
Running examples in ‘alookr-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: run_models
> ### Title: Fit binary classification model
> ### Aliases: run_models
>
> ### ** Examples
>
> library(dplyr)

Attaching package: ‘dplyr’

The following object is masked from ‘package:randomForest’:

    combine

The following objects are masked from ‘package:stats’:

    filter, lag

The following objects are masked from ‘package:base’:

    intersect, setdiff, setequal, union

>
> # Divide the train data set and the test data set.
> sb <- rpart::kyphosis %>%
+   split_by(Kyphosis)
>
> # Extract the train data set from original data set.
> train <- sb %>%
+   extract_set(set = "train")
>
> # Extract the test data set from original data set.
> test <- sb %>%
+   extract_set(set = "test")
>
> # Sampling for unbalanced data set using SMOTE(synthetic minority over-sampling technique).
> train <- sb %>%
+   sampling_target(seed = 1234L, method = "ubSMOTE")
>
> # Cleaning the set.
> train <- train %>%
+   cleanse
── Checking unique value ─────────────────────────── unique value is one ──
No variables that unique value is one.
── Checking unique rate ─────────────────────────────── high unique rate ──
No variables that high unique rate.
── Checking character variables ─────────────────────── categorical data ──
No character variables.
>
> # Run the model fitting.
> result <- run_models(.data = train, target = "Kyphosis", positive = "present")
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), :
  Passed unrecognized parameters: verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version.
Error in `purrr::map()`:
ℹ In index: 6.
Caused by error in `process.y.margin.and.objective()`:
! Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic
Backtrace:
     ▆
  1. ├─alookr::run_models(.data = train, target = "Kyphosis", positive = "present")
  2. │ └─... %>% ...
  3. ├─tibble::tibble(...)
  4. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
  5. │   └─rlang::eval_tidy(xs[[j]], mask)
  6. ├─purrr::map(., ~future::value(.x))
  7. │ └─purrr:::map_("list", .x, .f, ..., .progress = .progress)
  8. │   ├─purrr:::with_indexed_errors(...)
  9. │   │ └─base::withCallingHandlers(...)
 10. │   ├─purrr:::call_with_cleanup(...)
 11. │   └─alookr (local) .f(.x[[i]], ...)
 12. │     ├─future::value(.x)
 13. │     └─future:::value.Future(.x)
 14. │       └─future:::signalConditions(...)
 15. │         └─base::stop(condition)
 16. └─purrr (local) ``(``)
 17.   └─cli::cli_abort(...)
 18.     └─rlang::abort(...)
Execution halted
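The alookr failure is the clearest statement of the underlying xgboost change: the rewritten top-level xgboost() renames 'data'/'label'/'eta' to 'x'/'y'/'learning_rate', and with a numeric 'y' it accepts only regression-type objectives, so 'binary:logistic' is rejected. A minimal migration sketch, assuming the new interface selects a binary classification objective when 'y' is a two-level factor (the data here is illustrative, not alookr's internals):

    library(xgboost)

    X <- as.matrix(mtcars[, c("mpg", "hp", "wt")])
    y_num <- mtcars$am  # numeric 0/1 label

    # Old-style call, which now warns about the renames and then errors:
    # fit <- xgboost(data = X, label = y_num, objective = "binary:logistic",
    #                eta = 0.3, nrounds = 10)

    # New-style call: a two-level factor y selects a binary
    # classification objective.
    fit <- xgboost(x = X, y = factor(y_num), learning_rate = 0.3, nrounds = 10)
    pred <- predict(fit, X)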
Package: automatedRecLin
Check: examples
New result: ERROR
Running examples in ‘automatedRecLin-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: custom_rec_lin_model
> ### Title: Create a Custom Record Linkage Model
> ### Aliases: custom_rec_lin_model
>
> ### ** Examples
>
> if (requireNamespace("xgboost", quietly = TRUE)) {
+   df_1 <- data.frame(
+     "name" = c("James", "Emma", "William", "Olivia", "Thomas",
+                "Sophie", "Harry", "Amelia", "George", "Isabella"),
+     "surname" = c("Smith", "Johnson", "Brown", "Taylor", "Wilson",
+                   "Davis", "Clark", "Harris", "Lewis", "Walker")
+   )
+   df_2 <- data.frame(
+     "name" = c("James", "Ema", "Wimliam", "Olivia", "Charlotte",
+                "Henry", "Lucy", "Edward", "Alice", "Jack"),
+     "surname" = c("Smith", "Johnson", "Bron", "Tailor", "Moore",
+                   "Evans", "Hall", "Wright", "Green", "King")
+   )
+   comparators <- list("name" = jarowinkler_complement(),
+                       "surname" = jarowinkler_complement())
+   matches <- data.frame("a" = 1:4, "b" = 1:4)
+   vectors <- comparison_vectors(A = df_1, B = df_2, variables = c("name", "surname"),
+                                 comparators = comparators, matches = matches)
+   train_data <- xgboost::xgb.DMatrix(
+     data = as.matrix(vectors$Omega[, c("gamma_name", "gamma_surname")]),
+     label = vectors$Omega$match
+   )
+   params <- list(objective = "binary:logistic",
+                  eval_metric = "logloss")
+   model_xgb <- xgboost::xgboost(data = train_data, params = params,
+                                 nrounds = 100, verbose = 0)
+   custom_xgb_model <- custom_rec_lin_model(model_xgb, vectors)
+   custom_xgb_model
+ }
Warning in throw_err_or_depr_msg("Parameter(s) have been removed from this function: ", :
  Parameter(s) have been removed from this function: params. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), :
  Passed unrecognized parameters: verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Error in xgboost::xgboost(data = train_data, params = params, nrounds = 100, :
  argument "y" is missing, with no default
Calls: -> process.y.margin.and.objective -> NROW
Execution halted

Package: automatedRecLin
Check: tests
New result: ERROR
Running ‘tinytest.R’ [0s/4s]
Running the tests in ‘tests/tinytest.R’ failed.
Complete output:
> if ( requireNamespace("tinytest", quietly=TRUE) ){
+   tinytest::test_package("automatedRecLin", ncpu = 1)
+ }
starting worker pid=1144165 on localhost:11105 at 00:32:03.282
test_comparators.R............    2 tests OK 68ms
test_comparison_vectors.R.....    2 tests OK 52ms
kliep function error: missing value where TRUE/FALSE needed
========================================================
kliep function error: missing value where TRUE/FALSE needed
========================================================
test_mec.R....................   11 tests OK 0.9s
Error in checkForRemoteErrors(val) :
  one node produced an error: argument "y" is missing, with no default
Calls: ... clusterApply -> staticClusterApply -> checkForRemoteErrors
Warning messages:
1: In check.sigma(nsigma, sigma_quantile, sigma, dist_nu) :
  There are duplicate values in 'sigma', only the unique values are used.
2: In check.sigma(nsigma, sigma_quantile, sigma, dist_nu) :
  There are duplicate values in 'sigma', only the unique values are used.
3: In check.sigma(nsigma, sigma_quantile, sigma, dist_nu) :
  There are duplicate values in 'sigma', only the unique values are used.
4: In check.sigma(nsigma, sigma_quantile, sigma, dist_nu) :
  There are duplicate values in 'sigma', only the unique values are used.
5: In check.sigma(nsigma, sigma_quantile, sigma, dist_nu) :
  There are duplicate values in 'sigma', only the unique values are used.
6: In throw_err_or_depr_msg("Parameter(s) have been removed from this function: ", :
  Parameter(s) have been removed from this function: params. This warning will become an error in a future version.
7: In throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), :
  Passed unrecognized parameters: verbose. This warning will become an error in a future version.
8: In throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Execution halted
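automatedRecLin's example already builds an xgb.DMatrix, so the least invasive fix is likely the low-level xgb.train(), which still accepts a 'params' list plus a DMatrix. A sketch under that assumption (toy data standing in for vectors$Omega; not the package's actual patch):

    library(xgboost)

    # Toy stand-ins for vectors$Omega: two comparison scores, 0/1 match label.
    gamma <- matrix(runif(20), ncol = 2,
                    dimnames = list(NULL, c("gamma_name", "gamma_surname")))
    match <- rep(c(1, 0), each = 5)

    dtrain <- xgb.DMatrix(data = gamma, label = match)
    params <- list(objective = "binary:logistic", eval_metric = "logloss")

    # xgb.train() retains the params-list + DMatrix calling convention
    # that the rewritten top-level xgboost() has dropped.
    model_xgb <- xgb.train(params = params, data = dtrain, nrounds = 100)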
Package: autostats
Check: examples
New result: ERROR
Running examples in ‘autostats-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: auto_variable_contributions
> ### Title: Plot Variable Contributions
> ### Aliases: auto_variable_contributions
>
> ### ** Examples
>
>
> iris %>%
+   framecleaner::create_dummies() %>%
+   auto_variable_contributions(
+     tidy_formula(., target = Petal.Width)
+   )
1 column(s) have become 3 dummy columns
Error in if (validate & xgb_obj != "multi:softprob") { :
  argument is of length zero
Calls: %>% -> auto_variable_contributions
Execution halted

Package: autostats
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
...
--- re-building ‘autostats.Rmd’ using rmarkdown

Quitting from autostats.Rmd:40-43 [unnamed-chunk-3]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `if (validate & xgb_obj != "multi:softprob") ...`:
! argument is of length zero
---
Backtrace:
    ▆
 1. ├─iris %>% auto_variable_contributions(species_formula)
 2. ├─autostats::auto_variable_contributions(., species_formula)
 3. │ ├─base::suppressWarnings(...)
 4. │ │ └─base::withCallingHandlers(...)
 5. │ └─data %>% tidy_xgboost(formula, validate = FALSE)
 6. └─autostats::tidy_xgboost(., formula, validate = FALSE)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'autostats.Rmd' failed with diagnostics:
argument is of length zero
--- failed re-building ‘autostats.Rmd’

--- re-building ‘tidyXgboost.Rmd’ using rmarkdown

Quitting from tidyXgboost.Rmd:78-89 [unnamed-chunk-4]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `xgb$feature_names <- nms`:
! ALTLIST classes must provide a Set_elt method [class: XGBAltrepPointerClass, pkg: xgboost]
---
Backtrace:
    ▆
 1. ├─xgb_tuned_fit_grid %>% visualize_model()
 2. ├─autostats::visualize_model(.)
 3. └─autostats:::visualize_model.xgb.Booster(.)
 4.   └─autostats:::plot_varimp_xgboost(...)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'tidyXgboost.Rmd' failed with diagnostics:
ALTLIST classes must provide a Set_elt method [class: XGBAltrepPointerClass, pkg: xgboost]
--- failed re-building ‘tidyXgboost.Rmd’

SUMMARY: processing the following files failed:
‘autostats.Rmd’ ‘tidyXgboost.Rmd’

Error: Vignette re-building failed.
Execution halted
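Both autostats failures reduce to `if (validate & xgb_obj != "multi:softprob")` evaluating a zero-length `xgb_obj`, because the new booster object no longer exposes the objective under `$params`. A defensive sketch of that condition (the accessor fallback is an assumption, not autostats' actual fix):

    # Sketch: guard a condition that used to read the objective off the booster.
    is_validating_nonmulticlass <- function(model, validate = TRUE) {
      # May come back NULL under the new xgboost object layout.
      xgb_obj <- tryCatch(model$params$objective, error = function(e) NULL)
      # `&&` with isTRUE()/identical() keeps the condition length-one and
      # NULL-safe, unlike `validate & xgb_obj != "multi:softprob"`.
      isTRUE(validate) && !identical(xgb_obj, "multi:softprob")
    }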
Package: BioMoR
Check: tests
New result: ERROR
Running ‘testthat.R’ [111s/74s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(BioMoR)
>
> test_check("BioMoR")
Loading required namespace: randomForest
Loading required package: ggplot2
Loading required package: lattice
Loading required package: dplyr

Attaching package: 'dplyr'

The following objects are masked from 'package:stats':

    filter, lag

The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union

Attaching package: 'recipes'

The following object is masked from 'package:stats':

    step

randomForest 4.7-1.2
Type rfNews() to see new features/changes/bug fixes.

Attaching package: 'randomForest'

The following object is masked from 'package:dplyr':

    combine

The following object is masked from 'package:ggplot2':

    margin

Setting direction: controls > cases
note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
------------------------------------------------------------------------------
You have loaded plyr after dplyr - this is likely to cause problems.
If you need functions from both plyr and dplyr, please load plyr first, then dplyr:
library(plyr); library(dplyr)
------------------------------------------------------------------------------

Attaching package: 'plyr'

The following objects are masked from 'package:dplyr':

    arrange, count, desc, failwith, id, mutate, rename, summarise, summarize

Saving _problems/test_models-35.R
[ FAIL 1 | WARN 600 | SKIP 0 | PASS 5 ]

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_models.R:35:3'): XGB model trains and predicts ─────────────────
Error in `{
    if (!(length(ctrl$seeds) == 1L && is.na(ctrl$seeds)))
        set.seed(ctrl$seeds[[iter]][parm])
    loadNamespace("caret")
    loadNamespace("recipes")
    if (ctrl$verboseIter)
        progress(printed[parm, , drop = FALSE], names(resampleIndex), iter)
    if (names(resampleIndex)[iter] != "AllData") {
        modelIndex <- resampleIndex[[iter]]
        holdoutIndex <- ctrl$indexOut[[iter]]
    }
    else {
        modelIndex <- 1:nrow(dat)
        holdoutIndex <- modelIndex
    }
    if (testing)
        cat("pre-model\n")
    if (!is.null(info$submodels[[parm]]) && nrow(info$submodels[[parm]]) > 0) {
        submod <- info$submodels[[parm]]
    }
    else submod <- NULL
    mod_rec <- try(rec_model(rec, subset_x(dat, modelIndex), method = method,
        tuneValue = info$loop[parm, , drop = FALSE], obsLevels = lev,
        classProbs = ctrl$classProbs, sampling = ctrl$sampling, ...),
        silent = TRUE)
    if (testing)
        print(mod_rec)
    if (!model_failed(mod_rec)) {
        predicted <- try(rec_pred(method = method, object = mod_rec,
            newdata = subset_x(dat, holdoutIndex), param = submod),
            silent = TRUE)
        if (pred_failed(predicted)) {
            fail_warning(settings = printed[parm, , drop = FALSE],
                msg = predicted, where = "predictions",
                iter = names(resampleIndex)[iter], verb = ctrl$verboseIter)
            predicted <- fill_failed_pred(index = holdoutIndex, lev = lev, submod)
        }
    }
    else {
        fail_warning(settings = printed[parm, , drop = FALSE], msg = mod_rec,
            iter = names(resampleIndex)[iter], verb = ctrl$verboseIter)
        predicted <- fill_failed_pred(index = holdoutIndex, lev = lev, submod)
    }
    if (testing)
        print(head(predicted))
    if (ctrl$classProbs) {
        if (!model_failed(mod_rec)) {
            probValues <- rec_prob(method = method, object = mod_rec,
                newdata = subset_x(dat, holdoutIndex), param = submod)
        }
        else {
            probValues <- fill_failed_prob(holdoutIndex, lev, submod)
        }
        if (testing)
            print(head(probValues))
    }
    predicted <- trim_values(predicted, ctrl, is.null(lev))
    ho_data <- holdout_rec(mod_rec, dat, holdoutIndex)
    if (!is.null(submod)) {
        allParam <- expandParameters(info$loop[parm, , drop = FALSE], submod)
        allParam <- allParam[complete.cases(allParam), , drop = FALSE]
        predicted <- lapply(predicted, function(x, lv, dat) {
            x <- outcome_conversion(x, lv = lev)
            dat$pred <- x
            dat
        }, lv = lev, dat = ho_data)
        if (testing)
            print(head(predicted))
        if (ctrl$classProbs)
            predicted <- mapply(cbind, predicted, probValues, SIMPLIFY = FALSE)
        if (keep_pred) {
            tmpPred <- predicted
            for (modIndex in seq(along.with = tmpPred)) {
                tmpPred[[modIndex]] <- merge(tmpPred[[modIndex]],
                    allParam[modIndex, , drop = FALSE], all = TRUE)
            }
            tmpPred <- rbind.fill(tmpPred)
            tmpPred$Resample <- names(resampleIndex)[iter]
        }
        else tmpPred <- NULL
        thisResample <- lapply(predicted, ctrl$summaryFunction, lev = lev, model = method)
        if (testing)
            print(head(thisResample))
        if (length(lev) > 1 && length(lev) <= 50) {
            cells <- lapply(predicted, function(x) flatTable(x$pred, x$obs))
            for (ind in seq(along.with = cells))
                thisResample[[ind]] <- c(thisResample[[ind]], cells[[ind]])
        }
        thisResample <- do.call("rbind", thisResample)
        thisResample <- cbind(allParam, thisResample)
    }
    else {
        pred_val <- outcome_conversion(predicted, lv = lev)
        tmp <- ho_data
        tmp$pred <- pred_val
        if (ctrl$classProbs)
            tmp <- cbind(tmp, probValues)
        if (keep_pred) {
            tmpPred <- tmp
            tmpPred$rowIndex <- holdoutIndex
            tmpPred <- merge(tmpPred, info$loop[parm, , drop = FALSE], all = TRUE)
            tmpPred$Resample <- names(resampleIndex)[iter]
        }
        else tmpPred <- NULL
        thisResample <- ctrl$summaryFunction(tmp, lev = lev, model = method)
        if (length(lev) > 1 && length(lev) <= 50)
            thisResample <- c(thisResample, flatTable(tmp$pred, tmp$obs))
        thisResample <- as.data.frame(t(thisResample), stringsAsFactors = FALSE)
        thisResample <- cbind(thisResample, info$loop[parm, , drop = FALSE])
    }
    thisResample$Resample <- names(resampleIndex)[iter]
    thisResampleExtra <- optimism_rec(ctrl, dat, iter, lev, method, mod_rec,
        predicted, submod, info$loop[parm, , drop = FALSE])
    if (ctrl$verboseIter)
        progress(printed[parm, , drop = FALSE], names(resampleIndex), iter, FALSE)
    if (testing)
        print(thisResample)
    list(resamples = thisResample, pred = tmpPred, resamplesExtra = thisResampleExtra)
}`: task 1 failed - "$ operator is invalid for atomic vectors"
Backtrace:
    ▆
 1. └─BioMoR::train_xgb_caret(df, "Label", ctrl) at test_models.R:35:3
 2.   ├─caret::train(...)
 3.   └─caret:::train.recipe(...)
 4.     └─caret:::train_rec(...)
 5.       └─... %op% ...
 6.         └─e$fun(obj, substitute(ex), parent.frame(), e$data)

[ FAIL 1 | WARN 600 | SKIP 0 | PASS 5 ]
Error: ! Test failures.
Execution halted
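BioMoR's failure surfaces deep inside caret's resampling loop, where the fitted booster no longer behaves like the old list-style object ("$ operator is invalid for atomic vectors"). Until the caret-based training path is updated, the test could be guarded on the installed xgboost version; a sketch using testthat (the 2.0.0 cutoff is an assumption about when the interface changed):

    library(testthat)

    test_that("XGB model trains and predicts", {
      # Hypothetical guard: skip under the rewritten xgboost interface
      # until the caret-based training path supports it.
      skip_if(utils::packageVersion("xgboost") >= "2.0.0",
              "caret + new xgboost interface not yet supported here")
      # ... original caret/xgboost assertions would run here ...
      expect_true(TRUE)  # placeholder so the test has an expectation
    })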
Package: bundle
Check: examples
New result: ERROR
Running examples in ‘bundle-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: bundle.model_fit
> ### Title: Bundle a parsnip 'model_fit' object
> ### Aliases: bundle.model_fit bundle_model_fit
>
> ### ** Examples
>
> ## Don't show:
> if (rlang::is_installed("parsnip") && rlang::is_installed("xgboost")) (if (getRversion() >= "3.4") withAutoprint else force)({ # examplesIf
+ ## End(Don't show)
+ # fit model and bundle ------------------------------------------------
+ library(parsnip)
+ library(xgboost)
+
+ set.seed(1)
+
+ mod <-
+   boost_tree(trees = 5, mtry = 3) %>%
+   set_mode("regression") %>%
+   set_engine("xgboost") %>%
+   fit(mpg ~ ., data = mtcars)
+
+ mod_bundle <- bundle(mod)
+
+ # then, after saveRDS + readRDS or passing to a new session ----------
+ mod_unbundled <- unbundle(mod_bundle)
+
+ mod_unbundled_preds <- predict(mod_unbundled, new_data = mtcars)
+ ## Don't show:
+ }) # examplesIf
> library(parsnip)
> library(xgboost)
> set.seed(1)
> mod <- boost_tree(trees = 5, mtry = 3) %>% set_mode("regression") %>%
+   set_engine("xgboost") %>% fit(mpg ~ ., data = mtcars)
> mod_bundle <- bundle(mod)
> mod_unbundled <- unbundle(mod_bundle)
Error in xgboost::xgb.load.raw(object, as_booster = TRUE) :
  unused argument (as_booster = TRUE)
Calls: ... -> unbundle -> unbundle.bundle ->
Execution halted

Package: bundle
Check: tests
New result: ERROR
Running ‘testthat.R’ [103s/103s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> library(testthat)
> library(bundle)
>
> test_check("bundle")
Loading required package: ggplot2
Loading required package: lattice

Attaching package: 'parsnip'

The following object is masked from 'package:dbarts':

    bart

Saving _problems/test_bundle_parsnip-47.R
Loading required package: dplyr

Attaching package: 'dplyr'

The following objects are masked from 'package:stats':

    filter, lag

The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union

Attaching package: 'recipes'

The following object is masked from 'package:stats':

    step

Saving _problems/test_bundle_workflows-69.R
Saving _problems/test_bundle_xgboost-33.R
Saving _problems/test_utils-43.R
[ FAIL 4 | WARN 0 | SKIP 11 | PASS 24 ]

══ Skipped tests (11) ══════════════════════════════════════════════════════════
• !interactive() is TRUE (4): 'test_bundle_h2o.R:137:3', 'test_bundle_h2o.R:284:3', 'test_bundle_h2o.R:428:3', 'test_bundle_h2o.R:564:3'
• On CRAN (7): 'test_bundle_bart.R:100:1', 'test_bundle_bart.R:106:1', 'test_bundle_embed.R:2:3', 'test_bundle_h2o.R:2:3', 'test_bundle_keras.R:2:3', 'test_bundle_torch.R:6:3', 'test_bundle_workflows.R:126:3'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_bundle_parsnip.R:34:3'): bundling + unbundling parsnip model_fits (xgboost) ──
Error: ! in callr subprocess.
Caused by error in `xgboost::xgb.load.raw(object, as_booster = TRUE)`:
! unused argument (as_booster = TRUE)
Backtrace:
    ▆
 1. └─callr::r(...) at test_bundle_parsnip.R:34:3
 2.   └─callr:::get_result(output = out, options)
 3.     └─throw(callr_remote_error(remerr, output), parent = fix_msg(remerr[[3]]))
── Error ('test_bundle_workflows.R:53:3'): bundling + unbundling tidymodels workflows (xgboost + step_log) ──
Error: ! in callr subprocess.
Caused by error in `xgboost::xgb.load.raw(object, as_booster = TRUE)`:
! unused argument (as_booster = TRUE)
Backtrace:
    ▆
 1. └─callr::r(...) at test_bundle_workflows.R:53:3
 2.   └─callr:::get_result(output = out, options)
 3.     └─throw(callr_remote_error(remerr, output), parent = fix_msg(remerr[[3]]))
── Error ('test_bundle_xgboost.R:23:3'): bundling + unbundling xgboost fits ────
Error: ! in callr subprocess.
Caused by error in `process.y.margin.and.objective(y, base_margin, objective, params)`:
! Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic
Backtrace:
    ▆
 1. └─callr::r(...) at test_bundle_xgboost.R:23:3
 2.   └─callr:::get_result(output = out, options)
 3.     └─throw(callr_remote_error(remerr, output), parent = fix_msg(remerr[[3]]))
── Error ('test_utils.R:43:3'): swap_element works ─────────────────────────────
Error in `xgboost::xgb.load.raw(object, as_booster = TRUE)`: unused argument (as_booster = TRUE)
Backtrace:
    ▆
 1. └─bundle::swap_element(res, "fit") at test_utils.R:43:3
 2. ├─bundle::unbundle(component)
 3. └─bundle:::unbundle.bundle(component)
 4.   └─x$situate(get_object(x))

[ FAIL 4 | WARN 0 | SKIP 11 | PASS 24 ]
Error: ! Test failures.
Execution halted
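bundle's situate() hook still calls `xgb.load.raw(object, as_booster = TRUE)`, but the `as_booster` argument has been removed from xgboost. A sketch of the raw serialization round-trip under the new signature, assuming `xgb.load.raw()` now returns an `xgb.Booster` directly:

    library(xgboost)

    dtrain <- xgb.DMatrix(as.matrix(mtcars[, -1]), label = mtcars$mpg)
    bst <- xgb.train(params = list(objective = "reg:squarederror"),
                     data = dtrain, nrounds = 5)

    # Serialize to a raw vector; this is what survives saveRDS/readRDS
    # and transfer to a fresh session.
    raw_model <- xgb.save.raw(bst)

    # Old restore call, now an error:
    # bst2 <- xgb.load.raw(raw_model, as_booster = TRUE)

    # New restore call: no `as_booster`.
    bst2 <- xgb.load.raw(raw_model)
    pred <- predict(bst2, dtrain)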
Package: butcher
Check: examples
New result: ERROR
Running examples in ‘butcher-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: axe-flexsurvreg
> ### Title: Axing an flexsurvreg.
> ### Aliases: axe-flexsurvreg axe_call.flexsurvreg axe_env.flexsurvreg
>
> ### ** Examples
>
> ## Don't show:
> if (rlang::is_installed("flexsurv")) (if (getRversion() >= "3.4") withAutoprint else force)({ # examplesIf
+ ## End(Don't show)
+ # Load libraries
+ library(parsnip)
+ library(flexsurv)
+
+ # Create model and fit
+ flexsurvreg_fit <- surv_reg(mode = "regression", dist = "gengamma") %>%
+   set_engine("flexsurv") %>%
+   fit(Surv(Tstart, Tstop, status) ~ trans, data = bosms3)
+
+ out <- butcher(flexsurvreg_fit, verbose = TRUE)
+
+ # Another flexsurvreg model object
+ wrapped_flexsurvreg <- function() {
+   some_junk_in_environment <- runif(1e6)
+   fit <- flexsurvreg(Surv(futime, fustat) ~ 1,
+                      data = ovarian, dist = "weibull")
+   return(fit)
+ }
+
+ out <- butcher(wrapped_flexsurvreg(), verbose = TRUE)
+ ## Don't show:
+ }) # examplesIf
> library(parsnip)
> library(flexsurv)
Loading required package: survival
> flexsurvreg_fit <- surv_reg(mode = "regression", dist = "gengamma") %>%
+   set_engine("flexsurv") %>% fit(Surv(Tstart, Tstop, status) ~ trans, data = bosms3)
Error: ! `surv_reg()` was deprecated in parsnip 1.4.0 and is now defunct.
ℹ Please use `survival_reg()` instead.
Backtrace:
     ▆
  1. ├─(if (getRversion() >= "3.4") withAutoprint else force)(...)
  2. │ └─base::source(...)
  3. │   ├─base::withVisible(eval(ei, envir))
  4. │   └─base::eval(ei, envir)
  5. │     └─base::eval(ei, envir)
  6. ├─... %>% ...
  7. ├─generics::fit(., Surv(Tstart, Tstop, status) ~ trans, data = bosms3)
  8. ├─parsnip::set_engine(., "flexsurv")
  9. └─parsnip::surv_reg(mode = "regression", dist = "gengamma")
 10.   └─lifecycle::deprecate_stop("1.4.0", "surv_reg()", "survival_reg()")
 11.     └─lifecycle:::deprecate_stop0(msg)
 12.       └─rlang::cnd_signal(...)
Execution halted

Package: butcher
Check: tests
New result: ERROR
Running ‘testthat.R’ [21s/20s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(butcher)
>
> test_check("butcher")
1 package (dimRed) is needed for this step but is not installed. To install run: `install.packages("dimRed")`
Saving _problems/test-xrf-12.R
Saving _problems/test-xrf-25.R
Saving _problems/test-xrf-40.R
Saving _problems/test-xrf-58.R
[ FAIL 4 | WARN 16 | SKIP 35 | PASS 196 ]

══ Skipped tests (35) ══════════════════════════════════════════════════════════
• On CRAN (34): 'test-c5.R:2:3', 'test-earth.R:2:3', 'test-earth.R:31:3', 'test-elnet.R:2:3', 'test-flexsurvreg.R:2:3', 'test-flexsurvreg.R:15:3', 'test-flexsurvreg.R:43:3', 'test-gausspr.R:2:3', 'test-glmnet.R:2:3', 'test-kknn.R:2:3', 'test-klaR.R:2:3', 'test-klaR.R:17:3', 'test-ksvm.R:2:3', 'test-mda.R:2:3', 'test-mda.R:23:3', 'test-mda.R:95:3', 'test-mixOmics.R:2:3', 'test-mixOmics.R:21:3', 'test-mixOmics.R:38:3', 'test-multnet.R:2:3', 'test-nnet.R:2:3', 'test-randomForest.R:2:3', 'test-rpart.R:2:3', 'test-rpart.R:18:3', 'test-sclass.R:2:3', 'test-survreg.R:2:3', 'test-survreg.penal.R:2:3', 'test-train.R:2:3', 'test-train.R:42:3', 'test-train.recipe.R:9:3', 'test-ui.R:1:1', 'test-weigh.R:1:1', 'test-xgb.R:6:3', 'test-xgb.R:34:3'
• {mixOmics} is not installed (1): 'test-recipe.R:419:3'

══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-xrf.R:12:3'): xrf + axe_call() works ─────────────────────────
Expected `x$xgb$call` to equal `rlang::expr(dummy_call())`.
Differences:
`actual` is NULL
`expected` is a call
── Error ('test-xrf.R:25:3'): xrf + axe_env() works ────────────────────────────
Error in `x$callbacks <- purrr::map(x$callbacks, function(x) as.function(c(formals(x), body(x)), env = rlang::base_env()))`: ALTLIST classes must provide a Set_elt method [class: XGBAltrepPointerClass, pkg: xgboost]
Backtrace:
    ▆
 1. ├─butcher::axe_env(res) at test-xrf.R:25:3
 2. └─butcher:::axe_env.xrf(res)
 3.   ├─butcher::axe_env(res$xgb, ...)
 4.   └─butcher:::axe_env.xgb.Booster(res$xgb, ...)
── Error ('test-xrf.R:40:3'): xrf + butcher() works ────────────────────────────
Error in `x$callbacks <- purrr::map(x$callbacks, function(x) as.function(c(formals(x), body(x)), env = rlang::base_env()))`: ALTLIST classes must provide a Set_elt method [class: XGBAltrepPointerClass, pkg: xgboost]
Backtrace:
    ▆
 1. └─butcher::butcher(res) at test-xrf.R:40:3
 2. ├─butcher::axe_env(x, verbose = FALSE, ...)
 3. └─butcher:::axe_env.xrf(x, verbose = FALSE, ...)
 4.   ├─butcher::axe_env(res$xgb, ...)
 5.   └─butcher:::axe_env.xgb.Booster(res$xgb, ...)
── Error ('test-xrf.R:58:3'): xrf + predict() works ────────────────────────────
Error in `x$callbacks <- purrr::map(x$callbacks, function(x) as.function(c(formals(x), body(x)), env = rlang::base_env()))`: ALTLIST classes must provide a Set_elt method [class: XGBAltrepPointerClass, pkg: xgboost]
Backtrace:
    ▆
 1. └─butcher::butcher(res) at test-xrf.R:58:3
 2. ├─butcher::axe_env(x, verbose = FALSE, ...)
 3. └─butcher:::axe_env.xrf(x, verbose = FALSE, ...)
 4.   ├─butcher::axe_env(res$xgb, ...)
 5.   └─butcher:::axe_env.xgb.Booster(res$xgb, ...)

[ FAIL 4 | WARN 16 | SKIP 35 | PASS 196 ]
Error: ! Test failures.
Execution halted
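The butcher example failure is a parsnip deprecation rather than an xgboost change: `surv_reg()` is defunct as of parsnip 1.4.0. Following the error's own advice, the spec would become roughly the sketch below (`survival_reg()` defaults to the "censored regression" mode, and its flexsurv engine may additionally require the `censored` extension package; this is not butcher's actual patch):

    library(parsnip)
    library(flexsurv)
    # library(censored)  # possibly needed for the flexsurv engine

    flexsurvreg_fit <- survival_reg(dist = "gengamma") %>%
      set_engine("flexsurv") %>%
      fit(Surv(Tstart, Tstop, status) ~ trans, data = bosms3)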
Package: CausalGPS
Check: tests
New result: ERROR
Running ‘testthat.R’ [11s/11s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(CausalGPS)
>
> Sys.setenv("R_TESTS" = "")
> library(testthat)
> test_check("CausalGPS")
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Saving _problems/test-estimate_erf-22.R
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Saving _problems/test-estimate_gps-29.R
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Saving _problems/test-generate_pseudo_pop-35.R
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default
Saving _problems/test-trim_it-34.R
[ FAIL 4 | WARN 200 | SKIP 19 | PASS 68 ]

══ Skipped tests (19) ══════════════════════════════════════════════════════════
• On CRAN (17): 'test-CausalGPS_smooth.R:3:4', 'test-absolute_corr_fun.R:4:3', 'test-absolute_weighted_corr_fun.R:4:3', 'test-check_covar_balance.R:2:1', 'test-check_kolmogorov_smirnov.R:2:3', 'test-compile_pseudo_pop.R:3:3', 'test-compute_density.R:3:3', 'test-compute_minmax.R:3:3', 'test-compute_resid.R:3:3', 'test-create_weighting.R:3:3', 'test-estimate_gps.R:63:3', 'test-estimate_gps.R:92:3', 'test-estimate_hat_vals.R:3:3', 'test-matching_fn.R:3:3', 'test-set_logger.R:3:5', 'test-train_it.R:3:3', 'test-train_it.R:18:3'
• empty test (2): 'test-estimate_npmetric_erf.R:1:1', 'test-estimate_npmetric_erf.R:89:1'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-estimate_erf.R:19:3'): estimate erf works as expected ──────────
Error in `SuperLearner::SuperLearner(Y = target, X = data.frame(input), SL.library = sl_lib_internal)`: All algorithms dropped from library
Backtrace:
    ▆
 1. └─CausalGPS::estimate_gps(...) at test-estimate_erf.R:19:3
 2. └─CausalGPS:::train_it(...)
 3.   └─SuperLearner::SuperLearner(...)
── Error ('test-estimate_gps.R:26:3'): estimate_gps works as expected. ─────────
Error in `SuperLearner::SuperLearner(Y = target, X = data.frame(input), SL.library = sl_lib_internal)`: All algorithms dropped from library
Backtrace:
    ▆
 1. └─CausalGPS::estimate_gps(...) at test-estimate_gps.R:26:3
 2. └─CausalGPS:::train_it(...)
 3.   └─SuperLearner::SuperLearner(...)
── Error ('test-generate_pseudo_pop.R:31:3'): generate_pseudo_pop works as expected. ──
Error in `SuperLearner::SuperLearner(Y = target, X = data.frame(input), SL.library = sl_lib_internal)`: All algorithms dropped from library
Backtrace:
    ▆
 1. └─CausalGPS::estimate_gps(...) at test-generate_pseudo_pop.R:31:3
 2. └─CausalGPS:::train_it(...)
 3.   └─SuperLearner::SuperLearner(...)
── Error ('test-trim_it.R:31:3'): trim_it works as expected ────────────────────
Error in `SuperLearner::SuperLearner(Y = target, X = data.frame(input), SL.library = sl_lib_internal)`: All algorithms dropped from library
Backtrace:
    ▆
 1. └─CausalGPS::estimate_gps(...) at test-trim_it.R:31:3
 2. └─CausalGPS:::train_it(...)
 3.   └─SuperLearner::SuperLearner(...)

[ FAIL 4 | WARN 200 | SKIP 19 | PASS 68 ]
Error: ! Test failures.
Execution halted

Package: CRE
Check: tests
New result: ERROR
Running ‘testthat.R’ [5s/5s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(CRE)
>
> test_check("CRE")
Error in xgboost::xgboost(data = xgmat, objective = "binary:logistic", : argument "y" is missing, with no default
Error in xgboost::xgboost(data = xgmat, objective = "binary:logistic", : argument "y" is missing, with no default
Saving _problems/test-estimate_cate-165.R
[ FAIL 1 | WARN 10 | SKIP 25 | PASS 4 ]

══ Skipped tests (25) ══════════════════════════════════════════════════════════
• On CRAN (25): 'test-check_input_data.R:3:3', 'test-cre.R:3:3', 'test-discover_rules.R:3:3', 'test-estimate_cate.R:3:3', 'test-estimate_ite.R:4:3', 'test-estimate_ite_aipw.R:3:3', 'test-estimate_ite_bart.R:4:3', 'test-estimate_ite_cf.R:4:3', 'test-estimate_ite_poisson.R:3:3', 'test-estimate_ite_slearner.R:3:3', 'test-estimate_ite_tlearner.R:3:3', 'test-estimate_ite_xlearner.R:3:3', 'test-estimate_ps.R:3:3', 'test-evaluate.R:3:3', 'test-extract_effect_modifiers.R:2:3', 'test-extract_rules.R:4:3', 'test-filter_correlated_rules.R:3:3', 'test-filter_extreme_rules.R:3:3', 'test-filter_irrelevant_rules.R:4:3', 'test-generate_cre_dataset.R:4:3', 'test-generate_rules.R:3:3', 'test-generate_rules_matrix.R:3:3', 'test-honest_splitting.R:3:3', 'test-interpret_rules.R:3:3', 'test-predict.R:3:3'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-estimate_cate.R:162:3'): CATE Estimation Runs Correctly (test 2/2) ──
Error in `SuperLearner::SuperLearner(Y = z, X = as.data.frame(X), newX = as.data.frame(X), family = binomial(), SL.library = ps_method, cvControl = list(V = 0))`: All algorithms dropped from library
Backtrace:
    ▆
 1. └─CRE:::estimate_ite(...) at test-estimate_cate.R:162:3
 2. └─CRE:::estimate_ite_aipw(y, z, X, learner_ps, learner_y)
 3.   └─CRE:::estimate_ps(z, X, learner_ps)
 4.     └─SuperLearner::SuperLearner(...)

[ FAIL 1 | WARN 10 | SKIP 25 | PASS 4 ]
Error: ! Test failures.
Execution halted
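CausalGPS and CRE both call `xgboost(data = xgmat, objective = ..., nrounds = ntrees, ...)` from SuperLearner-style wrappers, so under the renamed signature the label never arrives and 'y' is reported missing; every wrapper then errors, and SuperLearner drops all algorithms from its library. A sketch of such a wrapper updated for the new argument names (a hypothetical wrapper, not either package's code):

    library(xgboost)

    # Hypothetical SuperLearner-style wrapper: data -> x, label -> y.
    sl_xgb_fit <- function(Y, X, ntrees = 50) {
      # Old: xgboost(data = xgb.DMatrix(as.matrix(X), label = Y),
      #              objective = "binary:logistic", nrounds = ntrees)
      # New: pass the label as `y`; a two-level factor selects a
      # binary classification objective.
      xgboost(x = as.matrix(X), y = factor(Y), nrounds = ntrees)
    }

    fit <- sl_xgb_fit(Y = rbinom(50, 1, 0.5),
                      X = matrix(rnorm(100), ncol = 2))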
Package: csmpv
Check: examples
New result: ERROR
Running examples in ‘csmpv-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: LASSO2_XGBtraining
> ### Title: Variable Selection with LASSO2 and Modeling with XGBoost
> ### Aliases: LASSO2_XGBtraining
>
> ### ** Examples
>
> # Load in data sets:
> data("datlist", package = "csmpv")
> tdat = datlist$training
>
> # The function saves files locally. You can define your own temporary directory.
> # If not, tempdir() can be used to get the system's temporary directory.
> temp_dir = tempdir()
> # As an example, let's define Xvars, which will be used later:
> Xvars = c("highIPI", "B.Symptoms", "MYC.IHC", "BCL2.IHC", "CD10.IHC", "BCL6.IHC")
> # The function can work with three different outcome types.
> # Here, we use binary as an example:
> blxfit = LASSO2_XGBtraining(data = tdat, biomks = Xvars, Y = "DZsig",
+                             outfile = paste0(temp_dir, "/binary_LASSO2_XGBoost"))
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), :
  Passed unrecognized parameters: verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'gamma' has been renamed to 'min_split_loss'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", :
  Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version.
Error in xgboost::xgboost(objective = "binary:logistic", data = Dtrain, :
  argument "y" is missing, with no default
Calls: LASSO2_XGBtraining ... -> process.y.margin.and.objective -> NROW
Execution halted

Package: csmpv
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
...
--- re-building ‘csmpv_vignette.rmd’ using rmarkdown

Quitting from csmpv_vignette.rmd:421-425 [unnamed-chunk-34]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `xgboost::xgboost()`:
! argument "y" is missing, with no default
---
Backtrace:
    ▆
 1. └─csmpv::XGBtraining(...)
 2.   └─xgboost::xgboost(...)
 3.     └─xgboost:::process.y.margin.and.objective(...)
 4.       └─base::NROW(y)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'csmpv_vignette.rmd' failed with diagnostics:
argument "y" is missing, with no default
--- failed re-building ‘csmpv_vignette.rmd’

SUMMARY: processing the following file failed:
‘csmpv_vignette.rmd’

Error: Vignette re-building failed.
Execution halted
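csmpv hits the same missing-'y' error and adds two more renames to the list ('gamma', 'eta'). Collecting the renames reported across these logs into one illustrative call (names taken from the deprecation warnings above; the call shape is a sketch, not csmpv's code):

    library(xgboost)

    X <- as.matrix(mtcars[, c("cyl", "disp", "hp", "wt")])

    # Renames reported by the deprecation warnings in these checks:
    #   data -> x, label -> y, eta -> learning_rate, gamma -> min_split_loss
    fit <- xgboost(
      x = X,
      y = factor(mtcars$am),   # binary target as a factor
      learning_rate  = 0.1,    # was `eta`
      min_split_loss = 0,      # was `gamma`
      nrounds = 20
    )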
Package: CytoProfile
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
...
--- re-building ‘getting_started.Rmd’ using rmarkdown
Warning: ggrepel: 6 unlabeled data points (too many overlaps). Consider increasing max.overlaps
Warning: ggrepel: 2 unlabeled data points (too many overlaps). Consider increasing max.overlaps

Quitting from getting_started.Rmd:316-338 [ML1]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `confusionMatrix.default()`:
! The data must contain some levels that overlap the reference.
---
Backtrace:
    ▆
 1. └─CytoProfile::cyt_xgb(...)
 2. ├─caret::confusionMatrix(as.factor(cv_pred_labels), as.factor(actual_labels))
 3. └─caret:::confusionMatrix.default(as.factor(cv_pred_labels), as.factor(actual_labels))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'getting_started.Rmd' failed with diagnostics:
The data must contain some levels that overlap the reference.
--- failed re-building ‘getting_started.Rmd’

SUMMARY: processing the following file failed:
‘getting_started.Rmd’

Error: Vignette re-building failed.
Execution halted

Package: DALEXtra
Check: examples
New result: ERROR
Running examples in ‘DALEXtra-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: explain_xgboost
> ### Title: Create explainer from your xgboost model
> ### Aliases: explain_xgboost
>
> ### ** Examples
>
> library("xgboost")
> library("DALEXtra")
> library("mlr")
Loading required package: ParamHelpers
> # 8th column is target that has to be omitted in X data
> data <- as.matrix(createDummyFeatures(titanic_imputed[,-8]))
> model <- xgboost(data, titanic_imputed$survived, nrounds = 10,
+                  params = list(objective = "binary:logistic"),
+                  prediction = TRUE)
Warning in throw_err_or_depr_msg("Parameter(s) have been removed from this function: ", :
  Parameter(s) have been removed from this function: params. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), :
  Passed unrecognized parameters: prediction. This warning will become an error in a future version.
> # explainer with encode functiom
> explainer_1 <- explain_xgboost(model, data = titanic_imputed[,-8],
+                                titanic_imputed$survived,
+                                encode_function = function(data) {
+                                  as.matrix(createDummyFeatures(data))
+                                })
Preparation of a new explainer is initiated
  -> model label       : xgb.Booster ( default )
  -> data              : 2207 rows 7 cols
  -> target variable   : 2207 values
  -> predict function  : yhat.xgb.Booster will be used ( default )
  -> predicted values  : No value for predict function target column. ( default )
Error in strsplit(model$params$objective, ":", fixed = TRUE) :
  non-character argument
Calls: explain_xgboost ... explain -> model_info -> model_info.xgb.Booster -> strsplit
Execution halted

Package: DALEXtra
Check: tests
New result: ERROR
Running ‘testthat.R’ [171s/168s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(DALEXtra)
Loading required package: DALEX
Welcome to DALEX (version: 2.5.3).
Find examples and detailed introduction at: http://ema.drwhy.ai/
Additional features will be available after installation of: ggpubr. Use 'install_dependencies()' to get all suggested dependencies
>
> test_check("DALEXtra")
Preparation of a new explainer is initiated
  -> model label       : LM
  -> data              : 9000 rows 6 cols
  -> target variable   : 9000 values
  -> predict function  : yhat.WrappedModel will be used ( default )
  -> predicted values  : No value for predict function target column. ( default )
  -> model_info        : package mlr , ver. 2.19.3 , task regression ( default )
  -> predicted values  : numerical, min = 1792.597 , mean = 3506.836 , max = 6241.447
  -> residual function : difference between y and yhat ( default )
  -> residuals         : numerical, min = -257.2555 , mean = 4.687686 , max = 472.356
A new explainer has been created!

Preparation of a new explainer is initiated
  -> model label       : RF
  -> data              : 9000 rows 6 cols
  -> target variable   : 9000 values
  -> predict function  : yhat.WrappedModel will be used ( default )
  -> predicted values  : No value for predict function target column. ( default )
  -> model_info        : package mlr , ver. 2.19.3 , task regression ( default )
  -> predicted values  : numerical, min = 1811.874 , mean = 3503.909 , max = 6251.841
  -> residual function : difference between y and yhat ( default )
  -> residuals         : numerical, min = -551.8409 , mean = 7.614965 , max = 784.4979
A new explainer has been created!
Preparation of a new explainer is initiated
  -> model label       : GBM
  -> data              : 9000 rows 6 cols
  -> target variable   : 9000 values
  -> predict function  : yhat.WrappedModel will be used ( default )
  -> predicted values  : No value for predict function target column. ( default )
  -> model_info        : package mlr , ver. 2.19.3 , task regression ( default )
  -> predicted values  : numerical, min = 2121.711 , mean = 3502.64 , max = 6055.528
  -> residual function : difference between y and yhat ( default )
  -> residuals         : numerical, min = -518.7114 , mean = 8.883995 , max = 747.0608
A new explainer has been created!
additional arguments ignored in warning()
Preparation of a new explainer is initiated
  -> model label       : ranger ( default )
  -> data              : 2207 rows 7 cols
  -> target variable   : 2207 values
  -> predict function  : yhat.ranger will be used ( default )
  -> predicted values  : No value for predict function target column. ( default )
  -> model_info        : package ranger , ver. 0.17.0 , task classification ( default )
  -> predicted values  : numerical, min = 0.01717233 , mean = 0.3218475 , max = 0.9895025
  -> residual function : difference between y and yhat ( default )
  -> residuals         : numerical, min = -0.7852652 , mean = 0.0003092278 , max = 0.8834231
A new explainer has been created!
additional arguments ignored in warning()
Saving _problems/test_xgboost_explain-13.R
Saving _problems/test_xgboost_explain-31.R
Saving _problems/test_xgboost_explain-50.R
[ FAIL 3 | WARN 15 | SKIP 11 | PASS 44 ]

══ Skipped tests (11) ══════════════════════════════════════════════════════════
• Conda test env needed for tests (6): 'test_create_env.R:6:3', 'test_create_env.R:27:3', 'test_create_env.R:40:3', 'test_keras_explain.R:6:2', 'test_scikitlearn_explain.R:6:3', 'tests_prints.R:8:3'
• JAVA entry needed for tests (4): 'test_h2o_explain.R:8:3', 'test_h2o_explain.R:33:3', 'test_h2o_explain.R:56:3', 'test_h2o_explain.R:90:3'
• Test with windows (1): 'test_champion_challenger.R:5:3'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_xgboost_explain.R:11:3'): creating explainer classif ───────────
Error in `strsplit(model$params$objective, ":", fixed = TRUE)`: non-character argument
Backtrace:
    ▆
 1. └─DALEXtra::explain_xgboost(...) at test_xgboost_explain.R:11:3
 2. └─DALEX::explain(...)
 3.   ├─DALEX::model_info(model, is_multiclass = task_subtype)
 4.   └─DALEXtra:::model_info.xgb.Booster(model, is_multiclass = task_subtype)
 5.     └─base::strsplit(model$params$objective, ":", fixed = TRUE)
── Error ('test_xgboost_explain.R:29:3'): creating explainer regr ──────────────
Error in `strsplit(model$params$objective, ":", fixed = TRUE)`: non-character argument
Backtrace:
    ▆
 1. └─DALEXtra::explain_xgboost(...) at test_xgboost_explain.R:29:3
 2. └─DALEX::explain(...)
 3.   ├─DALEX::model_info(model, is_multiclass = task_subtype)
 4.   └─DALEXtra:::model_info.xgb.Booster(model, is_multiclass = task_subtype)
 5.     └─base::strsplit(model$params$objective, ":", fixed = TRUE)
── Error ('test_xgboost_explain.R:48:3'): creating explainer multi ─────────────
Error in `strsplit(model$params$objective, ":", fixed = TRUE)`: non-character argument
Backtrace:
    ▆
 1. └─DALEXtra::explain_xgboost(...) at test_xgboost_explain.R:48:3
 2. └─DALEX::explain(...)
 3.   ├─DALEX::model_info(model, is_multiclass = task_subtype)
 4.   └─DALEXtra:::model_info.xgb.Booster(model, is_multiclass = task_subtype)
 5.     └─base::strsplit(model$params$objective, ":", fixed = TRUE)

[ FAIL 3 | WARN 15 | SKIP 11 | PASS 44 ]
Error: ! Test failures.
Execution halted
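model_info.xgb.Booster() assumes `model$params$objective` is a character string; on the new booster it comes back NULL, so `strsplit()` aborts with "non-character argument". A defensive sketch of that lookup (the helper name and the regression default are hypothetical, not DALEXtra's fix):

    # Sketch: derive the task type from a booster's objective, tolerating
    # boosters that no longer expose $params$objective.
    task_from_objective <- function(model, default = "regression") {
      obj <- tryCatch(model$params$objective, error = function(e) NULL)
      if (!is.character(obj) || length(obj) != 1L) {
        return(default)  # hypothetical fallback instead of strsplit(NULL, ...)
      }
      strsplit(obj, ":", fixed = TRUE)[[1]][1]  # "binary" from "binary:logistic"
    }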
Package: dblr
Check: examples
New result: ERROR
Running examples in ‘dblr-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: dblr_train
> ### Title: Discrete Boosting Logistic Regression Training
> ### Aliases: dblr_train
>
> ### ** Examples
>
> # use iris data for example
> dat <- iris
> # create two categorical variables
> dat$Petal.Width <- as.factor((iris$Petal.Width<=0.2)*1+(iris$Petal.Width>1.0)*2)
> dat$Sepal.Length <- (iris$Sepal.Length<=3.0)*2+(iris$Sepal.Length>6.0)*1.25
> # create the response variable
> dat$Species <- as.numeric(dat$Species=='versicolor')
> set.seed(123)
> # random sampling
> index <- sample(1:150,100,replace = FALSE)
> # train the dblr model using the training data
> dblr_fit <- dblr_train(train_x=dat[index,c(1:4)],
+   train_y=dat[index,5],category_cols = c('Petal.Width','Sepal.Length'),
+   metric = 'logloss',subsample = 0.5,eta = 0.05,colsample = 1.0,
+   lambda = 1.0,cv_early_stops = 10,verbose=FALSE)
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), :
  Passed unrecognized parameters: metrics. This warning will become an error in a future version.
Error in begin_iteration:end_iteration : argument of length 0
Calls: dblr_train ->
Execution halted

Package: ddml
Check: tests
New result: ERROR
Running ‘testthat.R’ [34s/34s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(ddml)
>
> test_check("ddml")
Saving _problems/test-ml_wrappers-35.R
sample fold 1/3 sample fold 2/3 sample fold 3/3
[ FAIL 1 | WARN 6 | SKIP 0 | PASS 134 ]

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-ml_wrappers.R:35:3'): mdl_xgboost is working ───────────────────
Error in `process.y.margin.and.objective(as.vector(y), base_margin, objective, params)`: Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic
Backtrace:
    ▆
 1. └─ddml::mdl_xgboost(y, X, objective = "binary:logistic") at test-ml_wrappers.R:35:3
 2. └─xgboost::xgboost(...)
 3.   └─xgboost:::process.y.margin.and.objective(...)
 4.     └─xgboost:::process.y.margin.and.objective(...)

[ FAIL 1 | WARN 6 | SKIP 0 | PASS 134 ]
Error: ! Test failures.
Execution halted
Package: E2E
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
...
--- re-building ‘advanced-features.Rmd’ using rmarkdown

Quitting from advanced-features.Rmd:102-113 [unnamed-chunk-5]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `bagging_dia()`:
! No base models were successfully trained or made valid predictions. Cannot perform bagging.
---
Backtrace:
    ▆
 1. └─E2E::bagging_dia(...)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'advanced-features.Rmd' failed with diagnostics:
No base models were successfully trained or made valid predictions. Cannot perform bagging.
--- failed re-building ‘advanced-features.Rmd’

--- re-building ‘diagnostic-workflow.Rmd’ using rmarkdown

Quitting from diagnostic-workflow.Rmd:80-85 [unnamed-chunk-6]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `bagging_dia()`:
! No base models were successfully trained or made valid predictions. Cannot perform bagging.
---
Backtrace:
    ▆
 1. └─E2E::bagging_dia(train_dia, base_model_name = "xb", n_estimators = 5)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'diagnostic-workflow.Rmd' failed with diagnostics:
No base models were successfully trained or made valid predictions. Cannot perform bagging.
--- failed re-building ‘diagnostic-workflow.Rmd’

--- re-building ‘getting-started.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘getting-started.Rmd’
--- re-building ‘integrated-pipeline.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘integrated-pipeline.Rmd’
--- re-building ‘parameter-reference.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘parameter-reference.Rmd’
--- re-building ‘prognostic-workflow.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘prognostic-workflow.Rmd’

SUMMARY: processing the following files failed:
‘advanced-features.Rmd’ ‘diagnostic-workflow.Rmd’

Error: Vignette re-building failed.
Execution halted

Package: EIX
Check: whether package can be installed
New result: ERROR
Installation failed.

Package: explore
Check: examples
New result: ERROR
Running examples in ‘explore-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: explain_xgboost
> ### Title: Explain a binary target using xgboost
> ### Aliases: explain_xgboost
>
> ### ** Examples
>
> data <- use_data_iris()
> data$is_versicolor <- ifelse(data$Species == "versicolor", 1, 0)
> data$Species <- NULL
> explain_xgboost(data, target = is_versicolor, log = FALSE)
Warning in check.deprecation(deprecated_cv_params, match.call(), ...) :
  Passed invalid function arguments: eval_metric, nthread. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version.
Warning in check.custom.obj(params, objective) :
  Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version.
Error in all_nrounds[k] <- cv$best_iteration : replacement has length zero
Calls: explain_xgboost
Execution halted

Package: explore
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
...
--- re-building ‘abtest.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘abtest.Rmd’
--- re-building ‘clean-drop.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘clean-drop.Rmd’
--- re-building ‘data.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘data.Rmd’
--- re-building ‘describe.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘describe.Rmd’
--- re-building ‘explain.Rmd’ using rmarkdown

Quitting from explain.Rmd:65-69 [unnamed-chunk-6]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `all_nrounds[k] <- cv$best_iteration`:
! replacement has length zero
---
Backtrace:
    ▆
 1. └─explore::explain_xgboost(data %>% drop_var_not_numeric(), target = buy)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'explain.Rmd' failed with diagnostics:
replacement has length zero
--- failed re-building ‘explain.Rmd’

--- re-building ‘explore-mtcars.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘explore-mtcars.Rmd’
--- re-building ‘explore-penguins.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘explore-penguins.Rmd’
--- re-building ‘explore-titanic.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘explore-titanic.Rmd’
--- re-building ‘explore.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘explore.Rmd’
--- re-building ‘predict.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘predict.Rmd’
--- re-building ‘report-target.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘report-target.Rmd’
--- re-building ‘report-targetpct.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘report-targetpct.Rmd’
--- re-building ‘report.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘report.Rmd’
--- re-building ‘tips-tricks.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘tips-tricks.Rmd’

SUMMARY: processing the following file failed:
‘explain.Rmd’

Error: Vignette re-building failed.
Execution halted
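explain_xgboost() reads `cv$best_iteration` after `xgb.cv()`, and the warnings show that `eval_metric`/`nthread` now belong inside 'params'. The "replacement has length zero" error suggests `best_iteration` is unset unless early stopping is active. A sketch of an `xgb.cv()` call under those constraints (illustrative, not explore's code; the early-stopping requirement is an assumption drawn from the error):

    library(xgboost)

    X <- as.matrix(mtcars[, c("cyl", "disp", "hp", "wt")])
    dtrain <- xgb.DMatrix(X, label = mtcars$am)

    cv <- xgb.cv(
      params = list(objective = "binary:logistic",
                    eval_metric = "logloss",  # inside params, not top-level
                    nthread = 1),
      data = dtrain,
      nrounds = 50,
      nfold = 3,
      early_stopping_rounds = 5,  # without this, best_iteration may be NULL
      verbose = FALSE
    )
    best_nrounds <- cv$best_iteration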
Returning 'mockdata/lasso_model.rds' > test-plot_frm.R: 2025-12-02 00:25:25.31 Starting model Adjustment > test-plot_frm.R: 2025-12-02 00:25:25.31 dim(original_data): 442 x 126 > test-adjust_frm.R: 2025-12-02 00:25:25.69 Starting model Adjustment > test-adjust_frm.R: 2025-12-02 00:25:25.69 dim(original_data): 442 x 126 > test-adjust_frm.R: 2025-12-02 00:25:25.69 dim(new_data): 25 x 3 > test-adjust_frm.R: 2025-12-02 00:25:25.69 predictors: 1, 2 > test-adjust_frm.R: 2025-12-02 00:25:25.69 nfolds: 5 > test-adjust_frm.R: 2025-12-02 00:25:25.69 Preprocessing data > test-adjust_frm.R: 2025-12-02 00:25:25.70 Formula: RT_ADJ ~ RT + I(RT^2) > test-adjust_frm.R: 2025-12-02 00:25:25.70 Estimating performance of adjusted model in CV > test-adjust_frm.R: 2025-12-02 00:25:25.77 Fitting adjustment model on full new data set > test-adjust_frm.R: 2025-12-02 00:25:25.78 Returning adjusted frm object > test-adjust_frm.R: 2025-12-02 00:25:25.78 Starting model Adjustment > test-adjust_frm.R: 2025-12-02 00:25:25.78 dim(original_data): 442 x 126 > test-adjust_frm.R: 2025-12-02 00:25:25.78 dim(new_data): 25 x 3 > test-adjust_frm.R: 2025-12-02 00:25:25.78 predictors: 1, 2, 3, 4, 5, 6 > test-adjust_frm.R: 2025-12-02 00:25:25.78 nfolds: 5 > test-adjust_frm.R: 2025-12-02 00:25:25.78 Preprocessing data > test-adjust_frm.R: 2025-12-02 00:25:25.78 Formula: RT_ADJ ~ RT + I(RT^2) + I(RT^3) + log(RT) + exp(RT) + sqrt(RT) > test-adjust_frm.R: 2025-12-02 00:25:25.79 Estimating performance of adjusted model in CV > test-adjust_frm.R: 2025-12-02 00:25:25.84 Fitting adjustment model on full new data set > test-adjust_frm.R: 2025-12-02 00:25:25.85 Returning adjusted frm object > test-selective_measuring.R: 2025-12-02 00:25:26.26 Starting Selective Measuring > test-selective_measuring.R: 2025-12-02 00:25:26.26 Preprocessing input data > test-selective_measuring.R: 2025-12-02 00:25:26.27 Mocking is enabled for 'preprocess_data'. Returning 'mockdata/RPCD_prepro.rds'. > test-selective_measuring.R: 2025-12-02 00:25:26.27 Standardizing features > test-selective_measuring.R: 2025-12-02 00:25:26.28 Training Ridge Regression model > test-selective_measuring.R: 2025-12-02 00:25:26.28 Fitting Ridge model > test-plot_frm.R: 2025-12-02 00:25:25.31 dim(new_data): 25 x 3 > test-plot_frm.R: 2025-12-02 00:25:26.58 predictors: 1, 2, 3, 4, 5, 6 > test-plot_frm.R: 2025-12-02 00:25:26.58 nfolds: 5 > test-plot_frm.R: 2025-12-02 00:25:26.58 Preprocessing data > test-plot_frm.R: 2025-12-02 00:25:26.59 Formula: RT_ADJ ~ RT + I(RT^2) + I(RT^3) + log(RT) + exp(RT) + sqrt(RT) > test-plot_frm.R: 2025-12-02 00:25:26.59 Estimating performance of adjusted model in CV > test-selective_measuring.R: 2025-12-02 00:25:26.65 End training > test-selective_measuring.R: 2025-12-02 00:25:26.65 Scaling features by coefficients of Ridge Regression model > test-plot_frm.R: 2025-12-02 00:25:26.66 Fitting adjustment model on full new data set > test-plot_frm.R: 2025-12-02 00:25:26.66 Returning adjusted frm object > test-selective_measuring.R: 2025-12-02 00:25:26.67 Applying PAM clustering > test-selective_measuring.R: 2025-12-02 00:25:27.17 Returning clustering results [ FAIL 3 | WARN 5 | SKIP 0 | PASS 19 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-train_frm-gbtree.R:5:5'): train_frm works if `method == "GBTree"` ── Error in `FUN(X[[i]], ...)`: subscript out of bounds Backtrace: ▆ 1. └─FastRet::train_frm(...) at test-train_frm-gbtree.R:5:5 2. 
└─base::lapply(tmp, "[[", 2) ── Error ('test-fit_gbtree.R:8:5'): fit.gbtrees works as expected ────────────── Error in `begin_iteration:end_iteration`: argument of length 0 Backtrace: ▆ 1. └─FastRet:::fit_gbtree(df, verbose = 0) at test-fit_gbtree.R:8:5 2. └─FastRet:::fit_gbtree_grid(...) 3. └─xgboost::xgb.train(...) ── Error ('test-fit_gbtree.R:16:5'): fit.gbtrees works for data from reverse phase column ── Error in `begin_iteration:end_iteration`: argument of length 0 Backtrace: ▆ 1. └─FastRet:::fit_gbtree(df, verbose = 0) at test-fit_gbtree.R:16:5 2. └─FastRet:::fit_gbtree_grid(...) 3. └─xgboost::xgb.train(...) [ FAIL 3 | WARN 5 | SKIP 0 | PASS 19 ] Error: ! Test failures. Execution halted Package: forecastML Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘combine_forecasts.Rmd’ using rmarkdown [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘combine_forecasts.Rmd’ --- re-building ‘custom_functions.Rmd’ using rmarkdown [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘custom_functions.Rmd’ --- re-building ‘grouped_forecast.Rmd’ using rmarkdown [00:35:37] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:35:41] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:35:45] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:35:50] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:35:53] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:35:56] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:36:00] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:36:01] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. [00:36:02] WARNING: src/objective/regression_obj.cu:282: reg:linear is now deprecated in favor of reg:squarederror. Quitting from grouped_forecast.Rmd:320-322 [unnamed-chunk-14] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `array()`: ! length of 'dimnames' [1] not equal to array extent --- Backtrace: ▆ 1. ├─base::summary(model_results_cv$horizon_1$window_1$model) 2. └─base::summary.default(model_results_cv$horizon_1$window_1$model) 3. └─base::array(...) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'grouped_forecast.Rmd' failed with diagnostics: length of 'dimnames' [1] not equal to array extent --- failed re-building ‘grouped_forecast.Rmd’ --- re-building ‘lagged_features.Rmd’ using rmarkdown [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘lagged_features.Rmd’ --- re-building ‘package_overview.Rmd’ using rmarkdown [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘package_overview.Rmd’ SUMMARY: processing the following file failed: ‘grouped_forecast.Rmd’ Error: Vignette re-building failed. 
Execution halted Package: GeneralisedCovarianceMeasure Check: R code for possible problems New result: NOTE train.xgboost: no visible global function definition for ‘cb.evaluation.log’ Undefined global functions or variables: cb.evaluation.log Package: GPCERF Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘A-Note-on-Choosing-Hyperparameters.Rmd’ using rmarkdown [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘A-Note-on-Choosing-Hyperparameters.Rmd’ --- re-building ‘Developers-Guide.Rmd’ using rmarkdown [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘Developers-Guide.Rmd’ --- re-building ‘GPCERF.Rmd’ using rmarkdown 2025-12-02 00:27:27.423109 anduin2 1085872 GPCERF estimate_gps INFO: Started estimating GPS values ... Quitting from GPCERF.Rmd:38-45 [unnamed-chunk-4] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `SuperLearner::SuperLearner()`: ! All algorithms dropped from library --- Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) 2. └─SuperLearner::SuperLearner(...) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'GPCERF.Rmd' failed with diagnostics: All algorithms dropped from library --- failed re-building ‘GPCERF.Rmd’ --- re-building ‘Nearest-neighbor-Gaussian-Processes.Rmd’ using rmarkdown 2025-12-02 00:27:34.973179 anduin2 1086318 GPCERF estimate_gps INFO: Started estimating GPS values ... Quitting from Nearest-neighbor-Gaussian-Processes.Rmd:83-92 [unnamed-chunk-4] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `SuperLearner::SuperLearner()`: ! All algorithms dropped from library --- Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) 2. └─SuperLearner::SuperLearner(...) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'Nearest-neighbor-Gaussian-Processes.Rmd' failed with diagnostics: All algorithms dropped from library --- failed re-building ‘Nearest-neighbor-Gaussian-Processes.Rmd’ --- re-building ‘Standard-Gaussian-Processes.Rmd’ using rmarkdown 2025-12-02 00:27:39.748312 anduin2 1087237 GPCERF estimate_gps INFO: Started estimating GPS values ... Quitting from Standard-Gaussian-Processes.Rmd:83-93 [unnamed-chunk-3] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `SuperLearner::SuperLearner()`: ! All algorithms dropped from library --- Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) 2. └─SuperLearner::SuperLearner(...) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'Standard-Gaussian-Processes.Rmd' failed with diagnostics: All algorithms dropped from library --- failed re-building ‘Standard-Gaussian-Processes.Rmd’ SUMMARY: processing the following files failed: ‘GPCERF.Rmd’ ‘Nearest-neighbor-Gaussian-Processes.Rmd’ ‘Standard-Gaussian-Processes.Rmd’ Error: Vignette re-building failed. Execution halted Package: GPCERF Check: tests New result: ERROR Running ‘testthat.R’ [23s/22s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(testthat) > library(GPCERF) > > test_check("GPCERF") 2025-12-02 00:27:01.411832 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Loading required package: nnls Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_deriv_nn-8.R 2025-12-02 00:27:02.874024 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_deriv_weights_gp-8.R 2025-12-02 00:27:03.802651 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_m_sigma-12.R 2025-12-02 00:27:04.972649 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_posterior_m_nn-10.R 2025-12-02 00:27:06.086513 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_posterior_sd_nn-10.R 2025-12-02 00:27:06.952101 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_rl_deriv_gp-8.R 2025-12-02 00:27:07.905843 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_rl_deriv_nn-8.R 2025-12-02 00:27:08.960673 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-compute_sd_gp-17.R Saving _problems/test-compute_weight_gp-49.R 2025-12-02 00:27:10.705621 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-estimate_cerf_gp-10.R 2025-12-02 00:27:12.014942 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-estimate_cerf_nngp-10.R 2025-12-02 00:27:13.146551 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-estimate_gps-10.R 2025-12-02 00:27:14.338053 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-estimate_mean_sd_nn-10.R 2025-12-02 00:27:15.591994 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-estimate_noise_gp-11.R 2025-12-02 00:27:16.698685 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-estimate_noise_nn-10.R 2025-12-02 00:27:17.845355 anduin2 1081295 GPCERF estimate_gps INFO: Started estimating GPS values ... 
Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default Saving _problems/test-find_optimal_nn-9.R [ FAIL 16 | WARN 752 | SKIP 0 | PASS 45 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-compute_deriv_nn.R:5:3'): compute_deriv_nn works as expected! ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_deriv_nn.R:5:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-compute_deriv_weights_gp.R:5:3'): compute_deriv_weights_gp works as expected! ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_deriv_weights_gp.R:5:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-compute_m_sigma.R:9:4'): compute_m_sigma works as expected! ──── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_m_sigma.R:9:4 2. └─SuperLearner::SuperLearner(...) ── Error ('test-compute_posterior_m_nn.R:7:3'): compute_posterior_m_nn works as expected. ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_posterior_m_nn.R:7:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-compute_posterior_sd_nn.R:7:3'): compute_posterior_sd_nn works as expected. ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_posterior_sd_nn.R:7:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-compute_rl_deriv_gp.R:5:3'): compute_rl_deriv_gp works as expected! ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_rl_deriv_gp.R:5:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-compute_rl_deriv_nn.R:5:3'): compute_rl_deriv_nn works as expected! 
── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_rl_deriv_nn.R:5:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-compute_sd_gp.R:14:4'): compute_sd_gp works as expected. ─────── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-compute_sd_gp.R:14:4 2. └─SuperLearner::SuperLearner(...) ── Failure ('test-compute_weight_gp.R:49:3'): multiplication works ───────────── Expected `weight$weight[28]` to equal 0.0002182767. Differences: `actual`: 0.00000 `expected`: 0.00022 ── Error ('test-estimate_cerf_gp.R:7:3'): estimate_cerf_gp works as expected! ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-estimate_cerf_gp.R:7:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-estimate_cerf_nngp.R:7:3'): estimate_cerf_nngp works as expected! ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-estimate_cerf_nngp.R:7:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-estimate_gps.R:7:3'): estimate_gps works as expected. ────────── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-estimate_gps.R:7:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-estimate_mean_sd_nn.R:7:3'): estimate_mean_sd_nn works as expected! ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-estimate_mean_sd_nn.R:7:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-estimate_noise_gp.R:8:3'): estimate_noise_gp works as expected ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-estimate_noise_gp.R:8:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-estimate_noise_nn.R:7:3'): estimate_noise_nn works as expected! ── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-estimate_noise_nn.R:7:3 2. └─SuperLearner::SuperLearner(...) ── Error ('test-find_optimal_nn.R:6:3'): find_optimal_nn works as expected! ──── Error in `SuperLearner::SuperLearner(Y = w_all, X = as.data.frame(cov_mt), SL.library = sl_lib)`: All algorithms dropped from library Backtrace: ▆ 1. └─GPCERF::estimate_gps(...) at test-find_optimal_nn.R:6:3 2. └─SuperLearner::SuperLearner(...) [ FAIL 16 | WARN 752 | SKIP 0 | PASS 45 ] Error: ! Test failures. 
Execution halted Package: IBLM Check: examples New result: ERROR Running examples in ‘IBLM-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: beta_corrected_density > ### Title: Density Plot of Beta Corrections for a Variable > ### Aliases: beta_corrected_density > > ### ** Examples > > # This function is created inside explain_iblm() and is output as an item > > df_list <- freMTPLmini |> split_into_train_validate_test(seed = 9000) > > iblm_model <- train_iblm_xgb( + df_list, + response_var = "ClaimRate", + family = "poisson" + ) Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'watchlist' has been renamed to 'evals'. This warning will become an error in a future version. > > explain_objects <- explain_iblm(iblm_model, df_list$test) Error in xgboost::xgb.DMatrix(data.matrix(data)) : [00:25:24] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f8a560798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f8a5627888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f8a56531c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f8a5606f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f8a6e703f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f8a6e740421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f8a6e75044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f8a6e7507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f8a6e75275e] Calls: explain_iblm ... data.frame -> -> predict.xgb.Booster -> Execution halted Package: IBLM Check: tests New result: ERROR Running ‘testthat.R’ [10s/8s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview > # * https://testthat.r-lib.org/articles/special-files.html > > library(testthat) > library(IBLM) > > test_check("IBLM") Saving _problems/test-explain_iblm-222.R Saving _problems/test-explain_iblm-247.R Saving _problems/test-explain_iblm-272.R Saving _problems/test-explain_iblm-300.R Saving _problems/test-explain_iblm-328.R Saving _problems/test-explain_iblm-361.R Saving _problems/test-explain_iblm-421.R Saving _problems/test-explain_iblm-454.R Saving _problems/test-explain_iblm-484.R Saving _problems/test-predict-32.R [ FAIL 10 | WARN 12 | SKIP 4 | PASS 2 ] ══ Skipped tests (4) ═══════════════════════════════════════════════════════════ • On CRAN (4): 'test-explain_iblm.R:4:3', 'test-get_pinball_scores.R:5:3', 'test-get_pinball_scores.R:85:3', 'test-train_iblm.R:6:3' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Failure ('test-explain_iblm.R:214:3'): test explain completes when one categorical and one continuous ── Expected `{ ... }` not to throw any errors. Actually got a with message: [00:25:28] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. 
Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Failure ('test-explain_iblm.R:239:3'): test explain completes when categorical only ── Expected `{ ... }` not to throw any errors. Actually got a with message: [00:25:29] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Failure ('test-explain_iblm.R:264:3'): test explain completes when continuous only ── Expected `{ ... }` not to throw any errors. Actually got a with message: [00:25:29] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Failure ('test-explain_iblm.R:292:3'): test explain completes when logical field ── Expected `{ ... }` not to throw any errors. Actually got a with message: [00:25:30] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. 
Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Failure ('test-explain_iblm.R:320:3'): test explain completes when no reference/zero levels ── Expected `{ ... }` not to throw any errors. Actually got a with message: [00:25:30] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Error ('test-explain_iblm.R:361:3'): test migrate-to-bias vs non-migrate-to-bias options ── Error in `xgboost::xgb.DMatrix(data.matrix(data))`: [00:25:30] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] Backtrace: ▆ 1. └─IBLM::explain_iblm(iblm_model = IBLM, data = splits$test, migrate_reference_to_bias = TRUE) at test-explain_iblm.R:361:3 2. ├─IBLM::extract_booster_shap(iblm_model$booster_model, data) 3. └─IBLM:::extract_booster_shap.xgb.Booster(...) 4. ├─base::data.frame(...) 5. ├─stats::predict(...) 6. ├─xgboost:::predict.xgb.Booster(...) 7. └─xgboost::xgb.DMatrix(data.matrix(data)) ── Failure ('test-explain_iblm.R:413:3'): test gaussian can run ──────────────── Expected `{ ... }` not to throw any errors. 
Actually got a with message: [00:25:30] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Failure ('test-explain_iblm.R:445:3'): test gamma can run ─────────────────── Expected `{ ... }` not to throw any errors. Actually got a with message: [00:25:31] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Failure ('test-explain_iblm.R:475:3'): test tweedie can run ───────────────── Expected `{ ... }` not to throw any errors. Actually got a with message: [00:25:31] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] ── Error ('test-predict.R:32:3'): test corrected beta coeffecient predictions are same as predict iblm() ── Error in `xgboost::xgb.DMatrix(data.matrix(data))`: [00:25:33] src/data/../collective/../data/array_interface.h:422: Check failed: ptr % alignment == 0 (1 vs. 0) : Input pointer misalignment. 
Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f3486a798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x27888d) [0x7f3486c7888d] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromDense+0x146) [0x7f3486f31c46] [bt] (3) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixCreateFromMat_R+0x21b) [0x7f3486a6f59b] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f349f303f6e] [bt] (5) /home/hornik/tmp/R/lib/libR.so(+0x140421) [0x7f349f340421] [bt] (6) /home/hornik/tmp/R/lib/libR.so(+0x15044b) [0x7f349f35044b] [bt] (7) /home/hornik/tmp/R/lib/libR.so(Rf_eval+0x14b) [0x7f349f3507fb] [bt] (8) /home/hornik/tmp/R/lib/libR.so(+0x15275e) [0x7f349f35275e] Backtrace: ▆ 1. └─IBLM::explain_iblm(iblm_model = IBLM, data = splits$test, migrate_reference_to_bias = TRUE) at test-predict.R:32:3 2. ├─IBLM::extract_booster_shap(iblm_model$booster_model, data) 3. └─IBLM:::extract_booster_shap.xgb.Booster(...) 4. ├─base::data.frame(...) 5. ├─stats::predict(...) 6. ├─xgboost:::predict.xgb.Booster(...) 7. └─xgboost::xgb.DMatrix(data.matrix(data)) [ FAIL 10 | WARN 12 | SKIP 4 | PASS 2 ] Error: ! Test failures. Execution halted Package: inTrees Check: examples New result: ERROR Running examples in ‘inTrees-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: XGB2List > ### Title: Transform an xgboost object to a list of trees > ### Aliases: XGB2List > ### Keywords: xgboost > > ### ** Examples > > library(data.table) > library(xgboost) > # test data set 1: iris > X <- within(iris,rm("Species")); Y <- iris[,"Species"] > X <- within(iris,rm("Species")); Y <- iris[,"Species"] > model_mat <- model.matrix(~. -1, data=X) > xgb <- xgboost(model_mat, label = as.numeric(Y) - 1, nrounds = 20, + objective = "multi:softprob", num_class = 3 ) Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: num_class. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. Error in process.y.margin.and.objective(y, base_margin, objective, params) : Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: multi:softprob Calls: xgboost -> process.y.margin.and.objective Execution halted Package: IVDML Check: tests New result: ERROR Running ‘testthat.R’ [16s/17s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? 
> # Learn more about the roles of various files in: > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview > # * https://testthat.r-lib.org/articles/special-files.html > > library(testthat) > library(IVDML) > > test_check("IVDML") Saving _problems/test-fit_IVDML-86.R Saving _problems/test-fit_IVDML-98.R Fitted IVDML object Machine learning method: gam IV methods: linearIV, mlIV Number of cross-fitting sample splits: 1[ FAIL 2 | WARN 6 | SKIP 0 | PASS 31 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Failure ('test-fit_IVDML.R:86:3'): fit_IVDML works when ml_par are set globally. ── Expected `fit_IVDML(...)` not to throw any errors. Actually got a with message: argument "y" is missing, with no default ── Failure ('test-fit_IVDML.R:98:3'): fit_IVDML works when ml_par is set differently for the different nuisance functions. ── Expected `fit_IVDML(...)` not to throw any errors. Actually got a with message: argument "y" is missing, with no default [ FAIL 2 | WARN 6 | SKIP 0 | PASS 31 ] Error: ! Test failures. Execution halted Package: ldmppr Check: examples New result: ERROR Running examples in ‘ldmppr-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: predict_marks > ### Title: Predict values from the mark distribution > ### Aliases: predict_marks > > ### ** Examples > > # Simulate a realization > generating_parameters <- c(2, 8, .02, 2.5, 3, 1, 2.5, .2) > M_n <- matrix(c(10, 14), ncol = 1) > generated_locs <- simulate_sc( + t_min = 0, + t_max = 1, + sc_params = generating_parameters, + anchor_point = M_n, + xy_bounds = c(0, 25, 0, 25) + ) > > # Load the raster files > raster_paths <- list.files(system.file("extdata", package = "ldmppr"), + pattern = "\\.tif$", full.names = TRUE + ) > rasters <- lapply(raster_paths, terra::rast) > > # Scale the rasters > scaled_raster_list <- scale_rasters(rasters) > > # Load the example mark model > file_path <- system.file("extdata", "example_mark_model.rds", package = "ldmppr") > example_mark_model <- readRDS(file_path) > > # Unbundle the model > mark_model <- bundle::unbundle(example_mark_model) Error in xgboost::xgb.load.raw(object, as_booster = TRUE) : unused argument (as_booster = TRUE) Calls: ... -> unbundle -> unbundle.bundle -> Execution halted Examples with CPU (user + system) or elapsed time > 5s user system elapsed extract_covars 6.481 0.3 6.782 Package: ldmppr Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘ldmppr_howto.Rmd’ using rmarkdown Quitting from ldmppr_howto.Rmd:100-130 [train_mark_model] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `xgboost::xgb.load.raw()`: ! unused argument (as_booster = TRUE) --- Backtrace: ▆ 1. ├─bundle::unbundle(example_mark_model$bundled_model) 2. └─bundle:::unbundle.bundle(example_mark_model$bundled_model) 3. └─x$situate(get_object(x)) 4. └─bundle::swap_element(object, "fit") 5. ├─bundle::unbundle(component) 6. └─bundle:::unbundle.bundle(component) 7. └─x$situate(get_object(x)) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'ldmppr_howto.Rmd' failed with diagnostics: unused argument (as_booster = TRUE) --- failed re-building ‘ldmppr_howto.Rmd’ SUMMARY: processing the following file failed: ‘ldmppr_howto.Rmd’ Error: Vignette re-building failed. 
Execution halted Package: lime Check: tests New result: ERROR Running ‘testthat.R’ [28s/28s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(testthat) > library(lime) > > test_check("lime") Saving _problems/test-text-28.R [ FAIL 1 | WARN 6 | SKIP 1 | PASS 21 ] ══ Skipped tests (1) ═══════════════════════════════════════════════════════════ • On CRAN (1): 'test-h2o.R:3:1' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Failure ('test-text.R:28:3'): single sentence explanation ─────────────────── Expected `explanation` to have length 13. Actual length: 11. [ FAIL 1 | WARN 6 | SKIP 1 | PASS 21 ] Error: ! Test failures. Execution halted Package: MBMethPred Check: examples New result: ERROR Running examples in ‘MBMethPred-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: ModelMetrics > ### Title: Model metrics > ### Aliases: ModelMetrics > > ### ** Examples > > xgboost <- XGBoostModel(SplitRatio = 0.2, + CV = 2, + NCores = 1, + NewData = NULL) Warning in throw_err_or_depr_msg("Parameter(s) have been removed from this function: ", : Parameter(s) have been removed from this function: params. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose, num_class. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. Error in prescreen.objective(objective) : Objectives with non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) are not supported in 'xgboost()'. Try 'xgb.train()'. Calls: XGBoostModel ... lapply -> FUN -> -> prescreen.objective Execution halted Package: MBMethPred Check: tests New result: ERROR Running ‘testthat.R’ [18s/18s] Running the tests in ‘tests/testthat.R’ failed. 
Complete output: > library(testthat) > library(MBMethPred) > > test_check("MBMethPred") y_pred y_true Group3 SHH Group4 Group3 1 0 0 SHH 1 1 0 Group4 0 0 1 y_pred y_true Group3 SHH WNT Group4 Group3 8 0 0 0 SHH 0 24 2 0 WNT 0 0 24 0 Group4 1 0 0 31 y_pred y_true Group3 SHH WNT Group4 Group3 20 0 0 1 SHH 0 20 0 0 WNT 0 0 18 0 Group4 1 0 0 32 y_pred y_true Group3 SHH WNT Group4 Group3 18 0 0 1 SHH 0 41 0 0 WNT 0 0 37 0 Group4 6 2 0 52 y_pred y_true Group3 SHH WNT Group4 Group3 26 0 1 0 SHH 0 41 0 0 WNT 0 0 38 0 Group4 1 1 0 53 y_pred y_true Group3 SHH WNT Group4 Group3 27 0 0 0 SHH 0 43 0 0 WNT 0 0 37 0 Group4 4 2 0 54 y_pred y_true Group3 SHH WNT Group4 Group3 23 0 0 3 SHH 0 42 0 0 WNT 0 0 35 0 Group4 6 1 0 55 y_pred y_true Group3 SHH WNT Group4 Group3 26 0 0 1 SHH 0 38 1 0 WNT 0 0 39 0 Group4 5 0 0 52 y_pred y_true Group3 SHH WNT Group4 Group3 26 0 0 2 SHH 0 41 0 0 WNT 0 0 39 0 Group4 0 0 0 57 y_pred y_true Group3 SHH WNT Group4 Group3 27 0 0 2 SHH 0 41 0 0 WNT 0 0 39 0 Group4 3 1 1 53 y_pred y_true Group3 SHH WNT Group4 Group3 21 0 0 3 SHH 0 42 0 0 WNT 0 0 37 0 Group4 6 0 0 54 y_pred y_true Group3 SHH WNT Group4 Group3 20 0 1 5 SHH 0 42 0 0 WNT 0 0 37 0 Group4 6 1 0 46 y_pred y_true Group3 SHH WNT Group4 Group3 27 0 0 1 SHH 0 41 1 0 WNT 0 0 40 0 Group4 6 0 0 57 Saving _problems/test-ModelMetrics-2.R y_pred y_true Group3 SHH WNT Group4 Group3 13 0 0 0 SHH 0 27 0 0 WNT 0 0 22 1 Group4 1 0 0 24 y_pred y_true Group3 SHH WNT Group4 Group3 15 0 0 1 SHH 0 19 0 0 WNT 0 0 19 0 Group4 4 0 0 36 Saving _problems/test-NewDataPredictionResult-10.R y_pred y_true Group3 SHH WNT Group4 Group3 15 0 0 2 SHH 0 18 0 0 WNT 0 0 18 0 Group4 0 0 0 34 y_pred y_true Group3 SHH WNT Group4 Group3 11 0 0 1 SHH 0 27 0 1 WNT 0 0 24 0 Group4 0 0 0 31 The NMI score is: 0.903790812437385 y_pred y_true Group3 SHH WNT Group4 Group3 12 0 0 1 SHH 0 26 0 0 WNT 1 0 20 0 Group4 0 0 0 31 y_pred y_true Group3 SHH WNT Group4 Group3 15 0 0 1 SHH 0 20 0 0 WNT 0 0 21 0 Group4 2 0 0 32 Saving _problems/test-XGBoostModel-2.R [ FAIL 3 | WARN 12 | SKIP 0 | PASS 9 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-ModelMetrics.r:2:3'): ModelMetrics returns correct class. ────── Error in `prescreen.objective(objective)`: Objectives with non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) are not supported in 'xgboost()'. Try 'xgb.train()'. Backtrace: ▆ 1. └─MBMethPred::XGBoostModel(...) at test-ModelMetrics.r:2:3 2. └─parallel::mclapply(...) 3. └─base::lapply(X = X, FUN = FUN, ...) 4. └─MBMethPred (local) FUN(X[[i]], ...) 5. └─xgboost::xgboost(...) 6. └─xgboost:::prescreen.objective(objective) ── Error ('test-NewDataPredictionResult.R:10:3'): NewDataPredictionResult returns correct type ── Error in `prescreen.objective(objective)`: Objectives with non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) are not supported in 'xgboost()'. Try 'xgb.train()'. Backtrace: ▆ 1. └─MBMethPred::XGBoostModel(...) at test-NewDataPredictionResult.R:10:3 2. └─parallel::mclapply(...) 3. └─base::lapply(X = X, FUN = FUN, ...) 4. └─MBMethPred (local) FUN(X[[i]], ...) 5. └─xgboost::xgboost(...) 6. └─xgboost:::prescreen.objective(objective) ── Error ('test-XGBoostModel.R:2:3'): XGBoostModel returns correct class. ────── Error in `prescreen.objective(objective)`: Objectives with non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) are not supported in 'xgboost()'. Try 'xgb.train()'. Backtrace: ▆ 1. ├─testthat::expect_type(...) 
at test-XGBoostModel.R:2:3 2. │ └─testthat::quasi_label(enquo(object)) 3. │ └─rlang::eval_bare(expr, quo_get_env(quo)) 4. └─MBMethPred::XGBoostModel(...) 5. └─parallel::mclapply(...) 6. └─base::lapply(X = X, FUN = FUN, ...) 7. └─MBMethPred (local) FUN(X[[i]], ...) 8. └─xgboost::xgboost(...) 9. └─xgboost:::prescreen.objective(objective) [ FAIL 3 | WARN 12 | SKIP 0 | PASS 9 ] Error: ! Test failures. Execution halted Package: MIC Check: dependencies in R code New result: WARNING Missing or unexported object: ‘xgboost::slice’ Package: MIC Check: examples New result: ERROR Running examples in ‘MIC-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: xgb.cv.lowmem > ### Title: Low memory cross-validation wrapper for XGBoost > ### Aliases: xgb.cv.lowmem > > ### ** Examples > > train <- list(data = matrix(rnorm(20), ncol = 2), + label = rbinom(10, 1, 0.5)) > dtrain <- xgboost::xgb.DMatrix(train$data, label = train$label, nthread = 1) > cv <- xgb.cv.lowmem(data = dtrain, + params = list(objective = "binary:logistic"), + nrounds = 2, + nfold = 3, + prediction = TRUE, + nthread = 1) Warning: `xgb.cv.lowmem()` was deprecated in MIC 1.2.0. ℹ Please use `faLearn::xgb.cv.lowmem()` instead. ℹ This function has been moved to the faLearn package. Error: 'slice' is not an exported object from 'namespace:xgboost' Execution halted Package: MIC Check: tests New result: ERROR Running ‘testthat.R’ [86s/86s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > > library(testthat) > library(MIC) Attaching package: 'MIC' The following object is masked from 'package:base': table > > test_check("MIC") Saving _problems/test-cv-148.R [ FAIL 1 | WARN 420 | SKIP 4 | PASS 231 ] ══ Skipped tests (4) ═══════════════════════════════════════════════════════════ • On CRAN (4): 'test-cv.R:4:3', 'test-patric.R:17:3', 'test-patric.R:32:3', 'test-patric.R:39:3' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-cv.R:148:3'): iris example ───────────────────────────────────── Error in `apply(model$pred, 1, which.max)`: dim(X) must have a positive length Backtrace: ▆ 1. └─base::apply(model$pred, 1, which.max) at test-cv.R:148:3 [ FAIL 1 | WARN 420 | SKIP 4 | PASS 231 ] Error: ! Test failures. Execution halted Package: mixgb Check: examples New result: ERROR Running examples in ‘mixgb-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: mixgb_cv > ### Title: Use cross-validation to find the optimal 'nrounds' > ### Aliases: mixgb_cv > > ### ** Examples > > params <- list(max_depth = 3, subsample = 0.7, nthread = 2) > cv.results <- mixgb_cv(data = nhanes3, xgb.params = params) Warning in throw_err_or_depr_msg("Parameter(s) have been removed from this function: ", : Parameter(s) have been removed from this function: label. This warning will become an error in a future version. 
Error in xgb.cv(data = obs.data, params = xgb.params, label = obs.y, objective = obj.type, : inherits(data, "xgb.DMatrix") is not TRUE Calls: mixgb_cv -> xgb.cv -> stopifnot Execution halted Package: mixgb Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘Imputing-newdata.Rmd’ using rmarkdown [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘Imputing-newdata.Rmd’ --- re-building ‘Using-mixgb.Rmd’ using rmarkdown Quitting from Using-mixgb.Rmd:114-120 [unnamed-chunk-5] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `xgb.cv()`: ! inherits(data, "xgb.DMatrix") is not TRUE --- Backtrace: ▆ 1. └─mixgb::mixgb_cv(...) 2. └─xgboost::xgb.cv(...) 3. └─base::stopifnot(inherits(data, "xgb.DMatrix")) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'Using-mixgb.Rmd' failed with diagnostics: inherits(data, "xgb.DMatrix") is not TRUE --- failed re-building ‘Using-mixgb.Rmd’ SUMMARY: processing the following file failed: ‘Using-mixgb.Rmd’ Error: Vignette re-building failed. Execution halted Package: mllrnrs Check: tests New result: ERROR Running ‘testthat.R’ [43s/99s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > # https://github.com/Rdatatable/data.table/issues/5658 > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mllrnrs) > > test_check("mllrnrs") CV fold: Fold1 CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.824 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 7.199 seconds 3) Running FUN 2 times in 2 thread(s)... 0.348 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.906 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 6.172 seconds 3) Running FUN 2 times in 2 thread(s)... 0.369 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.825 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 10.453 seconds 3) Running FUN 2 times in 2 thread(s)... 0.379 seconds CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold2 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. 
Classification: using 'mean classification error' as optimization metric. CV fold: Fold3 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold1 Saving _problems/test-binary-356.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold2 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold3 Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. Classification: using 'mean classification error' as optimization metric. CV fold: Fold1 Saving _problems/test-multiclass-294.R CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 4.509 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.964 seconds 3) Running FUN 2 times in 2 thread(s)... 0.324 seconds CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 4.338 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.385 seconds 3) Running FUN 2 times in 2 thread(s)... 0.674 seconds CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 5 times in 2 thread(s)... 4.271 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.308 seconds 3) Running FUN 2 times in 2 thread(s)... 0.454 seconds CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 
5.474 seconds subsample colsample_bytree min_child_weight learning_rate max_depth 1: 1.0 0.8 1 0.1 5 2: 0.8 1.0 1 0.2 5 3: 1.0 1.0 5 0.2 5 4: 0.6 0.8 1 0.1 5 5: 0.6 0.8 5 0.2 5 6: 0.8 0.8 5 0.2 5 7: 0.8 0.8 1 0.1 1 8: 0.6 0.6 1 0.2 5 9: 0.6 1.0 1 0.1 1 10: 0.6 0.8 1 0.2 5 errorMessage 1: FUN returned these elements with length > 1: Score,metric_optim_mean 2: FUN returned these elements with length > 1: Score,metric_optim_mean 3: FUN returned these elements with length > 1: Score,metric_optim_mean 4: FUN returned these elements with length > 1: Score,metric_optim_mean 5: FUN returned these elements with length > 1: Score,metric_optim_mean 6: FUN returned these elements with length > 1: Score,metric_optim_mean 7: FUN returned these elements with length > 1: Score,metric_optim_mean 8: FUN returned these elements with length > 1: Score,metric_optim_mean 9: FUN returned these elements with length > 1: Score,metric_optim_mean 10: FUN returned these elements with length > 1: Score,metric_optim_mean Saving _problems/test-regression-309.R CV fold: Fold1 Saving _problems/test-regression-352.R [ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ] ══ Skipped tests (3) ═══════════════════════════════════════════════════════════ • On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5', 'test-multiclass.R:57:5' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-binary.R:356:5'): test nested cv, grid, binary:logistic - xgboost ── Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE Backtrace: ▆ 1. └─xgboost_optimizer$execute() at test-binary.R:356:5 2. └─mlexperiments:::.run_cv(self = self, private = private) 3. └─mlexperiments:::.fold_looper(self, private) 4. ├─base::do.call(private$cv_run_model, run_args) 5. └─mlexperiments (local) ``(train_index = ``, fold_train = ``, fold_test = ``) 6. ├─base::do.call(.cv_run_nested_model, args) 7. └─mlexperiments (local) ``(...) 8. └─hparam_tuner$execute(k = self$k_tuning) 9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer) 10. └─mlexperiments:::.run_optimizer(...) 11. └─mlexperiments:::.optimize_postprocessing(...) 12. └─mlexperiments:::.get_best_setting(...) 13. └─base::stopifnot(nrow(best_row) == 1) ── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ── Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE Backtrace: ▆ 1. └─xgboost_optimizer$execute() at test-multiclass.R:294:5 2. └─mlexperiments:::.run_cv(self = self, private = private) 3. └─mlexperiments:::.fold_looper(self, private) 4. ├─base::do.call(private$cv_run_model, run_args) 5. └─mlexperiments (local) ``(train_index = ``, fold_train = ``, fold_test = ``) 6. ├─base::do.call(.cv_run_nested_model, args) 7. └─mlexperiments (local) ``(...) 8. └─hparam_tuner$execute(k = self$k_tuning) 9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer) 10. └─mlexperiments:::.run_optimizer(...) 11. └─mlexperiments:::.optimize_postprocessing(...) 12. └─mlexperiments:::.get_best_setting(...) 13. 
└─base::stopifnot(nrow(best_row) == 1) ── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ── Error in `(function (FUN, bounds, saveFile = NULL, initGrid, initPoints = 4, iters.n = 3, iters.k = 1, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 2.576, eps = 0, parallel = FALSE, gsPoints = pmax(100, length(bounds)^3), convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1, ...) { startT <- Sys.time() optObj <- list() class(optObj) <- "bayesOpt" optObj$FUN <- FUN optObj$bounds <- bounds optObj$iters <- 0 optObj$initPars <- list() optObj$optPars <- list() optObj$GauProList <- list() optObj <- changeSaveFile(optObj, saveFile) checkParameters(bounds, iters.n, iters.k, otherHalting, acq, acqThresh, errorHandling, plotProgress, parallel, verbose) boundsDT <- boundsToDT(bounds) otherHalting <- formatOtherHalting(otherHalting) if (missing(initGrid) + missing(initPoints) != 1) stop("Please provide 1 of initGrid or initPoints, but not both.") if (!missing(initGrid)) { setDT(initGrid) inBounds <- checkBounds(initGrid, bounds) inBounds <- as.logical(apply(inBounds, 1, prod)) if (any(!inBounds)) stop("initGrid not within bounds.") optObj$initPars$initialSample <- "User Provided Grid" initPoints <- nrow(initGrid) } else { initGrid <- randParams(boundsDT, initPoints) optObj$initPars$initialSample <- "Latin Hypercube Sampling" } optObj$initPars$initGrid <- initGrid if (nrow(initGrid) <= 2) stop("Cannot initialize with less than 3 samples.") optObj$initPars$initPoints <- nrow(initGrid) if (initPoints <= length(bounds)) stop("initPoints must be greater than the number of FUN inputs.") sinkFile <- file() on.exit({ while (sink.number() > 0) sink() close(sinkFile) }) `%op%` <- ParMethod(parallel) if (parallel) Workers <- getDoParWorkers() else Workers <- 1 if (verbose > 0) cat("\nRunning initial scoring function", nrow(initGrid), "times in", Workers, "thread(s)...") sink(file = sinkFile) tm <- system.time(scoreSummary <- foreach(iter = 1:nrow(initGrid), .options.multicore = list(preschedule = FALSE), .combine = list, .multicombine = TRUE, .inorder = FALSE, .errorhandling = "pass", .verbose = FALSE) %op% { Params <- initGrid[get("iter"), ] Elapsed <- system.time(Result <- tryCatch({ do.call(what = FUN, args = as.list(Params)) }, error = function(e) e)) if (any(class(Result) %in% c("simpleError", "error", "condition"))) return(Result) if (!inherits(x = Result, what = "list")) stop("Object returned from FUN was not a list.") resLengths <- lengths(Result) if (!any(names(Result) == "Score")) stop("FUN must return list with element 'Score' at a minimum.") if (!is.numeric(Result$Score)) stop("Score returned from FUN was not numeric.") if (any(resLengths != 1)) { badReturns <- names(Result)[which(resLengths != 1)] stop("FUN returned these elements with length > 1: ", paste(badReturns, collapse = ",")) } data.table(Params, Elapsed = Elapsed[[3]], as.data.table(Result)) })[[3]] while (sink.number() > 0) sink() if (verbose > 0) cat(" ", tm, "seconds\n") se <- which(sapply(scoreSummary, function(cl) any(class(cl) %in% c("simpleError", "error", "condition")))) if (length(se) > 0) { print(data.table(initGrid[se, ], errorMessage = sapply(scoreSummary[se], function(x) x$message))) stop("Errors encountered in initialization are listed above.") } else { scoreSummary <- rbindlist(scoreSummary) } scoreSummary[, `:=`(("gpUtility"), rep(as.numeric(NA), nrow(scoreSummary)))] scoreSummary[, `:=`(("acqOptimum"), rep(FALSE, 
nrow(scoreSummary)))] scoreSummary[, `:=`(("Epoch"), rep(0, nrow(scoreSummary)))] scoreSummary[, `:=`(("Iteration"), 1:nrow(scoreSummary))] scoreSummary[, `:=`(("inBounds"), rep(TRUE, nrow(scoreSummary)))] scoreSummary[, `:=`(("errorMessage"), rep(NA, nrow(scoreSummary)))] extraRet <- setdiff(names(scoreSummary), c("Epoch", "Iteration", boundsDT$N, "inBounds", "Elapsed", "Score", "gpUtility", "acqOptimum")) setcolorder(scoreSummary, c("Epoch", "Iteration", boundsDT$N, "gpUtility", "acqOptimum", "inBounds", "Elapsed", "Score", extraRet)) if (any(scoreSummary$Elapsed < 1) & acq == "eips") { cat("\n FUN elapsed time is too low to be precise. Switching acq to 'ei'.\n") acq <- "ei" } optObj$optPars$acq <- acq optObj$optPars$kappa <- kappa optObj$optPars$eps <- eps optObj$optPars$parallel <- parallel optObj$optPars$gsPoints <- gsPoints optObj$optPars$convThresh <- convThresh optObj$optPars$acqThresh <- acqThresh optObj$scoreSummary <- scoreSummary optObj$GauProList$gpUpToDate <- FALSE optObj$iters <- nrow(scoreSummary) optObj$stopStatus <- "OK" optObj$elapsedTime <- as.numeric(difftime(Sys.time(), startT, units = "secs")) saveSoFar(optObj, 0) optObj <- addIterations(optObj, otherHalting = otherHalting, iters.n = iters.n, iters.k = iters.k, parallel = parallel, plotProgress = plotProgress, errorHandling = errorHandling, saveFile = saveFile, verbose = verbose, ...) return(optObj) })(FUN = function (...) { kwargs <- list(...) args <- .method_params_refactor(kwargs, method_helper) set.seed(self$seed) res <- do.call(private$fun_bayesian_scoring_function, args) if (isFALSE(self$metric_optimization_higher_better)) { res$Score <- as.numeric(I(res$Score * -1L)) } return(res) }, bounds = list(subsample = c(0.2, 1), colsample_bytree = c(0.2, 1), min_child_weight = c(1L, 10L), learning_rate = c(0.1, 0.2), max_depth = c(1L, 10L)), initGrid = structure(list(subsample = c(1, 0.8, 1, 0.6, 0.6, 0.8, 0.8, 0.6, 0.6, 0.6), colsample_bytree = c(0.8, 1, 1, 0.8, 0.8, 0.8, 0.8, 0.6, 1, 0.8), min_child_weight = c(1, 1, 5, 1, 5, 5, 1, 1, 1, 1), learning_rate = c(0.1, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.2, 0.1, 0.2), max_depth = c(5, 5, 5, 5, 5, 5, 1, 5, 1, 5)), out.attrs = list(dim = c(subsample = 3L, colsample_bytree = 3L, min_child_weight = 2L, learning_rate = 2L, max_depth = 2L), dimnames = list(subsample = c("subsample=0.6", "subsample=0.8", "subsample=1.0"), colsample_bytree = c("colsample_bytree=0.6", "colsample_bytree=0.8", "colsample_bytree=1.0"), min_child_weight = c("min_child_weight=1", "min_child_weight=5"), learning_rate = c("learning_rate=0.1", "learning_rate=0.2"), max_depth = c("max_depth=1", "max_depth=5"))), row.names = c(NA, -10L), class = c("data.table", "data.frame"), .internal.selfref = ), iters.n = 2L, iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 3.5, eps = 0, parallel = TRUE, gsPoints = 125, convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1)`: Errors encountered in initialization are listed above. Backtrace: ▆ 1. └─xgboost_optimizer$execute() at test-regression.R:309:5 2. └─mlexperiments:::.run_cv(self = self, private = private) 3. └─mlexperiments:::.fold_looper(self, private) 4. ├─base::do.call(private$cv_run_model, run_args) 5. └─mlexperiments (local) ``(train_index = ``, fold_train = ``, fold_test = ``) 6. ├─base::do.call(.cv_run_nested_model, args) 7. └─mlexperiments (local) ``(...) 8. └─hparam_tuner$execute(k = self$k_tuning) 9. 
└─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer) 10. └─mlexperiments:::.run_optimizer(...) 11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper) 12. ├─base::do.call(...) 13. └─mlexperiments (local) ``(...) 14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args) 15. └─ParBayesianOptimization (local) ``(...) ── Error ('test-regression.R:352:5'): test nested cv, grid - xgboost ─────────── Error in `.get_best_setting(results = outlist$summary, opt_metric = "metric_optim_mean", param_names = param_names, higher_better = metric_higher_better)`: nrow(best_row) == 1 is not TRUE Backtrace: ▆ 1. └─xgboost_optimizer$execute() at test-regression.R:352:5 2. └─mlexperiments:::.run_cv(self = self, private = private) 3. └─mlexperiments:::.fold_looper(self, private) 4. ├─base::do.call(private$cv_run_model, run_args) 5. └─mlexperiments (local) ``(train_index = ``, fold_train = ``, fold_test = ``) 6. ├─base::do.call(.cv_run_nested_model, args) 7. └─mlexperiments (local) ``(...) 8. └─hparam_tuner$execute(k = self$k_tuning) 9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer) 10. └─mlexperiments:::.run_optimizer(...) 11. └─mlexperiments:::.optimize_postprocessing(...) 12. └─mlexperiments:::.get_best_setting(...) 13. └─base::stopifnot(nrow(best_row) == 1) [ FAIL 4 | WARN 3 | SKIP 3 | PASS 22 ] Error: ! Test failures. Execution halted Package: mlr3benchmark Check: examples New result: ERROR Running examples in ‘mlr3benchmark-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: autoplot.BenchmarkAggr > ### Title: Plots for BenchmarkAggr > ### Aliases: autoplot.BenchmarkAggr > > ### ** Examples > > if (requireNamespaces(c("mlr3learners", "mlr3", "rpart", "xgboost"))) { + library(mlr3) + library(mlr3learners) + library(ggplot2) + + set.seed(1) + task = tsks(c("iris", "sonar", "wine", "zoo")) + learns = lrns(c("classif.featureless", "classif.rpart", "classif.xgboost")) + learns$classif.xgboost$param_set$values$nrounds = 50 + bm = benchmark(benchmark_grid(task, learns, rsmp("cv", folds = 3))) + obj = as_benchmark_aggr(bm) + + # mean and error bars + autoplot(obj, type = "mean", level = 0.95) + + if (requireNamespace("PMCMRplus", quietly = TRUE)) { + # critical differences + autoplot(obj, type = "cd",style = 1) + autoplot(obj, type = "cd",style = 2) + + # post-hoc friedman-nemenyi + autoplot(obj, type = "fn") + } + + } INFO [00:34:19.431] [mlr3] Running benchmark with 36 resampling iterations INFO [00:34:19.452] [mlr3] Applying learner 'classif.featureless' on task 'iris' (iter 1/3) INFO [00:34:19.484] [mlr3] Applying learner 'classif.featureless' on task 'iris' (iter 2/3) INFO [00:34:19.515] [mlr3] Applying learner 'classif.featureless' on task 'iris' (iter 3/3) INFO [00:34:19.546] [mlr3] Applying learner 'classif.rpart' on task 'iris' (iter 1/3) INFO [00:34:19.582] [mlr3] Applying learner 'classif.rpart' on task 'iris' (iter 2/3) INFO [00:34:19.619] [mlr3] Applying learner 'classif.rpart' on task 'iris' (iter 3/3) INFO [00:34:19.657] [mlr3] Applying learner 'classif.xgboost' on task 'iris' (iter 1/3) INFO [00:34:19.724] [mlr3] Applying learner 'classif.xgboost' on task 'iris' (iter 2/3) INFO [00:34:19.781] [mlr3] Applying learner 'classif.xgboost' on task 'iris' (iter 3/3) INFO [00:34:19.833] [mlr3] Applying learner 'classif.featureless' on task 'sonar' (iter 1/3) INFO [00:34:19.865] [mlr3] Applying learner 'classif.featureless' on task 
'sonar' (iter 2/3) INFO [00:34:19.897] [mlr3] Applying learner 'classif.featureless' on task 'sonar' (iter 3/3) INFO [00:34:19.929] [mlr3] Applying learner 'classif.rpart' on task 'sonar' (iter 1/3) INFO [00:34:19.992] [mlr3] Applying learner 'classif.rpart' on task 'sonar' (iter 2/3) INFO [00:34:20.060] [mlr3] Applying learner 'classif.rpart' on task 'sonar' (iter 3/3) INFO [00:34:20.124] [mlr3] Applying learner 'classif.xgboost' on task 'sonar' (iter 1/3) INFO [00:34:20.222] [mlr3] Applying learner 'classif.xgboost' on task 'sonar' (iter 2/3) INFO [00:34:20.310] [mlr3] Applying learner 'classif.xgboost' on task 'sonar' (iter 3/3) INFO [00:34:20.400] [mlr3] Applying learner 'classif.featureless' on task 'wine' (iter 1/3) INFO [00:34:20.429] [mlr3] Applying learner 'classif.featureless' on task 'wine' (iter 2/3) INFO [00:34:20.454] [mlr3] Applying learner 'classif.featureless' on task 'wine' (iter 3/3) INFO [00:34:20.480] [mlr3] Applying learner 'classif.rpart' on task 'wine' (iter 1/3) INFO [00:34:20.512] [mlr3] Applying learner 'classif.rpart' on task 'wine' (iter 2/3) INFO [00:34:20.540] [mlr3] Applying learner 'classif.rpart' on task 'wine' (iter 3/3) INFO [00:34:20.569] [mlr3] Applying learner 'classif.xgboost' on task 'wine' (iter 1/3) INFO [00:34:20.612] [mlr3] Applying learner 'classif.xgboost' on task 'wine' (iter 2/3) INFO [00:34:20.654] [mlr3] Applying learner 'classif.xgboost' on task 'wine' (iter 3/3) INFO [00:34:20.695] [mlr3] Applying learner 'classif.featureless' on task 'zoo' (iter 1/3) INFO [00:34:20.716] [mlr3] Applying learner 'classif.featureless' on task 'zoo' (iter 2/3) INFO [00:34:20.739] [mlr3] Applying learner 'classif.featureless' on task 'zoo' (iter 3/3) INFO [00:34:20.759] [mlr3] Applying learner 'classif.rpart' on task 'zoo' (iter 1/3) INFO [00:34:20.786] [mlr3] Applying learner 'classif.rpart' on task 'zoo' (iter 2/3) INFO [00:34:20.812] [mlr3] Applying learner 'classif.rpart' on task 'zoo' (iter 3/3) INFO [00:34:20.839] [mlr3] Applying learner 'classif.xgboost' on task 'zoo' (iter 1/3) INFO [00:34:20.932] [mlr3] Applying learner 'classif.xgboost' on task 'zoo' (iter 2/3) INFO [00:34:20.993] [mlr3] Applying learner 'classif.xgboost' on task 'zoo' (iter 3/3) Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. 
Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. 
This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: nthread, num_class, eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. INFO [00:34:21.066] [mlr3] Finished benchmark Error: Global Friedman test non-significant (p > 0.05), try type = 'mean' instead. 
Execution halted Package: mlr3tuning Check: examples New result: ERROR Running examples in ‘mlr3tuning-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: mlr_tuners_internal > ### Title: Hyperparameter Tuning with Internal Tuning > ### Aliases: mlr_tuners_internal TunerBatchInternal > > ### ** Examples > > ## Don't show: > if (mlr3misc::require_namespaces(c("mlr3learners", "xgboost"), quietly = TRUE)) withAutoprint({ # examplesIf + ## End(Don't show) + library(mlr3learners) + + # Retrieve task + task = tsk("pima") + + # Load learner and set search space + learner = lrn("classif.xgboost", + nrounds = to_tune(upper = 1000, internal = TRUE), + early_stopping_rounds = 10, + validate = "test", + eval_metric = "merror" + ) + + # Internal hyperparameter tuning on the pima indians diabetes data set + instance = tune( + tnr("internal"), + tsk("iris"), + learner, + rsmp("cv", folds = 3), + msr("internal_valid_score", minimize = TRUE, select = "merror") + ) + + # best performing hyperparameter configuration + instance$result_learner_param_vals + + instance$result_learner_param_vals$internal_tuned_values + ## Don't show: + }) # examplesIf > library(mlr3learners) > task = tsk("pima") > learner = lrn("classif.xgboost", nrounds = to_tune(upper = 1000, internal = TRUE), + early_stopping_rounds = 10, validate = "test", eval_metric = "merror") > instance = tune(tnr("internal"), tsk("iris"), learner, rsmp("cv", folds = 3), + msr("internal_valid_score", minimize = TRUE, select = "merror")) INFO [00:40:31.408] [bbotk] Starting to optimize 0 parameter(s) with '' and '' INFO [00:40:31.409] [bbotk] Evaluating 1 configuration(s) INFO [00:40:31.417] [mlr3] Running benchmark with 3 resampling iterations INFO [00:40:31.433] [mlr3] Applying learner 'classif.xgboost' on task 'iris' (iter 1/3) INFO [00:40:31.487] [mlr3] Applying learner 'classif.xgboost' on task 'iris' (iter 2/3) INFO [00:40:31.525] [mlr3] Applying learner 'classif.xgboost' on task 'iris' (iter 3/3) Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: eval_metric, nthread, num_class. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'watchlist' has been renamed to 'evals'. This warning will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: eval_metric, nthread, num_class. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'watchlist' has been renamed to 'evals'. This warning will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. 
Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: eval_metric, nthread, num_class. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'watchlist' has been renamed to 'evals'. This warning will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning: Caught simpleError. Canceling all iterations ... Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: eval_metric, nthread, num_class. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'watchlist' has been renamed to 'evals'. This warning will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: eval_metric, nthread, num_class. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'watchlist' has been renamed to 'evals'. This warning will become an error in a future version. Warning in check.custom.obj(params, objective) : Argument 'objective' is only for custom objectives. For built-in objectives, pass the objective under 'params'. This warning will become an error in a future version. Error in names(x) <- nm : attempt to set an attribute on NULL Calls: withAutoprint ... tryCatchList -> tryCatchOne -> -> onError Execution halted Package: mlspatial Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘mlspatial.Rmd’ using rmarkdown Quitting from mlspatial.Rmd:127-181 [unnamed-chunk-7] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `xgboost()`: ! argument "y" is missing, with no default --- Backtrace: ▆ 1. └─xgboost::xgboost(...) 2. └─xgboost:::process.y.margin.and.objective(...) 3. └─base::NROW(y) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'mlspatial.Rmd' failed with diagnostics: argument "y" is missing, with no default --- failed re-building ‘mlspatial.Rmd’ SUMMARY: processing the following file failed: ‘mlspatial.Rmd’ Error: Vignette re-building failed. Execution halted Package: mlsurvlrnrs Check: tests New result: ERROR Running ‘testthat.R’ [19s/87s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. 
> # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mlsurvlrnrs) > > test_check("mlsurvlrnrs") CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'. CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 6 times in 2 thread(s)... 4.154 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.869 seconds 3) Running FUN 2 times in 2 thread(s)... 0.617 seconds CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 6 times in 2 thread(s)... 3.755 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.807 seconds 3) Running FUN 2 times in 2 thread(s)... 0.525 seconds CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 6 times in 2 thread(s)... 3.984 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.483 seconds 3) Running FUN 2 times in 2 thread(s)... 0.622 seconds CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.497 seconds Starting Epoch 1 1) Fitting Gaussian Process... - Could not obtain meaningful lengthscales. 2) Running local optimum search... - Convergence Not Found. Trying again with tighter parameters... - Convergence Not Found. Trying again with tighter parameters... 5.512 seconds 3) Running FUN 2 times in 2 thread(s)... 0.525 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.598 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 11.724 seconds 3) Running FUN 2 times in 2 thread(s)... 0.374 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.572 seconds Starting Epoch 1 1) Fitting Gaussian Process... - Could not obtain meaningful lengthscales. 2) Running local optimum search... 0.667 seconds 3) Running FUN 2 times in 2 thread(s)... 0.485 seconds CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.386 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.41 seconds 3) Running FUN 2 times in 2 thread(s)... 0.329 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. 
Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.298 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.38 seconds 3) Running FUN 2 times in 2 thread(s)... 0.355 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.515 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.46 seconds 3) Running FUN 2 times in 2 thread(s)... 0.36 seconds CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.367 seconds Starting Epoch 1 1) Fitting Gaussian Process... Saving _problems/test-surv_xgboost_aft-116.R CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.067 seconds subsample colsample_bytree min_child_weight learning_rate max_depth 1: 0.6 0.8 5 0.2 1 2: 1.0 0.8 5 0.1 5 3: 0.8 0.8 5 0.1 1 4: 0.6 0.8 5 0.2 5 5: 1.0 0.8 1 0.1 5 6: 0.8 0.8 5 0.1 5 7: 0.6 1.0 1 0.1 5 8: 0.6 1.0 5 0.2 5 9: 1.0 1.0 5 0.1 5 10: 0.6 1.0 1 0.2 1 errorMessage 1: FUN returned these elements with length > 1: Score,metric_optim_mean 2: FUN returned these elements with length > 1: Score,metric_optim_mean 3: FUN returned these elements with length > 1: Score,metric_optim_mean 4: FUN returned these elements with length > 1: Score,metric_optim_mean 5: FUN returned these elements with length > 1: Score,metric_optim_mean 6: FUN returned these elements with length > 1: Score,metric_optim_mean 7: FUN returned these elements with length > 1: Score,metric_optim_mean 8: FUN returned these elements with length > 1: Score,metric_optim_mean 9: FUN returned these elements with length > 1: Score,metric_optim_mean 10: FUN returned these elements with length > 1: Score,metric_optim_mean Saving _problems/test-surv_xgboost_cox-115.R [ FAIL 2 | WARN 0 | SKIP 1 | PASS 11 ] ══ Skipped tests (1) ═══════════════════════════════════════════════════════════ • On CRAN (1): 'test-lints.R:10:5' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-surv_xgboost_aft.R:116:5'): test nested cv, bayesian - surv_xgboost_aft ── Error in `if (r == 0) stop("Results from FUN have 0 variance, cannot build GP.")`: missing value where TRUE/FALSE needed Backtrace: ▆ 1. └─surv_xgboost_aft_optimizer$execute() at test-surv_xgboost_aft.R:116:5 2. └─mlexperiments:::.run_cv(self = self, private = private) 3. └─mlexperiments:::.fold_looper(self, private) 4. ├─base::do.call(private$cv_run_model, run_args) 5. └─mlexperiments (local) ``(train_index = ``, fold_train = ``, fold_test = ``) 6. ├─base::do.call(.cv_run_nested_model, args) 7. └─mlexperiments (local) ``(...) 8. └─hparam_tuner$execute(k = self$k_tuning) 9. └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer) 10. └─mlexperiments:::.run_optimizer(...) 11. └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper) 12. ├─base::do.call(...) 13. └─mlexperiments (local) ``(...) 
14. ├─base::do.call(ParBayesianOptimization::bayesOpt, args) 15. └─ParBayesianOptimization (local) ``(...) 16. └─ParBayesianOptimization::addIterations(...) 17. └─ParBayesianOptimization::updateGP(...) 18. └─ParBayesianOptimization:::zeroOneScale(scoreSummary$Score) ── Error ('test-surv_xgboost_cox.R:115:5'): test nested cv, bayesian - surv_xgboost_cox ── Error in `(function (FUN, bounds, saveFile = NULL, initGrid, initPoints = 4, iters.n = 3, iters.k = 1, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 2.576, eps = 0, parallel = FALSE, gsPoints = pmax(100, length(bounds)^3), convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1, ...) { startT <- Sys.time() optObj <- list() class(optObj) <- "bayesOpt" optObj$FUN <- FUN optObj$bounds <- bounds optObj$iters <- 0 optObj$initPars <- list() optObj$optPars <- list() optObj$GauProList <- list() optObj <- changeSaveFile(optObj, saveFile) checkParameters(bounds, iters.n, iters.k, otherHalting, acq, acqThresh, errorHandling, plotProgress, parallel, verbose) boundsDT <- boundsToDT(bounds) otherHalting <- formatOtherHalting(otherHalting) if (missing(initGrid) + missing(initPoints) != 1) stop("Please provide 1 of initGrid or initPoints, but not both.") if (!missing(initGrid)) { setDT(initGrid) inBounds <- checkBounds(initGrid, bounds) inBounds <- as.logical(apply(inBounds, 1, prod)) if (any(!inBounds)) stop("initGrid not within bounds.") optObj$initPars$initialSample <- "User Provided Grid" initPoints <- nrow(initGrid) } else { initGrid <- randParams(boundsDT, initPoints) optObj$initPars$initialSample <- "Latin Hypercube Sampling" } optObj$initPars$initGrid <- initGrid if (nrow(initGrid) <= 2) stop("Cannot initialize with less than 3 samples.") optObj$initPars$initPoints <- nrow(initGrid) if (initPoints <= length(bounds)) stop("initPoints must be greater than the number of FUN inputs.") sinkFile <- file() on.exit({ while (sink.number() > 0) sink() close(sinkFile) }) `%op%` <- ParMethod(parallel) if (parallel) Workers <- getDoParWorkers() else Workers <- 1 if (verbose > 0) cat("\nRunning initial scoring function", nrow(initGrid), "times in", Workers, "thread(s)...") sink(file = sinkFile) tm <- system.time(scoreSummary <- foreach(iter = 1:nrow(initGrid), .options.multicore = list(preschedule = FALSE), .combine = list, .multicombine = TRUE, .inorder = FALSE, .errorhandling = "pass", .verbose = FALSE) %op% { Params <- initGrid[get("iter"), ] Elapsed <- system.time(Result <- tryCatch({ do.call(what = FUN, args = as.list(Params)) }, error = function(e) e)) if (any(class(Result) %in% c("simpleError", "error", "condition"))) return(Result) if (!inherits(x = Result, what = "list")) stop("Object returned from FUN was not a list.") resLengths <- lengths(Result) if (!any(names(Result) == "Score")) stop("FUN must return list with element 'Score' at a minimum.") if (!is.numeric(Result$Score)) stop("Score returned from FUN was not numeric.") if (any(resLengths != 1)) { badReturns <- names(Result)[which(resLengths != 1)] stop("FUN returned these elements with length > 1: ", paste(badReturns, collapse = ",")) } data.table(Params, Elapsed = Elapsed[[3]], as.data.table(Result)) })[[3]] while (sink.number() > 0) sink() if (verbose > 0) cat(" ", tm, "seconds\n") se <- which(sapply(scoreSummary, function(cl) any(class(cl) %in% c("simpleError", "error", "condition")))) if (length(se) > 0) { print(data.table(initGrid[se, ], errorMessage = sapply(scoreSummary[se], function(x) x$message))) stop("Errors 
encountered in initialization are listed above.") } else { scoreSummary <- rbindlist(scoreSummary) } scoreSummary[, `:=`(("gpUtility"), rep(as.numeric(NA), nrow(scoreSummary)))] scoreSummary[, `:=`(("acqOptimum"), rep(FALSE, nrow(scoreSummary)))] scoreSummary[, `:=`(("Epoch"), rep(0, nrow(scoreSummary)))] scoreSummary[, `:=`(("Iteration"), 1:nrow(scoreSummary))] scoreSummary[, `:=`(("inBounds"), rep(TRUE, nrow(scoreSummary)))] scoreSummary[, `:=`(("errorMessage"), rep(NA, nrow(scoreSummary)))] extraRet <- setdiff(names(scoreSummary), c("Epoch", "Iteration", boundsDT$N, "inBounds", "Elapsed", "Score", "gpUtility", "acqOptimum")) setcolorder(scoreSummary, c("Epoch", "Iteration", boundsDT$N, "gpUtility", "acqOptimum", "inBounds", "Elapsed", "Score", extraRet)) if (any(scoreSummary$Elapsed < 1) & acq == "eips") { cat("\n FUN elapsed time is too low to be precise. Switching acq to 'ei'.\n") acq <- "ei" } optObj$optPars$acq <- acq optObj$optPars$kappa <- kappa optObj$optPars$eps <- eps optObj$optPars$parallel <- parallel optObj$optPars$gsPoints <- gsPoints optObj$optPars$convThresh <- convThresh optObj$optPars$acqThresh <- acqThresh optObj$scoreSummary <- scoreSummary optObj$GauProList$gpUpToDate <- FALSE optObj$iters <- nrow(scoreSummary) optObj$stopStatus <- "OK" optObj$elapsedTime <- as.numeric(difftime(Sys.time(), startT, units = "secs")) saveSoFar(optObj, 0) optObj <- addIterations(optObj, otherHalting = otherHalting, iters.n = iters.n, iters.k = iters.k, parallel = parallel, plotProgress = plotProgress, errorHandling = errorHandling, saveFile = saveFile, verbose = verbose, ...) return(optObj) })(FUN = function (...) { kwargs <- list(...) args <- .method_params_refactor(kwargs, method_helper) set.seed(self$seed) res <- do.call(private$fun_bayesian_scoring_function, args) if (isFALSE(self$metric_optimization_higher_better)) { res$Score <- as.numeric(I(res$Score * -1L)) } return(res) }, bounds = list(subsample = c(0.2, 1), colsample_bytree = c(0.2, 1), min_child_weight = c(1L, 10L), learning_rate = c(0.1, 0.2), max_depth = c(1L, 10L)), initGrid = structure(list(subsample = c(0.6, 1, 0.8, 0.6, 1, 0.8, 0.6, 0.6, 1, 0.6), colsample_bytree = c(0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 1, 1, 1, 1), min_child_weight = c(5, 5, 5, 5, 1, 5, 1, 5, 5, 1), learning_rate = c(0.2, 0.1, 0.1, 0.2, 0.1, 0.1, 0.1, 0.2, 0.1, 0.2), max_depth = c(1, 5, 1, 5, 5, 5, 5, 5, 5, 1)), out.attrs = list(dim = c(objective = 1L, eval_metric = 1L, subsample = 3L, colsample_bytree = 3L, min_child_weight = 2L, learning_rate = 2L, max_depth = 2L), dimnames = list(objective = "objective=survival:cox", eval_metric = "eval_metric=cox-nloglik", subsample = c("subsample=0.6", "subsample=0.8", "subsample=1.0"), colsample_bytree = c("colsample_bytree=0.6", "colsample_bytree=0.8", "colsample_bytree=1.0"), min_child_weight = c("min_child_weight=1", "min_child_weight=5"), learning_rate = c("learning_rate=0.1", "learning_rate=0.2"), max_depth = c("max_depth=1", "max_depth=5"))), row.names = c(NA, -10L), class = c("data.table", "data.frame"), .internal.selfref = ), iters.n = 2L, iters.k = 2L, otherHalting = list(timeLimit = Inf, minUtility = 0), acq = "ucb", kappa = 3.5, eps = 0, parallel = TRUE, gsPoints = 125, convThresh = 1e+08, acqThresh = 1, errorHandling = "stop", plotProgress = FALSE, verbose = 1)`: Errors encountered in initialization are listed above. Backtrace: ▆ 1. └─surv_xgboost_cox_optimizer$execute() at test-surv_xgboost_cox.R:115:5 2. └─mlexperiments:::.run_cv(self = self, private = private) 3. 
Package: modeltime Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘getting-started-with-modeltime.Rmd’ using rmarkdown Quitting from getting-started-with-modeltime.Rmd:162-171 [unnamed-chunk-9] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `switch()`: ! EXPR must be a length 1 vector --- Backtrace: ▆ 1. ├─... %>% ... 2. ├─generics::fit(...) 3. ├─parsnip::fit.model_spec(...) 4. │ └─parsnip:::form_xy(...) 5. │ └─parsnip:::xy_xy(...) 6. │ └─parsnip:::eval_mod(...) 7. │ └─rlang::eval_tidy(e, env = envir, ...) 8. └─modeltime::auto_arima_xgboost_fit_impl(...) 9. └─modeltime::xgboost_predict(fit_xgboost, newdata = xreg_tbl) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'getting-started-with-modeltime.Rmd' failed with diagnostics: EXPR must be a length 1 vector --- failed re-building ‘getting-started-with-modeltime.Rmd’ SUMMARY: processing the following file failed: ‘getting-started-with-modeltime.Rmd’ Error: Vignette re-building failed. Execution halted Package: modeltime Check: tests New result: ERROR Running ‘testthat.R’ [14s/14s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(testthat) > > library(xgboost) > library(randomForest) randomForest 4.7-1.2 Type rfNews() to see new features/changes/bug fixes. > library(thief) Loading required package: forecast > library(smooth) Loading required package: greybox Package "greybox", v2.0.6 loaded.
This is package "smooth", v4.3.1 > library(greybox) > > library(stats) > > library(tidymodels) ── Attaching packages ────────────────────────────────────── tidymodels 1.4.1 ── ✔ broom 1.0.10 ✔ recipes 1.3.1 ✔ dials 1.4.2 ✔ rsample 1.3.1 ✔ dplyr 1.1.4 ✔ tailor 0.1.0 ✔ ggplot2 4.0.1 ✔ tidyr 1.3.1 ✔ infer 1.0.9 ✔ tune 2.0.1 ✔ modeldata 1.5.1 ✔ workflows 1.3.0 ✔ parsnip 1.4.0 ✔ workflowsets 1.1.1 ✔ purrr 1.2.0 ✔ yardstick 1.3.2 ── Conflicts ───────────────────────────────────────── tidymodels_conflicts() ── ✖ yardstick::accuracy() masks smooth::accuracy(), greybox::accuracy(), forecast::accuracy() ✖ dplyr::combine() masks randomForest::combine() ✖ purrr::discard() masks scales::discard() ✖ dplyr::filter() masks stats::filter() ✖ dplyr::lag() masks stats::lag() ✖ ggplot2::margin() masks randomForest::margin() ✖ parsnip::pls() masks smooth::pls() ✖ tidyr::spread() masks greybox::spread() ✖ recipes::step() masks stats::step() > library(parsnip) > library(workflows) > library(rsample) > library(recipes) > library(tune) > library(dials) > library(yardstick) > library(slider) > > library(timetk) > library(modeltime) > > test_check("modeltime") Saving _problems/test-algo-prophet_boost-120.R [ FAIL 1 | WARN 0 | SKIP 80 | PASS 0 ] ══ Skipped tests (80) ══════════════════════════════════════════════════════════ • On CRAN (80): 'test-algo-adam_reg-Adam.R:12:5', 'test-algo-adam_reg-Adam.R:111:5', 'test-algo-adam_reg-auto_adam.R:13:5', 'test-algo-adam_reg-auto_adam.R:63:5', 'test-algo-adam_reg-auto_adam.R:120:5', 'test-algo-arima_boost-Arima.R:57:5', 'test-algo-arima_boost-Arima.R:114:5', 'test-algo-arima_boost-Arima.R:190:5', 'test-algo-arima_boost-auto_arima.R:41:5', 'test-algo-arima_boost-auto_arima.R:98:5', 'test-algo-arima_boost-auto_arima.R:172:5', 'test-algo-arima_reg-Arima.R:35:5', 'test-algo-arima_reg-Arima.R:89:5', 'test-algo-arima_reg-Arima.R:145:5', 'test-algo-arima_reg-auto_arima.R:20:5', 'test-algo-arima_reg-auto_arima.R:76:5', 'test-algo-arima_reg-auto_arima.R:137:5', 'test-algo-exp_smoothing-ets.R:25:5', 'test-algo-exp_smoothing-ets.R:82:5', 'test-algo-exp_smoothing-ets.R:159:5', 'test-algo-exp_smoothing-ets.R:234:5', 'test-algo-exp_smoothing-ets.R:312:5', 'test-algo-exp_smoothing-ets.R:373:5', 'test-algo-nnetar_reg.R:21:5', 'test-algo-nnetar_reg.R:146:5', 'test-algo-prophet_boost.R:44:5', 'test-algo-prophet_boost.R:206:5', 'test-algo-prophet_boost.R:313:5', 'test-algo-prophet_reg.R:35:5', 'test-algo-prophet_reg.R:105:5', 'test-algo-prophet_reg.R:178:5', 'test-algo-prophet_reg.R:266:5', 'test-algo-seasonal_decomp_arima.R:8:5', 'test-algo-seasonal_decomp_ets.R:10:5', 'test-algo-seasonal_reg_tbats.R:20:5', 'test-algo-seasonal_reg_tbats.R:35:5', 'test-algo-seasonal_reg_tbats.R:93:5', 'test-algo-temporal_hierarchy.R:8:5', 'test-algo-window_reg.R:24:5', 'test-algo-window_reg.R:69:5', 'test-algo-window_reg.R:100:5', 'test-algo-window_reg.R:153:5', 'test-algo-window_reg.R:206:5', 'test-algo-window_reg.R:241:5', 'test-algo-window_reg.R:293:5', 'test-algo-window_reg.R:363:5', 'test-algo-window_reg.R:402:5', 'test-conf_by_id.R:6:5', 'test-default_accuracy_metric_sets.R:9:5', 'test-default_accuracy_metric_sets.R:29:5', 'test-developer-tools-constructor.R:10:5', 'test-developer-tools-xregs.R:26:5', 'test-developer-tools-xregs.R:47:5', 'test-extended_accuracy_metric_set.R:7:5', 'test-extended_accuracy_metric_set.R:28:5', 'test-fit_workflowsets.R:13:5', 'test-helpers-combine-modeltime-tables.R:8:5', 'test-helpers-pull_parsnip_preprocessor.R:14:5', 'test-helpers-pull_parsnip_preprocessor.R:33:5', 
'test-helpers-pull_parsnip_preprocessor.R:51:5', 'test-helpers-pull_parsnip_preprocessor.R:69:5', 'test-helpers-update-modeltime-tables.R:9:5', 'test-modeltime_residuals.R:6:5', 'test-modeltime_table-forecast-accuracy-refitting.R:20:5', 'test-modeltime_table-forecast-accuracy-refitting.R:84:5', 'test-modeltime_table-forecast-accuracy-refitting.R:149:5', 'test-modeltime_table-no-calib-refit.R:16:5', 'test-nested-modeltime.R:10:5', 'test-panel-data.R:8:5', 'test-recursive-chunk-uneven.R:9:5', 'test-recursive-chunk-uneven.R:331:5', 'test-recursive-chunk.R:9:5', 'test-recursive.R:9:3', 'test-refit-parallel.R:5:5', 'test-results-accuracy-tables.R:15:5', 'test-results-accuracy-tables.R:149:5', 'test-results-forecast-plots.R:14:5', 'test-results-forecast-plots.R:87:5', 'test-results-residuals-tests.R:9:5', 'test-tune_workflows.R:8:5' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-algo-prophet_boost.R:118:5'): prophet_boost: prophet, XREGS ──── Error in `switch(object$params$objective, `reg:linear` = , `reg:logistic` = , `binary:logistic` = res, `binary:logitraw` = stats::binomial()$linkinv(res), `multi:softprob` = matrix(res, ncol = object$params$num_class, byrow = TRUE), res)`: EXPR must be a length 1 vector Backtrace: ▆ 1. ├─model_spec %>% ... at test-algo-prophet_boost.R:118:5 2. ├─generics::fit(...) 3. ├─parsnip::fit.model_spec(...) 4. │ └─parsnip:::form_xy(...) 5. │ └─parsnip:::xy_xy(...) 6. │ └─parsnip:::eval_mod(...) 7. │ └─rlang::eval_tidy(e, env = envir, ...) 8. └─modeltime::prophet_xgboost_fit_impl(...) 9. └─modeltime::xgboost_predict(fit_xgboost, newdata = xreg_tbl) [ FAIL 1 | WARN 0 | SKIP 80 | PASS 0 ] Error: ! Test failures. Execution halted Package: modeltime.ensemble Check: tests New result: ERROR Running ‘testthat.R’ [36s/35s] Running the tests in ‘tests/testthat.R’ failed. 
Complete output: > library(testthat) > > # Machine Learning > library(tidymodels) ── Attaching packages ────────────────────────────────────── tidymodels 1.4.1 ── ✔ broom 1.0.10 ✔ recipes 1.3.1 ✔ dials 1.4.2 ✔ rsample 1.3.1 ✔ dplyr 1.1.4 ✔ tailor 0.1.0 ✔ ggplot2 4.0.1 ✔ tidyr 1.3.1 ✔ infer 1.0.9 ✔ tune 2.0.1 ✔ modeldata 1.5.1 ✔ workflows 1.3.0 ✔ parsnip 1.4.0 ✔ workflowsets 1.1.1 ✔ purrr 1.2.0 ✔ yardstick 1.3.2 ── Conflicts ───────────────────────────────────────── tidymodels_conflicts() ── ✖ purrr::discard() masks scales::discard() ✖ dplyr::filter() masks stats::filter() ✖ dplyr::lag() masks stats::lag() ✖ recipes::step() masks stats::step() > library(modeltime) > library(modeltime.ensemble) Loading required package: modeltime.resample > library(modeltime.resample) > > # Model dependencies > library(xgboost) > library(glmnet) Loading required package: Matrix Attaching package: 'Matrix' The following objects are masked from 'package:tidyr': expand, pack, unpack Loaded glmnet 4.1-10 > > # Core Packages > library(timetk) > library(lubridate) Attaching package: 'lubridate' The following objects are masked from 'package:base': date, intersect, setdiff, union > > test_check("modeltime.ensemble") ── Modeltime Ensemble ─────────────────────────────────────────── Ensemble of 3 Models (WEIGHTED) # Modeltime Table # A tibble: 3 × 4 .model_id .model .model_desc .loadings 1 1 ARIMA(0,1,1)(0,1,1)[12] 0.5 2 2 PROPHET 0.333 3 3 GLMNET 0.167 Saving _problems/test-panel-data-25.R [ FAIL 1 | WARN 0 | SKIP 4 | PASS 61 ] ══ Skipped tests (4) ═══════════════════════════════════════════════════════════ • On CRAN (4): 'test-conf_by_id.R:6:5', 'test-ensemble_average.R:57:5', 'test-ensemble_model_spec.R:55:5', 'test-nested-ensembles.R:189:5' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-panel-data.R:16:1'): (code run outside of `test_that()`) ─────── Error in `switch(object$params$objective, `reg:linear` = , `reg:logistic` = , `binary:logistic` = res, `binary:logitraw` = stats::binomial()$linkinv(res), `multi:softprob` = matrix(res, ncol = object$params$num_class, byrow = TRUE), res)`: EXPR must be a length 1 vector Backtrace: ▆ 1. ├─... %>% fit(data_set) at test-panel-data.R:16:1 2. ├─generics::fit(., data_set) 3. ├─workflows:::fit.workflow(., data_set) 4. │ └─workflows::.fit_model(workflow, control) 5. │ ├─generics::fit(action_model, workflow = workflow, control = control) 6. │ └─workflows:::fit.action_model(...) 7. │ └─workflows:::fit_from_xy(spec, mold, case_weights, control_parsnip) 8. │ ├─generics::fit_xy(...) 9. │ └─parsnip::fit_xy.model_spec(...) 10. │ └─parsnip:::xy_xy(...) 11. │ └─parsnip:::eval_mod(...) 12. │ └─rlang::eval_tidy(e, env = envir, ...) 13. └─modeltime::prophet_xgboost_fit_impl(...) 14. └─modeltime::xgboost_predict(fit_xgboost, newdata = xreg_tbl) [ FAIL 1 | WARN 0 | SKIP 4 | PASS 61 ] Error: ! Test failures. Execution halted
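The three modeltime/modeltime.ensemble failures above share one root cause: the switch() inside modeltime::xgboost_predict() reads object$params$objective, and on boosters produced by the new xgboost release that field is no longer a length-1 string (pdp's get_task.xgb.Booster fails the same way further down, with "argument is of length zero"). A defensive sketch of the pattern, not modeltime's actual code; 'booster' is a hypothetical fitted model and the fallback default is an assumption:

# Whether booster$params$objective is populated depends on the xgboost version.
get_objective <- function(booster, default = "reg:squarederror") {
  obj <- tryCatch(booster$params$objective, error = function(e) NULL)
  if (is.null(obj) || length(obj) != 1L) default else obj
}
# switch() now always receives a length-1 EXPR:
transform_prediction <- function(booster, res) {
  switch(get_objective(booster),
    "binary:logistic" = res,                           # already probabilities
    "binary:logitraw" = stats::binomial()$linkinv(res),
    res                                                # default: pass through
  )
}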
Package: nlpred Check: examples New result: ERROR Running examples in ‘nlpred-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: xgboost_wrapper > ### Title: Wrapper for fitting eXtreme gradient boosting via 'xgboost' > ### Aliases: xgboost_wrapper > > ### ** Examples > > # simulate data > # make list of training data > train_X <- data.frame(x1 = runif(50)) > train_Y <- rbinom(50, 1, plogis(train_X$x1)) > train <- list(Y = train_Y, X = train_X) > # make list of test data > test_X <- data.frame(x1 = runif(50)) > test_Y <- rbinom(50, 1, plogis(train_X$x1)) > test <- list(Y = test_Y, X = test_X) > # fit xgboost > xgb_wrap <- xgboost_wrapper(train = train, test = test) Warning in throw_err_or_depr_msg("Parameter(s) have been removed from this function: ", : Parameter(s) have been removed from this function: params, save_period. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version. Error in xgboost::xgboost(data = xgmat, objective = "binary:logistic", : argument "y" is missing, with no default Calls: xgboost_wrapper ... -> process.y.margin.and.objective -> NROW Execution halted
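The xgboost_wrapper() failure shows the old calling convention — xgboost(data = <xgb.DMatrix>, label/eta/params/save_period/verbose) — which the redesigned xgboost() no longer accepts: it now expects a matrix 'x' and a vector 'y', and the removed or renamed arguments trigger the deprecation warnings above before the missing-'y' error stops execution. The same pattern recurs in the polle and SuperLearner failures below. Wrappers that already build an xgb.DMatrix can keep it by calling xgb.train(), which still takes a 'params' list; a minimal sketch with simulated data and illustrative parameter values:

library(xgboost)
train_X <- matrix(runif(100), ncol = 2)
train_Y <- rbinom(50, 1, 0.5)
xgmat <- xgb.DMatrix(data = train_X, label = train_Y)
fit <- xgb.train(
  params = list(
    objective        = "binary:logistic",
    max_depth        = 4,
    min_child_weight = 2,
    eta              = 0.1,  # 'eta' is still a valid entry inside params
    nthread          = 1
  ),
  data = xgmat,
  nrounds = 50,
  verbose = 0
)
pred <- predict(fit, xgmat)  # probabilities under binary:logistic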
Package: nlpred Check: tests New result: ERROR Running ‘testthat.R’ [9s/9s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(testthat) > library(nlpred) Loading required package: data.table > > test_check("nlpred") Multistart 1 of 1 | [the "Multistart 1 of 1" progress spinner repeats many times during the test run] Saving _problems/testWrappers-13.R [ FAIL 1 | WARN 4 | SKIP 0 | PASS 40 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('testWrappers.R:12:3'): wrappers work ──────────────────────────────── Error in `xgboost::xgboost(data = xgmat, objective = "binary:logistic", nrounds = ntrees, max_depth = max_depth, min_child_weight = minobspernode, eta = shrinkage, verbose = verbose, nthread = nthread, params = params, save_period = save_period)`: argument "y" is missing, with no default Backtrace: ▆ 1.
└─nlpred (local) check_wrapper(paste0(wrap, "_wrapper"), test = test, train = train) at testWrappers.R:31:17 2. ├─base::do.call(wrapper, args = list(train = train, test = test)) at testWrappers.R:12:17 3. └─nlpred::xgboost_wrapper(train = ``, test = ``) 4. └─xgboost::xgboost(...) 5. └─xgboost:::process.y.margin.and.objective(...) 6. └─base::NROW(y) [ FAIL 1 | WARN 4 | SKIP 0 | PASS 40 ] Error: ! Test failures. Execution halted Package: ParBayesianOptimization Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘functionMaximization.Rmd’ using rmarkdown 0.462 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.123 seconds 3) Running FUN 1 times in 1 thread(s)... 0.145 seconds Starting Epoch 2 1) Fitting Gaussian Process... 2) Running local optimum search... 0.221 seconds 3) Running FUN 1 times in 1 thread(s)... 0.147 seconds [WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead. --- finished re-building ‘functionMaximization.Rmd’ --- re-building ‘multiPointSampling.Rmd’ using rmarkdown --- finished re-building ‘multiPointSampling.Rmd’ --- re-building ‘tuningHyperparameters.Rmd’ using rmarkdown 1.308 seconds max_depth min_child_weight subsample errorMessage 1: 9 5.863591 0.2585819 FUN returned these elements with length > 1: nrounds 2: 4 10.154185 0.5230172 FUN returned these elements with length > 1: nrounds 3: 6 24.487949 0.8622225 FUN returned these elements with length > 1: nrounds 4: 2 17.988070 0.6821260 FUN returned these elements with length > 1: nrounds Quitting from tuningHyperparameters.Rmd:107-116 [unnamed-chunk-4] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `bayesOpt()`: ! Errors encountered in initialization are listed above. --- Backtrace: ▆ 1. └─ParBayesianOptimization::bayesOpt(...)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'tuningHyperparameters.Rmd' failed with diagnostics: Errors encountered in initialization are listed above. --- failed re-building ‘tuningHyperparameters.Rmd’ SUMMARY: processing the following file failed: ‘tuningHyperparameters.Rmd’ Error: Vignette re-building failed. Execution halted
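The initialization table above shows every grid row failing with "FUN returned these elements with length > 1: nrounds": bayesOpt() requires every element of the list returned by the scoring function to be a length-1 scalar, and the vignette's xgb.cv-based function now returns a longer 'nrounds' under the new xgboost. A sketch of a scoring function that coerces its extras to scalars before returning; the xgb.cv call is illustrative, not the vignette's exact code:

library(xgboost)
data(agaricus.train, package = "xgboost")
scoring_function <- function(max_depth, min_child_weight, subsample) {
  dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
  cv <- xgb.cv(
    params = list(objective = "binary:logistic", eval_metric = "auc",
                  max_depth = max_depth, min_child_weight = min_child_weight,
                  subsample = subsample, nthread = 1),
    data = dtrain, nrounds = 50, nfold = 3,
    early_stopping_rounds = 5, verbose = 0
  )
  list(
    Score   = max(cv$evaluation_log$test_auc_mean),
    nrounds = as.integer(cv$best_iteration)[1]  # force a length-1 scalar
  )
}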
Package: pdp Check: tests New result: ERROR Running ‘tinytest.R’ [81s/81s] Running the tests in ‘tests/tinytest.R’ failed. Complete output: > # Run tests in local environment > if (requireNamespace("tinytest", quietly = TRUE)) { + home <- length(unclass(packageVersion("pdp"))[[1]]) == 4 + tinytest::test_package("pdp", at_home = home) + } [tinytest progress redraws ("0 tests") condensed; each file's final result is shown] test_cats_argument.R.......... 4 tests OK randomForest 4.7-1.2 Type rfNews() to see new features/changes/bug fixes. test_cats_argument.R.......... 5 tests OK 0.3s test_exemplar.R............... 1 tests OK 6ms Attaching package: 'ggplot2' The following object is masked from 'package:randomForest': margin Attaching package: 'ranger' The following object is masked from 'package:randomForest': importance test_get_training_data.R...... 10 tests OK 3.8s test_pkg_C50.R................ 4 tests OK 86ms test_pkg_MASS.R............... 8 tests OK 2.0s Attaching package: 'e1071' The following object is masked from 'package:ggplot2': element test_pkg_e1071.R.............. 8 tests OK 1.2s Attaching package: 'zoo' The following objects are masked from 'package:base': as.Date, as.Date.numeric test_pkg_party.R.............. 12 tests OK 1.2s test_pkg_ranger.R............. 0 tests 3ms test_pkg_stats.R.............. 10 tests OK 1.4s test_pkg_xgboost.R............ 0 tests Error in if (object$params$objective %in% c("reg:gamma", "reg:linear", : argument is of length zero Calls: ... partial.default -> get_task -> get_task.xgb.Booster In addition: Warning messages: 1: In partial.default(fit4, pred.var = "x.4", prob = TRUE, ice = TRUE, : Centering may result in probabilities outside of [0, 1]. 2: glm.fit: algorithm did not converge 3: glm.fit: fitted probabilities numerically 0 or 1 occurred 4: In partial.default(fit2_glm, pred.var = "x.3", prob = TRUE, ice = TRUE, : Centering may result in probabilities outside of [0, 1]. 5: In throw_err_or_depr_msg("Parameter(s) have been removed from this function: ", : Parameter(s) have been removed from this function: params, save_period. This warning will become an error in a future version. 6: In throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version. 7: In throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. 8: In throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. Execution halted Package: personalized Check: whether package can be installed New result: ERROR Installation failed. Package: pmml Check: tests New result: ERROR Running ‘testthat.R’ [42s/42s] Running the tests in ‘tests/testthat.R’ failed.
Complete output: > library(testthat) > library(pmml, quietly = T) > > test_check("pmml") Saving _problems/test_pmml.xgb.Booster-59.R Saving _problems/test_pmml.xgb.Booster-74.R Saving _problems/test_pmml.xgb.Booster-102.R [ FAIL 6 | WARN 26 | SKIP 51 | PASS 356 ] ══ Skipped tests (51) ══════════════════════════════════════════════════════════ • On CRAN (48): 'test_pmml.iForest.R:6:3', 'test_pmml_integration_ARIMA.R:109:3', 'test_pmml_integration_ARIMA.R:184:3', 'test_pmml_integration_ARIMA.R:269:3', 'test_pmml_integration_e1071_svm.R:27:3', 'test_pmml_integration_e1071_svm.R:275:3', 'test_pmml_integration_lm.R:13:3', 'test_pmml_integration_lm.R:123:3', 'test_pmml_integration_lm.R:175:3', 'test_pmml_integration_other.R:120:3', 'test_pmml_integration_other.R:167:3', 'test_pmml_integration_other.R:265:3', 'test_pmml_integration_other.R:439:3', 'test_pmml_integration_other.R:607:3', 'test_pmml_integration_other.R:692:3', 'test_pmml_integration_other.R:849:3', 'test_pmml_integration_other.R:1062:3', 'test_pmml_integration_other.R:1322:3', 'test_pmml_integration_other.R:1442:3', 'test_pmml_integration_other.R:1547:3', 'test_pmml_integration_other.R:1633:3', 'test_pmml_integration_other.R:1822:3', 'test_pmml_integration_transformations.R:19:3', 'test_pmml_integration_transformations.R:319:3', 'test_pmml_integration_transformations.R:354:3', 'test_pmml_integration_transformations.R:377:3', 'test_pmml_integration_transformations.R:407:3', 'test_pmml_integration_transformations.R:469:3', 'test_pmml_integration_xgboost.R:21:3', 'test_schema_validation.R:135:3', 'test_schema_validation.R:183:3', 'test_schema_validation.R:204:3', 'test_schema_validation.R:248:3', 'test_schema_validation.R:343:3', 'test_schema_validation.R:426:3', 'test_schema_validation.R:458:3', 'test_schema_validation.R:500:3', 'test_schema_validation.R:603:3', 'test_schema_validation.R:795:3', 'test_schema_validation.R:933:3', 'test_schema_validation.R:1008:3', 'test_schema_validation.R:1045:3', 'test_schema_validation.R:1077:3', 'test_schema_validation.R:1146:3', 'test_schema_validation.R:1193:3', 'test_schema_validation.R:1429:3', 'test_schema_validation.R:1510:3', 'test_schema_validation.R:1540:3' • skip (2): 'test_pmml_integration_lm.R:147:3', 'test_pmml_integration_transformations.R:439:3' • skip until export issue is resolved (1): 'test_pmml.nnet.R:66:3' ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test_pmml.miningschema.R:29:3'): invalidValueTreatment attribute is exported correctly for xgboost models ── Error in `process.y.margin.and.objective(y, base_margin, objective, params)`: Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic Backtrace: ▆ 1. ├─utils::capture.output(...) at test_pmml.miningschema.R:29:3 2. │ └─base::withVisible(...elt(i)) 3. └─xgboost::xgboost(...) 4. └─xgboost:::process.y.margin.and.objective(...) ── Error ('test_pmml.miningschema.R:286:3'): error is thrown if invalidValueTreatment argument is incorrect ── Error in `process.y.margin.and.objective(y, base_margin, objective, params)`: Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic Backtrace: ▆ 1. ├─utils::capture.output(...) 
at test_pmml.miningschema.R:286:3 2. │ └─base::withVisible(...elt(i)) 3. └─xgboost::xgboost(...) 4. └─xgboost:::process.y.margin.and.objective(...) ── Error ('test_pmml.xgb.Booster.R:18:3'): discrete variables are one-hot-encoded ── Error in `process.y.margin.and.objective(y, base_margin, objective, params)`: Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic Backtrace: ▆ 1. ├─utils::capture.output(...) at test_pmml.xgb.Booster.R:18:3 2. │ └─base::withVisible(...elt(i)) 3. └─xgboost::xgboost(...) 4. └─xgboost:::process.y.margin.and.objective(...) ── Failure ('test_pmml.xgb.Booster.R:52:3'): error is thrown when objective = reg:linear ── `pmml(...)` threw an error with unexpected message. Expected match: "Only the following objectives are supported: multi:softprob, multi:softmax, binary:logistic." Actual message: "argument is of length zero" Backtrace: ▆ 1. ├─testthat::expect_error(...) at test_pmml.xgb.Booster.R:52:3 2. │ └─testthat:::quasi_capture(...) 3. │ ├─testthat (local) .capture(...) 4. │ │ └─base::withCallingHandlers(...) 5. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo)) 6. ├─pmml::pmml(...) 7. └─pmml::pmml.xgb.Booster(...) ── Error ('test_pmml.xgb.Booster.R:70:3'): error is thrown when objective = reg:logistic ── Error in `prescreen.objective(objective)`: Objectives with non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) are not supported in 'xgboost()'. Try 'xgb.train()'. Backtrace: ▆ 1. └─xgboost::xgboost(...) at test_pmml.xgb.Booster.R:70:3 2. └─xgboost:::prescreen.objective(objective) ── Error ('test_pmml.xgb.Booster.R:97:3'): error is thrown when objective = binary:logitraw ── Error in `prescreen.objective(objective)`: Objectives with non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) are not supported in 'xgboost()'. Try 'xgb.train()'. Backtrace: ▆ 1. └─xgboost::xgboost(...) at test_pmml.xgb.Booster.R:97:3 2. └─xgboost:::prescreen.objective(objective) [ FAIL 6 | WARN 26 | SKIP 51 | PASS 356 ] Error: ! Test failures. Execution halted
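The last two pmml failures show a separate restriction: the new xgboost() front end rejects objectives with a non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) and, as its own error message says, such models should be fit through xgb.train() instead. A minimal sketch with simulated data:

library(xgboost)
x <- matrix(rnorm(200), ncol = 2)
y <- rbinom(100, 1, 0.5)
dtrain <- xgb.DMatrix(data = x, label = y)
fit <- xgb.train(
  params = list(objective = "binary:logitraw", nthread = 1),
  data = dtrain, nrounds = 10, verbose = 0
)
head(predict(fit, dtrain))  # raw log-odds, not probabilities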
Package: polle [Old version: 1.6.0, New version: 1.6.1] Check: tests New result: ERROR Running ‘test-all.R’ [158s/154s] Running the tests in ‘tests/test-all.R’ failed. Complete output: > suppressPackageStartupMessages(library("testthat")) > options(Ncpus = 2) > data.table::setDTthreads(2) > test_check("polle") Loading required package: polle Loading required package: SuperLearner Loading required package: nnls Loading required package: gam Loading required package: splines Loading required package: foreach Loaded gam 1.22-6 Super Learner Version: 2.0-29 Package created on 2024-02-06 Saving _problems/test-g_xgboost-10.R Saving _problems/test-g_xgboost-42.R Saving _problems/test-g_xgboost-71.R Error in xgboost::xgboost(data = xgmat, objective = objective, nrounds = ntrees, : argument "y" is missing, with no default [the same error is printed repeatedly as successive tests hit the same call] Saving _problems/test-q_xgboost-32.R [ FAIL 4 | WARN 202 | SKIP 0 | PASS 935 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Error ('test-g_xgboost.R:7:3'): g_xgboost gives the same result as plain xgboost ── Error in `xgboost::xgboost(data = xgboost_data, max_depth = 2, eta = 1, nrounds = 2, objective = "binary:logistic", verbose = FALSE)`: argument "y" is missing, with no default Backtrace: ▆ 1. └─xgboost::xgboost(...) at test-g_xgboost.R:7:3 2. └─xgboost:::process.y.margin.and.objective(...) 3. └─base::NROW(y) ── Error ('test-g_xgboost.R:41:3'): g_xgboost gives the same result as SL.xgboost ── Error in `process.y.margin.and.objective(y, base_margin, objective, params)`: Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic Backtrace: ▆ 1. └─polle::fit_g_functions(...) at test-g_xgboost.R:41:3 2. └─polle:::fit_g_function(history, g_models) 3. └─polle (local) g_model(A = A, H = H, action_set = stage_action_set) 4. └─model$estimate(data) 5. └─private$fitfun(data, ...) 6. ├─base::structure(do.call(private$init.estimate, args), design = summary(xx)) 7. ├─base::do.call(private$init.estimate, args) 8. └─polle (local) ``(x = ``, y = ``) 9. └─xgboost::xgboost(...) 10. └─xgboost:::process.y.margin.and.objective(...) ── Error ('test-g_xgboost.R:70:3'): g_xgboost tunes parameters ───────────────── Error in `terms.formula(formula)`: '.' in formula and no 'data' argument Backtrace: ▆ 1. └─polle::fit_g_functions(...)
at test-g_xgboost.R:70:3 2. └─polle:::fit_g_function(history, g_models) 3. └─polle (local) g_model(A = A, H = H, action_set = stage_action_set) 4. ├─stats::delete.response(terms(formula)) 5. ├─stats::terms(formula) 6. └─stats::terms.formula(formula) ── Error ('test-q_xgboost.R:28:3'): q_xgboost gives the same result as plain xgboost ── Error in `xgboost::xgboost(data = xgboost_data, max_depth = 2, eta = 1, nrounds = 2, verbose = FALSE)`: argument "y" is missing, with no default Backtrace: ▆ 1. └─xgboost::xgboost(...) at test-q_xgboost.R:28:3 2. └─xgboost:::process.y.margin.and.objective(...) 3. └─base::NROW(y) [ FAIL 4 | WARN 202 | SKIP 0 | PASS 935 ] Error: ! Test failures. Execution halted Package: PoweREST Check: examples New result: ERROR Running examples in ‘PoweREST-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: fit_XGBoost > ### Title: Fit with XGBoost > ### Aliases: fit_XGBoost > > ### ** Examples > > data(power_example) > # Fit the local power surface of avg_log2FC_abs between 1 and 2 > avg_log2FC_abs_1_2<-dplyr::filter(power_example,avg_log2FC_abs>1 & avg_log2FC_abs<2) > # Fit the model > bst<-fit_XGBoost(power_example$power,avg_log2FC=power_example$avg_log2FC_abs, + avg_PCT=power_example$mean_pct,replicates=power_example$sample_size) Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: max.depth, verbose. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version. Error in xgboost::xgboost(data = dtrain, max.depth = max.depth, eta = eta, : argument "y" is missing, with no default Calls: fit_XGBoost ... -> process.y.margin.and.objective -> NROW Execution halted Package: predictoR Check: examples New result: ERROR Running examples in ‘predictoR-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: e_xgb_importance > ### Title: Var importance XGBoosting > ### Aliases: e_xgb_importance > > ### ** Examples > > model <- traineR::train.xgboost(Species ~ ., data = iris, nrounds = 20) Warning in check.deprecation(deprecated_train_params, match.call(), ...) : Passed invalid function arguments: eval_metric. These should be passed as a list to argument 'params'. Conversion from argument to 'params' entry will be done automatically, but this behavior will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'watchlist' has been renamed to 'evals'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'feval' has been renamed to 'custom_metric'. This warning will become an error in a future version. 
[1] train-mlogloss:0.736115 [2] train-mlogloss:0.524235 [3] train-mlogloss:0.387996 [4] train-mlogloss:0.294146 [5] train-mlogloss:0.226824 [6] train-mlogloss:0.177835 [7] train-mlogloss:0.141766 [8] train-mlogloss:0.115002 [9] train-mlogloss:0.094791 [10] train-mlogloss:0.078860 [11] train-mlogloss:0.066746 [12] train-mlogloss:0.057845 [13] train-mlogloss:0.050360 [14] train-mlogloss:0.044290 [15] train-mlogloss:0.039567 [16] train-mlogloss:0.035267 [17] train-mlogloss:0.032581 [18] train-mlogloss:0.030403 [19] train-mlogloss:0.028410 [20] train-mlogloss:0.026969 Error in model$prmdt <- `*vtmp*` : ALTLIST classes must provide a Set_elt method [class: XGBAltrepPointerClass, pkg: xgboost] Calls: -> create.model Execution halted Package: radiant.model Check: examples New result: ERROR Running examples in ‘radiant.model-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: gbt > ### Title: Gradient Boosted Trees using XGBoost > ### Aliases: gbt > > ### ** Examples > > ## Not run: > ##D gbt(titanic, "survived", c("pclass", "sex"), lev = "Yes") %>% summary() > ##D gbt(titanic, "survived", c("pclass", "sex")) %>% str() > ## End(Not run) > gbt( + titanic, "survived", c("pclass", "sex"), lev = "Yes", + early_stopping_rounds = 0, nthread = 1 + ) %>% summary() Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. Error in process.y.margin.and.objective(y, base_margin, objective, params) : Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic Calls: %>% ... do.call -> -> process.y.margin.and.objective Execution halted
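The radiant.model failure (and the RIIM failure just below) is the other recurring pattern: the redesigned xgboost() infers the task from the type of 'y', so a numeric 0/1 vector is treated as a regression target and 'binary:logistic' is rejected with the list of regression objectives shown above. Two hedged ways around it, assuming a numeric 0/1 outcome:

library(xgboost)
x <- matrix(rnorm(200), ncol = 2)
y <- rbinom(100, 1, 0.5)
# Option 1: pass a factor so the new interface treats this as binary
# classification (objective inferred from the type of 'y', per the
# error message above)
fit1 <- xgboost(x, factor(y), nrounds = 10)
# Option 2: keep numeric labels and an explicit objective via xgb.train()
fit2 <- xgb.train(
  params = list(objective = "binary:logistic", nthread = 1),
  data = xgb.DMatrix(x, label = y), nrounds = 10, verbose = 0
)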
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. Error in process.y.margin.and.objective(y, base_margin, objective, params) : Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic Calls: IPPW -> -> process.y.margin.and.objective Execution halted Package: SHAPBoost Check: examples New result: ERROR Running examples in ‘SHAPBoost-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: SHAPBoostEstimator-class > ### Title: SHAPBoostEstimator Class > ### Aliases: SHAPBoostEstimator-class SHAPBoostEstimator > > ### ** Examples > > if (requireNamespace("flare", quietly = TRUE)) { + data("eyedata", package = "flare") + shapboost <- SHAPBoostRegressor$new( + max_number_of_features = 1, + evaluator = "lr", + metric = "mae", + siso_ranking_size = 10, + verbose = 0 + ) + X <- as.data.frame(x) + y <- as.data.frame(y) + subset <- shapboost$fit(X, y) + } Iteration:1 Selected variables: Error in .internal.setinfo.xgb.DMatrix(object, name, info) : [00:31:42] xgboost_R.cc:167: Array or matrix has unsupported type. Stack trace: [bt] (0) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x798b1) [0x7f7a978798b1] [bt] (1) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(+0x6f006) [0x7f7a9786f006] [bt] (2) /home/hornik/tmp/CRAN_recheck/Library/xgboost/libs/xgboost.so(XGDMatrixSetInfo_R+0x59) [0x7f7a9786f7d9] [bt] (3) /home/hornik/tmp/R/lib/libR.so(+0x103f6e) [0x7f7ab0103f6e] [bt] (4) /home/hornik/tmp/R/lib/libR.so(+0x1 Calls: ... setinfo.xgb.DMatrix -> .internal.setinfo.xgb.DMatrix Execution halted Package: SHAPforxgboost Check: examples New result: ERROR Running examples in ‘SHAPforxgboost-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: shap.plot.summary > ### Title: SHAP summary plot core function using the long format SHAP > ### values > ### Aliases: shap.plot.summary > > ### ** Examples > > data("iris") > X1 = as.matrix(iris[,-5]) > mod1 = xgboost::xgboost( + data = X1, label = iris$Species, gamma = 0, eta = 1, + lambda = 0, nrounds = 1, verbose = FALSE, nthread = 1) Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. 
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'gamma' has been renamed to 'min_split_loss'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'lambda' has been renamed to 'reg_lambda'. This warning will become an error in a future version. > > # shap.values(model, X_dataset) returns the SHAP > # data matrix and ranked features by mean|SHAP| > shap_values <- shap.values(xgb_model = mod1, X_train = X1) Warning in `[.data.table`(shap_contrib, , `:=`(BIAS, NULL)) : Tried to assign NULL to column 'BIAS', but this column does not exist to remove > shap_values$mean_shap_score setosa versicolor virginica 0.3360065 0.3326459 0.3313476 > shap_values_iris <- shap_values$shap_score > > # shap.prep() returns the long-format SHAP data from either model or > shap_long_iris <- shap.prep(xgb_model = mod1, X_train = X1) Warning in `[.data.table`(shap_contrib, , `:=`(BIAS, NULL)) : Tried to assign NULL to column 'BIAS', but this column does not exist to remove Error in `[.data.table`(setDT(shap$shap_score), , names(shap$mean_shap_score)[1:top_n], : column not found: [NA] Calls: shap.prep ... [ -> [.data.table -> stopf -> raise_condition -> signal Execution halted Package: SHAPforxgboost Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘basic_workflow.Rmd’ using rmarkdown Quitting from basic_workflow.Rmd:62-87 [unnamed-chunk-3] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `[.data.table`: ! column not found: [(Intercept)] --- Backtrace: ▆ 1. └─SHAPforxgboost::shap.prep(fit, X_train = X) 2. ├─...[] 3. └─data.table:::`[.data.table`(...) 4. └─data.table:::stopf(...) 5. └─data.table:::raise_condition(...) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'basic_workflow.Rmd' failed with diagnostics: column not found: [(Intercept)] --- failed re-building ‘basic_workflow.Rmd’ SUMMARY: processing the following file failed: ‘basic_workflow.Rmd’ Error: Vignette re-building failed. Execution halted Package: SuperLearner Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘Guide-to-SuperLearner.Rmd’ using rmarkdown Boston package:MASS R Documentation _H_o_u_s_i_n_g _V_a_l_u_e_s _i_n _S_u_b_u_r_b_s _o_f _B_o_s_t_o_n _D_e_s_c_r_i_p_t_i_o_n: The 'Boston' data frame has 506 rows and 14 columns. _U_s_a_g_e: Boston _F_o_r_m_a_t: This data frame contains the following columns: 'crim' per capita crime rate by town. 'zn' proportion of residential land zoned for lots over 25,000 sq.ft. 'indus' proportion of non-retail business acres per town. 'chas' Charles River dummy variable (= 1 if tract bounds river; 0 otherwise). 'nox' nitrogen oxides concentration (parts per 10 million). 'rm' average number of rooms per dwelling. 'age' proportion of owner-occupied units built prior to 1940. 'dis' weighted mean of distances to five Boston employment centres. 'rad' index of accessibility to radial highways. 'tax' full-value property-tax rate per $10,000. 'ptratio' pupil-teacher ratio by town. 'black' 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town. 
Package: SuperLearner Check: re-building of vignette outputs New result: ERROR Error(s) in re-building vignettes: ... --- re-building ‘Guide-to-SuperLearner.Rmd’ using rmarkdown Boston package:MASS R Documentation Housing Values in Suburbs of Boston Description: The 'Boston' data frame has 506 rows and 14 columns. Usage: Boston Format: This data frame contains the following columns: 'crim' per capita crime rate by town. 'zn' proportion of residential land zoned for lots over 25,000 sq.ft. 'indus' proportion of non-retail business acres per town. 'chas' Charles River dummy variable (= 1 if tract bounds river; 0 otherwise). 'nox' nitrogen oxides concentration (parts per 10 million). 'rm' average number of rooms per dwelling. 'age' proportion of owner-occupied units built prior to 1940. 'dis' weighted mean of distances to five Boston employment centres. 'rad' index of accessibility to radial highways. 'tax' full-value property-tax rate per $10,000. 'ptratio' pupil-teacher ratio by town. 'black' 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town. 'lstat' lower status of the population (percent). 'medv' median value of owner-occupied homes in $1000s. Source: Harrison, D. and Rubinfeld, D.L. (1978) Hedonic prices and the demand for clean air. J. Environ. Economics and Management 5, 81-102. Belsley D.A., Kuh, E. and Welsch, R.E. (1980) Regression Diagnostics. Identifying Influential Data and Sources of Collinearity. New York: Wiley. Quitting from Guide-to-SuperLearner.Rmd:557-590 [xgboost] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error in `FUN()`: ! subscript out of bounds --- Backtrace: ▆ 1. ├─base::system.time(...) 2. └─SuperLearner::CV.SuperLearner(...) 3. └─base::lapply(cvList, "[[", "cvAllSL") ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Error: processing vignette 'Guide-to-SuperLearner.Rmd' failed with diagnostics: subscript out of bounds --- failed re-building ‘Guide-to-SuperLearner.Rmd’ SUMMARY: processing the following file failed: ‘Guide-to-SuperLearner.Rmd’ Error: Vignette re-building failed. Execution halted Package: SuperLearner Check: tests New result: ERROR Running ‘testthat.R’ [67s/63s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(testthat) > library(SuperLearner) Loading required package: nnls Loading required package: gam Loading required package: splines Loading required package: foreach Loaded gam 1.22-6 Super Learner Version: 2.0-29 Package created on 2024-02-06 > > test_check("SuperLearner") Error in xgboost::xgboost(data = xgmat, objective = "binary:logistic", : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = "binary:logistic", : argument "y" is missing, with no default Error in xgboost::xgboost(data = xgmat, objective = "binary:logistic", : argument "y" is missing, with no default Saving _problems/test-XGBoost-25.R Warning: The response y is integer, bartMachine will run regression. Warning: The response y is integer, bartMachine will run regression. Warning: The response y is integer, bartMachine will run regression.
lasso-penalized linear regression with n=506, p=13 At minimum cross-validation error (lambda=0.0222): ------------------------------------------------- Nonzero coefficients: 11 Cross-validation error (deviance): 23.29 R-squared: 0.72 Signal-to-noise ratio: 2.63 Scale estimate (sigma): 4.826 lasso-penalized logistic regression with n=506, p=13 At minimum cross-validation error (lambda=0.0026): ------------------------------------------------- Nonzero coefficients: 12 Cross-validation error (deviance): 0.66 R-squared: 0.48 Signal-to-noise ratio: 0.94 Prediction error: 0.123 lasso-penalized linear regression with n=506, p=13 At minimum cross-validation error (lambda=0.0362): ------------------------------------------------- Nonzero coefficients: 11 Cross-validation error (deviance): 23.30 R-squared: 0.72 Signal-to-noise ratio: 2.62 Scale estimate (sigma): 4.827 lasso-penalized logistic regression with n=506, p=13 At minimum cross-validation error (lambda=0.0016): ------------------------------------------------- Nonzero coefficients: 13 Cross-validation error (deviance): 0.63 R-squared: 0.50 Signal-to-noise ratio: 0.99 Prediction error: 0.132 Call: SuperLearner(Y = Y_gaus, X = X, family = gaussian(), SL.library = c("SL.mean", "SL.biglasso"), cvControl = list(V = 2)) Risk Coef SL.mean_All 84.62063 0.02136708 SL.biglasso_All 26.01864 0.97863292 Call: SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = c("SL.mean", "SL.biglasso"), cvControl = list(V = 2)) Risk Coef SL.mean_All 0.2346857 0 SL.biglasso_All 0.1039122 1 Y 0 1 53 47 $grid NULL $names [1] "SL.randomForest_1" $base_learner [1] "SL.randomForest" $params $params$ntree [1] 100 [1] "SL.randomForest_1" "X" "Y" [4] "create_rf" "data" Call: SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names, cvControl = list(V = 2)) Risk Coef SL.randomForest_1_All 0.045984 1 $grid mtry 1 1 2 4 3 20 $names [1] "SL.randomForest_1" "SL.randomForest_2" "SL.randomForest_3" $base_learner [1] "SL.randomForest" $params list() Call: SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names, cvControl = list(V = 2)) Risk Coef SL.randomForest_1_All 0.06729890 0.93195369 SL.randomForest_2_All 0.07219426 0.00000000 SL.randomForest_3_All 0.07243423 0.06804631 $grid alpha 1 0.00 2 0.25 3 0.50 4 0.75 5 1.00 $names [1] "SL.glmnet_0" "SL.glmnet_0.25" "SL.glmnet_0.5" "SL.glmnet_0.75" [5] "SL.glmnet_1" $base_learner [1] "SL.glmnet" $params list() [1] "SL.glmnet_0" "SL.glmnet_0.25" "SL.glmnet_0.5" "SL.glmnet_0.75" [5] "SL.glmnet_1" Call: SuperLearner(Y = Y, X = X, family = binomial(), SL.library = ls(learners), cvControl = list(V = 2), env = learners) Risk Coef SL.glmnet_0_All 0.08849610 0 SL.glmnet_0.25_All 0.08116755 0 SL.glmnet_0.5_All 0.06977106 1 SL.glmnet_0.75_All 0.07686953 0 SL.glmnet_1_All 0.07730595 0 Call: SuperLearner(Y = Y, X = X_clean, family = binomial(), SL.library = c("SL.mean", svm$names), cvControl = list(V = 3)) Risk Coef SL.mean_All 0.25711218 0.0000000 SL.svm_polynomial_All 0.08463484 0.1443046 SL.svm_radial_All 0.06530910 0.0000000 SL.svm_sigmoid_All 0.05716227 0.8556954 Call: glm(formula = Y ~ ., family = family, data = X, weights = obsWeights, model = model) Coefficients: (Intercept) crim zn indus chas nox 3.646e+01 -1.080e-01 4.642e-02 2.056e-02 2.687e+00 -1.777e+01 rm age dis rad tax ptratio 3.810e+00 6.922e-04 -1.476e+00 3.060e-01 -1.233e-02 -9.527e-01 black lstat 9.312e-03 -5.248e-01 Degrees of Freedom: 505 Total (i.e. 
Null); 492 Residual Null Deviance: 42720 Residual Deviance: 11080 AIC: 3028 Call: glm(formula = Y ~ ., family = family, data = X, weights = obsWeights, model = model) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.646e+01 5.103e+00 7.144 3.28e-12 *** crim -1.080e-01 3.286e-02 -3.287 0.001087 ** zn 4.642e-02 1.373e-02 3.382 0.000778 *** indus 2.056e-02 6.150e-02 0.334 0.738288 chas 2.687e+00 8.616e-01 3.118 0.001925 ** nox -1.777e+01 3.820e+00 -4.651 4.25e-06 *** rm 3.810e+00 4.179e-01 9.116 < 2e-16 *** age 6.922e-04 1.321e-02 0.052 0.958229 dis -1.476e+00 1.995e-01 -7.398 6.01e-13 *** rad 3.060e-01 6.635e-02 4.613 5.07e-06 *** tax -1.233e-02 3.760e-03 -3.280 0.001112 ** ptratio -9.527e-01 1.308e-01 -7.283 1.31e-12 *** black 9.312e-03 2.686e-03 3.467 0.000573 *** lstat -5.248e-01 5.072e-02 -10.347 < 2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for gaussian family taken to be 22.51785) Null deviance: 42716 on 505 degrees of freedom Residual deviance: 11079 on 492 degrees of freedom AIC: 3027.6 Number of Fisher Scoring iterations: 2 Call: glm(formula = Y ~ ., family = family, data = X, weights = obsWeights, model = model) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 10.682635 3.921395 2.724 0.006446 ** crim -0.040649 0.049796 -0.816 0.414321 zn 0.012134 0.010678 1.136 0.255786 indus -0.040715 0.045615 -0.893 0.372078 chas 0.248209 0.653283 0.380 0.703989 nox -3.601085 2.924365 -1.231 0.218170 rm 1.155157 0.374843 3.082 0.002058 ** age -0.018660 0.009319 -2.002 0.045252 * dis -0.518934 0.146286 -3.547 0.000389 *** rad 0.255522 0.061391 4.162 3.15e-05 *** tax -0.009500 0.003107 -3.057 0.002233 ** ptratio -0.409317 0.103191 -3.967 7.29e-05 *** black -0.001451 0.002558 -0.567 0.570418 lstat -0.318436 0.054735 -5.818 5.96e-09 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 669.76 on 505 degrees of freedom Residual deviance: 296.39 on 492 degrees of freedom AIC: 324.39 Number of Fisher Scoring iterations: 7 [1] "coefficients" "residuals" "fitted.values" [4] "effects" "R" "rank" [7] "qr" "family" "linear.predictors" [10] "deviance" "aic" "null.deviance" [13] "iter" "weights" "prior.weights" [16] "df.residual" "df.null" "y" [19] "converged" "boundary" "call" [22] "formula" "terms" "data" [25] "offset" "control" "method" [28] "contrasts" "xlevels" Call: glm(formula = Y ~ ., family = family, data = X, weights = obsWeights, model = model) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.646e+01 5.103e+00 7.144 3.28e-12 *** crim -1.080e-01 3.286e-02 -3.287 0.001087 ** zn 4.642e-02 1.373e-02 3.382 0.000778 *** indus 2.056e-02 6.150e-02 0.334 0.738288 chas 2.687e+00 8.616e-01 3.118 0.001925 ** nox -1.777e+01 3.820e+00 -4.651 4.25e-06 *** rm 3.810e+00 4.179e-01 9.116 < 2e-16 *** age 6.922e-04 1.321e-02 0.052 0.958229 dis -1.476e+00 1.995e-01 -7.398 6.01e-13 *** rad 3.060e-01 6.635e-02 4.613 5.07e-06 *** tax -1.233e-02 3.760e-03 -3.280 0.001112 ** ptratio -9.527e-01 1.308e-01 -7.283 1.31e-12 *** black 9.312e-03 2.686e-03 3.467 0.000573 *** lstat -5.248e-01 5.072e-02 -10.347 < 2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 (Dispersion parameter for gaussian family taken to be 22.51785) Null deviance: 42716 on 505 degrees of freedom Residual deviance: 11079 on 492 degrees of freedom AIC: 3027.6 Number of Fisher Scoring iterations: 2 Call: glm(formula = Y ~ ., family = family, data = X, weights = obsWeights, model = model) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 10.682635 3.921395 2.724 0.006446 ** crim -0.040649 0.049796 -0.816 0.414321 zn 0.012134 0.010678 1.136 0.255786 indus -0.040715 0.045615 -0.893 0.372078 chas 0.248209 0.653283 0.380 0.703989 nox -3.601085 2.924365 -1.231 0.218170 rm 1.155157 0.374843 3.082 0.002058 ** age -0.018660 0.009319 -2.002 0.045252 * dis -0.518934 0.146286 -3.547 0.000389 *** rad 0.255522 0.061391 4.162 3.15e-05 *** tax -0.009500 0.003107 -3.057 0.002233 ** ptratio -0.409317 0.103191 -3.967 7.29e-05 *** black -0.001451 0.002558 -0.567 0.570418 lstat -0.318436 0.054735 -5.818 5.96e-09 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 669.76 on 505 degrees of freedom Residual deviance: 296.39 on 492 degrees of freedom AIC: 324.39 Number of Fisher Scoring iterations: 7 Call: SuperLearner(Y = Y_gaus, X = X, family = gaussian(), SL.library = c("SL.mean", "SL.glm")) Risk Coef SL.mean_All 84.74142 0.0134192 SL.glm_All 23.62549 0.9865808 V1 Min. :-3.921 1st Qu.:17.514 Median :22.124 Mean :22.533 3rd Qu.:27.345 Max. :44.376 Call: SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = c("SL.mean", "SL.glm")) Risk Coef SL.mean_All 0.23580362 0.01315872 SL.glm_All 0.09519266 0.98684128 V1 Min. :0.004942 1st Qu.:0.035424 Median :0.196222 Mean :0.375494 3rd Qu.:0.781687 Max. :0.991313 Got an error, as expected. Got an error, as expected. 
Call:
lda(X, grouping = Y, prior = prior, method = method, tol = tol,
    CV = CV, nu = nu)

Prior probabilities of groups:
        0         1
0.6245059 0.3754941

Group means:
       crim        zn     indus       chas       nox       rm      age      dis
0 5.2936824  4.708861 13.622089 0.05379747 0.5912399 5.985693 77.93228 3.349307
1 0.8191541 22.431579  7.003316 0.09473684 0.4939153 6.781821 53.01211 4.536371
        rad      tax  ptratio    black     lstat
0 11.588608 459.9209 19.19968 340.6392 16.042468
1  6.157895 322.2789 17.21789 383.3425  7.015947

Coefficients of linear discriminants:
                  LD1
crim     0.0012515925
zn       0.0095179029
indus   -0.0166376334
chas     0.1399207112
nox     -2.9934367740
rm       0.5612713068
age     -0.0128420045
dis     -0.3095403096
rad      0.0695027989
tax     -0.0027771271
ptratio -0.2059853828
black    0.0006058031
lstat   -0.0816668897

Call:
lda(X, grouping = Y, prior = prior, method = method, tol = tol,
    CV = CV, nu = nu)

Prior probabilities of groups:
        0         1
0.6245059 0.3754941

Group means:
       crim        zn     indus       chas       nox       rm      age      dis
0 5.2936824  4.708861 13.622089 0.05379747 0.5912399 5.985693 77.93228 3.349307
1 0.8191541 22.431579  7.003316 0.09473684 0.4939153 6.781821 53.01211 4.536371
        rad      tax  ptratio    black     lstat
0 11.588608 459.9209 19.19968 340.6392 16.042468
1  6.157895 322.2789 17.21789 383.3425  7.015947

Coefficients of linear discriminants:
                  LD1
crim     0.0012515925
zn       0.0095179029
indus   -0.0166376334
chas     0.1399207112
nox     -2.9934367740
rm       0.5612713068
age     -0.0128420045
dis     -0.3095403096
rad      0.0695027989
tax     -0.0027771271
ptratio -0.2059853828
black    0.0006058031
lstat   -0.0816668897

Call:
stats::lm(formula = Y ~ ., data = X, weights = obsWeights, model = model)

Coefficients:
(Intercept)         crim           zn        indus         chas          nox
  3.646e+01   -1.080e-01    4.642e-02    2.056e-02    2.687e+00   -1.777e+01
         rm          age          dis          rad          tax      ptratio
  3.810e+00    6.922e-04   -1.476e+00    3.060e-01   -1.233e-02   -9.527e-01
      black        lstat
  9.312e-03   -5.248e-01

Call:
stats::lm(formula = Y ~ ., data = X, weights = obsWeights, model = model)

Residuals:
    Min      1Q  Median      3Q     Max
-15.595  -2.730  -0.518   1.777  26.199

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.646e+01  5.103e+00   7.144 3.28e-12 ***
crim        -1.080e-01  3.286e-02  -3.287 0.001087 **
zn           4.642e-02  1.373e-02   3.382 0.000778 ***
indus        2.056e-02  6.150e-02   0.334 0.738288
chas         2.687e+00  8.616e-01   3.118 0.001925 **
nox         -1.777e+01  3.820e+00  -4.651 4.25e-06 ***
rm           3.810e+00  4.179e-01   9.116  < 2e-16 ***
age          6.922e-04  1.321e-02   0.052 0.958229
dis         -1.476e+00  1.995e-01  -7.398 6.01e-13 ***
rad          3.060e-01  6.635e-02   4.613 5.07e-06 ***
tax         -1.233e-02  3.760e-03  -3.280 0.001112 **
ptratio     -9.527e-01  1.308e-01  -7.283 1.31e-12 ***
black        9.312e-03  2.686e-03   3.467 0.000573 ***
lstat       -5.248e-01  5.072e-02 -10.347  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.745 on 492 degrees of freedom
Multiple R-squared:  0.7406,    Adjusted R-squared:  0.7338
F-statistic: 108.1 on 13 and 492 DF,  p-value: < 2.2e-16

Call:
stats::lm(formula = Y ~ ., data = X, weights = obsWeights, model = model)

Residuals:
     Min       1Q   Median       3Q      Max
-0.80469 -0.23612 -0.03105  0.23080  1.05224

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.6675402  0.3662392   4.553 6.67e-06 ***
crim         0.0003028  0.0023585   0.128 0.897888
zn           0.0023028  0.0009851   2.338 0.019808 *
indus       -0.0040254  0.0044131  -0.912 0.362135
chas         0.0338534  0.0618295   0.548 0.584264
nox         -0.7242540  0.2741160  -2.642 0.008501 **
rm           0.1357981  0.0299915   4.528 7.48e-06 ***
age         -0.0031071  0.0009480  -3.278 0.001121 **
dis         -0.0748924  0.0143135  -5.232 2.48e-07 ***
rad          0.0168160  0.0047612   3.532 0.000451 ***
tax         -0.0006719  0.0002699  -2.490 0.013110 *
ptratio     -0.0498376  0.0093885  -5.308 1.68e-07 ***
black        0.0001466  0.0001928   0.760 0.447370
lstat       -0.0197591  0.0036395  -5.429 8.91e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3405 on 492 degrees of freedom
Multiple R-squared:  0.5192,    Adjusted R-squared:  0.5065
F-statistic: 40.86 on 13 and 492 DF,  p-value: < 2.2e-16

 [1] "coefficients"  "residuals"     "fitted.values" "effects"
 [5] "weights"       "rank"          "assign"        "qr"
 [9] "df.residual"   "xlevels"       "call"          "terms"

Call:
stats::lm(formula = Y ~ ., data = X, weights = obsWeights, model = model)

Residuals:
    Min      1Q  Median      3Q     Max
-15.595  -2.730  -0.518   1.777  26.199

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.646e+01  5.103e+00   7.144 3.28e-12 ***
crim        -1.080e-01  3.286e-02  -3.287 0.001087 **
zn           4.642e-02  1.373e-02   3.382 0.000778 ***
indus        2.056e-02  6.150e-02   0.334 0.738288
chas         2.687e+00  8.616e-01   3.118 0.001925 **
nox         -1.777e+01  3.820e+00  -4.651 4.25e-06 ***
rm           3.810e+00  4.179e-01   9.116  < 2e-16 ***
age          6.922e-04  1.321e-02   0.052 0.958229
dis         -1.476e+00  1.995e-01  -7.398 6.01e-13 ***
rad          3.060e-01  6.635e-02   4.613 5.07e-06 ***
tax         -1.233e-02  3.760e-03  -3.280 0.001112 **
ptratio     -9.527e-01  1.308e-01  -7.283 1.31e-12 ***
black        9.312e-03  2.686e-03   3.467 0.000573 ***
lstat       -5.248e-01  5.072e-02 -10.347  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.745 on 492 degrees of freedom
Multiple R-squared:  0.7406,    Adjusted R-squared:  0.7338
F-statistic: 108.1 on 13 and 492 DF,  p-value: < 2.2e-16

Call:
stats::lm(formula = Y ~ ., data = X, weights = obsWeights, model = model)

Residuals:
     Min       1Q   Median       3Q      Max
-0.80469 -0.23612 -0.03105  0.23080  1.05224

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.6675402  0.3662392   4.553 6.67e-06 ***
crim         0.0003028  0.0023585   0.128 0.897888
zn           0.0023028  0.0009851   2.338 0.019808 *
indus       -0.0040254  0.0044131  -0.912 0.362135
chas         0.0338534  0.0618295   0.548 0.584264
nox         -0.7242540  0.2741160  -2.642 0.008501 **
rm           0.1357981  0.0299915   4.528 7.48e-06 ***
age         -0.0031071  0.0009480  -3.278 0.001121 **
dis         -0.0748924  0.0143135  -5.232 2.48e-07 ***
rad          0.0168160  0.0047612   3.532 0.000451 ***
tax         -0.0006719  0.0002699  -2.490 0.013110 *
ptratio     -0.0498376  0.0093885  -5.308 1.68e-07 ***
black        0.0001466  0.0001928   0.760 0.447370
lstat       -0.0197591  0.0036395  -5.429 8.91e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3405 on 492 degrees of freedom
Multiple R-squared:  0.5192,    Adjusted R-squared:  0.5065
F-statistic: 40.86 on 13 and 492 DF,  p-value: < 2.2e-16

Call:
SuperLearner(Y = Y_gaus, X = X, family = gaussian(), SL.library = c("SL.mean",
    "SL.lm"))

               Risk       Coef
SL.mean_All 84.6696 0.02186479
SL.lm_All   24.3340 0.97813521

       V1
 Min.   :-3.695
 1st Qu.:17.557
 Median :22.128
 Mean   :22.533
 3rd Qu.:27.303
 Max.   :44.189

Call:
SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = c("SL.mean",
    "SL.lm"))

                 Risk Coef
SL.mean_All 0.2349366    0
SL.lm_All   0.1125027    1

       V1
 Min.   :0.0000
 1st Qu.:0.1281
 Median :0.3530
 Mean   :0.3899
 3rd Qu.:0.6091
 Max.   :1.0000
Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = SL.library,
    method = "method.NNLS", verbose = F, cvControl = list(V = 2))

                   Risk       Coef
SL.rpart_All  0.1986827 0.31226655
SL.glmnet_All 0.1803963 0.66105261
SL.mean_All   0.2534500 0.02668084

Error in (function (Y, X, newX, ...)  : bad algorithm
Error in (function (Y, X, newX, ...)  : bad algorithm
Error in (function (Y, X, newX, ...)  : bad algorithm

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = c(SL.library,
    "SL.bad_algorithm"), method = "method.NNLS", verbose = T,
    cvControl = list(V = 2))

                          Risk       Coef
SL.rpart_All         0.1921176 0.08939677
SL.glmnet_All        0.1635548 0.91060323
SL.mean_All          0.2504500 0.00000000
SL.bad_algorithm_All        NA 0.00000000

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = SL.library,
    method = "method.NNLS2", verbose = F, cvControl = list(V = 2))

                   Risk       Coef
SL.rpart_All  0.2279346 0.05397859
SL.glmnet_All 0.1670620 0.94602141
SL.mean_All   0.2504500 0.00000000

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = SL.library,
    method = "method.NNloglik", verbose = F, cvControl = list(V = 2))

                   Risk      Coef
SL.rpart_All  0.5804469 0.1760951
SL.glmnet_All 0.5010294 0.8239049
SL.mean_All   0.6964542 0.0000000

Error in (function (Y, X, newX, ...)  : bad algorithm
Error in (function (Y, X, newX, ...)  : bad algorithm
Error in (function (Y, X, newX, ...)  : bad algorithm

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = c(SL.library,
    "SL.bad_algorithm"), method = "method.NNloglik", verbose = T,
    cvControl = list(V = 2))

                          Risk      Coef
SL.rpart_All                Inf 0.1338597
SL.glmnet_All         0.5027498 0.8661403
SL.mean_All           0.7000679 0.0000000
SL.bad_algorithm_All         NA 0.0000000

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = SL.library,
    method = "method.CC_LS", verbose = F, cvControl = list(V = 2))

                   Risk       Coef
SL.rpart_All  0.2033781 0.16438434
SL.glmnet_All 0.1740498 0.82391928
SL.mean_All   0.2516500 0.01169638

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = SL.library,
    method = "method.CC_nloglik", verbose = F, cvControl = list(V = 2))

                  Risk      Coef
SL.rpart_All  295.8455 0.1014591
SL.glmnet_All 205.3289 0.7867610
SL.mean_All   277.1389 0.1117798

Error in (function (Y, X, newX, ...)  : bad algorithm
Error in (function (Y, X, newX, ...)  : bad algorithm
Error in (function (Y, X, newX, ...)  : bad algorithm

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = c(SL.library,
    "SL.bad_algorithm"), method = "method.CC_nloglik", verbose = T,
    cvControl = list(V = 2))

                         Risk      Coef
SL.rpart_All         212.5569 0.2707202
SL.glmnet_All        193.9384 0.7292798
SL.mean_All          277.1389 0.0000000
SL.bad_algorithm_All       NA 0.0000000

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = SL.library,
    method = "method.AUC", verbose = FALSE, cvControl = list(V = 2))

                   Risk      Coef
SL.rpart_All  0.2533780 0.3333333
SL.glmnet_All 0.1869683 0.3333333
SL.mean_All   0.5550495 0.3333333

Error in (function (Y, X, newX, ...)  : bad algorithm
Error in (function (Y, X, newX, ...)  : bad algorithm
Removing failed learners: SL.bad_algorithm_All
Error in (function (Y, X, newX, ...)  : bad algorithm

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = c(SL.library,
    "SL.bad_algorithm"), method = "method.AUC", verbose = TRUE,
    cvControl = list(V = 2))

                          Risk      Coef
SL.rpart_All         0.2467721 0.2982123
SL.glmnet_All        0.1705535 0.3508938
SL.mean_All          0.5150135 0.3508938
SL.bad_algorithm_All        NA 0.0000000

Call:
qda(X, grouping = Y, prior = prior, method = method, tol = tol,
    CV = CV, nu = nu)

Prior probabilities of groups:
        0         1
0.6245059 0.3754941

Group means:
       crim        zn     indus       chas       nox       rm      age      dis
0 5.2936824  4.708861 13.622089 0.05379747 0.5912399 5.985693 77.93228 3.349307
1 0.8191541 22.431579  7.003316 0.09473684 0.4939153 6.781821 53.01211 4.536371
        rad      tax  ptratio    black     lstat
0 11.588608 459.9209 19.19968 340.6392 16.042468
1  6.157895 322.2789 17.21789 383.3425  7.015947

Call:
qda(X, grouping = Y, prior = prior, method = method, tol = tol,
    CV = CV, nu = nu)

Prior probabilities of groups:
        0         1
0.6245059 0.3754941

Group means:
       crim        zn     indus       chas       nox       rm      age      dis
0 5.2936824  4.708861 13.622089 0.05379747 0.5912399 5.985693 77.93228 3.349307
1 0.8191541 22.431579  7.003316 0.09473684 0.4939153 6.781821 53.01211 4.536371
        rad      tax  ptratio    black     lstat
0 11.588608 459.9209 19.19968 340.6392 16.042468
1  6.157895 322.2789 17.21789 383.3425  7.015947

Y
 0  1
62 38

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = sl_lib,
    cvControl = list(V = 2))

                         Risk       Coef
SL.randomForest_All 0.0384594 0.98145221
SL.mean_All         0.2356000 0.01854779

$grid
NULL

$names
[1] "SL.randomForest_1"

$base_learner
[1] "SL.randomForest"

$params
list()

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names,
    cvControl = list(V = 2))

                            Risk Coef
SL.randomForest_1_All 0.05215472    1

SL.randomForest_1 <- function(...) SL.randomForest(...)
$grid
NULL

$names
[1] "SL.randomForest_1"

$base_learner
[1] "SL.randomForest"

$params
list()

[1] "SL.randomForest_1"
[1] 1

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names,
    cvControl = list(V = 2), env = sl_env)

                            Risk Coef
SL.randomForest_1_All 0.04151372    1

$grid
  mtry
1    1
2    2

$names
[1] "SL.randomForest_1" "SL.randomForest_2"

$base_learner
[1] "SL.randomForest"

$params
list()

[1] "SL.randomForest_1" "SL.randomForest_2"

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names,
    cvControl = list(V = 2), env = sl_env)

                            Risk      Coef
SL.randomForest_1_All 0.05852161 0.8484752
SL.randomForest_2_All 0.05319324 0.1515248

$grid
  mtry
1    1
2    2

$names
[1] "SL.randomForest_1" "SL.randomForest_2"

$base_learner
[1] "SL.randomForest"

$params
list()

[1] "SL.randomForest_1" "SL.randomForest_2"

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names,
    cvControl = list(V = 2), env = sl_env)

                            Risk      Coef
SL.randomForest_1_All 0.04540374 0.2120815
SL.randomForest_2_All 0.03931360 0.7879185

$grid
  mtry nodesize maxnodes
1    1     NULL     NULL
2    2     NULL     NULL

$names
[1] "SL.randomForest_1_NULL_NULL" "SL.randomForest_2_NULL_NULL"

$base_learner
[1] "SL.randomForest"

$params
list()

[1] "SL.randomForest_1_NULL_NULL" "SL.randomForest_2_NULL_NULL"

Call:
SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names,
    cvControl = list(V = 2), env = sl_env)

                                      Risk      Coef
SL.randomForest_1_NULL_NULL_All 0.05083433 0.2589592
SL.randomForest_2_NULL_NULL_All 0.04697238 0.7410408

$grid
  mtry maxnodes
1    1        5
2    2        5
3    1       10
4    2       10
5    1     NULL
6    2     NULL

$names
[1] "SL.randomForest_1_5"    "SL.randomForest_2_5"    "SL.randomForest_1_10"
[4] "SL.randomForest_2_10"   "SL.randomForest_1_NULL" "SL.randomForest_2_NULL"

$base_learner
[1] "SL.randomForest"

$params
list()
"SL.randomForest" $params list() Call: SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names, cvControl = list(V = 2), env = sl_env) Risk Coef SL.randomForest_1_5_All 0.04597977 0.0000000 SL.randomForest_2_5_All 0.03951320 0.0000000 SL.randomForest_1_10_All 0.04337471 0.1117946 SL.randomForest_2_10_All 0.03898477 0.8882054 SL.randomForest_1_NULL_All 0.04395171 0.0000000 SL.randomForest_2_NULL_All 0.03928269 0.0000000 Call: SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names, cvControl = list(V = 2)) Risk Coef SL.randomForest_1_5_All 0.05330062 0.4579034 SL.randomForest_2_5_All 0.05189278 0.0000000 SL.randomForest_1_10_All 0.05263432 0.1614643 SL.randomForest_2_10_All 0.05058144 0.0000000 SL.randomForest_1_NULL_All 0.05415397 0.0000000 SL.randomForest_2_NULL_All 0.05036643 0.3806323 Call: SuperLearner(Y = Y, X = X, family = binomial(), SL.library = create_rf$names, cvControl = list(V = 2)) Risk Coef SL.randomForest_1_5_All 0.05978213 0 SL.randomForest_2_5_All 0.05628852 0 SL.randomForest_1_10_All 0.05751494 0 SL.randomForest_2_10_All 0.05889935 0 SL.randomForest_1_NULL_All 0.05629605 1 SL.randomForest_2_NULL_All 0.05807645 0 Ranger result Call: ranger::ranger(`_Y` ~ ., data = cbind(`_Y` = Y, X), num.trees = num.trees, mtry = mtry, min.node.size = min.node.size, replace = replace, sample.fraction = sample.fraction, case.weights = obsWeights, write.forest = write.forest, probability = probability, num.threads = num.threads, verbose = verbose) Type: Regression Number of trees: 500 Sample size: 506 Number of independent variables: 13 Mtry: 3 Target node size: 5 Variable importance mode: none Splitrule: variance OOB prediction error (MSE): 10.57547 R squared (OOB): 0.8749748 Ranger result Call: ranger::ranger(`_Y` ~ ., data = cbind(`_Y` = Y, X), num.trees = num.trees, mtry = mtry, min.node.size = min.node.size, replace = replace, sample.fraction = sample.fraction, case.weights = obsWeights, write.forest = write.forest, probability = probability, num.threads = num.threads, verbose = verbose) Type: Probability estimation Number of trees: 500 Sample size: 506 Number of independent variables: 13 Mtry: 3 Target node size: 1 Variable importance mode: none Splitrule: gini OOB prediction error (Brier s.): 0.08262419 Ranger result Call: ranger::ranger(`_Y` ~ ., data = cbind(`_Y` = Y, X), num.trees = num.trees, mtry = mtry, min.node.size = min.node.size, replace = replace, sample.fraction = sample.fraction, case.weights = obsWeights, write.forest = write.forest, probability = probability, num.threads = num.threads, verbose = verbose) Type: Regression Number of trees: 500 Sample size: 506 Number of independent variables: 13 Mtry: 3 Target node size: 5 Variable importance mode: none Splitrule: variance OOB prediction error (MSE): 10.46443 R squared (OOB): 0.8762876 Ranger result Call: ranger::ranger(`_Y` ~ ., data = cbind(`_Y` = Y, X), num.trees = num.trees, mtry = mtry, min.node.size = min.node.size, replace = replace, sample.fraction = sample.fraction, case.weights = obsWeights, write.forest = write.forest, probability = probability, num.threads = num.threads, verbose = verbose) Type: Probability estimation Number of trees: 500 Sample size: 506 Number of independent variables: 13 Mtry: 3 Target node size: 1 Variable importance mode: none Splitrule: gini OOB prediction error (Brier s.): 0.08395011 Generalized Linear Model of class 'speedglm': Call: speedglm::speedglm(formula = Y ~ ., data = X, family = family, weights = obsWeights, maxit = maxit, k = k) 
Generalized Linear Model of class 'speedglm':

Call:  speedglm::speedglm(formula = Y ~ ., data = X, family = family,
    weights = obsWeights, maxit = maxit, k = k)

Coefficients:
(Intercept)         crim           zn        indus         chas          nox
  3.646e+01   -1.080e-01    4.642e-02    2.056e-02    2.687e+00   -1.777e+01
         rm          age          dis          rad          tax      ptratio
  3.810e+00    6.922e-04   -1.476e+00    3.060e-01   -1.233e-02   -9.527e-01
      black        lstat
  9.312e-03   -5.248e-01

Generalized Linear Model of class 'speedglm':

Call:  speedglm::speedglm(formula = Y ~ ., data = X, family = family,
    weights = obsWeights, maxit = maxit, k = k)

Coefficients:
------------------------------------------------------------------
              Estimate Std. Error  t value  Pr(>|t|)
(Intercept)  3.646e+01   5.103459   7.1441 3.283e-12 ***
crim        -1.080e-01   0.032865  -3.2865 1.087e-03 **
zn           4.642e-02   0.013727   3.3816 7.781e-04 ***
indus        2.056e-02   0.061496   0.3343 7.383e-01
chas         2.687e+00   0.861580   3.1184 1.925e-03 **
nox         -1.777e+01   3.819744  -4.6513 4.246e-06 ***
rm           3.810e+00   0.417925   9.1161 1.979e-18 ***
age          6.922e-04   0.013210   0.0524 9.582e-01
dis         -1.476e+00   0.199455  -7.3980 6.013e-13 ***
rad          3.060e-01   0.066346   4.6129 5.071e-06 ***
tax         -1.233e-02   0.003761  -3.2800 1.112e-03 **
ptratio     -9.527e-01   0.130827  -7.2825 1.309e-12 ***
black        9.312e-03   0.002686   3.4668 5.729e-04 ***
lstat       -5.248e-01   0.050715 -10.3471 7.777e-23 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
null df: 505; null deviance: 42716.3;
residuals df: 492; residuals deviance: 11078.78;
# obs.: 506; # non-zero weighted obs.: 506;
AIC: 3027.609; log Likelihood: -1498.804;
RSS: 11078.8; dispersion: 22.51785; iterations: 1;
rank: 14; max tolerance: 1e+00; convergence: FALSE.

Generalized Linear Model of class 'speedglm':

Call:  speedglm::speedglm(formula = Y ~ ., data = X, family = family,
    weights = obsWeights, maxit = maxit, k = k)

Coefficients:
------------------------------------------------------------------
             Estimate Std. Error z value  Pr(>|z|)
(Intercept) 10.682635   3.921395  2.7242 6.446e-03 **
crim        -0.040649   0.049796 -0.8163 4.143e-01
zn           0.012134   0.010678  1.1364 2.558e-01
indus       -0.040715   0.045615 -0.8926 3.721e-01
chas         0.248209   0.653283  0.3799 7.040e-01
nox         -3.601085   2.924365 -1.2314 2.182e-01
rm           1.155157   0.374843  3.0817 2.058e-03 **
age         -0.018660   0.009319 -2.0023 4.525e-02 *
dis         -0.518934   0.146286 -3.5474 3.891e-04 ***
rad          0.255522   0.061391  4.1622 3.152e-05 ***
tax         -0.009500   0.003107 -3.0574 2.233e-03 **
ptratio     -0.409317   0.103191 -3.9666 7.291e-05 ***
black       -0.001451   0.002558 -0.5674 5.704e-01
lstat       -0.318436   0.054735 -5.8178 5.964e-09 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
null df: 505; null deviance: 669.76;
residuals df: 492; residuals deviance: 296.39;
# obs.: 506; # non-zero weighted obs.: 506;
AIC: 324.3944; log Likelihood: -148.1972;
RSS: 1107.5; dispersion: 1; iterations: 7;
rank: 14; max tolerance: 7.55e-12; convergence: TRUE.

Generalized Linear Model of class 'speedglm':

Call:  speedglm::speedglm(formula = Y ~ ., data = X, family = family,
    weights = obsWeights, maxit = maxit, k = k)

Coefficients:
------------------------------------------------------------------
              Estimate Std. Error  t value  Pr(>|t|)
(Intercept)  3.646e+01   5.103459   7.1441 3.283e-12 ***
crim        -1.080e-01   0.032865  -3.2865 1.087e-03 **
zn           4.642e-02   0.013727   3.3816 7.781e-04 ***
indus        2.056e-02   0.061496   0.3343 7.383e-01
chas         2.687e+00   0.861580   3.1184 1.925e-03 **
nox         -1.777e+01   3.819744  -4.6513 4.246e-06 ***
rm           3.810e+00   0.417925   9.1161 1.979e-18 ***
age          6.922e-04   0.013210   0.0524 9.582e-01
dis         -1.476e+00   0.199455  -7.3980 6.013e-13 ***
rad          3.060e-01   0.066346   4.6129 5.071e-06 ***
tax         -1.233e-02   0.003761  -3.2800 1.112e-03 **
ptratio     -9.527e-01   0.130827  -7.2825 1.309e-12 ***
black        9.312e-03   0.002686   3.4668 5.729e-04 ***
lstat       -5.248e-01   0.050715 -10.3471 7.777e-23 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
null df: 505; null deviance: 42716.3;
residuals df: 492; residuals deviance: 11078.78;
# obs.: 506; # non-zero weighted obs.: 506;
AIC: 3027.609; log Likelihood: -1498.804;
RSS: 11078.8; dispersion: 22.51785; iterations: 1;
rank: 14; max tolerance: 1e+00; convergence: FALSE.

Generalized Linear Model of class 'speedglm':

Call:  speedglm::speedglm(formula = Y ~ ., data = X, family = family,
    weights = obsWeights, maxit = maxit, k = k)

Coefficients:
------------------------------------------------------------------
             Estimate Std. Error z value  Pr(>|z|)
(Intercept) 10.682635   3.921395  2.7242 6.446e-03 **
crim        -0.040649   0.049796 -0.8163 4.143e-01
zn           0.012134   0.010678  1.1364 2.558e-01
indus       -0.040715   0.045615 -0.8926 3.721e-01
chas         0.248209   0.653283  0.3799 7.040e-01
nox         -3.601085   2.924365 -1.2314 2.182e-01
rm           1.155157   0.374843  3.0817 2.058e-03 **
age         -0.018660   0.009319 -2.0023 4.525e-02 *
dis         -0.518934   0.146286 -3.5474 3.891e-04 ***
rad          0.255522   0.061391  4.1622 3.152e-05 ***
tax         -0.009500   0.003107 -3.0574 2.233e-03 **
ptratio     -0.409317   0.103191 -3.9666 7.291e-05 ***
black       -0.001451   0.002558 -0.5674 5.704e-01
lstat       -0.318436   0.054735 -5.8178 5.964e-09 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
null df: 505; null deviance: 669.76;
residuals df: 492; residuals deviance: 296.39;
# obs.: 506; # non-zero weighted obs.: 506;
AIC: 324.3944; log Likelihood: -148.1972;
RSS: 1107.5; dispersion: 1; iterations: 7;
rank: 14; max tolerance: 7.55e-12; convergence: TRUE.
Linear Regression Model of class 'speedlm':

Call:  speedglm::speedlm(formula = Y ~ ., data = X, weights = obsWeights)

Coefficients:
(Intercept)         crim           zn        indus         chas          nox
  3.646e+01   -1.080e-01    4.642e-02    2.056e-02    2.687e+00   -1.777e+01
         rm          age          dis          rad          tax      ptratio
  3.810e+00    6.922e-04   -1.476e+00    3.060e-01   -1.233e-02   -9.527e-01
      black        lstat
  9.312e-03   -5.248e-01

Linear Regression Model of class 'speedlm':

Call:  speedglm::speedlm(formula = Y ~ ., data = X, weights = obsWeights)

Coefficients:
------------------------------------------------------------------
                  coef       se       t   p.value
(Intercept)  36.459488 5.103459   7.144 3.283e-12 ***
crim         -0.108011 0.032865  -3.287 1.087e-03 **
zn            0.046420 0.013727   3.382 7.781e-04 ***
indus         0.020559 0.061496   0.334 7.383e-01
chas          2.686734 0.861580   3.118 1.925e-03 **
nox         -17.766611 3.819744  -4.651 4.246e-06 ***
rm            3.809865 0.417925   9.116 1.979e-18 ***
age           0.000692 0.013210   0.052 9.582e-01
dis          -1.475567 0.199455  -7.398 6.013e-13 ***
rad           0.306049 0.066346   4.613 5.071e-06 ***
tax          -0.012335 0.003761  -3.280 1.112e-03 **
ptratio      -0.952747 0.130827  -7.283 1.309e-12 ***
black         0.009312 0.002686   3.467 5.729e-04 ***
lstat        -0.524758 0.050715 -10.347 7.777e-23 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
Residual standard error: 4.745298 on 492 degrees of freedom; observations: 506;
R^2: 0.741; adjusted R^2: 0.734;
F-statistic: 108.1 on 13 and 492 df; p-value: 0.

Linear Regression Model of class 'speedlm':

Call:  speedglm::speedlm(formula = Y ~ ., data = X, weights = obsWeights)

Coefficients:
------------------------------------------------------------------
                 coef       se      t   p.value
(Intercept)  1.667540 0.366239  4.553 6.670e-06 ***
crim         0.000303 0.002358  0.128 8.979e-01
zn           0.002303 0.000985  2.338 1.981e-02 *
indus       -0.004025 0.004413 -0.912 3.621e-01
chas         0.033853 0.061829  0.548 5.843e-01
nox         -0.724254 0.274116 -2.642 8.501e-03 **
rm           0.135798 0.029992  4.528 7.483e-06 ***
age         -0.003107 0.000948 -3.278 1.121e-03 **
dis         -0.074892 0.014313 -5.232 2.482e-07 ***
rad          0.016816 0.004761  3.532 4.515e-04 ***
tax         -0.000672 0.000270 -2.490 1.311e-02 *
ptratio     -0.049838 0.009389 -5.308 1.677e-07 ***
black        0.000147 0.000193  0.760 4.474e-01
lstat       -0.019759 0.003639 -5.429 8.912e-08 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
Residual standard error: 0.340537 on 492 degrees of freedom; observations: 506;
R^2: 0.519; adjusted R^2: 0.506;
F-statistic: 40.86 on 13 and 492 df; p-value: 0.

Linear Regression Model of class 'speedlm':

Call:  speedglm::speedlm(formula = Y ~ ., data = X, weights = obsWeights)

Coefficients:
------------------------------------------------------------------
                  coef       se       t   p.value
(Intercept)  36.459488 5.103459   7.144 3.283e-12 ***
crim         -0.108011 0.032865  -3.287 1.087e-03 **
zn            0.046420 0.013727   3.382 7.781e-04 ***
indus         0.020559 0.061496   0.334 7.383e-01
chas          2.686734 0.861580   3.118 1.925e-03 **
nox         -17.766611 3.819744  -4.651 4.246e-06 ***
rm            3.809865 0.417925   9.116 1.979e-18 ***
age           0.000692 0.013210   0.052 9.582e-01
dis          -1.475567 0.199455  -7.398 6.013e-13 ***
rad           0.306049 0.066346   4.613 5.071e-06 ***
tax          -0.012335 0.003761  -3.280 1.112e-03 **
ptratio      -0.952747 0.130827  -7.283 1.309e-12 ***
black         0.009312 0.002686   3.467 5.729e-04 ***
lstat        -0.524758 0.050715 -10.347 7.777e-23 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
Residual standard error: 4.745298 on 492 degrees of freedom; observations: 506;
R^2: 0.741; adjusted R^2: 0.734;
F-statistic: 108.1 on 13 and 492 df; p-value: 0.

Linear Regression Model of class 'speedlm':

Call:  speedglm::speedlm(formula = Y ~ ., data = X, weights = obsWeights)

Coefficients:
------------------------------------------------------------------
                 coef       se      t   p.value
(Intercept)  1.667540 0.366239  4.553 6.670e-06 ***
crim         0.000303 0.002358  0.128 8.979e-01
zn           0.002303 0.000985  2.338 1.981e-02 *
indus       -0.004025 0.004413 -0.912 3.621e-01
chas         0.033853 0.061829  0.548 5.843e-01
nox         -0.724254 0.274116 -2.642 8.501e-03 **
rm           0.135798 0.029992  4.528 7.483e-06 ***
age         -0.003107 0.000948 -3.278 1.121e-03 **
dis         -0.074892 0.014313 -5.232 2.482e-07 ***
rad          0.016816 0.004761  3.532 4.515e-04 ***
tax         -0.000672 0.000270 -2.490 1.311e-02 *
ptratio     -0.049838 0.009389 -5.308 1.677e-07 ***
black        0.000147 0.000193  0.760 4.474e-01
lstat       -0.019759 0.003639 -5.429 8.912e-08 ***
-------------------------------------------------------------------
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

---
Residual standard error: 0.340537 on 492 degrees of freedom; observations: 506;
R^2: 0.519; adjusted R^2: 0.506;
F-statistic: 40.86 on 13 and 492 df; p-value: 0.

[ FAIL 1 | WARN 34 | SKIP 9 | PASS 67 ]

══ Skipped tests (9) ═══════════════════════════════════════════════════════════
• empty test (9): , , , , , , , ,

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-XGBoost.R:25:1'): (code run outside of `test_that()`) ──────────
Error in `UseMethod("predict")`: no applicable method for 'predict' applied to
an object of class "NULL"
Backtrace:
    ▆
 1. ├─stats::predict(sl, X) at test-XGBoost.R:25:1
 2. └─SuperLearner::predict.SuperLearner(sl, X)
 3.   ├─base::do.call(...)
 4.   └─stats::predict(...)

[ FAIL 1 | WARN 34 | SKIP 9 | PASS 67 ]
Error: ! Test failures.
Execution halted
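In the runs above, a broken wrapper surfaces either as an `NA` risk with zero weight (`SL.bad_algorithm_All`) or, as in 'test-XGBoost.R:25:1', as a `NULL` fit that only errors at predict() time. A minimal sketch for spotting such failures in a fitted object, on toy data; `errorsInLibrary` is SuperLearner's per-learner error record, assuming the current object layout:

    library(SuperLearner)
    set.seed(1)
    X <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
    Y <- rbinom(100, 1, plogis(X$x1))
    sl <- SuperLearner(Y = Y, X = X, family = binomial(),
                       SL.library = c("SL.mean", "SL.glm"),
                       cvControl = list(V = 2))
    which(is.na(sl$cvRisk))  # learners whose cross-validated fit failed
    sl$errorsInLibrary       # per-learner error indicators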
Package: targeted
Check: examples
New result: ERROR
Running examples in ‘targeted-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: learner_xgboost
> ### Title: Construct a learner
> ### Aliases: learner_xgboost
>
> ### ** Examples
>
> n <- 1e3
> x1 <- rnorm(n, sd = 2)
> x2 <- rnorm(n)
> lp <- x2*x1 + cos(x1)
> yb <- rbinom(n, 1, lava::expit(lp))
> y <- lp + rnorm(n, sd = 0.5**.5)
> d0 <- data.frame(y, yb, x1, x2)
>
> # regression
> lr <- learner_xgboost(y ~ x1 + x2, nrounds = 5)
> lr$estimate(d0)
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized),  :
  Passed unrecognized parameters: verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
  Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
  Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
  Parameter 'lambda' has been renamed to 'reg_lambda'. This warning will become an error in a future version.
Error in (function (x, y, objective = NULL, nrounds = 100L, max_depth = NULL,  :
  argument "y" is missing, with no default
Calls: ... -> process.y.margin.and.objective -> NROW
Execution halted
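The warnings spell out the migration path: in the renamed xgboost() interface (visible in the deparsed signature of the error message), `data`/`label`/`eta`/`lambda` become `x`/`y`/`learning_rate`/`reg_lambda`, and `verbose` is no longer accepted. A hedged sketch of the equivalent direct call for the example's regression fit, not targeted's internal fix; the hyperparameter values here are illustrative:

    # old style, as still emitted by learner_xgboost:
    #   xgboost(data = <matrix>, label = y, eta = ..., lambda = ..., verbose = 0)
    fit <- xgboost::xgboost(
      x = as.matrix(d0[, c("x1", "x2")]),  # 'data' has been renamed to 'x'
      y = d0$y,                            # 'label' has been renamed to 'y'
      learning_rate = 0.3,                 # 'eta' has been renamed
      reg_lambda = 1,                      # 'lambda' has been renamed
      nrounds = 5
    )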
Package: targeted
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
...
--- re-building ‘predictionclass.Rmd’ using rmarkdown

Quitting from predictionclass.Rmd:143-145 [unnamed-chunk-5]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error:
! argument "y" is missing, with no default
---
Backtrace:
    ▆
 1. └─lr_xgboost$estimate(data = pbc)
 2.   └─private$fitfun(data, ...)
 3.     ├─base::structure(do.call(private$init.estimate, args), design = summary(xx))
 4.     ├─base::do.call(private$init.estimate, args)
 5.     └─targeted (local) ``(...)
 6.       ├─base::do.call(...)
 7.       └─xgboost (local) ``(...)
 8.         └─xgboost:::process.y.margin.and.objective(...)
 9.           └─base::NROW(y)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'predictionclass.Rmd' failed with diagnostics:
argument "y" is missing, with no default
--- failed re-building ‘predictionclass.Rmd’

--- re-building ‘riskregression.Rmd’ using rmarkdown
--- finished re-building ‘riskregression.Rmd’

SUMMARY: processing the following file failed:
  ‘predictionclass.Rmd’

Error: Vignette re-building failed.
Execution halted

Package: targeted
Check: tests
New result: ERROR
Running ‘tinytest.R’ [15s/15s]
Running the tests in ‘tests/tinytest.R’ failed.
Complete output:
> if (requireNamespace("tinytest", quietly = TRUE)) {
+
+   ## future::plan("sequential")
+   Sys.setenv("OMP_THREAD_LIMIT" = 2)
+   options(Ncpus = 1)
+   data.table::setDTthreads(1)
+
+   tinytest::test_package("targeted")
+ }
test_cate.R................... 36 tests OK 3.5s
Attaching package: 'survival'

The following object is masked from 'package:future':

    cluster

test_cumhaz.R................. 77 tests OK 3.7s
test_design.R................. 64 tests OK 0.1s
test_expand_list.R............ 18 tests OK 6ms
test_intersection_sw.R........ 0 tests 6ms
test_intsurv.R................ 3 tests OK 23ms
test_learner.R................ 51 tests OK 0.5s
test_learner_expand_grid.R.... 8 tests OK 33ms
test_learner_glm.R............ 8 tests OK 74ms
test_learner_glmnet.R......... 8 tests OK 1.5s
test_learner_grf.R............ 10 tests OK 1.4s
test_learner_hal.R............ 3 tests OK 0.4s
test_learner_isoreg.R......... 3 tests OK 9ms
test_learner_mars.R........... 5 tests OK 0.2s
test_learner_naivebayes.R..... 10 tests OK 0.2s
test_learner_sl.R............. 5 tests OK 0.5s
test_learner_stratify.R....... 5 tests OK 50ms
test_learner_svm.R............ 10 tests OK 0.3s
test_learner_xgboost.R........ 0 tests
Error in (function (x, y, objective = NULL, nrounds = 100L, max_depth = NULL,  :
  argument "y" is missing, with no default
Calls: ... -> process.y.margin.and.objective -> NROW
In addition: Warning messages:
1: In throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized),  :
  Passed unrecognized parameters: verbose. This warning will become an error in a future version.
2: In throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
  Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
3: In throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
  Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version.
4: In throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
  Parameter 'lambda' has been renamed to 'reg_lambda'. This warning will become an error in a future version.
Execution halted
Package: tidysdm
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
...
--- re-building ‘a0_tidysdm_overview.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘a0_tidysdm_overview.Rmd’

--- re-building ‘a1_palaeodata_application.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘a1_palaeodata_application.Rmd’

--- re-building ‘a2_tidymodels_additions.Rmd’ using rmarkdown

Quitting from a2_tidymodels_additions.Rmd:65-69 [vip]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `xgb.get.handle()`:
! 'xgb.Booster' object is corrupted or is from an incompatible XGBoost version.
---
Backtrace:
    ▆
  1. ├─DALEX::model_parts(explainer = explainer_lacerta_ens)
  2. │ ├─ingredients::feature_importance(...)
  3. │ └─ingredients:::feature_importance.explainer(...)
  4. │   └─ingredients:::feature_importance.default(...)
  5. │     └─base::replicate(B, loss_after_permutation())
  6. │       └─base::sapply(...)
  7. │         └─base::lapply(X = X, FUN = FUN, ...)
  8. │           └─ingredients (local) FUN(X[[i]], ...)
  9. │             └─ingredients (local) loss_after_permutation()
 10. │               ├─DALEX (local) loss_function(observed, predict_function(x, sampled_data))
 11. │               │ └─base::tapply(observed, predicted, sum)
 12. │               └─tidysdm (local) predict_function(x, sampled_data)
 13. │                 ├─stats::predict(model, newdata)
 14. │                 └─tidysdm:::predict.simple_ensemble(model, newdata)
 15. │                   └─base::lapply(...)
 16. │                     ├─stats (local) FUN(X[[i]], ...)
 17. │                     └─workflows:::predict.workflow(X[[i]], ...)
 18. │                       ├─stats::predict(fit, new_data, type = type, opts = opts, ...)
 19. │                       └─parsnip::predict.model_fit(...)
 20. │                         ├─parsnip:::predict_classprob(...)
 21. │                         └─parsnip::predict_classprob.model_fit(...)
 22. │                           └─rlang::eval_tidy(pred_call)
 23. └─parsnip::xgb_predict(object = object$fit, new_data = new_data)
 24.   ├─stats::predict(object, new_data, ...)
 25.   └─xgboost:::predict.xgb.Booster(object, new_data, ...)
 26.     └─xgboost:::xgb.best_iteration(object)
 27.       └─xgboost::xgb.attr(bst, "best_iteration")
 28.         └─xgboost:::xgb.get.handle(object)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'a2_tidymodels_additions.Rmd' failed with diagnostics:
'xgb.Booster' object is corrupted or is from an incompatible XGBoost version.
--- failed re-building ‘a2_tidymodels_additions.Rmd’

--- re-building ‘a3_troubleshooting.Rmd’ using rmarkdown
[WARNING] Deprecated: --highlight-style. Use --syntax-highlighting instead.
--- finished re-building ‘a3_troubleshooting.Rmd’

SUMMARY: processing the following file failed:
  ‘a2_tidymodels_additions.Rmd’

Error: Vignette re-building failed.
Execution halted
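The "corrupted or from an incompatible XGBoost version" handle error typically points at a booster that was serialized with R's own mechanisms (for instance inside an .rds/.rda fixture, which appears to be how the pre-fitted ensembles used here are shipped) under an older xgboost. A hedged sketch of the version-stable route xgboost itself provides; `bst` stands in for a freshly fitted booster:

    # xgboost's own serializers write a model file rather than a raw R handle
    xgboost::xgb.save(bst, "model.ubj")
    bst2 <- xgboost::xgb.load("model.ubj")

Boosters that exist only inside saved R data generally need to be re-fitted (or re-exported as above) under the new version.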
Package: tidysdm
Check: tests
New result: ERROR
Running ‘spelling.R’ [0s/0s]
Running ‘testthat.R’ [65s/65s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> library(testthat)
> library(tidysdm)
Loading required package: tidymodels
── Attaching packages ────────────────────────────────────── tidymodels 1.4.1 ──
✔ broom        1.0.10     ✔ recipes      1.3.1
✔ dials        1.4.2      ✔ rsample      1.3.1
✔ dplyr        1.1.4      ✔ tailor       0.1.0
✔ ggplot2      4.0.1      ✔ tidyr        1.3.1
✔ infer        1.0.9      ✔ tune         2.0.1
✔ modeldata    1.5.1      ✔ workflows    1.3.0
✔ parsnip      1.4.0      ✔ workflowsets 1.1.1
✔ purrr        1.2.0      ✔ yardstick    1.3.2
── Conflicts ───────────────────────────────────────── tidymodels_conflicts() ──
✖ purrr::discard() masks scales::discard()
✖ dplyr::filter()  masks stats::filter()
✖ dplyr::lag()     masks stats::lag()
✖ recipes::step()  masks stats::step()
Loading required package: spatialsample
>
> test_check("tidysdm")

Attaching package: 'plotrix'

The following object is masked from 'package:scales':

    rescale

i Creating pre-processing data to finalize 1 unknown parameter: "mtry"
Saving _problems/test_explain_tidysdm-121.R
Saving _problems/test_overlap_niche-14.R
Saving _problems/test_predict_raster-8.R
i Creating pre-processing data to finalize 1 unknown parameter: "mtry"
[ FAIL 3 | WARN 0 | SKIP 1 | PASS 314 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test_filter_collinear.R:2:1'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_explain_tidysdm.R:121:3'): explain_tidysdm works with response provided directly ──
Error in `xgb.get.handle(model)`: 'xgb.Booster' object is corrupted or is from
an incompatible XGBoost version.
Backtrace:
    ▆
  1. ├─testthat::expect_true(all.equal(test_explainer, test_explainer_y)) at test_explain_tidysdm.R:121:3
  2. │ └─testthat::quasi_label(enquo(object), label)
  3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
  4. ├─base::all.equal(test_explainer, test_explainer_y)
  5. └─base::all.equal.default(test_explainer, test_explainer_y)
  6.   └─base::all.equal.list(target, current, ...)
  7.     ├─base::all.equal(...)
  8.     └─base::all.equal.default(...)
  9.       └─base::all.equal.list(target, current, ...)
 10.         ├─base::all.equal(...)
 11.         └─base::all.equal.list(...)
 12.           ├─base::all.equal(...)
 13.           └─base::all.equal.default(...)
 14.             └─base::all.equal.list(target, current, ...)
 15.               ├─base::all.equal(...)
 16.               └─base::all.equal.default(...)
 17.                 └─base::all.equal.list(target, current, ...)
 18.                   ├─base::all.equal(...)
 19.                   └─base::all.equal.default(...)
 20.                     └─base::all.equal.list(target, current, ...)
 21.                       ├─base::all.equal(...)
 22.                       └─base::all.equal.default(...)
 23.                         └─base::all.equal.list(target, current, ...)
 24.                           └─base::attr.all.equal(target, current, ...)
 25.                             ├─base::length(target)
 26.                             └─xgboost:::length.xgb.Booster(target)
 27.                               └─xgboost::xgb.get.num.boosted.rounds(x)
 28.                                 └─xgboost:::xgb.get.handle(model)
── Error ('test_overlap_niche.R:14:3'): niche_overlap quantifies difference between rasters ──
Error in `xgb.get.handle(object)`: 'xgb.Booster' object is corrupted or is from
an incompatible XGBoost version.
Backtrace:
    ▆
  1. ├─tidysdm::predict_raster(lacerta_ensemble, climate_present) at test_overlap_niche.R:14:3
  2. ├─tidysdm:::predict_raster.default(lacerta_ensemble, climate_present)
  3. │ ├─stats::predict(object, rast_sub_values, ...)
  4. │ └─tidysdm:::predict.simple_ensemble(object, rast_sub_values, ...)
  5. │   └─base::lapply(...)
  6. │     ├─stats (local) FUN(X[[i]], ...)
  7. │     └─workflows:::predict.workflow(X[[i]], ...)
  8. │       ├─stats::predict(fit, new_data, type = type, opts = opts, ...)
  9. │       └─parsnip::predict.model_fit(...)
 10. │         ├─parsnip:::predict_classprob(...)
 11. │         └─parsnip::predict_classprob.model_fit(...)
 12. │           └─rlang::eval_tidy(pred_call)
 13. └─parsnip::xgb_predict(object = object$fit, new_data = new_data)
 14.   ├─stats::predict(object, new_data, ...)
 15.   └─xgboost:::predict.xgb.Booster(object, new_data, ...)
 16.     └─xgboost:::xgb.best_iteration(object)
 17.       └─xgboost::xgb.attr(bst, "best_iteration")
 18.         └─xgboost:::xgb.get.handle(object)
── Error ('test_predict_raster.R:8:3'): predict_raster works correctly in chunks ──
Error in `xgb.get.handle(object)`: 'xgb.Booster' object is corrupted or is from
an incompatible XGBoost version.
Backtrace:
    ▆
  1. ├─tidysdm::predict_raster(lacerta_ensemble, climate_future) at test_predict_raster.R:8:3
  2. ├─tidysdm:::predict_raster.default(lacerta_ensemble, climate_future)
  3. │ ├─stats::predict(object, rast_sub_values, ...)
  4. │ └─tidysdm:::predict.simple_ensemble(object, rast_sub_values, ...)
  5. │   └─base::lapply(...)
  6. │     ├─stats (local) FUN(X[[i]], ...)
  7. │     └─workflows:::predict.workflow(X[[i]], ...)
  8. │       ├─stats::predict(fit, new_data, type = type, opts = opts, ...)
  9. │       └─parsnip::predict.model_fit(...)
 10. │         ├─parsnip:::predict_classprob(...)
 11. │         └─parsnip::predict_classprob.model_fit(...)
 12. │           └─rlang::eval_tidy(pred_call)
 13. └─parsnip::xgb_predict(object = object$fit, new_data = new_data)
 14.   ├─stats::predict(object, new_data, ...)
 15.   └─xgboost:::predict.xgb.Booster(object, new_data, ...)
 16.     └─xgboost:::xgb.best_iteration(object)
 17.       └─xgboost::xgb.attr(bst, "best_iteration")
 18.         └─xgboost:::xgb.get.handle(object)

[ FAIL 3 | WARN 0 | SKIP 1 | PASS 314 ]
Error: ! Test failures.
Execution halted

Package: traineR
Check: R code for possible problems
New result: NOTE
train.xgboost: warning in xgb.train(params = params, data = train_aux,
  eval_metric = "mlogloss", nrounds = nrounds, watchlist = watchlist,
  obj = obj, feval = feval, verbose = verbose, print_every_n = print_every_n,
  early_stopping_rounds = early_stopping_rounds, maximize = maximize,
  save_period = save_period, save_name = save_name, xgb_model = xgb_model,
  callbacks = callbacks, ... = ...): partial argument match of 'obj' to
  'objective'
train.xgboost: warning in xgb.train(params = params, data = train_aux,
  nrounds = nrounds, watchlist = watchlist, obj = obj, feval = feval,
  verbose = verbose, print_every_n = print_every_n,
  early_stopping_rounds = early_stopping_rounds, maximize = maximize,
  save_period = save_period, save_name = save_name, xgb_model = xgb_model,
  callbacks = callbacks, ... = ...): partial argument match of 'obj' to
  'objective'

Package: twang
Check: R code for possible problems
New result: NOTE
ps.fast: warning in xgboost(data = sparse.data, label = data[, treat.var],
  params = params, tree_method = tree_method, feval = pred.xgboost,
  nrounds = n.trees, verbose = verbose, weight = sampW,
  callbacks = callback.list): partial argument match of 'weight' to 'weights'
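Both NOTEs flag the same R mechanism: the callers pass `obj =` and `weight =`, which now partially match the renamed formals `objective` and `weights`. A package-free illustration of what R CMD check is reporting (hypothetical function `f`, not the xgboost API):

    f <- function(objective = NULL, weights = NULL) list(objective, weights)
    f(obj = "binary:logistic", weight = 1)         # silently binds via partial
                                                   # matching; check NOTEs this
    f(objective = "binary:logistic", weights = 1)  # full names silence the NOTE

The fix in both packages is presumably just spelling the argument names out in full.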
Package: traineR Check: R code for possible problems New result: NOTE
train.xgboost: warning in xgb.train(params = params, data = train_aux, eval_metric = "mlogloss", nrounds = nrounds, watchlist = watchlist, obj = obj, feval = feval, verbose = verbose, print_every_n = print_every_n, early_stopping_rounds = early_stopping_rounds, maximize = maximize, save_period = save_period, save_name = save_name, xgb_model = xgb_model, callbacks = callbacks, ... = ...): partial argument match of 'obj' to 'objective'
train.xgboost: warning in xgb.train(params = params, data = train_aux, nrounds = nrounds, watchlist = watchlist, obj = obj, feval = feval, verbose = verbose, print_every_n = print_every_n, early_stopping_rounds = early_stopping_rounds, maximize = maximize, save_period = save_period, save_name = save_name, xgb_model = xgb_model, callbacks = callbacks, ... = ...): partial argument match of 'obj' to 'objective'

Package: twang Check: R code for possible problems New result: NOTE
ps.fast: warning in xgboost(data = sparse.data, label = data[, treat.var], params = params, tree_method = tree_method, feval = pred.xgboost, nrounds = n.trees, verbose = verbose, weight = sampW, callbacks = callback.list): partial argument match of 'weight' to 'weights'
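Note: both NOTEs are the same pattern, namely R's partial argument matching against the changed xgb.train()/xgboost() signatures: with no formal named obj left, obj = now silently partial-matches the new objective formal, and twang's weight = matches weights. Whether the partially matched formal is even the intended one has to be checked per caller; the mechanical fix is to spell the argument out in full. A hedged before/after sketch (argument names taken only from the NOTEs above, not verified against the current manual; my_obj, dtrain, X, y, sampW are placeholders):

    # was: xgb.train(params, dtrain, nrounds, obj = my_obj)   # partial match
    fit <- xgb.train(params, dtrain, nrounds, objective = my_obj)
    # was: xgboost(..., weight = sampW)                       # partial match
    fit <- xgboost(x = X, y = y, weights = sampW, nrounds = 100)
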
Package: vetiver Check: tests New result: ERROR
Running ‘testthat.R’ [17s/17s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > library(testthat) > library(vetiver) > > test_check("vetiver") Loading required package: ggplot2 Loading required package: lattice Create a Model Card for your published model * Model Cards provide a framework for transparent, responsible reporting * Use the vetiver `.Rmd` template as a place to start This message is displayed once per session. This is mgcv 1.9-4. For overview type '?mgcv'. Attaching package: 'parsnip' The following object is masked from 'package:e1071': tune Attaching package: 'probably' The following objects are masked from 'package:base': as.factor, as.ordered Attaching package: 'tune' The following object is masked from 'package:e1071': tune The following object is masked from 'package:vetiver': load_pkgs Attaching package: 'rsample' The following object is masked from 'package:e1071': permutations The following object is masked from 'package:caret': calibration Attaching package: 'recipes' The following object is masked from 'package:stats': step Your rsconnect bundle has been created at: * /home/hornik/tmp/scratch/Rtmpk43NB2/file14949a6771092d/bundle14949a3d25fcb4.tar.gz Saving _problems/test-xgboost-9.R
[ FAIL 1 | WARN 2 | SKIP 70 | PASS 221 ]
══ Skipped tests (70) ══════════════════════════════════════════════════════════
• On CRAN (70): 'test-api.R:16:1', 'test-api.R:77:1', 'test-attach-pkgs.R:2:5', 'test-attach-pkgs.R:7:5', 'test-attach-pkgs.R:12:5', 'test-caret.R:22:1', 'test-caret.R:65:5', 'test-choose-version.R:4:5', 'test-choose-version.R:33:1', 'test-create-ptype.R:41:1', 'test-dashboard.R:13:5', 'test-gam.R:8:1', 'test-gam.R:60:5', 'test-glm.R:7:1', 'test-glm.R:59:5', 'test-keras.R:1:1', 'test-kproto.R:14:1', 'test-kproto.R:65:5', 'test-luz.R:1:1', 'test-mlr3.R:3:1', 'test-mlr3.R:52:5', 'test-monitor.R:66:5', 'test-monitor.R:72:5', 'test-monitor.R:79:5', 'test-monitor.R:124:5', 'test-pin-read-write.R:3:1', 'test-pin-read-write.R:17:1', 'test-pin-read-write.R:132:5', 'test-predict.R:1:1', 'test-probably.R:48:1', 'test-probably.R:98:5', 'test-probably.R:109:1', 'test-probably.R:159:5', 'test-probably.R:170:1', 'test-probably.R:220:5', 'test-probably.R:232:1', 'test-probably.R:282:5', 'test-ranger.R:9:1', 'test-ranger.R:13:1', 'test-ranger.R:62:5', 'test-recipe.R:14:1', 'test-recipe.R:58:5', 'test-rsconnect.R:18:5', 'test-sagemaker.R:4:5', 'test-sagemaker.R:25:5', 'test-sagemaker.R:49:1', 'test-sagemaker.R:77:1', 'test-sagemaker.R:98:1', 'test-sagemaker.R:112:1', 'test-sagemaker.R:161:1', 'test-stacks.R:1:1', 'test-tidymodels.R:21:1', 'test-tidymodels.R:71:5', 'test-type-convert.R:15:1', 'test-type-convert.R:31:1', 'test-type-convert.R:47:1', 'test-write-docker.R:5:5', 'test-write-docker.R:17:5', 'test-write-docker.R:35:5', 'test-write-docker.R:52:5', 'test-write-docker.R:65:5', 'test-write-docker.R:81:5', 'test-write-docker.R:88:5', 'test-write-plumber.R:4:5', 'test-write-plumber.R:17:5', 'test-write-plumber.R:38:5', 'test-write-plumber.R:57:5', 'test-write-plumber.R:71:5', 'test-write-plumber.R:84:5', 'test-write-plumber.R:98:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-xgboost.R:9:1'): (code run outside of `test_that()`) ───────────
Error in `matrix(NA_real_, ncol = model$nfeatures, dimnames = list("", model$feature_names))`: non-numeric matrix extent
Backtrace: ▆ 1. └─vetiver::vetiver_model(cars_xgb, "cars2") at test-xgboost.R:9:1 2. └─vetiver::vetiver_create_ptype(model, save_prototype, ...) 3. ├─vetiver::vetiver_ptype(model, ...) 4. └─vetiver:::vetiver_ptype.xgb.Booster(model, ...) 5. └─base::matrix(...)
[ FAIL 1 | WARN 2 | SKIP 70 | PASS 221 ] Error: ! Test failures. Execution halted
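Note: vetiver_ptype.xgb.Booster builds its input prototype from the list fields model$nfeatures and model$feature_names; on boosters created by the new release those fields are gone, so ncol evaluates to NULL and matrix() aborts with "non-numeric matrix extent". One version-proof alternative is to derive the prototype from the training data rather than from booster internals. A minimal sketch under that assumption (not vetiver's actual code; train_df is a hypothetical data frame of the predictor columns):

    # Zero-row prototype of the predictors, taken from the data instead of
    # from fields on the xgb.Booster object.
    ptype <- train_df[0, , drop = FALSE]
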
Package: VIM Check: examples New result: ERROR
Running examples in ‘VIM-Ex.R’ failed The error most likely occurred in: > base::assign(".ptime", proc.time(), pos = "CheckExEnv") > ### Name: xgboostImpute > ### Title: Xgboost Imputation > ### Aliases: xgboostImpute > > ### ** Examples > > data(sleep)
> xgboostImpute(Dream~BodyWgt+BrainWgt,data=sleep)
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version.
BodyWgt BrainWgt NonD Dream Sleep Span Gest Pred Exp Danger Dream_imp 1 6654.000 5712.00 NA 1.7996682 3.3 38.6 645.0 3 5 3 TRUE 2 1.000 6.60 6.3 2.0000000 8.3 4.5 42.0 3 1 3 FALSE 3 3.385 44.50 NA 3.5961127 12.5 14.0 60.0 1 1 1 TRUE 4 0.920 5.70 NA 0.8757253 16.5 NA 25.0 5 2 3 TRUE 5 2547.000 4603.00 2.1 1.8000000 3.9 69.0 624.0 3 5 4 FALSE 6 10.550 179.50 9.1 0.7000000 9.8 27.0 180.0 4 4 4 FALSE 7 0.023 0.30 15.8 3.9000000 19.7 19.0 35.0 1 1 1 FALSE 8 160.000 169.00 5.2 1.0000000 6.2 30.4 392.0 4 5 4 FALSE 9 3.300 25.60 10.9 3.6000000 14.5 28.0 63.0 1 2 1 FALSE 10 52.160 440.00 8.3 1.4000000 9.7 50.0 230.0 1 1 1 FALSE 11 0.425 6.40 11.0 1.5000000 12.5 7.0 112.0 5 4 4 FALSE 12 465.000 423.00 3.2 0.7000000 3.9 30.0 281.0 5 5 5 FALSE 13 0.550 2.40 7.6 2.7000000 10.3 NA NA 2 1 2 FALSE 14 187.100 419.00 NA 1.7214588 3.1 40.0 365.0 5 5 5 TRUE 15 0.075 1.20 6.3 2.1000000 8.4 3.5 42.0 1 1 1 FALSE 16 3.000 25.00 8.6 0.0000000 8.6 50.0 28.0 2 2 2 FALSE 17 0.785 3.50 6.6 4.1000000 10.7 6.0 42.0 2 2 2 FALSE 18 0.200 5.00 9.5 1.2000000 10.7 10.4 120.0 2 2 2 FALSE 19 1.410 17.50 4.8 1.3000000 6.1 34.0 NA 1 2 1 FALSE 20 60.000 81.00 12.0 6.1000000 18.1 7.0 NA 1 1 1 FALSE 21 529.000 680.00 NA 0.3000000 NA 28.0 400.0 5 5 5 FALSE 22 27.660 115.00 3.3 0.5000000 3.8 20.0 148.0 5 5 5 FALSE 23 0.120 1.00 11.0 3.4000000 14.4 3.9 16.0 3 1 2 FALSE 24 207.000 406.00 NA 1.8088160 12.0 39.3 252.0 1 4 1 TRUE 25 85.000 325.00 4.7 1.5000000 6.2 41.0 310.0 1 3 1 FALSE 26 36.330 119.50 NA 0.5030808 13.0 16.2 63.0 1 1 1 TRUE 27 0.101 4.00 10.4 3.4000000 13.8 9.0 28.0 5 1 3 FALSE 28 1.040 5.50 7.4 0.8000000 8.2 7.6 68.0 5 3 4 FALSE 29 521.000 655.00 2.1 0.8000000 2.9 46.0 336.0 5 5 5 FALSE 30 100.000 157.00 NA 1.0328215 10.8 22.4 100.0 1 1 1 TRUE 31 35.000 56.00 NA 4.6171999 NA 16.3 33.0 3 5 4 TRUE 32 0.005 0.14 7.7 1.4000000 9.1 2.6 21.5 5 2 4 FALSE 33 0.010 0.25 17.9 2.0000000 19.9 24.0 50.0 1 1 1 FALSE 34 62.000 1320.00 6.1 1.9000000 8.0 100.0 267.0 1 1 1 FALSE 35 0.122 3.00 8.2 2.4000000 10.6 NA 30.0 2 1 1 FALSE 36 1.350 8.10 8.4 2.8000000 11.2 NA 45.0 3 1 3 FALSE 37 0.023 0.40 11.9 1.3000000 13.2 3.2 19.0 4 1 3 FALSE 38 0.048 0.33 10.8 2.0000000 12.8 2.0 30.0 4 1 3 FALSE 39 1.700 6.30 13.8 5.6000000 19.4 5.0 12.0 2 1 1 FALSE 40 3.500 10.80 14.3 3.1000000 17.4 6.5 120.0 2 1 1 FALSE 41 250.000 490.00 NA 1.0000000 NA 23.6 440.0 5 5 5 FALSE 42 0.480 15.50 15.2 1.8000000 17.0 12.0 140.0 2 2 2 FALSE 43 10.000 115.00 10.0 0.9000000 10.9 20.2 170.0 4 4 4 FALSE 44 1.620 11.40 11.9 1.8000000 13.7 13.0 17.0 2 1 2 FALSE 45 192.000 180.00 6.5 1.9000000 8.4 27.0 115.0 4 4 4 FALSE 46 2.500 12.10 7.5 0.9000000 8.4 18.0 31.0 5 5 5 FALSE 47 4.288 39.20 NA 2.4006271 12.5 13.7 63.0 2 2 2 TRUE 48 0.280 1.90 10.6 2.6000000 13.2 4.7 21.0 3 1 3 FALSE 49 4.235 50.40 7.4 2.4000000 9.8 9.8 52.0 1 1 1 FALSE 50 6.800 179.00 8.4 1.2000000 9.6 29.0 164.0 2 3 2 FALSE 51 0.750 12.30 5.7 0.9000000 6.6 7.0 225.0 2 2 2 FALSE 52 3.600 21.00 4.9 0.5000000 5.4 6.0 225.0 3 2 3 FALSE 53 14.830 98.20 NA 3.7860343 2.6 17.0 150.0 5 5 5 TRUE 54 55.500 175.00 3.2 0.6000000 3.8 20.0 151.0 5 5 5 FALSE 55 1.400 12.50 NA 1.0766708 11.0 12.7 90.0 2 2 2 TRUE 56 0.060 1.00 8.1 2.2000000 10.3 3.5 NA 3 1 2 FALSE 57 0.900 2.60 11.0 2.3000000 13.3 4.5 60.0 2 1 2 FALSE 58 2.000 12.30 4.9 0.5000000 5.4 7.5 200.0 3 1 3 FALSE 59 0.104 2.50 13.2 2.6000000 15.8 2.3 46.0 3 2 2 FALSE 60 4.190 58.00 9.7 0.6000000 10.3 24.0 210.0 4 3 4 FALSE 61 3.500 3.90 12.8 6.6000000 19.4 3.0 14.0 2 1 1 FALSE 62 4.050 17.00 NA 0.4989381 NA 13.0 38.0 3 1 1 TRUE
> xgboostImpute(Dream+NonD~BodyWgt+BrainWgt,data=sleep)
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version.
BodyWgt BrainWgt NonD Dream Sleep Span Gest Pred Exp Danger 1 6654.000 5712.00 2.100814 1.7996682 3.3 38.6 645.0 3 5 3 2 1.000 6.60 6.300000 2.0000000 8.3 4.5 42.0 3 1 3 3 3.385 44.50 10.897091 3.5961127 12.5 14.0 60.0 1 1 1 4 0.920 5.70 7.186167 0.8757253 16.5 NA 25.0 5 2 3 5 2547.000 4603.00 2.100000 1.8000000 3.9 69.0 624.0 3 5 4 6 10.550 179.50 9.100000 0.7000000 9.8 27.0 180.0 4 4 4 7 0.023 0.30 15.800000 3.9000000 19.7 19.0 35.0 1 1 1 8 160.000 169.00 5.200000 1.0000000 6.2 30.4 392.0 4 5 4 9 3.300 25.60 10.900000 3.6000000 14.5 28.0 63.0 1 2 1 10 52.160 440.00 8.300000 1.4000000 9.7 50.0 230.0 1 1 1 11 0.425 6.40 11.000000 1.5000000 12.5 7.0 112.0 5 4 4 12 465.000 423.00 3.200000 0.7000000 3.9 30.0 281.0 5 5 5 13 0.550 2.40 7.600000 2.7000000 10.3 NA NA 2 1 2 14 187.100 419.00 5.123134 1.7214588 3.1 40.0 365.0 5 5 5 15 0.075 1.20 6.300000 2.1000000 8.4 3.5 42.0 1 1 1 16 3.000 25.00 8.600000 0.0000000 8.6 50.0 28.0 2 2 2 17 0.785 3.50 6.600000 4.1000000 10.7 6.0 42.0 2 2 2 18 0.200 5.00 9.500000 1.2000000 10.7 10.4 120.0 2 2 2 19 1.410 17.50 4.800000 1.3000000 6.1 34.0 NA 1 2 1 20 60.000 81.00 12.000000 6.1000000 18.1 7.0 NA 1 1 1 21 529.000 680.00 2.100814 0.3000000 NA 28.0 400.0 5 5 5 22 27.660 115.00 3.300000 0.5000000 3.8 20.0 148.0 5 5 5 23 0.120 1.00 11.000000 3.4000000 14.4 3.9 16.0 3 1 2 24 207.000 406.00 6.285658 1.8088160 12.0 39.3 252.0 1 4 1 25 85.000 325.00 4.700000 1.5000000 6.2 41.0 310.0 1 3 1 26 36.330 119.50 3.301962 0.5030808 13.0 16.2 63.0 1 1 1 27 0.101 4.00 10.400000 3.4000000 13.8 9.0 28.0 5 1 3 28 1.040 5.50 7.400000 0.8000000 8.2 7.6 68.0 5 3 4 29 521.000 655.00 2.100000 0.8000000 2.9 46.0 336.0 5 5 5 30 100.000 157.00 4.835842 1.0328215 10.8 22.4 100.0 1 1 1 31 35.000 56.00 9.414713 4.6171999 NA 16.3 33.0 3 5 4 32 0.005 0.14 7.700000 1.4000000 9.1 2.6 21.5 5 2 4 33 0.010 0.25 17.900000 2.0000000 19.9 24.0 50.0 1 1 1 34 62.000 1320.00 6.100000 1.9000000 8.0 100.0 267.0 1 1 1 35 0.122 3.00 8.200000 2.4000000 10.6 NA 30.0 2 1 1 36 1.350 8.10 8.400000 2.8000000 11.2 NA 45.0 3 1 3 37 0.023 0.40 11.900000 1.3000000 13.2 3.2 19.0 4 1 3 38 0.048 0.33 10.800000 2.0000000 12.8 2.0 30.0 4 1 3 39 1.700 6.30 13.800000 5.6000000 19.4 5.0 12.0 2 1 1 40 3.500 10.80 14.300000 3.1000000 17.4 6.5 120.0 2 1 1 41 250.000 490.00 6.742025 1.0000000 NA 23.6 440.0 5 5 5 42 0.480 15.50 15.200000 1.8000000 17.0 12.0 140.0 2 2 2 43 10.000 115.00 10.000000 0.9000000 10.9 20.2 170.0 4 4 4 44 1.620 11.40 11.900000 1.8000000 13.7 13.0 17.0 2 1 2 45 192.000 180.00 6.500000 1.9000000 8.4 27.0 115.0 4 4 4 46 2.500 12.10 7.500000 0.9000000 8.4 18.0 31.0 5 5 5 47 4.288 39.20 7.402267 2.4006271 12.5 13.7 63.0 2 2 2 48 0.280 1.90 10.600000 2.6000000 13.2 4.7 21.0 3 1 3 49 4.235 50.40 7.400000 2.4000000 9.8 9.8 52.0 1 1 1 50 6.800 179.00 8.400000 1.2000000 9.6 29.0 164.0 2 3 2 51 0.750 12.30 5.700000 0.9000000 6.6 7.0 225.0 2 2 2 52 3.600 21.00 4.900000 0.5000000 5.4 6.0 225.0 3 2 3 53 14.830 98.20 10.641701 3.7860343 2.6 17.0 150.0 5 5 5 54 55.500 175.00 3.200000 0.6000000 3.8 20.0 151.0 5 5 5 55 1.400 12.50 5.010159 1.0766708 11.0 12.7 90.0 2 2 2 56 0.060 1.00 8.100000 2.2000000 10.3 3.5 NA 3 1 2 57 0.900 2.60 11.000000 2.3000000 13.3 4.5 60.0 2 1 2 58 2.000 12.30 4.900000 0.5000000 5.4 7.5 200.0 3 1 3 59 0.104 2.50 13.200000 2.6000000 15.8 2.3 46.0 3 2 2 60 4.190 58.00 9.700000 0.6000000 10.3 24.0 210.0 4 3 4 61 3.500 3.90 12.800000 6.6000000 19.4 3.0 14.0 2 1 1 62 4.050 17.00 5.988167 0.4989381 NA 13.0 38.0 3 1 1 Dream_imp NonD_imp 1 TRUE TRUE 2 FALSE FALSE 3 TRUE TRUE 4 TRUE TRUE 5 FALSE 
FALSE 6 FALSE FALSE 7 FALSE FALSE 8 FALSE FALSE 9 FALSE FALSE 10 FALSE FALSE 11 FALSE FALSE 12 FALSE FALSE 13 FALSE FALSE 14 TRUE TRUE 15 FALSE FALSE 16 FALSE FALSE 17 FALSE FALSE 18 FALSE FALSE 19 FALSE FALSE 20 FALSE FALSE 21 FALSE TRUE 22 FALSE FALSE 23 FALSE FALSE 24 TRUE TRUE 25 FALSE FALSE 26 TRUE TRUE 27 FALSE FALSE 28 FALSE FALSE 29 FALSE FALSE 30 TRUE TRUE 31 TRUE TRUE 32 FALSE FALSE 33 FALSE FALSE 34 FALSE FALSE 35 FALSE FALSE 36 FALSE FALSE 37 FALSE FALSE 38 FALSE FALSE 39 FALSE FALSE 40 FALSE FALSE 41 FALSE TRUE 42 FALSE FALSE 43 FALSE FALSE 44 FALSE FALSE 45 FALSE FALSE 46 FALSE FALSE 47 TRUE TRUE 48 FALSE FALSE 49 FALSE FALSE 50 FALSE FALSE 51 FALSE FALSE 52 FALSE FALSE 53 TRUE TRUE 54 FALSE FALSE 55 TRUE TRUE 56 FALSE FALSE 57 FALSE FALSE 58 FALSE FALSE 59 FALSE FALSE 60 FALSE FALSE 61 FALSE FALSE 62 TRUE TRUE > xgboostImpute(Dream+NonD+Gest~BodyWgt+BrainWgt,data=sleep) Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: verbose. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version. Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version. 
BodyWgt BrainWgt NonD Dream Sleep Span Gest Pred Exp Danger 1 6654.000 5712.00 2.100814 1.7996682 3.3 38.6 645.00000 3 5 3 2 1.000 6.60 6.300000 2.0000000 8.3 4.5 42.00000 3 1 3 3 3.385 44.50 10.897091 3.5961127 12.5 14.0 60.00000 1 1 1 4 0.920 5.70 7.186167 0.8757253 16.5 NA 25.00000 5 2 3 5 2547.000 4603.00 2.100000 1.8000000 3.9 69.0 624.00000 3 5 4 6 10.550 179.50 9.100000 0.7000000 9.8 27.0 180.00000 4 4 4 7 0.023 0.30 15.800000 3.9000000 19.7 19.0 35.00000 1 1 1 8 160.000 169.00 5.200000 1.0000000 6.2 30.4 392.00000 4 5 4 9 3.300 25.60 10.900000 3.6000000 14.5 28.0 63.00000 1 2 1 10 52.160 440.00 8.300000 1.4000000 9.7 50.0 230.00000 1 1 1 11 0.425 6.40 11.000000 1.5000000 12.5 7.0 112.00000 5 4 4 12 465.000 423.00 3.200000 0.7000000 3.9 30.0 281.00000 5 5 5 13 0.550 2.40 7.600000 2.7000000 10.3 NA 28.07713 2 1 2 14 187.100 419.00 5.123134 1.7214588 3.1 40.0 365.00000 5 5 5 15 0.075 1.20 6.300000 2.1000000 8.4 3.5 42.00000 1 1 1 16 3.000 25.00 8.600000 0.0000000 8.6 50.0 28.00000 2 2 2 17 0.785 3.50 6.600000 4.1000000 10.7 6.0 42.00000 2 2 2 18 0.200 5.00 9.500000 1.2000000 10.7 10.4 120.00000 2 2 2 19 1.410 17.50 4.800000 1.3000000 6.1 34.0 80.31452 1 2 1 20 60.000 81.00 12.000000 6.1000000 18.1 7.0 101.52599 1 1 1 21 529.000 680.00 2.100814 0.3000000 NA 28.0 400.00000 5 5 5 22 27.660 115.00 3.300000 0.5000000 3.8 20.0 148.00000 5 5 5 23 0.120 1.00 11.000000 3.4000000 14.4 3.9 16.00000 3 1 2 24 207.000 406.00 6.285658 1.8088160 12.0 39.3 252.00000 1 4 1 25 85.000 325.00 4.700000 1.5000000 6.2 41.0 310.00000 1 3 1 26 36.330 119.50 3.301962 0.5030808 13.0 16.2 63.00000 1 1 1 27 0.101 4.00 10.400000 3.4000000 13.8 9.0 28.00000 5 1 3 28 1.040 5.50 7.400000 0.8000000 8.2 7.6 68.00000 5 3 4 29 521.000 655.00 2.100000 0.8000000 2.9 46.0 336.00000 5 5 5 30 100.000 157.00 4.835842 1.0328215 10.8 22.4 100.00000 1 1 1 31 35.000 56.00 9.414713 4.6171999 NA 16.3 33.00000 3 5 4 32 0.005 0.14 7.700000 1.4000000 9.1 2.6 21.50000 5 2 4 33 0.010 0.25 17.900000 2.0000000 19.9 24.0 50.00000 1 1 1 34 62.000 1320.00 6.100000 1.9000000 8.0 100.0 267.00000 1 1 1 35 0.122 3.00 8.200000 2.4000000 10.6 NA 30.00000 2 1 1 36 1.350 8.10 8.400000 2.8000000 11.2 NA 45.00000 3 1 3 37 0.023 0.40 11.900000 1.3000000 13.2 3.2 19.00000 4 1 3 38 0.048 0.33 10.800000 2.0000000 12.8 2.0 30.00000 4 1 3 39 1.700 6.30 13.800000 5.6000000 19.4 5.0 12.00000 2 1 1 40 3.500 10.80 14.300000 3.1000000 17.4 6.5 120.00000 2 1 1 41 250.000 490.00 6.742025 1.0000000 NA 23.6 440.00000 5 5 5 42 0.480 15.50 15.200000 1.8000000 17.0 12.0 140.00000 2 2 2 43 10.000 115.00 10.000000 0.9000000 10.9 20.2 170.00000 4 4 4 44 1.620 11.40 11.900000 1.8000000 13.7 13.0 17.00000 2 1 2 45 192.000 180.00 6.500000 1.9000000 8.4 27.0 115.00000 4 4 4 46 2.500 12.10 7.500000 0.9000000 8.4 18.0 31.00000 5 5 5 47 4.288 39.20 7.402267 2.4006271 12.5 13.7 63.00000 2 2 2 48 0.280 1.90 10.600000 2.6000000 13.2 4.7 21.00000 3 1 3 49 4.235 50.40 7.400000 2.4000000 9.8 9.8 52.00000 1 1 1 50 6.800 179.00 8.400000 1.2000000 9.6 29.0 164.00000 2 3 2 51 0.750 12.30 5.700000 0.9000000 6.6 7.0 225.00000 2 2 2 52 3.600 21.00 4.900000 0.5000000 5.4 6.0 225.00000 3 2 3 53 14.830 98.20 10.641701 3.7860343 2.6 17.0 150.00000 5 5 5 54 55.500 175.00 3.200000 0.6000000 3.8 20.0 151.00000 5 5 5 55 1.400 12.50 5.010159 1.0766708 11.0 12.7 90.00000 2 2 2 56 0.060 1.00 8.100000 2.2000000 10.3 3.5 22.96904 3 1 2 57 0.900 2.60 11.000000 2.3000000 13.3 4.5 60.00000 2 1 2 58 2.000 12.30 4.900000 0.5000000 5.4 7.5 200.00000 3 1 3 59 0.104 2.50 13.200000 2.6000000 15.8 2.3 46.00000 3 2 2 
60 4.190 58.00 9.700000 0.6000000 10.3 24.0 210.00000 4 3 4 61 3.500 3.90 12.800000 6.6000000 19.4 3.0 14.00000 2 1 1 62 4.050 17.00 5.988167 0.4989381 NA 13.0 38.00000 3 1 1 Dream_imp NonD_imp Gest_imp 1 TRUE TRUE FALSE 2 FALSE FALSE FALSE 3 TRUE TRUE FALSE 4 TRUE TRUE FALSE 5 FALSE FALSE FALSE 6 FALSE FALSE FALSE 7 FALSE FALSE FALSE 8 FALSE FALSE FALSE 9 FALSE FALSE FALSE 10 FALSE FALSE FALSE 11 FALSE FALSE FALSE 12 FALSE FALSE FALSE 13 FALSE FALSE TRUE 14 TRUE TRUE FALSE 15 FALSE FALSE FALSE 16 FALSE FALSE FALSE 17 FALSE FALSE FALSE 18 FALSE FALSE FALSE 19 FALSE FALSE TRUE 20 FALSE FALSE TRUE 21 FALSE TRUE FALSE 22 FALSE FALSE FALSE 23 FALSE FALSE FALSE 24 TRUE TRUE FALSE 25 FALSE FALSE FALSE 26 TRUE TRUE FALSE 27 FALSE FALSE FALSE 28 FALSE FALSE FALSE 29 FALSE FALSE FALSE 30 TRUE TRUE FALSE 31 TRUE TRUE FALSE 32 FALSE FALSE FALSE 33 FALSE FALSE FALSE 34 FALSE FALSE FALSE 35 FALSE FALSE FALSE 36 FALSE FALSE FALSE 37 FALSE FALSE FALSE 38 FALSE FALSE FALSE 39 FALSE FALSE FALSE 40 FALSE FALSE FALSE 41 FALSE TRUE FALSE 42 FALSE FALSE FALSE 43 FALSE FALSE FALSE 44 FALSE FALSE FALSE 45 FALSE FALSE FALSE 46 FALSE FALSE FALSE 47 TRUE TRUE FALSE 48 FALSE FALSE FALSE 49 FALSE FALSE FALSE 50 FALSE FALSE FALSE 51 FALSE FALSE FALSE 52 FALSE FALSE FALSE 53 TRUE TRUE FALSE 54 FALSE FALSE FALSE 55 TRUE TRUE FALSE 56 FALSE FALSE TRUE 57 FALSE FALSE FALSE 58 FALSE FALSE FALSE 59 FALSE FALSE FALSE 60 FALSE FALSE FALSE 61 FALSE FALSE FALSE 62 TRUE TRUE FALSE
> > sleepx <- sleep > sleepx$Pred <- as.factor(LETTERS[sleepx$Pred]) > sleepx$Pred[1] <- NA
> xgboostImpute(Pred~BodyWgt+BrainWgt,data=sleepx)
Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized), : Passed unrecognized parameters: num_class, verbose. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '", : Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version.
Error in prescreen.objective(objective) : Objectives with non-default prediction mode (reg:logistic, binary:logitraw, multi:softmax) are not supported in 'xgboost()'. Try 'xgb.train()'. Calls: xgboostImpute -> -> prescreen.objective Execution halted
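Note: the VIM example run shows the migration in miniature. The repeated warnings come from the rewritten xgboost() front end (data renamed to x, label to y; verbose and num_class no longer accepted there), and the final hard error states that objectives with non-default prediction modes such as multi:softmax are only available through xgb.train(). A hedged sketch of the two call shapes those messages imply (X, y, nlab are placeholders; check against the current xgboost manual before relying on this):

    library(xgboost)
    # New high-level interface: predictors/response are passed as x/y and
    # the objective is inferred from the type of y.
    fit <- xgboost(x = X, y = y, nrounds = 100)

    # Objectives such as multi:softmax go through the lower-level trainer,
    # with 0-based integer class labels in the DMatrix.
    dtrain <- xgb.DMatrix(X, label = as.integer(y) - 1L)
    fit <- xgb.train(
      params = list(objective = "multi:softmax", num_class = nlab),
      data = dtrain,
      nrounds = 100
    )
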
Package: VIM Check: tests New result: ERROR
Running ‘test_imputeRobust.R’ [0s/0s] Running ‘tinytest.R’ [16s/16s] Running the tests in ‘tests/tinytest.R’ failed. Complete output: > if ( requireNamespace("tinytest", quietly=TRUE) ){ + tinytest::test_package("VIM") + } Loading required package: colorspace Loading required package: grid VIM is ready to use. Suggestions and bug-reports can be submitted at: https://github.com/statistikat/VIM/issues Attaching package: 'VIM' The following object is masked from 'package:datasets': sleep test_IRMI_ordered.R........... 0 tests test_IRMI_ordered.R........... 0 tests v1 v2 co v1 v2 co -2.981434 -2.924405 2.000000 2.995740 2.483395 21.000000 test_IRMI_ordered.R........... 1 tests OK test_IRMI_ordered.R........... 2 tests OK v1 v2 co v1 v2 co -2.981434 -2.924405 2.000000 2.995740 2.483395 21.000000
Start: AIC=1389.11 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - m 1 1369.1 1387.1 - v1 1 1369.7 1387.7 - c 4 1376.6 1388.6 <none> 1369.1 1389.1 - co 1 1371.2 1389.2 - b 1 1372.2 1390.2 - v2 1 1373.0 1391.0
Step: AIC=1387.11 y ~ v1 + v2 + b + c + co Df Deviance AIC - v1 1 1369.7 1385.7 - c 4 1376.6 1386.6 <none> 1369.1 1387.1 - co 1 1371.2 1387.2 - b 1 1372.2 1388.2 - v2 1 1373.0 1389.0
Step: AIC=1385.69 y ~ v2 + b + c + co Df Deviance AIC - c 4 1377.1 1385.1 <none> 1369.7 1385.7 - co 1 1371.8 1385.8 - b 1 1372.7 1386.7 - v2 1 1373.7 1387.7
Step: AIC=1385.13 y ~ v2 + b + co Df Deviance AIC - co 1 1378.8 1384.8 <none> 1377.1 1385.1 - b 1 1380.0 1386.0 - v2 1 1380.4 1386.4
Step: AIC=1384.77 y ~ v2 + b Df Deviance AIC <none> 1378.8 1384.8 - v2 1 1381.8 1385.8 - b 1 1381.9 1385.9
Start: AIC=1282.15 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1267.4 1279.4 - co 1 1262.3 1280.3 - m 1 1262.7 1280.7 <none> 1262.2 1282.2 - v1 1 1264.4 1282.4 - v2 1 1305.3 1323.3 - b 1 1320.9 1338.9
Step: AIC=1279.38 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1267.7 1277.7 - m 1 1268.0 1278.0 <none> 1267.4 1279.4 - v1 1 1269.5 1279.5 - v2 1 1308.4 1318.4 - b 1 1325.6 1335.6
Step: AIC=1277.65 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1268.2 1276.2 <none> 1267.7 1277.7 - v1 1 1269.8 1277.8 - v2 1 1309.0 1317.0 - b 1 1325.7 1333.7
Step: AIC=1276.21 y ~ v1 + v2 + b Df Deviance AIC <none> 1268.2 1276.2 - v1 1 1270.4 1276.4 - v2 1 1309.8 1315.8 - b 1 1326.2 1332.2
Start: AIC=1279.33 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1263.5 1275.5 - co 1 1259.5 1277.5 - m 1 1259.8 1277.8 <none> 1259.3 1279.3 - v1 1 1266.6 1284.6 - v2 1 1296.2 1314.2 - b 1 1324.2 1342.2
Step: AIC=1275.46 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1263.7 1273.7 - m 1 1263.9 1273.9 <none> 1263.5 1275.5 - v1 1 1270.5 1280.5 - v2 1 1298.6 1308.6 - b 1 1327.9 1337.9
Step: AIC=1273.72 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1264.2 1272.2 <none> 1263.7 1273.7 - v1 1 1270.8 1278.8 - v2 1 1299.2 1307.2 - b 1 1327.9 1335.9
Step: AIC=1272.16 y ~ v1 + v2 + b Df Deviance AIC <none> 1264.2 1272.2 - v1 1 1271.3 1277.3 - v2 1 1299.8 1305.8 - b 1 1328.3 1334.3
Start: AIC=1277.68 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1261.4 1273.4 - co 1 1257.7 1275.7 - m 1 1257.9 1275.9 <none> 1257.7 1277.7 - v1 1 1269.7 1287.7 - v2 1 1291.2 1309.2 - b 1 1327.5 1345.5
Step: AIC=1273.4 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1261.4 1271.4 - m 1 1261.7 1271.7 <none> 1261.4 1273.4 - v1 1 1273.1 1283.1 - v2 1 1293.3 1303.3 - b 1 1331.0 1341.0
Step: AIC=1271.42 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1261.7 1269.7 <none> 1261.4 1271.4 - v1 1 1273.2 1281.2 - v2 1 1293.4 1301.4 - b 1 1331.1 1339.1
Step: AIC=1269.71 y ~ v1 + v2 + b Df Deviance AIC <none> 1261.7 1269.7 - v1 1 1273.5 1279.5 - v2 1 1293.8 1299.8 - b 1 1331.3 1337.3
Start: AIC=1277.6 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1260.4 1272.4 - co 1 1257.6 1275.6 - m 1 1258.0 1276.0 <none> 1257.6 1277.6 - v1 1 1272.2 1290.2 - v2 1 1288.5 1306.5 - b 1 1330.5 1348.5
Step: AIC=1272.41 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1260.4 1270.4 - m 1 1260.8 1270.8 <none> 1260.4 1272.4 - v1 1 1274.8 1284.8 - v2 1 1289.9 1299.9 - b 1 1333.3 1343.3
Step: AIC=1270.41 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1260.8 1268.8 <none> 1260.4 1270.4 - v1 1 1274.8 1282.8 - v2 1 1290.0 1298.0 - b 1 1333.4 1341.4
Step: AIC=1268.83 y ~ v1 + v2 + b Df Deviance AIC <none> 1260.8 1268.8 - v1 1 1275.3 1281.3 - v2 1 1290.6 1296.6 - b 1 1333.7 1339.7
test_IRMI_ordered.R........... 3 tests OK test_IRMI_ordered.R........... 4 tests OK v1 v2 co v1 v2 co -2.981434 -2.924405 2.000000 2.995740 2.483395 21.000000
Start: AIC=1389.11 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - m 1 1369.1 1387.1 - v1 1 1369.7 1387.7 - c 4 1376.6 1388.6 <none> 1369.1 1389.1 - co 1 1371.2 1389.2 - b 1 1372.2 1390.2 - v2 1 1373.0 1391.0
Step: AIC=1387.11 y ~ v1 + v2 + b + c + co Df Deviance AIC - v1 1 1369.7 1385.7 - c 4 1376.6 1386.6 <none> 1369.1 1387.1 - co 1 1371.2 1387.2 - b 1 1372.2 1388.2 - v2 1 1373.0 1389.0
Step: AIC=1385.69 y ~ v2 + b + c + co Df Deviance AIC - c 4 1377.1 1385.1 <none> 1369.7 1385.7 - co 1 1371.8 1385.8 - b 1 1372.7 1386.7 - v2 1 1373.7 1387.7
Step: AIC=1385.13 y ~ v2 + b + co Df Deviance AIC - co 1 1378.8 1384.8 <none> 1377.1 1385.1 - b 1 1380.0 1386.0 - v2 1 1380.4 1386.4
Step: AIC=1384.77 y ~ v2 + b Df Deviance AIC <none> 1378.8 1384.8 - v2 1 1381.8 1385.8 - b 1 1381.9 1385.9
Start: AIC=1282.15 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1267.4 1279.4 - co 1 1262.3 1280.3 - m 1 1262.7 1280.7 <none> 1262.2 1282.2 - v1 1 1264.4 1282.4 - v2 1 1305.3 1323.3 - b 1 1320.9 1338.9
Step: AIC=1279.38 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1267.7 1277.7 - m 1 1268.0 1278.0 <none> 1267.4 1279.4 - v1 1 1269.5 1279.5 - v2 1 1308.4 1318.4 - b 1 1325.6 1335.6
Step: AIC=1277.65 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1268.2 1276.2 <none> 1267.7 1277.7 - v1 1 1269.8 1277.8 - v2 1 1309.0 1317.0 - b 1 1325.7 1333.7
Step: AIC=1276.21 y ~ v1 + v2 + b Df Deviance AIC <none> 1268.2 1276.2 - v1 1 1270.4 1276.4 - v2 1 1309.8 1315.8 - b 1 1326.2 1332.2
Start: AIC=1279.33 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1263.5 1275.5 - co 1 1259.5 1277.5 - m 1 1259.8 1277.8 <none> 1259.3 1279.3 - v1 1 1266.6 1284.6 - v2 1 1296.2 1314.2 - b 1 1324.2 1342.2
Step: AIC=1275.46 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1263.7 1273.7 - m 1 1263.9 1273.9 <none> 1263.5 1275.5 - v1 1 1270.5 1280.5 - v2 1 1298.6 1308.6 - b 1 1327.9 1337.9
Step: AIC=1273.72 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1264.2 1272.2 <none> 1263.7 1273.7 - v1 1 1270.8 1278.8 - v2 1 1299.2 1307.2 - b 1 1327.9 1335.9
Step: AIC=1272.16 y ~ v1 + v2 + b Df Deviance AIC <none> 1264.2 1272.2 - v1 1 1271.3 1277.3 - v2 1 1299.8 1305.8 - b 1 1328.3 1334.3
Start: AIC=1277.68 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1261.4 1273.4 - co 1 1257.7 1275.7 - m 1 1257.9 1275.9 <none> 1257.7 1277.7 - v1 1 1269.7 1287.7 - v2 1 1291.2 1309.2 - b 1 1327.5 1345.5
Step: AIC=1273.4 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1261.4 1271.4 - m 1 1261.7 1271.7 <none> 1261.4 1273.4 - v1 1 1273.1 1283.1 - v2 1 1293.3 1303.3 - b 1 1331.0 1341.0
Step: AIC=1271.42 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1261.7 1269.7 <none> 1261.4 1271.4 - v1 1 1273.2 1281.2 - v2 1 1293.4 1301.4 - b 1 1331.1 1339.1
Step: AIC=1269.71 y ~ v1 + v2 + b Df Deviance AIC <none> 1261.7 1269.7 - v1 1 1273.5 1279.5 - v2 1 1293.8 1299.8 - b 1 1331.3 1337.3
Start: AIC=1277.6 y ~ v1 + v2 + m + b + c + co Df Deviance AIC - c 4 1260.4 1272.4 - co 1 1257.6 1275.6 - m 1 1258.0 1276.0 <none> 1257.6 1277.6 - v1 1 1272.2 1290.2 - v2 1 1288.5 1306.5 - b 1 1330.5 1348.5
Step: AIC=1272.41 y ~ v1 + v2 + m + b + co Df Deviance AIC - co 1 1260.4 1270.4 - m 1 1260.8 1270.8 <none> 1260.4 1272.4 - v1 1 1274.8 1284.8 - v2 1 1289.9 1299.9 - b 1 1333.3 1343.3
Step: AIC=1270.41 y ~ v1 + v2 + m + b Df Deviance AIC - m 1 1260.8 1268.8 <none> 1260.4 1270.4 - v1 1 1274.8 1282.8 - v2 1 1290.0 1298.0 - b 1 1333.4 1341.4
Step: AIC=1268.83 y ~ v1 + v2 + b Df Deviance AIC <none> 1260.8 1268.8 - v1 1 1275.3 1281.3 - v2 1 1290.6 1296.6 - b 1 1333.7 1339.7
test_IRMI_ordered.R........... 5 tests OK test_IRMI_ordered.R........... 
6 tests OK v1 v2 co v1 v2 co -2.981434 -2.924405 2.000000 2.995740 2.483395 21.000000 test_IRMI_ordered.R........... 7 tests OK test_IRMI_ordered.R........... 8 tests OK test_IRMI_ordered.R........... 9 tests OK v1 v2 co v1 v2 co -2.981434 -2.924405 2.000000 2.995740 2.483395 21.000000 test_IRMI_ordered.R........... 10 tests OK test_IRMI_ordered.R........... 11 tests OK v1 v2 co v1 v2 co -2.981434 -2.924405 2.000000 2.995740 2.483395 21.000000 test_IRMI_ordered.R........... 12 tests OK test_IRMI_ordered.R........... 13 tests OK 4.4s test_aggFunctions.R........... 0 tests kNN ordered test_aggFunctions.R........... 0 tests test_aggFunctions.R........... 0 tests test_aggFunctions.R........... 0 tests test_aggFunctions.R........... 0 tests test_aggFunctions.R........... 0 tests test_aggFunctions.R........... 1 tests OK test_aggFunctions.R........... 2 tests OK test_aggFunctions.R........... 3 tests OK test_aggFunctions.R........... 4 tests OK test_aggFunctions.R........... 5 tests OK test_aggFunctions.R........... 6 tests OK 9ms Attaching package: 'dplyr' The following objects are masked from 'package:stats': filter, lag The following objects are masked from 'package:base': intersect, setdiff, setequal, union test_data_frame.R............. 0 tests test_data_frame.R............. 0 tests b c b c 1 1 5 4 a c a c 1 1 5 4 a b a b 1 1 5 5 test_data_frame.R............. 0 tests b c b c 1 1 5 4 a c a c 1 1 5 4 a b a b 1 1 5 5 test_data_frame.R............. 0 tests test_data_frame.R............. 1 tests OK test_data_frame.R............. 1 tests OK test_data_frame.R............. 1 tests OK test_data_frame.R............. 2 tests OK b c b c 1 1 5 4 a c a c 1 1 5 4 a b a b 1 1 5 5 test_data_frame.R............. 2 tests OK b c b c 1 1 5 4 a c a c 1 1 5 4 a b a b 1 1 5 5 test_data_frame.R............. 2 tests OK test_data_frame.R............. 3 tests OK 0.8s test_gowerDind.R.............. 0 tests test_gowerDind.R.............. 0 tests x y x y -2.602266 -3.387941 2.194409 2.987243 test_gowerDind.R.............. 0 tests test_gowerDind.R.............. 0 tests x y x y -2.602266 -3.387941 2.194409 2.987243 test_gowerDind.R.............. 0 tests test_gowerDind.R.............. 0 tests test_gowerDind.R.............. 1 tests OK test_gowerDind.R.............. 1 tests OK test_gowerDind.R.............. 1 tests OK test_gowerDind.R.............. 1 tests OK test_gowerDind.R.............. 1 tests OK test_gowerDind.R.............. 2 tests OK 61ms test_graphics.R............... 0 tests test_graphics.R............... 0 tests test_graphics.R............... 0 tests test_graphics.R............... 0 tests test_graphics.R............... 0 tests test_graphics.R............... 0 tests test_graphics.R............... 0 tests Missings in variables: Variable Count NonD 14 Dream 12 Sleep 4 Span 4 Gest 4 test_graphics.R............... 0 tests test_graphics.R............... 
1 tests OK BodyWgt BrainWgt Dream Sleep Span Gest Pred Exp 0.005 0.140 0.000 2.600 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt Dream Sleep Span Gest Pred 1.000 6654.000 5712.000 6.600 19.900 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Sleep Span Gest Pred Exp 0.005 0.140 2.100 2.600 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Sleep Span Gest Pred 1.000 6654.000 5712.000 17.900 19.900 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Dream Span Gest Pred Exp 0.005 0.140 2.100 0.000 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Dream Span Gest Pred 1.000 6654.000 5712.000 17.900 6.600 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Dream Sleep Gest Pred Exp 0.005 0.140 2.100 0.000 2.600 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Dream Sleep Gest Pred 1.000 6654.000 5712.000 17.900 6.600 19.900 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Dream Sleep Span Pred Exp 0.005 0.140 2.100 0.000 2.600 2.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Dream Sleep Span Pred 1.000 6654.000 5712.000 17.900 6.600 19.900 100.000 5.000 Exp Danger 5.000 5.000 test_graphics.R............... 1 tests OK test_graphics.R............... 1 tests OK test_graphics.R............... 1 tests OK Imputed missings in variables: Variable Count NonD 14 Dream 12 Sleep 4 Span 4 Gest 4 test_graphics.R............... 1 tests OK test_graphics.R............... 2 tests OK test_graphics.R............... 2 tests OK test_graphics.R............... 2 tests OK test_graphics.R............... 3 tests OK Exp Exp 1 5 test_graphics.R............... 3 tests OK test_graphics.R............... 3 tests OK test_graphics.R............... 3 tests OK test_graphics.R............... 4 tests OK test_graphics.R............... 4 tests OK test_graphics.R............... 5 tests OK Ca Bi Ca Bi 1.10e+02 6.00e-03 4.17e+04 3.89e+00 Ca As Ca As 110.0 0.1 41700.0 30.7 test_graphics.R............... 5 tests OK test_graphics.R............... 6 tests OK test_graphics.R............... 6 tests OK test_graphics.R............... 6 tests OK test_graphics.R............... 6 tests OK test_graphics.R............... 7 tests OK Humidity Humidity 71.6 94.8 Air.Temp Air.Temp 21.42 28.50 test_graphics.R............... 7 tests OK test_graphics.R............... 7 tests OK test_graphics.R............... 7 tests OK test_graphics.R............... 8 tests OK test_graphics.R............... 8 tests OK test_graphics.R............... 9 tests OK Bi Bi 0.006 3.890 As As 0.1 30.7 test_graphics.R............... 9 tests OK test_graphics.R............... 10 tests OK test_graphics.R............... 10 tests OK test_graphics.R............... 10 tests OK test_graphics.R............... 11 tests OK BodyWgt BrainWgt Dream Sleep BodyWgt BrainWgt Dream Sleep 0.005 0.140 0.000 2.600 6654.000 5712.000 6.600 19.900 BodyWgt BrainWgt NonD Sleep BodyWgt BrainWgt NonD Sleep 0.005 0.140 2.100 2.600 6654.000 5712.000 17.900 19.900 BodyWgt BrainWgt NonD Dream BodyWgt BrainWgt NonD Dream 0.005 0.140 2.100 0.000 6654.000 5712.000 17.900 6.600 test_graphics.R............... 11 tests OK test_graphics.R............... 11 tests OK test_graphics.R............... 12 tests OK test_graphics.R............... 12 tests OK test_graphics.R............... 12 tests OK test_graphics.R............... 
13 tests OK BodyWgt BrainWgt Dream Sleep Span Gest BodyWgt BrainWgt 0.005 0.140 0.000 2.600 2.000 12.000 6654.000 5712.000 Dream Sleep Span Gest 6.600 19.900 100.000 645.000 BodyWgt BrainWgt NonD Sleep Span Gest BodyWgt BrainWgt 0.005 0.140 2.100 2.600 2.000 12.000 6654.000 5712.000 NonD Sleep Span Gest 17.900 19.900 100.000 645.000 BodyWgt BrainWgt NonD Dream Span Gest BodyWgt BrainWgt 0.005 0.140 2.100 0.000 2.000 12.000 6654.000 5712.000 NonD Dream Span Gest 17.900 6.600 100.000 645.000 BodyWgt BrainWgt NonD Dream Sleep Gest BodyWgt BrainWgt 0.005 0.140 2.100 0.000 2.600 12.000 6654.000 5712.000 NonD Dream Sleep Gest 17.900 6.600 19.900 645.000 BodyWgt BrainWgt NonD Dream Sleep Span BodyWgt BrainWgt 0.005 0.140 2.100 0.000 2.600 2.000 6654.000 5712.000 NonD Dream Sleep Span 17.900 6.600 19.900 100.000 test_graphics.R............... 13 tests OK test_graphics.R............... 13 tests OK test_graphics.R............... 14 tests OK test_graphics.R............... 14 tests OK test_graphics.R............... 15 tests OK BodyWgt BrainWgt Dream Sleep Span Gest Pred Exp 0.005 0.140 0.000 2.600 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt Dream Sleep Span Gest Pred 1.000 6654.000 5712.000 6.600 19.900 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Sleep Span Gest Pred Exp 0.005 0.140 2.100 2.600 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Sleep Span Gest Pred 1.000 6654.000 5712.000 17.900 19.900 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Dream Span Gest Pred Exp 0.005 0.140 2.100 0.000 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Dream Span Gest Pred 1.000 6654.000 5712.000 17.900 6.600 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Dream Sleep Gest Pred Exp 0.005 0.140 2.100 0.000 2.600 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Dream Sleep Gest Pred 1.000 6654.000 5712.000 17.900 6.600 19.900 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Dream Sleep Span Pred Exp 0.005 0.140 2.100 0.000 2.600 2.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Dream Sleep Span Pred 1.000 6654.000 5712.000 17.900 6.600 19.900 100.000 5.000 Exp Danger 5.000 5.000 test_graphics.R............... 15 tests OK test_graphics.R............... 16 tests OK test_graphics.R............... 17 tests OK Al_XRF Ca_XRF Fe_XRF K_XRF Mg_XRF Mn_XRF Na_XRF P_XRF Si_XRF Ti_XRF Al_XRF 2.920 0.030 0.590 0.360 0.120 0.015 0.080 0.004 17.050 0.053 12.080 Ca_XRF Fe_XRF K_XRF Mg_XRF Mn_XRF Na_XRF P_XRF Si_XRF Ti_XRF 6.760 12.350 5.240 7.320 0.356 4.870 0.589 40.270 1.900 test_graphics.R............... 17 tests OK test_graphics.R............... 18 tests OK test_graphics.R............... 19 tests OK Humidity Humidity 71.6 94.8 Air.Temp Air.Temp 21.42 28.50 test_graphics.R............... 20 tests OK test_graphics.R............... 21 tests OK Humidity Humidity 71.6 94.8 Air.Temp Air.Temp 21.42 28.50 test_graphics.R............... 22 tests OK test_graphics.R............... 22 tests OK test_graphics.R............... 22 tests OK test_graphics.R............... 23 tests OK BodyWgt BrainWgt Dream Sleep BodyWgt BrainWgt Dream Sleep 0.005 0.140 0.000 2.600 6654.000 5712.000 6.600 19.900 BodyWgt BrainWgt NonD Sleep BodyWgt BrainWgt NonD Sleep 0.005 0.140 2.100 2.600 6654.000 5712.000 17.900 19.900 BodyWgt BrainWgt NonD Dream BodyWgt BrainWgt NonD Dream 0.005 0.140 2.100 0.000 6654.000 5712.000 17.900 6.600 test_graphics.R............... 23 tests OK test_graphics.R............... 23 tests OK test_graphics.R............... 
24 tests OK test_graphics.R............... 24 tests OK test_graphics.R............... 24 tests OK test_graphics.R............... 25 tests OK Humidity Humidity 71.6 94.8 Air.Temp Air.Temp 21.42 28.50 test_graphics.R............... 25 tests OK Exp Exp 1 5 test_graphics.R............... 25 tests OK test_graphics.R............... 26 tests OK test_graphics.R............... 27 tests OK test_graphics.R............... 28 tests OK Humidity Humidity 71.6 94.8 Air.Temp Air.Temp 21.42 28.50 test_graphics.R............... 29 tests OK CaO CaO -1.3010300 0.9758911 test_graphics.R............... 30 tests OK test_graphics.R............... 30 tests OK test_graphics.R............... 31 tests OK test_graphics.R............... 31 tests OK test_graphics.R............... 32 tests OK 5.3s hotdeck test_hotdeck.R................ 0 tests Attaching package: 'data.table' The following objects are masked from 'package:dplyr': between, first, last test_hotdeck.R................ 0 tests test_hotdeck.R................ 0 tests test_hotdeck.R................ 0 tests test_hotdeck.R................ 0 tests test_hotdeck.R................ 0 tests test_hotdeck.R................ 1 tests OK test_hotdeck.R................ 1 tests OK test_hotdeck.R................ 1 tests OK test_hotdeck.R................ 2 tests OK test_hotdeck.R................ 2 tests OK test_hotdeck.R................ 3 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 4 tests OK test_hotdeck.R................ 5 tests OK test_hotdeck.R................ 5 tests OK test_hotdeck.R................ 6 tests OK test_hotdeck.R................ 6 tests OK test_hotdeck.R................ 7 tests OK test_hotdeck.R................ 7 tests OK test_hotdeck.R................ 8 tests OK test_hotdeck.R................ 8 tests OK test_hotdeck.R................ 9 tests OK test_hotdeck.R................ 10 tests OK test_hotdeck.R................ 10 tests OK test_hotdeck.R................ 10 tests OK test_hotdeck.R................ 11 tests OK test_hotdeck.R................ 12 tests OK test_hotdeck.R................ 13 tests OK test_hotdeck.R................ 14 tests OK test_hotdeck.R................ 15 tests OK test_hotdeck.R................ 15 tests OK test_hotdeck.R................ 15 tests OK test_hotdeck.R................ 16 tests OK test_hotdeck.R................ 17 tests OK test_hotdeck.R................ 18 tests OK test_hotdeck.R................ 19 tests OK 0.5s test_impNA.R.................. 0 tests test_impNA.R.................. 0 tests test_impNA.R.................. 0 tests test_impNA.R.................. 
0 tests BodyWgt BrainWgt Dream Sleep Span Gest Pred Exp 0.005 0.140 0.000 2.900 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt Dream Sleep Span Gest Pred 1.000 6654.000 5712.000 6.600 19.900 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Sleep Span Gest Pred Exp 0.005 0.140 2.100 2.900 2.000 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Sleep Span Gest Pred 1.000 6654.000 5712.000 17.900 19.900 100.000 645.000 5.000 Exp Danger 5.000 5.000 BodyWgt BrainWgt NonD Dream Sleep Gest Pred Exp 0.005 0.140 2.100 0.000 2.600 12.000 1.000 1.000 Danger BodyWgt BrainWgt NonD Dream Sleep Gest Pred 1.000 6654.000 5712.000 17.900 6.600 19.900 645.000 5.000 Exp Danger 5.000 5.000 test_impNA.R.................. 0 tests test_impNA.R.................. 1 tests OK test_impNA.R.................. 2 tests OK test_impNA.R.................. 2 tests OK test_impNA.R.................. 3 tests OK test_impNA.R.................. 4 tests OK 0.2s impPCA test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests test_impPCA.R................. 0 tests Iterations: 4 test_impPCA.R................. 0 tests test_impPCA.R................. 1 tests OK Iterations: 4 test_impPCA.R................. 1 tests OK test_impPCA.R................. 2 tests OK Iterations: 0 test_impPCA.R................. 2 tests OK test_impPCA.R................. 3 tests OK Iterations: 0 test_impPCA.R................. 3 tests OK test_impPCA.R................. 4 tests OK 84ms test_irmi_types.R............. 0 tests z z -0.3308959 2.0121804 test_irmi_types.R............. 0 tests test_irmi_types.R............. 1 tests OK test_irmi_types.R............. 2 tests OK test_irmi_types.R............. 2 tests OK test_irmi_types.R............. 2 tests OK test_irmi_types.R............. 2 tests OK test_irmi_types.R............. 2 tests OK test_irmi_types.R............. 3 tests OK test_irmi_types.R............. 4 tests OK test_irmi_types.R............. 4 tests OK test_irmi_types.R............. 4 tests OK test_irmi_types.R............. 4 tests OK test_irmi_types.R............. 5 tests OK test_irmi_types.R............. 6 tests OK test_irmi_types.R............. 6 tests OK test_irmi_types.R............. 6 tests OK num1 num2 num3 num1 num2 num3 -3.087610 -4.001394 -3.237928 3.349508 3.615635 2.820386 test_irmi_types.R............. 6 tests OK test_irmi_types.R............. 7 tests OK test_irmi_types.R............. 8 tests OK 0.7s test_kNN.R.................... 0 tests kNN general test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests y y 1 6 test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests y y 1 6 test_kNN.R.................... 0 tests test_kNN.R.................... 0 tests test_kNN.R.................... 1 tests OK test_kNN.R.................... 1 tests OK test_kNN.R.................... 
1 tests OK Detected as categorical variable: x,x_imp,y_imp Detected as ordinal variable: Detected as numerical variable: y 0 items ofvariable:x imputed 6items ofvariable:y imputed Time difference of 0.01680303 secs test_kNN.R.................... 1 tests OK test_kNN.R.................... 2 tests OK test_kNN.R.................... 2 tests OK test_kNN.R.................... 2 tests OK test_kNN.R.................... 2 tests OK y z 1.000000 1.000000 RandomVariableForImputation y -1.372898 6.000000 z RandomVariableForImputation 6.000000 2.212962 z RandomVariableForImputation 1.000000 -1.372898 z RandomVariableForImputation 6.000000 2.212962 y z 1.000000 1.000000 RandomVariableForImputation y -1.372898 6.000000 z RandomVariableForImputation 6.000000 2.212962 test_kNN.R.................... 2 tests OK test_kNN.R.................... 3 tests OK test_kNN.R.................... 3 tests OK test_kNN.R.................... 3 tests OK test_kNN.R.................... 3 tests OK test_kNN.R.................... 3 tests OK test_kNN.R.................... 3 tests OK test_kNN.R.................... 3 tests OK y z 1.0000000 1.0000000 y2 z2 1.0000000 1.0000000 m2 y23 -0.2369185 1.0000000 z23 m23 1.0000000 -0.2369185 RandomVariableForImputation y -1.5949014 6.0000000 z y2 6.0000000 6.0000000 z2 m2 6.0000000 1.0393184 y23 z23 6.0000000 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 z y2 1.0000000 1.0000000 z2 m2 1.0000000 -0.2369185 y23 z23 1.0000000 1.0000000 m23 RandomVariableForImputation -0.2369185 -1.5949014 z y2 6.0000000 6.0000000 z2 m2 6.0000000 1.0393184 y23 z23 6.0000000 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 y z 1.0000000 1.0000000 y2 z2 1.0000000 1.0000000 m2 y23 -0.2369185 1.0000000 z23 m23 1.0000000 -0.2369185 RandomVariableForImputation y -1.5949014 6.0000000 z y2 6.0000000 6.0000000 z2 m2 6.0000000 1.0393184 y23 z23 6.0000000 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 y z 1.0000000 1.0000000 y2 z2 1.0000000 1.0000000 m2 y23 -0.2369185 1.0000000 z23 m23 1.0000000 -0.2369185 RandomVariableForImputation y -1.5949014 6.0000000 z y2 6.0000000 6.0000000 z2 m2 6.0000000 1.0393184 y23 z23 6.0000000 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 y z 1.0000000 1.0000000 z2 m2 1.0000000 -0.2369185 y23 z23 1.0000000 1.0000000 m23 RandomVariableForImputation -0.2369185 -1.5949014 y z 6.0000000 6.0000000 z2 m2 6.0000000 1.0393184 y23 z23 6.0000000 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 y z 1.0000000 1.0000000 y2 z2 1.0000000 1.0000000 y23 z23 1.0000000 1.0000000 m23 RandomVariableForImputation -0.2369185 -1.5949014 y z 6.0000000 6.0000000 y2 z2 6.0000000 6.0000000 y23 z23 6.0000000 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 y z 1.0000000 1.0000000 y2 z2 1.0000000 1.0000000 m2 y23 -0.2369185 1.0000000 z23 m23 1.0000000 -0.2369185 RandomVariableForImputation y -1.5949014 6.0000000 z y2 6.0000000 6.0000000 z2 m2 6.0000000 1.0393184 y23 z23 6.0000000 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 y z 1.0000000 1.0000000 y2 z2 1.0000000 1.0000000 m2 z23 -0.2369185 1.0000000 m23 RandomVariableForImputation -0.2369185 -1.5949014 y z 6.0000000 6.0000000 y2 z2 6.0000000 6.0000000 m2 z23 1.0393184 6.0000000 m23 RandomVariableForImputation 1.0393184 2.0511976 y z 1.0000000 1.0000000 y2 z2 1.0000000 1.0000000 m2 y23 -0.2369185 1.0000000 z23 RandomVariableForImputation 1.0000000 -1.5949014 y z 6.0000000 6.0000000 y2 z2 6.0000000 6.0000000 m2 y23 1.0393184 6.0000000 z23 RandomVariableForImputation 6.0000000 
2.0511976 test_kNN.R.................... 3 tests OK test_kNN.R.................... 4 tests OK test_kNN.R.................... 4 tests OK z z 1 6 test_kNN.R.................... 4 tests OK test_kNN.R.................... 5 tests OK test_kNN.R.................... 5 tests OK z z 1 6 test_kNN.R.................... 5 tests OK test_kNN.R.................... 6 tests OK test_kNN.R.................... 7 tests OK test_kNN.R.................... 7 tests OK test_kNN.R.................... 8 tests OK test_kNN.R.................... 9 tests OK test_kNN.R.................... 9 tests OK test_kNN.R.................... 9 tests OK z z 1 6 y y 1 6 test_kNN.R.................... 9 tests OK test_kNN.R.................... 10 tests OK test_kNN.R.................... 11 tests OK z z 1 6 y y 1 5 test_kNN.R.................... 11 tests OK test_kNN.R.................... 12 tests OK test_kNN.R.................... 13 tests OK test_kNN.R.................... 13 tests OK test_kNN.R.................... 13 tests OK test_kNN.R.................... 13 tests OK y z m y z m 1.0000000 1.0000000 -0.2369185 6.0000000 6.0000000 1.0393184 z m z m 1.0000000 -0.2369185 6.0000000 1.0393184 y z y z 1 1 6 6 y z m y z m 1.0000000 1.0000000 -0.2369185 6.0000000 6.0000000 1.0393184 y z m 1.0000000 1.0000000 -0.2369185 yrandomForestFeature y z 1.9946333 6.0000000 6.0000000 m yrandomForestFeature 1.0393184 4.8171000 y z m 1.00000000 1.00000000 -0.23691848 mrandomForestFeature y z -0.03045094 6.00000000 6.00000000 m mrandomForestFeature 1.03931837 0.90111834 test_kNN.R.................... 13 tests OK test_kNN.R.................... 14 tests OK test_kNN.R.................... 15 tests OK test_kNN.R.................... 15 tests OK test_kNN.R.................... 15 tests OK test_kNN.R.................... 15 tests OK y z m y z m 1.0000000 1.0000000 -0.2369185 6.0000000 6.0000000 1.0393184 z m z m 1.0000000 -0.2369185 6.0000000 1.0393184 y z y z 1 1 6 6 yrandomForestFeature yrandomForestFeature 2.082400 4.727667 mrandomForestFeature mrandomForestFeature -0.05045148 0.87127575 test_kNN.R.................... 15 tests OK test_kNN.R.................... 16 tests OK test_kNN.R.................... 17 tests OK test_kNN.R.................... 17 tests OK test_kNN.R.................... 17 tests OK z y z y 1 1 6 6 z z 1 6 z z 1 6 z yrandomForestFeature z 1.000000 2.032967 6.000000 yrandomForestFeature 4.985967 test_kNN.R.................... 17 tests OK test_kNN.R.................... 18 tests OK test_kNN.R.................... 19 tests OK test_kNN.R.................... 19 tests OK test_kNN.R.................... 19 tests OK z y z y 1 1 6 6 z z 1 6 z z 1 6 z yrandomForestFeature z 1.000000 1.902000 6.000000 yrandomForestFeature 4.915067 test_kNN.R.................... 19 tests OK test_kNN.R.................... 20 tests OK test_kNN.R.................... 21 tests OK test_kNN.R.................... 21 tests OK test_kNN.R.................... 21 tests OK y y 1 6 test_kNN.R.................... 21 tests OK test_kNN.R.................... 21 tests OK y y 1 6 test_kNN.R.................... 21 tests OK test_kNN.R.................... 21 tests OK test_kNN.R.................... 22 tests OK test_kNN.R.................... 22 tests OK test_kNN.R.................... 22 tests OK Detected as categorical variable: x,x_imp,y_imp Detected as ordinal variable: Detected as numerical variable: y 0 items ofvariable:x imputed 6items ofvariable:y imputed Time difference of 0.01598048 secs test_kNN.R.................... 22 tests OK test_kNN.R.................... 23 tests OK test_kNN.R.................... 
23 tests OK
    [output truncated: repeated tinytest progress redraws and printed imputation matrices]
    test_kNN.R.................... 39 tests OK 1.7s
    test_kNN_exact.R.............. 36 tests OK 0.3s
    test_kNN_iqr.R................ 1 tests OK 23ms
    test_kNN_ordered.R............ 8 tests OK 57ms
    test_matchImpute.R............ 6 tests OK 14ms
    test_rangerImpute.R........... 5 tests OK 0.1s
    test_regressionImp.R.......... 5 tests OK 10ms
    test_xgboostImpute.R.......... 2 tests OK
    Error in process.y.margin.and.objective(y, base_margin, objective, params) :
      Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic
    Calls: ... xgboostImpute -> <anonymous> -> process.y.margin.and.objective
    In addition: There were 20 warnings (use warnings() to see them)
    Execution halted
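Note: this failure and several below share a root cause. The updated xgboost R interface routes xgboost() through x/y arguments and infers the learning task from the class of y, so a numeric 0/1 response is treated as regression and an explicit "binary:logistic" objective is rejected, exactly as the error message reports. A minimal sketch of the adjustment, assuming the caller controls how the response is constructed (the data and variable names are illustrative, not taken from the package's code):

    library(xgboost)

    x <- as.matrix(iris[, 1:4])
    y_num <- as.integer(iris$Species == "setosa")   # numeric 0/1 response

    ## Rejected under the new interface: numeric y permits only the
    ## regression-style objectives listed in the error above.
    ## fit <- xgboost(x, y_num, objective = "binary:logistic", nrounds = 5)

    ## Accepted: a two-level factor lets xgboost() infer a binary task.
    y_fct <- factor(y_num, levels = c(0, 1))
    fit <- xgboost(x, y_fct, nrounds = 5)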
Package: visaOTR Check: examples New result: ERROR
  Running examples in ‘visaOTR-Ex.R’ failed
  The error most likely occurred in:

  > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
  > ### Name: visa.est
  > ### Title: Valid Improved Sparsity A-Learning for Optimal Treatment Decision
  > ### Aliases: visa.est
  >
  > ### ** Examples
  >
  > data(visa_SimuData)
  > y = visa_SimuData$y
  > a = visa_SimuData$a
  > x = visa_SimuData$x
  > # estimation
  > result <- visa.est(y, x, a, IC = "BIC", lambda.list = c(0.1, 0.5))
  Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized),  :
    Passed unrecognized parameters: verbose. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
    Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized),  :
    Passed unrecognized parameters: verbose. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
    Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
  Warning in parallel::mclapply(seq(no_rep), wt_calc) :
    all scheduled cores encountered errors in user code
  Error in FUN(x, aperm(array(STATS, dims[perm]), order(perm)), ...) :
    non-numeric argument to binary operator
  Calls: visa.est -> visa.weight -> sweep
  Execution halted

Package: weightedGCM Check: R code for possible problems New result: NOTE
  train.xgboost1: no visible global function definition for ‘cb.evaluation.log’
  Undefined global functions or variables:
    cb.evaluation.log
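Note: this NOTE is consistent with the callback rename in the updated xgboost R API, in which the cb.* constructors are no longer exported. A hedged sketch of the likely one-line fix, assuming train.xgboost1 wanted the evaluation-log callback (the xgb.cb.* naming comes from the current callback API, not from weightedGCM's code):

    ## old, no longer exported:
    ## callbacks = list(cb.evaluation.log())
    ## current export:
    ## callbacks = list(xgb.cb.evaluation.log())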
Package: xgb2sql Check: examples New result: ERROR
  Running examples in ‘xgb2sql-Ex.R’ failed
  The error most likely occurred in:

  > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
  > ### Name: booster2sql
  > ### Title: Transform XGBoost model object to SQL query.
  > ### Aliases: booster2sql
  >
  > ### ** Examples
  >
  > library(xgboost)
  > # load data
  > df = data.frame(ggplot2::diamonds)
  > head(df)
    carat       cut color clarity depth table price    x    y    z
  1  0.23     Ideal     E     SI2  61.5    55   326 3.95 3.98 2.43
  2  0.21   Premium     E     SI1  59.8    61   326 3.89 3.84 2.31
  3  0.23      Good     E     VS1  56.9    65   327 4.05 4.07 2.31
  4  0.29   Premium     I     VS2  62.4    58   334 4.20 4.23 2.63
  5  0.31      Good     J     SI2  63.3    58   335 4.34 4.35 2.75
  6  0.24 Very Good     J    VVS2  62.8    57   336 3.94 3.96 2.48
  >
  > # data processing
  > out <- onehot2sql(df)
  > x <- out$model.matrix[,colnames(out$model.matrix)!='price']
  > y <- out$model.matrix[,colnames(out$model.matrix)=='price']
  >
  > # model training
  > bst <- xgboost(data = x,
  +                label = y,
  +                max.depth = 3,
  +                eta = .3,
  +                nround = 5,
  +                nthread = 1,
  +                objective = 'reg:linear')
  Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized),  :
    Passed unrecognized parameters: max.depth. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
    Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
    Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
    Parameter 'eta' has been renamed to 'learning_rate'. This warning will become an error in a future version.
  Error in process.y.margin.and.objective(y, base_margin, objective, params) :
    Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: reg:linear
  Calls: xgboost -> process.y.margin.and.objective
  Execution halted

Package: xgb2sql Check: re-building of vignette outputs New result: ERROR
  Error(s) in re-building vignettes:
  ...
  --- re-building ‘xgb2sql.Rmd’ using rmarkdown

  Quitting from xgb2sql.Rmd:221-231 [unnamed-chunk-6]
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Error in `process.y.margin.and.objective()`:
  ! Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: reg:linear
  ---
  Backtrace:
  ▆
   1. └─xgboost::xgboost(...)
   2.   └─xgboost:::process.y.margin.and.objective(...)
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  Error: processing vignette 'xgb2sql.Rmd' failed with diagnostics:
  Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: reg:linear
  --- failed re-building ‘xgb2sql.Rmd’

  SUMMARY: processing the following file failed:
    ‘xgb2sql.Rmd’

  Error: Vignette re-building failed.
  Execution halted
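Note: every argument flagged in the warnings above has a direct replacement, and 'reg:linear' is superseded by 'reg:squarederror' (it appears in the supported list the error prints). A hedged rewrite of the example's training call under the renamed interface; max_depth and nthreads are assumed to be the current spellings, while the other names follow the deprecation warnings verbatim:

    bst <- xgboost(x = x,                          # was data
                   y = y,                          # was label
                   objective = "reg:squarederror", # replaces 'reg:linear'
                   nrounds = 5,                    # was nround
                   max_depth = 3,                  # was max.depth (assumed spelling)
                   learning_rate = 0.3,            # was eta
                   nthreads = 1)                   # was nthread (assumed spelling)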
Package: xrf Check: examples New result: ERROR
  Running examples in ‘xrf-Ex.R’ failed
  The error most likely occurred in:

  > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
  > ### Name: summary.xrf
  > ### Title: Summarize an eXtreme RuleFit model
  > ### Aliases: summary.xrf
  >
  > ### ** Examples
  >
  > m <- xrf(Petal.Length ~ ., iris,
  +          xgb_control = list(nrounds = 2, max_depth = 2),
  +          family = 'gaussian')
  Warning in throw_err_or_depr_msg("Parameter(s) have been removed from this function: ",  :
    Parameter(s) have been removed from this function: params. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Passed unrecognized parameters: ", paste(head(names_unrecognized),  :
    Passed unrecognized parameters: verbose. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
    Parameter 'data' has been renamed to 'x'. This warning will become an error in a future version.
  Warning in throw_err_or_depr_msg("Parameter '", match_old, "' has been renamed to '",  :
    Parameter 'label' has been renamed to 'y'. This warning will become an error in a future version.
  > summary(m)
  An eXtreme RuleFit model of 11 rules.
  Original Formula:
  Petal.Length ~ Sepal.Length + Sepal.Width + Petal.Width + Species
  Tree model:
  Error in h(simpleError(msg, call)) :
    error in evaluating the argument 'object' in selecting a method for function 'show': length of 'dimnames' [1] not equal to array extent
  Calls: summary ... summary.default -> array -> .handleSimpleError -> h
  Execution halted

Package: xrf Check: tests New result: ERROR
  Running ‘testthat.R’ [11s/11s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
  > library(testthat)
  > library(xrf)
  >
  > test_check("xrf")
  Saving _problems/test_model-39.R
  Saving _problems/test_model-48.R
  Saving _problems/test_model-56.R
  [ FAIL 3 | WARN 44 | SKIP 0 | PASS 12 ]

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Error ('test_model.R:38:3'): model from dense design matrix has expected fields ──
  Error in `process.y.margin.and.objective(y, base_margin, objective, params)`:
    Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic
  Backtrace:
  ▆
   1. ├─xrf::xrf(...) at test_model.R:38:3
   2. └─xrf:::xrf.formula(...)
   3.   └─xgboost::xgboost(...)
   4.     └─xgboost:::process.y.margin.and.objective(...)
  ── Error ('test_model.R:47:3'): model from sparse design matrix has expected fields ──
  Error in `process.y.margin.and.objective(y, base_margin, objective, params)`:
    Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic
  Backtrace:
  ▆
   1. ├─xrf::xrf(...) at test_model.R:47:3
   2. └─xrf:::xrf.formula(...)
   3.   └─xgboost::xgboost(...)
   4.     └─xgboost:::process.y.margin.and.objective(...)
  ── Error ('test_model.R:54:3'): model predicts binary outcome ──────────────────
  Error in `process.y.margin.and.objective(y, base_margin, objective, params)`:
    Got numeric 'y' - supported objectives for this data are: reg:squarederror, reg:squaredlogerror, reg:logistic, reg:pseudohubererror, reg:absoluteerror, reg:quantileerror, count:poisson, reg:gamma, reg:tweedie. Was passed: binary:logistic
  Backtrace:
  ▆
   1. ├─xrf::xrf(...) at test_model.R:54:3
   2. └─xrf:::xrf.formula(...)
   3.   └─xgboost::xgboost(...)
   4.     └─xgboost:::process.y.margin.and.objective(...)

  [ FAIL 3 | WARN 44 | SKIP 0 | PASS 12 ]
  Error: ! Test failures.
  Execution halted
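Note: the xrf test failures are the same numeric-y/binary:logistic mismatch seen above. Where a package needs to keep an explicit objective string alongside a numeric label, the lower-level pairing of xgb.DMatrix() and xgb.train() sidesteps the type-based inference entirely. A minimal sketch, assuming xgb.train() retains its params-list signature (the data below are illustrative, not xrf's test fixtures):

    library(xgboost)

    x <- model.matrix(Species ~ . - 1, iris)       # numeric design matrix
    y <- as.integer(iris$Species == "setosa")      # numeric 0/1 label

    dtrain <- xgb.DMatrix(data = x, label = y)
    params <- list(objective = "binary:logistic", max_depth = 2, nthread = 1)
    fit <- xgb.train(params = params, data = dtrain, nrounds = 2)
    head(predict(fit, dtrain))                     # predicted probabilities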