Package check result: OK

Changes to worse in reverse depends:

Package: broom
Check: examples
New result: ERROR
  Running examples in ‘broom-Ex.R’ failed
  The error most likely occurred in:

    > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
    > ### Name: augment.lm
    > ### Title: Augment data with information from a(n) lm object
    > ### Aliases: augment.lm
    >
    > ### ** Examples
    >
    > ## Don't show:
    > if (rlang::is_installed("ggplot2")) (if (getRversion() >= "3.4") withAutoprint else force)({ # examplesIf
    + ## End(Don't show)
    +
    + library(ggplot2)
    + library(dplyr)
    +
    + mod <- lm(mpg ~ wt + qsec, data = mtcars)
    +
    + tidy(mod)
    + glance(mod)
    +
    + # coefficient plot
    + d <- tidy(mod, conf.int = TRUE)
    +
    + ggplot(d, aes(estimate, term, xmin = conf.low, xmax = conf.high, height = 0)) +
    +   geom_point() +
    +   geom_vline(xintercept = 0, lty = 4) +
    +   geom_errorbarh()
    +
    + # aside: There are tidy() and glance() methods for lm.summary objects too.
    + # this can be useful when you want to conserve memory by converting large lm
    + # objects into their leaner summary.lm equivalents.
    + s <- summary(mod)
    + tidy(s, conf.int = TRUE)
    + glance(s)
    +
    + augment(mod)
    + augment(mod, mtcars, interval = "confidence")
    +
    + # predict on new data
    + newdata <- mtcars |>
    +   head(6) |>
    +   mutate(wt = wt + 1)
    + augment(mod, newdata = newdata)
    +
    + # ggplot2 example where we also construct 95% prediction interval
    +
    + # simpler bivariate model since we're plotting in 2D
    + mod2 <- lm(mpg ~ wt, data = mtcars)
    +
    + au <- augment(mod2, newdata = newdata, interval = "prediction")
    +
    + ggplot(au, aes(wt, mpg)) +
    +   geom_point() +
    +   geom_line(aes(y = .fitted)) +
    +   geom_ribbon(aes(ymin = .lower, ymax = .upper), col = NA, alpha = 0.3)
    +
    + # predict on new data without outcome variable. Output does not include .resid
    + newdata <- newdata |>
    +   select(-mpg)
    +
    + augment(mod, newdata = newdata)
    +
    + au <- augment(mod, data = mtcars)
    +
    + ggplot(au, aes(.hat, .std.resid)) +
    +   geom_vline(size = 2, colour = "white", xintercept = 0) +
    +   geom_hline(size = 2, colour = "white", yintercept = 0) +
    +   geom_point() +
    +   geom_smooth(se = FALSE)
    +
    + plot(mod, which = 6)
    +
    + ggplot(au, aes(.hat, .cooksd)) +
    +   geom_vline(xintercept = 0, colour = NA) +
    +   geom_abline(slope = seq(0, 3, by = 0.5), colour = "white") +
    +   geom_smooth(se = FALSE) +
    +   geom_point()
    +
    + # column-wise models
    + a <- matrix(rnorm(20), nrow = 10)
    + b <- a + rnorm(length(a))
    + result <- lm(b ~ a)
    +
    + tidy(result)
    + ## Don't show:
    + }) # examplesIf
    > library(ggplot2)
    > library(dplyr)

    Attaching package: ‘dplyr’

    The following objects are masked from ‘package:stats’:

        filter, lag

    The following objects are masked from ‘package:base’:

        intersect, setdiff, setequal, union

    > mod <- lm(mpg ~ wt + qsec, data = mtcars)
    > tidy(mod)
    # A tibble: 3 × 5
      term        estimate std.error statistic  p.value
      <chr>          <dbl>     <dbl>     <dbl>    <dbl>
    1 (Intercept)   19.7       5.25       3.76 7.65e- 4
    2 wt            -5.05      0.484    -10.4  2.52e-11
    3 qsec           0.929     0.265      3.51 1.50e- 3
    > glance(mod)
    Error in ll(object) : could not find function "ll"
    Calls: ... tibble_quos -> eval_tidy -> <anonymous> -> AIC.default
    Execution halted

Package: broom
Check: re-building of vignette outputs
New result: ERROR
  Error(s) in re-building vignettes:
  ...
  --- re-building ‘adding-tidiers.Rmd’ using rmarkdown
  --- finished re-building ‘adding-tidiers.Rmd’
  --- re-building ‘available-methods.Rmd’ using rmarkdown
  --- finished re-building ‘available-methods.Rmd’
  --- re-building ‘bootstrapping.Rmd’ using rmarkdown
  --- finished re-building ‘bootstrapping.Rmd’
  --- re-building ‘broom.Rmd’ using rmarkdown

  Quitting from broom.Rmd:74-76 [unnamed-chunk-3]
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Error in `ll()`:
  ! could not find function "ll"
  ---
  Backtrace:
      ▆
   1. ├─generics::glance(lmfit)
   2. ├─broom:::glance.lm(lmfit)
   3. │ ├─base::with(...)
   4. │ └─base::with.default(...)
   5. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
   6. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
   7. │ └─tibble::tibble(...)
   8. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
   9. │ └─rlang::eval_tidy(xs[[j]], mask)
  10. ├─stats::AIC(x)
  11. └─stats:::AIC.default(x)
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  Error: processing vignette 'broom.Rmd' failed with diagnostics:
  could not find function "ll"
  --- failed re-building ‘broom.Rmd’

  --- re-building ‘broom_and_dplyr.Rmd’ using rmarkdown

  Quitting from broom_and_dplyr.Rmd:164-182 [unnamed-chunk-15]
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  NULL
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  Error: processing vignette 'broom_and_dplyr.Rmd' failed with diagnostics:
  ℹ In argument: `glanced = map(fit, glance)`.
  Caused by error in `map()`:
  ℹ In index: 1.
  Caused by error in `ll()`:
  ! could not find function "ll"
  --- failed re-building ‘broom_and_dplyr.Rmd’

  --- re-building ‘kmeans.Rmd’ using rmarkdown
  --- finished re-building ‘kmeans.Rmd’

  SUMMARY: processing the following files failed:
    ‘broom.Rmd’ ‘broom_and_dplyr.Rmd’

  Error: Vignette re-building failed.
  Execution halted

Package: broom
Check: tests
New result: ERROR
  Running ‘spelling.R’ [0s/0s]
  Running ‘test-all.R’ [12s/12s]
  Running ‘testthat.R’ [14s/14s]
  Running the tests in ‘tests/test-all.R’ failed.
  Complete output:
    > library(testthat)
    > test_check("broom")
    Loading required package: broom

    Attaching package: 'modeldata'

    The following object is masked from 'package:datasets':

        penguins

    Multiple parameters; naming those columns ndf and ddf.
    [ FAIL 4 | WARN 0 | SKIP 99 | PASS 949 ]

    ══ Skipped tests (99) ══════════════════════════════════════════════════════════
    • On CRAN (99): 'test-aer.R:1:1', 'test-auc.R:1:1', 'test-bbmle.R:1:1',
      'test-betareg.R:1:1', 'test-biglm.R:1:1', 'test-bingroup.R:1:1',
      'test-boot.R:1:1', 'test-btergm.R:1:1', 'test-car.R:1:1', 'test-caret.R:1:1',
      'test-cluster.R:1:1', 'test-cmprsk.R:1:1', 'test-data-frame.R:2:3',
      'test-drc.R:1:1', 'test-emmeans.R:1:1', 'test-epiR.R:1:1', 'test-ergm.R:1:1',
      'test-fixest.R:1:1', 'test-gam.R:1:1', 'test-geepack.R:1:1',
      'test-glmnetUtils.R:1:1', 'test-gmm.R:1:1', 'test-hmisc.R:1:1',
      'test-joineRML.R:1:1', 'test-kendall.R:1:1', 'test-ks.R:1:1',
      'test-lavaan.R:1:1', 'test-leaps.R:1:1', 'test-lfe.R:1:1',
      'test-list-irlba.R:1:1', 'test-list-optim.R:1:1', 'test-list-svd.R:1:1',
      'test-list-xyz.R:1:1', 'test-list.R:1:1', 'test-lmbeta-lm-beta.R:1:1',
      'test-lmodel2.R:1:1', 'test-lmtest.R:1:1', 'test-maps.R:1:1',
      'test-margins.R:1:1', 'test-mass-fitdistr.R:1:1', 'test-mass-polr.R:1:1',
      'test-mass-ridgelm.R:1:1', 'test-mass-rlm.R:1:1', 'test-mclust.R:1:1',
      'test-mediation.R:1:1', 'test-metafor.R:1:1', 'test-mfx.R:1:1',
      'test-mgcv.R:1:1', 'test-mlogit.R:1:1', 'test-muhaz.R:1:1',
      'test-multcomp.R:1:1', 'test-nnet.R:1:1', 'test-null-and-default.R:9:3',
      'test-null-and-default.R:23:3', 'test-null-and-default.R:39:3',
      'test-ordinal.R:1:1', 'test-plm.R:1:1', 'test-polca.R:1:1',
      'test-psych.R:1:1', 'test-quantreg-nlrq.R:1:1', 'test-quantreg-rq.R:1:1',
      'test-quantreg-rqs.R:1:1', 'test-robust-glmrob.R:3:3', 'test-robust.R:1:1',
      'test-robustbase.R:1:1', 'test-spdep.R:1:1', 'test-speedglm-speedglm.R:1:1',
      'test-speedglm-speedlm.R:1:1', 'test-stats-anova.R:50:3',
      'test-stats-arima.R:1:1', 'test-stats-decompose.R:1:1',
      'test-stats-factanal.R:1:1', 'test-stats-glm.R:82:3', 'test-stats-glm.R:89:3',
      'test-stats-htest.R:21:3', 'test-stats-htest.R:100:3',
      'test-stats-htest.R:134:3', 'test-stats-lm.R:36:3', 'test-stats-lm.R:148:3',
      'test-stats-mlm.R:1:1', 'test-stats-nls.R:1:1', 'test-stats-prcomp.R:42:3',
      'test-survey.R:1:1', 'test-survival-aareg.R:1:1', 'test-survival-cch.R:1:1',
      'test-survival-pyears.R:1:1', 'test-survival-survdiff.R:1:1',
      'test-survival-survexp.R:1:1', 'test-survival-survfit.R:1:1',
      'test-survival-survreg.R:1:1', 'test-systemfit.R:1:1',
      'test-utilities.R:5:3', 'test-utilities.R:25:3', 'test-utilities.R:31:3',
      'test-utilities.R:208:3', 'test-utilities.R:218:3', 'test-utilities.R:235:3',
      'test-vars.R:1:1', 'test-zoo.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-mass-negbin.R:33:3'): glance.negbin ────────────────────────────
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
     1. ├─generics::glance(fit) at test-mass-negbin.R:33:3
     2. ├─broom:::glance.negbin(fit)
     3. │ └─tibble::tibble(...)
     4. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
     5. │ └─rlang::eval_tidy(xs[[j]], mask)
     6. ├─stats::AIC(x)
     7. └─stats:::AIC.default(x)
    ── Error ('test-stats-lm.R:59:3'): glance.lm ───────────────────────────────────
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
      1. ├─generics::glance(fit) at test-stats-lm.R:59:3
      2. ├─broom:::glance.lm(fit)
      3. │ ├─base::with(...)
      4. │ └─base::with.default(...)
      5. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      6. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      7. │ └─tibble::tibble(...)
      8. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
      9. │ └─rlang::eval_tidy(xs[[j]], mask)
     10. ├─stats::AIC(x)
     11. └─stats:::AIC.default(x)
    ── Error ('test-stats-lm.R:155:3'): glance.lm returns non-NA entries with 0-intercept model (#1209) ──
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
      1. ├─generics::glance(fit) at test-stats-lm.R:155:3
      2. ├─broom:::glance.lm(fit)
      3. │ ├─base::with(...)
      4. │ └─base::with.default(...)
      5. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      6. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      7. │ └─tibble::tibble(...)
      8. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
      9. │ └─rlang::eval_tidy(xs[[j]], mask)
     10. ├─stats::AIC(x)
     11. └─stats:::AIC.default(x)
    ── Error ('test-stats-summary-lm.R:29:3'): glance.summary.lm ───────────────────
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
      1. ├─generics::glance(fit) at test-stats-summary-lm.R:29:3
      2. ├─broom:::glance.lm(fit)
      3. │ ├─base::with(...)
      4. │ └─base::with.default(...)
      5. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      6. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      7. │ └─tibble::tibble(...)
      8. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
      9. │ └─rlang::eval_tidy(xs[[j]], mask)
     10. ├─stats::AIC(x)
     11. └─stats:::AIC.default(x)

    [ FAIL 4 | WARN 0 | SKIP 99 | PASS 949 ]
    Error: Test failures
    Execution halted
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
    > # * https://testthat.r-lib.org/articles/special-files.html
    >
    > library(testthat)
    > library(broom)
    >
    > test_check("broom")

    Attaching package: 'modeldata'

    The following object is masked from 'package:datasets':

        penguins

    Multiple parameters; naming those columns ndf and ddf.
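[Editor's aside: every broom failure in this log, in both test runs, has the same shape: `glance()` builds a tibble whose columns evaluate `AIC(x)`, dispatch reaches `stats:::AIC.default`, and that call dies because no function named `ll()` is visible. A minimal base-R sketch of this class of failure, using a hypothetical function that is not broom's actual code:]

```r
# Hypothetical sketch: a function whose body calls a helper that is not
# defined anywhere. R resolves function names lazily, at call time, so the
# definition succeeds and the error only appears when the body runs --
# exactly the pattern in the backtraces above.
glance_sketch <- function(object) {
  ll(object)  # `ll()` is intentionally undefined here
}

fit <- lm(mpg ~ wt, data = mtcars)
msg <- tryCatch(glance_sketch(fit), error = conditionMessage)
msg  # contains: could not find function "ll"
```

Because name lookup happens only when the call executes, such a breakage surfaces in downstream packages' examples, vignettes, and tests all at once, as seen here.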
    [ FAIL 4 | WARN 0 | SKIP 99 | PASS 949 ]

    ══ Skipped tests (99) ══════════════════════════════════════════════════════════
    • On CRAN (99): 'test-aer.R:1:1', 'test-auc.R:1:1', 'test-bbmle.R:1:1',
      'test-betareg.R:1:1', 'test-biglm.R:1:1', 'test-bingroup.R:1:1',
      'test-boot.R:1:1', 'test-btergm.R:1:1', 'test-car.R:1:1', 'test-caret.R:1:1',
      'test-cluster.R:1:1', 'test-cmprsk.R:1:1', 'test-data-frame.R:2:3',
      'test-drc.R:1:1', 'test-emmeans.R:1:1', 'test-epiR.R:1:1', 'test-ergm.R:1:1',
      'test-fixest.R:1:1', 'test-gam.R:1:1', 'test-geepack.R:1:1',
      'test-glmnetUtils.R:1:1', 'test-gmm.R:1:1', 'test-hmisc.R:1:1',
      'test-joineRML.R:1:1', 'test-kendall.R:1:1', 'test-ks.R:1:1',
      'test-lavaan.R:1:1', 'test-leaps.R:1:1', 'test-lfe.R:1:1',
      'test-list-irlba.R:1:1', 'test-list-optim.R:1:1', 'test-list-svd.R:1:1',
      'test-list-xyz.R:1:1', 'test-list.R:1:1', 'test-lmbeta-lm-beta.R:1:1',
      'test-lmodel2.R:1:1', 'test-lmtest.R:1:1', 'test-maps.R:1:1',
      'test-margins.R:1:1', 'test-mass-fitdistr.R:1:1', 'test-mass-polr.R:1:1',
      'test-mass-ridgelm.R:1:1', 'test-mass-rlm.R:1:1', 'test-mclust.R:1:1',
      'test-mediation.R:1:1', 'test-metafor.R:1:1', 'test-mfx.R:1:1',
      'test-mgcv.R:1:1', 'test-mlogit.R:1:1', 'test-muhaz.R:1:1',
      'test-multcomp.R:1:1', 'test-nnet.R:1:1', 'test-null-and-default.R:9:3',
      'test-null-and-default.R:23:3', 'test-null-and-default.R:39:3',
      'test-ordinal.R:1:1', 'test-plm.R:1:1', 'test-polca.R:1:1',
      'test-psych.R:1:1', 'test-quantreg-nlrq.R:1:1', 'test-quantreg-rq.R:1:1',
      'test-quantreg-rqs.R:1:1', 'test-robust-glmrob.R:3:3', 'test-robust.R:1:1',
      'test-robustbase.R:1:1', 'test-spdep.R:1:1', 'test-speedglm-speedglm.R:1:1',
      'test-speedglm-speedlm.R:1:1', 'test-stats-anova.R:50:3',
      'test-stats-arima.R:1:1', 'test-stats-decompose.R:1:1',
      'test-stats-factanal.R:1:1', 'test-stats-glm.R:82:3', 'test-stats-glm.R:89:3',
      'test-stats-htest.R:21:3', 'test-stats-htest.R:100:3',
      'test-stats-htest.R:134:3', 'test-stats-lm.R:36:3', 'test-stats-lm.R:148:3',
      'test-stats-mlm.R:1:1', 'test-stats-nls.R:1:1', 'test-stats-prcomp.R:42:3',
      'test-survey.R:1:1', 'test-survival-aareg.R:1:1', 'test-survival-cch.R:1:1',
      'test-survival-pyears.R:1:1', 'test-survival-survdiff.R:1:1',
      'test-survival-survexp.R:1:1', 'test-survival-survfit.R:1:1',
      'test-survival-survreg.R:1:1', 'test-systemfit.R:1:1',
      'test-utilities.R:5:3', 'test-utilities.R:25:3', 'test-utilities.R:31:3',
      'test-utilities.R:208:3', 'test-utilities.R:218:3', 'test-utilities.R:235:3',
      'test-vars.R:1:1', 'test-zoo.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-mass-negbin.R:33:3'): glance.negbin ────────────────────────────
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
     1. ├─generics::glance(fit) at test-mass-negbin.R:33:3
     2. ├─broom:::glance.negbin(fit)
     3. │ └─tibble::tibble(...)
     4. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
     5. │ └─rlang::eval_tidy(xs[[j]], mask)
     6. ├─stats::AIC(x)
     7. └─stats:::AIC.default(x)
    ── Error ('test-stats-lm.R:59:3'): glance.lm ───────────────────────────────────
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
      1. ├─generics::glance(fit) at test-stats-lm.R:59:3
      2. ├─broom:::glance.lm(fit)
      3. │ ├─base::with(...)
      4. │ └─base::with.default(...)
      5. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      6. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      7. │ └─tibble::tibble(...)
      8. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
      9. │ └─rlang::eval_tidy(xs[[j]], mask)
     10. ├─stats::AIC(x)
     11. └─stats:::AIC.default(x)
    ── Error ('test-stats-lm.R:155:3'): glance.lm returns non-NA entries with 0-intercept model (#1209) ──
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
      1. ├─generics::glance(fit) at test-stats-lm.R:155:3
      2. ├─broom:::glance.lm(fit)
      3. │ ├─base::with(...)
      4. │ └─base::with.default(...)
      5. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      6. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      7. │ └─tibble::tibble(...)
      8. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
      9. │ └─rlang::eval_tidy(xs[[j]], mask)
     10. ├─stats::AIC(x)
     11. └─stats:::AIC.default(x)
    ── Error ('test-stats-summary-lm.R:29:3'): glance.summary.lm ───────────────────
    Error in `ll(object)`: could not find function "ll"
    Backtrace:
        ▆
      1. ├─generics::glance(fit) at test-stats-summary-lm.R:29:3
      2. ├─broom:::glance.lm(fit)
      3. │ ├─base::with(...)
      4. │ └─base::with.default(...)
      5. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      6. │ └─base::eval(substitute(expr), data, enclos = parent.frame())
      7. │ └─tibble::tibble(...)
      8. │ └─tibble:::tibble_quos(xs, .rows, .name_repair)
      9. │ └─rlang::eval_tidy(xs[[j]], mask)
     10. ├─stats::AIC(x)
     11. └─stats:::AIC.default(x)

    [ FAIL 4 | WARN 0 | SKIP 99 | PASS 949 ]
    Error: Test failures
    Execution halted

Package: did2s
Check: re-building of vignette outputs
New result: ERROR
  Error(s) in re-building vignettes:
  ...
  --- re-building ‘Two-Stage-Difference-in-Differences.Rmd’ using rmarkdown

  Quitting from Two-Stage-Difference-in-Differences.Rmd:143-153 [static]
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Error:
  ! in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 1 row.
    PROBLEM: it has 0 row instead of 1.
  ---
  Backtrace:
      ▆
   1. └─did2s::did2s(...)
   2. ├─base::suppressWarnings(summary(est$second_stage, .vcov = cov))
   3. │ └─base::withCallingHandlers(...)
   4. ├─base::summary(est$second_stage, .vcov = cov)
   5. └─fixest:::summary.fixest(est$second_stage, .vcov = cov)
   6. └─fixest:::vcov.fixest(...)
   7. └─dreamerr::check_value(vcov, "square matrix nrow(value)", .value = n_coef)
   8. └─dreamerr:::check_arg_core(...)
   9. └─dreamerr:::send_error(...)
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  Error: processing vignette 'Two-Stage-Difference-in-Differences.Rmd' failed with diagnostics:
  in vcov.fixest(object, vcov = vcov, ssc = ...:
  Value `vcov` must be a square matrix with, in this context, 1 row.
  PROBLEM: it has 0 row instead of 1.
  --- failed re-building ‘Two-Stage-Difference-in-Differences.Rmd’

  SUMMARY: processing the following file failed:
    ‘Two-Stage-Difference-in-Differences.Rmd’

  Error: Vignette re-building failed.
  Execution halted

Package: did2s
Check: tests
New result: ERROR
  Running ‘testthat.R’ [13s/14s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(did2s)
    Loading required package: fixest
    did2s (v1.0.2). For more information on the methodology, visit

    To cite did2s in publications use:

      Butts, Kyle (2021). did2s: Two-Stage Difference-in-Differences
      Following Gardner (2021). R package version 1.0.2.

    A BibTeX entry for LaTeX users is

      @Manual{,
        title = {did2s: Two-Stage Difference-in-Differences Following Gardner (2021)},
        author = {Kyle Butts},
        year = {2021},
        url = {https://github.com/kylebutts/did2s/},
      }

    >
    > test_check("did2s")
    ! curl package not installed, falling back to using `url()`
    Running Two-stage Difference-in-Differences
    - first stage formula `~ 0 | unit + year`
    - second stage formula `~ i(treat, ref = FALSE)`
    - The indicator variable that denotes when treatment is on is `treat`
    - Standard errors will be clustered by `state`
    Running Two-stage Difference-in-Differences
    - first stage formula `~ 0 | unit + year`
    - second stage formula `~ i(rel_year, ref = Inf)`
    - The indicator variable that denotes when treatment is on is `treat`
    - Standard errors will be clustered by `state`
    Running Two-stage Difference-in-Differences
    - first stage formula `~ 0 | unit + year`
    - second stage formula `~ i(treat, ref = FALSE)`
    - The indicator variable that denotes when treatment is on is `treat`
    - Standard errors will be clustered by `state`
    Running Two-stage Difference-in-Differences
    - first stage formula `~ 0 | sid + year`
    - second stage formula `~ i(post, ref = 0)`
    - The indicator variable that denotes when treatment is on is `post`
    - Standard errors will be clustered by `state`
    Running Two-stage Difference-in-Differences
    - first stage formula `~ 0 | unit + temp^year`
    - second stage formula `~ i(treat, ref = FALSE)`
    - The indicator variable that denotes when treatment is on is `treat`
    - Standard errors will be clustered by `state`
    Running Two-stage Difference-in-Differences
    - first stage formula `~ 0 | unit + year`
    - second stage formula `~ i(treat, ref = FALSE)`
    - The indicator variable that denotes when treatment is on is `treat`
    - Standard errors will be clustered by `state`
    Note these estimators rely on different underlying assumptions. See Table 2
    of `https://arxiv.org/abs/2109.05913` for an overview.
    Estimating TWFE Model
    Estimating using Gardner (2021)
    Error : in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 41 rows.
    PROBLEM: it has 0 row instead of 41.
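[Editor's aside: the error above comes from dreamerr's argument check, which, per the backtraces in this log, requires the `vcov` passed to `summary.fixest()` to be a square matrix with one row per second-stage coefficient, while the matrix actually supplied ends up empty. A minimal base-R illustration of the failing condition; the variable names and the exact check are assumptions, not did2s/dreamerr source:]

```r
# Sketch of a "square matrix with n_coef rows" validation. An empty 0 x 0
# matrix fails where, e.g., a 1 x 1 matrix is required -- mirroring
# "it has 0 row instead of 1" in the log.
n_coef <- 1                                     # number of second-stage coefficients
cov <- matrix(numeric(0), nrow = 0, ncol = 0)   # what the failing path ends up passing
is_valid <- is.matrix(cov) &&
  nrow(cov) == ncol(cov) &&
  nrow(cov) == n_coef
is_valid  # FALSE -> the check errors instead of returning
```

A valid input under the same check would be `matrix(0.25, nrow = 1, ncol = 1)` for a one-coefficient model.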
    Estimating using Callaway and Sant'Anna (2020)
    Estimating using Sun and Abraham (2020)
    Estimating using Borusyak, Jaravel, Spiess (2021)
    Estimating using Roth and Sant'Anna (2021)
    Note these estimators rely on different underlying assumptions. See Table 2
    of `https://arxiv.org/abs/2109.05913` for an overview.
    Estimating TWFE Model
    Estimating using Gardner (2021)
    Error : in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 41 rows.
    PROBLEM: it has 0 row instead of 41.
    Estimating using Callaway and Sant'Anna (2020)
    Estimating using Sun and Abraham (2020)
    Estimating using Borusyak, Jaravel, Spiess (2021)
    Estimating using Roth and Sant'Anna (2021)
    Note these estimators rely on different underlying assumptions. See Table 2
    of `https://arxiv.org/abs/2109.05913` for an overview.
    Estimating TWFE Model
    Estimating using Gardner (2021)
    Error : in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 41 rows.
    PROBLEM: it has 0 row instead of 41.
    Estimating using Callaway and Sant'Anna (2020)
    Estimating using Sun and Abraham (2020)
    Estimating using Borusyak, Jaravel, Spiess (2021)
    Estimating using Roth and Sant'Anna (2021)
    Note these estimators rely on different underlying assumptions. See Table 2
    of `https://arxiv.org/abs/2109.05913` for an overview.

    [ FAIL 6 | WARN 3 | SKIP 0 | PASS 5 ]

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-did2s.R:15:2'): estimation runs ──────────────────────────────
    `did2s(...)` threw an unexpected error.
    Message: in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 1 row.
    PROBLEM: it has 0 row instead of 1.
    Class: simpleError/error/condition
    Backtrace:
        ▆
      1. ├─testthat::expect_error(...) at test-did2s.R:15:9
      2. │ └─testthat:::expect_condition_matching(...)
      3. │ └─testthat:::quasi_capture(...)
      4. │ ├─testthat (local) .capture(...)
      5. │ │ └─base::withCallingHandlers(...)
      6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
      7. └─did2s::did2s(...)
      8. ├─base::suppressWarnings(summary(est$second_stage, .vcov = cov))
      9. │ └─base::withCallingHandlers(...)
     10. ├─base::summary(est$second_stage, .vcov = cov)
     11. └─fixest:::summary.fixest(est$second_stage, .vcov = cov)
     12. └─fixest:::vcov.fixest(...)
     13. └─dreamerr::check_value(vcov, "square matrix nrow(value)", .value = n_coef)
     14. └─dreamerr:::check_arg_core(...)
     15. └─dreamerr:::send_error(...)
    ── Failure ('test-did2s.R:21:2'): estimation runs ──────────────────────────────
    `did2s(...)` threw an unexpected error.
    Message: in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 41 rows.
    PROBLEM: it has 0 row instead of 41.
    Class: simpleError/error/condition
    Backtrace:
        ▆
      1. ├─testthat::expect_error(...) at test-did2s.R:21:9
      2. │ └─testthat:::expect_condition_matching(...)
      3. │ └─testthat:::quasi_capture(...)
      4. │ ├─testthat (local) .capture(...)
      5. │ │ └─base::withCallingHandlers(...)
      6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
      7. └─did2s::did2s(...)
      8. ├─base::suppressWarnings(summary(est$second_stage, .vcov = cov))
      9. │ └─base::withCallingHandlers(...)
     10. ├─base::summary(est$second_stage, .vcov = cov)
     11. └─fixest:::summary.fixest(est$second_stage, .vcov = cov)
     12. └─fixest:::vcov.fixest(...)
     13. └─dreamerr::check_value(vcov, "square matrix nrow(value)", .value = n_coef)
     14. └─dreamerr:::check_arg_core(...)
     15. └─dreamerr:::send_error(...)
    ── Failure ('test-did2s.R:27:2'): estimation runs ──────────────────────────────
    `did2s(...)` threw an unexpected error.
    Message: in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 1 row.
    PROBLEM: it has 0 row instead of 1.
    Class: simpleError/error/condition
    Backtrace:
        ▆
      1. ├─testthat::expect_error(...) at test-did2s.R:27:9
      2. │ └─testthat:::expect_condition_matching(...)
      3. │ └─testthat:::quasi_capture(...)
      4. │ ├─testthat (local) .capture(...)
      5. │ │ └─base::withCallingHandlers(...)
      6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
      7. └─did2s::did2s(...)
      8. ├─base::suppressWarnings(summary(est$second_stage, .vcov = cov))
      9. │ └─base::withCallingHandlers(...)
     10. ├─base::summary(est$second_stage, .vcov = cov)
     11. └─fixest:::summary.fixest(est$second_stage, .vcov = cov)
     12. └─fixest:::vcov.fixest(...)
     13. └─dreamerr::check_value(vcov, "square matrix nrow(value)", .value = n_coef)
     14. └─dreamerr:::check_arg_core(...)
     15. └─dreamerr:::send_error(...)
    ── Failure ('test-did2s.R:33:2'): estimation runs ──────────────────────────────
    `did2s(...)` threw an unexpected error.
    Message: in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 1 row.
    PROBLEM: it has 0 row instead of 1.
    Class: simpleError/error/condition
    Backtrace:
        ▆
      1. ├─testthat::expect_error(...) at test-did2s.R:33:9
      2. │ └─testthat:::expect_condition_matching(...)
      3. │ └─testthat:::quasi_capture(...)
      4. │ ├─testthat (local) .capture(...)
      5. │ │ └─base::withCallingHandlers(...)
      6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
      7. └─did2s::did2s(...)
      8. ├─base::suppressWarnings(summary(est$second_stage, .vcov = cov))
      9. │ └─base::withCallingHandlers(...)
     10. ├─base::summary(est$second_stage, .vcov = cov)
     11. └─fixest:::summary.fixest(est$second_stage, .vcov = cov)
     12. └─fixest:::vcov.fixest(...)
     13. └─dreamerr::check_value(vcov, "square matrix nrow(value)", .value = n_coef)
     14. └─dreamerr:::check_arg_core(...)
     15. └─dreamerr:::send_error(...)
    ── Failure ('test-did2s.R:41:2'): estimation runs ──────────────────────────────
    `did2s(...)` threw an unexpected error.
    Message: in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 1 row.
    PROBLEM: it has 0 row instead of 1.
    Class: simpleError/error/condition
    Backtrace:
        ▆
      1. ├─testthat::expect_error(...) at test-did2s.R:41:9
      2. │ └─testthat:::expect_condition_matching(...)
      3. │ └─testthat:::quasi_capture(...)
      4. │ ├─testthat (local) .capture(...)
      5. │ │ └─base::withCallingHandlers(...)
      6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
      7. └─did2s::did2s(...)
      8. ├─base::suppressWarnings(summary(est$second_stage, .vcov = cov))
      9. │ └─base::withCallingHandlers(...)
     10. ├─base::summary(est$second_stage, .vcov = cov)
     11. └─fixest:::summary.fixest(est$second_stage, .vcov = cov)
     12. └─fixest:::vcov.fixest(...)
     13. └─dreamerr::check_value(vcov, "square matrix nrow(value)", .value = n_coef)
     14. └─dreamerr:::check_arg_core(...)
     15. └─dreamerr:::send_error(...)
    ── Error ('test-did2s.R:49:3'): estimates match previous runs ──────────────────
    Error: in vcov.fixest(object, vcov = vcov, ssc = ...:
    Value `vcov` must be a square matrix with, in this context, 1 row.
    PROBLEM: it has 0 row instead of 1.
    Backtrace:
        ▆
     1. └─did2s::did2s(...) at test-did2s.R:49:3
     2. ├─base::suppressWarnings(summary(est$second_stage, .vcov = cov))
     3. │ └─base::withCallingHandlers(...)
     4. ├─base::summary(est$second_stage, .vcov = cov)
     5. └─fixest:::summary.fixest(est$second_stage, .vcov = cov)
     6. └─fixest:::vcov.fixest(...)
     7. └─dreamerr::check_value(vcov, "square matrix nrow(value)", .value = n_coef)
     8. └─dreamerr:::check_arg_core(...)
     9. └─dreamerr:::send_error(...)

    [ FAIL 6 | WARN 3 | SKIP 0 | PASS 5 ]
    Error: Test failures
    Execution halted

Package: ggfixest
Check: tests
New result: ERROR
  Running ‘tinytest.R’ [5s/5s]
  Running the tests in ‘tests/tinytest.R’ failed.
Complete output: > ## Throttle CPU threads if R CMD check (for CRAN) > > if (any(grepl("_R_CHECK", names(Sys.getenv()), fixed = TRUE))) { + # fixest + if (requireNamespace("fixest", quietly = TRUE)) { + library(fixest) + setFixest_nthreads(1) + } + + # data.table + if (requireNamespace("data.table", quietly = TRUE)) { + library(data.table) + setDTthreads(1) + } + + # magick + if (requireNamespace("magick", quietly = TRUE)) { + library(magick) + magick:::magick_threads(1) + } + } Linking to ImageMagick 7.1.1.43 Enabled features: fontconfig, freetype, fftw, heic, lcms, raw, webp, x11 Disabled features: cairo, ghostscript, pango, rsvg Using 2 threads [1] 1 > > > # Run tinytest suite > > if ( requireNamespace("tinytest", quietly=TRUE) ){ + + tinytest::test_package("ggfixest") + + } Loading required package: ggplot2 test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 0 tests test_aggr_es.R................ 48 tests 33 fails 0.8s test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 
0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests test_fixest_multi.R........... 0 tests 0.4s test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests test_ggcoefplot.R............. 0 tests 0.5s test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 
0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 0 tests test_ggiplot.R................ 
test_ggiplot.R................ 0 tests 1.3s
test_iplot_data.R............. 22 tests 4 fails 52ms
test_nthreads.R............... 0 tests

----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_post[[col]], aggr_post_known[[col]], tolerance = tol)
 diff| Expected '0.859857566528126', got '0.861259873207569'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_cum[[col]], aggr_cum_known[[col]], tolerance = tol)
 diff| Expected '4.29928782137735', got '4.3062994999405'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_pre[[col]], aggr_pre_known[[col]], tolerance = tol)
 diff| Expected '0.856196388205688', got '0.878957693450153'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_both[[col]], aggr_both_known[[col]], tolerance = tol)
 diff| Mean relative difference: 0.01408092
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_diff[[col]], aggr_diff_known[[col]], tolerance = tol)
 diff| Expected '0.47207477585529', got '0.527379333034877'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_rhs1[[col]], aggr_rhs1_known[[col]], tolerance = tol)
 diff| Expected '0.856196388205688', got '0.878957693450153'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_post[[col]], aggr_post_known[[col]], tolerance = tol)
 diff| Expected '4.5432572498307', got '4.53585990068435'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_cum[[col]], aggr_cum_known[[col]], tolerance = tol)
 diff| Expected '4.54325726173313', got '4.5358597596436'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_pre[[col]], aggr_pre_known[[col]], tolerance = tol)
 diff| Expected '-1.37803746371061', got '-1.34235209274958'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_both[[col]], aggr_both_known[[col]], tolerance = tol)
 diff| Mean relative difference: 0.007275895
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_diff[[col]], aggr_diff_known[[col]], tolerance = tol)
 diff| Expected '10.7746168241598', got '9.64471776495549'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_rhs1[[col]], aggr_rhs1_known[[col]], tolerance = tol)
 diff| Expected '-0.210081123582062', got '-0.204640906589159'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_pre[[col]], aggr_pre_known[[col]], tolerance = tol)
 diff| Expected '0.168191721635732', got '0.179481860308907'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_both[[col]], aggr_both_known[[col]], tolerance = tol)
 diff| Mean relative difference: 0.06712557
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_rhs1[[col]], aggr_rhs1_known[[col]], tolerance = tol)
 diff| Expected '0.833604357936545', got '0.83785269284703'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_post[[col]], aggr_post_known[[col]], tolerance = tol)
 diff| Expected '17.461901741925', got '17.4112907787669'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_cum[[col]], aggr_cum_known[[col]], tolerance = tol)
 diff| Expected '17.4619018234198', got '17.4112898145223'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_pre[[col]], aggr_pre_known[[col]], tolerance = tol)
 diff| Expected '2.57182139671994', got '2.47809005231426'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_both[[col]], aggr_both_known[[col]], tolerance = tol)
 diff| Mean relative difference: 0.007204967
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_diff[[col]], aggr_diff_known[[col]], tolerance = tol)
 diff| Expected '87.5104245518892', got '70.7107655185095'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_rhs1[[col]], aggr_rhs1_known[[col]], tolerance = tol)
 diff| Expected '0.262565275096028', got '0.255231476237238'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_post[[col]], aggr_post_known[[col]], tolerance = tol)
 diff| Expected '2.22126426072131', got '2.21851579013433'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_cum[[col]], aggr_cum_known[[col]], tolerance = tol)
 diff| Expected '11.1063213256822', got '11.0925786882273'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_pre[[col]], aggr_pre_known[[col]], tolerance = tol)
 diff| Expected '-2.85798478381758', got '-2.90259612233785'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_both[[col]], aggr_both_known[[col]], tolerance = tol)
 diff| Mean relative difference: 0.009324175
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_diff[[col]], aggr_diff_known[[col]], tolerance = tol)
 diff| Expected '4.16117526350566', got '4.05278032325274'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_rhs1[[col]], aggr_rhs1_known[[col]], tolerance = tol)
 diff| Expected '-1.85798478381758', got '-1.90259612233785'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_post[[col]], aggr_post_known[[col]], tolerance = tol)
 diff| Expected '5.59184398518008', got '5.59459245576706'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_cum[[col]], aggr_cum_known[[col]], tolerance = tol)
 diff| Expected '27.9592199038248', got '27.9729625412797'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_pre[[col]], aggr_pre_known[[col]], tolerance = tol)
 diff| Expected '0.498243385335269', got '0.542854723855542'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_both[[col]], aggr_both_known[[col]], tolerance = tol)
 diff| Mean relative difference: 0.00777654
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_diff[[col]], aggr_diff_known[[col]], tolerance = tol)
 diff| Expected '6.01167438087804', got '6.12006932113096'
----- FAILED[data]: test_aggr_es.R<96--104>
 call| expect_equivalent(aggr_rhs1[[col]], aggr_rhs1_known[[col]], tolerance = tol)
 diff| Expected '1.49824338533527', got '1.54285472385554'
----- FAILED[data]: test_iplot_data.R<69--73>
 call| expect_equivalent(iplot_data_est[[col]], iplot_data_est_known[[col]],
 call| --> tolerance = tol)
 diff| Mean relative difference: 0.0226864
----- FAILED[data]: test_iplot_data.R<69--73>
 call| expect_equivalent(iplot_data_est_log[[col]], iplot_data_est_log_known[[col]],
 call| --> tolerance = tol)
 diff| Mean relative difference: 0.2264364
----- FAILED[data]: test_iplot_data.R<69--73>
 call| expect_equivalent(iplot_data_est[[col]], iplot_data_est_known[[col]],
 call| --> tolerance = tol)
 diff| Mean relative difference: 0.01597973
----- FAILED[data]: test_iplot_data.R<69--73>
 call| expect_equivalent(iplot_data_est_log[[col]], iplot_data_est_log_known[[col]],
 call| --> tolerance = tol)
 diff| Mean relative difference: 0.09714743
Error: 37 out of 70 tests failed
Execution halted

Package: parameters
Check: tests
New result: ERROR
  Running ‘testthat.R’ [86s/44s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > library(parameters)
  > library(testthat)
  >
  > test_check("parameters")
  Starting 2 test processes
  [ FAIL 3 | WARN 0 | SKIP 125 | PASS 704 ]

  ══ Skipped tests (125) ═════════════════════════════════════════════════════════
  • On CRAN (115): 'test-GLMMadaptive.R:1:1', 'test-averaging.R:1:1', 'test-backticks.R:1:1', 'test-bootstrap_emmeans.R:1:1', 'test-bootstrap_parameters.R:1:1', 'test-brms.R:1:1', 'test-compare_parameters.R:91:7', 'test-compare_parameters.R:95:5', 'test-complete_separation.R:14:5', 'test-complete_separation.R:24:5', 'test-complete_separation.R:35:5', 'test-coxph.R:79:5', 'test-efa.R:1:1', 'test-emmGrid-df_colname.R:1:1', 'test-equivalence_test.R:10:3', 'test-equivalence_test.R:18:3', 'test-equivalence_test.R:22:3', 'test-equivalence_test.R:112:3', 'test-factor_analysis.R:2:3', 'test-factor_analysis.R:124:3', 'test-format_model_parameters2.R:2:3', 'test-gam.R:30:1', 'test-get_scores.R:1:1', 'test-glmer.R:1:1', 'test-glmmTMB-2.R:1:1', 'test-glmmTMB-profile_CI.R:2:3', 'test-glmmTMB.R:8:1', 'test-group_level_total.R:2:1', 'test-helper.R:1:1', 'test-ivreg.R:54:3', 'test-include_reference.R:16:3', 'test-include_reference.R:69:3', 'test-include_reference.R:121:3', 'test-lmerTest.R:1:1', 'test-mipo.R:19:3', 'test-mipo.R:33:3', 'test-mmrm.R:1:1', 'test-model_parameters.anova.R:1:1', 'test-model_parameters.aov.R:1:1', 'test-model_parameters.aov_es_ci.R:183:3', 'test-model_parameters.aov_es_ci.R:294:3', 'test-model_parameters.aov_es_ci.R:344:3', 'test-model_parameters.aov_es_ci.R:397:3', 'test-model_parameters.bracl.R:5:1', 'test-model_parameters.cgam.R:1:1', 'test-model_parameters.coxme.R:1:1', 'test-model_parameters.epi2x2.R:1:1', 'test-marginaleffects.R:176:3', 'test-marginaleffects.R:199:3', 'test-model_parameters.fixest_multi.R:3:1', 'test-model_parameters.fixest.R:2:3', 'test-model_parameters.fixest.R:77:3', 'test-model_parameters.fixest.R:147:5', 'test-model_parameters.glmgee.R:1:1', 'test-model_parameters.logistf.R:1:1', 'test-model_parameters.logitr.R:1:1', 'test-model_parameters.mclogit.R:5:1', 'test-model_parameters.glm.R:40:3', 'test-model_parameters.glm.R:76:3', 'test-model_parameters.mediate.R:32:3', 'test-model_parameters.mixed.R:2:1', 'test-model_parameters.nnet.R:5:1', 'test-model_parameters.vgam.R:3:1', 'test-model_parameters_df.R:1:1', 'test-model_parameters_ordinal.R:1:1', 'test-model_parameters_random_pars.R:1:1', 'test-model_parameters_std.R:1:1', 'test-model_parameters_std_mixed.R:3:1', 'test-n_factors.R:10:3', 'test-n_factors.R:26:3', 'test-n_factors.R:76:3', 'test-p_adjust.R:1:1', 'test-p_direction.R:1:1', 'test-p_significance.R:1:1', 'test-p_value.R:14:1', 'test-panelr.R:1:1', 'test-pipe.R:1:1', 'test-pca.R:66:3', 'test-polr.R:2:1', 'test-plm.R:111:3', 'test-posterior.R:2:1', 'test-pool_parameters.R:11:3', 'test-pool_parameters.R:32:1', 'test-print_AER_labels.R:11:5', 'test-printing-stan.R:2:1', 'test-printing.R:1:1', 'test-pretty_names.R:65:5', 'test-pretty_names.R:82:7', 'test-quantreg.R:1:1', 'test-random_effects_ci.R:4:1', 'test-robust.R:2:1', 'test-rstanarm.R:2:1', 'test-sampleSelection.R:2:1', 'test-serp.R:16:5', 'test-printing2.R:15:7', 'test-printing2.R:22:7', 'test-printing2.R:27:7', 'test-printing2.R:32:7', 'test-printing2.R:37:7', 'test-printing2.R:49:7', 'test-printing2.R:91:7', 'test-printing2.R:127:7', 'test-svylme.R:1:1', 'test-visualisation_recipe.R:7:3', 'test-weightit.R:23:3', 'test-weightit.R:43:3', 'test-wrs2.R:58:3', 'test-standardize_parameters.R:31:3', 'test-standardize_parameters.R:36:3', 'test-standardize_parameters.R:61:3', 'test-standardize_parameters.R:173:3', 'test-standardize_parameters.R:298:3', 'test-standardize_parameters.R:333:3', 'test-standardize_parameters.R:426:3', 'test-standardize_parameters.R:516:3'
  • On Linux (5): 'test-model_parameters.BFBayesFactor.R:1:1', 'test-nestedLogit.R:78:3', 'test-random_effects_ci-glmmTMB.R:3:1', 'test-simulate_model.R:1:1', 'test-simulate_parameters.R:1:1'
  • TODO: check this test locally, fails on CI, probably due to scoping issues? (1): 'test-marginaleffects.R:280:3'
  • TODO: fix this test (1): 'test-model_parameters.lqmm.R:40:3'
  • TODO: this one actually is not correct. (1): 'test-model_parameters_robust.R:127:3'
  • empty test (2): 'test-wrs2.R:69:1', 'test-wrs2.R:81:1'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Error ('test-model_parameters.efa_cfa.R:49:3'): efa-cfa ─────────────────────
  Error in `UseMethod("logLik")`: no applicable method for 'logLik' applied to an object of class "lavaan"
  Backtrace:
      ▆
   1. ├─lavaan::anova(m1, lavaan::cfa(model2, data = attitude)) at test-model_parameters.efa_cfa.R:49:3
   2. └─lavaan::anova(m1, lavaan::cfa(model2, data = attitude))
   3.   └─lavaan::lavTestLRT(object = object, ..., model.names = tmp.names)
   4.     └─base::sapply(mods, FUN = AIC)
   5.       └─base::lapply(X = X, FUN = FUN, ...)
   6.         ├─stats (local) FUN(X[[i]], ...)
   7.         ├─stats (local) FUN(X[[i]], ...)
   8.         └─stats:::AIC.default(X[[i]], ...)
   9.           └─stats (local) ll(object)
  ── Failure ('test-model_parameters.fixest.R:113:3'): robust standard errors ────
  `standard_error(mod, vcov = "HC3")` did not throw the expected error.
  ── Failure ('test-model_parameters.fixest.R:114:3'): robust standard errors ────
  `parameters(mod, vcov = "HC3")` did not throw the expected error.

  [ FAIL 3 | WARN 0 | SKIP 125 | PASS 704 ]
  Error: Test failures
  Execution halted

Package: summclust
Check: tests
New result: ERROR
  Running ‘testthat.R’ [16s/17s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > library(testthat)
  > library(summclust)
  >
  > test_check("summclust")
  Loading required namespace: fabricatr

  summclust.fixest(obj = feols_fit, cluster = ~group_id1, params = c("treatment", "log_income"))
  Number of observations: 95 97 99 104 98 93 116 102 100 96
  Number of clusters: 7

                     coef      tstat          se     p_val   conf_int_l  conf_int_u
  treatment   0.014634446  1.5882634 0.009214118 0.1466894 -0.006209337 0.035478230
  log_income -0.001417457 -0.4237128 0.003345325 0.6817219 -0.008985108 0.006150194

                   N_G  leverage partial-leverage-treatment
  Min.     93.00000000 0.1414602                 0.09313171
  1st Qu.  96.25000000 0.1655626                 0.09505736
  Median   98.50000000 0.1969216                 0.09877701
  Mean    100.00000000 0.2000000                 0.10000000
  3rd Qu. 101.50000000 0.2151630                 0.10266611
  Max.    116.00000000 0.3296761                 0.11516014
  coefvar   0.06497863 0.2705771                 0.06531733

          partial-leverage-log_income beta-treatment beta-log_income
  Min.                     0.04745902    0.009432553   -4.418022e-03
  1st Qu.                  0.06474135    0.012721659   -1.545444e-03
  Median                   0.08748224    0.014827927   -1.478700e-03
  Mean                     0.10000000    0.014603939   -1.467387e-03
  3rd Qu.                  0.11591556    0.016141665   -9.697266e-04
  Max.                     0.22874955    0.020724656    4.721959e-05
  coefvar                  0.52899992    0.221676594    8.002307e-01

  Loading required namespace: ggplot2
  Loading required namespace: latex2exp
  Loading required package: zoo

  Attaching package: 'zoo'
  The following objects are masked from 'package:base':
      as.Date, as.Date.numeric

  downloading the 'nlswork' dataset.
  downloading the 'nlswork' dataset.
  NOTE: 1/1/0/0 fixed-effect singletons were removed (2 observations).
  NOTE: 1/1/0/0 fixed-effect singletons were removed (2 observations).
  NOTE: 1 observation removed because of NA values (vcov: 1).
  NOTE: 1 observation removed because of NA values (vcov: 1).
  NOTE: 1 observation removed because of NA values (RHS: 1).

  [ FAIL 3 | WARN 0 | SKIP 6 | PASS 111 ]

  ══ Skipped tests (6) ═══════════════════════════════════════════════════════════
  • On CRAN (2): 'test-r-vs-stata-2.R:11:3', 'test-r-vs-stata.R:11:3'
  • packageVersion("sandwich") != "3.1.0" is TRUE (4): 'test-sandwich-vcovJK.R:7:3', 'test-sandwich-vcovJK.R:188:3', 'test-sandwich-vcovJK.R:368:3', 'test-sandwich-vcovJK.R:561:3'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-r-vs-stata.R:215:3'): test against stata - leverage, fixef absorb ──
  round(max(unlist(summclust_res$leverage_g)), 6) (`actual`) not equal to 20.011074 (`expected`).
    `actual`: 19.0
    `expected`: 20.0
  ── Failure ('test-r-vs-stata.R:221:3'): test against stata - leverage, fixef absorb ──
  round(mean(unlist(summclust_res$leverage_g)), 6) (`actual`) not equal to 5.333333 (`expected`).
    `actual`: 5.17
    `expected`: 5.33
  ── Failure ('test-r-vs-stata.R:269:3'): test against stata - leverage, fixef absorb ──
  round(summclust_res$coef_var_leverage_g, 6) (`actual`) not equal to 1.155829 (`expected`).
    `actual`: 1.138
    `expected`: 1.156

  [ FAIL 3 | WARN 0 | SKIP 6 | PASS 111 ]
  Error: Test failures
  Execution halted
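Note on the logLik-related errors: the broom example failure (`could not find function "ll"`) and the parameters/lavaan failure (`no applicable method for 'logLik'`) both surface inside the same dispatch chain, where `AIC()` falls through to `stats:::AIC.default()`, which evaluates the model's log-likelihood via `logLik()` dispatch. A minimal sketch of that chain; the `no_loglik_method` class here is hypothetical, purely to illustrate what happens when no `logLik()` method exists for an object (as reported for the "lavaan" class above):

```r
# AIC() is an S3 generic. For a class without its own AIC method it reaches
# stats:::AIC.default(), which computes -2*logLik + k*df and therefore needs
# a working logLik() method for the object's class.
m <- lm(mpg ~ wt, data = mtcars)
AIC(m)  # works: the stats package provides logLik.lm()

# An object whose class has no logLik() method fails inside AIC(), mirroring
# the "no applicable method for 'logLik'" error in the parameters test suite:
x <- structure(list(), class = "no_loglik_method")  # hypothetical class
tryCatch(AIC(x), error = function(e) conditionMessage(e))
```

This is why a change in how `logLik()` is looked up inside `AIC.default()` (the `ll(object)` call in the backtrace) breaks every downstream package that reaches `AIC()` on a fitted model.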
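Note on the tolerance failures: the ggfixest and summclust drifts above are small (mean relative differences mostly below 1e-2) but still exceed the test tolerances. tinytest's `expect_equivalent()` is documented as an `all.equal()` comparison that ignores attributes, so whether a given drift passes depends on the `tolerance` argument. A sketch using the first expected/got pair from `test_aggr_es.R`; the two `tolerance` values are illustrative, not the ones the test suite actually uses:

```r
expected <- 0.859857566528126  # value stored in the test suite
actual   <- 0.861259873207569  # value produced under the new package versions

# The relative difference is about 0.16%:
abs(actual - expected) / abs(expected)

# A tight tolerance rejects the drift, a looser one accepts it:
isTRUE(all.equal(expected, actual, tolerance = 1e-6, check.attributes = FALSE))  # FALSE
isTRUE(all.equal(expected, actual, tolerance = 1e-2, check.attributes = FALSE))  # TRUE
```

In other words, these are numerical-drift failures against stored reference values, not crashes; whether they indicate a real regression depends on whether the stored values or the new computation is taken as ground truth.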