Package: CodelistGenerator
Check: tests
New result: ERROR
  Running ‘testthat.R’ [226s/112s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/tests.html
  > # * https://testthat.r-lib.org/reference/test_package.html#special-files
  >
  > library(testthat)
  > library(CodelistGenerator)
  >
  > test_check("CodelistGenerator")
  Starting 2 test processes
  [ FAIL 1 | WARN 444 | SKIP 22 | PASS 402 ]

  ══ Skipped tests (22) ══════════════════════════════════════════════════════════
  • On CRAN (13): 'test-drugCodes.R:123:3', 'test-drugCodes.R:443:3',
    'test-helperFunctions.R:2:3', 'test-helperFunctions.R:31:3',
    'test-helperFunctions.R:46:3', 'test-helperFunctions.R:61:3',
    'test-getCandidateCodes.R:310:3', 'test-summariseCodeUse.R:2:1',
    'test-summariseCodeUse.R:295:3', 'test-tableAchillesCodeUse.R:2:3',
    'test-tableAchillesCodeUse.R:65:3', 'test-tableCodeUse.R:2:3',
    'test-tableCodeUse.R:96:3'
  • Sys.getenv("CDM5_POSTGRESQL_DBNAME") == "" is TRUE (1): 'test-dbms.R:306:3'
  • Sys.getenv("CDM5_REDSHIFT_DBNAME") == "" is TRUE (4): 'test-codesFrom.R:131:3',
    'test-dbms.R:5:3', 'test-findUnmappedCodes.R:3:3', 'test-summariseCodeUse.R:492:3'
  • Sys.getenv("CDM5_SQL_SERVER_SERVER") == "" is TRUE (2): 'test-codesInUse.R:36:3',
    'test-dbms.R:458:3'
  • Sys.getenv("SNOWFLAKE_SERVER") == "" is TRUE (1): 'test-dbms.R:168:3'
  • Sys.getenv("darwinDbDatabaseServer") == "" is TRUE (1): 'test-synthea_sql_server.R:2:3'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-summariseAchillesCodeUse.R:28:3'): achilles code use ─────────
  all(...) is not TRUE

  `actual`:   FALSE
  `expected`: TRUE

  [ FAIL 1 | WARN 444 | SKIP 22 | PASS 402 ]
  Error: Test failures
  Execution halted

Package: CohortCharacteristics
Check: tests
New result: ERROR
  Running ‘spelling.R’ [0s/0s]
  Running ‘testthat.R’ [302s/183s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/tests.html
  > # * https://testthat.r-lib.org/reference/test_package.html#special-files
  >
  > library(testthat)
  > library(CohortCharacteristics)
  >
  > test_check("CohortCharacteristics")
  Starting 2 test processes
  [ FAIL 1 | WARN 0 | SKIP 15 | PASS 161 ]

  ══ Skipped tests (15) ══════════════════════════════════════════════════════════
  • On CRAN (15): 'test-plotCharacteristics.R:2:3', 'test-plotCharacteristics.R:136:3',
    'test-plotCohortOverlap.R:2:3', 'test-plotCohortTiming.R:2:3',
    'test-plotCohortTiming.R:94:3', 'test-plotLargeScaleCharacteristics.R:3:3',
    'test-summariseCohortCount.R:2:3', 'test-summariseLargeScaleCharacteristics.R:2:3',
    'test-tableCharacteristics.R:2:3', 'test-tableCharacteristics.R:123:3',
    'test-tableCohortOverlap.R:2:3', 'test-tableCohortTiming.R:2:3',
    'test-tableLargeScaleCharacteristics.R:2:3',
    'test-summariseCharacteristics.R:1162:3', 'test-summariseCharacteristics.R:1392:3'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-summariseCohortTiming.R:51:3'): summariseCohortTiming ────────
  omopgenerics::settings(timing1)$restrict_to_first_entry is not TRUE

  `actual` is a character vector ('TRUE')
  `expected` is a logical vector (TRUE)

  [ FAIL 1 | WARN 0 | SKIP 15 | PASS 161 ]
  Error: Test failures
  Execution halted

Package: CohortConstructor
Check: re-building of vignette outputs
New result: ERROR
Error(s) in re-building vignettes:
  ...
--- re-building ‘a00_introduction.Rmd’ using rmarkdown
--- finished re-building ‘a00_introduction.Rmd’

--- re-building ‘a01_building_base_cohorts.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
--- finished re-building ‘a01_building_base_cohorts.Rmd’

--- re-building ‘a02_cohort_table_requirements.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
** Processing: /home/hornik/tmp/CRAN_recheck/CohortConstructor.Rcheck/vign_test/CohortConstructor/vignettes/a02_cohort_table_requirements_files/figure-html/unnamed-chunk-15-1.png
288x288 pixels, 8 bits/pixel, 255 colors in palette
Reducing image to 8 bits/pixel, grayscale
Input IDAT size = 3779 bytes
Input file size = 4634 bytes
Trying:
  zc = 9  zm = 8  zs = 0  f = 0   IDAT size = 3377
  zc = 9  zm = 8  zs = 1  f = 0
  zc = 1  zm = 8  zs = 2  f = 0
  zc = 9  zm = 8  zs = 3  f = 0
  zc = 9  zm = 8  zs = 0  f = 5
  zc = 9  zm = 8  zs = 1  f = 5
  zc = 1  zm = 8  zs = 2  f = 5
  zc = 9  zm = 8  zs = 3  f = 5
Selecting parameters:
  zc = 9  zm = 8  zs = 0  f = 0   IDAT size = 3377
Output IDAT size = 3377 bytes (402 bytes decrease)
Output file size = 3455 bytes (1179 bytes = 25.44% decrease)
** Processing: /home/hornik/tmp/CRAN_recheck/CohortConstructor.Rcheck/vign_test/CohortConstructor/vignettes/a02_cohort_table_requirements_files/figure-html/unnamed-chunk-16-1.png
288x288 pixels, 8 bits/pixel, 252 colors in palette
Reducing image to 8 bits/pixel, grayscale
Input IDAT size = 3614 bytes
Input file size = 4460 bytes
Trying:
  zc = 9  zm = 8  zs = 0  f = 0   IDAT size = 3284
  zc = 9  zm = 8  zs = 1  f = 0
  zc = 1  zm = 8  zs = 2  f = 0
  zc = 9  zm = 8  zs = 3  f = 0
  zc = 9  zm = 8  zs = 0  f = 5
  zc = 9  zm = 8  zs = 1  f = 5
  zc = 1  zm = 8  zs = 2  f = 5
  zc = 9  zm = 8  zs = 3  f = 5
Selecting parameters:
  zc = 9  zm = 8  zs = 0  f = 0   IDAT size = 3284
Output IDAT size = 3284 bytes (330 bytes decrease)
Output file size = 3362 bytes (1098 bytes = 24.62% decrease)
--- finished re-building ‘a02_cohort_table_requirements.Rmd’

--- re-building ‘a03_require_demographics.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
--- finished re-building ‘a03_require_demographics.Rmd’

--- re-building ‘a04_require_intersections.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
--- finished re-building ‘a04_require_intersections.Rmd’

--- re-building ‘a05_update_cohort_start_end.Rmd’ using rmarkdown
--- finished re-building ‘a05_update_cohort_start_end.Rmd’

--- re-building ‘a06_concatanate_cohorts.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
--- finished re-building ‘a06_concatanate_cohorts.Rmd’

--- re-building ‘a07_filter_cohorts.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
--- finished re-building ‘a07_filter_cohorts.Rmd’

--- re-building ‘a08_split_cohorts.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
--- finished re-building ‘a08_split_cohorts.Rmd’

--- re-building ‘a09_combine_cohorts.Rmd’ using rmarkdown
trying URL 'https://example-data.ohdsi.dev/GiBleed.zip'
Content type 'application/zip' length 6754786 bytes (6.4 MB)
==================================================
downloaded 6.4 MB
--- finished re-building ‘a09_combine_cohorts.Rmd’

--- re-building ‘a10_match_cohorts.Rmd’ using rmarkdown
--- finished re-building ‘a10_match_cohorts.Rmd’

--- re-building ‘a11_benchmark.Rmd’ using rmarkdown
Quitting from lines 124-137 [unnamed-chunk-5] (a11_benchmark.Rmd)
Error: processing vignette 'a11_benchmark.Rmd' failed with diagnostics:
The size of `nm` (2) must be compatible with the size of `x` (0).
--- failed re-building ‘a11_benchmark.Rmd’

SUMMARY: processing the following file failed:
  ‘a11_benchmark.Rmd’

Error: Vignette re-building failed.
Execution halted

Package: CohortSurvival
Check: tests
New result: ERROR
  Running ‘testthat.R’ [253s/178s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/tests.html
  > # * https://testthat.r-lib.org/reference/test_package.html#special-files
  >
  > library(testthat)
  > library(CohortSurvival)
  >
  > test_check("CohortSurvival")
  Starting 2 test processes
  [ FAIL 2 | WARN 56 | SKIP 48 | PASS 33 ]

  ══ Skipped tests (48) ══════════════════════════════════════════════════════════
  • On CRAN (48): 'test-deathDiagnostics.R:2:3', 'test-deathDiagnostics.R:116:3',
    'test-addCohortSurvival.R:71:3', 'test-addCohortSurvival.R:158:3',
    'test-addCohortSurvival.R:271:3', 'test-addCohortSurvival.R:377:3',
    'test-addCohortSurvival.R:489:3', 'test-addCohortSurvival.R:575:3',
    'test-addCohortSurvival.R:675:3', 'test-addCohortSurvival.R:762:3',
    'test-generateDeathCohort.R:2:3', 'test-generateDeathCohort.R:61:3',
    'test-generateDeathCohort.R:128:3', 'test-generateDeathCohort.R:194:3',
    'test-generateDeathCohort.R:261:3', 'test-generateDeathCohort.R:349:3',
    'test-generateDeathCohort.R:449:3',
    'test-mockMGUS2cdm.R:2:3', 'test-plotSurvival.R:2:3', 'test-plotSurvival.R:19:3',
    'test-plotSurvival.R:37:3', 'test-plotSurvival.R:57:3', 'test-plotSurvival.R:77:3',
    'test-plotSurvival.R:98:3', 'test-plotSurvival.R:129:3', 'test-plotSurvival.R:149:3',
    'test-plotSurvival.R:172:3', 'test-plotSurvival.R:191:3', 'test-tableSurvival.R:2:3',
    'test-tableSurvival.R:88:3', 'test-estimateSurvival.R:87:3',
    'test-estimateSurvival.R:126:3', 'test-estimateSurvival.R:176:3',
    'test-estimateSurvival.R:343:3', 'test-estimateSurvival.R:556:3',
    'test-estimateSurvival.R:707:3', 'test-estimateSurvival.R:1224:3',
    'test-estimateSurvival.R:1350:3', 'test-estimateSurvival.R:1372:3',
    'test-estimateSurvival.R:1448:3', 'test-estimateSurvival.R:1640:3',
    'test-estimateSurvival.R:1763:3', 'test-estimateSurvival.R:1793:3',
    'test-estimateSurvival.R:1812:3', 'test-estimateSurvival.R:1846:3',
    'test-estimateSurvival.R:1979:3', 'test-estimateSurvival.R:2106:3',
    'test-estimateSurvival.R:2216:5'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-reexports-omopgenerics.R:30:3'): omopgenerics reexports work ──
  isTRUE(all.equal(surv, surv_nosup, check.attributes = FALSE)) is not FALSE

  `actual`:   TRUE
  `expected`: FALSE

  ── Failure ('test-reexports-omopgenerics.R:34:3'): omopgenerics reexports work ──
  isTRUE(all.equal(surv, surv_nosup_imported, check.attributes = FALSE)) is not TRUE

  `actual`:   FALSE
  `expected`: TRUE

  [ FAIL 2 | WARN 56 | SKIP 48 | PASS 33 ]
  Error: Test failures
  Execution halted

Package: CohortSymmetry
Check: tests
New result: ERROR
  Running ‘testthat.R’ [188s/180s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
  > # * https://testthat.r-lib.org/articles/special-files.html
  >
  > library(testthat)
  > library(CohortSymmetry)
  >
  > test_check("CohortSymmetry")
  Note: method with signature 'DBIConnection#Id' chosen for function 'dbExistsTable',
   target signature 'duckdb_connection#Id'.
   "duckdb_connection#ANY" would also be valid
  `days_prior_observation` casted to character.
  `days_prior_observation` casted to character.
  `days_prior_observation` casted to character.
  -- 1 combination of 4 had index always before marker
  `days_prior_observation` casted to character.
  -- 1 combination of 4 had index always before marker
  `days_prior_observation` casted to character.
  Joining with `by = join_by(result_id, cdm_name)`
  `days_prior_observation` casted to character.
  Joining with `by = join_by(result_id, cdm_name)`
  `days_prior_observation` casted to character.
  `days_prior_observation` casted to character.
  `days_prior_observation` casted to character.
  [ FAIL 1 | WARN 139 | SKIP 53 | PASS 51 ]

  ══ Skipped tests (53) ══════════════════════════════════════════════════════════
  • On CRAN (53): 'test-attrition.R:2:3', 'test-attrition.R:63:3',
    'test-attrition.R:149:3', 'test-attrition.R:310:3', 'test-attrition.R:397:3',
    'test-attrition.R:461:3', 'test-attrition.R:592:3', 'test-attrition.R:769:3',
    'test-displayTable.R:30:3', 'test-displayTable.R:58:3', 'test-displayTable.R:86:3',
    'test-eunomia.R:2:3', 'test-eunomia.R:65:3',
    'test-generateSequenceCohortSet.R:3:3', 'test-generateSequenceCohortSet.R:20:3',
    'test-generateSequenceCohortSet.R:96:3', 'test-generateSequenceCohortSet.R:153:3',
    'test-generateSequenceCohortSet.R:205:3', 'test-generateSequenceCohortSet.R:259:3',
    'test-generateSequenceCohortSet.R:288:3', 'test-generateSequenceCohortSet.R:330:3',
    'test-generateSequenceCohortSet.R:356:3', 'test-generateSequenceCohortSet.R:427:3',
    'test-generateSequenceCohortSet.R:515:3', 'test-generateSequenceCohortSet.R:663:3',
    'test-generateSequenceCohortSet.R:751:3', 'test-generateSequenceCohortSet.R:764:3',
    'test-generateSequenceCohortSet.R:776:3', 'test-generateSequenceCohortSet.R:788:3',
    'test-generateSequenceCohortSet.R:800:3', 'test-generateSequenceCohortSet.R:818:3',
    'test-generateSequenceCohortSet.R:836:3', 'test-generateSequenceCohortSet.R:848:3',
    'test-generateSequenceCohortSet.R:859:3', 'test-generateSequenceCohortSet.R:870:3',
    'test-generateSequenceCohortSet.R:894:3', 'test-plotSequenceRatio.R:41:3',
    'test-plotSequenceRatio.R:85:3', 'test-plotTemporalSymmetry.R:42:3',
    'test-plotTemporalSymmetry.R:86:3', 'test-summariseSequenceRatios.R:203:3',
    'test-summariseSequenceRatios.R:415:3', 'test-summariseSequenceRatios.R:522:3',
    'test-summariseSequenceRatios.R:604:3', 'test-summariseSequenceRatios.R:763:3',
    'test-summariseSequenceRatios.R:801:3', 'test-summariseSequenceRatios.R:839:3',
    'test-summariseSequenceRatios.R:877:3', 'test-summariseTemporalSymmetry.R:73:3',
    'test-summariseTemporalSymmetry.R:98:3', 'test-summariseTemporalSymmetry.R:156:3',
    'test-test-dbs.R:2:3', 'test-test-dbs.R:50:3'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-summariseTemporalSymmetry.R:30:3'): test summariseTemporalSymmetry ──
  is.na(unique(temporal_symmetry$estimate_value)) is not TRUE

  `actual`:   FALSE
  `expected`: TRUE

  [ FAIL 1 | WARN 139 | SKIP 53 | PASS 51 ]
  Error: Test failures
  Execution halted

Package: DrugUtilisation
Check: tests
New result: ERROR
  Running ‘testthat.R’ [206s/111s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/tests.html
  > # * https://testthat.r-lib.org/reference/test_package.html#special-files
  >
  > library(testthat)
  > library(DrugUtilisation)
  >
  > test_check("DrugUtilisation")
  Starting 2 test processes
  [ FAIL 4 | WARN 24 | SKIP 50 | PASS 69 ]

  ══ Skipped tests (50) ══════════════════════════════════════════════════════════
  • On CRAN (50): 'test-benchmarkDrugUtilisation.R:2:3', 'test-addDrugUtilisation.R:3:3',
    'test-addDrugUtilisation.R:156:3', 'test-addDrugUtilisation.R:196:3',
    'test-dailyDose.R:2:3', 'test-generateDrugUtilisationCohortSet.R:2:3',
    'test-generateDrugUtilisationCohortSet.R:23:3', 'test-generatedAtcCohortSet.R:2:3',
    'test-generatedIngredientCohortSet.R:2:3', 'test-generatedIngredientCohortSet.R:39:3',
    'test-generatedIngredientCohortSet.R:54:3', 'test-generatedIngredientCohortSet.R:69:3',
    'test-generatedIngredientCohortSet.R:98:3', 'test-drugUse.R:2:3',
    'test-drugUse.R:42:3', 'test-drugUse.R:259:3', 'test-drugUse.R:463:3',
    'test-drugUse.R:608:3', 'test-drugUse.R:860:3', 'test-drugUse.R:883:3',
    'test-drugUse.R:932:3', 'test-drugUse.R:967:3', 'test-drugUse.R:1025:3',
    'test-patterns.R:2:3', 'test-plotProportionOfPatientsCovered.R:2:3',
    'test-plotProportionOfPatientsCovered.R:58:3',
    'test-plotProportionOfPatientsCovered.R:100:3', 'test-plotTreatment.R:2:3',
    'test-plots.R:2:3', 'test-readConceptList.R:2:3', 'test-require.R:2:3',
    'test-require.R:143:3', 'test-require.R:324:3', 'test-require.R:432:3',
    'test-require.R:526:3', 'test-require.R:597:3', 'test-indication.R:2:3',
    'test-indication.R:202:3', 'test-indication.R:364:3', 'test-indication.R:481:3',
    'test-summariseProportionOfPatientsCovered.R:94:3',
    'test-summariseProportionOfPatientsCovered.R:170:3',
    'test-summariseProportionOfPatientsCovered.R:277:3',
    'test-summariseProportionOfPatientsCovered.R:497:3',
    'test-summariseProportionOfPatientsCovered.R:550:3', 'test-tables.R:2:3',
    'test-tables.R:132:3', 'test-tables.R:271:3', 'test-tables.R:393:3',
    'test-tables.R:502:3'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-summariseDrugRestart.R:76:3'): summarise drug restart ────────
  is.na(settings(resultsCohort)$censor_date) is not TRUE

  `actual`:
  `expected`: TRUE

  ── Failure ('test-summariseDrugRestart.R:83:3'): summarise drug restart ────────
  unique(resultsSup$estimate_value) (`actual`) not equal to c(NA_character_, "0") (`expected`).

  `actual`:   "-" "0"
  `expected`: NA  "0"

  ── Failure ('test-summariseTreatment.R:49:3'): test summariseTreatment ─────────
  all(...) is not TRUE

  `actual`:   FALSE
  `expected`: TRUE

  ── Failure ('test-summariseDrugUtilisation.R:65:3'): summariseDrugUtilisation works ──
  all(...) is not TRUE

  `actual`:   FALSE
  `expected`: TRUE

  [ FAIL 4 | WARN 24 | SKIP 50 | PASS 69 ]
  Error: Test failures
  Execution halted

Package: IncidencePrevalence
Check: tests
New result: ERROR
  Running ‘testthat.R’ [418s/206s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > library(testthat)
  > library(IncidencePrevalence)
  >
  > test_check("IncidencePrevalence")
  Starting 2 test processes
  [ FAIL 3 | WARN 40 | SKIP 72 | PASS 134 ]

  ══ Skipped tests (72) ══════════════════════════════════════════════════════════
  • On CRAN (71): 'test-benchmarkIncidencePrevalence.R:2:3',
    'test-estimatePrevalence.R:67:3', 'test-estimatePrevalence.R:124:3',
    'test-estimatePrevalence.R:244:3', 'test-estimatePrevalence.R:380:3',
    'test-estimatePrevalence.R:462:3', 'test-estimatePrevalence.R:607:3',
    'test-estimatePrevalence.R:658:3', 'test-estimatePrevalence.R:715:3',
    'test-estimatePrevalence.R:763:3', 'test-estimatePrevalence.R:787:3',
    'test-estimatePrevalence.R:812:3', 'test-estimatePrevalence.R:945:3',
    'test-estimatePrevalence.R:987:3', 'test-estimatePrevalence.R:1017:3',
    'test-estimatePrevalence.R:1098:3', 'test-estimatePrevalence.R:1150:3',
    'test-estimateIncidence.R:91:3', 'test-estimateIncidence.R:201:3',
    'test-estimateIncidence.R:273:3', 'test-estimateIncidence.R:374:3',
    'test-estimateIncidence.R:479:3', 'test-estimateIncidence.R:576:3',
    'test-estimateIncidence.R:744:3', 'test-estimateIncidence.R:853:3',
    'test-estimateIncidence.R:1008:3', 'test-estimateIncidence.R:1074:3',
    'test-estimateIncidence.R:1130:3', 'test-estimateIncidence.R:1219:3',
    'test-estimateIncidence.R:1394:3', 'test-estimateIncidence.R:1512:3',
    'test-estimateIncidence.R:1593:3', 'test-estimateIncidence.R:1703:3',
    'test-estimateIncidence.R:1809:3', 'test-estimateIncidence.R:1962:3',
    'test-estimateIncidence.R:2019:3', 'test-estimateIncidence.R:2209:3',
    'test-estimateIncidence.R:2407:3', 'test-estimateIncidence.R:2485:3',
    'test-estimateIncidence.R:2899:3', 'test-estimateIncidence.R:2940:3',
    'test-estimateIncidence.R:2989:3', 'test-estimateIncidence.R:3047:3',
    'test-mockIncidencePrevalenceRef.R:2:3', 'test-mockIncidencePrevalenceRef.R:39:3',
    'test-mockIncidencePrevalenceRef.R:67:3', 'test-mockIncidencePrevalenceRef.R:118:3',
    'test-mockIncidencePrevalenceRef.R:220:3', 'test-plotting.R:2:3',
    'test-plotting.R:105:3', 'test-plotting.R:160:3', 'test-plotting.R:211:3',
    'test-generateDenominatorCohortSet.R:138:3',
    'test-generateDenominatorCohortSet.R:185:3',
    'test-generateDenominatorCohortSet.R:289:3',
    'test-generateDenominatorCohortSet.R:325:3',
    'test-generateDenominatorCohortSet.R:581:3',
    'test-generateDenominatorCohortSet.R:799:3',
    'test-generateDenominatorCohortSet.R:842:3',
    'test-generateDenominatorCohortSet.R:894:3',
    'test-generateDenominatorCohortSet.R:978:3',
    'test-generateDenominatorCohortSet.R:1101:3',
    'test-generateDenominatorCohortSet.R:1165:3',
    'test-generateDenominatorCohortSet.R:1218:3',
    'test-generateDenominatorCohortSet.R:1276:3',
    'test-generateDenominatorCohortSet.R:1352:3',
    'test-generateDenominatorCohortSet.R:1473:3',
    'test-generateDenominatorCohortSet.R:1572:3',
    'test-generateDenominatorCohortSet.R:1623:3',
    'test-generateDenominatorCohortSet.R:1686:3',
    'test-generateDenominatorCohortSet.R:1905:3'
  • empty test (1): 'test-tables.R:1:1'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Error ('test-benchmarkIncidencePrevalence.R:55:3'): check tables cleaned up ──
  Error in `omopgenerics::newSummarisedResult(x = prs, settings = dplyr::mutate(dplyr::select(analysisSettings,
    !c("denominator_cohort_name", "outcome_cohort_name")),
    dplyr::across(-"result_id", as.character)))`: Each `result_id` must be unique
    and contain a unique set of settings.
  Backtrace:
      ▆
   1. └─IncidencePrevalence::benchmarkIncidencePrevalence(cdm) at test-benchmarkIncidencePrevalence.R:55:3
   2.   └─IncidencePrevalence::estimatePointPrevalence(...)
   3.     └─IncidencePrevalence:::estimatePrevalence(...)
   4.       └─omopgenerics::newSummarisedResult(...)
   5.         └─omopgenerics:::validateSummarisedResult(x)
   6.           └─omopgenerics:::validateResultSettings(attr(x, "settings"), call = call)
   7.             └─cli::cli_abort(...)
   8.               └─rlang::abort(...)

  ── Failure ('test-estimatePrevalence.R:1202:3'): mock db: prevalence using strata vars ──
  visOmopResults::filterSettings(prev_orig, result_type == "prevalence") (`actual`) not equal to ... %>% ... (`expected`).

  attr(actual, 'settings') vs attr(expected, 'settings')
                                    strata
  - attr(actual, 'settings')[1, ]
  + attr(expected, 'settings')[1, ] my_strata

  `attr(actual, 'settings')$strata`:   ""
  `attr(expected, 'settings')$strata`: "my_strata"

  ── Failure ('test-estimateIncidence.R:3154:3'): mock db: incidence using strata vars ──
  visOmopResults::filterSettings(inc_orig, result_type == "incidence") (`actual`) not identical to ... %>% ... (`expected`).

  attr(actual, 'settings') vs attr(expected, 'settings')
                                    strata
  - attr(actual, 'settings')[1, ]
  + attr(expected, 'settings')[1, ] my_strata

  `attr(actual, 'settings')$strata`:   ""
  `attr(expected, 'settings')$strata`: "my_strata"

  [ FAIL 3 | WARN 40 | SKIP 72 | PASS 134 ]
  Error: Test failures
  Execution halted

Package: visOmopResults
Check: examples
New result: ERROR
Running examples in ‘visOmopResults-Ex.R’ failed
The error most likely occurred in:

  > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
  > ### Name: filterSettings
  > ### Title: Filter a using the settings
  > ### Aliases: filterSettings
  >
  > ### ** Examples
  >
  > library(dplyr)

  Attaching package: ‘dplyr’

  The following objects are masked from ‘package:stats’:

      filter, lag

  The following objects are masked from ‘package:base’:

      intersect, setdiff, setequal, union

  > library(omopgenerics)

  Attaching package: ‘omopgenerics’

  The following objects are masked from ‘package:visOmopResults’:

      addSettings, additionalColumns, filterAdditional, filterGroup,
      filterSettings, filterStrata, groupColumns, pivotEstimates,
      settingsColumns, splitAdditional, splitAll, splitGroup, splitStrata,
      strataColumns, tidyColumns, uniteAdditional, uniteGroup, uniteStrata

  The following object is masked from ‘package:stats’:

      filter

  >
  > x <- tibble(
  +   "result_id" = as.integer(c(1, 2)),
  +   "cdm_name" = c("cprd",
"eunomia"), + "group_name" = "sex", + "group_level" = "male", + "strata_name" = "sex", + "strata_level" = "male", + "variable_name" = "Age group", + "variable_level" = "10 to 50", + "estimate_name" = "count", + "estimate_type" = "numeric", + "estimate_value" = "5", + "additional_name" = "overall", + "additional_level" = "overall" + ) |> + newSummarisedResult(settings = tibble( + "result_id" = c(1, 2), "custom" = c("A", "B") + )) `result_type`, `package_name`, and `package_version` added to settings. Error in `newSummarisedResult()`: ! In result_id = 1: `sex` present in both group and strata. In result_id = 2: `sex` present in both group and strata. Backtrace: ▆ 1. └─omopgenerics::newSummarisedResult(...) 2. └─omopgenerics:::validateSummarisedResult(x) 3. └─omopgenerics:::validateResultSettings(attr(x, "settings"), call = call) 4. └─omopgenerics:::reportOverlap(...) 5. └─cli::cli_abort(message = message, call = call) 6. └─rlang::abort(...) Execution halted Package: visOmopResults Check: tests New result: ERROR Running ‘testthat.R’ [64s/36s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview > # * https://testthat.r-lib.org/articles/special-files.html > > library(testthat) > library(visOmopResults) Registered S3 method overwritten by 'visOmopResults': method from tidy.summarised_result omopgenerics > > test_check("visOmopResults") Starting 2 test processes [ FAIL 14 | WARN 49 | SKIP 0 | PASS 476 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Failure ('test-addSettings.R:17:3'): addSettings ──────────────────────────── Expected `... <- NULL` to run without any errors. 
i Actually got a with text: In result_id = 1: `sex` present in both group and strata. In result_id = 2: `sex` present in both group and strata. ── Error ('test-addSettings.R:27:3'): addSettings ────────────────────────────── Error in `eval(code, test_env)`: object 'res' not found Backtrace: ▆ 1. ├─testthat::expect_identical(...) at test-addSettings.R:27:3 2. │ └─testthat::quasi_label(enquo(object), label, arg = "object") 3. │ └─rlang::eval_bare(expr, quo_get_env(quo)) 4. ├─base::sort(colnames(settings(res))) 5. ├─base::colnames(settings(res)) 6. │ └─base::is.data.frame(x) 7. └─omopgenerics::settings(res) ── Failure ('test-filter.R:35:3'): filterSettings ────────────────────────────── Expected `... <- NULL` to run without any errors. i Actually got a with text: In result_id = 1: `sex` present in both group and strata. In result_id = 2: `sex` present in both group and strata. ── Error ('test-filter.R:45:3'): filterSettings ──────────────────────────────── Error in `eval(code, test_env)`: object 'res' not found Backtrace: ▆ 1. └─visOmopResults::filterSettings(res, package_name == "omock") at test-filter.R:45:3 2. └─visOmopResults:::validateSettingsAttribute(result) ── Failure ('test-tidy.R:37:3'): tidySummarisedResult ────────────────────────── all(...) is not TRUE `actual`: FALSE `expected`: TRUE ── Failure ('test-visOmopTable.R:3:3'): visOmopTable ─────────────────────────── `expect_no_error(...)` did not throw the expected message. ── Failure ('test-visOmopTable.R:20:3'): visOmopTable ────────────────────────── `expect_no_error(...)` did not throw the expected message. ── Failure ('test-visOmopTable.R:46:3'): visOmopTable ────────────────────────── `expect_no_error(...)` did not throw the expected message. ── Failure ('test-visOmopTable.R:65:3'): visOmopTable ────────────────────────── `expect_no_error(...)` did not throw the expected message. 
── Failure ('test-visOmopTable.R:86:3'): visOmopTable ────────────────────────── `expect_no_error(...)` did not throw the expected message. ── Failure ('test-visOmopTable.R:132:3'): visOmopTable ───────────────────────── all(...) is not TRUE `actual`: FALSE `expected`: TRUE ── Failure ('test-visOmopTable.R:136:3'): visOmopTable ───────────────────────── all(...) is not TRUE `actual`: FALSE `expected`: TRUE ── Failure ('test-visOmopTable.R:153:3'): renameColumn works ─────────────────── `expect_no_error(...)` did not throw the expected message. ── Failure ('test-visOmopTable.R:186:3'): renameColumn works ─────────────────── `expect_warning(...)` did not throw the expected message. [ FAIL 14 | WARN 49 | SKIP 0 | PASS 476 ] Error: Test failures Execution halted
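Several of the failures above are type mismatches rather than changed results: settings values that tests compare against a logical or `NA` now arrive as character (the CohortCharacteristics failure reports `actual` as the character 'TRUE' where `expected` is the logical TRUE, and CohortSymmetry repeatedly logs "`days_prior_observation` casted to character."). A minimal base-R sketch of the mismatch, using an illustrative variable name that is not taken from any of these packages:

```r
# Illustrative only: a settings value now stored as the character "TRUE".
restrict_to_first_entry <- "TRUE"

# A strict comparison against the logical TRUE fails on class, not value:
identical(restrict_to_first_entry, TRUE)             # FALSE

# Coercing to logical first recovers the intended comparison:
identical(as.logical(restrict_to_first_entry), TRUE) # TRUE
```

Tests written against the old logical settings would need a coercion like this (or a relaxed comparison such as `expect_equal` on coerced values) to pass against the character-valued settings table.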