* using log directory ‘/srv/hornik/tmp/CRAN_pretest/PhenotypeR.Rcheck’
* using R Under development (unstable) (2024-11-18 r87347)
* using platform: x86_64-pc-linux-gnu
* R was compiled by
    Debian clang version 19.1.1 (1)
    Debian flang-new version 19.1.1 (1)
* running under: Debian GNU/Linux trixie/sid
* using session charset: UTF-8
* checking for file ‘PhenotypeR/DESCRIPTION’ ... OK
* checking extension type ... Package
* this is package ‘PhenotypeR’ version ‘0.1.0’
* package encoding: UTF-8
* checking CRAN incoming feasibility ... [4s/5s] NOTE
Maintainer: ‘Edward Burn ’

New submission

Possibly misspelled words in DESCRIPTION:
  codelist (18:30)
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking serialization versions ... OK
* checking whether package ‘PhenotypeR’ can be installed ... [2s/2s] OK
* checking package directory ... OK
* checking for future file timestamps ... OK
* checking ‘build’ directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... NOTE
Non-standard file/directory found at top level:
  ‘shiny’
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... [0s/0s] OK
* checking whether the package can be loaded with stated dependencies ... [0s/0s] OK
* checking whether the package can be unloaded cleanly ... [0s/0s] OK
* checking whether the namespace can be loaded with stated dependencies ... [0s/0s] OK
* checking whether the namespace can be unloaded cleanly ... [0s/0s] OK
* checking loading without being on the library search path ... [0s/0s] OK
* checking use of S3 registration ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... [3s/3s] OK
* checking Rd files ... [0s/0s] OK
* checking Rd metadata ... OK
* checking Rd line widths ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking installed files from ‘inst/doc’ ... OK
* checking files in ‘vignettes’ ... OK
* checking examples ... [1s/1s] OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ... [262s/253s] ERROR
  Running ‘testthat.R’ [262s/253s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > # Learn more about the roles of various files in:
  > # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
  > # * https://testthat.r-lib.org/articles/special-files.html
  >
  > library(testthat)
  > library(PhenotypeR)
  >
  > test_check("PhenotypeR")
  Note: method with signature 'DBIConnection#Id' chosen for function 'dbExistsTable',
   target signature 'duckdb_connection#Id'.
"duckdb_connection#ANY" would also be valid * Getting codelists from cohorts * Getting cohort summary i adding demographics columns i adding tableIntersectCount 1/1 i summarising data v summariseCharacteristics finished! * Getting age density * Getting cohort attrition * Getting cohort summary i adding demographics columns i adding tableIntersectCount 1/1 i summarising data v summariseCharacteristics finished! * Getting age density * Getting cohort attrition * Getting cohort overlap * Getting cohort timing i The following estimates will be computed: * days_between_cohort_entries: density > Start summary of data, at 2024-11-19 22:48:21.388666 v Summary finished, at 2024-11-19 22:48:21.46499 ! Results have not been suppressed. * Taking 1000 person sample of cohorts * Generating a age and sex matched cohorts Starting matching i Creating copy of target cohort. * 1 cohort to be matched. i Creating controls cohorts. i Excluding cases from controls * Matching by gender_concept_id and year_of_birth * Removing controls that were not in observation at index date * Excluding target records whose pair is not in observation * Adjusting ratio Binding cohorts v Done i adding demographics columns i adding tableIntersectCount 1/1 i summarising data v summariseCharacteristics finished! * Running large scale characterisation i Summarising large scale characteristics - getting characteristics from table condition_occurrence (1 of 6) - getting characteristics from table visit_occurrence (2 of 6) * Generating a age and sex matched cohorts Starting matching i Creating copy of target cohort. * 1 cohort to be matched. i Creating controls cohorts. i Excluding cases from controls * Matching by gender_concept_id and year_of_birth * Removing controls that were not in observation at index date * Excluding target records whose pair is not in observation * Adjusting ratio Binding cohorts v Done i adding demographics columns i adding tableIntersectCount 1/1 i summarising data v summariseCharacteristics finished! * Running large scale characterisation i Summarising large scale characteristics - getting characteristics from table condition_occurrence (1 of 6) - getting characteristics from table visit_occurrence (2 of 6) * Getting codelists from cohorts * Getting cohort summary i adding demographics columns i adding tableIntersectCount 1/1 i summarising data v summariseCharacteristics finished! 
  * Getting age density
  * Getting cohort attrition
  * Getting cohort overlap
  * Getting cohort timing
  i The following estimates will be computed:
  * days_between_cohort_entries: density
  > Start summary of data, at 2024-11-19 22:49:26.641583
  v Summary finished, at 2024-11-19 22:49:26.718847
  * Creating denominator for incidence and prevalence
  * Sampling person table to 1e+06
  i Creating denominator cohorts
  v Cohorts created in 0 min and 7 sec
  * Estimating incidence
  Getting incidence for analysis 1 of 24
  Getting incidence for analysis 2 of 24
  Getting incidence for analysis 3 of 24
  Getting incidence for analysis 4 of 24
  Getting incidence for analysis 5 of 24
  Getting incidence for analysis 6 of 24
  Getting incidence for analysis 7 of 24
  Getting incidence for analysis 8 of 24
  Getting incidence for analysis 9 of 24
  Getting incidence for analysis 10 of 24
  Getting incidence for analysis 11 of 24
  Getting incidence for analysis 12 of 24
  Getting incidence for analysis 13 of 24
  Getting incidence for analysis 14 of 24
  Getting incidence for analysis 15 of 24
  Getting incidence for analysis 16 of 24
  Getting incidence for analysis 17 of 24
  Getting incidence for analysis 18 of 24
  Getting incidence for analysis 19 of 24
  Getting incidence for analysis 20 of 24
  Getting incidence for analysis 21 of 24
  Getting incidence for analysis 22 of 24
  Getting incidence for analysis 23 of 24
  Getting incidence for analysis 24 of 24
  Overall time taken: 0 mins and 19 secs
  * Estimating prevalence
  Getting prevalence for analysis 1 of 12
  Getting prevalence for analysis 2 of 12
  Getting prevalence for analysis 3 of 12
  Getting prevalence for analysis 4 of 12
  Getting prevalence for analysis 5 of 12
  Getting prevalence for analysis 6 of 12
  Getting prevalence for analysis 7 of 12
  Getting prevalence for analysis 8 of 12
  Getting prevalence for analysis 9 of 12
  Getting prevalence for analysis 10 of 12
  Getting prevalence for analysis 11 of 12
  Getting prevalence for analysis 12 of 12
  Time taken: 0 mins and 7 secs
  * Taking 1000 person sample of cohorts
  * Generating a age and sex matched cohorts
  Starting matching
  i Creating copy of target cohort.
  * 2 cohorts to be matched.
  i Creating controls cohorts.
  i Excluding cases from controls
  * Matching by gender_concept_id and year_of_birth
  * Removing controls that were not in observation at index date
  * Excluding target records whose pair is not in observation
  * Adjusting ratio
  Binding cohorts
  v Done
  i adding demographics columns
  i adding tableIntersectCount 1/1
  i summarising data
  v summariseCharacteristics finished!
  * Running large scale characterisation
  i Summarising large scale characteristics
  - getting characteristics from table condition_occurrence (1 of 6)
  - getting characteristics from table visit_occurrence (2 of 6)
  * Getting codelists from cohorts
  * Getting cohort summary
  i adding demographics columns
  i adding tableIntersectCount 1/1
  i summarising data
  v summariseCharacteristics finished!
  * Getting age density
  * Getting cohort attrition
  * Getting cohort overlap
  * Getting cohort timing
  i The following estimates will be computed:
  * days_between_cohort_entries: density
  > Start summary of data, at 2024-11-19 22:50:35.927595
  v Summary finished, at 2024-11-19 22:50:35.999106
  * Taking 1000 person sample of cohorts
  * Generating a age and sex matched cohorts
  Starting matching
  i Creating copy of target cohort.
  * 2 cohorts to be matched.
  i Creating controls cohorts.
  i Excluding cases from controls
  * Matching by gender_concept_id and year_of_birth
  * Removing controls that were not in observation at index date
  * Excluding target records whose pair is not in observation
  * Adjusting ratio
  Binding cohorts
  v Done
  i adding demographics columns
  i adding tableIntersectCount 1/1
  i summarising data
  v summariseCharacteristics finished!
  * Running large scale characterisation
  i Summarising large scale characteristics
  - getting characteristics from table condition_occurrence (1 of 6)
  - getting characteristics from table visit_occurrence (2 of 6)
  i Creating denominator cohorts
  v Cohorts created in 0 min and 3 sec
  * Creating denominator for incidence and prevalence
  * Sampling person table to 250
  i Creating denominator cohorts
  -- getting cohort dates for ===============>--------------- 3 of 6 cohorts
  -- getting cohort dates for ==============================> 6 of 6 cohorts
  v Cohorts created in 0 min and 7 sec
  * Estimating incidence
  Getting incidence for analysis 1 of 12
  Getting incidence for analysis 2 of 12
  Getting incidence for analysis 3 of 12
  Getting incidence for analysis 4 of 12
  Getting incidence for analysis 5 of 12
  Getting incidence for analysis 6 of 12
  Getting incidence for analysis 7 of 12
  Getting incidence for analysis 8 of 12
  Getting incidence for analysis 9 of 12
  Getting incidence for analysis 10 of 12
  Getting incidence for analysis 11 of 12
  Getting incidence for analysis 12 of 12
  Overall time taken: 0 mins and 9 secs
  * Estimating prevalence
  Getting prevalence for analysis 1 of 6
  Getting prevalence for analysis 2 of 6
  Getting prevalence for analysis 3 of 6
  Getting prevalence for analysis 4 of 6
  Getting prevalence for analysis 5 of 6
  Getting prevalence for analysis 6 of 6
  Time taken: 0 mins and 4 secs
  * Creating denominator for incidence and prevalence
  i Creating denominator cohorts
  -- getting cohort dates for =========================>----- 5 of 6 cohorts
  -- getting cohort dates for ==============================> 6 of 6 cohorts
  v Cohorts created in 0 min and 7 sec
  * Estimating incidence
  Getting incidence for analysis 1 of 12
  Getting incidence for analysis 2 of 12
  Getting incidence for analysis 3 of 12
  Getting incidence for analysis 4 of 12
  Getting incidence for analysis 5 of 12
  Getting incidence for analysis 6 of 12
  Getting incidence for analysis 7 of 12
  Getting incidence for analysis 8 of 12
  Getting incidence for analysis 9 of 12
  Getting incidence for analysis 10 of 12
  Getting incidence for analysis 11 of 12
  Getting incidence for analysis 12 of 12
  Overall time taken: 0 mins and 12 secs
  * Estimating prevalence
  Getting prevalence for analysis 1 of 6
  Getting prevalence for analysis 2 of 6
  Getting prevalence for analysis 3 of 6
  Getting prevalence for analysis 4 of 6
  Getting prevalence for analysis 5 of 6
  Getting prevalence for analysis 6 of 6
  Time taken: 0 mins and 5 secs

  [ FAIL 4 | WARN 13 | SKIP 8 | PASS 38 ]

  ══ Skipped tests (8) ═══════════════════════════════════════════════════════════
  • CirceR cannot be loaded (1): 'test-addCodelistAttribute.R:116:3'
  • On CRAN (3): 'test-dbms.R:3:3', 'test-dbms.R:25:3', 'test-shinyDiagnostics.R:3:3'
  • empty test (4): 'test-cohortDiagnostics.R:93:1', 'test-cohortDiagnostics.R:97:1', 'test-cohortDiagnostics.R:101:1', 'test-cohortDiagnostics.R:107:1'

  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-matchedDiagnostics.R:29:3'): cohort to pop diagnostics ───────
  Expected `result <- matchedDiagnostics(cdm$my_cohort)` to run without any errors.
  i Actually got a with text:
  i In argument: `dplyr::all_of(toSelect)`.
  Caused by error in `dplyr::all_of()`:
  ! Can't subset elements that don't exist.
  x Element `visit_source_concept_id` doesn't exist.
  ── Error ('test-matchedDiagnostics.R:32:3'): cohort to pop diagnostics ─────────
  Error in `dplyr::select(cdm[[tab]], dplyr::all_of(toSelect))`:
  i In argument: `dplyr::all_of(toSelect)`.
  Caused by error in `dplyr::all_of()`:
  ! Can't subset elements that don't exist.
  x Element `visit_source_concept_id` doesn't exist.
  ── Failure ('test-phenotypeDiagnostics.R:34:3'): overall diagnostics function ──
  Expected `my_result <- phenotypeDiagnostics(cdm$my_cohort)` to run without any errors.
  i Actually got a with text:
  i In argument: `dplyr::all_of(toSelect)`.
  Caused by error in `dplyr::all_of()`:
  ! Can't subset elements that don't exist.
  x Element `visit_source_concept_id` doesn't exist.
  ── Error ('test-phenotypeDiagnostics.R:81:3'): overall diagnostics function ────
  Error in `dplyr::select(cdm[[tab]], dplyr::all_of(toSelect))`:
  i In argument: `dplyr::all_of(toSelect)`.
  Caused by error in `dplyr::all_of()`:
  ! Can't subset elements that don't exist.
  x Element `visit_source_concept_id` doesn't exist.

  [ FAIL 4 | WARN 13 | SKIP 8 | PASS 38 ]
  Error: Test failures
  Execution halted
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes ... OK
* checking re-building of vignette outputs ... [37s/36s] OK
* checking PDF version of manual ... [3s/3s] OK
* checking HTML version of manual ... [0s/0s] OK
* checking for non-standard things in the check directory ... OK
* checking for detritus in the temp directory ... OK
* DONE

Status: 1 ERROR, 2 NOTEs
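
Note on the failed tests: all four share one root cause. The selection
vector `toSelect` includes `visit_source_concept_id`, but the
visit_occurrence table in the test CDM has no such column, and
`dplyr::all_of()` errors on any missing name. Below is a minimal
standalone sketch of the failure mode and one possible fix; the tibble
and the `toSelect` vector here are illustrative stand-ins, not
PhenotypeR's actual objects.

  library(dplyr)

  # A toy table that is missing one of the requested columns.
  tbl <- tibble(person_id = 1L, visit_concept_id = 9201L)
  toSelect <- c("person_id", "visit_concept_id", "visit_source_concept_id")

  # all_of() is strict, so this reproduces the error seen in the log:
  # select(tbl, all_of(toSelect))
  # Error: Can't subset elements that don't exist.
  # x Element `visit_source_concept_id` doesn't exist.

  # any_of() selects only the columns that exist, silently skipping
  # the rest, so the select no longer fails:
  select(tbl, any_of(toSelect))

Whether silently dropping the column is acceptable depends on whether
downstream code needs `visit_source_concept_id`; intersecting
`toSelect` with `colnames()` and warning on any difference would be the
more conservative alternative.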
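
Note on the two NOTEs: the non-standard top-level `shiny` directory
will keep triggering the check unless it is excluded from the built
package or moved under inst/. A sketch of the exclusion route, assuming
usethis is available (standard usethis usage, not something the
PhenotypeR sources are known to do):

  # Writes the escaped regex ^shiny$ to .Rbuildignore so the directory
  # is dropped when the source tarball is built.
  usethis::use_build_ignore("shiny")

If the app is meant to ship with the package, the conventional home is
inst/ (e.g. inst/shiny/), which installs to the package root. For the
spelling NOTE, "codelist" is domain vocabulary rather than a typo;
single-quoting it in DESCRIPTION ('codelist') is the usual way to have
CRAN's spell check skip it.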