R Under development (unstable) (2023-09-08 r85113 ucrt) -- "Unsuffered Consequences"
Copyright (C) 2023 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> requireNamespace("testthat")
Loading required namespace: testthat
> requireNamespace("BeeBDC")
Loading required namespace: BeeBDC
The legacy packages maptools, rgdal, and rgeos, underpinning the sp package,
which was just loaded, will retire in October 2023.
Please refer to R-spatial evolution reports for details, especially
https://r-spatial.org/r/2023/05/15/evolution4.html.
It may be desirable to make the sf package available;
package maintainers should consider adding sf to Suggests:.
The sp package is now running under evolution status 2
     (status 2 uses the sf package in place of rgdal)
Please note that rgdal will be retired during October 2023,
plan transition to sf/stars/terra functions using GDAL and PROJ
at your earliest convenience.
See https://r-spatial.org/r/2023/05/15/evolution4.html and https://github.com/r-spatial/evolution
rgdal: version: 1.6-7, (SVN revision 1203)
Geospatial Data Abstraction Library extensions to R successfully loaded
Loaded GDAL runtime: GDAL 3.6.2, released 2023/01/02
Path to GDAL shared files: D:/RCompile/CRANpkg/lib/4.4/rgdal/gdal
GDAL does not use iconv for recoding strings.
GDAL binary built with GEOS: TRUE
Loaded PROJ runtime: Rel. 9.2.0, March 1st, 2023, [PJ_VERSION: 920]
Path to PROJ shared files: D:/RCompile/CRANpkg/lib/4.4/rgdal/proj
PROJ CDN enabled: FALSE
Linking to sp version: 2.0-0
To mute warnings of possible GDAL/OSR exportToProj4() degradation,
use options("rgdal_show_exportToProj4_warnings"="none") before loading sp or rgdal.
rgeos version: 0.6-4, (SVN revision 699)
 GEOS runtime version: 3.11.2-CAPI-1.17.2
 Please note that rgeos will be retired during October 2023,
plan transition to sf or terra functions using GEOS at your earliest convenience.
See https://r-spatial.org/r/2023/05/15/evolution4.html for details.
 GEOS using OverlayNG
 Linking to sp version: 2.0-0
 Polygon checking: TRUE

>
> testthat::test_check("BeeBDC")
Loading required package: BeeBDC
Loading required namespace: mgsub
Starting taxonomy report...
Homalictus fijiensis is a synonym of Lasioglossum fijiense (Perkins and Cheesman, 1928) with the taxon id number 32620.
 - 'Homalictus fijiensis' has the synonyms: Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928)
Starting checklist report...
 - Lasioglossum fijiense (Perkins and Cheesman, 1928) is reportedly found in: Fiji, Solomon Islands
The output will be returned as a list with the elements: 'taxonomyReport', 'SynonymReport', and 'checklistReport'.
These can be accessed using 'output'$taxonomyReport, 'output'$SynonymReport, 'output'$checklistReport, or 'output'$failedReport.
Starting taxonomy report...
Lasioglossum fijiense (Perkins and Cheesman, 1928) is an accepted name with the taxon id number 32620.
Homalictus fijiensis is a synonym of Lasioglossum fijiense (Perkins and Cheesman, 1928) with the taxon id number 32620.
Homalictus urbanus is a synonym of Lasioglossum urbanum (Smith, 1879) with the taxon id number 36429.
 - 'Lasioglossum fijiense (Perkins and Cheesman, 1928)' has the synonyms: Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928), Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928)
 - 'Homalictus fijiensis' has the synonyms: Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928), Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928)
 - 'Homalictus urbanus' has the synonyms: Halictus urbanus Smith, 1879, Halictus urbanus baudinensis Cockerell, 1905, Halictus cretinicola Friese, 1909, Halictus kesteveni Cockerell, 1912, Halictus hackeriellus Cockerell, 1914, Halictus pavonellus Cockerell, 1915, Halictus olivinus Cockerell, 1922, Halictus urbanus var lomatiae Cockerell, 1922, Halictus microchalceus Cockerell, 1929, Halictus subcarus Cockerell, 1930, Halictus williamsi Cockerell, 1930, Halictus suburbanus Cockerell, 1930, Halictus aponi Cheesman and Perkins, 1939, Halictus aponi var erromangana Cheesman and Perkins, 1939, Homalictus urbanus (Smith, 1879), Homalictus urbanus aponi (Cheesman and Perkins, 1939)
Starting checklist report...
 - Lasioglossum fijiense (Perkins and Cheesman, 1928) is reportedly found in: Fiji, Solomon Islands
 - Lasioglossum fijiense (Perkins and Cheesman, 1928) is reportedly found in: Fiji, Solomon Islands
The output will be returned as a list with the elements: 'taxonomyReport', 'SynonymReport', and 'checklistReport'.
These can be accessed using 'output'$taxonomyReport, 'output'$SynonymReport, 'output'$checklistReport, or 'output'$failedReport.
Starting taxonomy report...
Lasioglossum fijiense (Perkins and Cheesman, 1928) is an accepted name with the taxon id number 32620.
Homalictus fijiensis is a synonym of Lasioglossum fijiense (Perkins and Cheesman, 1928) with the taxon id number 32620.
Homalictus urbanus is a synonym of Lasioglossum urbanum (Smith, 1879) with the taxon id number 36429.
 - 'Lasioglossum fijiense (Perkins and Cheesman, 1928)' has the synonyms: Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928), Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928)
 - 'Homalictus fijiensis' has the synonyms: Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928), Halictus fijiensis Perkins and Cheesman, 1928, Halictus suvaensis Cockerell, 1929, Homalictus fijiensis (Perkins and Cheesman, 1928)
 - 'Homalictus urbanus' has the synonyms: Halictus urbanus Smith, 1879, Halictus urbanus baudinensis Cockerell, 1905, Halictus cretinicola Friese, 1909, Halictus kesteveni Cockerell, 1912, Halictus hackeriellus Cockerell, 1914, Halictus pavonellus Cockerell, 1915, Halictus olivinus Cockerell, 1922, Halictus urbanus var lomatiae Cockerell, 1922, Halictus microchalceus Cockerell, 1929, Halictus subcarus Cockerell, 1930, Halictus williamsi Cockerell, 1930, Halictus suburbanus Cockerell, 1930, Halictus aponi Cheesman and Perkins, 1939, Halictus aponi var erromangana Cheesman and Perkins, 1939, Homalictus urbanus (Smith, 1879), Homalictus urbanus aponi (Cheesman and Perkins, 1939)
The output will be returned as a list with the elements: 'taxonomyReport' and 'SynonymReport'.
These can be accessed using 'output'$taxonomyReport, 'output'$SynonymReport, or 'output'$failedReport.

Attaching package: 'dplyr'

The following object is masked from 'package:testthat':

    matches

The following objects are masked from 'package:stats':

    filter, lag

The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union

 - jbd_GBIFissues: Flagged 5
 The .GBIFflags column was added to the database.
 - INITIAL match with occurrenceID only 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
 - Starting iteration 1
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 2
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 3
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 4
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 5
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 6
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 7
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 8
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Starting iteration 9
Matched 2 of 15 Paige occurrences.
There are 13 occurrences remaining to match.
This step has found 0 extra occurrences from the last iteration.
 - Updating Paige datasheet to merge...
 - Updating the final datasheet with new information from Paige...
 - No dates in file name(s). Finding most-recent from file save time...
 - Found the following file(s):
D:\temp\Rtmp08FWP2/19-Nov-22_USGS_DRO_flat.txt
 - Reading in data file. This should not take too long.
There may be some errors upon reading in depending on the state of the data.
One might consider reporting errors to Sam Droege to improve the dataset.
Rows: 7 Columns: 35
── Column specification ────────────────────────────────────────────────────────
Delimiter: "$"
chr  (22): ID., name, sex, DeterminedBy, WhoScanned, COLLECTION.db, ip, coun...
dbl   (7): gmt, latitude, longitude, accuracy, elevation, position, how1
lgl   (3): SpeciesNotes, days, note
dttm  (1): DateEntered
date  (2): DeterminedWhen, DateScanned
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
 - Formatting the USGS dataset...
 - Formatting the dateTime...
 - Creating samplingProtocol and samplingEffort columns...
 - Creating the fieldNotes and dataSource columns...
 - Renaming and selecting columns...
 - Checking for existing out_file directory...
 - No existing out_file directory found. Creating directory...
 - Writing occurrence data file...
Number of rows (records): 7
Written to file called USGS_formatted_2023-09-09.csv
at location D:\temp\Rtmp08FWP2/out_file
 - Writing attributes file...
Written to file called USGS_attribute_files2023-09-09.xml
at location D:\temp\Rtmp08FWP2/out_file
 - Fin.
\coordUncerFlagR: Flagged 3 geographically uncertain records:
The column '.uncertaintyThreshold' was added to the database.
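The country-matching step that follows in the log can be reproduced in outline with sf and rnaturalearth alone. This is a generic sketch of point-in-polygon country assignment, with made-up example points; it is not BeeBDC's internal implementation:

```r
library(sf)

# Two made-up occurrence points (lon/lat, WGS84).
pts <- st_as_sf(
  data.frame(decimalLongitude = c(178.4, 160.2),
             decimalLatitude  = c(-18.1, -9.4)),
  coords = c("decimalLongitude", "decimalLatitude"), crs = 4326
)

# Country polygons; switching s2 off forces the planar intersection
# that produces the "st_intersects assumes that they are planar"
# messages seen throughout this log.
countries <- rnaturalearth::ne_countries(returnclass = "sf")
sf_use_s2(FALSE)
hits <- st_intersects(pts, countries)

# First matching polygon per point (NA when a point hits no country).
matched <- countries$name[vapply(hits, function(i) i[1], integer(1))]
```

Points that match no polygon would then be compared against neighbouring-country buffers, which is how the log distinguishes "exact" from "neighbouring" matches.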
 - Using default country names and codes from https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2 - static version from July 2022.
Spherical geometry (s2) switched off
 - Extracting country data from points...
 - Prepare the neighbouring country dataset...
although coordinates are longitude/latitude, st_intersects assumes that they are planar
 - Compare points with the checklist...
 - Combining data...
 - Finished. We have matched 74 records to their exact country and 4 to an adjacent country.
We failed to match 1 occurrences to any 'exact' or 'neighbouring' country.
There are 0 'NA' occurrences for this column.
countryOutlieRs: Flagged 5 for country outlier and flagged 0 for in the .sea records.
Three columns were added to the database:
 1. The '.countryOutlier' column was added which is a filtering column.
 2. The 'countryMatch' column indicates exact, neighbour, or noMatch.
 3. The '.sea' column was added as a filtering column for points in the ocean.
The '.sea' column includes the user input buffer in its calculation.
 - Completed in 4.27 secs
Loading required namespace: emld
 - Checking for existing out_file directory...
 - Existing out_file directory found. Data will be saved here.
 - We have removed empty columns. This is standard, but as an FYI, these columns are:
 - Writing occurrence, attribute, and EML data file in .rds format...
Number of records: 5
Number of attribute sources: 1
The 0 eml sources are
Writing to file called BeeData_2023-09-09.rds at location D:\temp\Rtmp08FWP2/out_file...
 - dataSaver. Fin.
 - Checking for existing out_file directory...
 - Existing out_file directory found. Data will be saved here.
 - We have removed empty columns. This is standard, but as an FYI, these columns are:
 - Writing occurrence data file in csv format...
Number of rows (records): 5
Writing to file called BeeData_combined_2023-09-09.csv at location D:\temp\Rtmp08FWP2/out_file...
 - Writing attribute data file in csv format...
Number of rows (sources): 1
Written to file called BeeData_attributes_2023-09-09.csv
at location D:\temp\Rtmp08FWP2/out_file
 - dataSaver. Fin.
Rows: 5 Columns: 15
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (10): catalog_number, pollinator_family, pollinator_genus, pollinator_sp...
dbl  (5): collector_number, day_collected, year_collected, latitude, longitude
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
Rows: 1 Columns: 11
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (11): dataSource, alternateIdentifier, title, pubDate, dateStamp, doi, d...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
 - Preparing data...
 - Extracting dates from year, month, day columns...
 - Extracting dates from fieldNotes, locationRemarks, and verbatimEventDate columns in unambiguous ymd, dmy, mdy, and my formats...
 - Extracting year from fieldNotes, locationRemarks, and verbatimEventDate columns in ambiguous formats...
 - Formatting and combining the new data...
 - Merging all data, nearly there...
 - Finished. We rescued: 8 occurrences with missing eventDate.
 - As it stands, there are 9 complete eventDates and 1 missing dates.
 - There are also 9 complete year occurrences to filter from. This is up from an initial count of 2.
At this rate, you will stand to lose 1 occurrences on the basis of missing year.
 - Operation time: 0.375651836395264 secs
Removing rounded coordinates with BeeBDC::jbd_coordinates_precision...
jbd_coordinates_precision: Removed 5 records.
 - Starting the latitude sequence...
 - Starting the longitude sequence...
 - Merging results and adding the .sequential column...
diagonAlley: Flagged 17 records
The .sequential column was added to the database.
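The date-rescue step above pulls dates out of free-text columns in fixed orders (unambiguous ymd, dmy, mdy, then month-year). The same multi-format parse can be sketched with lubridate; the example strings are invented, and this is not BeeBDC's internal code:

```r
library(lubridate)

# Free-text date strings of the kinds the log mentions:
# unambiguous ymd/dmy/mdy plus bare month-year ("my") formats.
notes <- c("1928-06-14", "14/06/1928", "June 1928", "no date recorded")

# Try each order in turn; unparseable strings come back as NA.
parsed <- parse_date_time(notes, orders = c("ymd", "dmy", "mdy", "my"),
                          quiet = TRUE)
```

Ordering dmy before mdy means ambiguous slash dates are read day-first; a pipeline targeting US data would reverse those two orders.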
 - Completed in 0.42 secs
Loading required namespace: forcats
Loading required namespace: cowplot
Loading required namespace: igraph
 - Generating a basic completeness summary from the decimalLatitude, decimalLongitude, scientificName, eventDate columns.
This summary is simply the sum of complete.cases in each column. It ranges from zero to the N of columns.
This will be used to sort duplicate rows and select the most-complete rows.
 - Updating the .summary column to sort by...
 - We will NOT flag the following columns. However, they will remain in the data file.
.gridSummary, .lonFlag, .latFlag, .uncer_terms, .uncertaintyThreshold, .unLicensed
 - summaryFun: Flagged 0
The .summary column was added to the database.
 - Working on CustomComparisonsRAW duplicates...
Completed iteration 1 of 1:
 - Identified 2 duplicate records and kept 1 unique records using the column(s): catalogNumber, institutionCode, scientificName
 - Working on CustomComparisons duplicates...
Completed iteration 1 of 4:
 - Identified 0 duplicate records and kept 0 unique records using the column(s): gbifID, scientificName
Completed iteration 2 of 4:
 - Identified 1 duplicate records and kept 1 unique records using the column(s): occurrenceID, scientificName
Completed iteration 3 of 4:
 - Identified 0 duplicate records and kept 0 unique records using the column(s): recordId, scientificName
Completed iteration 4 of 4:
 - Identified 0 duplicate records and kept 0 unique records using the column(s): id, scientificName
 - Working on collectionInfo duplicates...
Completed iteration 1 of 2:
 - Identified 0 duplicate records and kept 0 unique records using the columns: decimalLatitude, decimalLongitude, scientificName, eventDate, recordedBy, and catalogNumber
Completed iteration 2 of 2:
 - Identified 0 duplicate records and kept 0 unique records using the columns: decimalLatitude, decimalLongitude, scientificName, eventDate, recordedBy, and otherCatalogNumbers
 - Clustering duplicate pairs...
Duplicate pairs clustered.
There are 3 duplicates across 2 kept duplicates.
 - Ordering prefixes...
 - Ordering data by 1. dataSource, 2. completeness and 3. .summary column...
 - Find the FIRST duplicate to keep and assign other associated duplicates to that one (i.e., across multiple tests a 'kept duplicate' could otherwise be removed)...
 - Duplicates have been saved in the file and location: D:\temp\Rtmp08FWP2duplicateRun_collectionInfo_2023-09-09.csv
 - Across the entire dataset, there are now 3 duplicates from a total of 12 occurrences.
 - Completed in 0.72 secs
 - No dates in file name(s). Finding most-recent from file save time...
 - Found the following file(s):
D:\temp\Rtmp08FWP2/testData.csv
\.occurrenceAbsent: Flagged 18 absent records:
One column was added to the database.
No dataSource provided. Filling this column with NAs...
No license provided. Filling this column with NAs...
\.unLicensed: Flagged 11 records that may NOT be used.
One column was added to the database.
 - We will flag all columns starting with '.'
 - summaryFun: Flagged 77
The .summary column was added to the database.
The percentages of species impacted by each flag in your analysis are as follows:
.coordinates_empty = 23.46%
.coordinates_outOfRange = 0%
.basisOfRecords_notStandard = 1.23%
.coordinates_country_inconsistent = 1.23%
.occurrenceAbsent = 8.64%
.unLicensed = 0%
.GBIFflags = 0%
.uncer_terms = 0%
.rou = 29.63%
.val = 0%
.equ = 0%
.zer = 0%
.cap = 0%
.cen = 0%
.gbf = 0%
.inst = 0%
.sequential = 0%
.lonFlag = 0%
.latFlag = 2.47%
.gridSummary = 0%
.uncertaintyThreshold = 12.35%
.countryOutlier = 0%
.sea = 1.23%
.eventDate_empty = 13.58%
.year_outOfRange = 13.58%
.duplicates = 56.79%
 - Great, R has detected some files. These files include:
D:\temp\Rtmp08FWP2/USGS_attribute_files2023-01-27.csv
D:\temp\Rtmp08FWP2/USGS_formatted_2023-01-27.csv
 - .csv export version found. Loading this file...
 - Merging occurrence and attribute files. Depending on file size, this could take some time...
 - Fin.
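The duplicate passes logged above compare records over successive column sets (e.g. catalogNumber + institutionCode + scientificName). Stripped of BeeBDC's clustering and bookkeeping, one such pass reduces to a grouped-duplicate flag, sketched here with dplyr on toy data rather than the package's own code:

```r
library(dplyr)

# Toy occurrence records; the first two share all three key columns.
occs <- tibble::tibble(
  catalogNumber   = c("A1", "A1", "B2"),
  institutionCode = c("X",  "X",  "Y"),
  scientificName  = c("Lasioglossum fijiense", "Lasioglossum fijiense",
                      "Homalictus urbanus")
)

# Flag every record after the first within each key combination.
flagged <- occs %>%
  group_by(catalogNumber, institutionCode, scientificName) %>%
  mutate(.duplicates = row_number() > 1) %>%
  ungroup()
```

Sorting by completeness before this step (as the log describes) ensures that `row_number() == 1` lands on the most complete record in each group, which is the one kept.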
 - Formatting taxonomy for matching...
The names_clean column was not found and will be temporarily copied from scientificName.
 - Harmonise the occurrence data with unambiguous names...
 - Attempting to harmonise the occurrence data with ambiguous names...
 - Formatting merged datasets...
Removing the names_clean column...
 - We matched valid names to 96 of 100 occurrence records.
This leaves a total of 4 unmatched occurrence records.
harmoniseR: 4 records were flagged.
The column '.invalidName' was added to the database.
 - We updated the following columns: scientificName, species, family, subfamily, genus, subgenus, specificEpithet, infraspecificEpithet, and scientificNameAuthorship.
The previous scientificName column was converted to verbatimScientificName.
 - Completed in 4.48 secs
Warning message:
 - No completeness_cols provided. Using default of:
c('decimalLatitude', 'decimalLongitude', 'scientificName', and 'eventDate')
 - Generating a basic completeness summary from the decimalLatitude, decimalLongitude, scientificName, eventDate columns.
This summary is simply the sum of complete.cases in each column. It ranges from zero to the N of columns.
This will be used to sort duplicate rows and select the most-complete rows.
 - Starting core loop...
 - We matched 26 records using gbifID. This leaves 24 unmatched data in the priorData file.
 - We matched 20 records using catalogNumber, institutionCode, dataSource. This leaves 4 unmatched data in the priorData file.
 - We matched 4 records using occurrenceID, dataSource. This leaves 0 unmatched data in the priorData file.
 - We matched 0 records using recordId, dataSource. This leaves 0 unmatched data in the priorData file.
 - We matched 0 records using id. This leaves 0 unmatched data in the priorData file.
 - We matched 0 records using catalogNumber, institutionCode. This leaves 0 unmatched data in the priorData file.
 - Combining ids and assigning new ones where needed...
 - We matched a total of 50 database_id numbers.
We then assigned new database_id numbers to 49 unmatched occurrences.
Warning message:
 - No completeness_cols provided. Using default of:
c('decimalLatitude', 'decimalLongitude', 'scientificName', and 'eventDate')
 - Generating a basic completeness summary from the decimalLatitude, decimalLongitude, scientificName, eventDate columns.
This summary is simply the sum of complete.cases in each column. It ranges from zero to the N of columns.
This will be used to sort duplicate rows and select the most-complete rows.
 - Starting core loop...
 - We matched 26 records using gbifID. This leaves 24 unmatched data in the priorData file.
 - We matched 20 records using catalogNumber, institutionCode, dataSource. This leaves 4 unmatched data in the priorData file.
 - We matched 4 records using occurrenceID, dataSource. This leaves 0 unmatched data in the priorData file.
 - We matched 0 records using recordId, dataSource. This leaves 0 unmatched data in the priorData file.
 - We matched 0 records using id. This leaves 0 unmatched data in the priorData file.
 - We matched 0 records using catalogNumber, institutionCode. This leaves 0 unmatched data in the priorData file.
 - Combining ids and assigning new ones where needed...
 - We matched a total of 50 database_id numbers.
We then assigned new database_id numbers to 50 unmatched occurrences.
Loading required namespace: htmlwidgets
Loading required namespace: leaflet
Loading required package: ggplot2

Attaching package: 'plotly'

The following object is masked from 'package:ggplot2':

    last_plot

The following object is masked from 'package:stats':

    filter

The following object is masked from 'package:graphics':

    layout

The column .expertOutlier was not found. One will be created with all values = TRUE.
 - Running chunker with:
stepSize = 50
chunkStart = 1
chunkEnd = 50
 - Starting parallel operation. Unlike the serial operation (mc.cores = 1), a parallel operation will not provide running feedback.
Please be patient as this function may take some time to complete.
Each chunk will be run on a separate thread, so also be aware of RAM usage.
Loading required package: rnaturalearth
Support for Spatial objects (`sp`) will be deprecated in {rnaturalearth} and will be removed in a future release of the package.
Please use `sf` objects with {rnaturalearth}. For example: `ne_download(returnclass = 'sf')`
 - Completed in 1.94 secs
 - We have updated the country names of 0 occurrences that previously had no country name assigned.
 - Running chunker with:
stepSize = 50
chunkStart = 1
chunkEnd = 50
 - Starting parallel operation. Unlike the serial operation (mc.cores = 1), a parallel operation will not provide running feedback.
Please be patient as this function may take some time to complete.
Each chunk will be run on a separate thread, so also be aware of RAM usage.
 - Completed in 1.47 secs
 - We have updated the country names of 0 occurrences that previously had no country name assigned.
 - Running chunker with:
stepSize = 100
chunkStart = 1
chunkEnd = 100
append = FALSE
 - Starting chunk 1...
From 1 to 100
Loading required package: readr

Attaching package: 'readr'

The following objects are masked from 'package:testthat':

    edition_get, local_edition

Spherical geometry (s2) switched on
Correcting latitude and longitude transposed
8 occurrences will be tested
jbd_coordinates_transposed: Corrected 3 records.
One column was added to the database.
 - Finished chunk 1 with 1 remaining.
Records examined: 91
 - Completed in 8 secs
Loading required package: cowplot
Loading required package: ggspatial
Check figures in D:\temp\Rtmp08FWP2
Check figures in D:\temp\Rtmp08FWP2
Check figures in D:\temp\Rtmp08FWP2
 - Preparing data to plot...
 - Building plot...
Loading required namespace: openxlsx
A .csv data type was chosen...
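The jbd_coordinates_transposed step above corrects swapped latitude/longitude pairs. A much-simplified version of that test can be written in base R; note that BeeBDC's real check is more sophisticated (it also consults country boundaries), so treat this only as an illustration of the core heuristic:

```r
# If |lat| > 90 but |lon| <= 90, the two values were very likely
# entered in the wrong columns; swap them back.
fix_transposed <- function(lat, lon) {
  swap <- abs(lat) > 90 & abs(lon) <= 90
  list(latitude  = ifelse(swap, lon, lat),
       longitude = ifelse(swap, lat, lon))
}

# The first pair is transposed and gets swapped; the second is left alone.
fix_transposed(lat = c(145.7, -18.1), lon = c(-18.1, 178.4))
```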
Rows: 5 Columns: 15
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (10): catalog_number, pollinator_family, pollinator_genus, pollinator_sp...
dbl  (5): collector_number, day_collected, year_collected, latitude, longitude
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
Rows: 4 Columns: 21
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (15): family, subfamily, Tribe, genus, subgenus, Morphospecies, specific...
dbl  (5): catalogNumber, decimalLatitude, decimalLongitude, coordinateUncert...
lgl  (1): associatedTaxa
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
Rows: 4 Columns: 24
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (20): catalogNumber, Phylum, higherClassification, Order, family, genus,...
dbl  (2): decimalLatitude, decimalLongitude
lgl  (2): dateIdentified, county
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
Rows: 6 Columns: 41
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr  (30): occurence_lsid, language, basisOfRecord, catalogNumber, scientifi...
dbl   (4): organismQuantity, decimalLatitude, decimalLongitude, dateIdentified
lgl   (6): municipality, eventTime, fieldNumber, typeStatus, infraspecificEp...
dttm  (1): modified
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
Rows: 4 Columns: 91
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (27): institutionCode, collectionCode, basisOfRecord, occurrenceID, high...
dbl (12): id, taxonID, year, month, day, startDayOfYear, cultivationStatus, ...
lgl (52): ownerInstitutionCode, catalogNumber, otherCatalogNumbers, subgenus...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
Rows: 4 Columns: 48
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr  (34): institutionCode, Collection Code, collection.var, catalogNumber, ...
dbl  (10): Other Catalog Number, SampleRound, TempStart, TempEnd, WindStart,...
lgl   (2): sex, subspecies
time  (2): eventTime, EndTime
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .xlsx data type was chosen...
A .csv data type was chosen...
A .csv data type was chosen...
Rows: 6 Columns: 6
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (2): Collection, Species
dbl (4): ID_project, Longitude, Latitude, Year
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
 - We have read in 6 occurrence records from the 'GEOLOCATE HIGH' sheet.
 - We have read in 5 occurrence records from the 'BELS High' sheet.
 - We have kept 6 occurrences from GeoLocate, and 5 records from BELS (11 in total).
BELS was given preference over GeoLocate.
A .csv data type was chosen...
A .xlsx data type was chosen...
A .xlsx data type was chosen...
A .csv data type was chosen...
Rows: 4 Columns: 91
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (32): institutionCode, collectionCode, basisOfRecord, catalogNumber, hig...
dbl (10): id, taxonID, year, month, day, startDayOfYear, localitySecurity, d...
lgl (49): ownerInstitutionCode, occurrenceID, otherCatalogNumbers, subgenus,...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
Rows: 5 Columns: 18
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (13): organismName, county, stateProvince, locale, observationDate, coll...
dbl  (3): individualCount, latitude, longitude
lgl  (2): determiner, Notes
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .xlsx data type was chosen...
A .csv data type was chosen...
Rows: 6 Columns: 25
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (17): eventID, occurrenceID, basisOfRecord, eventDate, Kingdom, Order, F...
dbl  (5): Morphospecies, individualCount, sampleSizeValue, decimalLatitude, ...
lgl  (3): Species, adult, sex
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .xlsx data type was chosen...
A .csv data type was chosen...
Rows: 6 Columns: 30
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (23): basisOfRecord, recordNumber, locationID, family, subfamily, genus...
dbl  (5): individualCount, decimalLatitude, decimalLongitude, elevationInMe...
lgl  (1): catalogNumber
time (1): eventTime
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .xlsx data type was chosen...
A .csv data type was chosen...
Rows: 5 Columns: 36
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (22): CodeBBdatabase_curated, Scientific name corrected, Native.to.Brazi...
dbl  (8): Day, Month, Year, Latitude_dec.degrees, Longitude_dec.degrees, Spc...
lgl  (6): Date_precision, NotasLatLong, NotesOnLocality, Spcslink.county, Sp...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .xlsx data type was chosen...
A .csv data type was chosen...
Rows: 3 Columns: 92
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (28): institutionCode, collectionCode, collectionID, basisOfRecord, occu...
dbl  (8): Catalognumber, taxonID, year, month, day, startDayOfYear, decimalL...
lgl (56): ownerInstitutionCode, catalogNumber, otherCatalogNumbers, subgenus...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .csv data type was chosen...
Rows: 3 Columns: 19
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (15): Country, Muninciplaity, Gender, Site, Latitud, Longitude, elevatio...
lgl  (4): Type, othercatalognumber, AssociatedTaxa, Citation
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
A .xlsx data type was chosen...
 - We will flag all columns starting with '.'
 - summaryFun: Flagged 30
The .summary column was added to the database.
Spherical geometry (s2) switched off
 - Extracting country data from points...
although coordinates are longitude/latitude, st_intersects assumes that they are planar
although coordinates are longitude/latitude, st_intersects assumes that they are planar
Extraction complete.
 - Buffering naturalearth map by pointBuffer...
dist is assumed to be in decimal degrees (arc_degrees).
although coordinates are longitude/latitude, st_intersects assumes that they are planar
although coordinates are longitude/latitude, st_intersects assumes that they are planar
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 222 ]
>
> proc.time()
   user  system elapsed 
  71.12    9.81   78.50 
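For reference, the whole transcript above is produced by the package's standard `tests/testthat.R` runner, which, per the comments echoed at the top of the log, is essentially just:

```r
# Standard testthat entry point, executed by R CMD check.
# Do not put test logic here; tests live in tests/testthat/.
requireNamespace("testthat")
requireNamespace("BeeBDC")

testthat::test_check("BeeBDC")
```

Keeping this file minimal is the documented convention (r-pkgs.org/tests.html); additional configuration belongs in the special setup files under tests/testthat/.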