R Under development (unstable) (2025-03-05 r87885 ucrt) -- "Unsuffered Consequences"
Copyright (C) 2025 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
> # * https://testthat.r-lib.org/articles/special-files.html
> 
> library(testthat)
> library(samesies)
> 
> test_check("samesies")
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits1_fruits2" [mean: 1]
v Computed exact scores for "fruits1_fruits3" [mean: 0.333]
v Computed exact scores for "fruits2_fruits3" [mean: 0.333]
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits1_fruits2" [mean: 1]

-- Factor Similarity Analysis --------------------------------------------------
Methods used: "exact"
Lists compared: "fruits1, fruits2"
Levels used: "apple, orange, banana"

-- Overall Method Averages --
* exact: "1"

-- Method: exact --

-- Comparison: "fruits1_fruits2"
* Mean: 1
* Median: 1
* SD: 0
* Exact Matches: 3 of 3
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits1_fruits2" [mean: 1]
v Computed exact scores for "fruits1_fruits2" [mean: 1]
v Computed order scores for "fruits1_fruits2" [mean: 1]
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits" in "nested_cats1"-"nested_cats2" [mean: 0.667]
v Computed exact scores for "colors" in "nested_cats1"-"nested_cats2" [mean: 0.667]
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits1_fruits2" [mean: 1]
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits1_fruits2" [mean: 1]
i Using auto-calculated max_diff: 0.45
v Computed exact scores for "nums1_nums2" [mean: 0.333]
v Computed exact scores for "nums1_nums3" [mean: 0.333]
v Computed exact scores for "nums2_nums3" [mean: 0.333]
v Computed pct_diff scores for "nums1_nums2" [mean: 0.958]
v Computed pct_diff scores for "nums1_nums3" [mean: 0.955]
v Computed pct_diff scores for "nums2_nums3" [mean: 0.915]
v Computed normalized scores for "nums1_nums2" [mean: 0.926]
v Computed normalized scores for "nums1_nums3" [mean: 0.926]
v Computed normalized scores for "nums2_nums3" [mean: 0.852]
v Computed fuzzy scores for "nums1_nums2" [mean: 1]
v Computed fuzzy scores for "nums1_nums3" [mean: 1]
v Computed fuzzy scores for "nums2_nums3" [mean: 0.958]
i Using auto-calculated max_diff: 0.4
v Computed exact scores for "nums1_nums2" [mean: 0.333]
v Computed pct_diff scores for "nums1_nums2" [mean: 0.958]
v Computed normalized scores for "nums1_nums2" [mean: 0.917]
v Computed fuzzy scores for "nums1_nums2" [mean: 1]

-- Numeric Similarity Analysis -------------------------------------------------
Methods used: "exact, pct_diff, normalized, fuzzy"
Lists compared: "nums1, nums2"

-- Overall Method Averages --
* exact: "0.333"
* pct_diff: "0.958"
* normalized: "0.917"
* fuzzy: "1"

-- Method: exact --

-- Comparison: "nums1_nums2"
* Mean: 0.333
* Median: 0
* SD: 0.577
* Range: [0 - 1]
* Exact Matches: 1 of 3

-- Method: pct_diff --

-- Comparison: "nums1_nums2"
* Mean: 0.958
* Median: 0.95
* SD: 0.039
* Range: [0.923 - 1]
* Exact Matches: 1 of 3

-- Method: normalized --

-- Comparison: "nums1_nums2"
* Mean: 0.917
* Median: 0.875
* SD: 0.072
* Range: [0.875 - 1]
* Exact Matches: 1 of 3

-- Method: fuzzy --

-- Comparison: "nums1_nums2"
* Mean: 1
* Median: 1
* SD: 0
* Range: [1 - 1]
* Exact Matches: 2 of 3
i Using auto-calculated max_diff: 0.4
v Computed exact scores for "nums1_nums2" [mean: 0.333]
v Computed pct_diff scores for "nums1_nums2" [mean: 0.958]
v Computed normalized scores for "nums1_nums2" [mean: 0.917]
v Computed fuzzy scores for "nums1_nums2" [mean: 1]
i Using auto-calculated max_diff: 0.4
v Computed normalized scores for "nums1_nums2" [mean: 0.917]
v Computed fuzzy scores for "nums1_nums2" [mean: 1]
v Computed pct_diff scores for "nums1_nums2" [mean: 0.958]
i Using auto-calculated max_diff for "weights": 2.5
i Using auto-calculated max_diff for "heights": 20
v Computed exact scores for "weights_nested_nums1_nested_nums2" [mean: 0]
v Computed pct_diff scores for "weights_nested_nums1_nested_nums2" [mean: 0.975]
v Computed normalized scores for "weights_nested_nums1_nested_nums2" [mean: 0.98]
v Computed fuzzy scores for "weights_nested_nums1_nested_nums2" [mean: 1]
v Computed exact scores for "heights_nested_nums1_nested_nums2" [mean: 0]
v Computed pct_diff scores for "heights_nested_nums1_nested_nums2" [mean: 0.988]
v Computed normalized scores for "heights_nested_nums1_nested_nums2" [mean: 0.9]
v Computed fuzzy scores for "heights_nested_nums1_nested_nums2" [mean: 1]
i Using auto-calculated max_diff: 0.45
v Computed exact scores for "nums1_nums2" [mean: 0.333]
v Computed exact scores for "nums1_nums3" [mean: 0.333]
v Computed exact scores for "nums2_nums3" [mean: 0.333]
v Computed pct_diff scores for "nums1_nums2" [mean: 0.958]
v Computed pct_diff scores for "nums1_nums3" [mean: 0.955]
v Computed pct_diff scores for "nums2_nums3" [mean: 0.915]
v Computed normalized scores for "nums1_nums2" [mean: 0.926]
v Computed normalized scores for "nums1_nums3" [mean: 0.926]
v Computed normalized scores for "nums2_nums3" [mean: 0.852]
v Computed fuzzy scores for "nums1_nums2" [mean: 1]
v Computed fuzzy scores for "nums1_nums3" [mean: 1]
v Computed fuzzy scores for "nums2_nums3" [mean: 0.958]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed jw scores for "fruits1_fruits3" [mean: 0.952]
v Computed jw scores for "fruits2_fruits3" [mean: 0.97]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]
v Computed lv scores for "fruits1_fruits3" [mean: 0.794]
v Computed lv scores for "fruits2_fruits3" [mean: 0.849]
v Computed osa scores for "fruits1_fruits2" [mean: 0.933]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]
v Computed dl scores for "fruits1_fruits2" [mean: 0.933]
v Computed hamming scores for "fruits1_fruits2" [mean: 0.867]
v Computed lcs scores for "fruits1_fruits2" [mean: 0.867]
v Computed qgram scores for "fruits1_fruits2" [mean: 1]
v Computed cosine scores for "fruits1_fruits2" [mean: 1]
v Computed jaccard scores for "fruits1_fruits2" [mean: 1]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed soundex scores for "fruits1_fruits2" [mean: 1]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]

-- Text Similarity Analysis ----------------------------------------------------
Methods used: "jw, lv"
Lists compared: "fruits1, fruits2"

-- Overall Method Averages --
* jw: "0.984"
* lv: "0.867"

-- Method: jw --

-- Comparison: "fruits1_fruits2"
* Mean: 0.984
* Median: 1
* SD: 0.027
* IQR: 0.023
* Range: [0.953 - 1]

-- Method: lv --

-- Comparison: "fruits1_fruits2"
* Mean: 0.867
* Median: 1
* SD: 0.231
* IQR: 0.2
* Range: [0.6 - 1]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]
v Computed jw scores for "nested_fruits1_nested_fruits2" [mean: 0.972]
v Computed jw scores for "nested_fruits1_nested_fruits2" [mean: 0.972]
v Computed jw scores for "nested_fruits1_nested_fruits3" [mean: 0.946]
v Computed jw scores for "nested_fruits2_nested_fruits3" [mean: 0.902]
v Computed lv scores for "nested_fruits1_nested_fruits2" [mean: 0.909]
v Computed lv scores for "nested_fruits1_nested_fruits3" [mean: 0.783]
v Computed lv scores for "nested_fruits2_nested_fruits3" [mean: 0.702]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed jw scores for "fruits1_fruits3" [mean: 0.952]
v Computed jw scores for "fruits2_fruits3" [mean: 0.97]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]
v Computed lv scores for "fruits1_fruits3" [mean: 0.794]
v Computed lv scores for "fruits2_fruits3" [mean: 0.849]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed jw scores for "fruits1_fruits3" [mean: 0.952]
v Computed jw scores for "fruits2_fruits3" [mean: 0.97]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]
v Computed lv scores for "fruits1_fruits3" [mean: 0.794]
v Computed lv scores for "fruits2_fruits3" [mean: 0.849]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed jw scores for "fruits1_fruits3" [mean: 0.952]
v Computed jw scores for "fruits2_fruits3" [mean: 0.97]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]
v Computed lv scores for "fruits1_fruits3" [mean: 0.794]
v Computed lv scores for "fruits2_fruits3" [mean: 0.849]
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits1_fruits2" [mean: 1]
v Computed exact scores for "fruits1_fruits3" [mean: 0.333]
v Computed exact scores for "fruits2_fruits3" [mean: 0.333]
i Using auto-calculated max_diff: 0.45
v Computed exact scores for "nums1_nums2" [mean: 0.333]
v Computed exact scores for "nums1_nums3" [mean: 0.333]
v Computed exact scores for "nums2_nums3" [mean: 0.333]
v Computed pct_diff scores for "nums1_nums2" [mean: 0.958]
v Computed pct_diff scores for "nums1_nums3" [mean: 0.955]
v Computed pct_diff scores for "nums2_nums3" [mean: 0.915]
v Computed normalized scores for "nums1_nums2" [mean: 0.926]
v Computed normalized scores for "nums1_nums3" [mean: 0.926]
v Computed normalized scores for "nums2_nums3" [mean: 0.852]
v Computed fuzzy scores for "nums1_nums2" [mean: 1]
v Computed fuzzy scores for "nums1_nums3" [mean: 1]
v Computed fuzzy scores for "nums2_nums3" [mean: 0.958]
v Computed jw scores for "fruits1_fruits2" [mean: 0.984]
v Computed jw scores for "fruits1_fruits3" [mean: 0.952]
v Computed jw scores for "fruits2_fruits3" [mean: 0.97]
v Computed lv scores for "fruits1_fruits2" [mean: 0.867]
v Computed lv scores for "fruits1_fruits3" [mean: 0.794]
v Computed lv scores for "fruits2_fruits3" [mean: 0.849]
i Skipping 'order' method because levels are not explicitly ordered. Set ordered = TRUE to compute the order method.
v Computed exact scores for "fruits1_fruits2" [mean: 1]
v Computed exact scores for "fruits1_fruits3" [mean: 0.333]
v Computed exact scores for "fruits2_fruits3" [mean: 0.333]
i Using auto-calculated max_diff: 0.45
v Computed exact scores for "nums1_nums2" [mean: 0.333]
v Computed exact scores for "nums1_nums3" [mean: 0.333]
v Computed exact scores for "nums2_nums3" [mean: 0.333]
v Computed pct_diff scores for "nums1_nums2" [mean: 0.958]
v Computed pct_diff scores for "nums1_nums3" [mean: 0.955]
v Computed pct_diff scores for "nums2_nums3" [mean: 0.915]
v Computed normalized scores for "nums1_nums2" [mean: 0.926]
v Computed normalized scores for "nums1_nums3" [mean: 0.926]
v Computed normalized scores for "nums2_nums3" [mean: 0.852]
v Computed fuzzy scores for "nums1_nums2" [mean: 1]
v Computed fuzzy scores for "nums1_nums3" [mean: 1]
v Computed fuzzy scores for "nums2_nums3" [mean: 0.958]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 83 ]
> 
> proc.time()
   user  system elapsed 
   3.28    0.18    3.45 
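
The "jw" and "lv" methods reported in the log above correspond to Jaro-Winkler and Levenshtein string similarities (the same method codes used by the stringdist package). A minimal sketch of how scores on that 0-1 scale can be computed, using hypothetical stand-in vectors rather than the package's actual test fixtures:

library(stringdist)

# Hypothetical example data, not the fixtures used in the tests above
fruits1 <- c("apple", "banana", "cherry")
fruits2 <- c("apple", "banana", "charry")

jw <- stringsim(fruits1, fruits2, method = "jw")  # element-wise Jaro-Winkler similarity in [0, 1]
lv <- stringsim(fruits1, fruits2, method = "lv")  # element-wise Levenshtein similarity in [0, 1]

mean(jw)  # analogous to the per-comparison mean reported for a jw comparison
mean(lv)  # analogous to the per-comparison mean reported for an lv comparison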