* using log directory ‘/home/hornik/tmp/CRAN_recheck/deepregression.Rcheck’
* using R Under development (unstable) (2025-08-31 r88749)
* using platform: x86_64-pc-linux-gnu
* R was compiled by
    gcc-15 (Debian 15.2.0-1) 15.2.0
    GNU Fortran (Debian 15.2.0-1) 15.2.0
* running under: Debian GNU/Linux forky/sid
* using session charset: UTF-8
* checking for file ‘deepregression/DESCRIPTION’ ... OK
* this is package ‘deepregression’ version ‘2.3.1’
* package encoding: UTF-8
* checking CRAN incoming feasibility ... [5s/5s] OK
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘deepregression’ can be installed ... [10s/11s] OK
* checking package directory ... OK
* checking for future file timestamps ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... [2s/2s] OK
* checking whether the package can be loaded with stated dependencies ... [2s/2s] OK
* checking whether the package can be unloaded cleanly ... [2s/2s] OK
* checking whether the namespace can be loaded with stated dependencies ... [2s/2s] OK
* checking whether the namespace can be unloaded cleanly ... [2s/2s] OK
* checking loading without being on the library search path ... [2s/2s] OK
* checking whether startup messages can be suppressed ... [2s/2s] OK
* checking use of S3 registration ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... [17s/17s] OK
* checking Rd files ... [0s/0s] OK
* checking Rd metadata ... OK
* checking Rd line widths ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking examples ... [3s/3s] OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ... [226s/225s] ERROR
  Running ‘testthat.R’ [225s/225s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(deepregression)
Loading required package: tensorflow
Loading required package: tfprobability
Loading required package: keras
The keras package is deprecated. Use the keras3 package instead.
> 
> if (reticulate::py_module_available("tensorflow") &
+     reticulate::py_module_available("keras") &
+     .Platform$OS.type != "windows"){
+   test_check("deepregression")
+ }
Downloading cpython-3.11.13-linux-x86_64-gnu (download) (30.1MiB)
Downloading cpython-3.11.13-linux-x86_64-gnu (download)
Downloading setuptools (1.1MiB)
Downloading ml-dtypes (4.7MiB)
Downloading tensorflow-io-gcs-filesystem (4.9MiB)
Downloading tf-keras (1.6MiB)
Downloading tensorboard-data-server (6.3MiB)
Downloading h5py (4.3MiB)
Downloading grpcio (5.9MiB)
Downloading tensorflow-probability (6.7MiB)
Downloading numpy (17.4MiB)
Downloading pygments (1.2MiB)
Downloading tensorboard (5.2MiB)
Downloading libclang (23.4MiB)
Downloading keras (1.3MiB)
Downloading tensorflow (615.0MiB)
Downloading pygments
Downloading setuptools
Downloading tensorflow-io-gcs-filesystem
Downloading ml-dtypes
Downloading h5py
Downloading keras
Downloading tensorboard-data-server
Downloading grpcio
Downloading tf-keras
Downloading tensorboard
Downloading numpy
Downloading libclang
Downloading tensorflow-probability
Downloading tensorflow
Installed 44 packages in 394ms
2025-09-01 09:42:42.990510: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-09-01 09:42:42.994570: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-09-01 09:42:43.004509: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1756712563.019961 3818016 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1756712563.024614 3818016 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1756712563.037493 3818016 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1756712563.037511 3818016 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1756712563.037513 3818016 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1756712563.037516 3818016 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2025-09-01 09:42:43.041600: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
2025-09-01 09:42:54.650522: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Epoch 1/2
15/15 [==============================] - 1s 29ms/step - loss: 2.2009 - val_loss: 2.1550
Epoch 2/2
15/15 [==============================] - 0s 4ms/step - loss: 2.0569 - val_loss: 2.0238
Epoch 1/2
15/15 [==============================] - 1s 14ms/step - loss: 2.6533 - val_loss: 2.6393
Epoch 2/2
15/15 [==============================] - 0s 3ms/step - loss: 2.6365 - val_loss: 2.6227
Epoch 1/2
15/15 [==============================] - 1s 16ms/step - loss: 2.8998 - val_loss: 1.6201
Epoch 2/2
15/15 [==============================] - 0s 4ms/step - loss: 2.6616 - val_loss: 1.5446
Epoch 1/2
15/15 [==============================] - 1s 13ms/step - loss: 4.0062 - val_loss: 3.4240
Epoch 2/2
15/15 [==============================] - 0s 3ms/step - loss: 3.9103 - val_loss: 3.3606
Epoch 1/3
3/3 [==============================] - 1s 73ms/step - loss: 8.7055 - val_loss: 7.2396
Epoch 2/3
3/3 [==============================] - 0s 11ms/step - loss: 8.6326 - val_loss: 7.1833
Epoch 3/3
3/3 [==============================] - 0s 11ms/step - loss: 8.5566 - val_loss: 7.1282
Epoch 1/2
15/15 [==============================] - 1s 17ms/step - loss: 2.6282 - val_loss: 2.6186
Epoch 2/2
15/15 [==============================] - 0s 3ms/step - loss: 2.6124 - val_loss: 2.6029
Epoch 1/2
15/15 [==============================] - 1s 14ms/step - loss: 1248.0042 - val_loss: 1370.1056
Epoch 2/2
15/15 [==============================] - 0s 3ms/step - loss: 960.7450 - val_loss: 1055.7156
Epoch 1/2
15/15 [==============================] - 1s 29ms/step - loss: 3026.7996 - val_loss: 321.5523
Epoch 2/2
15/15 [==============================] - 0s 3ms/step - loss: 2679.4141 - val_loss: 292.7625
Epoch 1/2
15/15 [==============================] - 1s 14ms/step - loss: 2.2516 - val_loss: 2.2750
Epoch 2/2
15/15 [==============================] - 0s 3ms/step - loss: 2.2132 - val_loss: 2.2328
Fitting member 1 ...Epoch 1/10
32/32 [==============================] - 1s 1ms/step - loss: 2.3303
Epoch 2/10
32/32 [==============================] - 0s 1ms/step - loss: 2.2977
Epoch 3/10
32/32 [==============================] - 0s 1ms/step - loss: 2.2650
Epoch 4/10
32/32 [==============================] - 0s 1ms/step - loss: 2.2325
Epoch 5/10
32/32 [==============================] - 0s 1ms/step - loss: 2.2002
Epoch 6/10
32/32 [==============================] - 0s 2ms/step - loss: 2.1681
Epoch 7/10
32/32 [==============================] - 0s 2ms/step - loss: 2.1359
Epoch 8/10
32/32 [==============================] - 0s 2ms/step - loss: 2.1036
Epoch 9/10
32/32 [==============================] - 0s 2ms/step - loss: 2.0717
Epoch 10/10
32/32 [==============================] - 0s 2ms/step - loss: 2.0400
Done in 1.084552 secs
Fitting member 2 ...Epoch 1/10
32/32 [==============================] - 0s 1ms/step - loss: 2.3312
Epoch 2/10
32/32 [==============================] - 0s 1ms/step - loss: 2.2785
Epoch 3/10
32/32 [==============================] - 0s 1ms/step - loss: 2.2334
Epoch 4/10
32/32 [==============================] - 0s 2ms/step - loss: 2.1937
Epoch 5/10
32/32 [==============================] - 0s 2ms/step - loss: 2.1597
Epoch 6/10
32/32 [==============================] - 0s 2ms/step - loss: 2.1276
Epoch 7/10
32/32 [==============================] - 0s 2ms/step - loss: 2.0961
Epoch 8/10
32/32 [==============================] - 0s 2ms/step - loss: 2.0644
Epoch 9/10
32/32 [==============================] - 0s 2ms/step - loss: 2.0336
Epoch 10/10
32/32 [==============================] - 0s 2ms/step - loss: 2.0026
Done in 0.6232131 secs
Fitting member 3 ...Epoch 1/10
32/32 [==============================] - 0s 1ms/step - loss: 39.2828
Epoch 2/10
32/32 [==============================] - 0s 1ms/step - loss: 27.2884
Epoch 3/10
32/32 [==============================] - 0s 2ms/step - loss: 21.7021
Epoch 4/10
32/32 [==============================] - 0s 2ms/step - loss: 18.2036
Epoch 5/10
32/32 [==============================] - 0s 2ms/step - loss: 15.8282
Epoch 6/10
32/32 [==============================] - 0s 2ms/step - loss: 14.0666
Epoch 7/10
32/32 [==============================] - 0s 2ms/step - loss: 12.6950
Epoch 8/10
32/32 [==============================] - 0s 2ms/step - loss: 11.5684
Epoch 9/10
32/32 [==============================] - 0s 2ms/step - loss: 10.6440
Epoch 10/10
32/32 [==============================] - 0s 2ms/step - loss: 9.8495
Done in 0.6269357 secs
Fitting member 4 ...Epoch 1/10
32/32 [==============================] - 0s 1ms/step - loss: 2.9588
Epoch 2/10
32/32 [==============================] - 0s 2ms/step - loss: 2.9011
Epoch 3/10
32/32 [==============================] - 0s 2ms/step - loss: 2.8534
Epoch 4/10
32/32 [==============================] - 0s 2ms/step - loss: 2.8062
Epoch 5/10
32/32 [==============================] - 0s 2ms/step - loss: 2.7623
Epoch 6/10
32/32 [==============================] - 0s 2ms/step - loss: 2.7193
Epoch 7/10
32/32 [==============================] - 0s 2ms/step - loss: 2.6774
Epoch 8/10
32/32 [==============================] - 0s 2ms/step - loss: 2.6383
Epoch 9/10
32/32 [==============================] - 0s 2ms/step - loss: 2.6007
Epoch 10/10
32/32 [==============================] - 0s 8ms/step - loss: 2.5630
Done in 0.8417377 secs
Fitting member 5 ...Epoch 1/10
32/32 [==============================] - 0s 1ms/step - loss: 139.0890
Epoch 2/10
32/32 [==============================] - 0s 2ms/step - loss: 95.6168
Epoch 3/10
32/32 [==============================] - 0s 2ms/step - loss: 74.6476
Epoch 4/10
32/32 [==============================] - 0s 2ms/step - loss: 61.8040
Epoch 5/10
32/32 [==============================] - 0s 2ms/step - loss: 53.1899
Epoch 6/10
32/32 [==============================] - 0s 2ms/step - loss: 46.9254
Epoch 7/10
32/32 [==============================] - 0s 2ms/step - loss: 42.0743
Epoch 8/10
32/32 [==============================] - 0s 2ms/step - loss: 38.1152
Epoch 9/10
32/32 [==============================] - 0s 2ms/step - loss: 34.8808
Epoch 10/10
32/32 [==============================] - 0s 2ms/step - loss: 32.1088
Done in 0.7113569 secs
Epoch 1/2
3/3 [==============================] - 1s 79ms/step - loss: 2.3038 - val_loss: 2.2154
Epoch 2/2
3/3 [==============================] - 0s 11ms/step - loss: 2.3004 - val_loss: 2.2128
Epoch 1/2
3/3 [==============================] - 0s 27ms/step - loss: 47.0024 - val_loss: 27.1291
Epoch 2/2
3/3 [==============================] - 0s 11ms/step - loss: 46.6172 - val_loss: 26.8568
Fitting normal
Fitting bernoulli
Fitting bernoulli_prob
WARNING:tensorflow:5 out of the last 13 calls to .test_function at 0x7f254c3662a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting beta
WARNING:tensorflow:5 out of the last 11 calls to .test_function at 0x7f254c1e8d60> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Fitting betar
Fitting chi2
Fitting chi
Fitting exponential
Fitting gamma
Fitting gammar
Fitting gumbel
Fitting half_normal
Fitting horseshoe
Fitting inverse_gaussian
Fitting laplace
Fitting log_normal
Fitting logistic
Fitting negbinom
Fitting negbinom
Fitting pareto_ls
Fitting poisson
Fitting poisson_lograte
Epoch 1/2
29/29 [==============================] - 1s 7ms/step - loss: 10.6607 - val_loss: 7.6350
Epoch 2/2
29/29 [==============================] - 0s 2ms/step - loss: 9.5296 - val_loss: 6.8647
Fitting model with 1 orthogonalization(s) ...
Fitting model with 2 orthogonalization(s) ...
Fitting model with 3 orthogonalization(s) ...
Fitting model with 4 orthogonalization(s) ...
Fitting model with 5 orthogonalization(s) ...
Fitting Fold 1 ... Done in 0.9411457 secs
Fitting Fold 2 ... Done in 0.2086694 secs
Epoch 1/2
2/2 [==============================] - 0s 9ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 8ms/step - loss: 20.5671
Fitting Fold 1 ... Done in 0.9469712 secs
Fitting Fold 2 ... Done in 0.2137036 secs
Epoch 1/2
2/2 [==============================] - 0s 9ms/step - loss: 22.4672
Epoch 2/2
2/2 [==============================] - 0s 11ms/step - loss: 20.5671
[ FAIL 18 | WARN 0 | SKIP 0 | PASS 640 ]

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_customtraining_torch.R:6:3'): Use multiple optimizers torch ────
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(50, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─torch::nn_linear(1, 50) at test_customtraining_torch.R:6:3
 2. └─Module$new(...)
 3. └─torch (local) initialize(...)
 4. ├─torch::nn_parameter(torch_empty(out_features, in_features))
 5. │ └─torch:::is_torch_tensor(x)
 6. └─torch::torch_empty(out_features, in_features)
 7. ├─base::do.call(.torch_empty, args)
 8. └─torch (local) ``(options = ``, size = ``)
 9. └─torch:::call_c_function(...)
10. └─torch:::do_call(f, args)
11. ├─base::do.call(fun, args)
12. └─torch (local) ``(size = ``, options = ``, memory_format = NULL)
── Error ('test_data_handler_torch.R:75:3'): properties of dataset torch ───────
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_data_handler_torch.R:75:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) ``(options = ``, size = ``)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) ``(size = ``, options = ``, memory_format = NULL)
── Error ('test_deepregression_torch.R:6:5'): Simple additive model ────────────
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(2, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_deepregression_torch.R:21:5
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─torch::nn_sequential(...) at test_deepregression_torch.R:6:5
 9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = i, out_features = 2, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) ``(options = ``, size = ``)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) ``(size = ``, options = ``, memory_format = NULL)
── Error ('test_deepregression_torch.R:110:3'): Generalized additive model ─────
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_deepregression_torch.R:110:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:151:3'): Deep generalized additive model with LSS ──
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_deepregression_torch.R:151:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:181:3'): GAMs with shared weights ───────
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_deepregression_torch.R:181:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. ├─base::do.call(...)
 6. └─deepregression (local) ``(...)
 7. └─torch::torch_tensor(P)
 8. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
 9. └─methods$initialize(NULL, NULL, ...)
10. └─torch:::torch_tensor_cpp(...)
── Error ('test_deepregression_torch.R:220:3'): GAMs with fixed weights ────────
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_deepregression_torch.R:220:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:13:3'): deep ensemble ─────────────────────────
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_ensemble_torch.R:13:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_ensemble_torch.R:55:3'): reinitializing weights ────────────────
Error in `torch_tensor_cpp(data, dtype, device, requires_grad, pin_memory)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_ensemble_torch.R:55:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch::torch_tensor(P)
11. └─Tensor$new(data, dtype, device, requires_grad, pin_memory)
12. └─methods$initialize(NULL, NULL, ...)
13. └─torch:::torch_tensor_cpp(...)
── Error ('test_families_torch.R:76:7'): torch families can be fitted ──────────
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_families_torch.R:76:7
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) ``(options = ``, size = ``)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) ``(size = ``, options = ``, memory_format = NULL)
── Error ('test_layers_torch.R:6:3'): lasso layers ─────────────────────────────
Error in `cpp_torch_manual_seed(as.character(seed))`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─torch::torch_manual_seed(42) at test_layers_torch.R:6:3
 2. └─torch:::cpp_torch_manual_seed(as.character(seed))
── Error ('test_methods_torch.R:18:3'): all methods ────────────────────────────
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(1L, 1L), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_methods_torch.R:18:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─torch (local) layer_module(...)
11. └─Module$new(...)
12. └─deepregression (local) initialize(...)
13. ├─torch::nn_parameter(torch_empty(out_features, in_features))
14. │ └─torch:::is_torch_tensor(x)
15. └─torch::torch_empty(out_features, in_features)
16. ├─base::do.call(.torch_empty, args)
17. └─torch (local) ``(options = ``, size = ``)
18. └─torch:::call_c_function(...)
19. └─torch:::do_call(f, args)
20. ├─base::do.call(fun, args)
21. └─torch (local) ``(size = ``, options = ``, memory_format = NULL)
── Error ('test_node.R:26:3'): node regression ─────────────────────────────────
Error in `py_module_import(module, convert = convert)`: File "/home/hornik/tmp/CRAN_recheck/deepregression.Rcheck/deepregression/python/utils/types.py", line 43
    Number = Union[
SyntaxError: expected 'except' or 'finally' block
Run `reticulate::py_last_error()` for details.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_node.R:26:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(1:length(pp_in), function(i) pp_lay[[layer_matching[i]]]$layer(inputs[[i]]))
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[layer_matching[i]]]$layer(inputs[[i]])
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─reticulate::import_from_path("node", path = python_path)
11. └─reticulate:::import_from_path_immediate(module, path, convert)
12. └─reticulate::import(module, convert = convert)
13. └─reticulate:::py_module_import(module, convert = convert)
── Error ('test_node.R:104:3'): node bernoulli ─────────────────────────────────
Error in `py_module_import(module, convert = convert)`: File "/home/hornik/tmp/CRAN_recheck/deepregression.Rcheck/deepregression/python/utils/types.py", line 43
    Number = Union[
SyntaxError: expected 'except' or 'finally' block
Run `reticulate::py_last_error()` for details.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_node.R:104:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(1:length(pp_in), function(i) pp_lay[[layer_matching[i]]]$layer(inputs[[i]]))
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[layer_matching[i]]]$layer(inputs[[i]])
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─reticulate::import_from_path("node", path = python_path)
11. └─reticulate:::import_from_path_immediate(module, path, convert)
12. └─reticulate::import(module, convert = convert)
13. └─reticulate:::py_module_import(module, convert = convert)
── Error ('test_node.R:190:3'): node multinoulli ───────────────────────────────
Error in `py_module_import(module, convert = convert)`: File "/home/hornik/tmp/CRAN_recheck/deepregression.Rcheck/deepregression/python/utils/types.py", line 43
    Number = Union[
SyntaxError: expected 'except' or 'finally' block
Run `reticulate::py_last_error()` for details.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_node.R:190:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(1:length(pp_in), function(i) pp_lay[[layer_matching[i]]]$layer(inputs[[i]]))
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[layer_matching[i]]]$layer(inputs[[i]])
 8. ├─base::do.call(layer_class, layer_args)
 9. └─deepregression (local) ``(...)
10. └─reticulate::import_from_path("node", path = python_path)
11. └─reticulate:::import_from_path_immediate(module, path, convert)
12. └─reticulate::import(module, convert = convert)
13. └─reticulate:::py_module_import(module, convert = convert)
── Error ('test_node.R:276:3'): node overlap ───────────────────────────────────
Error in `py_module_import(module, convert = convert)`: File "/home/hornik/tmp/CRAN_recheck/deepregression.Rcheck/deepregression/python/utils/types.py", line 43
    Number = Union[
SyntaxError: expected 'except' or 'finally' block
Run `reticulate::py_last_error()` for details.
Backtrace:
    ▆
 1. ├─testthat::expect_warning(...) at test_node.R:276:3
 2. │ └─testthat:::quasi_capture(...)
 3. │ ├─testthat (local) .capture(...)
 4. │ │ └─base::withCallingHandlers(...)
 5. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
 6. └─deepregression::deepregression(...)
 7. └─base::lapply(...)
 8. └─deepregression (local) FUN(X[[i]], ...)
 9. └─subnetwork_builder[[i]](...)
10. └─base::lapply(1:length(pp_in), function(i) pp_lay[[layer_matching[i]]]$layer(inputs[[i]]))
11. └─deepregression (local) FUN(X[[i]], ...)
12. └─pp_lay[[layer_matching[i]]]$layer(inputs[[i]])
13. ├─base::do.call(layer_class, layer_args)
14. └─deepregression (local) ``(...)
15. └─reticulate::import_from_path("node", path = python_path)
16. └─reticulate:::import_from_path_immediate(module, path, convert)
17. └─reticulate::import(module, convert = convert)
18. └─reticulate:::py_module_import(module, convert = convert)
── Error ('test_reproducibility_torch.R:21:17'): reproducibility ───────────────
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(64, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::deepregression(...) at test_reproducibility_torch.R:33:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─subnetwork_builder[[i]](...)
 5. └─base::lapply(...)
 6. └─deepregression (local) FUN(X[[i]], ...)
 7. └─pp_lay[[i]]$layer()
 8. ├─torch::nn_sequential(...) at test_reproducibility_torch.R:21:17
 9. │ └─Module$new(...)
10. │ └─torch (local) initialize(...)
11. │ └─rlang::list2(...)
12. └─torch::nn_linear(in_features = 1, out_features = 64, bias = F)
13. └─Module$new(...)
14. └─torch (local) initialize(...)
15. ├─torch::nn_parameter(torch_empty(out_features, in_features))
16. │ └─torch:::is_torch_tensor(x)
17. └─torch::torch_empty(out_features, in_features)
18. ├─base::do.call(.torch_empty, args)
19. └─torch (local) ``(options = ``, size = ``)
20. └─torch:::call_c_function(...)
21. └─torch:::do_call(f, args)
22. ├─base::do.call(fun, args)
23. └─torch (local) ``(size = ``, options = ``, memory_format = NULL)
── Error ('test_subnetwork_init_torch.R:15:33'): subnetwork_init ───────────────
Error in `(function (size, options, memory_format) { .Call(`_torch_cpp_torch_namespace_empty_size_IntArrayRef`, size, options, memory_format) })(size = list(5, 1), options = list(dtype = NULL, layout = NULL, device = NULL, requires_grad = FALSE), memory_format = NULL)`: Lantern is not loaded. Please use `install_torch()` to install additional dependencies.
Backtrace:
    ▆
 1. └─deepregression::subnetwork_init_torch(list(pp)) at test_subnetwork_init_torch.R:38:3
 2. └─base::lapply(...)
 3. └─deepregression (local) FUN(X[[i]], ...)
 4. └─pp_lay[[i]]$layer()
 5. ├─torch::nn_sequential(...) at test_subnetwork_init_torch.R:15:33
 6. │ └─Module$new(...)
 7. │ └─torch (local) initialize(...)
 8. │ └─rlang::list2(...)
 9. └─torch::nn_linear(in_features = 1, out_features = 5)
10. └─Module$new(...)
11. └─torch (local) initialize(...)
12. ├─torch::nn_parameter(torch_empty(out_features, in_features))
13. │ └─torch:::is_torch_tensor(x)
14. └─torch::torch_empty(out_features, in_features)
15. ├─base::do.call(.torch_empty, args)
16. └─torch (local) ``(options = ``, size = ``)
17. └─torch:::call_c_function(...)
18. └─torch:::do_call(f, args)
19. ├─base::do.call(fun, args)
20. └─torch (local) ``(size = ``, options = ``, memory_format = NULL)

[ FAIL 18 | WARN 0 | SKIP 0 | PASS 640 ]
Error: Test failures
Execution halted
* checking PDF version of manual ... [5s/5s] OK
* checking HTML version of manual ... [2s/2s] OK
* checking for non-standard things in the check directory ... OK
* checking for detritus in the temp directory ... NOTE
Found the following files/directories:
  ‘__autograph_generated_file4vh7dw3_.py’
  ‘__autograph_generated_file5l4ip5y4.py’
  ‘__autograph_generated_file75ggu1wh.py’
  ‘__autograph_generated_file_xbwds1x.py’
  ‘__autograph_generated_filea5wieluo.py’
  ‘__autograph_generated_filec9tx9oxf.py’
  ‘__autograph_generated_fileeblp3zju.py’
  ‘__autograph_generated_fileg5m7al10.py’
  ‘__autograph_generated_filesp_ftk86.py’
  ‘__autograph_generated_filespx1nzjd.py’
  ‘__autograph_generated_filet9k7iubo.py’
  ‘__autograph_generated_filexcyrs5fj.py’
  ‘__autograph_generated_filey794y1qv.py’
  ‘__pycache__’
* DONE
Status: 1 ERROR, 1 NOTE
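
Editor's note (not part of the check output): all 18 failures reduce to two causes. The torch-based tests fail because the torch R package is present but its native Lantern/libtorch backend is not installed on the check machine ("Lantern is not loaded. Please use `install_torch()` ..."), and the node tests fail because reticulate cannot import the bundled python/utils/types.py under the freshly provisioned Python 3.11 ("SyntaxError: expected 'except' or 'finally' block"). The guard at the top of tests/testthat.R only checks the Python modules, so the torch tests are not skipped on such machines. A minimal sketch of a skip helper follows; the helper name skip_if_no_torch() is hypothetical, and torch::torch_is_installed() is assumed to be the appropriate availability check for the Lantern backend.

  # Hypothetical helper (e.g. in tests/testthat/helper-torch.R):
  # skip torch-based tests when the torch backend (Lantern/libtorch) is missing.
  skip_if_no_torch <- function() {
    testthat::skip_if_not_installed("torch")
    testthat::skip_if_not(torch::torch_is_installed(),
                          message = "Lantern/libtorch not installed")
  }

  # Usage at the top of a torch test, e.g. in test_layers_torch.R:
  # test_that("lasso layers", {
  #   skip_if_no_torch()
  #   ...
  # })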