Dear developers,

Thank you for providing this tool.

I tried to run it on the test data, but I got the error `'Sequential' object has no attribute '_layers'`. I guess you fixed this problem in your repository by commenting out the line `return model._layers[0]._batch_input_shape[1]` in the `get_peptide_length_from_model` function (https://github.com/bzhanglab/AutoRT/blame/master/autort/RTModels.py#L640), but that fix did not make it into the Docker image.
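For what it's worth, the failure seems to come from newer Keras/TensorFlow releases removing the private `_layers` attribute that `get_peptide_length_from_model` relies on. A minimal sketch of a version-robust alternative (my own assumption, not the official fix) would use the public `input_shape` attribute instead:

```python
# Hypothetical, version-robust variant of get_peptide_length_from_model.
# Recent Keras/TensorFlow releases removed the private `_layers` attribute,
# but the public `input_shape` property exposes the same information:
# for these models it is (batch_size, peptide_length, n_aa_types).
def get_peptide_length_from_model(model):
    return model.input_shape[1]
```

Any model object exposing `input_shape`, as Keras `Sequential` models do, would work with this sketch.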
I have pasted the whole log message from Nextflow below.
```
N E X T F L O W ~ version 21.04.1
Launching `DeepRescore.nf` [cranky_goldwasser] - revision: 20cb1fa1d6
executor > local (6)
[e9/59e411] process > calc_basic_features (d2) [100%] 1 of 1 ✔
[02/613b3c] process > pga_fdr_control (d2) [100%] 1 of 1 ✔
[0c/2bdb1a] process > generate_train_prediction_data (d2) [100%] 1 of 1 ✔
[1f/43767c] process > run_pdeep2 (d2) [100%] 1 of 1 ✔
[59/023114] process > process_pDeep2_results (d2) [100%] 1 of 1 ✔
[6e/34ab7b] process > train_autoRT (d2) [100%] 1 of 1, failed: 1 ✘
[- ] process > predicte_autoRT -
[- ] process > generate_percolator_input -
[- ] process > run_percolator -
[- ] process > generate_pdv_input -
Error executing process > 'train_autoRT (d2)'
Caused by:
Process `train_autoRT (d2)` terminated with an error exit status (1)
Command executed:
#!/bin/sh
set -e
mkdir -p ./autoRT_models
for file in autoRT_train/*.txt
do
fraction=`basename ${file} .txt`
mkdir -p ./autoRT_models/${fraction}
python /opt/AutoRT/autort.py train -i $file -o ./autoRT_models/${fraction} -e 40 -b 64 -u m -m /opt/AutoRT/models/base_models_PXD006109/model.json -rlr -n 10
done
wait
Command exit status:
1
Command output:
Scaling method: min_max
Transfer learning ...
Deep learning model: 0
Load aa coding data from file /opt/AutoRT/models/base_models_PXD006109/aa.tsv
AA types: 21
Longest peptide in training data: 24
Use test file ./autoRT_models/AC20171011_Broad_HLA_A1101_R1_Rep01/validation.tsv
Longest peptide in test data: 20
['1', 'A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y']
RT range: 0 - 98
X_train shape:
(1651, 48, 21)
X_test shape:
(184, 48, 21)
Modeling start ...
Command error:
f515f4c1b074: Download complete
16d3b8690ce0: Pull complete
6688d484c18d: Pull complete
50ef52d49cb7: Pull complete
8e10381b8c57: Pull complete
5befd987b36f: Verifying Checksum
5befd987b36f: Download complete
9505c0f2462c: Verifying Checksum
9505c0f2462c: Download complete
b60763d28bdf: Download complete
a17548ca111b: Verifying Checksum
a17548ca111b: Download complete
3442d7bf8734: Verifying Checksum
3442d7bf8734: Download complete
bb9ccf0ce8ca: Verifying Checksum
bb9ccf0ce8ca: Download complete
6a0d7a40b288: Verifying Checksum
6a0d7a40b288: Download complete
e84a0c00918e: Download complete
bbd63def3ec3: Verifying Checksum
bbd63def3ec3: Download complete
e84a0c00918e: Pull complete
a17548ca111b: Pull complete
f515f4c1b074: Pull complete
5befd987b36f: Pull complete
9505c0f2462c: Pull complete
b60763d28bdf: Pull complete
bbd63def3ec3: Pull complete
3442d7bf8734: Pull complete
bb9ccf0ce8ca: Pull complete
6a0d7a40b288: Pull complete
Digest: sha256:1e8772488571f36ff29f061b6fec4778b154dfb23c5bee816e36b7e790d15c03
Status: Downloaded newer image for proteomics/autort:latest
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-s_x2f8jd because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to
speed up the import of Matplotlib and to better support multiprocessing.
2021-07-07 11:50:33.324340: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-07 11:50:35.951041: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-07 11:50:35.951091: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: UNKNOWN ERROR (34)
2021-07-07 11:50:35.951124: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] no NVIDIA GPU device is present: /dev/nvidia0 does not exist
2021-07-07 11:50:35.951395: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/opt/AutoRT/autort.py", line 133, in <module>
    main()
  File "/opt/AutoRT/autort.py", line 99, in main
    add_reverse=add_reverse,add_ReduceLROnPlateau=add_ReduceLROnPlateau)
  File "/opt/AutoRT/autort/RTModels.py", line 427, in ensemble_models
    print(get_peptide_length_from_model(new_model))
  File "/opt/AutoRT/autort/RTModels.py", line 640, in get_peptide_length_from_model
    return model._layers[0]._batch_input_shape[1]
AttributeError: 'Sequential' object has no attribute '_layers'
Work dir:
/home/pkudryavtseva/DeepRescore/work/6e/34ab7b8ad260035a03d13a482abdca
Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
```
I would be grateful if you could fix this.

Best regards,
Polina

We recently updated AutoRT, so you need to re-pull the AutoRT Docker image (`docker pull proteomics/autort`). Please try again with the latest DeepRescore once the image has been updated.