Open Jason7624 opened 1 month ago
Hi, I've been having a problem testing my trained model on my own custom dataset. I was able to train the model after converting my data to lmdb with the provided code. However, when I try to test the trained model, this error shows up:
```
model input parameters 32 100 20 1 512 256 37 25 TPS ResNet BiLSTM CTC
loading pretrained model from saved_models/Test_pretrained/best_accuracy.pth
deep-text-recognition-benchmark\test.py:208: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  model.load_state_dict(torch.load(opt.saved_model, map_location=device))
'cp' is not recognized as an internal or external command, operable program or batch file.
dataset_root: Dataset/Validation\IIIT5k_3000     dataset: /
Traceback (most recent call last):
  File "deep-text-recognition-benchmark\test.py", line 282, in <module>
    test(opt)
  File "deep-text-recognition-benchmark\test.py", line 226, in test
    benchmark_all_eval(model, criterion, converter, opt)
  File "deep-text-recognition-benchmark\test.py", line 46, in benchmark_all_eval
    eval_data, eval_data_log = hierarchical_dataset(root=eval_data_path, opt=opt)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "deep-text-recognition-benchmark\dataset.py", line 124, in hierarchical_dataset
    concatenated_dataset = ConcatDataset(dataset_list)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "paddleocr1\Lib\site-packages\torch\utils\data\dataset.py", line 328, in __init__
    assert len(self.datasets) > 0, "datasets should not be an empty iterable"  # type: ignore[arg-type]
           ^^^^^^^^^^^^^^^^^^^^^^
AssertionError: datasets should not be an empty iterable
```
And this is my script:

```
python test.py --eval_data Dataset/Validation --benchmark_all_eval --Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction CTC --saved_model saved_models/Test_pretrained/best_accuracy.pth
```
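From the traceback, the assertion is raised because ConcatDataset receives an empty dataset_list, i.e. hierarchical_dataset found no lmdb data under the benchmark subfolder it was handed (Dataset/Validation\IIIT5k_3000 in the log). Below is a small check of the folder layout (just a sketch; it only assumes each evaluation set is an lmdb folder containing a data.mdb file, and that eval_root is the same path passed to --eval_data):

```python
# Sketch: report which folders under the eval root actually contain an lmdb
# database (data.mdb). If nothing is found under the benchmark subfolders
# (e.g. Dataset/Validation/IIIT5k_3000), hierarchical_dataset has nothing to
# wrap and ConcatDataset raises "datasets should not be an empty iterable".
import os

eval_root = "Dataset/Validation"  # same path passed to --eval_data

lmdb_dirs = [dirpath
             for dirpath, _dirnames, filenames in os.walk(eval_root)
             if "data.mdb" in filenames]

print("lmdb folders found:", lmdb_dirs or "none")
```

(I assume the "'cp' is not recognized" message is a separate Windows-only issue, presumably from the script shelling out to the Unix cp command, and not what triggers the traceback.)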
Can anyone please help me resolve my issue?
Use the same values you used for training for --output_channel (256 or 512) and --hidden_size.
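If you're not sure which values the model was trained with, one way to recover them is to inspect the checkpoint's parameter shapes. A minimal sketch (it assumes best_accuracy.pth is a plain state_dict as saved by train.py; key names vary with the architecture, but the BiLSTM/prediction weight shapes reflect hidden_size and the feature extractor's last layer reflects output_channel):

```python
# Sketch: print every parameter name and shape in the checkpoint so the
# training-time output_channel / hidden_size can be read off and passed
# to test.py with the same flags.
import torch

state = torch.load("saved_models/Test_pretrained/best_accuracy.pth",
                   map_location="cpu")  # assumed to be a plain state_dict

for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```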