Crescent-cc opened this issue 1 week ago
Hi, this looks like a problem with the dataset download or the symlinks. Could you provide the commands you used to link the datasets and to run the prediction code, along with a more complete error log, to help pin down the problem?
Thanks for the reply. I set up the environment following the tutorial, then linked the two datasets in the testdata folder to the corresponding test folders under data with:
ln -s /home/user/EfficientLoFTR-main/testdata/scannet_test_1500/ /home/user/EfficientLoFTR-main/data/scannet/test
ln -s /home/user/EfficientLoFTR-main/testdata/megadepth_test_1500/ /home/user/EfficientLoFTR-main/data/megadepth/test
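One pitfall with commands like these: if data/scannet/test or data/megadepth/test already exists as a directory, ln -s places the link inside it, e.g. data/scannet/test/scannet_test_1500, rather than making test itself point at the dataset, so the loader sees an effectively empty test root. A minimal sanity check, assuming the paths used above:

ls -l data/scannet/test          # a correct link shows: test -> .../scannet_test_1500/
ls -L data/scannet/test | head   # -L follows the link; this should list the scene data

If a nested scannet_test_1500 entry shows up instead, remove it, delete the now-empty test directory with rmdir, and re-run ln -s against data/scannet/test directly.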
Then I ran bash scripts/reproduce_test/indoor_full_time.sh and got the following error:
(eloftr) root:~/EfficientLoFTR-main# bash scripts/reproduce_test/indoor_opt_time.sh
{'accelerator': 'ddp',
'accumulate_grad_batches': 1,
'amp_backend': 'native',
'amp_level': 'O2',
'auto_lr_find': False,
'auto_scale_batch_size': False,
'auto_select_gpus': False,
'batch_size': 1,
'benchmark': True,
'check_val_every_n_epoch': 1,
'checkpoint_callback': True,
'ckpt_path': 'weights/eloftr_outdoor.ckpt',
'data_cfg_path': 'configs/data/scannet_test_1500.py',
'default_root_dir': None,
'deter': False,
'deterministic': False,
'distributed_backend': None,
'dump_dir': 'dump/eloftr_full_scannet',
'fast_dev_run': False,
'flash': False,
'flush_logs_every_n_steps': 100,
'fp32': False,
'gpus': -1,
'gradient_clip_algorithm': 'norm',
'gradient_clip_val': 0.0,
'half': False,
'limit_predict_batches': 1.0,
'limit_test_batches': 1.0,
'limit_train_batches': 1.0,
'limit_val_batches': 1.0,
'log_every_n_steps': 50,
'log_gpu_memory': None,
'logger': True,
'main_cfg_path': 'configs/loftr/eloftr_optimized.py',
'max_epochs': None,
'max_steps': None,
'max_time': None,
'megasize': None,
'min_epochs': None,
'min_steps': None,
'move_metrics_to_cpu': False,
'multiple_trainloader_mode': 'max_size_cycle',
'npe': False,
'num_nodes': 1,
'num_processes': 1,
'num_sanity_val_steps': 2,
'num_workers': 4,
'overfit_batches': 0.0,
'pixel_thr': None,
'plugins': None,
'precision': 32,
'prepare_data_per_node': True,
'process_position': 0,
'profiler': None,
'profiler_name': 'inference',
'progress_bar_refresh_rate': None,
'ransac': None,
'ransac_times': 1,
'reload_dataloaders_every_epoch': False,
'replace_sampler_ddp': True,
'resume_from_checkpoint': None,
'rmbd': 1,
'scannetX': 640,
'scannetY': 480,
'stochastic_weight_avg': False,
'sync_batchnorm': False,
'terminate_on_nan': False,
'thr': 20.0,
'tpu_cores': None,
'track_grad_norm': -1,
'truncated_bptt_steps': None,
'val_check_interval': 1.0,
'weights_save_path': None,
'weights_summary': 'top'}
Global seed set to 66
2024-11-18 12:09:54.169 | INFO | __main__: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(pretrained_ckpt, map_location='cpu')['state_dict']
2024-11-18 12:09:55.181 | INFO | src.lightning.lightning_loftr:__init__:65 - Load 'weights/eloftr_outdoor.ckpt' as pretrained checkpoint
2024-11-18 12:09:55.182 | INFO | __main__:
[rank0]: Traceback (most recent call last):
[rank0]:   File "/usr/local/iCompute/envs/eloftr/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 385, in wrapped_fn
[rank0]:     return fn(*args, **kwargs)
[rank0]:   File "/home/user/EfficientLoFTR-main/src/lightning/data.py", line 190, in setup
[rank0]:     self.test_dataset = self._setup_dataset(
[rank0]:   File "/home/user/EfficientLoFTR-main/src/lightning/data.py", line 221, in _setup_dataset
[rank0]:     return dataset_builder(data_root, local_npz_names, split_npz_root, intri_path,
[rank0]:   File "/home/user/EfficientLoFTR-main/src/lightning/data.py", line 272, in _build_concat_dataset
[rank0]:     return ConcatDataset(datasets)
[rank0]:   File "/usr/local/iCompute/envs/eloftr/lib/python3.8/site-packages/torch/utils/data/dataset.py", line 328, in __init__
[rank0]:     assert len(self.datasets) > 0, "datasets should not be an empty iterable"  # type: ignore[arg-type]
[rank0]: AssertionError: datasets should not be an empty iterable
Hi, after linking the datasets and running the prediction code I get the error above. Is there something wrong with my folder layout or with the dataset I am using?
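For context on where this fails: the AssertionError comes from ConcatDataset receiving an empty list, i.e. _build_concat_dataset found no per-scene files under the resolved test root, which usually points back at the symlink layout or the configured data roots. A minimal sketch of how one might verify this from the repository root; the expected layout should be read from the config rather than guessed:

readlink -f data/scannet/test                      # does the link resolve to the downloaded dataset?
ls -L data/scannet/test | head                     # -L follows the link; this should not be empty
sed -n '1,40p' configs/data/scannet_test_1500.py   # shows which data/index roots the test expects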