Open · rfLiu123 opened this issue 2 weeks ago
Thank you for your interest in our work!
Based on the image you provided, it looks like there was a problem downloading the KAIST-CVPR15 dataset (KAIST Multispectral Pedestrian Detection Benchmark). To run pretraining, please download the KAIST-CVPR15 dataset as described in the training documentation and place the resulting kaist-cvpr15 folder inside the data directory, as shown below. Alternatively, you can set up a symbolic link if you prefer to keep the files in a different location. I hope this helps resolve the issue!
```
XoFTR/
├── data/
│   ├── kaist-cvpr15/
│   ├── megadepth/
│   └── METU_VisTIR/
└── ...
```
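If you want to double-check the dataset location before launching pretraining, a quick sanity check along these lines can help. This is only a minimal sketch: the inner folder layout and file extension of kaist-cvpr15 can vary slightly depending on which mirror you downloaded, so adjust the paths and glob pattern as needed.

```python
# check_kaist_data.py - run from the XoFTR repository root (adjust paths if needed)
from pathlib import Path

data_dir = Path("data/kaist-cvpr15")

if not data_dir.is_dir():
    raise SystemExit(
        f"{data_dir} not found. Download KAIST-CVPR15 or create a symlink, e.g.:\n"
        "    ln -s /path/to/kaist-cvpr15 data/kaist-cvpr15"
    )

# Count image files anywhere under the dataset folder (visible + lwir frames).
num_images = sum(1 for _ in data_dir.rglob("*.jpg"))
print(f"Found {num_images} .jpg files under {data_dir}")
if num_images == 0:
    raise SystemExit("No images found; the pretraining dataloader would be empty.")
```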
This is great work! When reproducing it, I followed the steps you provided and the demo ran smoothly; however, I encountered a problem during training. This is my running log:

```
+++ readlink -f ./pretrain.sh
++ dirname /media/tom/新加卷/XoFTR/scripts/reproduce_train/pretrain.sh
python -u ./pretrain.py configs/data/pretrain.py configs/xoftr/pretrain/pretrain.py --exp_name=pretrain--bs=1 --gpus=1 --num_nodes=1 --accelerator=ddp --batch_size=1 --num_workers=8 --pin_memory=true --check_val_every_n_epoch=1 --log_every_n_steps=100 --limit_val_batches=1. --num_sanity_val_steps=10 --benchmark=True --max_epochs=15 {'accelerator': 'ddp', 'accumulate_grad_batches': 1, 'amp_backend': 'native', 'amp_level': 'O2', 'auto_lr_find': False, 'auto_scale_batch_size': False, 'auto_select_gpus': False, 'batch_size': 1, 'benchmark': True, 'check_val_every_n_epoch': 1, 'checkpoint_callback': True, 'ckpt_path': None, 'data_cfg_path': 'configs/data/pretrain.py', 'default_root_dir': None, 'deterministic': False, 'disable_ckpt': False, 'distributed_backend': None, 'exp_name': 'pretrain--bs=1', 'fast_dev_run': False, 'flush_logs_every_n_steps': 100, 'gpus': 1, 'gradient_clip_algorithm': 'norm', 'gradient_clip_val': 0.0, 'limit_predict_batches': 1.0, 'limit_test_batches': 1.0, 'limit_train_batches': 1.0, 'limit_val_batches': 1.0, 'log_every_n_steps': 100, 'log_gpu_memory': None, 'logger': True, 'main_cfg_path': 'configs/xoftr/pretrain/pretrain.py', 'max_epochs': 15, 'max_steps': None, 'max_time': None, 'min_epochs': None, 'min_steps': None, 'move_metrics_to_cpu': False, 'multiple_trainloader_mode': 'max_size_cycle', 'num_nodes': 1, 'num_processes': 1, 'num_sanity_val_steps': 10, 'num_workers': 8, 'overfit_batches': 0.0, 'parallel_load_data': False, 'pin_memory': True, 'plugins': None, 'precision': 32, 'prepare_data_per_node': True, 'process_position': 0, 'profiler': None, 'profiler_name': None, 'progress_bar_refresh_rate': None, 'reload_dataloaders_every_epoch': False, 'replace_sampler_ddp': True, 'resume_from_checkpoint': None, 'stochastic_weight_avg': False, 'sync_batchnorm': False, 'terminate_on_nan': False, 'tpu_cores': None, 'track_grad_norm': -1, 'truncated_bptt_steps': None, 'val_check_interval': 1.0, 'weights_save_path': None, 'weights_summary': 'top'} Global seed set to 66 2024-11-07 10:00:07.608 | INFO | main:main:81 - XoFTR LightningModule initialized! 2024-11-07 10:00:07.609 | INFO | main:main:85 - XoFTR DataModule initialized! Missing logger folder: logs/tb_logs/pretrain--bs=1 /home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: ModelCheckpoint(save_last=True, save_top_k=None, monitor=None) is a redundant configuration. You can save the last checkpoint with ModelCheckpoint(save_top_k=None, monitor=None). warnings.warn(*args, **kwargs) ModelCheckpoint(save_last=True, save_top_k=-1, monitor=None) will duplicate the last checkpoint saved. GPU available: True, used: True TPU available: False, using: 0 TPU cores 2024-11-07 10:00:07.680 | INFO | main:main:119 - Trainer initialized! 2024-11-07 10:00:07.680 | INFO | main:main:120 - Start training! Global seed set to 66 initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1 2024-11-07 10:00:07.876 | INFO | src.lightning.data_pretrain:setup:65 - [rank:0] world_size: 1 2024-11-07 10:00:07.878 | INFO | src.lightning.data_pretrain:setup:80 - [rank:0] Train & Val Dataset loaded! LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information. wandb: Currently logged in as: 1114986738 (1114986738-). Use
`wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.5
wandb: Run data is saved locally in /media/tom/新加卷/XoFTR/wandb/run-20241107_100009-qyj3yliw
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run pretrain--bs=1_2024_11_07_10_00_07
wandb: ⭐️ View project at https://wandb.ai/1114986738-/XoFTR
wandb: 🚀 View run at https://wandb.ai/1114986738-/XoFTR/runs/qyj3yliw

   | Name | Type | Params
0 | matcher | XoFTR_Pretrain | 10.9 M 1 | matcher.backbone | ResNet_8_2 | 4.2 M 2 | matcher.backbone.conv1 | Conv2d | 6.3 K 3 | matcher.backbone.bn1 | SyncBatchNorm | 256
4 | matcher.backbone.relu | ReLU | 0
5 | matcher.backbone.layer1 | Sequential | 590 K 6 | matcher.backbone.layer1.0 | BasicBlock | 295 K 7 | matcher.backbone.layer1.0.conv1 | Conv2d | 147 K 8 | matcher.backbone.layer1.0.conv2 | Conv2d | 147 K 9 | matcher.backbone.layer1.0.bn1 | SyncBatchNorm | 256
10 | matcher.backbone.layer1.0.bn2 | SyncBatchNorm | 256
11 | matcher.backbone.layer1.0.relu | ReLU | 0
12 | matcher.backbone.layer1.1 | BasicBlock | 295 K 13 | matcher.backbone.layer1.1.conv1 | Conv2d | 147 K 14 | matcher.backbone.layer1.1.conv2 | Conv2d | 147 K 15 | matcher.backbone.layer1.1.bn1 | SyncBatchNorm | 256
16 | matcher.backbone.layer1.1.bn2 | SyncBatchNorm | 256
17 | matcher.backbone.layer1.1.relu | ReLU | 0
18 | matcher.backbone.layer2 | Sequential | 1.3 M 19 | matcher.backbone.layer2.0 | BasicBlock | 597 K 20 | matcher.backbone.layer2.0.conv1 | Conv2d | 225 K 21 | matcher.backbone.layer2.0.conv2 | Conv2d | 345 K 22 | matcher.backbone.layer2.0.bn1 | SyncBatchNorm | 392
23 | matcher.backbone.layer2.0.bn2 | SyncBatchNorm | 392
24 | matcher.backbone.layer2.0.relu | ReLU | 0
25 | matcher.backbone.layer2.0.downsample | Sequential | 25.5 K 26 | matcher.backbone.layer2.0.downsample.0 | Conv2d | 25.1 K 27 | matcher.backbone.layer2.0.downsample.1 | SyncBatchNorm | 392
28 | matcher.backbone.layer2.1 | BasicBlock | 692 K 29 | matcher.backbone.layer2.1.conv1 | Conv2d | 345 K 30 | matcher.backbone.layer2.1.conv2 | Conv2d | 345 K 31 | matcher.backbone.layer2.1.bn1 | SyncBatchNorm | 392
32 | matcher.backbone.layer2.1.bn2 | SyncBatchNorm | 392
33 | matcher.backbone.layer2.1.relu | ReLU | 0
34 | matcher.backbone.layer3 | Sequential | 2.3 M 35 | matcher.backbone.layer3.0 | BasicBlock | 1.1 M 36 | matcher.backbone.layer3.0.conv1 | Conv2d | 451 K 37 | matcher.backbone.layer3.0.conv2 | Conv2d | 589 K 38 | matcher.backbone.layer3.0.bn1 | SyncBatchNorm | 512
39 | matcher.backbone.layer3.0.bn2 | SyncBatchNorm | 512
40 | matcher.backbone.layer3.0.relu | ReLU | 0
41 | matcher.backbone.layer3.0.downsample | Sequential | 50.7 K 42 | matcher.backbone.layer3.0.downsample.0 | Conv2d | 50.2 K 43 | matcher.backbone.layer3.0.downsample.1 | SyncBatchNorm | 512
44 | matcher.backbone.layer3.1 | BasicBlock | 1.2 M 45 | matcher.backbone.layer3.1.conv1 | Conv2d | 589 K 46 | matcher.backbone.layer3.1.conv2 | Conv2d | 589 K 47 | matcher.backbone.layer3.1.bn1 | SyncBatchNorm | 512
48 | matcher.backbone.layer3.1.bn2 | SyncBatchNorm | 512
49 | matcher.backbone.layer3.1.relu | ReLU | 0
50 | matcher.backbone.layer3_outconv | Conv2d | 65.5 K 51 | matcher.pos_encoding | PositionEncodingSine | 0
52 | matcher.loftr_coarse | LocalFeatureTransformer | 5.3 M 53 | matcher.loftr_coarse.layers | ModuleList | 5.3 M 54 | matcher.loftr_coarse.layers.0 | LoFTREncoderLayer | 656 K 55 | matcher.loftr_coarse.layers.0.q_proj | Linear | 65.5 K 56 | matcher.loftr_coarse.layers.0.k_proj | Linear | 65.5 K 57 | matcher.loftr_coarse.layers.0.v_proj | Linear | 65.5 K 58 | matcher.loftr_coarse.layers.0.attention | LinearAttention | 0
59 | matcher.loftr_coarse.layers.0.merge | Linear | 65.5 K 60 | matcher.loftr_coarse.layers.0.mlp | Sequential | 393 K 61 | matcher.loftr_coarse.layers.0.mlp.0 | Linear | 262 K 62 | matcher.loftr_coarse.layers.0.mlp.1 | ReLU | 0
63 | matcher.loftr_coarse.layers.0.mlp.2 | Linear | 131 K 64 | matcher.loftr_coarse.layers.0.norm1 | LayerNorm | 512
65 | matcher.loftr_coarse.layers.0.norm2 | LayerNorm | 512
66 | matcher.loftr_coarse.layers.1 | LoFTREncoderLayer | 656 K 67 | matcher.loftr_coarse.layers.1.q_proj | Linear | 65.5 K 68 | matcher.loftr_coarse.layers.1.k_proj | Linear | 65.5 K 69 | matcher.loftr_coarse.layers.1.v_proj | Linear | 65.5 K 70 | matcher.loftr_coarse.layers.1.attention | LinearAttention | 0
71 | matcher.loftr_coarse.layers.1.merge | Linear | 65.5 K 72 | matcher.loftr_coarse.layers.1.mlp | Sequential | 393 K 73 | matcher.loftr_coarse.layers.1.mlp.0 | Linear | 262 K 74 | matcher.loftr_coarse.layers.1.mlp.1 | ReLU | 0
75 | matcher.loftr_coarse.layers.1.mlp.2 | Linear | 131 K 76 | matcher.loftr_coarse.layers.1.norm1 | LayerNorm | 512
77 | matcher.loftr_coarse.layers.1.norm2 | LayerNorm | 512
78 | matcher.loftr_coarse.layers.2 | LoFTREncoderLayer | 656 K 79 | matcher.loftr_coarse.layers.2.q_proj | Linear | 65.5 K 80 | matcher.loftr_coarse.layers.2.k_proj | Linear | 65.5 K 81 | matcher.loftr_coarse.layers.2.v_proj | Linear | 65.5 K 82 | matcher.loftr_coarse.layers.2.attention | LinearAttention | 0
83 | matcher.loftr_coarse.layers.2.merge | Linear | 65.5 K 84 | matcher.loftr_coarse.layers.2.mlp | Sequential | 393 K 85 | matcher.loftr_coarse.layers.2.mlp.0 | Linear | 262 K 86 | matcher.loftr_coarse.layers.2.mlp.1 | ReLU | 0
87 | matcher.loftr_coarse.layers.2.mlp.2 | Linear | 131 K 88 | matcher.loftr_coarse.layers.2.norm1 | LayerNorm | 512
89 | matcher.loftr_coarse.layers.2.norm2 | LayerNorm | 512
90 | matcher.loftr_coarse.layers.3 | LoFTREncoderLayer | 656 K 91 | matcher.loftr_coarse.layers.3.q_proj | Linear | 65.5 K 92 | matcher.loftr_coarse.layers.3.k_proj | Linear | 65.5 K 93 | matcher.loftr_coarse.layers.3.v_proj | Linear | 65.5 K 94 | matcher.loftr_coarse.layers.3.attention | LinearAttention | 0
95 | matcher.loftr_coarse.layers.3.merge | Linear | 65.5 K 96 | matcher.loftr_coarse.layers.3.mlp | Sequential | 393 K 97 | matcher.loftr_coarse.layers.3.mlp.0 | Linear | 262 K 98 | matcher.loftr_coarse.layers.3.mlp.1 | ReLU | 0
99 | matcher.loftr_coarse.layers.3.mlp.2 | Linear | 131 K 100 | matcher.loftr_coarse.layers.3.norm1 | LayerNorm | 512
101 | matcher.loftr_coarse.layers.3.norm2 | LayerNorm | 512
102 | matcher.loftr_coarse.layers.4 | LoFTREncoderLayer | 656 K 103 | matcher.loftr_coarse.layers.4.q_proj | Linear | 65.5 K 104 | matcher.loftr_coarse.layers.4.k_proj | Linear | 65.5 K 105 | matcher.loftr_coarse.layers.4.v_proj | Linear | 65.5 K 106 | matcher.loftr_coarse.layers.4.attention | LinearAttention | 0
107 | matcher.loftr_coarse.layers.4.merge | Linear | 65.5 K 108 | matcher.loftr_coarse.layers.4.mlp | Sequential | 393 K 109 | matcher.loftr_coarse.layers.4.mlp.0 | Linear | 262 K 110 | matcher.loftr_coarse.layers.4.mlp.1 | ReLU | 0
111 | matcher.loftr_coarse.layers.4.mlp.2 | Linear | 131 K 112 | matcher.loftr_coarse.layers.4.norm1 | LayerNorm | 512
113 | matcher.loftr_coarse.layers.4.norm2 | LayerNorm | 512
114 | matcher.loftr_coarse.layers.5 | LoFTREncoderLayer | 656 K 115 | matcher.loftr_coarse.layers.5.q_proj | Linear | 65.5 K 116 | matcher.loftr_coarse.layers.5.k_proj | Linear | 65.5 K 117 | matcher.loftr_coarse.layers.5.v_proj | Linear | 65.5 K 118 | matcher.loftr_coarse.layers.5.attention | LinearAttention | 0
119 | matcher.loftr_coarse.layers.5.merge | Linear | 65.5 K 120 | matcher.loftr_coarse.layers.5.mlp | Sequential | 393 K 121 | matcher.loftr_coarse.layers.5.mlp.0 | Linear | 262 K 122 | matcher.loftr_coarse.layers.5.mlp.1 | ReLU | 0
123 | matcher.loftr_coarse.layers.5.mlp.2 | Linear | 131 K 124 | matcher.loftr_coarse.layers.5.norm1 | LayerNorm | 512
125 | matcher.loftr_coarse.layers.5.norm2 | LayerNorm | 512
126 | matcher.loftr_coarse.layers.6 | LoFTREncoderLayer | 656 K 127 | matcher.loftr_coarse.layers.6.q_proj | Linear | 65.5 K 128 | matcher.loftr_coarse.layers.6.k_proj | Linear | 65.5 K 129 | matcher.loftr_coarse.layers.6.v_proj | Linear | 65.5 K 130 | matcher.loftr_coarse.layers.6.attention | LinearAttention | 0
131 | matcher.loftr_coarse.layers.6.merge | Linear | 65.5 K 132 | matcher.loftr_coarse.layers.6.mlp | Sequential | 393 K 133 | matcher.loftr_coarse.layers.6.mlp.0 | Linear | 262 K 134 | matcher.loftr_coarse.layers.6.mlp.1 | ReLU | 0
135 | matcher.loftr_coarse.layers.6.mlp.2 | Linear | 131 K 136 | matcher.loftr_coarse.layers.6.norm1 | LayerNorm | 512
137 | matcher.loftr_coarse.layers.6.norm2 | LayerNorm | 512
138 | matcher.loftr_coarse.layers.7 | LoFTREncoderLayer | 656 K 139 | matcher.loftr_coarse.layers.7.q_proj | Linear | 65.5 K 140 | matcher.loftr_coarse.layers.7.k_proj | Linear | 65.5 K 141 | matcher.loftr_coarse.layers.7.v_proj | Linear | 65.5 K 142 | matcher.loftr_coarse.layers.7.attention | LinearAttention | 0
143 | matcher.loftr_coarse.layers.7.merge | Linear | 65.5 K 144 | matcher.loftr_coarse.layers.7.mlp | Sequential | 393 K 145 | matcher.loftr_coarse.layers.7.mlp.0 | Linear | 262 K 146 | matcher.loftr_coarse.layers.7.mlp.1 | ReLU | 0
147 | matcher.loftr_coarse.layers.7.mlp.2 | Linear | 131 K 148 | matcher.loftr_coarse.layers.7.norm1 | LayerNorm | 512
149 | matcher.loftr_coarse.layers.7.norm2 | LayerNorm | 512
150 | matcher.fine_process | FineProcess | 1.5 M 151 | matcher.fine_process.conv_merge | Sequential | 102 K 152 | matcher.fine_process.conv_merge.0 | Conv2d | 100 K 153 | matcher.fine_process.conv_merge.1 | Conv2d | 1.8 K 154 | matcher.fine_process.conv_merge.2 | SyncBatchNorm | 392
155 | matcher.fine_process.out_conv_m | Conv2d | 38.4 K 156 | matcher.fine_process.out_conv_f | Conv2d | 16.4 K 157 | matcher.fine_process.self_attn_m | WindowSelfAttention | 487 K 158 | matcher.fine_process.self_attn_m.mlp | Mlp | 231 K 159 | matcher.fine_process.self_attn_m.mlp.fc1 | Linear | 154 K 160 | matcher.fine_process.self_attn_m.mlp.act | GELU | 0
161 | matcher.fine_process.self_attn_m.mlp.fc2 | Linear | 77.0 K 162 | matcher.fine_process.self_attn_m.norm1 | LayerNorm | 392
163 | matcher.fine_process.self_attn_m.norm2 | LayerNorm | 392
164 | matcher.fine_process.self_attn_m.attn | VanillaAttention | 153 K 165 | matcher.fine_process.self_attn_m.attn.kv_proj | Linear | 76.8 K 166 | matcher.fine_process.self_attn_m.attn.q_proj | Linear | 38.4 K 167 | matcher.fine_process.self_attn_m.attn.merge | Linear | 38.6 K 168 | matcher.fine_process.self_attn_m.pos_embed | SwinPosEmbMLP | 101 K 169 | matcher.fine_process.self_attn_m.pos_embed.pos_mlp | Sequential | 101 K 170 | matcher.fine_process.self_attn_m.pos_embed.pos_mlp.0 | Linear | 1.5 K 171 | matcher.fine_process.self_attn_m.pos_embed.pos_mlp.1 | ReLU | 0
172 | matcher.fine_process.self_attn_m.pos_embed.pos_mlp.2 | Linear | 100 K 173 | matcher.fine_process.self_attn_m.pos_embed_pre | Identity | 0
174 | matcher.fine_process.cross_attn_m | WindowCrossAttention | 347 K 175 | matcher.fine_process.cross_attn_m.norm1 | LayerNorm | 392
176 | matcher.fine_process.cross_attn_m.norm2 | LayerNorm | 392
177 | matcher.fine_process.cross_attn_m.mlp | Mlp | 231 K 178 | matcher.fine_process.cross_attn_m.mlp.fc1 | Linear | 154 K 179 | matcher.fine_process.cross_attn_m.mlp.act | GELU | 0
180 | matcher.fine_process.cross_attn_m.mlp.fc2 | Linear | 77.0 K 181 | matcher.fine_process.cross_attn_m.cross_attn | CrossBidirectionalAttention | 115 K 182 | matcher.fine_process.cross_attn_m.cross_attn.qk_proj | Linear | 38.4 K 183 | matcher.fine_process.cross_attn_m.cross_attn.v_proj | Linear | 38.4 K 184 | matcher.fine_process.cross_attn_m.cross_attn.merge | Linear | 38.4 K 185 | matcher.fine_process.self_attn_f | WindowSelfAttention | 299 K 186 | matcher.fine_process.self_attn_f.mlp | Mlp | 98.7 K 187 | matcher.fine_process.self_attn_f.mlp.fc1 | Linear | 65.8 K 188 | matcher.fine_process.self_attn_f.mlp.act | GELU | 0
189 | matcher.fine_process.self_attn_f.mlp.fc2 | Linear | 32.9 K 190 | matcher.fine_process.self_attn_f.norm1 | LayerNorm | 256
191 | matcher.fine_process.self_attn_f.norm2 | LayerNorm | 256
192 | matcher.fine_process.self_attn_f.attn | VanillaAttention | 65.7 K 193 | matcher.fine_process.self_attn_f.attn.kv_proj | Linear | 32.8 K 194 | matcher.fine_process.self_attn_f.attn.q_proj | Linear | 16.4 K 195 | matcher.fine_process.self_attn_f.attn.merge | Linear | 16.5 K 196 | matcher.fine_process.self_attn_f.pos_embed | SwinPosEmbMLP | 67.1 K 197 | matcher.fine_process.self_attn_f.pos_embed.pos_mlp | Sequential | 67.1 K 198 | matcher.fine_process.self_attn_f.pos_embed.pos_mlp.0 | Linear | 1.5 K 199 | matcher.fine_process.self_attn_f.pos_embed.pos_mlp.1 | ReLU | 0
200 | matcher.fine_process.self_attn_f.pos_embed.pos_mlp.2 | Linear | 65.5 K 201 | matcher.fine_process.self_attn_f.pos_embed_pre | SwinPosEmbMLP | 67.1 K 202 | matcher.fine_process.self_attn_f.pos_embed_pre.pos_mlp | Sequential | 67.1 K 203 | matcher.fine_process.self_attn_f.pos_embed_pre.pos_mlp.0 | Linear | 1.5 K 204 | matcher.fine_process.self_attn_f.pos_embed_pre.pos_mlp.1 | ReLU | 0
205 | matcher.fine_process.self_attn_f.pos_embed_pre.pos_mlp.2 | Linear | 65.5 K 206 | matcher.fine_process.cross_attn_f | WindowCrossAttention | 148 K 207 | matcher.fine_process.cross_attn_f.norm1 | LayerNorm | 256
208 | matcher.fine_process.cross_attn_f.norm2 | LayerNorm | 256
209 | matcher.fine_process.cross_attn_f.mlp | Mlp | 98.7 K 210 | matcher.fine_process.cross_attn_f.mlp.fc1 | Linear | 65.8 K 211 | matcher.fine_process.cross_attn_f.mlp.act | GELU | 0
212 | matcher.fine_process.cross_attn_f.mlp.fc2 | Linear | 32.9 K 213 | matcher.fine_process.cross_attn_f.cross_attn | CrossBidirectionalAttention | 49.2 K 214 | matcher.fine_process.cross_attn_f.cross_attn.qk_proj | Linear | 16.4 K 215 | matcher.fine_process.cross_attn_f.cross_attn.v_proj | Linear | 16.4 K 216 | matcher.fine_process.cross_attn_f.cross_attn.merge | Linear | 16.4 K 217 | matcher.fine_process.down_proj_m_f | Linear | 25.1 K 218 | matcher.out_proj | Linear | 516
219 | loss | XoFTRLossPretrain | 0
10.9 M    Trainable params
0         Non-trainable params
10.9 M    Total params
43.776    Total estimated model params size (MB)
Validation sanity check: 0it [00:00, ?it/s]
Traceback (most recent call last):
File "./pretrain.py", line 125, in <module>
main()
File "./pretrain.py", line 121, in main
trainer.fit(model, datamodule=data_module)
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
self.dispatch()
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
self.accelerator.start_training(self)
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
return self.run_train()
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in run_train
self.run_sanity_check(self.lightning_module)
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1107, in run_sanity_check
self.run_evaluation()
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 920, in run_evaluation
dataloaders, max_batches = self.evaluation_loop.get_evaluation_dataloaders()
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 63, in get_evaluation_dataloaders
self.trainer.reset_val_dataloader(model)
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py", line 409, in reset_val_dataloader
self.num_val_batches, self.val_dataloaders = self._reset_eval_dataloader(model, 'val')
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py", line 370, in _reset_eval_dataloader
num_batches = len(dataloader) if has_len(dataloader) else float('inf')
File "/home/tom/.conda/envs/xoftr/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py", line 33, in has_len
raise ValueError('`Dataloader` returned 0 length. Please make sure that it returns at least 1 batch')
ValueError: `Dataloader` returned 0 length. Please make sure that it returns at least 1 batch
```

Have you encountered a similar problem before, and how should it be handled? Thank you very much!
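For reference, the final ValueError is raised by PyTorch Lightning's length check on the validation dataloader: if the pretraining dataset indexes zero image pairs (for example because data/kaist-cvpr15 is missing or empty), len(dataloader) is 0 and the sanity check fails with exactly this message. A minimal, self-contained illustration of the mechanism, independent of the actual XoFTR data module:

```python
from torch.utils.data import Dataset, DataLoader

class EmptyPairDataset(Dataset):
    """Stands in for a dataset whose file glob matched nothing on disk."""
    def __init__(self):
        self.samples = []          # no image pairs were found

    def __len__(self):
        return len(self.samples)   # -> 0

    def __getitem__(self, idx):
        return self.samples[idx]

loader = DataLoader(EmptyPairDataset(), batch_size=1, num_workers=0)
print(len(loader))  # 0 -- this is the condition Lightning rejects during the sanity check
if len(loader) == 0:
    raise ValueError("`Dataloader` returned 0 length. Please make sure that it returns at least 1 batch")
```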