tteepe / GaitGraph2

Official code for "Towards a Deeper Understanding of Skeleton-based Gait Recognition" (CVPRW'22)

RuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 4 is not equal to len(dims) = 5 #24

Open HappyStupidChild opened 3 months ago

HappyStupidChild commented 3 months ago

(gh2) root@autodl-container-7d95439be5-49723134:~/GaitGraph2# python gaitgraph_casia_b.py --config GaitGraph/configs/casia_b.yaml

[ASCII-art startup banner: Chair of Human-Machine Communication, TUM School of Computation, Information and Technology, Technical University of Munich]

Global seed set to 5318008
Processing...
load [train]: 100% 1265749/1265749 [00:20<00:00, 62095.80it/s]
process [train]: 100% 8140/8140 [00:00<00:00, 14842.14it/s]
Done!
Processing...
load [test]: 100% 1265749/1265749 [00:16<00:00, 75690.93it/s]
process [test]: 100% 5498/5498 [00:00<00:00, 23693.01it/s]
Done!
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Missing logger folder: /root/GaitGraph2/lightning_logs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name       | Type            | Params
-------------------------------------------
0 | backbone   | ResGCN          | 350 K
1 | distance   | LpDistance      | 0
2 | train_loss | SupConLoss      | 0
3 | val_loss   | ContrastiveLoss | 0
-------------------------------------------

350 K     Trainable params
0         Non-trainable params
350 K     Total params
1.403     Total estimated model params size (MB)

Epoch 0:   0%|          | 0/11 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "gaitgraph_casia_b.py", line 306, in <module>
    cli_main()
  File "gaitgraph_casia_b.py", line 300, in cli_main
    cli.trainer.fit(cli.model, datamodule=cli.datamodule)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 771, in fit
    self._call_and_handle_interrupt(
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 724, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 812, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1237, in _run
    results = self._run_stage()
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1324, in _run_stage
    return self._run_train()
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1354, in _run_train
    self.fit_loop.run()
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
    result = self._run_optimization(
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1596, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1625, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
    return wrapped(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/optim/optimizer.py", line 373, in wrapper
    out = func(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/optim/adamw.py", line 161, in step
    loss = closure()
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure
    closure_result = closure()
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure
    step_output = self._step_fn()
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values())
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1766, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 333, in training_step
    return self.model.training_step(*args, **kwargs)
  File "gaitgraph_casia_b.py", line 71, in training_step
    y_hat = self(x)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "gaitgraph_casia_b.py", line 66, in forward
    return self.backbone(x)[0]
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/gh2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/GaitGraph2/GaitGraph/models/ResGCNv1/nets.py", line 61, in forward
    x = x.permute(0, 3, 4, 1, 2)
RuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 4 is not equal to len(dims) = 5
Epoch 0:   0%|          | 0/11 [00:08<?, ?it/s]

deliaiz commented 2 weeks ago

I ran into the same problem, and eventually got it to run by removing a parameter from the training transform:

transform_train = Compose([
    PadSequence(sequence_length),
    RandomFlipSequence(flip_sequence_p),
    RandomSelectSequence(sequence_length),
    ShuffleSequence(train_shuffle_sequence),
    RandomFlipLeftRight(flip_lr_p, flip_idx=self.graph.flip_idx),
    JointNoise(joint_noise),
    PointNoise(point_noise),
    RandomMove(random_move),
    MultiInput(self.graph.connect_joint, self.graph.center, enabled=multi_input),
    ToFlatTensor()
])
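For anyone debugging the same thing, one way to narrow it down is to pull a single batch from the datamodule and check its dimensionality before calling `trainer.fit`; the backbone's `permute(0, 3, 4, 1, 2)` only works on a 5-D tensor. This is a hypothetical debugging snippet, not part of the repo, and it assumes `datamodule` is the LightningDataModule built by the CLI in gaitgraph_casia_b.py:

```python
# Hypothetical debugging snippet: names/accessors may differ from the actual script.
datamodule.setup("fit")                      # may already have been called elsewhere
batch = next(iter(datamodule.train_dataloader()))
x = batch[0] if isinstance(batch, (list, tuple)) else batch

print(tuple(x.shape))
# ResGCN's forward calls x.permute(0, 3, 4, 1, 2), which requires x.dim() == 5;
# a 4-D batch here reproduces the RuntimeError from the original report.
assert x.dim() == 5, f"expected a 5-D model input, got {x.dim()}-D: {tuple(x.shape)}"
```

That makes it easy to check whether a given transform configuration (such as the Compose above) actually produces the extra dimension the model needs.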