Open Po0ria opened 2 years ago
Hello,
Thanks for your interest in our work!
Indeed, sorry, I forgot to add the file to the repo. It should be good now; the `gesture_dataset.py` file is available.
Thank you for your reply. I am having dependency issues (specifically with the `torchmetrics` package) and would appreciate it if you could let me know the Python version and the versions of the required packages.
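In case it helps narrow things down, here is what I can infer about my own environment from the error log I post below. These pins are guesses from paths and warning text in the log, not confirmed versions:

```text
python 3.9                # from the site-packages path in the log
torch ~1.9                # the conda build stamp (June 2021) in the floor_divide warning
pytorch-lightning ~1.7    # the Trainer(gpus=...) message says "deprecated in v1.7"
torchmetrics (unknown)    # provides the Accuracy and ConfusionMatrix metrics
```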
Well, I managed to get the dataset ready, but I am now facing other issues with PyTorch Lightning. Here is my error log:
```
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/apex/pyprof/__init__.py:5: FutureWarning: pyprof will be removed by the end of June, 2022
  warnings.warn("pyprof will be removed by the end of June, 2022", FutureWarning)
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:445: LightningDeprecationWarning: Setting `Trainer(gpus=[0])` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=[0])` instead.
  rank_zero_deprecation(
Namespace(device=0, precision=16, b=64, sample_size=1500000, T=150, image_shape=(128, 128), dataset='dvsg', path='DvsGesture', model='sparse-snn', pretrained=None, lr=0.01, epochs=20, train=True, test=False, save_ckpt=True)
File loaded.
File loaded.
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
`Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used.
`Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used.
Missing logger folder: /afs/crc.nd.edu/user/p/ptaheri/Private/benchmarkSNN/sparse-spiking-neural-networks/lightning_logs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name               | Type            | Params
-------------------------------------------------
0 | train_acc          | Accuracy        | 0
1 | val_acc            | Accuracy        | 0
2 | test_acc           | Accuracy        | 0
3 | train_acc_by_class | Accuracy        | 0
4 | val_acc_by_class   | Accuracy        | 0
5 | test_acc_by_class  | Accuracy        | 0
6 | train_confmat      | ConfusionMatrix | 0
7 | val_confmat        | ConfusionMatrix | 0
8 | test_confmat       | ConfusionMatrix | 0
9 | model              | SparseSNN       | 13.9 K
-------------------------------------------------
13.9 K    Trainable params
0         Non-trainable params
13.9 K    Total params
0.028     Total estimated model params size (MB)

Sanity Checking: 0it [00:00, ?it/s]
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:219: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
Sanity Checking:   0%|          | 0/2 [00:00<?, ?it/s]
Sanity Checking DataLoader 0:   0%|          | 0/2 [00:00<?, ?it/s]
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /opt/conda/conda-bld/pytorch_1623448238472/work/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/utilities/data.py:86: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 3470962. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
Sanity Checking DataLoader 0:  50%|█████     | 1/2 [00:02<00:02, 2.91s/it]
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/utilities/data.py:86: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 3359203. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
Sanity Checking DataLoader 0: 100%|██████████| 2/2 [00:04<00:00, 2.17s/it]
```
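(Aside: the `LightningDeprecationWarning` near the top of this log is only cosmetic. The old `gpus` argument maps onto the new API as in this rough helper; the function name is mine, and it assumes pytorch-lightning >= 1.7 semantics.)

```python
def migrate_gpus_arg(gpus):
    """Translate the deprecated Trainer(gpus=...) argument into the
    accelerator/devices keywords that replace it in Lightning >= 1.7."""
    if not gpus:  # None, 0, or an empty list meant CPU training
        return {"accelerator": "cpu"}
    return {"accelerator": "gpu", "devices": gpus}

# Old call: Trainer(gpus=[0])  ->  new call: Trainer(**migrate_gpus_arg([0]))
new_kwargs = migrate_gpus_arg([0])
```

The `num_workers` `PossibleUserWarning` is likewise non-fatal and goes away if a `num_workers` value is passed to the `DataLoader` constructor.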
```
val accuracy: 10.16%
val confusion matrix:
Traceback (most recent call last):
  File "/afs/crc.nd.edu/user/p/ptaheri/Private/benchmarkSNN/sparse-spiking-neural-networks/classification.py", line 93, in
`self.log(val_confmat, tensor([[0, 0, 0, 0, 0, 2, 0, 8, 0, 0, 1], [1, 0, 0, 1, 1, 1, 0, 1, 6, 0, 0], [0, 2, 1, 0, 0, 0, 1, 4, 0, 0, 3], [0, 1, 0, 2, 2, 0, 0, 0, 5, 0, 1], [0, 1, 0, 0, 3, 0, 0, 0, 4, 0, 3], [0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 7], [0, 3, 0, 0, 0, 0, 1, 2, 0, 0, 5], [0, 0, 0, 4, 1, 1, 2, 2, 4, 1, 6], [0, 0, 1, 2, 1, 1, 0, 1, 2, 2, 0], [0, 0, 0, 2, 0, 0, 0, 1, 1, 0, 6], [1, 1, 1, 0, 1, 0, 0, 0, 4, 0, 2]], device='cuda:0'))` was called, but the tensor must have a single element. You can try doing `self.log(val_confmat, tensor([[0, 0, 0, 0, 0, 2, 0, 8, 0, 0, 1],
        [1, 0, 0, 1, 1, 1, 0, 1, 6, 0, 0],
        [0, 2, 1, 0, 0, 0, 1, 4, 0, 0, 3],
        [0, 1, 0, 2, 2, 0, 0, 0, 5, 0, 1],
        [0, 1, 0, 0, 3, 0, 0, 0, 4, 0, 3],
        [0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 7],
        [0, 3, 0, 0, 0, 0, 1, 2, 0, 0, 5],
        [0, 0, 0, 4, 1, 1, 2, 2, 4, 1, 6],
        [0, 0, 1, 2, 1, 1, 0, 1, 2, 2, 0],
        [0, 0, 0, 2, 0, 0, 0, 1, 1, 0, 6],
        [1, 1, 1, 0, 1, 0, 0, 0, 4, 0, 2]], device='cuda:0').mean())`
```
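The traceback is the actual blocker: Lightning's `self.log` only accepts single-element (scalar) values, and classification.py line 93 passes it the full 11x11 confusion matrix. A rough workaround, sketched in plain Python (the function and metric names are mine, not from the repo): reduce the matrix to scalar metrics and log those, keeping the full matrix for printing or saving.

```python
def confusion_matrix_to_scalars(confmat):
    """Reduce a confusion matrix (rows = true class, columns = predicted
    class) to scalar metrics that a logger like self.log can accept."""
    total = sum(sum(row) for row in confmat)
    correct = sum(confmat[i][i] for i in range(len(confmat)))
    metrics = {"val_acc": correct / total}
    for i, row in enumerate(confmat):
        support = sum(row)  # number of true samples of class i
        metrics[f"val_acc_class_{i}"] = confmat[i][i] / support if support else 0.0
    return metrics

# Example with a small 2-class matrix:
m = confusion_matrix_to_scalars([[8, 2], [1, 9]])
```

Inside the `LightningModule` this would be something like `for name, value in confusion_matrix_to_scalars(confmat.tolist()).items(): self.log(name, value)`, with the raw matrix printed or written to disk rather than passed to `self.log`. The `batch_size` inference warnings are separate and are silenced the way the message suggests, by passing `batch_size=` to `self.log`.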
I am trying to reproduce your results, but I can't find the `dvs_gesture_dataset` class or package that you imported in classification.py at line 11:

```python
from dvs_gesture_dataset import SparseDvsGestureDataset
```