loiccordone / sparse-spiking-neural-networks

Repository code for the IJCNN 2021 paper "Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks"

Missing "dvs_gesture_dataset" Class or Package #1

Open Po0ria opened 2 years ago

Po0ria commented 2 years ago

I am trying to reproduce your results, but I can't find the `dvs_gesture_dataset` class or package that you import in `classification.py` at line 11: `from dvs_gesture_dataset import SparseDvsGestureDataset`

loiccordone commented 2 years ago

Hello, thanks for your interest in our work! Indeed, sorry, I forgot to add the file to the repo. It should be fixed now: the `gesture_dataset.py` file is available.

Po0ria commented 2 years ago

Thank you for your reply. I am having dependency issues (more specifically with the `torch-metrics` package), and I would appreciate it if you could let me know the versions of Python and the required packages.

Po0ria commented 2 years ago

Well, I managed to get the dataset ready, but I am facing other issues with PyTorch Lightning. Here is my error log:

```
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/apex/pyprof/__init__.py:5: FutureWarning: pyprof will be removed by the end of June, 2022
  warnings.warn("pyprof will be removed by the end of June, 2022", FutureWarning)
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:445: LightningDeprecationWarning: Setting `Trainer(gpus=[0])` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=[0])` instead.
  rank_zero_deprecation(
Namespace(device=0, precision=16, b=64, sample_size=1500000, T=150, image_shape=(128, 128), dataset='dvsg', path='DvsGesture', model='sparse-snn', pretrained=None, lr=0.01, epochs=20, train=True, test=False, save_ckpt=True)
File loaded.
File loaded.
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
`Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used.
`Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used.
Missing logger folder: /afs/crc.nd.edu/user/p/ptaheri/Private/benchmarkSNN/sparse-spiking-neural-networks/lightning_logs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
```

```
  | Name               | Type            | Params
-------------------------------------------------
0 | train_acc          | Accuracy        | 0
1 | val_acc            | Accuracy        | 0
2 | test_acc           | Accuracy        | 0
3 | train_acc_by_class | Accuracy        | 0
4 | val_acc_by_class   | Accuracy        | 0
5 | test_acc_by_class  | Accuracy        | 0
6 | train_confmat      | ConfusionMatrix | 0
7 | val_confmat        | ConfusionMatrix | 0
8 | test_confmat       | ConfusionMatrix | 0
9 | model              | SparseSNN       | 13.9 K
-------------------------------------------------
13.9 K    Trainable params
0         Non-trainable params
13.9 K    Total params
0.028     Total estimated model params size (MB)
```

```
Sanity Checking: 0it [00:00, ?it/s]
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:219: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 24 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
```
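The `num_workers` warning above is only a performance hint. A minimal sketch of passing it through (the stand-in dataset and the sizes here are illustrative, not from the repo):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative stand-in dataset; the repo's SparseDvsGestureDataset would go here.
dataset = TensorDataset(torch.arange(8, dtype=torch.float32).unsqueeze(1))

# num_workers > 0 spawns worker processes that load batches in parallel;
# the warning suggests a value up to the machine's CPU count (24 on that node).
loader = DataLoader(dataset, batch_size=4, num_workers=2)

batches = [batch for (batch,) in loader]  # 8 samples / batch_size 4 -> 2 batches
```

Sparse event-camera batches often also need a custom `collate_fn`, which is passed to the same `DataLoader` constructor.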

```
Sanity Checking: 0%| | 0/2 [00:00<?, ?it/s]
Sanity Checking DataLoader 0: 0%| | 0/2 [00:00<?, ?it/s]
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /opt/conda/conda-bld/pytorch_1623448238472/work/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/utilities/data.py:86: UserWarning: Trying to infer the batch_size from an ambiguous collection. The batch size we found is 3470962. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
```
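The `floor_divide` deprecation above is also just a warning, but the replacement it suggests is worth spelling out, since truncation and flooring differ for negative values:

```python
import torch

a = torch.tensor([-7, 7])
b = torch.tensor([2, 2])

# What floor_divide currently does: round toward zero ('trunc').
trunc = torch.div(a, b, rounding_mode="trunc")  # -7/2 -> -3, 7/2 -> 3

# True floor division, which the warning recommends when flooring
# (e.g. for coordinate downsampling) is actually intended.
floor = torch.div(a, b, rounding_mode="floor")  # -7/2 -> -4, 7/2 -> 3
```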

```
Sanity Checking DataLoader 0: 50%|█████ | 1/2 [00:02<00:02, 2.91s/it]
/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/utilities/data.py:86: UserWarning: Trying to infer the batch_size from an ambiguous collection. The batch size we found is 3359203. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
```

```
Sanity Checking DataLoader 0: 100%|██████████| 2/2 [00:04<00:00, 2.17s/it]
val accuracy: 10.16%
val confusion matrix:
Traceback (most recent call last):
  File "/afs/crc.nd.edu/user/p/ptaheri/Private/benchmarkSNN/sparse-spiking-neural-networks/classification.py", line 93, in <module>
    main()
  File "/afs/crc.nd.edu/user/p/ptaheri/Private/benchmarkSNN/sparse-spiking-neural-networks/classification.py", line 88, in main
    trainer.fit(module, train_dataloader, test_dataloader)
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
    self._call_and_handle_interrupt(
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 737, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1168, in _run
    results = self._run_stage()
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1254, in _run_stage
    return self._run_train()
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1276, in _run_train
    self._run_sanity_check()
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1345, in _run_sanity_check
    val_loop.run()
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 207, in run
    output = self.on_run_end()
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 187, in on_run_end
    self._on_evaluation_epoch_end()
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 299, in _on_evaluation_epoch_end
    self.trainer._call_lightning_module_hook(hook_name)
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1552, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/afs/crc.nd.edu/user/p/ptaheri/Private/benchmarkSNN/sparse-spiking-neural-networks/classification_module.py", line 110, in on_validation_epoch_end
    self.on_mode_epoch_end(mode="val")
  File "/afs/crc.nd.edu/user/p/ptaheri/Private/benchmarkSNN/sparse-spiking-neural-networks/classification_module.py", line 98, in on_mode_epoch_end
    self.log(f'{mode}_confmat', confmat)
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/core/module.py", line 415, in log
    apply_to_collection(value, torch.Tensor, self.__check_numel_1, name)
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/utilities/apply_func.py", line 100, in apply_to_collection
    return function(data, *args, **kwargs)
  File "/afs/crc.nd.edu/user/p/ptaheri/.conda/envs/sparse-SNN/lib/python3.9/site-packages/pytorch_lightning/core/module.py", line 553, in __check_numel_1
    raise ValueError(
ValueError: `self.log(val_confmat, tensor([[0, 0, 0, 0, 0, 2, 0, 8, 0, 0, 1],
        [1, 0, 0, 1, 1, 1, 0, 1, 6, 0, 0],
        [0, 2, 1, 0, 0, 0, 1, 4, 0, 0, 3],
        [0, 1, 0, 2, 2, 0, 0, 0, 5, 0, 1],
        [0, 1, 0, 0, 3, 0, 0, 0, 4, 0, 3],
        [0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 7],
        [0, 3, 0, 0, 0, 0, 1, 2, 0, 0, 5],
        [0, 0, 0, 4, 1, 1, 2, 2, 4, 1, 6],
        [0, 0, 1, 2, 1, 1, 0, 1, 2, 2, 0],
        [0, 0, 0, 2, 0, 0, 0, 1, 1, 0, 6],
        [1, 1, 1, 0, 1, 0, 0, 0, 4, 0, 2]], device='cuda:0'))` was called, but the tensor must have a single element. You can try doing `self.log(val_confmat, tensor([[0, 0, 0, 0, 0, 2, 0, 8, 0, 0, 1], [1, 0, 0, 1, 1, 1, 0, 1, 6, 0, 0], [0, 2, 1, 0, 0, 0, 1, 4, 0, 0, 3], [0, 1, 0, 2, 2, 0, 0, 0, 5, 0, 1], [0, 1, 0, 0, 3, 0, 0, 0, 4, 0, 3], [0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 7], [0, 3, 0, 0, 0, 0, 1, 2, 0, 0, 5], [0, 0, 0, 4, 1, 1, 2, 2, 4, 1, 6], [0, 0, 1, 2, 1, 1, 0, 1, 2, 2, 0], [0, 0, 0, 2, 0, 0, 0, 1, 1, 0, 6], [1, 1, 1, 0, 1, 0, 0, 0, 4, 0, 2]], device='cuda:0').mean())`
```
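The `ValueError` itself is not specific to the model: `self.log()` in PyTorch Lightning only accepts single-element tensors, so an NxN confusion matrix cannot be logged directly. One possible workaround (the helper name below is mine, not from the repo) is to print or save the full matrix and log only scalar reductions of it, e.g. per-class recall:

```python
import torch

def confmat_to_scalars(confmat: torch.Tensor) -> dict:
    """Reduce an NxN confusion matrix to loggable per-class recalls."""
    # Diagonal = correct predictions per class; row sums = true counts per class.
    # clamp(min=1) guards against division by zero for classes with no samples.
    per_class = confmat.diag().float() / confmat.sum(dim=1).clamp(min=1)
    return {f"recall_class_{i}": v.item() for i, v in enumerate(per_class)}

# Inside on_mode_epoch_end one might then do, roughly:
#   print(confmat)  # the full matrix can still be printed or saved to disk
#   for name, value in confmat_to_scalars(confmat).items():
#       self.log(f"{mode}_{name}", value)

scalars = confmat_to_scalars(torch.tensor([[3, 1], [0, 4]]))
```

Each value in `scalars` is a plain Python float, so every entry satisfies the single-element requirement of `self.log()`.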