Closed · morenzoe closed this 4 months ago
Could you please copy and paste the full error message or upload a screenshot?
Hi, thank you for your reply. I attached a screenshot below and this is the full error message:
```
Running benchmark
  0%| | 0/1 [00:08<?, ?it/s]
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-11-a24b7539a050> in <cell line: 6>()
      4 # Benchmark expects the following:
      5 benchmark = Benchmark(model, test_set_loader, [], [], [static_metrics, workload_metrics])
----> 6 results = benchmark.run()
      7 print(results)

/usr/local/lib/python3.10/dist-packages/neurobench/benchmarks/benchmark.py in run(self, quiet, verbose, dataloader, preprocessors, postprocessors, device)
    115
    116 # Run model on test data
--> 117 preds = self.model(data[0])
    118
    119 for alg in postprocessors:

/usr/local/lib/python3.10/dist-packages/neurobench/models/torch_model.py in __call__(self, batch)
     31
     32 """
---> 33 return self.net(batch)
     34
     35 def __net__(self):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _wrapped_call_impl(self, *args, **kwargs)
   1530 return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1531 else:
-> 1532 return self._call_impl(*args, **kwargs)
   1533
   1534 def _call_impl(self, *args, **kwargs):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1539 or _global_backward_pre_hooks or _global_backward_hooks
   1540 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541 return forward_call(*args, **kwargs)
   1542
   1543 try:

.../SNN2.py in forward(self, x)
     49
     50 for sample in range(x.shape[0]):
---> 51 predictions.append(self.single_forward(x[sample, ...]))
     52
     53 predictions = torch.stack(predictions)

.../SNN2.py in single_forward(self, x)
     37 x = x.squeeze()  # convert shape (1, input_dim) to (input_dim)
     38 cur1 = self.dropout(self.fc1(x))
---> 39 spk1, self.mem1 = self.lif1(cur1, self.mem1)
     40
     41 cur2 = self.fc_out(spk1)

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _wrapped_call_impl(self, *args, **kwargs)
   1530 return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1531 else:
-> 1532 return self._call_impl(*args, **kwargs)
   1533
   1534 def _call_impl(self, *args, **kwargs):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1569 )
   1570 else:
-> 1571 args_result = hook(self, args)
   1572 if args_result is not None:
   1573 if not isinstance(args_result, tuple):

/usr/local/lib/python3.10/dist-packages/neurobench/benchmarks/hooks.py in pre_hook_fn(self, layer, input)
     50 self.activation_inputs.append(input)
     51 if self.spiking:
---> 52 self.pre_fire_mem_potential.append(layer.mem)
     53
     54 def hook_fn(self, layer, input, output):

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1707 if name in modules:
   1708 return modules[name]
-> 1709 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
   1710
   1711 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'Leaky' object has no attribute 'mem'
```
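For context on the failure: the pre-hook in `hooks.py` assumes every spiking layer exposes a `.mem` attribute, but the model above passes the membrane state explicitly (`self.lif1(cur1, self.mem1)`), so the snntorch `Leaky` layer may never set `mem` on itself. Below is a minimal defensive sketch of the idea, not the actual neurobench fix: the `SafeSpikeHook` class is hypothetical, and a plain `nn.Linear` stands in for the spiking neuron.

```python
import torch
import torch.nn as nn

class SafeSpikeHook:
    """Hypothetical hook that records a layer's membrane potential
    before each forward pass, without assuming `.mem` exists."""

    def __init__(self, layer):
        self.pre_fire_mem_potential = []
        # Call pre_hook_fn before every forward pass of the layer.
        self.handle = layer.register_forward_pre_hook(self.pre_hook_fn)

    def pre_hook_fn(self, layer, inputs):
        # getattr with a default returns None instead of raising the
        # AttributeError seen in the traceback above.
        mem = getattr(layer, "mem", None)
        if mem is not None:
            self.pre_fire_mem_potential.append(mem)

# A plain Linear layer standing in for a spiking neuron:
layer = nn.Linear(4, 4)
hook = SafeSpikeHook(layer)
_ = layer(torch.randn(2, 4))   # no `mem` attribute yet -> nothing recorded
layer.mem = torch.zeros(4)     # simulate stored membrane state
_ = layer(torch.randn(2, 4))   # now the state is captured
print(len(hook.pre_fire_mem_potential))
```

Whether skipping the record or appending a placeholder is the right behavior depends on how the benchmark aggregates membrane potentials; this sketch only shows how to avoid the crash.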
Hi @morenzoe, thank you for bringing this to our attention. I managed to solve the issue, once we review and merge this pull request it should be fixed!
Thank you so much! Looking forward to the update 😄
Bug fixed in the latest version, 1.0.5.
Hi, I got this error while running `benchmark.run()` in Primate_reaching_tutorial.ipynb on Google Colab. I also reproduced it by running `benchmark.run()` in a Kaggle notebook that had been running fine before. I am now using neurobench 1.0.4 and snntorch 0.7.0; the error did not happen with neurobench 1.0.3 and snntorch 0.7.0. I would be thankful for any assistance!
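Since the thread reports the error with neurobench 1.0.4 but not 1.0.3, a stopgap (assuming a pip-based install, as in Colab or Kaggle) is to pin the version combination reported to work, then upgrade once the fixed release is available:

```shell
# Pin to the combination reported to work in this thread
pip install neurobench==1.0.3 snntorch==0.7.0

# Once the fix is released (per the maintainer, in 1.0.5), upgrade instead:
pip install --upgrade "neurobench>=1.0.5"
```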