TorchEnsemble-Community / Ensemble-Pytorch

A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.
https://ensemble-pytorch.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Ensembling Methods incompatible with snnTorch models #144

Open kgano-ucsd opened 1 year ago

kgano-ucsd commented 1 year ago

Hi,

I have been trying to set up GradientBoosting for an snnTorch model I am working on (it's mostly PyTorch in the background). However, I've run into a circular issue that I have yet to find a solution for:

Originally, my inputs for my train/test loaders for my feed forward snnTorch model were all dtype torch.float. I got this error:

     33 onehot = torch.zeros(label.size(0), n_classes).float().to(label.device)
---> 34 onehot.scatter_(1, label.view(-1, 1), 1)
     36 return onehot

RuntimeError: scatter(): Expected dtype int64 for index
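For context, `scatter_` strictly requires an `int64` (long) index tensor, so float-typed labels reproduce this error regardless of the model. A minimal sketch (shapes and class count are illustrative, not from the original code):

```python
import torch

n_classes = 3
label = torch.tensor([0, 2], dtype=torch.float)  # float labels, as in the original setup
onehot = torch.zeros(label.size(0), n_classes).float()

try:
    # Fails: scatter_ expects an int64 index tensor
    onehot.scatter_(1, label.view(-1, 1), 1)
except RuntimeError as e:
    print(e)  # scatter(): Expected dtype int64 for index

# Casting only the index tensor (not the inputs) avoids the error
onehot.scatter_(1, label.view(-1, 1).long(), 1)
print(onehot)
```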

In an attempt to fix this, I tried changing the dtype of all my inputs to torch.int64, but got this error:

    113 def forward(self, input: Tensor) -> Tensor:
--> 114     return F.linear(input, self.weight, self.bias)

RuntimeError: mat1 and mat2 must have the same dtype
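This second error is also reproducible in isolation: `nn.Linear` weights are `float32` by default, so passing an `int64` input to `F.linear` triggers a dtype mismatch. A small sketch (dimensions hypothetical):

```python
import torch
import torch.nn.functional as F

linear = torch.nn.Linear(4, 2)  # weights are float32 by default
x_int = torch.ones(1, 4, dtype=torch.int64)  # int64 inputs, as in the attempted fix

try:
    # Fails: input dtype (int64) does not match the weight dtype (float32)
    F.linear(x_int, linear.weight, linear.bias)
except RuntimeError as e:
    print(e)  # dtype-mismatch RuntimeError (e.g. "mat1 and mat2 must have the same dtype")
```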

For an input to require grad, the tensor must be floating point, so changing the dtype in my Linear layers doesn't help, either.
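For what it's worth, the usual PyTorch convention sidesteps both errors: keep the inputs floating point and only the class labels `int64`. A hedged sketch of how the loaders might be set up (shapes and class count are made up for illustration):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical data: float32 inputs for the network, int64 labels for scatter_/loss
X = torch.randn(8, 4)            # dtype torch.float32
y = torch.randint(0, 3, (8,))    # dtype torch.int64

loader = DataLoader(TensorDataset(X, y), batch_size=4)
for xb, yb in loader:
    # Float inputs satisfy F.linear; long labels satisfy scatter_
    assert xb.dtype == torch.float32 and yb.dtype == torch.int64
```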

What could be going wrong? Since snnTorch is an extension of PyTorch, I was hoping that Ensemble-Pytorch would also be compatible, but if there are some core compatibility issues, I understand. Thanks in advance!

Edit: To clarify, ensemble.fit exposes these issues.

xuyxu commented 1 year ago

Hi @kgano-ucsd, sorry for the late response. I think the reason is that the size of the one-hot encoded vector does not match the input dim of your model.
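If that diagnosis is right, the mismatch would show up as a matrix-shape error when the one-hot vectors reach a layer whose `in_features` differs from the number of classes. A hypothetical reproduction (all dimensions invented for illustration):

```python
import torch
import torch.nn.functional as F

n_classes = 10
layer = torch.nn.Linear(8, 4)  # in_features=8, but the one-hot vectors have 10 columns
onehot = torch.zeros(2, n_classes)

try:
    # Fails: mat1 (2x10) cannot be multiplied with mat2 (8x4 weight, transposed)
    F.linear(onehot, layer.weight, layer.bias)
except RuntimeError as e:
    print(e)  # shape-mismatch RuntimeError
```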