wswxyq / transformer_EE

A transformer encoder based neutrino energy estimator.
MIT License

RuntimeError: Tensors must have same number of dimensions: got 2 and 1 #1

Closed: joshuabarrow221 closed this issue 3 weeks ago

joshuabarrow221 commented 2 months ago

I've gone into transformerEncoder.py and added a few print statements. I'm not sure whether they help, but here is the output:

.
.
.
    Original Output shape:  torch.Size([1024, 5])
    Original y shape before squeeze:  torch.Size([1024, 16])
        Final y shape after squeeze:  torch.Size([1024, 16])
        Final concatenated Output shape:  torch.Size([1024, 21])
Epoch: 0, batch: 734 Loss: nan
    Original Output shape:  torch.Size([1024, 5])
    Original y shape before squeeze:  torch.Size([1024, 16])
        Final y shape after squeeze:  torch.Size([1024, 16])
        Final concatenated Output shape:  torch.Size([1024, 21])
Epoch: 0, batch: 735 Loss: nan
    Original Output shape:  torch.Size([294, 5])
    Original y shape before squeeze:  torch.Size([294, 16])
        Final y shape after squeeze:  torch.Size([294, 16])
        Final concatenated Output shape:  torch.Size([294, 21])
Epoch: 0, batch: 736 Loss: nan
    Original Output shape:  torch.Size([256, 5])
    Original y shape before squeeze:  torch.Size([256, 16])
        Final y shape after squeeze:  torch.Size([256, 16])
        Final concatenated Output shape:  torch.Size([256, 21])
Epoch: 0, batch: 0 Loss: nan
    Original Output shape:  torch.Size([256, 5])
    Original y shape before squeeze:  torch.Size([256, 16])
        Final y shape after squeeze:  torch.Size([256, 16])
        Final concatenated Output shape:  torch.Size([256, 21])
Epoch: 0, batch: 1 Loss: nan
    Original Output shape:  torch.Size([256, 5])
    Original y shape before squeeze:  torch.Size([256, 16])
        Final y shape after squeeze:  torch.Size([256, 16])
        Final concatenated Output shape:  torch.Size([256, 21])
.
.
.
Epoch: 0, batch: 153 Loss: nan
    Original Output shape:  torch.Size([256, 5])
    Original y shape before squeeze:  torch.Size([256, 16])
        Final y shape after squeeze:  torch.Size([256, 16])
        Final concatenated Output shape:  torch.Size([256, 21])
Epoch: 0, batch: 154 Loss: nan
    Original Output shape:  torch.Size([1, 5])
    Original y shape before squeeze:  torch.Size([1, 16])
        Final y shape after squeeze:  torch.Size([16])
Traceback (most recent call last):
  File "/home/jbarrow/MLProject2/transformer_EE/train_GENIEv3-0-6-Honda-Truth-hA-LFG_wLeptonScalars.py", line 25, in <module>
    my_trainer.train()
  File "/home/jbarrow/MLProject2/transformer_EE/transformer_ee/train/train.py", line 179, in train
    Netout = self.net.forward(
             ^^^^^^^^^^^^^^^^^
  File "/home/jbarrow/MLProject2/transformer_EE/transformer_ee/model/transformerEncoder.py", line 84, in forward
    output = torch.cat((output, torch.squeeze(y)), 1)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Tensors must have same number of dimensions: got 2 and 1
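
For what it's worth, the failure can be reproduced outside the project: torch.squeeze removes every size-1 dimension, so for a final partial batch of size 1 the target y collapses from [1, 16] to a 1-D [16] tensor while output stays 2-D at [1, 5], and torch.cat(..., 1) then raises exactly this error. A minimal standalone sketch (not the project code, just the shapes from the log above):

    import torch

    # Full batch: squeeze is a no-op on [256, 16], so cat along dim 1 works.
    output = torch.randn(256, 5)
    y = torch.randn(256, 16)
    print(torch.cat((output, torch.squeeze(y)), 1).shape)  # torch.Size([256, 21])

    # Size-1 batch: squeeze drops the batch dimension and y becomes 1-D.
    output = torch.randn(1, 5)
    y = torch.randn(1, 16)
    try:
        torch.cat((output, torch.squeeze(y)), 1)
    except RuntimeError as e:
        print(e)  # Tensors must have same number of dimensions: got 2 and 1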


Solution: In transformerEncoder.py, replace the line output = torch.cat((output, torch.squeeze(y)), 1) with output = torch.cat((output, y), 1)
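
Since y already comes in 2-D with the batch dimension first ([N, 16] in the log above), the squeeze is unnecessary and the plain concatenation handles every batch size, including the size-1 remainder. A quick sanity check, assuming the same shapes as the log:

    import torch

    for n in (256, 1):  # a full batch and the problematic size-1 batch
        output = torch.randn(n, 5)
        y = torch.randn(n, 16)
        combined = torch.cat((output, y), 1)  # proposed replacement line
        print(combined.shape)  # torch.Size([256, 21]), then torch.Size([1, 21])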

joshuabarrow221 commented 2 months ago

With this change, training now appears to progress through the epochs correctly.

wswxyq commented 3 weeks ago

Fixed in commit 01703b0afb69c2d0a1cb4b59e9b85bde58183a4c.