Closed TaoZappar closed 2 years ago
Before I can look into this, note that your code itself has an error:
Traceback (most recent call last):
File "hand.py", line 169, in <module>
landmarks, handness, score = model(inp)
File "/home/xxxxx/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "hand.py", line 98, in forward
x1 = self.block_1(x) # tj : torch.Size([1, 16, 112, 112])
File "/home/xxxxx/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "hand.py", line 50, in forward
x1 = self.layer_1(x)
File "/home/xxxxx/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "hand.py", line 33, in forward
x = self.conv(x)
File "/home/xxxxx/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xxxxx/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/xxxxx/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [24, 1, 3, 3], expected input[1, 3, 224, 224] to have 1 channels, but got 3 channels instead
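The RuntimeError suggests the first convolution was declared with in_channels=1 while the input tensor is RGB ([1, 3, 224, 224]). A minimal sketch of the mismatch and the likely fix (the layer names and sizes here are illustrative, not taken from hand.py):

```python
import torch
import torch.nn as nn

# Weight shape [24, 1, 3, 3] means the layer was declared with
# in_channels=1, so it rejects a 3-channel image tensor.
bad_conv = nn.Conv2d(in_channels=1, out_channels=24, kernel_size=3)

# Declaring the first convolution with in_channels=3 matches the input:
good_conv = nn.Conv2d(in_channels=3, out_channels=24, kernel_size=3)

inp = torch.randn(1, 3, 224, 224)
out = good_conv(inp)
print(out.shape)  # torch.Size([1, 24, 222, 222])
```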
I replied within five minutes of your post, but since you don't seem interested in following up, I'm going to close this issue.
Issue Type
Others
OS
Ubuntu
OS architecture
x86_64
Programming Language
Python
Framework
PyTorch
Description
Dear author
Thanks for your dedicated efforts. Recently I was trying to reimplement the ''hand_landmark.tflite'' model in PyTorch according to its graph in Netron. However, my reimplemented model is far larger than both the official one and the one in 033_Hand_Detection_and_Tracking: even after converting it to .onnx format it is still 220 MB, compared to 4.1 MB for model_float32.onnx. There must be something wrong in my reimplementation, so I was wondering if you could help me look into it. Thanks in advance. My code is attached at the end. A comparison of the two models is shown in the picture: the left is model_float32.onnx, the right is my reimplementation. Notice the size difference of the conv between the two clips.
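One common cause of this kind of size blow-up when hand-translating a TFLite graph into PyTorch is mapping a DepthwiseConv2D node to a plain nn.Conv2d without the groups= argument, which multiplies the weight count by the channel count. This is only a guess at the cause here; the channel count below is a hypothetical example, not taken from hand_landmark.tflite:

```python
import torch.nn as nn

def n_params(m):
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

channels = 96  # hypothetical channel count of a mid-network layer

# A TFLite DepthwiseConv2D corresponds to Conv2d with groups equal to
# the channel count: each input channel gets its own single 3x3 filter.
depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                      groups=channels, bias=False)

# Omitting groups= yields a full convolution whose weight tensor is
# `channels` times larger, inflating the exported model accordingly.
full = nn.Conv2d(channels, channels, kernel_size=3, bias=False)

print(n_params(depthwise))  # 96 * 1 * 3 * 3  = 864
print(n_params(full))       # 96 * 96 * 3 * 3 = 82944
```

Comparing the per-layer weight shapes in Netron between model_float32.onnx and the reimplementation should reveal quickly whether this is where the extra 200+ MB comes from.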
Relevant Log Output
No response
URL or source code for simple inference testing code
No response