Hi! I'm using the PyTorch implementation of YOLOv4 from Tianxiaomo. I prepared train.txt and val.txt as described in the README, but when I run the command python3 train.py -g 0 -dir dataset/train.txt, I get this error:
Traceback (most recent call last):
  File "train.py", line 623, in <module>
    train(model=model,
  File "train.py", line 379, in train
    bboxes_pred = model(images)
  File "/home/batu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/batu/Desktop/AESK/pytorch-YOLOv4/tool/darknet2pytorch.py", line 161, in forward
    x = self.models[ind](x)
  File "/home/batu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/batu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/batu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/batu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/batu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 128, 1, 1], expected input[4, 64, 418, 209] to have 128 channels, but got 64 channels instead
Following the AlexeyAB Darknet instructions, I edited only cfg.py and the yolov4.cfg file; I didn't change anything in the model itself. What should I do if I need to make changes to the model?
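For reference, here is a minimal sketch that reproduces the same RuntimeError in isolation (it makes no assumptions about the repo's code, only about what the error message says): the failing layer's weight is [64, 128, 1, 1], i.e. a 1x1 convolution expecting 128 input channels, but the tensor reaching it has only 64 channels. This is why I suspect my cfg edits broke the channel counts somewhere upstream.

```python
import torch
import torch.nn as nn

# A conv with weight shape [64, 128, 1, 1] expects 128 input channels.
conv = nn.Conv2d(in_channels=128, out_channels=64, kernel_size=1)

# The traceback shows an input of shape [4, 64, 418, 209]: only 64 channels.
x = torch.randn(4, 64, 418, 209)

try:
    conv(x)
except RuntimeError as e:
    # Same message as in the traceback:
    # "Given groups=1, weight of size [64, 128, 1, 1], expected input
    #  [4, 64, 418, 209] to have 128 channels, but got 64 channels instead"
    print(e)
```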