Open MarserHK opened 2 days ago
@MarserHK There seems to be a batch in which none of the images has any bounding boxes. In that case this line returns more than 3 values and causes an error.
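One way to check for this is to scan the label files for empty ones. The sketch below is a hypothetical helper (the function name and directory layout are assumptions, not part of YOWOv3) that walks a directory of UCF-style .txt label files and reports any that contain no box lines:

```python
import os

def find_empty_labels(label_root):
    """Return paths of .txt label files under label_root that contain
    no bounding-box lines. (Illustrative helper; adapt label_root to
    your own dataset layout.)"""
    empty = []
    for dirpath, _, filenames in os.walk(label_root):
        for name in filenames:
            if not name.endswith(".txt"):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                # keep only non-blank lines
                lines = [ln for ln in f if ln.strip()]
            if not lines:
                empty.append(path)
    return empty
```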
Hello, I checked my label data and didn't find any empty .txt label files. Moreover, I ran the training again and it reached the 7th epoch, but I noticed that the error occurs every time the model is saved.
epoch : 7, update : 109, loss = 4.220024660229683
epoch : 7, update : 110, loss = 3.662436217069626
epoch : 7, update : 111, loss = 4.221729964017868
epoch : 7, update : 112, loss = 4.394728496670723
epoch : 7, update : 113, loss = 4.131817564368248
Traceback (most recent call last):
  File "main.py", line 16, in <module>
    train.train_model(config=config)
  File "/home/nvidia/fourT/hwt/YOWOv3-main/scripts/train.py", line 112, in train_model
    loss = criterion(outputs, targets) / acc_grad
  File "/home/nvidia/fourT/hwt/YOWOv3-main/utils/loss.py", line 102, in __call__
    target_bboxes, target_scores, fg_mask = self.assign(scores, bboxes,
ValueError: too many values to unpack (expected 3)
@MarserHK You should add a few lines to check whether this condition is satisfied. Maybe the transform pipeline dropped your annotations, and if batch_size is too small, this condition will cause an error.
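The suggested check could look something like the sketch below: a collate-time guard that drops samples whose augmentations removed every box, so a batch can never consist entirely of box-free images. This is an illustrative sketch, not the actual YOWOv3 collate function, and the (clip, boxes) batch layout is an assumption:

```python
def safe_collate(batch):
    """Keep only samples that still have at least one bounding box after
    the transform pipeline. `batch` is assumed to be a list of
    (clip, boxes) pairs. (Illustrative sketch, not YOWOv3 API.)"""
    kept = [(clip, boxes) for clip, boxes in batch if len(boxes) > 0]
    if not kept:
        # every sample lost its boxes: surface the problem early instead
        # of letting the loss assigner fail with an unpacking error
        raise ValueError("all samples in this batch have zero bounding boxes")
    return kept
```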
Thank you for your work! I have successfully trained, but I don't know how to perform inference on videos
@MarserHK You can convert your video into a sequence of frames and use the scripts/live.py code.
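A minimal way to do that conversion is sketched below, assuming OpenCV is available (pip install opencv-python) and that the frame reader expects zero-padded names such as 00001.jpg — check what live.py actually expects before relying on this:

```python
import os

def frame_name(index, ext="jpg"):
    # zero-padded frame names, e.g. 00001.jpg (assumed convention)
    return f"{index:05d}.{ext}"

def extract_frames(video_path, out_dir):
    """Decode video_path into out_dir/00001.jpg, 00002.jpg, ...
    Requires OpenCV (pip install opencv-python)."""
    import cv2  # imported lazily so frame_name works without OpenCV
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        count += 1
        cv2.imwrite(os.path.join(out_dir, frame_name(count)), frame)
    cap.release()
    return count
```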
"Hello, I created a dataset in UCF format, and encountered the following error during the 5th training round. How should I resolve this?"