GlassyWing / yolo_deepsort

Fast MOT based on yolo+deepsort; supports yolo3 and yolo4
GNU General Public License v3.0

Tracking more than 80 classes? #5

Closed flowzen1337 closed 4 years ago

flowzen1337 commented 4 years ago

Hi @GlassyWing ,

I've managed to use your git project (working fine, nice work! :-) ), but I've got the problem that only class ID 0 is tracked (I've got 142 trained classes).

I'm using my custom yolov4 weights file and the corresponding config.

Problem a.) When I set classes=142 in my cfg, it throws an error:

  File "/usr/local/src/yolo_deepsort/yolo3/models/models.py", line 199, in forward
    prediction = x.view(num_samples, self.num_anchors, self.num_classes + 5, *grid_size).permute(0, 1, 3, 4, 2)
RuntimeError: shape '[1, 3, 147, 108, 108]' is invalid for input of size 2974320

Problem b is solved, so please ignore b) :-)

Can you give me some input on how I can achieve that with your code, or am I blind somewhere?

Thanks & Cheers! :-)

Problem b.) - [ Solved ] -> I had to reduce '...nms_thres=0.4...' down to '...nms_thres=0.1...'

So when I use classes=80 from the default COCO cfg it's working fine. But my main problem is that I want to track all my classes, not only class ID 0. I've already set class_mask in video_deepsort.py, but still no success (same problem when I use the default yolov4.weights and the config for the COCO dataset):

video_detector = VideoDetector(model, .... class_mask=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142] ....

GlassyWing commented 4 years ago


Hi, you need to set all the classes parameters in the configuration file to 142. At the same time, note that you need to change every filters=255 entry (in the [convolutional] layer before each [yolo] layer) to (num_anchors // num_yolo_layers) * (num_classes + 5). In your case, this value should be 441.
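For reference, a minimal sketch of that arithmetic (plain Python, assuming the standard YOLOv4 head of 9 anchors split over 3 [yolo] layers), which also matches the numbers in the RuntimeError above:

```python
# Filter count for each [convolutional] layer that feeds a [yolo] layer.
num_anchors = 9          # total anchors listed in the cfg
num_yolo_layers = 3      # YOLOv4 has three [yolo] heads
anchors_per_layer = num_anchors // num_yolo_layers  # 3

filters_142 = anchors_per_layer * (142 + 5)  # 441 -> needed for 142 classes
filters_80 = anchors_per_layer * (80 + 5)    # 255 -> the default COCO value

# The RuntimeError is consistent with this mismatch: on a 108x108 grid the
# loaded 80-class weights produce 255 * 108 * 108 = 2_974_320 values, while
# classes=142 makes the view() expect 3 * 147 * 108 * 108 = 5_143_824.
print(filters_142, filters_80)
```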

flowzen1337 commented 4 years ago

Hi @GlassyWing ,

Ahh, the "filters=..." setting, how could I forget that, thanks! I've read "How to train custom dataset" from AlexeyAB's darknet implementation a thousand times, used it for my custom trainings, and still forgot it... :-)

So I'm actually using only 90 classes in my weights file (sorry, I was a little confused with my other project, where I've got 142 classes).

So in that case (9 anchors / 3 yolo layers) * (90 classes + 5) = 285 filters. I've changed filters=255 to filters=285 accordingly in all three [convolutional] layers before each [yolo] layer, and classes=80 to classes=90, but still the same problem :-(

I've also copied my *.cfg file that I used for my custom training (everything is correctly set in there, otherwise the training wouldn't work) into the "config/" directory and changed the following in "video_deepsort.py":

model = Darknet("config/my_custom.cfg", img_size=(416, 416))
model.load_darknet_weights("weights/my_custom.weights")
video_detector = VideoDetector(model, "config/my_custom.names"

and still getting the same error :-(

Traceback (most recent call last):
  File "./video_deepsort.py", line 21, in <module>
    model.load_darknet_weights("weights/my_custom.weights")
  File "/usr/local/src/yolo_deepsort/yolo3/models/models.py", line 363, in load_darknet_weights
    conv_w = torch.from_numpy(weights[ptr: ptr + num_w]).view_as(conv_layer.weight)
RuntimeError: shape '[285, 1024, 1, 1]' is invalid for input of size 237990

My yolov4.cfg file:

[net]
batch=64
subdivisions=8
# Training
#width=512
#height=512
width=416
height=416
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.0013
burn_in=1000
max_batches = 500500
policy=steps
steps=400000,450000
scales=.1,.1
mosaic=1
...

[convolutional]
size=1
stride=1
pad=1
filters=285
activation=linear
...

[yolo]
mask = 0,1,2
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=90
num=9
...

[convolutional]
size=1
stride=1
pad=1
filters=285
activation=linear
...

[yolo]
mask = 3,4,5
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=90
num=9
...

[convolutional]
size=1
stride=1
pad=1
filters=285
activation=linear
...

[yolo]
mask = 6,7,8
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=90
num=9
...
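One quick way to rule out a cfg typo is a consistency check over the file itself. This is not part of yolo_deepsort, just a hypothetical helper (check_cfg) sketched under the assumption of a standard Darknet cfg layout, where the [convolutional] block directly before each [yolo] block carries the filters value:

```python
# Sketch: verify filters/classes consistency in a Darknet cfg (hypothetical helper).
def check_cfg(path):
    blocks, current = [], None
    with open(path) as f:
        for line in f:
            line = line.split('#')[0].strip()   # drop comments and blanks
            if not line:
                continue
            if line.startswith('['):
                current = {'type': line.strip('[]')}
                blocks.append(current)
            elif '=' in line and current is not None:
                key, value = (s.strip() for s in line.split('=', 1))
                current[key] = value

    yolo_idxs = [i for i, b in enumerate(blocks) if b['type'] == 'yolo']
    for i in yolo_idxs:
        conv = blocks[i - 1]                    # conv layer feeding this yolo head
        yolo = blocks[i]
        expected = (int(yolo['num']) // len(yolo_idxs)) * (int(yolo['classes']) + 5)
        print(f"yolo@{i}: filters={conv['filters']} expected={expected}")

check_cfg("config/my_custom.cfg")
```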

GlassyWing commented 4 years ago

Hi, I tested it with classes=90 and filters=285, and there was no problem. It seems your weights are not compatible with the configuration; try testing it again without loading the weights.
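For that last suggestion, a rough sketch of what "test it without loading weights" could look like (assuming the Darknet class from yolo3/models/models.py behaves like a regular PyTorch module; the dummy input size is just an example):

```python
import torch

from yolo3.models.models import Darknet

# Build the network from the cfg alone, skipping load_darknet_weights(),
# to check whether the cfg itself is internally consistent.
model = Darknet("config/my_custom.cfg", img_size=(416, 416))
model.eval()

with torch.no_grad():
    dummy = torch.zeros(1, 3, 416, 416)  # one blank RGB frame
    outputs = model(dummy)               # should run without shape errors
    print("forward pass OK:", type(outputs))
```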