AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

[feature request] anti-aliasing within the network ~+1-2% Top1 #3672

Closed LukeAI closed 3 years ago

LukeAI commented 5 years ago

This technique is reported to give a small, "free" boost to accuracy, mitigating aliasing effects within the network: https://github.com/adobe/antialiased-cnns

AlexeyAB commented 5 years ago
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16

Maybe it is better to use tri-3: Triangle-3 blurring with coefficients [1, 2, 1] (bilinear downsampling):

1 2 1
2 4 2
1 2 1



bin-5 - Binomial-5 - just take a 5x5 window, multiply elements by these values along x and y [1., 4., 6., 4., 1.], and divide the result by 256: https://github.com/adobe/antialiased-cnns/blob/430d54870a2c1c5b258fd38f5f796df44aefee79/models_lpf/__init__.py#L39

Read - Page 1, first table, Index N = 4: http://web.archive.org/web/20100621232359/http://www-personal.engin.umd.umich.edu/~jwvm/ece581/21_GBlur.pdf

kernel_size = 5, stride = 1, coefficient weights:

 1  4  6  4  1
 4 16 24 16  4
 6 24 36 24  6
 4 16 24 16  4
 1  4  6  4  1
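
As an aside, a minimal sketch in C (not the darknet source; make_blur_kernel is a hypothetical helper) of how these kernels come from the outer product of a 1-D binomial row, normalized so the weights sum to 1: [1, 2, 1] gives the tri-3 kernel divided by 16, [1, 4, 6, 4, 1] gives the bin-5 kernel divided by 256.

```c
#include <stdio.h>

/* Hypothetical helper (not part of darknet): builds an n x n blur kernel
   as the outer product of a 1-D binomial row, normalized to sum to 1.
   row = [1,2,1]       -> tri-3 kernel (sum 16)
   row = [1,4,6,4,1]   -> bin-5 kernel (sum 256) */
static void make_blur_kernel(const float *row, int n, float *kernel)
{
    float sum = 0.f;
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            kernel[y*n + x] = row[y] * row[x];
            sum += kernel[y*n + x];
        }
    for (int i = 0; i < n*n; ++i) kernel[i] /= sum;   /* e.g. /16 or /256 */
}

int main(void)
{
    const float tri3[] = {1, 2, 1};
    float k[9];
    make_blur_kernel(tri3, 3, k);
    for (int y = 0; y < 3; ++y)
        printf("%.4f %.4f %.4f\n", k[y*3 + 0], k[y*3 + 1], k[y*3 + 2]);
    return 0;
}
```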

https://richzhang.github.io/antialiased-cnns/resources/antialias_mod.jpg


Different blurs are used:



https://richzhang.github.io/antialiased-cnns/resources/imagenet_ind2_noalex.jpg

LukeAI commented 5 years ago

you added "antialiasing=1" to convolutional layers? Awesome! So I can test it by adding that parameter to every [convolutional] layer throughout the .cfg?

AlexeyAB commented 5 years ago

@LukeAI Yes.


I think some of these features should solve the problem of re-identification (blinking-issue).

LukeAI commented 5 years ago

ok, I'll wait until you have added antialiasing to maxpool before I retrain.

AlexeyAB commented 5 years ago

@LukeAI Just to check that we understand it the same way: should we use antialiasing=1 for every stride=2 layer except the 1st stride=2 layer?

AlexeyAB commented 5 years ago

@LukeAI I added antialiasing=1 for [maxpool] with stride>1 or stride_x>1 or stride_y>1
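
For reference, a hedged example of what this looks like in a cfg file (the filter count and activation are illustrative, not taken from any shipped cfg): every downsampling [convolutional] or [maxpool] section (except, per the discussion above, the first stride=2 layer) just gets the extra antialiasing=1 line.

```
[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=leaky
antialiasing=1

[maxpool]
size=2
stride=2
antialiasing=1
```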

LukeAI commented 5 years ago

Ok, well, I have done as you suggested (see attached) and am trying it out now. I hadn't realised that almost all of yolov3-spp is stride=1, so I guess this won't make too much difference, but I'll let you know. yolo_v3_spp_antialias.cfg.txt

WongKinYiu commented 5 years ago

ImageNet: BFLOPs 0.858, Top-1 56.3 (expected value is ~60), Top-5 79.5
andarknet-imagenet_final.zip

ImageNet: BFLOPs 0.970, Top-1 54.5 (expected value is ~60), Top-5 77.9
andarknet.zip

I also trained another two DenseNet-based models. All of these models get worse results after adding antialiasing=1.

Kyuuki93 commented 5 years ago

@AlexeyAB @LukeAI Hi, did you try to set random=1 when you added antialiasing=1? It seems to be a bug: when both random and antialiasing are set to 1, even setting subdivisions=64 gives 'out of CUDA memory', but with the max image size (e.g. 608 in my case) and random=0 the training works normally.

LukeAI commented 5 years ago

random=1 does indeed increase the memory requirements, so this probably isn't a bug. If you want to use random=1, try decreasing the training resolution; you can always increase it again at inference time.
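
A hedged sketch of that workaround (the resolutions are illustrative only): train with a smaller base size in the [net] section so random=1 fits in memory, then raise it for inference.

```
[net]
# training: smaller base resolution so random=1 fits in memory
width=416
height=416
random=1

# inference: edit the same cfg afterwards, e.g.
# width=608
# height=608
# random=0
```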

LukeAI commented 5 years ago

I just tried training with antialiasing=1 in convolutional layers with stride=2, except for the very first one. I found that it made no real difference. With antialiasing: chart_antialias; without: chart

AlexeyAB commented 5 years ago

@LukeAI What dataset did you use? And what model did you use?

LukeAI commented 5 years ago

It was a private urban roads dataset. yolo_v3_spp_scale_swish_aa.cfg.txt

AlexeyAB commented 5 years ago

@LukeAI Also try it with cfg/weights trained without antialiasing=1:

  1. check mAP
  2. add antialiasing=1 and check mAP again without retraining; will the mAP be higher?
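
For anyone repeating this comparison, it can be run with darknet's existing map command; the file names below are placeholders:

```
./darknet detector map obj.data yolov3-spp.cfg yolov3-spp_final.weights
# add antialiasing=1 to the stride=2 layers of the cfg (no retraining needed,
# since the blur weights are fixed), then measure again:
./darknet detector map obj.data yolov3-spp-aa.cfg yolov3-spp_final.weights
```
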
LukeAI commented 5 years ago

I have tried doing so; adding antialiasing=1 led to broadly worse results, mostly weaker recall. Model trained and evaluated without aa:

class_id = 0, name = Car, ap = 52.54%, Precision = 0.61, Recall = 0.51, avg IOU = 0.49%, TP = 233, FP = 149
class_id = 1, name = Person, ap = 77.06%, Precision = 0.93, Recall = 0.76, avg IOU = 0.69%, TP = 358, FP = 29
class_id = 2, name = Truck, ap = 63.67%, Precision = 0.75, Recall = 0.61, avg IOU = 0.58%, TP = 476, FP = 161
class_id = 3, name = Traffic_light, ap = 56.18%, Precision = 0.56, Recall = 0.68, avg IOU = 0.38%, TP = 116, FP = 92
class_id = 4, name = Trailer, ap = 71.56%, Precision = 0.83, Recall = 0.67, avg IOU = 0.66%, TP = 268, FP = 56

 for conf_thresh = 0.10, precision = 0.75, recall = 0.64, F1-score = 0.69 
 for conf_thresh = 0.10, TP = 1451, FP = 487, FN = 826, average IoU = 57.76 % 

 IoU threshold = 50 %, used Area-Under-Curve for each unique Recall 
 mean average precision (mAP@0.50) = 0.642031, or 64.20 %

Same model and weights, with aa added to the cfg:

class_id = 0, name = Car, ap = 44.06%, Precision = 0.92, Recall = 0.41, avg IOU = 0.74%, TP = 188, FP = 17
class_id = 1, name = Person, ap = 62.32%, Precision = 0.92, Recall = 0.54, avg IOU = 0.66%, TP = 254, FP = 21
class_id = 2, name = Truck, ap = 39.23%, Precision = 0.76, Recall = 0.33, avg IOU = 0.58%, TP = 260, FP = 83
class_id = 3, name = Traffic_light, ap = 34.33%, Precision = 0.48, Recall = 0.48, avg IOU = 0.33%, TP = 81, FP = 88
class_id = 4, name = Trailer, ap = 54.51%, Precision = 0.83, Recall = 0.48, avg IOU = 0.65%, TP = 190, FP = 39

 for conf_thresh = 0.10, precision = 0.80, recall = 0.43, F1-score = 0.56 
 for conf_thresh = 0.10, TP = 973, FP = 248, FN = 1304, average IoU = 60.17 % 

 IoU threshold = 50 %, used Area-Under-Curve for each unique Recall 
 mean average precision (mAP@0.50) = 0.468911, or 46.89 %
AlexeyAB commented 5 years ago

So maybe it doesn't give any advantage for this dataset.

Did you check the mAP on a separate validation dataset?

AlexeyAB commented 4 years ago

@LukeAI I added antialiasing=2 so you can try to use it. It uses 2x2 filters instead of 3x3 filters. There are also several changes:

LukeAI commented 4 years ago

Hey, I'll give this another go when I get GPU time - so I should add antialiasing=2 to all conv layers with stride=2 except the first one?

AlexeyAB commented 4 years ago

@LukeAI Yes. But I don't know whether it will bring any improvement in mAP.

I think it is better to try the iou_thresh=0.3 param in [yolo] layers.
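
A hedged example of that suggestion: only the iou_thresh line is the point, the other keys stay whatever the existing [yolo] sections already use, and as I understand it iou_thresh makes the layer assign every anchor with IoU above the threshold to a ground truth instead of only the single best anchor.

```
[yolo]
mask = 6,7,8
# anchors, classes, num, etc. unchanged from the existing cfg
iou_thresh=0.3
```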

AlexeyAB commented 4 years ago

@WongKinYiu Did you try AntiAliasing, and did you get any boost? Did I misunderstand it (description of my understanding: https://github.com/AlexeyAB/darknet/issues/3672#issuecomment-515779175), or is the +1-2% Top1 from AntiAliasing just fake?

WongKinYiu commented 4 years ago

@AlexeyAB

No, I did not get any boost in my experiments. https://github.com/AlexeyAB/darknet/issues/3672#issuecomment-533883993

I think it is because we use shift-based data augmentation (random crop).

AlexeyAB commented 4 years ago

@WongKinYiu Yes, random-crop solves the shift-issue. Random-crop allows the network to remember all shifts. But I thought that maybe antialiasing=1 would not require remembering shifts, so accuracy would stay the same while requiring fewer filters. But it seems antialiasing=1 even decreases accuracy: https://github.com/AlexeyAB/darknet/issues/3672#issuecomment-533883993

WongKinYiu commented 4 years ago

@AlexeyAB Yes, it seems to decrease accuracy in this implementation. I think we need to do the corresponding back-propagation of anti-aliasing.

AlexeyAB commented 4 years ago

@WongKinYiu

There is back-propagation for anti-aliasing.

It's just that a bug was fixed on 26 Oct: https://github.com/AlexeyAB/darknet/commit/29c71a190acb82aa4beda8762e087b658f4b0347 https://github.com/AlexeyAB/darknet/blob/213b82a1bd5ea6b0679c28fc1a78932453c4766e/src/convolutional_kernels.cu#L628-L642

WongKinYiu commented 4 years ago

@AlexeyAB Oh!

My models were trained before 22 Sep, so maybe I should retrain the models to get accurate results.

And do you think we need the corresponding back-propagation for anti-aliasing pooling? For example, global avgpool does

state.delta[in_index] += l.delta[out_index] / (l.h*l.w);

then global anti-aliasing needs to do

state.delta[in_index] += l.delta[out_index] * (blur_mask[i] / sum_of_blur_mask);

For normal anti-aliasing we also need to do the corresponding back-propagation.
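
A minimal sketch in plain C (not darknet's actual kernels) of the difference being described: global average pooling spreads the output delta uniformly in its backward pass, while a blur ("anti-aliased") pooling window should weight each input's share by its normalized blur coefficient.

```c
/* Backward of global average pooling over an h*w window:
   every input gets an equal 1/(h*w) share of the output delta. */
void global_avgpool_backward(float out_delta, float *in_delta, int h, int w)
{
    for (int i = 0; i < h * w; ++i)
        in_delta[i] += out_delta / (h * w);
}

/* Backward of a blur-pooling window:
   each input gets a share proportional to its (normalized) blur weight. */
void blur_pool_backward(float out_delta, float *in_delta,
                        const float *blur_mask, int n)
{
    float sum = 0.f;
    for (int i = 0; i < n; ++i) sum += blur_mask[i];
    for (int i = 0; i < n; ++i)
        in_delta[i] += out_delta * (blur_mask[i] / sum);
}
```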

AlexeyAB commented 4 years ago

@WongKinYiu

Do you mean?

[avgpool]
antialiasing=1

state.delta[in_index] += l.delta[out_index] * (blur_mask[i] / sum_of_blur_mask);

How do we get blur_mask[]?

> For normal anti-aliasing we also need to do the corresponding back-propagation.

What is "normal anti-aliasing"? I implemented anti-aliasing just as a common depth-wise [convolutional] layer with fixed weights.
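
To make that concrete, a minimal sketch (plain C, not the darknet implementation) of anti-aliasing as a depth-wise convolution with fixed weights: every channel is convolved with the same constant blur kernel, stride 2 performs the downsampling, and nothing here is learned.

```c
/* Depth-wise blur: every channel convolved with the same fixed k x k kernel.
   in:  c x h x w,  out: c x (h/stride) x (w/stride),  pad = k/2 assumed. */
void depthwise_blur_forward(const float *in, float *out, const float *kernel,
                            int c, int h, int w, int k, int stride)
{
    int out_h = h / stride, out_w = w / stride, pad = k / 2;
    for (int ch = 0; ch < c; ++ch)
        for (int oy = 0; oy < out_h; ++oy)
            for (int ox = 0; ox < out_w; ++ox) {
                float sum = 0.f;
                for (int ky = 0; ky < k; ++ky)
                    for (int kx = 0; kx < k; ++kx) {
                        int iy = oy*stride + ky - pad;
                        int ix = ox*stride + kx - pad;
                        if (iy < 0 || iy >= h || ix < 0 || ix >= w) continue;
                        sum += kernel[ky*k + kx] * in[(ch*h + iy)*w + ix];
                    }
                out[(ch*out_h + oy)*out_w + ox] = sum;  /* no learned weights */
            }
}
```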

WongKinYiu commented 4 years ago

@AlexeyAB

I mean

[maxpool]
antialiasing=1

For example, currently the blur_mask of blur_size=2 is:

1 1
1 1

It is equivalent to doing maxpool(size=2, stride=1) and then avgpool(size=2, stride=2) in the forward pass. But the backward pass seems to consider only the maxpool part. We need to do the backward pass of avgpool(size=2, stride=2) and then the backward pass of maxpool(size=2, stride=1).

The blur_mask of blur_size!=2 is:

1 2 1
2 4 2
1 2 1

Blur down-sampling can be seen as a constant-weight convolutional layer. So we need to do the corresponding backward pass of blur down-sampling.
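
A minimal sketch, under the same assumptions as the previous snippets, of the two-stage backward pass being proposed: first push the output delta back through the constant-weight blur downsample (the transpose of the blur convolution), giving a delta at the stride-1 maxpool output, then route that delta to the argmax inputs recorded during the maxpool forward pass.

```c
/* Stage 1: backward of the constant-weight blur downsample (stride 2).
   out_delta: c x oh x ow;  mid_delta: c x h x w (delta at the output of
   maxpool(size=2, stride=1)). */
void blur_downsample_backward(const float *out_delta, float *mid_delta,
                              const float *kernel, int c, int h, int w,
                              int k, int stride)
{
    int oh = h / stride, ow = w / stride, pad = k / 2;
    for (int ch = 0; ch < c; ++ch)
        for (int oy = 0; oy < oh; ++oy)
            for (int ox = 0; ox < ow; ++ox) {
                float d = out_delta[(ch*oh + oy)*ow + ox];
                for (int ky = 0; ky < k; ++ky)
                    for (int kx = 0; kx < k; ++kx) {
                        int iy = oy*stride + ky - pad, ix = ox*stride + kx - pad;
                        if (iy < 0 || iy >= h || ix < 0 || ix >= w) continue;
                        mid_delta[(ch*h + iy)*w + ix] += kernel[ky*k + kx] * d;
                    }
            }
}

/* Stage 2: backward of maxpool(size=2, stride=1): the delta at each pooled
   position goes only to the input that was the max (indices saved during
   the forward pass).  n = number of pooled elements. */
void maxpool_s1_backward(const float *mid_delta, float *in_delta,
                         const int *max_index, int n)
{
    for (int i = 0; i < n; ++i)
        in_delta[max_index[i]] += mid_delta[i];
}
```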

AlexeyAB commented 4 years ago

@WongKinYiu There is back-propagation for antialiasing in the [maxpool] layer for training on GPU: https://github.com/AlexeyAB/darknet/blob/649abac372446e6c0114e8fbc9bbbb8b226318b9/src/maxpool_layer_kernels.cu#L195-L209

I added it for GPU. But I didn't add it for CPU, because no one trains on the CPU anyway. And if it does not work, then I will remove anti-aliasing altogether.

Are you currently trying to use AntiAliasing for the Classifier or for the Detector?

WongKinYiu commented 4 years ago

@AlexeyAB Hello,

I will retrain the models next week.

WongKinYiu commented 4 years ago

@AlexeyAB

| model | top-1 | top-5 |
| --- | --- | --- |
| original Model A | 70.9 | 90.2 |
| old aa Model A | 69.8 | 89.5 |
| new aa Model A | 69.9 | 89.4 |
| original Model B | 70.2 | 89.7 |
| old aa Model B | 68.9 | 88.9 |
| new aa Model B | 68.9 | 88.8 |

AlexeyAB commented 4 years ago

@WongKinYiu Thanks! So I think it should be removed.

israfila3 commented 4 years ago

@AlexeyAB Can we still use "antialiasing=1" in our cfg?

AlexeyAB commented 4 years ago

@israfila3 It is deprecated, so I will remove it in about 2 months, since it doesn't give any advantage.

israfila3 commented 4 years ago

@AlexeyAB Thanks for your reply. Actually, I am making a report and I wanted to add "antialiasing" results to it. Is there any chance to use it? I updated this Darknet repository on 20 December.

AlexeyAB commented 4 years ago

@israfila3 Yes. It works, but only if random=0 in the cfg-file.