cfanyyx opened this issue 6 years ago
Hi cfanyyx,
Good questions. Here are the answers:

1) It is because the order of the variables in TensorFlow is different from the order in the darknet project. Because of this, we also need to swap the order of those variables for the convolutional layers at line 298 of the load_weights function.

2) You are right, there are 4 variables, but the last one of them is of type size_t, which is usually a 64-bit integer. So the total number of bytes to skip is int32 + int32 + int32 + int64 = 20 bytes, the same width as 5 * int32. I should have mentioned that.
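For reference, here is a minimal sketch of both points. The helper names are made up for illustration and this is not the repo's exact load_weights code; it assumes the TF batch norm variables of a layer appear in the variable list in the order gamma, beta, moving_mean, moving_variance (the order mentioned in the original question).

import numpy as np

def skip_darknet_header(f):
    # The darknet header is major, minor, revision (3 x int32) plus "seen"
    # (a size_t, i.e. an int64), so 3 * 4 + 8 = 20 bytes -- the same width
    # as 5 int32 values, which is why reading count=5 int32s skips it.
    np.fromfile(f, dtype=np.int32, count=5)

def assign_batch_norm_block(f, bn_vars):
    # bn_vars: the 4 TF batch norm variables of one convolutional layer,
    # assumed to be listed as gamma, beta, moving_mean, moving_variance.
    gamma, beta, mean, variance = bn_vars
    # Darknet writes them as biases (beta), scales (gamma), rolling_mean,
    # rolling_variance, so visit the TF variables in that order while reading.
    assign_ops = []
    for var in (beta, gamma, mean, variance):
        shape = var.shape.as_list()
        values = np.fromfile(f, dtype=np.float32, count=int(np.prod(shape)))
        assign_ops.append(var.assign(values.reshape(shape)))
    return assign_ops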
Thanks for your reply! There remains one thing I don't understand. In the save_batchnorm_weights() function in https://github.com/pjreddie/darknet/blob/master/src/parser.c there are only scales, rolling_mean and rolling_variance, so how do they correspond to the four parameters gamma, beta, mean and var?
void save_batchnorm_weights(layer l, FILE *fp) {
    .................
    fwrite(l.scales, sizeof(float), l.c, fp);
    fwrite(l.rolling_mean, sizeof(float), l.c, fp);
    fwrite(l.rolling_variance, sizeof(float), l.c, fp);
}
Thanks for your nice code! I have two questions about the YOLO weight file:
1) In your load_weights function, you read the weights in the order 'gamma, beta, mean, var', but then you change the order to 'beta, gamma, mean, var' on the next line. Is that correct?
2) I find there are only 4 variables in the header of the weight file written by the save_weights_upto function in https://github.com/pjreddie/darknet/blob/master/src/parser.c, but you mention 5 in your readme article https://itnext.io/implementing-yolo-v3-in-tensorflow-tf-slim-c3c55ff59dbe, and I find only scales, rolling_mean and rolling_variance in the save_batchnorm_weights function. So I am confused about the overall weight file format. Can you help me out?
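As an aside, the "4 values vs. 5 int32-sized slots" point can be checked by inspecting the header of a weight file directly. A small sketch follows; the file path is just a placeholder, and a recent darknet version (which writes seen as a 64-bit value) is assumed.

import numpy as np

WEIGHTS_PATH = 'yolov3.weights'  # placeholder path, adjust as needed

with open(WEIGHTS_PATH, 'rb') as f:
    major, minor, revision = np.fromfile(f, dtype=np.int32, count=3)
    seen = np.fromfile(f, dtype=np.int64, count=1)[0]  # size_t, stored as 8 bytes
    print('header values:', major, minor, revision, seen)  # 4 values in total
    print('header size:', f.tell(), 'bytes')  # 3*4 + 8 = 20 = 5 * sizeof(int32)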