ncgarcia / modality-distillation

Tensorflow code for the paper 'Modality Distillation with Multiple Stream Networks for Action Recognition', ECCV 2018

accuracy is always 0 #3

Open JoseponLee opened 4 years ago

JoseponLee commented 4 years ago

Hi @ncgarcia, I used your code to make tfrecords for the UWA3D dataset and then trained step 1. It works fine for the depth part and I get high accuracy, but for the RGB part the accuracy curve on TensorBoard oscillates wildly, like this: [image: oscillating accuracy curve]

What might cause this problem? Thanks!

AlishyaMa commented 3 years ago

Hello @JoseponLee, I ran into the same problem. Have you solved it?

ncgarcia commented 3 years ago

Hi! This is strange indeed. Does the data look OK?

AlishyaMa commented 3 years ago

> Hi! This is strange indeed. Does the data look OK?

Hi @ncgarcia, the data looks OK. The problem only occurs with the RGB data, while training works fine on the depth data, so I don't think the dataset itself is at fault. Actually, I just used the pre-trained ImageNet weights for initialization, without training on the NTU RGB+D dataset first. I suspect this may be the reason, but I am not sure.

Your paper reports that training from pre-trained ImageNet weights alone gives 20% to 30% lower accuracy. But in my run the accuracy is much lower than that, close to random guessing. It seems the model learns nothing from the RGB data.
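A quick sanity check for "close to random guessing" is to compare the observed accuracy against the chance level `1/num_classes`. This is a minimal sketch; it assumes UWA3D has 30 action classes and uses an illustrative accuracy value, not one taken from the thread.

```python
# Sanity check: is the model doing better than random guessing?
# Assumes 30 action classes (UWA3D); adjust num_classes for your dataset.
num_classes = 30
chance = 1.0 / num_classes          # chance-level accuracy, about 0.033

observed = 0.04                     # illustrative value read off TensorBoard
near_chance = abs(observed - chance) < 0.05

print(f"chance={chance:.3f}, observed={observed:.3f}, near chance: {near_chance}")
```

If `near_chance` is true over many epochs, the network is likely not learning from that modality at all, which points to a preprocessing or initialization problem rather than ordinary underfitting.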

AlishyaMa commented 3 years ago

Hi @ncgarcia @JoseponLee, I finally solved this problem by deleting the mean-subtraction step for RGB data, treating it the same as the depth data. This works when you train only on a small dataset from pre-trained ImageNet weights, without first training on the NTU RGB+D dataset.
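For readers hitting the same issue, the fix above amounts to making the RGB preprocessing skippable. This is a minimal NumPy sketch, not the repo's actual code: the function name, the ImageNet channel means, and the `subtract_mean` flag are all assumptions for illustration.

```python
import numpy as np

# Hypothetical preprocessing helper; the repo's real function may differ.
# Standard ImageNet per-channel means (R, G, B) in 0-255 range.
IMAGENET_MEANS = np.array([123.68, 116.78, 103.94], dtype=np.float32)

def preprocess_frame(frame, subtract_mean=True):
    """Convert a uint8 HxWx3 RGB frame to float32, optionally mean-centered."""
    frame = frame.astype(np.float32)
    if subtract_mean:
        frame -= IMAGENET_MEANS  # ImageNet-style centering
    return frame

# The workaround described above: disable mean subtraction for RGB frames,
# matching how the depth frames are handled.
rgb = np.full((224, 224, 3), 128, dtype=np.uint8)
out = preprocess_frame(rgb, subtract_mean=False)
```

Whether mean subtraction helps or hurts likely depends on how the pre-trained weights were produced; if the checkpoint expects centered inputs, removing the step changes the input distribution, so it is worth verifying against the checkpoint's own preprocessing.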