@qijiezhao I tried your code and the pretrained model on Kinetics, but I always get strange results: even though I fed it different videos (from different classes), most of them were predicted as the same label. I first thought something was wrong with my preprocessing code, but when I fed the model random data, the result was the same:
import numpy as np
import torch

from p3d_model import P3D199  # model definition from this repo (p3d_model.py)

if __name__ == '__main__':
    model = P3D199(pretrained=True, num_classes=400)
    model = model.cuda()
    model.eval()

    batch_num = 1000  # test 1000 batches of random data
    cnt_M = np.zeros(400)  # how often each class wins the argmax
    for j in range(batch_num):
        # if modality == 'Flow', change the 2nd dimension 3 ==> 2
        data = torch.autograd.Variable(
            torch.rand(6, 3, 16, 160, 160)).cuda()
        out = model(data)
        out = torch.nn.functional.softmax(out, dim=1)
        vid_result = out.data.cpu().numpy()
        # take the average score over these 6 clips as the video score
        score = np.mean(vid_result, axis=0)
        max_ind = np.argmax(score)
        cnt_M[max_ind] += 1

    cnt_max = np.max(cnt_M)
    ind_max = np.argmax(cnt_M)
    print(cnt_M)
    print("most_predict_index: " + str(ind_max) + " count/all: " + "%d / %d" % (
        cnt_max, batch_num))
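For reference, the counting logic above can be sanity-checked in isolation, without the network or a GPU: with genuinely random logits over 400 classes, no single class should dominate the argmax counts. This is a minimal sketch (the `count_predictions` helper is mine, not part of the repo):

```python
import numpy as np

def count_predictions(logits_batches):
    """Count how often each class wins the argmax, averaging the
    softmax scores of the clips in each batch into one video score."""
    num_classes = logits_batches[0].shape[1]
    counts = np.zeros(num_classes, dtype=int)
    for logits in logits_batches:
        # numerically stable softmax over the class dimension
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs = e / e.sum(axis=1, keepdims=True)
        video_score = probs.mean(axis=0)  # average over the 6 clips
        counts[np.argmax(video_score)] += 1
    return counts

# 1000 batches of random logits, 6 clips x 400 classes each
rng = np.random.default_rng(0)
batches = [rng.standard_normal((6, 400)) for _ in range(1000)]
counts = count_predictions(batches)
print(counts.max(), counts.sum())  # a collapsed model would show max close to sum
```

With random logits the maximum count stays small (roughly uniform over 400 classes), so if the pretrained network maps random inputs to one class almost every time, the collapse is in the model, not in this counting code.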
and my result is :
I tried this many times, and the videos were always predicted as the same label. Is there a problem with the pre-trained model? Thanks!