laura-wang opened this issue 6 years ago
Hi @HsinYingLee, I just wanted to add to @laura-wang's question. After unsupervised training, do you only add one randomly initialized fully connected layer on top of the 5 convolutional layers of the AlexNet architecture, which have been pre-trained, and then fine-tune the entire network? Or do you also retain the FC6 layer or any other layers which have been pre-trained?
@laura-wang We do not finetune on the test split. For each split i, we 1) pretrain in an unsupervised manner on train split i, 2) finetune in a supervised manner on the same train split i, and 3) test the model on test split i; we do this for all 3 splits. @GYengera We do not retain any fc layers. All fc layers are randomly initialized in the supervised stage.
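For anyone else confused by the protocol, here is a minimal sketch of the three-split procedure described above. All function names and the returned accuracy are placeholders I made up, not part of the OPN codebase; the point is only the data flow (conv weights carried over, fc layers re-initialized, test split used only for evaluation):

```python
# Hypothetical sketch of the per-split evaluation protocol.
# pretrain/finetune/evaluate are stand-ins, not the authors' code.

def pretrain_unsupervised(train_split):
    """Stand-in for OPN pretraining on unlabeled clips; yields conv weights."""
    return {"conv": f"opn-conv-from-{train_split}"}

def finetune_supervised(conv_weights, train_split):
    """Keep the pretrained conv layers; all fc layers are re-initialized."""
    return {"conv": conv_weights["conv"], "fc": f"random-init-tuned-on-{train_split}"}

def evaluate(model, test_split):
    """Stand-in for measuring classification accuracy on the test split."""
    return 0.57  # placeholder accuracy, not a reported number

accuracies = []
for i in (1, 2, 3):
    conv = pretrain_unsupervised(f"train{i}")       # step 1: unsupervised, train split i
    model = finetune_supervised(conv, f"train{i}")  # step 2: supervised, same train split
    accuracies.append(evaluate(model, f"test{i}"))  # step 3: test split i, evaluation only

mean_accuracy = sum(accuracies) / len(accuracies)   # mean over the 3 UCF-101 splits
```

Note that the test split never feeds into training; it is touched only in `evaluate`.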
Hello @HsinYingLee,
I have tried fine-tuning CaffeNet pre-trained with OPN for action recognition on UCF-101. In your paper, you provide the hyperparameters for training the OPN model, but not for the fine-tuning step. Could you please share these hyperparameters, or the fine-tuning prototxt files if possible?
Hello HsinYing, nice work! I am wondering how you finetune on the UCF101 dataset after unsupervised pre-training. In your paper, you report mean classification accuracy over the 3 splits of the UCF101 dataset. Could you please describe this in more detail?
In my understanding, after training with unlabeled video from UCF101 train split 1, you first finetune the pre-trained network on labeled test split 1 of UCF101 and obtain the accuracy for split 1. You then finetune the pre-trained network on labeled test split 2 to get the accuracy for split 2, and do the same for split 3. Averaging the three gives the mean accuracy. Is that correct?
Thanks a lot!