mzolfaghari / ECO-efficient-video-understanding

Code and models of the paper "ECO: Efficient Convolutional Network for Online Video Understanding", ECCV 2018
MIT License
437 stars 96 forks

Pytorch pretrained models of ECO on Kinetics #19

Closed Tord-Zhang closed 6 years ago

Tord-Zhang commented 6 years ago

Do you have the PyTorch pretrained models on Kinetics? I would really appreciate it if you could share them.

zhang-can commented 6 years ago

Hi @mangdian, after cloning the ECO-pytorch repo, you can download the models with the following command:

sh models/download_models.sh

shajie17 commented 6 years ago

Sorry, I get an error when running download_models.sh:

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='docs.google.com', port=443): Max retries exceeded with url: /uc?export=download&id=1QffeXdoZYhPEEGXv4FT6Aicu0Hmi2o76 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f70f8d7d250>: Failed to establish a new connection: [Errno 101] Network is unreachable',))

Could you share the models you have downloaded? Thank you so much.

zhang-can commented 6 years ago

Hi @shajie17, you can download the pretrained models here: https://pan.baidu.com/s/1Hx52akJLR_ISfX406bkIog

sophia-wright-blue commented 6 years ago

Hello @zhang-can, running 'sh models/download_models.sh' gives a file named 'eco_lite_rgb_16F_kinetics_v1.pth.tar', and extracting that gives another zipped file.

I tried downloading the model from https://pan.baidu.com/s/1Hx52akJLR_ISfX406bkIog

It downloads a .dmg file - do I have to open that file? I can't tell because the page is in Chinese.

What format is the trained model in, and how do we use it for inference on some wild videos?

thanks,

Tord-Zhang commented 6 years ago

@zhang-can thank you for your response. I found that https://github.com/mzolfaghari/ECO-pytorch only provides the pretrained model on Kinetics, but I need the pretrained model on Something-Something. Do you have the PyTorch model trained on Something-Something? I would really appreciate it.

shajie17 commented 6 years ago

Thank you very much for your help @zhang-can, but I want to use ECO_Lite_kinetics.caffemodel and ECO_Lite_UCF101.caffemodel in Caffe to run online_recognition.py. Could you share these files? Thank you.

sophia-wright-blue commented 6 years ago

Hello @mangdian, can you walk me through the steps you followed to download the pretrained model on Kinetics from https://github.com/mzolfaghari/ECO-efficient-video-understanding/issues/url?

I ran 'sh models/download_models.sh', but got a file named 'eco_lite_rgb_16F_kinetics_v1.pth.tar' that only unzips to further zipped files. Am I doing something wrong?

Tord-Zhang commented 6 years ago

@zhang-can As reported in the ECO paper, the 2D BNInception and 3D ResNet-18 models pretrained on the Kinetics dataset are not enough to get a good result; training the ECO model for another 10 epochs on Kinetics gives a better result. However, I am really not able to train the model on Kinetics (GPU and memory limitations). Since you mentioned that you are training the models on Kinetics, would you please share the trained model? I am going to use the trained weights to initialize the model and train it on the Something-Something dataset. I can report the test results and share them in this repository.

Tord-Zhang commented 6 years ago

@sophia-wright-blue you do not need to unzip that downloaded file.
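
The .pth.tar file is a regular PyTorch checkpoint, not an archive; torch.load reads it directly. As a minimal sketch (assuming PyTorch is installed and the file is in the working directory):

import torch

# Load the checkpoint directly; no extraction is needed.
checkpoint = torch.load('eco_lite_rgb_16F_kinetics_v1.pth.tar', map_location='cpu')
print(checkpoint.keys())               # typically 'epoch', 'arch', 'state_dict', ...
print(len(checkpoint['state_dict']))   # number of saved parameter tensors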

shajie17 commented 6 years ago

Hello @sophia-wright-blue, do you have the ECO_Lite_kinetics.caffemodel and ECO_Lite_UCF101.caffemodel files used in online_recognition.py?

sophia-wright-blue commented 6 years ago

Hi @shajie17, I'm afraid I don't have ECO_Lite_kinetics.caffemodel and ECO_Lite_UCF101.caffemodel; I'm only interested in the PyTorch models. I guess you're going to have to reach out to @zhang-can or @mzolfaghari for the PyTorch equivalent of

https://github.com/mzolfaghari/ECO-efficient-video-understanding/blob/master/scripts/online_recognition/online_recognition.py

sophia-wright-blue commented 6 years ago

Thank you for replying @mangdian, I'm a little confused about how to use the downloaded trained model, 'eco_lite_rgb_16F_kinetics_v1.pth.tar'.

I would like to use the trained model for inference on wild videos (mp4 files) or a live feed, so I guess this relates to the comment by @shajie17, and I would need a PyTorch equivalent of

https://github.com/mzolfaghari/ECO-efficient-video-understanding/blob/master/scripts/online_recognition/online_recognition.py

Tord-Zhang commented 6 years ago

@sophia-wright-blue After the trained model is downloaded, you actually do not need to do anything else with the file. It is a pretrained model that has not been fully trained, so I guess it cannot be used for practical purposes yet. You can fine-tune the model on your own dataset. The online version of ECO based on PyTorch has not been implemented yet.
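
Roughly, fine-tuning means building the model for your own number of classes, loading the Kinetics weights where the shapes match, and re-training. A rough sketch of the idea, not taken from the repo (the TSN class, its argument order, and the base_model name are assumptions based on the TSN-style code; check main.py and opts.py in ECO-pytorch for the exact API):

import torch
from models import TSN  # model class assumed to live in models.py of ECO-pytorch

num_my_classes = 10  # hypothetical: number of classes in your own dataset
model = TSN(num_my_classes, 16, 'RGB', base_model='ECO')  # 16 segments, RGB input

# Load the Kinetics checkpoint; strip a possible 'module.' prefix left by DataParallel
# and keep only the tensors whose shapes still match (the classifier layer will not).
checkpoint = torch.load('eco_lite_rgb_16F_kinetics_v1.pth.tar', map_location='cpu')
saved = {k.replace('module.', ''): v for k, v in checkpoint['state_dict'].items()}
own = model.state_dict()
matched = {k: v for k, v in saved.items() if k in own and v.shape == own[k].shape}
model.load_state_dict(matched, strict=False)

# Fine-tune with a small learning rate so the pretrained weights are not destroyed.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)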

shajie17 commented 6 years ago

@sophia-wright-blue For eco_lite_rgb_16F_kinetics_v1.pth.tar, see around line 60 in main.py:

resume = './eco_lite_rgb_16F_kinetics_v1.pth.tar'
if os.path.isfile(resume):
    print(("=> loading checkpoint '{}'".format(resume)))
    checkpoint = torch.load(resume)  # load the checkpoint file
    model.load_state_dict(checkpoint['state_dict'])  # load the saved weights into the network
    print(("=> loaded checkpoint '{}' (epoch {})".format(resume, checkpoint['epoch'])))
else:
    print(("=> no checkpoint found at '{}'".format(resume)))
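
Once the weights are loaded, inference on a wild video is basically sampling num_segments frames, stacking them along the channel dimension, and calling the model. A rough sketch, not from the repo (the frame sampling, the (1, num_segments*3, 224, 224) layout, and the normalization constants are assumptions; check transforms.py and dataset.py in ECO-pytorch for the exact preprocessing):

import cv2
import numpy as np
import torch

num_segments = 16
cap = cv2.VideoCapture('wild_video.mp4')  # hypothetical input file
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
indices = np.linspace(0, total - 1, num_segments).astype(int)

frames = []
for idx in indices:
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (224, 224))
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32)
    frames.append(frame.transpose(2, 0, 1))  # HWC -> CHW
cap.release()

# Stack the segments along the channel axis: (1, num_segments*3, 224, 224)
clip = torch.from_numpy(np.concatenate(frames, axis=0)).unsqueeze(0)
clip = (clip - 114.75) / 57.375  # rough mean/std scaling; the real values are in transforms.py

model.eval()  # 'model' loaded from the checkpoint as in the snippet above
with torch.no_grad():
    scores = model(clip)
print(scores.argmax(dim=1))  # predicted Kinetics class index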

sophia-wright-blue commented 6 years ago

Thank you so much for clarifying that, @mangdian and @shajie17.

shajie17 commented 6 years ago

@mangdian @sophia-wright-blue Sorry, I am very anxious to know how to change num_segments=16. I have been trying for some days. Do you know how to do it? Thank you very much.

Tord-Zhang commented 6 years ago

@shajie17 Change it in the .prototxt file.

shajie17 commented 6 years ago

Thank you @mangdian, but how do I do it in PyTorch (https://github.com/zhang-can/ECO-pytorch)?

Tord-Zhang commented 6 years ago

@shajie17 Change it in the corresponding .sh file in the scripts directory. And maybe you should use the code in this repository: https://github.com/mzolfaghari/ECO-pytorch
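
In the PyTorch code the value set in the .sh script (something like --num_segments 16) is just a command-line option that main.py forwards into the model constructor, so changing it in one place is enough. A small illustration of that flow (the option and constructor names are assumptions from the TSN-style code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--num_segments', type=int, default=4)
args = parser.parse_args(['--num_segments', '16'])  # what scripts/*.sh effectively passes
print(args.num_segments)  # 16, later handed to the model as TSN(num_class, args.num_segments, ...)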

shajie17 commented 6 years ago

@mangdian thank you.