Atze00 / MoViNet-pytorch

A PyTorch implementation of MoViNets: Mobile Video Networks for Efficient Video Recognition.
MIT License

Neural network architecture displayed by Netron is wrong #22

Closed erwangccc closed 2 years ago

erwangccc commented 2 years ago

Hi @Atze00, I saved the MoViNet-A0 model to a .pth file and inspected it with Netron, but the structure looks a little strange. Maybe something is wrong in `_forward_impl` of `class MoViNet(nn.Module)`.

My code is as follows:

import torch
from movinets import MoViNet
from movinets.config import _C

model = MoViNet(_C.MODEL.MoViNetA0, causal=True, pretrained=False, num_classes=num_class)
...
# Save the whole module object, then reload it and open the saved file in Netron
torch.save(model, '/path/to/*.pth')
path = '/path/to/*.pth'
model = torch.load(path, map_location='cpu')

Please let me know if I did something wrong.

[Screenshot, 2021-10-28: Netron view of the saved model]
erwangccc commented 2 years ago

Another question: the paper's recommended input shape is [b, t, h, w, c], but I checked the input shape you use with the HMDB51 dataset and it's [b, c, t, h, w]. Why didn't you use the same input shape?
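
(For anyone hitting the same layout mismatch, a minimal conversion sketch; the tensor name and sizes below are only illustrative:)

import torch

# Hypothetical clip in the paper's layout [b, t, h, w, c]
clip_bthwc = torch.randn(1, 8, 172, 172, 3)
# Rearrange to the channels-first layout [b, c, t, h, w] that the network consumes
clip_bcthw = clip_bthwc.permute(0, 4, 1, 2, 3).contiguous()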

papasanimohansrinivas commented 2 years ago

Dear @erwangccc, I think you repeated a basic mistake I made too. Never save a torch model with torch.save(model, '/path/to/*.pth') or anything like that.

Do torch.save(model.state_dict(), '/path/to/*.pth') instead.

Then look at it in Netron again. I literally wasted a month on this. If you don't see a difference, let me know.
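
(A minimal sketch of the state_dict round trip; the file name is just a placeholder:)

import torch

# Save only the learned parameters
torch.save(model.state_dict(), 'movinet_a0_weights.pth')

# To reload, rebuild the architecture first, then load the weights into it
model = MoViNet(_C.MODEL.MoViNetA0, causal=True, pretrained=False, num_classes=num_class)
model.load_state_dict(torch.load('movinet_a0_weights.pth', map_location='cpu'))
model.eval()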

erwangccc commented 2 years ago

Hi @papasanimohansrinivas, thanks for your reply.

I've already tried the way you mentioned, and only the weights are displayed; it isn't shown as an interconnected graph. I think we can inspect the model via TensorBoard instead.
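
(A minimal TensorBoard sketch; it assumes a non-causal model so that add_graph can trace it, and the dummy clip size is only illustrative:)

import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/movinet_a0')
# add_graph traces the model with a dummy clip in [b, c, t, h, w] layout
dummy_clip = torch.randn(1, 3, 8, 172, 172)
writer.add_graph(model, dummy_clip)
writer.close()

Then run tensorboard --logdir runs and open the Graphs tab.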

I see you've used MoViNets in your own project, well done. I have just added a training phase to this repo and want to run inference frame by frame with a model trained with it. Do you have any tips or inference code to share? I'd appreciate it!
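
(For frame-by-frame use, a rough sketch of causal/streaming inference; it assumes the causal model keeps internal stream buffers and exposes a reset method named clean_activation_buffers, as in this repo's example notebook. The video tensor and n_clip_frames below are hypothetical:)

import torch

model.eval()
model.clean_activation_buffers()  # assumed reset of the stream state before a new video
with torch.no_grad():
    # video: [1, c, t, h, w]; feed a few frames at a time, the buffers carry temporal context
    for i in range(0, video.shape[2], n_clip_frames):
        logits = model(video[:, :, i:i + n_clip_frames])
    pred = logits.argmax(dim=1)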

papasanimohansrinivas commented 2 years ago

@erwangccc sorry for being late to reply, I was caught up with my project.

OK, I combined the pytorchvideo UCF101 dataloader with the evaluate and training functions from this repo's Jupyter notebook to train my own model; inference results are good on my very small dataset.

I would have shared the code, but my partner insists that my project's repo stay private.

Beyond that, just write your own custom video sampler to suit your needs.

Wish you good luck.

erwangccc commented 2 years ago

Hi @papasanimohansrinivas, thanks for your reply. Please don't misunderstand, I just want to know how to do inference correctly.

Did you run inference frame by frame based on the evaluation code?

papasanimohansrinivas commented 2 years ago

Hi @erwangccc, no. I use the MoViNet-A5 base model; I accumulate all the frames from a video and use the UniformTemporalSubsample function from pytorchvideo, passed through the Ucf101 class arguments, to choose the number of frames per video.

I do get where you're coming from, no issues.

These are the functions used to transform the videos: ApplyTransformToKey, ShortSideScale, UniformTemporalSubsample.

Just go through the Ucf101 class and see what its requirements are; I'll fill in the gaps for anyone, as I faced the same issues myself (see the sketch below).
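
(A minimal sketch of wiring these transforms into the pytorchvideo Ucf101 dataset; the path, clip length, frame count, and sizes are only illustrative assumptions:)

from torchvision.transforms import Compose, Lambda
from pytorchvideo.data import Ucf101, make_clip_sampler
from pytorchvideo.transforms import ApplyTransformToKey, ShortSideScale, UniformTemporalSubsample

transform = ApplyTransformToKey(
    key="video",
    transform=Compose([
        UniformTemporalSubsample(16),   # keep 16 evenly spaced frames per clip
        Lambda(lambda x: x / 255.0),    # scale pixels to [0, 1]
        ShortSideScale(size=200),       # resize the shorter spatial side
    ]),
)

dataset = Ucf101(
    data_path="/path/to/ucf101",        # hypothetical dataset root
    clip_sampler=make_clip_sampler("random", 2.0),
    transform=transform,
    decode_audio=False,
)

Each sample is a dict; sample["video"] is a [c, t, h, w] tensor that can be batched (and permuted if needed) before being passed to the model.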

Besides, I am trying to write code for the MoViNet A3, A4, and A5 stream models by extending this repo. Any tips from anyone are welcome, as I am super new to this.

erwangccc commented 2 years ago

OK, I see.

"Besides, I am trying to write code for the MoViNet A3, A4, and A5 stream models by extending this repo."

Does this mean you want to train on your data with A3-A5? If so, I think you can refer to the implementation details in the paper. Hope it helps.