google-deepmind / kinetics-i3d

Convolutional neural network model for video classification trained on the Kinetics dataset.
Apache License 2.0

Is it better to train from scratch on Kinetics-600? #28

Open WuJunhui opened 6 years ago

WuJunhui commented 6 years ago

Hi, I wonder why you only released a Kinetics-600 checkpoint trained from scratch, and not one initialized from ImageNet pre-trained parameters. In the paper, performance on the Kinetics-400 dataset is better with ImageNet pre-training. Is it better to train from scratch on the Kinetics-600 dataset?

Thanks!

joaoluiscarreira commented 6 years ago

Our main focus is transfer learning. When fine-tuning on HMDB-51 and UCF-101, we found that the additional data from ImageNet did not help much beyond pre-training on Kinetics, so for Kinetics-600 we did not use ImageNet.

Joao


pinkfloyd06 commented 6 years ago

Hi @joaoluiscarreira,

Is the Kinetics-400 model fine-tuned on UCF-101 available?

joaoluiscarreira commented 6 years ago

Hi,

yes, I have models fine-tuned on UCF-101 here: https://drive.google.com/file/d/1Fj2jfFNF_yylzQWClQyYCSTP9QkqJV6q/view?usp=sharing

I trained these in TF-Slim back then, but it shouldn't be hard to load them.

Best,

Joao


pinkfloyd06 commented 6 years ago

Hi @joaoluiscarreira,

Thank you very much for your model and your answer.

Unfortunately, I didn't succeed in loading them. I get the following error:

NotFoundError (see above for traceback): Key RGB/inception_i3d/Conv3d_1a_7x7/batch_norm/beta not found in checkpoint
     [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
     [[Node: save/RestoreV2/_393 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_397_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

The error occurs at rgb_saver.restore(sess, _CHECKPOINT_PATHS[eval_type]).

Thank you for your help

joaoluiscarreira commented 6 years ago

You would have to map the variable names. I think when I trained the model, the first layer, for example, was called Conv2d_1a_7x7 instead of Conv3d_1a_7x7.

Joao


pinkfloyd06 commented 6 years ago

@joaoluiscarreira which mapping? Are you talking about the valid endpoints in https://github.com/deepmind/kinetics-i3d/blob/master/i3d.py#L94?

VALID_ENDPOINTS = (
      'Conv3d_1a_7x7',
      'MaxPool3d_2a_3x3',
      'Conv3d_2b_1x1',
      'Conv3d_2c_3x3',
      'MaxPool3d_3a_3x3',
      'Mixed_3b',
      'Mixed_3c',
      'MaxPool3d_4a_3x3',
      'Mixed_4b',
      'Mixed_4c',
      'Mixed_4d',
      'Mixed_4e',
      'Mixed_4f',
      'MaxPool3d_5a_2x2',
      'Mixed_5b',
      'Mixed_5c',
      'Logits',
      'Predictions',
)

A mapping between what? I can't load the checkpoint to look at the variable names.

Sorry for my questions; I am new to TensorFlow.

Thank you
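For reference, the names stored in a checkpoint can be listed without building the model graph at all. A minimal TF 1.x sketch; the checkpoint path is a placeholder for wherever the downloaded file was extracted:

    import tensorflow as tf  # TF 1.x, as used by this repo

    ckpt_path = 'ucf101/train1/model.ckpt'  # placeholder path

    # Print every variable name and shape stored in the checkpoint,
    # so they can be compared against tf.global_variables() in the graph.
    for name, shape in tf.train.list_variables(ckpt_path):
      print(name, shape)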

pinkfloyd06 commented 6 years ago

I printed tf.global_variables() (https://github.com/deepmind/kinetics-i3d/blob/master/evaluate_sample.py#L84) with both the Kinetics model fine-tuned on UCF-101 that you put on Drive and the model from https://github.com/deepmind/kinetics-i3d/tree/master/data/checkpoints/rgb_imagenet.

The variables are the same.

You also map the variables in:

    for variable in tf.global_variables():
      if variable.name.split('/')[0] == 'RGB':
        if eval_type == 'rgb600':
          rgb_variable_map[variable.name.replace(':0', '')[len('RGB/inception_i3d/'):]] = variable
        else:
          rgb_variable_map[variable.name.replace(':0', '')] = variable

    rgb_saver = tf.train.Saver(var_list=rgb_variable_map, reshape=True)

joaoluiscarreira commented 6 years ago

Hi, the following should do the trick for loading these models finetuned on ucf101:

a) Change the last layer in the model definition to output 101 classes instead of 400 or 600.
b) Pass the right paths for the UCF-101 checkpoints.
c) When setting up the restore op, use something like the following (example adapting the RGB stream in evaluate_sample.py):

    for variable in tf.global_variables():
      if variable.name.split('/')[0] == 'RGB':
        rgb_variable_map[variable.name.replace(':0', '')
                         .replace('Conv3d', 'Conv2d')
                         .replace('conv_3d/w', 'weights')
                         .replace('conv_3d/b', 'biases')
                         .replace('RGB/inception_i3d', 'InceptionV1')
                         .replace('batch_norm', 'BatchNorm')] = variable

Best regards,

Joao
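Putting (a)-(c) together, a rough TF 1.x sketch: the input shape and model call follow evaluate_sample.py, while the checkpoint path is a placeholder.

    import tensorflow as tf
    import i3d  # model definition from this repository

    _NUM_CLASSES_UCF101 = 101  # (a) 101 output classes instead of 400/600

    rgb_input = tf.placeholder(tf.float32, shape=(1, None, 224, 224, 3))
    with tf.variable_scope('RGB'):
      rgb_model = i3d.InceptionI3d(_NUM_CLASSES_UCF101, spatial_squeeze=True,
                                   final_endpoint='Logits')
      rgb_logits, _ = rgb_model(rgb_input, is_training=False, dropout_keep_prob=1.0)

    # (c) Map graph variable names onto the slim-style names in the checkpoint.
    rgb_variable_map = {}
    for variable in tf.global_variables():
      if variable.name.split('/')[0] == 'RGB':
        name = (variable.name.replace(':0', '')
                .replace('Conv3d', 'Conv2d')
                .replace('conv_3d/w', 'weights')
                .replace('conv_3d/b', 'biases')
                .replace('RGB/inception_i3d', 'InceptionV1')
                .replace('batch_norm', 'BatchNorm'))
        rgb_variable_map[name] = variable

    rgb_saver = tf.train.Saver(var_list=rgb_variable_map, reshape=True)
    with tf.Session() as sess:
      rgb_saver.restore(sess, 'ucf101/train1/model.ckpt')  # (b) placeholder path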


pinkfloyd06 commented 6 years ago

Thank you very much, @joaoluiscarreira.

1) One more question, for the sake of comparison: in the test phase for UCF-101, how many RGB and optical-flow frames did you take per action clip? The number of frames per UCF-101 clip is variable.

    (1, num_frames, 224, 224, 3)  # how many frames for UCF-101 at test time, RGB?
    (1, num_frames, 224, 224, 2)  # how many frames for UCF-101 at test time, flow?

2) Is it (1, num_frames, height, width, 3) or (1, num_frames, width, height, 3)?

Thank you very much.

joaoluiscarreira commented 6 years ago

Hi,

  1. We took 250 frames for both RGB and flow. We loop the video from the beginning if there are not enough frames.
  2. I'm not sure, but I suppose it is (height, width).

Joao


pinkfloyd06 commented 6 years ago

I understand that. For Kinetics it's OK because the clip duration is around 10 seconds, but for UCF-101:

A. Min clip length 1.06 sec => "loop the video from the beginning if there are not enough frames".
B. Max clip length 71.04 sec => this is more than 250 frames, so there are two possibilities: either you take only the first 250 frames, or you sample 250 frames. If it is the latter, how did you sample the 250 frames?

Thanks

joaoluiscarreira commented 6 years ago

Sorry, I wasn't aware some videos were that long. In that case, what my code did at test time was take the first 250 frames.

Joao
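A small NumPy sketch of the test-time policy described above (loop short videos from the beginning; take the first 250 frames of long ones); the helper name is mine:

    import numpy as np

    def select_test_frames(frames, num_frames=250):
      # frames: array of shape (T, H, W, C).
      num_repeats = -(-num_frames // len(frames))  # ceil division; 1 if T >= num_frames
      looped = np.concatenate([frames] * num_repeats, axis=0)  # loop from the beginning
      return looped[:num_frames]  # truncate to exactly num_frames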


joaoluiscarreira commented 6 years ago

They're all in there.

Joao

On Sat, Sep 29, 2018, pinkfloyd06 wrote:

Hi @joaoluiscarreira, is this pretrained UCF-101 model for split 1, split 2, or split 3?

jillelajitta commented 6 years ago

Sorry for hijacking this post. Could someone please provide instructions on how to train the model? Where can I find the code and scripts for training it?

preetkhaturia commented 5 years ago

Did you find an answer?

jillelajitta commented 5 years ago

Hi @preetkhaturia, I couldn't, and I wouldn't waste time on this framework; I already wasted a lot of time and couldn't get any good results.

MStumpp commented 5 years ago

Are there any other fine-tuned models available besides UCF-101 / Kinetics-400? E.g.:

UCF-101 / Kinetics-600
HMDB-51 / Kinetics-400
HMDB-51 / Kinetics-600

joaoluiscarreira commented 5 years ago

Hi,

there's HMDB-51 / Kinetics-400, for split 1, here: https://drive.google.com/file/d/1vIeuonTT_no7JMqzyL17J3ltYKl3dNtp

Best,

Joao


MStumpp commented 5 years ago

Hi,

Have the UCF-101/HMDB-51-on-Kinetics-400 flow models been fine-tuned from a Kinetics-400 flow checkpoint, or did you use the Kinetics-400 RGB checkpoint to fine-tune both the RGB and flow models?

Just curious, because only a Kinetics-400 RGB checkpoint has been published, and no Kinetics-400 flow checkpoint.

Thanks!

joaoluiscarreira commented 5 years ago

Both the RGB and flow models were fine-tuned on Kinetics-400. For Kinetics-600 we only did RGB. See the screenshot from GitHub below:

[image: image.png]


MStumpp commented 5 years ago

Unfortunately, the image is not visible.

So, do you have a flow model checkpoint trained on optical flow extracted from Kinetics-400 that we can use to fine-tune UCF-101/HMDB-51 flow models?

joaoluiscarreira commented 5 years ago

Yes, they're here: https://github.com/deepmind/kinetics-i3d/tree/master/data/checkpoints under flow_scratch and flow_imagenet.

Joao


MStumpp commented 5 years ago

Would it be possible to upload flow_kinetics400 too?

joaoluiscarreira commented 5 years ago

Sorry, this was unclear: flow_scratch means the model was trained on Kinetics-400 from scratch; flow_imagenet means the model was trained on Kinetics-400 starting from inflated ImageNet weights.

Best,

Joao
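For reference, these names correspond to the eval_type entries of _CHECKPOINT_PATHS in evaluate_sample.py; sketched below following the repo's checkpoint layout, so the Kinetics-400 flow weights are the 'flow' and 'flow_imagenet' entries:

    # Checkpoint paths, relative to the repository root.
    _CHECKPOINT_PATHS = {
        'rgb': 'data/checkpoints/rgb_scratch/model.ckpt',
        'rgb600': 'data/checkpoints/rgb_scratch_kin600/model.ckpt',
        'flow': 'data/checkpoints/flow_scratch/model.ckpt',
        'rgb_imagenet': 'data/checkpoints/rgb_imagenet/model.ckpt',
        'flow_imagenet': 'data/checkpoints/flow_imagenet/model.ckpt',
    }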


VeeranjaneyuluThoka commented 5 years ago

Hi, I have the pre-trained I3D model (downloaded from https://github.com/deepmind/kinetics-i3d), and I used https://github.com/USTC-Video-Understanding/I3D_Finetune to fine-tune it on the UCF-101 and HMDB-51 datasets. My test accuracies on UCF-101 are: RGB: 0.8951, flow: 0.9630, mixed (both RGB and flow): 0.8446.

I then loaded the model fine-tuned on UCF-101 and fine-tuned it again on the HMDB-51 dataset; the accuracies are: RGB: 0.7577, flow: 0.6749, mixed (both RGB and flow): 0.5957.

I was expecting good accuracy after training on all the datasets (Kinetics, UCF-101, and HMDB-51), but as you can see above, the accuracy of the final model is very low. Does anybody have any suggestions?

Thanks, Veeru.

Qlina commented 4 years ago

Hi, yes, I have models fine-tuned on UCF-101 here: https://drive.google.com/file/d/1Fj2jfFNF_yylzQWClQyYCSTP9QkqJV6q/view?usp=sharing …

Hi @joaoluiscarreira, thanks for sharing!

(1) I wonder about the differences among the three models train1/train2/train3. Are they fine-tuned on UCF-101 split1/split2/split3 separately?

(2) Have they been trained on UCF-101? We tested them on UCF-101 (split 1) directly, without training, but got an accuracy of about 0%.

Expecting your response. Thanks!

sarosijbose commented 3 years ago

Hi, yes, I have models fine-tuned on UCF-101 here: https://drive.google.com/file/d/1Fj2jfFNF_yylzQWClQyYCSTP9QkqJV6q/view?usp=sharing …

Hi @joaoluiscarreira, on downloading and extracting the file, I found three folders (train 1, 2, 3). Can you please explain what they are for? Do they contain updated weights saved after each epoch, or something else?

joaoluiscarreira commented 3 years ago

Hi Sarosij,

If I remember correctly, these datasets have multiple train/test annotation splits; HMDB-51 definitely has 3. People sometimes report results on split 1, and sometimes the average over all 3 splits.

Best,

Joao



sarosijbose commented 3 years ago

Thanks a lot for sharing!

sarosijbose commented 3 years ago

Hi, the following should do the trick for loading these models fine-tuned on UCF-101: …

Hi @joaoluiscarreira,

I followed your advice here and changed the layer names from Conv3d to Conv2d, but the same error still persists.

NotFoundError: Key 2d_1a_7x7/BatchNorm/beta not found in checkpoint
     [[{{node save_2/RestoreV2}}]]

During handling of the above exception, another exception occurred:

NotFoundError                             Traceback (most recent call last)
NotFoundError: Key 2d_1a_7x7/BatchNorm/beta not found in checkpoint
     [[node save_2/RestoreV2 (defined at <ipython-input-13-d07954b37fbf>:20) ]]

I am not fine-tuning on UCF-101 and am only using the RGB input.

sarosijbose commented 3 years ago


@pinkfloyd06, did you get around this? If so, please help, since I cannot open the checkpoint file and match the variables myself.
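A possible way to debug a key mismatch like this is to diff the mapped graph names against what the checkpoint actually stores. A sketch: the helper name is mine, the path is a placeholder, and rgb_variable_map is assumed to be built as in the mapping snippet above:

    import tensorflow as tf  # TF 1.x

    def diff_variable_names(rgb_variable_map, ckpt_path):
      # Names stored in the checkpoint file itself.
      ckpt_names = {name for name, _ in tf.train.list_variables(ckpt_path)}
      # Names produced by the Conv3d -> Conv2d etc. mapping.
      mapped_names = set(rgb_variable_map.keys())
      print('mapped but missing from checkpoint:', sorted(mapped_names - ckpt_names))
      print('in checkpoint but never mapped:', sorted(ckpt_names - mapped_names))

    # diff_variable_names(rgb_variable_map, 'ucf101/train1/model.ckpt')  # placeholder path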

YuLengChuanJiang commented 3 years ago

Hi @joaoluiscarreira: thanks for sharing! I want to know how to handle videos with fewer than 64 frames. I know you loop the video; is it like repeating the frame indices 1, 2, 3, ..., 40, 1, 2, 3, ..., 40 until the number of frames exceeds 64?

thanks!

joaoluiscarreira commented 3 years ago

Yes, either that or padding with zeros. For classification, looping the video seemed better.

Joao
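The looping option is what the select_test_frames sketch earlier in the thread does; a zero-padding alternative might look like this (the helper name is mine):

    import numpy as np

    def pad_with_zeros(frames, min_frames=64):
      # frames: array of shape (T, H, W, C); appends all-zero frames up to min_frames.
      num_missing = min_frames - len(frames)
      if num_missing <= 0:
        return frames
      pad = np.zeros((num_missing,) + frames.shape[1:], dtype=frames.dtype)
      return np.concatenate([frames, pad], axis=0)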


Tortoise17 commented 3 years ago

Dear friends, is there any Kinetics-600 / Kinetics-700 fine-tuned model available in .h5 or .pt format? If someone could share a link, it would be a great help.

shineYuSong commented 2 years ago

I'm running into the same error (NotFoundError: Key RGB/inception_i3d/Conv3d_1a_7x7/batch_norm/beta not found in checkpoint). Have you solved it?
