google-deepmind / kinetics-i3d

Convolutional neural network model for video classification trained on the Kinetics dataset.

Does the video need to be cropped? #119

Open mazatov opened 2 years ago

mazatov commented 2 years ago

Can the model handle non-square video input, or does it need to be 224x224?

I plan to fine-tune it on another dataset, so I will be removing the top layer.

Coolnerdn commented 1 year ago

Hello, have you made any progress? I have the same need. Could we discuss it?

joaoluiscarreira commented 1 year ago

In principle the model should handle any resolution since it is convolutional.

Joao
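
For concreteness, here is a minimal sketch of what that allows, assuming TF 1.x, dm-sonnet 1.x, and this repo's i3d.py on the import path (the class count and the new dense head are hypothetical, not part of the repo): the backbone up to Mixed_5c is fully convolutional, so height and width need not be 224x224 or even square, and a fine-tuning head with global pooling works at any resolution. Only the released Logits head is sized for 224x224 crops, and that is the layer being replaced here anyway.

import tensorflow as tf  # TF 1.x
import i3d

NUM_CLASSES = 10  # hypothetical fine-tuning dataset

# batch, frames, height, width, channels -- spatial dims left unspecified,
# so non-square inputs (e.g. 180x320) are accepted.
rgb_input = tf.placeholder(tf.float32, shape=(None, None, None, None, 3))

with tf.variable_scope('RGB'):
  backbone = i3d.InceptionI3d(final_endpoint='Mixed_5c')
  features, _ = backbone(rgb_input, is_training=True)

# Global average pooling over time and space makes the new classification
# head independent of input resolution and clip length.
pooled = tf.reduce_mean(features, axis=[1, 2, 3])
logits = tf.layers.dense(pooled, NUM_CLASSES)

Restoring the released RGB checkpoint into the backbone variables can then follow the same Saver pattern as evaluate_sample.py.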

Coolnerdn commented 1 year ago

Thank you for the response. I want to use my own dataset to fine-tune the I3D network. Is there any requirement on the number of frames? Must it match the number of frames used during training?

joaoluiscarreira commented 1 year ago

No, you can use a different number of frames. It may not work as well if you use fewer frames, but you'd have to try it to be sure.

Best,

Joao

Coolnerdn commented 1 year ago

I see. There is one more question. Did you use the same number of frames when training the network? The videos I use for fine-tuning have different lengths, so the numbers of frames differ and can't be fed directly into the network. How did you solve this problem? Or did you crop the videos to the same length during training?

joaoluiscarreira commented 1 year ago

It should be possible to feed an arbitrary number of frames to the network. What is the error you get?

You should just have to replace the 64 constant in places like this with your number of frames:

inp = tf.placeholder(tf.float32, [None, 64, _IMAGE_SIZE, _IMAGE_SIZE, 3])

Joao
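
A small variation on the line above, as a sketch (assuming the placeholder comes from this repo's evaluate_sample.py; _NUM_FRAMES below is hypothetical): besides hardcoding a different constant, the frame dimension can be left as None, so the same graph accepts clips of different lengths at feed time. Clips within one batch still need a common length.

import tensorflow as tf

_IMAGE_SIZE = 224
_NUM_FRAMES = 32  # hypothetical: your own clip length instead of 64

# Fixed, but different from 64 ...
inp_fixed = tf.placeholder(
    tf.float32, [None, _NUM_FRAMES, _IMAGE_SIZE, _IMAGE_SIZE, 3])

# ... or leave the frame dimension unspecified so any sufficiently long clip
# (one that survives the network's temporal pooling) can be fed at run time.
inp_dynamic = tf.placeholder(
    tf.float32, [None, None, _IMAGE_SIZE, _IMAGE_SIZE, 3])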

Coolnerdn commented 1 year ago

I mean, I'm not sure how many frames I'm going to input, so I can't replace 64 with a specific number; the number of frames could be any integer from 20 to 90. Can this data be used directly to train the network?

joaoluiscarreira commented 1 year ago

OK, got it. People typically pad the videos to a fixed size (or slice them into chunks if a video is too large to fit in memory).

Joao
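
A minimal NumPy sketch of that padding strategy (the helper below is hypothetical, not part of this repo): clips shorter than a target length are padded by repeating the last frame, longer ones are trimmed, so everything batches to a fixed shape.

import numpy as np

def pad_or_trim(clip, target_frames=64):
  """Pad a (T, H, W, 3) clip to target_frames frames, or trim it if longer.
  Repeating the last frame is one choice; zero-padding or looping the clip
  are common alternatives."""
  t = clip.shape[0]
  if t >= target_frames:
    return clip[:target_frames]
  pad = np.repeat(clip[-1:], target_frames - t, axis=0)
  return np.concatenate([clip, pad], axis=0)

# Dummy usage: clips of 20-90 frames all become (64, 224, 224, 3) and can be
# stacked into a single batch for the placeholder above.
clips = [np.zeros((t, 224, 224, 3), np.float32) for t in (20, 45, 90)]
batch = np.stack([pad_or_trim(c) for c in clips])  # shape (3, 64, 224, 224, 3)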

Coolnerdn commented 1 year ago

I see. Thank you for your kind reply.

Best wishes!

Coolnerdn commented 1 year ago

Could you please tell me how videos were padded to a fixed size in the I3D paper? Can I add padding before and after a shorter video to achieve this, and will that affect the model's recognition?

I'm currently considering using your work for cross-modality tasks, but since I've never worked in CV before, I really need your advice. Thanks again.
