Closed · qixi666666 closed this issue 3 years ago
As for training on my own dataset: I have folders of video frames and the corresponding label for each folder. How do I create train_videofolder.txt and val_videofolder.txt?
@qixi666666 I trained the online model using the offline code, and the outputs seem to be right.
@chongyangwang-song Maybe there is only a small difference between the offline and online models when both are trained with the offline code, but I think the proper way is to change some code in `temporal_shift.py`. For example, delete `out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right`.
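To make this concrete, here is a minimal sketch of a uni-directional variant of the non-inplace shift in `ops/temporal_shift.py`. The function name and the `fold_div` default are my own, not the repo's API. The guiding rule, per the explanation later in this thread, is that the current frame may only receive channels from earlier frames, so the direction that pulls channels from a later frame is the one to drop:

```python
import torch

def shift_uni_directional(x, n_segment, fold_div=8):
    # x: (N*T, C, H, W) activations, exactly as the offline shift receives them.
    nt, c, h, w = x.size()
    n_batch = nt // n_segment
    x = x.view(n_batch, n_segment, c, h, w)   # (N, T, C, H, W)
    fold = c // fold_div

    out = torch.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]      # fuse only from the previous frame (t-1 -> t)
    out[:, :, fold:] = x[:, :, fold:]         # leave the remaining channels in place
    return out.view(nt, c, h, w)
```

This is only a sketch of the causal idea; how closely weights trained with the bi-directional shift transfer to per-frame inference is exactly what the rest of this thread discusses.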
Regarding the annotation files: I think one of the tools in the TSN repo can generate them.
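For anyone who cannot find those tools, the format the data loader expects is one line per video: `<frame folder> <number of frames> <label index>`. Below is a minimal sketch that writes such files, assuming a hypothetical layout of `frames_root/<class_name>/<video_id>/*.jpg` and a random train/val split; adjust paths, extensions, and split ratio to your own data:

```python
import os
import random

def make_split_files(frames_root, train_txt, val_txt, val_ratio=0.1, seed=0):
    classes = sorted(d for d in os.listdir(frames_root)
                     if os.path.isdir(os.path.join(frames_root, d)))
    lines = []
    for label, cls in enumerate(classes):
        cls_dir = os.path.join(frames_root, cls)
        for vid in sorted(os.listdir(cls_dir)):
            vid_dir = os.path.join(cls_dir, vid)
            if not os.path.isdir(vid_dir):
                continue
            n_frames = len([f for f in os.listdir(vid_dir)
                            if f.lower().endswith(('.jpg', '.png'))])
            if n_frames > 0:
                # one annotation line per video: <frame folder> <num frames> <label index>
                lines.append('{}/{} {} {}'.format(cls, vid, n_frames, label))

    random.Random(seed).shuffle(lines)
    n_val = int(len(lines) * val_ratio)
    with open(val_txt, 'w') as f:
        f.write('\n'.join(lines[:n_val]) + '\n')
    with open(train_txt, 'w') as f:
        f.write('\n'.join(lines[n_val:]) + '\n')

# example usage:
# make_split_files('my_frames', 'train_videofolder.txt', 'val_videofolder.txt')
```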
@chongyangwang-song Hi there, we are working on this right now, but we are having trouble training the online MobileNet model. Would you mind sharing your code for training TSM online with us? Much appreciated!
Hi there, did you solve this issue? I have the same problem.
Hi there, may I know: did you use the inference model provided in /online_demo, where the input size seems to be (1, 3, 224, 224) (frame by frame), or did you build your own inference model based on the training code, where the input is (num_segments, 3, 224, 224)?
Thank you! @chongyangwang-song
Hi, the online version means frame-by-frame inference. Its input can only be (1, 3, 224, 224); just use online_demo directly. The offline version takes (num_segments, 3, 224, 224) as input, so the frame at time t can fuse information from time t+1 through the shift. For the online version, the future frame is unknown when inferring the current frame, so the frame at time t can only be fused with earlier frames through the shift. That is the difference between offline and online. See the original paper for details.
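To make the per-frame idea concrete, here is a rough sketch of a causal shift with a small cache, applied inside one shifted block. The names (`online_shift`, `buffer`) are illustrative only; the demo's own per-frame model threads such buffers through its forward pass:

```python
import torch

def online_shift(x, buffer, fold_div=8):
    # x:      (1, C, H, W) activation of the current frame inside one shifted block
    # buffer: (1, C // fold_div, H, W) channels cached from the previous frame
    c = x.size(1)
    fold = c // fold_div
    out = torch.cat([buffer, x[:, fold:]], dim=1)  # past channels fill the shifted slice
    new_buffer = x[:, :fold]                       # cache current channels for the next step
    return out, new_buffer
```

The buffers start as zero tensors; at every time step the model consumes the current frame plus one buffer per shifted block and returns the prediction together with the updated buffers.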
Thank you for your reply!
Could you share the details of how you trained it this way? Should I train the offline version and then load the trained weights into the online version's classifier? If so, how should parameters such as num_segments be set during offline training?
Training this way, I run into the problem described here: https://github.com/mit-han-lab/temporal-shift-module/issues/39#issuecomment-820022829
Yes, train the offline version first, then load the weights into the online version. During offline training, the num_segments parameter does not seem to affect accuracy much; I set it to 8. I reproduced the gesture-recognition demo. Since I did not understand TVM at the time, I removed the TVM acceleration and ran inference with plain PyTorch, and the recognition results were correct.
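A minimal sketch of carrying the offline checkpoint over to a per-frame model for plain PyTorch inference (no TVM). The `module.` prefix stripping reflects what a DataParallel-trained checkpoint typically contains, and `build_online_model()` is a hypothetical constructor for whatever per-frame network you use; adapt the key mapping to your own checkpoint:

```python
import torch

# 'checkpoint.pth.tar' is whatever the offline training run saved;
# build_online_model() is a hypothetical constructor for the per-frame network.
checkpoint = torch.load('checkpoint.pth.tar', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)

# Offline training usually wraps the model in DataParallel, so keys start with 'module.'.
cleaned = {k.replace('module.', '', 1): v for k, v in state_dict.items()}

online_model = build_online_model()
missing, unexpected = online_model.load_state_dict(cleaned, strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)
online_model.eval()
```

With strict=False the printed missing/unexpected keys show which layers did not match, which is a quick sanity check before running the demo.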
Thanks! Then do I need to change the bi-directional shift to a uni-directional shift for training?
What do bi-direction and uni-direction mean?
Oh, I mean what you said: "For the online version, the future frame is unknown when inferring the current frame, so the frame at time t can only be fused with earlier frames through the shift; that is the difference between offline and online." When training the offline model, should I change the bi-directional shift code to a uni-directional shift?
@hanson-eye I am struggling to understand the online code. I just need to understand how to move the feature maps efficiently in the online network. Can you please help me with that? Thank you.
The training instructions only cover offline training. Does that mean online training is the same as offline training?