Open · chongyangwang-song opened this issue 3 years ago
I have the same problem, but when I run online_demo/main.py to test, the result is correct. So can the warning be ignored? I am doubtful about that.
@Junan007 Note the warning: "This means that the trace might not generalize to other inputs!" Maybe we should replace this line of code and implement it with a different operation.
@Junan007 You can check this issue: https://github.com/mit-han-lab/temporal-shift-module/issues/181#issuecomment-818571464, which is similar to our problem. If you have any ideas, please tell me, thanks.
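For what it's worth, a minimal sketch of what such a replacement could look like (illustrative only, not the repo's exact module; `fold_div=8` matches the 1/8-channel shift): since the split point is fixed per layer, it can be computed once in `__init__` from the known channel count instead of from `x.size(1)` inside `forward()`, so nothing tensor-valued gets converted to a Python index while tracing.

```python
import torch
import torch.nn as nn

# Sketch of an online shift block with a build-time split point.
# `channels` and the cached-buffer convention are assumptions for illustration.
class OnlineShift(nn.Module):
    def __init__(self, channels: int, fold_div: int = 8):
        super().__init__()
        self.fold = channels // fold_div   # plain Python int, fixed at construction

    def forward(self, x, shift_buffer):
        # x: (N, C, H, W); shift_buffer: the first `fold` channels cached from the previous frame
        x1, x2 = x[:, :self.fold], x[:, self.fold:]
        out = torch.cat((shift_buffer, x2), dim=1)
        return out, x1                     # x1 becomes the cache for the next frame
```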
@Junan007 The result is correct?! Maybe onnx-simplifier makes it work?
@Junan007 I reproduced the online demo, but it only shows 1.7 vid/s, and I'm not the only one seeing this slow speed. Were you able to accelerate it using TVM?
I tried another implementation of the shift module from here, but it gives the same warning too.
@Junan007 The online version is not the same as the offline one.
Yes, I tested on the CPU (2.8 GHz quad-core Intel i7), only changing the target to "llvm", and it shows 45~75 vid/s when running.
@Junan007 I tested it on an NVIDIA TX2. I followed the official steps, but it only shows 1.7 vid/s. Do you know the reason?
@Junan007 I removed ONNX and TVM and used torch to run inference directly; it shows 20 vid/s. This problem has puzzled me for several months.
Sorry, I don't have an NVIDIA TX2. Did you compile TVM with CUDA?
```bash
sudo apt install llvm   # install llvm, which is required by tvm
git clone -b v0.6 https://github.com/apache/incubator-tvm.git
cd incubator-tvm
git submodule update --init
mkdir build
cp cmake/config.cmake build/
cd build
```
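The snippet above stops at `cd build`; on a typical TVM v0.6 source build the remaining steps are roughly the following (a sketch of the usual flow, using the standard config.cmake options):

```bash
# Turn on the backends in build/config.cmake before compiling;
# set(USE_CUDA ON) is what makes the "cuda" target available at all.
sed -i 's/USE_CUDA OFF/USE_CUDA ON/' config.cmake
sed -i 's/USE_LLVM OFF/USE_LLVM ON/' config.cmake
cmake ..
make -j"$(nproc)"
# then expose the Python packages, e.g.
# export PYTHONPATH=/path/to/incubator-tvm/python:/path/to/incubator-tvm/topi/python:$PYTHONPATH
```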
Has it been compiled with CUDA?
you can use tvm.runtime.enabled('cuda') to check it.
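Spelled out, something like the following (exact API locations vary a little between TVM versions, so take this as a sketch):

```python
import tvm

# True only if this TVM build was compiled with the CUDA backend enabled.
print(tvm.runtime.enabled("cuda"))

# The device also has to be visible at runtime; on TVM 0.6/0.7 this is roughly:
print(tvm.gpu(0).exist)
```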
thank you for your reply
@Junan007 I tested the ONNX model with several inputs and the outputs are equal to the torch outputs. So can we ignore this warning?
Yes, maybe it can be ignored. I also tested on a Jetson Nano with TVM and got only 0.7 vid/s. Did you solve your problem?
I'm sure it is using GPU resources. It can reach 17.5 vid/s when using llvm only; I don't know why it's so slow when using cuda.
After fixing the tophub error, it can reach 27.2 vid/s when using cuda. I think I have solved my problem.
Cool, but what's the tophub error?
tophub is a part of TVM; it is downloaded automatically when the module is compiled and saved to ~/.tvm/tophub, but the download failed in my environment.
You are excellent! It would have been great to have known about you earlier. Now I am trying to use TensorRT to accelerate it. I have another question about this online model: is training for the online version the same as for the offline version? I trained the online model using the offline version's code (since that is the only code they provide) and tested it with the online version. The result seems to be right (I didn't test it with a large number of inputs, just a few samples). I want to know whether this is correct in theory.
Yes, training for the online version is the same as for the offline one, but testing is different: the online version needs to cache the last features for the shift.
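As an illustration of that caching, an online inference step carries the cache tensors from one frame to the next, roughly like this (the stand-in model and shapes are assumptions for the sketch, not the repo's exact API):

```python
import torch
import torch.nn as nn

# Stand-in model just to show the calling convention: the real online TSM returns
# logits plus one updated cache tensor per shifted layer.
class TinyOnlineModel(nn.Module):
    def forward(self, x, buf):
        fold = buf.shape[1]
        x1, x2 = x[:, :fold], x[:, fold:]
        feat = torch.cat((buf, x2), dim=1)
        logits = feat.mean(dim=(2, 3))     # placeholder classification head
        return logits, x1                  # x1 is the cache for the next frame

model = TinyOnlineModel()
buf = torch.zeros(1, 1, 8, 8)              # cached features, initialised to zero
with torch.no_grad():
    for _ in range(3):                     # pretend stream of frames
        frame = torch.randn(1, 3, 8, 8)
        logits, buf = model(frame, buf)    # the updated cache is carried forward
```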
@Junan007 Hi, tophub should be downloaded automatically, but I'm a student in China and there are many websites and URLs I can't access. Every time I compile the model, the tophub package fails to download. Is there any way to solve this problem? Thanks.
you can get it from here: https://github.com/tlc-pack/tophub/tree/main/tophub, and then copy to ~/.tvm/tophub
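In practice that amounts to something like this (assuming the default ~/.tvm/tophub cache location):

```bash
# Fetch the pre-tuned schedule logs once, then let TVM pick them up locally.
git clone https://github.com/tlc-pack/tophub.git
mkdir -p ~/.tvm/tophub
cp tophub/tophub/*.log ~/.tvm/tophub/
```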
many thanks
When converting the torch model to an ONNX model, this line of code: `x1, x2 = x[:, :c//8], x[:, c//8:]` triggers: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! I think we can't ignore this warning.