Open · jia0511 opened this issue 2 years ago
Have you tried replacing {envs_path}\Lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py with the new dev version?
I ran into this problem using NNI version 2.8.
@Louis-J

> Have you tried replacing {envs_path}\Lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py with the new dev version?

I tried replacing jit_translate.py, but ran into a new problem:
I would like to know whether there is a good solution to this problem.
Hi @Louis-J
I would like to know whether the new version will solve this problem, and whether the speedup error is caused by the model structure or by NNI. When I skip the speedup step, the model runs normally.
Hi @Louis-J
Is there an update on this issue? Does version 2.9 add support for aten::norm?
Sorry for not replying these days. In version 2.9 we added a feature that automatically uses the ops in the aten:: namespace, so aten::norm is supported now.
However, there is still a bug: the 'expand_as' function is not handled correctly, and any returned view may cause problems. We will fix this bug in 2.9.1.
The expand_as issue is fixed in #5141, which is included in version 2.10. You can try the latest version, 2.10, now.
When I use NNI's L1 pruning method to prune a facenet model and then run speedup, I get an error saying that aten::norm is not supported. How can I solve this problem? My PyTorch network structure is defined as follows: