An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
What would you like to be added:
Automatic operator conversion in compression.pytorch.speedup.
Why is this needed:
NNI needs to call these functions to understand the model.
Problems when doing it manually:
The arguments can only be fetched as a flat argument list.
Many of these functions use the star (*) syntax (keyword-only arguments, PEP 3102) and take both positional and keyword-only arguments, but a flat argument list cannot distinguish the two kinds.
The functions are overloaded, and different overloads of the same function may take the same number of parameters, so overloads cannot be distinguished by argument count alone.
Because these functions are built-ins, inspect.getfullargspec and the other reflection helpers in the inspect module cannot be used on them.
There are more than 2000 functions, counting overloads, which is far too many to adapt by hand.
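To make the inspect limitation concrete, here is a minimal illustration using an overloaded, C-implemented Python builtin (max) as a stand-in for the torch built-ins, which fail in the same way:

```python
import inspect

# max() is implemented in C and overloaded, like most torch built-ins:
# the inspect module cannot recover its argument names or defaults.
try:
    spec = inspect.getfullargspec(max)
    outcome = "inspectable"
except TypeError:
    # CPython raises TypeError("unsupported callable") for such built-ins.
    outcome = "not inspectable"

print(outcome)
```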
Without this feature, how does the current NNI work:
Manual adaptation and conversion.
Components that may involve changes:
Only jit_translate.py in common/compression/pytorch/speedup/.
Brief description of your proposal if any:
Automatic conversion.
Each jit node carries schema information from which positional arguments and keyword-only arguments can be parsed.
We can then automatically wrap the arguments, the keywords, and the function into an adapted function.
Tested the automatic conversion of torch.sum, torch.unsqueeze, and torch.flatten; all work.
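As a sketch of the proposal, assuming a schema string of the shape torch attaches to jit nodes, the bare * separator splits positional from keyword-only arguments, and a generic adapter can then rebuild the call from a flat argument list. The helper names (split_schema_args, make_adapter) and the stand-in function are made up for this example, not the actual jit_translate.py code, and the real schema syntax has more cases:

```python
def split_schema_args(schema):
    """Return (positional_names, keyword_only_names) from a schema string."""
    # Argument text sits between the first '(' and the ') ->' return arrow.
    arg_text = schema.split("(", 1)[1].rsplit(") ->", 1)[0]
    # Split on top-level commas only (types like int[1] contain brackets).
    parts, current, depth = [], "", 0
    for ch in arg_text:
        if ch in "([":
            depth += 1
        elif ch in ")]":
            depth -= 1
        if ch == "," and depth == 0:
            parts.append(current.strip())
            current = ""
        else:
            current += ch
    if current.strip():
        parts.append(current.strip())

    positional, keyword_only, seen_star = [], [], False
    for part in parts:
        if part == "*":        # bare star: everything after is keyword-only
            seen_star = True
            continue
        name = part.split("=", 1)[0].split()[-1]   # drop type and default
        (keyword_only if seen_star else positional).append(name)
    return positional, keyword_only


def make_adapter(func, positional_names, keyword_only_names):
    """Wrap func so it can be called with a flat jit argument list."""
    n_pos = len(positional_names)
    def adapted(arg_list):
        # First n_pos entries are positional; the rest map onto the
        # keyword-only parameters in schema order.
        kwargs = dict(zip(keyword_only_names, arg_list[n_pos:]))
        return func(*arg_list[:n_pos], **kwargs)
    return adapted


# Stand-in (instead of the real torch.sum) with the same argument shape:
def fake_sum(self, dim, keepdim=False, *, dtype=None):
    return (self, dim, keepdim, dtype)

schema = ("aten::sum.dim_IntList(Tensor self, int[1] dim, bool keepdim=False,"
          " *, ScalarType? dtype=None) -> Tensor")
pos, kw = split_schema_args(schema)
adapted = make_adapter(fake_sum, pos, kw)
result = adapted([[1, 2], 0, True, "float32"])
```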
Unresolved issues:
Check the schema syntax across multiple versions of PyTorch and whether the syntax is stable.
The schema syntax differs from both Python's and C++'s.
I didn't find a document describing the syntax in the PyTorch documentation.
When PyTorch is compiled, it dynamically generates the schema information from the C++ functions.
For all the available schemas, check whether they correspond to the compiled PyTorch functions.
For all the available schemas, try to parse them one by one and count how many cannot be parsed.