Closed: alicangok closed this 1 year ago
Updated the pull request with the linter patch, as well as a fix for the device name of an effnet2 model (it was previously set to MAX78000, so the script returned a "This device supports up to 32 layers" error).
This is carried out using a mechanism similar to the one used in quantization-aware training: values are saturated to the representable range, which fixes potential underflow and overflow issues and adapts to different quantization bit widths.
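To illustrate the idea, here is a minimal sketch of bit-width-dependent saturation, assuming the usual signed-integer range for a given number of bits; the function name and signature are hypothetical, not the actual code in this PR:

```python
def quantize_clamp(x, num_bits=8):
    """Hypothetical sketch: saturate x to the signed range of num_bits,
    similar to the clamping used in quantization-aware training."""
    q_min = -(1 << (num_bits - 1))      # e.g. -128 for 8 bits
    q_max = (1 << (num_bits - 1)) - 1   # e.g.  127 for 8 bits
    return max(q_min, min(q_max, x))
```

Because the limits are derived from `num_bits`, the same clamp adapts automatically when a layer is quantized to, say, 4 bits instead of 8.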