Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. This project provides researchers, developers, and engineers with advanced quantization and compression tools for deploying state-of-the-art neural networks.
Add support for gradual activation quantization in Keras.
This mainly converts the PyTorch implementation of gradual activation quantization into a framework-common implementation.
In addition, pytest tests were added for Keras (mirroring the existing PyTorch tests). A rough sketch of the underlying idea is shown below.
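For context, here is a minimal, hypothetical sketch of what gradual activation quantization does: the output is a blend of the float activation and its fake-quantized version, with the quantized fraction annealed linearly over training steps. The names (`GradualActivationQuantizer`, `quantizer_fn`, `total_steps`) are illustrative assumptions and are not MCT's API or the code added in this PR.

```python
import tensorflow as tf


class GradualActivationQuantizer(tf.keras.layers.Layer):
    """Illustrative sketch only (not MCT's implementation).

    Blends the float activation with its fake-quantized version,
    linearly increasing the quantized fraction from 0 to 1 during training.
    """

    def __init__(self, quantizer_fn, total_steps, **kwargs):
        super().__init__(**kwargs)
        self.quantizer_fn = quantizer_fn            # hypothetical fake-quant callable
        self.total_steps = float(total_steps)       # steps over which to anneal
        self.step = tf.Variable(0.0, trainable=False, name="gradual_q_step")

    def call(self, inputs, training=None):
        if training:
            self.step.assign_add(1.0)
        # Quantized fraction grows linearly from 0 to 1 over total_steps.
        q_fraction = tf.minimum(self.step / self.total_steps, 1.0)
        quantized = self.quantizer_fn(inputs)
        return (1.0 - q_fraction) * inputs + q_fraction * quantized


# Example usage with TensorFlow's built-in fake-quant op:
fake_quant = lambda t: tf.quantization.fake_quant_with_min_max_args(
    t, min=-6.0, max=6.0, num_bits=8)
layer = GradualActivationQuantizer(fake_quant, total_steps=1000)
outputs = layer(tf.random.normal([4, 16]), training=True)
```

Because the annealing schedule and the blending logic are framework-agnostic, only the thin wrapper layer (and the fake-quant op) needs a per-framework implementation, which is what moving the logic to a common implementation enables.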
Checklist before requesting a review:
[ ] I set the appropriate labels on the pull request.
[ ] I have added/updated the release note draft (if necessary).
[ ] I have updated the documentation to reflect my changes (if necessary).
[ ] All functions and files are well documented.
[ ] All functions and classes have type hints.
[ ] There is a license in all files.
[ ] The function and variable names are informative.