Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. This project provides researchers, developers, and engineers advanced quantization and compression tools for deploying state-of-the-art neural networks.
This PR fixes issue #1189. It also exposes the set_working_device function for easier import, enabling more effective management of the DeviceManager singleton.
Example Usage:

```python
from model_compression_toolkit.core.pytorch import set_working_device

# Set the working device to 'GPU' or 'CPU'
set_working_device('GPU')
```
With this change, the set_working_device function can be easily imported and used to configure the DeviceManager's device context. This also allows quantization to be tested on a specific device.
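To illustrate the pattern this PR relies on, here is a minimal sketch of a singleton device manager with a module-level setter. This is not MCT's actual DeviceManager implementation; the class body, the device-name validation, and the 'GPU' → 'cuda' aliasing below are assumptions made purely for illustration.

```python
class DeviceManager:
    """Illustrative singleton holding the current working device string.

    Note: this is a hypothetical sketch, not MCT's real DeviceManager.
    """
    _instance = None

    def __new__(cls):
        # Classic singleton: create the instance once, then reuse it.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._device = 'cpu'  # assumed default device
        return cls._instance

    @property
    def device(self) -> str:
        return self._device

    def set_device(self, device_name: str) -> None:
        name = device_name.lower()
        if name not in ('cpu', 'gpu', 'cuda'):
            raise ValueError(f"Unsupported device: {device_name!r}")
        # Map the user-facing 'GPU' alias to PyTorch's 'cuda' name
        # (an assumption about how the alias might be handled).
        self._device = 'cuda' if name == 'gpu' else name


def set_working_device(device_name: str) -> None:
    """Module-level helper that forwards to the singleton instance."""
    DeviceManager().set_device(device_name)
```

A module-level function like set_working_device keeps the singleton an implementation detail: callers configure the device context without ever constructing or touching DeviceManager directly.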
cc: @Idan-BenAmi
Checklist before requesting a review:
[ ] I set the appropriate labels on the pull request.
[ ] I have added/updated the release note draft (if necessary).
[ ] I have updated the documentation to reflect my changes (if necessary).
[X] All functions and files are well documented.
[X] All functions and classes have type hints.
[X] There is a license header in all files.
[X] The function and variable names are informative.