Open calad0i opened 1 month ago
I agree with the spirit of the change but not the execution. Optional dependencies are the way to go but not unchecked lazy imports. Having imports randomly fail in the middle of some function and leaving the user to deal with the problem themselves is not friendly.
In the original proposal for modular hls4ml, frontends and backends would be installable extras. All of them. In practice this would mean a basic install of hls4ml would only provide the core classes that can be used to save/load hls4ml models (a feature currently in development) and one backend (which has to be enforced somehow in `setup.py`). We didn't manage it before, but we could try again. For example, wiring the converters in such a way that if hls4ml was not installed with `pip install hls4ml[torch]`, there would be no torch converter, and the function `convert_from_pytorch_model()` would print that it is not available and how it should be installed to make it available. The same goes for ONNX and Keras, and the same for backends. So if a user wants an installation for converting Keras models to Vitis firmware, the installation command would be `pip install hls4ml[keras,vitis]`. If they try to convert a PyTorch model for the oneAPI backend on that installation, they should get a message that this is not what their current installation supports.
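A minimal sketch of how such extras could be declared in `setup.py` (the package groupings below are illustrative placeholders, not the actual hls4ml dependency lists):

```python
# setup.py (sketch) -- hypothetical extras; the real dependency lists differ.
from setuptools import find_packages, setup

setup(
    name='hls4ml',
    packages=find_packages(),
    install_requires=['numpy', 'pyyaml'],  # core only, no ML frameworks
    extras_require={
        'keras': ['tensorflow', 'qkeras'],
        'torch': ['torch'],
        'onnx': ['onnx'],
        'vitis': [],  # backend extras could carry tool-specific helpers
    },
)
```

With this layout, `pip install hls4ml[keras,vitis]` pulls in exactly the frontend and backend dependencies the user asked for and nothing else.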
To achieve that we should make sure that our internals don't depend on big external libraries, in addition to some reordering of the imports that this PR does. To that end, we should focus on removing qkeras as the internal dependency of the types and the profiling functions. Since we're already discussing ditching QKeras official for a drop-in replacement library, we can replace the internal dependency with chang's reusable quantizers, which mimic qkeras anyway. Then this new replacement library (yet to be named) can be built on top of the quantizers library. It would contain the definitions of layers (QDense, QConv1D/2D etc.) that rely on our quantizers. Our keras frontend could then depend on this library, if even needed (as it seems that we only really need qkeras' quantizers for parsing, not the layer definitions).
> Having imports randomly fail in the middle of some function and leaving the user to deal with the problem themselves is not friendly
I was thinking a missing import is rather self-explanatory. Here are a few proposals to throw an informative error for this without large restructuring, for the short to mid term. Restructuring the whole project into multiple frontend and backend packages sounds like overkill for this particular issue at the moment.
- An import error can only be triggered in the `onnx` and `(q)keras` frontends when the user passes a saved model in the config dict to the framework, as otherwise passing the model object to hls4ml implies the existence of the corresponding package (e.g., `torch`). In this case, I think we can just guard the `*_to_hls` function with a try-and-catch and map the corresponding import error to a custom error message prompting the user to install `hls4ml[***]`. A possible import error could still be thrown in the future save/load part though (likely only for `qkeras`, see below).
- Use `importlib.import_module` for the lazy imports: call it and throw a custom error if the package is missing. Type hinting will probably be completely broken when developing the corresponding modules, but at runtime nothing would change...

For types & quantizers, the `qkeras` dependency appears to be only for interfacing with `qkeras` models, but the quantizers persist through the whole lifecycle of the hls models. In my opinion, the quantizers should end their lifecycle at the handler level for quantized models, but maybe there was some reason for this at design time.
The standalone `quantizers` library can be used in place of the `qkeras` quantizers in most cases, with two main caveats:

- The handling of `alpha` is different (`auto_po2`, for example), though no one should really be using them, I guess.
- The `po2` quantizers: setting `m=0` in the minifloat quantizer could mimic them, but probably with slightly different rounding.
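To illustrate the rounding difference (a sketch in plain NumPy, not the actual `quantizers` API): a po2-style quantizer rounds the exponent in the log domain, while a minifloat with `m=0` picks the nearest representable power of two by linear distance, so the two can disagree between powers of two:

```python
import numpy as np


def po2_round_log(x):
    """po2-style quantization: round log2(|x|) to the nearest integer exponent."""
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), 2.0**-24)  # avoid log2(0)
    return sign * 2.0 ** np.round(np.log2(mag))


def po2_round_linear(x):
    """m=0 minifloat behaviour: nearest power of two by linear distance."""
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), 2.0**-24)
    lo = 2.0 ** np.floor(np.log2(mag))  # power of two at or below |x|
    hi = 2.0 * lo                       # power of two above |x|
    return sign * np.where(mag - lo <= hi - mag, lo, hi)


x = np.array([3.0])
# log2(3) ~ 1.585 rounds to exponent 2, giving 4.0; but 3.0 is linearly
# equidistant from 2.0 and 4.0, so linear rounding (ties-to-low here) gives 2.0.
```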
Description
Make all frontend-specific dependencies (`tf`, `(q)keras`, `torch`, `onnx`) optional and imported lazily. All handlers are registered now, no matter whether the frontend-specific package is present. The frontend-specific packages are only imported if the corresponding handler is used. This should remove the strong dependency of hls4ml on `qkeras` and `tensorflow` (e.g., when converting a model that is not in `tf.keras`, tensorflow no longer needs to be installed).

Type of change
Tests
Test Configuration:
- I have run `pre-commit` on the files I edited or added.