bcm-at-zama opened 5 months ago
This was an issue @c-gamble ran into while working on the S6 Concrete ML bounty: https://github.com/zama-ai/bounty-program/issues/127#issuecomment-2184252767
Hi @c-gamble,
Do you use Brevitas by any chance? If so, make sure the model starts with a QuantIdentity layer, as in https://github.com/zama-ai/concrete-ml/blob/main/docs/advanced_examples/QuantizationAwareTraining.ipynb.
If that's not it, I will probably need more information about your implementation. We can DM on Discord if needed.
You could also check out the answer to the following issue, as you seem to be facing the same kind of problem: https://github.com/zama-ai/concrete-ml/issues/729
Original issue from @c-gamble, copied here:
Hello! I'm working with a team on closing this bounty, and we're using PyTorch's VGG network as our style-transfer solution. We have successfully quantized the network with pretrained weights using PyTorch's native quantization support, and we intend to perform inference with an FHE client to demonstrate our progress. We are taking inspiration from the image filter example provided here.
We are running into the issue, however, that the `compile_torch_model` function (a member of `concrete-ml`) either throws an assertion error or times out.

We define our VGG model using a helper function:
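The helper itself wasn't captured in this copy of the issue. As a rough sketch of the kind of thing described (the function name, layer sizes, and input shapes are all assumptions, not the user's actual code), a truncated VGG-style feature extractor might look like:

```python
import torch
import torch.nn as nn

def build_vgg_features() -> nn.Module:
    """Hypothetical helper: a small VGG-style convolutional stack.

    The user's real helper presumably truncates torchvision's pretrained
    VGG; this stand-in is built from scratch so the sketch is
    self-contained.
    """
    layers = []
    in_ch = 3
    for out_ch in (16, 32):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU()]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

model = build_vgg_features()
out = model(torch.randn(1, 3, 32, 32))  # -> shape (1, 32, 16, 16)
```

A small, purely sequential stack like this is also the easiest shape of model for `compile_torch_model` to trace.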
And we define our calibration input as follows:
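The calibration tensor also wasn't captured here. A typical inputset is just a batch of tensors representative of deployment inputs; a minimal sketch (batch size and image shape assumed) could be:

```python
import torch

# Hypothetical calibration inputset: a small batch of random images with
# the same shape the model will see at deployment (shapes assumed).
torch.manual_seed(0)
calibration_input = torch.randn(10, 3, 32, 32)
```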
To make the compilation fail, we run `python generate_dev_files.py` with the model and inputs initialized as shown above. The error we receive is:

However, to make the script time out, we can change either or both of the model/inputs to floats with a `.float()` invocation after initialization. Any guidance would be greatly appreciated!