zama-ai / concrete-ml

Concrete ML: Privacy Preserving ML framework using Fully Homomorphic Encryption (FHE), built on top of Concrete, with bindings to traditional ML frameworks.

AssertionError: Values must be float if value_is_float is set to True, got int64: [[[[102 14 188 ... 85 205 46] #759

Open bcm-at-zama opened 5 months ago

bcm-at-zama commented 5 months ago

Issue by a user (@c-gamble), that I copy here:

Hello! I'm working with a team on closing this bounty, and we're using PyTorch's VGG network as our style-transfer solution. We have successfully quantized the network with pretrained weights using PyTorch's native quantization support, and we intend to perform inference with an FHE client to demonstrate our progress. We are taking inspiration from the image filter example provided here.

We are running into the issue, however, that the compile_torch_model function (a member of concrete-ml) either throws an assertion error or times out.

We define our VGG model using a helper function:

```python
self.torch_model = get_quantized_model().to(device)
```

And we define our calibration input as follows:

```python
torch_input = (
    torch.from_numpy(np.stack(inputset))
    .transpose(1, 3)
    .transpose(2, 3)
    .to(device)
)
```

To reproduce the compilation failure, we run `python generate_dev_files.py` with the model and inputs initialized as shown above. The error we receive is:

```
AssertionError: Values must be float if value_is_float is set to True, got int64: [[[[102  14 188 ...  85 205  46]
```

However, if we instead change either or both of the model/inputs to floats with a `.float()` invocation after initialization, the script times out.

Any guidance would be greatly appreciated!

bcm-at-zama commented 5 months ago

This was an issue @c-gamble had when working on the S6 Concrete ML bounty: https://github.com/zama-ai/bounty-program/issues/127#issuecomment-2184252767

jfrery commented 5 months ago

Hi @c-gamble,

Do you use Brevitas by any chance? If so, make sure the model starts with a QuantIdentity, as in https://github.com/zama-ai/concrete-ml/blob/main/docs/advanced_examples/QuantizationAwareTraining.ipynb.

If that's not it, I will probably need more info about your implementation. We can DM on Discord if needed.

RomanBredehoft commented 5 months ago

You could also check out the answer to the following issue, as you seem to be facing the same kind of problem: https://github.com/zama-ai/concrete-ml/issues/729