Closed · maxwellgodv closed this issue 1 year ago
Hello @maxwellgodv,
Thank you for reaching out to us.
To use Fully Homomorphic Encryption (FHE) with Concrete ML, it is necessary to convert your machine learning model into an FHE-compatible model.
In this use-case, we explain how to convert a custom Torch neural network into its FHE-equivalent.
An FHE-equivalent model means that the model is quantized and the maximum precision of the operation graph is less than 16 bits. So:
- The `bit` hyper-parameter in `QuantVGG11`: this is the quantization bit-width, an essential hyper-parameter in Concrete ML, required to quantize the input, weights, activation functions, and output. This quantization step is mandatory because FHE operates only over integers, with a precision limit of 16 bits. In the provided use-case, we used 5 bits to quantize the model and the inputs. For custom models, Concrete ML uses the Brevitas library for quantization.
- `compile_brevitas_qat_model`: under the hood, this function generates an executable operation graph, determines the cryptographic parameters, and raises an error if the maximum bit-width exceeds 16 bits. In that case, you have to decrease the quantization bit-width.

Thanks!
Thank you, I understand. But:

1. the bit in the Brevitas model
2. the bit after fhe_compatibility

What is the difference between these two bits?