zama-ai / concrete-ml

Concrete ML: Privacy Preserving ML framework built on top of Concrete, with bindings to traditional ML frameworks.

chore: fix hybrid models #765

Closed: jfrery closed this pull request 1 week ago

jfrery commented 1 week ago

closes https://github.com/zama-ai/concrete-ml-internal/issues/3905

cc @RomanBredehoft

jfrery commented 1 week ago

> The divergence in results was between torch and quantized?
>
> I would still keep a torch mode here instead of removing it entirely.

Not sure what you mean. I check the accuracy of simulation vs torch and of disable vs torch.

Why would we keep a torch execution? We have the torch model for that.

RomanBredehoft commented 1 week ago

actually @jfrery , I think @fd0r is right we should keep the torch execution as well, but maybe through a new mode like "torch" or "float" ?

because as I mentioned to you, currently the torch model is modified in place by the hybrid model. But that means, there's no real way of running the model in float after compiling it with the hybrid
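To make the in-place issue concrete, here is a minimal sketch with toy classes (not the Concrete ML API): once the hybrid wrapper swaps a submodule in place, the original float path is gone unless a copy was kept beforehand.

```python
import copy

# Toy stand-ins (hypothetical names): a "float" layer and a
# quantized/remote wrapper that the hybrid model swaps in.
class FloatLayer:
    def forward(self, x):
        return x * 2.0

class RemoteLayer:
    def __init__(self, inner):
        self.inner = inner

    def forward(self, x):
        # Pretend quantization introduces a rounding error.
        return round(self.inner.forward(x))

class Model:
    def __init__(self):
        self.layer = FloatLayer()

    def forward(self, x):
        return self.layer.forward(x)

model = Model()
float_copy = copy.deepcopy(model)  # keep a pristine float model

# Hybrid "compilation" modifies the model in place:
model.layer = RemoteLayer(model.layer)

print(model.forward(1.3))       # quantized path → 3
print(float_copy.forward(1.3))  # true float reference → 2.6
```

Without the `deepcopy`, comparing "torch" against "disable" would silently compare the quantized model against itself, which matches the bug discussed above.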

jfrery commented 1 week ago

> actually @jfrery , I think @fd0r is right we should keep the torch execution as well, but maybe through a new mode like "torch" or "float" ?
>
> because as I mentioned to you, currently the torch model is modified in place by the hybrid model. But that means, there's no real way of running the model in float after compiling it with the hybrid

Very good point! We replace in place, so I am basically checking disable vs disable and simulate vs disable. I will add the torch inference.

That being said, simulate vs disable should still have surfaced the flakiness we saw ...

So yes, the problem was that we modified the torch model in place. I missed that...
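One way the suggested "torch"/"float" mode could look, as a hedged sketch (the mode name and dispatch are assumptions from this thread, not the actual Concrete ML implementation): keep the preserved float forward around and dispatch to it when the mode asks for float execution.

```python
from enum import Enum

class HybridFHEMode(Enum):
    # Mode names loosely modeled on the discussion above; "torch"
    # runs the original, unmodified float model.
    DISABLE = "disable"
    SIMULATE = "simulate"
    TORCH = "torch"

class HybridModel:
    def __init__(self, float_fn):
        # Original float forward, preserved before any in-place swap.
        self.float_fn = float_fn
        # Stand-in for the quantized execution path.
        self.quantized_fn = lambda x: round(float_fn(x))

    def __call__(self, x, fhe=HybridFHEMode.DISABLE):
        if fhe == HybridFHEMode.TORCH:
            # Dispatch to the preserved float model, not the
            # in-place-modified one.
            return self.float_fn(x)
        # DISABLE and SIMULATE both use quantized execution in this toy.
        return self.quantized_fn(x)

m = HybridModel(lambda x: x * 2.0)
print(m(1.3))                           # quantized → 3
print(m(1.3, fhe=HybridFHEMode.TORCH))  # float reference → 2.6
```

With such a mode, the accuracy checks in this PR can compare simulate and disable against a genuine float baseline instead of against the already-modified model.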

github-actions[bot] commented 1 week ago

Coverage passed ✅

Coverage details

```
---------- coverage: platform linux, python 3.8.18-final-0 -----------
Name    Stmts   Miss  Cover   Missing
-------------------------------------
TOTAL    7885      0   100%

60 files skipped due to complete coverage.
```