Closed jfrery closed 1 week ago
The divergence in results was between torch and quantized?
I would still keep a torch mode here instead of removing it entirely.
Not sure what you mean. I check the accuracy of simulation vs torch and disable vs torch.
Why would we keep a torch execution? We have the torch model for this.
Actually @jfrery, I think @fd0r is right: we should keep the torch execution as well, but maybe through a new mode like "torch" or "float"? Because, as I mentioned to you, the torch model is currently modified in place by the hybrid model. That means there is no real way of running the model in float after compiling it with the hybrid model.
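The in-place issue described above can be sketched with plain Python. Note these are hypothetical stand-ins, not the actual Concrete ML hybrid model API: the point is only that keeping a `copy.deepcopy` of the model before the in-place rewrite preserves a float ("torch") execution path.

```python
import copy

# Hypothetical stand-ins (NOT the real Concrete ML API) illustrating why an
# in-place rewrite loses the float path, and how a pre-compilation copy
# restores a "torch"/"float" mode.
class ToyModel:
    """Stands in for the user's torch model."""
    def forward(self, x):
        return x * 2.0  # original float behavior

def compile_hybrid_inplace(model):
    # Mimics the hybrid model swapping submodules in place: after this call,
    # the original float behavior on `model` is gone.
    model.forward = lambda x: round(x * 2.0)  # "quantized" stand-in

model = ToyModel()
float_copy = copy.deepcopy(model)  # snapshot taken BEFORE the in-place rewrite
compile_hybrid_inplace(model)

quantized_out = model.forward(1.3)   # integer-rounded path
float_out = float_copy.forward(1.3)  # true float path, still intact
```

With the copy taken before compilation, a "float" mode can simply dispatch to `float_copy` while the compiled modes use the rewritten model.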
Very good point! We replace in place, so I am basically checking disable vs disable and simulate vs disable. I will add the torch inference.
That being said, simulate vs disable should still surface the flakiness we saw ...
So yes, the problem was that we modified the torch model in place. I missed that...
```
---------- coverage: platform linux, python 3.8.18-final-0 -----------
Name    Stmts   Miss  Cover   Missing
-------------------------------------
TOTAL    7885      0   100%

60 files skipped due to complete coverage.
```
closes https://github.com/zama-ai/concrete-ml-internal/issues/3905
cc @RomanBredehoft