Open nazarpysko opened 1 week ago
Hello @nazarpysko, this looks more like a PyTorch issue 🤔 Can you confirm:
By the way, I invite you to check out and use our latest version, Concrete ML 1.6.0, released just a few days ago 😉
I can confirm that I checked my torch installation with the `conda list torch` command. Sure! I will try the latest version of concrete-ml.
I also tried moving from a conda env to a Python venv, but got the same error. I used Python 3.10, torch 1.13.1, and concrete-ml 1.6.0.
Hello again @nazarpysko,
Looks like something is going on with the CPU/GPU then. Could you try removing the `device = "cuda" if torch.cuda.is_available() else "cpu"` line (cell 7, I think) and writing `device = "cpu"` instead? Or at least send your model to the CPU with `torch_model = torch_model.to("cpu")` after training!
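The workaround above can be sketched as follows; the toy model and batch shapes are assumptions for illustration, not the notebook's actual code:

```python
import torch

# Original notebook line (cell 7): selects the GPU when one is available.
# device = "cuda" if torch.cuda.is_available() else "cpu"

# Suggested workaround: pin everything to the CPU instead.
device = "cpu"

# Toy model and batch (assumptions, not the notebook's actual model):
model = torch.nn.Linear(8, 4).to(device)
batch = torch.randn(2, 8, device=device)

out = model(batch)  # model and data share a device, so no cpu/cuda mismatch
```

Keeping the model and the inference data on the same device avoids the mismatch described later in this thread.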
Hi @nazarpysko,
You are right, there is a problem when the machine has a GPU available, since the data will be on the CPU and the model on the GPU.
You can add a

```python
torch_model = torch_model.cpu()
```

right before calling `compile_brevitas_qat_model`
and it should work fine. We are fixing the notebook in main (https://github.com/zama-ai/concrete-ml/pull/767).
Thanks for the issue!
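The fix above can be sketched as follows; the stand-in model and the `calibration_data` name are assumptions, not the notebook's actual code:

```python
import torch

# Stand-in for the notebook's trained Brevitas QAT model (an assumption):
torch_model = torch.nn.Sequential(torch.nn.Linear(4, 2))

if torch.cuda.is_available():
    torch_model = torch_model.cuda()  # training may have left the model on the GPU

# The fix: move the model back to the CPU right before compilation.
torch_model = torch_model.cpu()

# With Concrete ML installed, compilation would then follow, e.g.:
# from concrete.ml.torch.compile import compile_brevitas_qat_model
# quantized_module = compile_brevitas_qat_model(torch_model, calibration_data)
```

Since `compile_brevitas_qat_model` runs calibration data (which lives on the CPU) through the model, the model itself must also be on the CPU.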
Summary
I'm new to FHE and was trying out different tutorials listed on the zama.ai website. More specifically, while executing the Quantization Aware Training notebook, I got this error in the eleventh code cell:
The full error traceback looks like this:
Environment description
- OS: Ubuntu (WSL 2)
- Python version: 3.8.19
- concrete-ml version: 1.5.0