Open AbbyLuHui opened 2 years ago
There are a few things I'd want to know to be able to diagnose this:

- Does `Net()` run as expected on plaintext inputs?
- Do `x_test` and `model.parameters()` contain the values you expect?
- Does `x_test.get_plain_text()` return as expected?

Depending on the answers to these questions I may have some follow-ups.
Thanks so much for the reply! Here is some additional information.
Looking forward to the follow-ups. :)
Thanks for the information. A few more follow-ups to try to troubleshoot:

- Check whether the problem persists when you change `cfg.encoder.precision_bits`, `cfg.mpc.provider`, or `cfg.mpc.protocol` (adjusting the `src` argument accordingly).
- If `x_test.get_plain_text()` returns properly, this would only occur if the model weights are misaligned, which seems unlikely in this code. To check, you could verify that `torch.nn.utils.parameters_to_vector(private_model.parameters()).get_plain_text() == torch.nn.utils.parameters_to_vector(model.parameters())`.
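One caveat on that check: because CrypTen stores values in fixed-point, an exact equality comparison will typically report `False` even when the weights are perfectly aligned. Comparing within a small tolerance (as `torch.allclose` would) is more informative. A minimal stdlib sketch of such a comparison; the parameter values here are illustrative placeholders, not taken from the actual model:

```python
# Compare two flattened parameter vectors within a tolerance,
# mimicking torch.allclose for plain Python lists.

def allclose(a, b, atol=1e-4):
    """True if the vectors have equal length and every pair of
    elements differs by less than atol."""
    return len(a) == len(b) and all(abs(x - y) < atol for x, y in zip(a, b))

# Plaintext weights vs. weights recovered from the encrypted model:
# fixed-point encoding introduces small per-element errors.
plain = [0.25, -1.5, 0.033]
recovered = [0.2500153, -1.4999847, 0.0330086]

print(allclose(plain, recovered))  # True: within encoding error
print(plain == recovered)          # False: exact comparison fails
```

If the tolerant comparison passes, the weights are aligned and the residual differences are just encoding noise.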
If these do not solve the problem, it is likely a bug in CrypTen code which we should try to identify.
Thanks a lot for the detailed comments. Here is more troubleshooting information.
`torch.nn.utils.parameters_to_vector(private_model.parameters()).get_plain_text() == torch.nn.utils.parameters_to_vector(model.parameters())` returned mostly `False`. The values have minor differences on the order of 1e-5 to 1e-7. The same applies to the `world_size=1` case, which is working fine. For multiprocessing, I am using a script similar to https://github.com/facebookresearch/CrypTen/blob/f4cbdfc685d9064f45a5654dee9f3809f6d93e7f/examples/multiprocess_launcher.py
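Differences on the order of 1e-5 are consistent with CrypTen's fixed-point encoding rather than misaligned weights: with the default of 16 precision bits, values are rounded to the nearest multiple of 2^-16 ≈ 1.5e-5, so decrypted weights are not expected to match the plaintext weights bit-for-bit. A small stdlib sketch of that encode/decode round trip, mirroring only the scaling step of the encoder (not CrypTen's actual `FixedPointEncoder` class):

```python
# Fixed-point round trip with 16 fractional bits, the granularity
# that a default precision setting of 16 bits implies.
PRECISION_BITS = 16
SCALE = 2 ** PRECISION_BITS

def encode(x):
    """Round a float to the nearest fixed-point integer."""
    return round(x * SCALE)

def decode(n):
    """Convert a fixed-point integer back to a float."""
    return n / SCALE

w = 0.1234567
err = abs(decode(encode(w)) - w)
print(err > 0)                 # True: the round trip is not exact
print(err <= 0.5 / SCALE)      # True: error bounded by half a quantum (~7.6e-6)
```

So mostly-`False` exact comparisons with 1e-5-to-1e-7 discrepancies in both the `world_size=1` and multiprocess cases look like expected encoding error, which points the overflow investigation elsewhere.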
Hi, I am trying to import a pretrained PyTorch MLP model into CrypTen. However, there seem to be numerical overflow issues on both CPU and GPU when converting `output_enc` to plaintext. I am wondering what might cause the issue? Thank you!