edufourfr / reading-in-the-dark

Implementation of "Reading in the Dark: Classifying Encrypted Digits with Functional Encryption"

Some questions about the key generation #2

Open dbloldb opened 4 years ago

dbloldb commented 4 years ago

After reading the article and the code, I have some questions about the key generation. Do I need to generate the g1, g2, gt for my model, or can I use the same generators you used in your code?

Based on my understanding, what I should do is: 1) train my own model on my own data; 2) export, serialize, and keep the model weights in a file; 3) generate the master key, private key, and public key, which are associated with my model weights (to generate these keys, I only need to provide my model weights); I can leverage your MNT159.inst file instead of generating my own "MNT159.inst" file, right? If I do need to generate one, could you please give a detailed guide on how to do that? 4) encrypt the test data and run the prediction on the encrypted data.
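To make steps 1) and 2) concrete, this is the kind of thing I have in mind (a toy stand-in with random data and a trivial linear model, just to illustrate the flow; my real training code would of course differ):

```python
import numpy as np

# Toy stand-in for steps 1) and 2): "train" a model and persist its weights.
rng = np.random.default_rng(0)
X = rng.random((100, 784))                   # 100 flattened 28x28 "digits"
y = rng.integers(0, 10, size=100)            # their labels

# one linear layer fit by least squares, standing in for a real model
Y = np.eye(10)[y]                            # one-hot targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)    # 784x10 weight matrix
np.save('model_weights.npy', W)              # step 2: serialize the weights

# Steps 3) and 4) would then generate the keys from these weights and
# encrypt/classify test digits, reusing the MNT159.inst generators if possible.
```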

Is that correct, or do I need to build and generate my own inst file containing g1, g2, gt?

Regards, Corey

edufourfr commented 4 years ago

Hi Corey,

Both would work; it's fine to use MNT159.inst if you don't want to generate new bases. Note that this will tie you to the MNT159 curve. I don't think there are any security issues with reusing generators generated by someone else, but I could be wrong. If that's a concern, you probably shouldn't be using this library in the first place anyway, because it's meant for prototyping.

Best, Edouard

dbloldb commented 4 years ago

Hi Edouard,

Thanks for your reply. Why will it tie me to the MNT159 curve? The master secret key and the public key come from the PPT setup algorithm: PG := (G1, G2, p, g1, g2, e) ← GGen(1^λ); msk := (s, t); pk := (PG, g1^s, g2^t), where s and t are random vectors. The only input is the security parameter; everything else is sampled at random. Given that, the parameters in MNT159 are also "random". The only thing that depends on my model is the decryption key, which is derived from my trained model weights. Is that correct?
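Concretely, here is roughly how I read that setup step, done directly with charm (this is just my interpretation of the formula, not code taken from your repo, and the vector length is arbitrary):

```python
from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, G2

group = PairingGroup('MNT159')                    # the pairing group PG
g1, g2 = group.random(G1), group.random(G2)       # the generators / "bases"

n = 784                                           # e.g. a flattened 28x28 digit
s = [group.random(ZR) for _ in range(n)]          # msk := (s, t), random vectors
t = [group.random(ZR) for _ in range(n)]

msk = (s, t)
pk = (g1, g2,
      [g1 ** si for si in s],                     # g1^s
      [g2 ** ti for ti in t])                     # g2^t
```

So if I understand correctly, nothing in this step depends on the model; only the later key-generation step does.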

Regards, Corey

edufourfr commented 4 years ago

Hi Corey,

I'm not sure I fully understand your follow-up. The GGen part of the cryptographic algorithm description only has theoretical importance, and it's not really what the sampling of the generators is meant to emulate. So in practice we use a fixed curve with fixed generators. But as far as I understand, the charm library does not (or at least did not) have fixed generators for pairing curves, so I had to pick them and store them so that my results would be consistent across multiple runs. Still, those generators live on the pairing curve MNT159, and you cannot use them with other curves. If you want to use your own generators, be it on MNT159 or on another curve, you can generate your own '.inst' file/object with the create_ method of class ML_DGP in core/scheme.py.
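For reference, sampling fresh generators with charm and persisting them so they can be reused across runs looks roughly like this (a simplified sketch; the actual '.inst' format produced by ML_DGP may differ):

```python
from charm.toolbox.pairinggroup import PairingGroup, G1, G2

curve = 'MNT159'                          # or another pairing curve supported by charm
group = PairingGroup(curve)

# sample fresh generators and store them so later runs reuse the same bases
g1, g2 = group.random(G1), group.random(G2)
with open('my_bases.inst', 'wb') as f:
    f.write(group.serialize(g1) + b'\n' + group.serialize(g2))

# later runs: load the stored generators instead of sampling new ones
with open('my_bases.inst', 'rb') as f:
    raw1, raw2 = f.read().split(b'\n')
g1, g2 = group.deserialize(raw1), group.deserialize(raw2)
```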

Hope that helps, Edouard