bigscience-workshop / petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
https://petals.dev
MIT License

[feature] add privacy #412

Open LouSparfell opened 1 year ago

LouSparfell commented 1 year ago

One of the obstacles to using Petals is that there is no privacy. It would be great to add some features for this. I'm not an expert, but wouldn't it be possible to use homomorphic encryption or even zero-knowledge methods?

Well, it's just an idea, but with it I'm sure the number of users could take off. :)

borzunov commented 1 year ago

Hi @LouSparfell,

As far as we know, homomorphic encryption and ZK methods are too slow to be applied to LLMs, since they are designed for integer computations and are not well supported or parallelized on GPUs. Everything we have seen leads to a 100-1000x slowdown; at that point, a better approach is to just run the LLM fully locally on a CPU or with offloading.
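For intuition on the "integer computations" point, here is a toy sketch (not from this thread, and deliberately insecure with tiny parameters) of Paillier, a classic additively homomorphic scheme. It only operates on integers modulo n, so LLM float weights/activations would first need fixed-point quantization, and every homomorphic operation is a big-number modular exponentiation rather than a fused GPU matmul, which is where the orders-of-magnitude slowdown comes from:

```python
# Toy Paillier additively homomorphic encryption (illustrative only).
# Property shown: Enc(a) * Enc(b) mod n^2 decrypts to (a + b) mod n,
# i.e. the scheme supports addition of *integers* under encryption.
import math
import random

p, q = 293, 433               # toy primes; real deployments use 1024+ bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)  # Carmichael function of n
g = n + 1                     # standard simple choice of generator
mu = pow(lam, -1, n)          # with g = n + 1, mu = lam^{-1} mod n

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt via L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = 1234, 5678
c_sum = encrypt(a) * encrypt(b) % n2   # homomorphic addition in ciphertext space
assert decrypt(c_sum) == (a + b) % n
```

Note that even this only gives addition; evaluating a transformer's non-linearities (softmax, GELU) under encryption requires far heavier fully homomorphic machinery, which is what makes the approach impractical for LLM inference today.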

We are following this topic and would be happy to see methods that could actually be applied to distributed LLMs, but it's unclear whether they will appear in the near future.