Haidra-Org / AI-Horde

A crowdsourced distributed cluster for AI art and text generation
GNU Affero General Public License v3.0

Would it make sense to use hivemind for distributed training/generation? #77

Open 0xdevalias opened 1 year ago

0xdevalias commented 1 year ago

Splitting this out from the unrelated issue:

    Not sure if the implementation/etc would be compatible, but here's another distributed StableDiffusion training project a friend recently linked me to:

Originally posted by @0xdevalias in https://github.com/db0/AI-Horde/issues/12#issuecomment-1304692808

Basically, I stumbled across the hivemind lib and thought that it could be a useful addition to AI-Horde. I'm not 100% sure how the current distributed process is implemented, but from a quick skim it looked like perhaps you had rolled your own.

Not sure if it's something you already considered and decided against, but wanted to bring it to your attention in case you hadn't seen it before.
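For reference, here's roughly what wiring hivemind into a worker could look like, going by the library's quickstart pattern. This is just a sketch to show the shape of the API, not anything from the horde codebase; the run_id, batch sizes, and model are placeholders.

    import torch
    import hivemind

    # A model/optimizer pair like any worker would already have locally.
    model = torch.nn.Linear(512, 10)
    base_opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # Join (or start) the DHT that peers use to discover each other.
    dht = hivemind.DHT(
        initial_peers=[],        # multiaddrs of existing peers; empty starts a new swarm
        start=True,
    )

    # hivemind.Optimizer wraps the local optimizer; peers train independently and
    # average parameters once the swarm has processed target_batch_size samples.
    opt = hivemind.Optimizer(
        dht=dht,
        run_id="horde_test_run",   # peers with the same run_id train together (placeholder name)
        optimizer=base_opt,
        batch_size_per_step=32,    # samples contributed by each local opt.step()
        target_batch_size=10_000,  # collective samples per averaging round
        use_local_updates=True,    # keep stepping locally, average in the background
        matchmaking_time=3.0,
        averaging_timeout=10.0,
        verbose=True,
    )
    # Training then calls opt.step() exactly like the plain SGD optimizer would.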

db0 commented 1 year ago

That would take quite a bit of work to onboard to the horde, but it's promising. The problem is that the horde is asynchronous, so the latency might be prohibitive, but I would be willing to consider it, especially if someone sends a PR.

ndahlquist commented 1 year ago

I think the horde is primarily used for inference, not training. Do any jobs actually do training, or is that planned for the future? If not, it seems like this may provide limited benefit.

0xdevalias commented 1 year ago

Speaking of using hivemind for distributed training/etc, just stumbled across the following on:

SD Training Labs is going to conduct the first global public distributed training on November 27th

  • Distributed training information provided to me:
    • Attempted combination of the compute power of 40+ peers worldwide to train a finetune of Stable Diffusion with Hivemind
    • This is an experimental test that is not guaranteed to work
    • This is a peer-to-peer network.
    • You can use a VPN to connect
    • Run inside an isolated container if possible
    • Developer will try to add code to prevent malicious scripting, but nothing is guaranteed
    • Current concerns with training like this:
      • Concern 1 - Poisoning: A node can connect and use a malicious dataset, affecting the averaged gradients. As in a blockchain network, this only has a small effect on the averaged weights; the larger the number of malicious nodes connected, the more influence they have on the result (see the sketch after this list). At the moment we are implementing super basic (and vague) Discord account verification.
      • Concern 2 - RCE: Pickle exploits should not be possible, but this hasn't been tested.
      • Concern 3 - IP leak & firewall issues: Due to the structure of hivemind, IPs are visible to other peers. You can avoid this by setting client-only mode, but that limits the network's reach. It should be possible to use IPFS to avoid firewall and NAT issues, but it doesn't work at the moment.
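To make the dilution argument in Concern 1 concrete, a toy illustration (my own, not code from any of these projects): with plain gradient averaging, each peer's contribution enters the result with weight 1/N, so a single bad peer out of 40 only shifts the average by 1/40 of its deviation from the honest consensus, assuming the contributions have comparable magnitude.

    import torch

    n_honest, n_malicious = 39, 1
    honest_grad = torch.ones(4)       # pretend every honest peer reports the same gradient
    malicious_grad = torch.zeros(4)   # one peer reports a poisoned gradient

    stacked = torch.cat([honest_grad.repeat(n_honest, 1),
                         malicious_grad.repeat(n_malicious, 1)])
    averaged = stacked.mean(dim=0)
    print(averaged)  # tensor([0.9750, 0.9750, 0.9750, 0.9750]) -- a 1/40 shift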

Doing some further googling/etc, it seems that the 'SD Training Labs' discord is:

And things are being coordinated in the #distributed-training channel, which has a few pinned messages about the training, and links to the following repos:

It looks like the chavinlo/distributed-diffusion repo is based on this one:


A couple of snippets from skimming that Discord channel:

Could you tell me what are the minimum hardware requirements to participate?

At the moment, any GPU with 20.5 GB of VRAM, so an RTX 3090.

Yeah, you can connect and disconnect at any time. It basically works like this: when a training session starts, there is one peer that the other peers connect to. This peer usually has two ports open, one for TCP and one for UDP connections (TCP works most of the time while UDP doesn't).

Then the rest of the peers connect to the first peer. They can either choose to open their ports too, so more people can connect to them, extending the network reach and reducing global latency, or choose to be just a client, meaning that no other peers can connect to them.
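That description maps fairly directly onto hivemind's DHT setup. A rough sketch of the three roles, assuming hivemind's quickstart-style API (the port number and addresses are placeholders):

    import hivemind

    # First peer of the session: opens a TCP port and a UDP/QUIC port for others to dial.
    bootstrap = hivemind.DHT(
        host_maddrs=["/ip4/0.0.0.0/tcp/31337", "/ip4/0.0.0.0/udp/31337/quic"],
        start=True,
    )
    initial_peers = [str(addr) for addr in bootstrap.get_visible_maddrs()]
    print("share these addresses with the other peers:", initial_peers)

    # A peer that opens its own ports too, extending the network's reach.
    full_peer = hivemind.DHT(
        initial_peers=initial_peers,
        host_maddrs=["/ip4/0.0.0.0/tcp/0", "/ip4/0.0.0.0/udp/0/quic"],
        start=True,
    )

    # A client-only peer: it dials out but accepts no inbound connections
    # (this is also the "client-only mode" mentioned in Concern 3 above).
    client_peer = hivemind.DHT(
        initial_peers=initial_peers,
        client_mode=True,
        start=True,
    )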

Then all of the peers train individually (in a federated manner) on their portion of the dataset (provided by a dataset server). Once a certain number of iterations has been reached, all peers stop training and start exchanging data with one another; this usually takes 3 minutes in very ideal conditions but it can take up to 15 or 20.

If a peer joins while this is happening, or has outdated weights, it will have to wait and download the weights again. If a peer exits while this is happening, or before it shares its locally trained weights, the network loses some potential learning, and if the dataset shard that was assigned to that peer isn't reported back (a 30-minute timeout), it will be reassigned to another peer later.
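The shard hand-out and 30-minute reassignment described there could be as simple as a lease table on the dataset server. A hypothetical sketch (every name here is made up; I haven't looked at how distributed-diffusion actually implements it):

    import time

    LEASE_TIMEOUT = 30 * 60  # seconds before an unreported shard is reassigned

    class ShardTracker:
        def __init__(self, num_shards):
            self.unassigned = set(range(num_shards))
            self.leases = {}      # shard_id -> (peer_id, lease_start)
            self.completed = set()

        def assign(self, peer_id):
            """Hand the next free shard to a peer, reclaiming expired leases first."""
            now = time.time()
            for shard, (_, started) in list(self.leases.items()):
                if now - started > LEASE_TIMEOUT:
                    del self.leases[shard]
                    self.unassigned.add(shard)   # peer went silent: reassign later
            if not self.unassigned:
                return None
            shard = self.unassigned.pop()
            self.leases[shard] = (peer_id, now)
            return shard

        def report_done(self, shard):
            """Peer finished training on the shard and shared its weights."""
            self.leases.pop(shard, None)
            self.completed.add(shard)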

Once all the peers have synchronized, they resume training and repeat the process until they reach the set number of iterations again. One potential concern is the security of the network, since basically anyone can connect and send garbage data. I was thinking of adding basic Discord account auth for now; I have read some PRs containing network security features, but I am not sure. I am also testing the effects of compression during sync right now.
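On the compression-during-sync point: hivemind exposes compression strategies for the tensors exchanged while averaging, so a test could look roughly like this (parameter names as I read them in the hivemind docs; treat the whole snippet as an assumption rather than the actual trainer config):

    import torch
    import hivemind
    from hivemind import Float16Compression

    model = torch.nn.Linear(512, 10)
    dht = hivemind.DHT(start=True)

    opt = hivemind.Optimizer(
        dht=dht,
        run_id="horde_test_run",
        optimizer=torch.optim.SGD(model.parameters(), lr=0.01),
        batch_size_per_step=32,
        target_batch_size=10_000,
        grad_compression=Float16Compression(),             # fp16-compress gradients in flight
        state_averaging_compression=Float16Compression(),  # fp16-compress averaged state
    )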

I will probably just use the old unoptimized codebase and stick hivemind onto it; it's so complicated to port the diffusers thing into Lightning.

I will try to "port" it to Lightning and see if it works, because there's another repo (naifu) that is also doing training with diffusers, very similar to the current trainer, but it does some weird things in the back. They got hivemind working, I think, but I'm not sure, because they don't even use the DHT (they have the modules, though).

Okay, and is this just for the group project, or also to offer GPU to individual artists?

I was also planning to do a distributed Dreambooth, like the horde, for everyone, so yeah.