liusida / ComfyUI-B-LoRA

A ComfyUI custom node that loads and applies B-LoRA models.
MIT License

B-LoRA Training in ComfyUI - Request #6

Closed Kinglord closed 4 months ago

Kinglord commented 4 months ago

Hi there, thanks a lot for the node to use these in Comfy! I was wondering whether you were thinking about making a training node as well (or would like some help with one - just let me know), so that training can also be done within Comfy. I'm normally not a huge fan of using Comfy for this, but because B-LoRAs train from a single image, it could be quite handy compared to other LoRA training utilities. Just a thought!

liusida commented 4 months ago

Hi, my lord. I was thinking about making a docker image for training and putting it here: https://github.com/liusida/ComfyUI-B-LoRA/tree/main/train

There are two reasons why I think an independent training system would be better:

  1. According to the recent blog post, ComfyUI intends to focus on inference;
  2. Installing the training dependencies would pollute ComfyUI with many extra packages, and users might then have trouble installing this custom node at all.

So, I think a docker image would be the best solution. What do you think? Would you like to help with the training image?

Kinglord commented 4 months ago

Thanks for the quick reply! I saw you were looking at doing a docker image which is what made me think about bringing this up. In terms of your two points:

  1. Comfy has a huge ecosystem as I know you're aware, and people have built and continue to build some insane tooling for it ranging from training to LLMs, AI agents, and more. There's a large number of people trying to use Comfy for more than just inference and I think this is growing (platforms like Salt, etc.).
  2. I think this is where the real gotcha comes in for me, and I tend to agree with you that far more people will want to just use B-LoRAs than train them. That said, if the project wants to offer training, it makes a lot of sense to get it into Comfy rather than ship a standalone Docker image, because even fewer people will use the latter. To me the cleanest option is a hard split: people who just want the loader get no bloat (not even a Docker image), and the training is split off into another project/repo.

I don't know how extensive the training requirements are, but I'm happy to help with the docker image as well as training new B-LoRAs - especially against some fine-tuned checkpoints. 😁 Feel free to reply here or msg me on Discord (kinglord) if you want to talk more, I'll close this out for now!

liusida commented 4 months ago

Yes, I think Docker's user base doesn't quite overlap with ComfyUI's, although both are quite popular. I've also noticed that ComfyUI users tend to install more custom nodes than I had imagined. xD Maybe integrating the training into ComfyUI is an option; let's keep that possibility open for now.

For future training with B-LoRA, I recommend trying the author's code to get a feel for the training process. As far as I know, this is the easiest way (in terms of computation) to do the training, so it's worth trying. https://github.com/yardenfren1996/B-LoRA
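For context on why the loader and any future trainer stay lightweight: B-LoRA trains LoRA weights on only two SDXL attention blocks, one capturing content and one capturing style, so a checkpoint can be split into those two halves and applied independently. The sketch below shows the idea with a toy state dict; the exact block names are an assumption based on the paper's SDXL setup, so check your checkpoint's real keys before relying on them.

```python
# Minimal sketch: splitting a B-LoRA state dict into its "content" and
# "style" halves by key prefix. The block names below are assumptions
# (the paper's W4/W5 attention blocks in SDXL); real checkpoints may
# use a different key naming scheme.

CONTENT_BLOCK = "unet.up_blocks.0.attentions.0"  # assumed content block (W4)
STYLE_BLOCK = "unet.up_blocks.0.attentions.1"    # assumed style block (W5)

def split_b_lora(state_dict):
    """Partition LoRA weights by which block their key belongs to."""
    content = {k: v for k, v in state_dict.items() if CONTENT_BLOCK in k}
    style = {k: v for k, v in state_dict.items() if STYLE_BLOCK in k}
    return content, style

# Toy example with placeholder values standing in for tensors:
toy = {
    "unet.up_blocks.0.attentions.0.proj_in.lora_A.weight": 0.1,
    "unet.up_blocks.0.attentions.1.proj_in.lora_A.weight": 0.2,
}
content, style = split_b_lora(toy)
print(len(content), len(style))  # 1 1
```

Mixing the content half of one B-LoRA with the style half of another is exactly the recombination trick the method is built around, which is also why a loader node needs no training dependencies at all.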