raysers / Mflux-ComfyUI

Quick Mflux on ComfyUI
MIT License

8-bit MFLUX #4

Open sanctimon opened 1 month ago

sanctimon commented 1 month ago

Hello,

It is not clear to me where the 8-bit MFLUX can be downloaded from. It is selectable in your custom nodes, but I cannot find an option to download.

Can you help?

Thanks

raysers commented 1 month ago

Hello, and thank you for your feedback. This is the first issue I’ve received.

The MfluxModelsDownloader node currently only offers "flux.1-schnell-mflux-4bit" and "flux.1-dev-mflux-4bit". If you want the 8-bit version, you can try the custom model node MfluxCustomModels instead. The prerequisite is that you already have the full 33GB Black Forest native model in your .cache; you can then quantize and save your own copy by selecting the model (Schnell or Dev) and setting the quantize parameter to 8.

The custom_identifier field can be filled in arbitrarily or even left blank. Once you’ve saved the model, you will find it in models/Mflux.
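If you prefer to do this outside ComfyUI, the standalone mflux CLI can produce the same kind of quantized folder; a minimal sketch, assuming mflux is installed (the output path is just an example):

    # Download the full dev model (if not cached), quantize to 8-bit, and save
    mflux-save \
        --path "/Users/you/Desktop/dev_8bit" \
        --model dev \
        --quantize 8

The saved folder can then be moved into models/Mflux so the loader node can see it.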

If you don’t have the full Black Forest model in your .cache, you might want to try:

https://huggingface.co/AITRADER/MFLUX.1-schnell-8-bit
https://huggingface.co/AITRADER/MFLUX.1-dev-8-bit

I just found them on Hugging Face, but I haven't tested them. If you can try them and confirm they work properly, I will add them to the list in the MfluxModelsDownloader node soon.

sanctimon commented 1 month ago

Thank you for your quick response! I am using ComfyUI from within Pinokio, as it provides specific MPS advantages without too much fine-tuning. I was able to use the original Mflux to download the 8-bit version into a folder on my machine. However, here is the challenge:

  1. The 8-bit version needs to go into /models/Mflux. But what should the folder be named so that the Mflux nodes recognise it?
  2. Even in that case, the error about the missing login/token still pops up. You wrote "the prerequisite for using it is that you already have the full 33GB Black Forest native model in your .cache". Can you point me to the exact location in the ComfyUI structure where the original native model should be stored? I already have it in my unet folder and it works fine. Many thanks!
  3. Last but not least: if a LoRA was created using the 16-bit flux.dev, will it work with the 8-bit version?
raysers commented 1 month ago

I may not be able to fully answer the three questions you mentioned, but I’ll do my best:

1. I'm not very familiar with how paths work in ComfyUI under Pinokio, as I haven’t tried Pinokio yet; so far, my environment has always been native ComfyUI. I’m not sure whether the 8-bit model you downloaded is this one:

https://huggingface.co/AITRADER/MFLUX.1-dev-8-bit/tree/main

If so, you’ll need to manually create an "MFLUX.1-dev-8-bit" folder in models/Mflux, download all files from the above address into models/Mflux/MFLUX.1-dev-8-bit, and then load it directly with the "Mflux Models Loader" node. You should see "MFLUX.1-dev-8-bit" available in the selection list.
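If clicking through the web interface is tedious, the Hugging Face CLI can fetch the whole repository in one go; a sketch, assuming huggingface_hub is installed and you run it from your ComfyUI directory:

    # Install the CLI, then download every file into the expected folder
    pip install -U "huggingface_hub[cli]"
    huggingface-cli download AITRADER/MFLUX.1-dev-8-bit \
        --local-dir models/Mflux/MFLUX.1-dev-8-bit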

Here’s the file structure I found on the Mflux website:

MFLUX.1-dev-8-bit
├── text_encoder
│   └── model.safetensors
├── text_encoder_2
│   ├── model-00001-of-00002.safetensors
│   └── model-00002-of-00002.safetensors
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── tokenizer_2
│   ├── special_tokens_map.json
│   ├── spiece.model
│   ├── tokenizer.json
│   └── tokenizer_config.json
├── transformer
│   ├── diffusion_pytorch_model-00001-of-00003.safetensors
│   ├── diffusion_pytorch_model-00002-of-00003.safetensors
│   └── diffusion_pytorch_model-00003-of-00003.safetensors
└── vae
    └── diffusion_pytorch_model.safetensors

If manual downloading is confusing, I suggest using the Mflux Models Downloader node to automatically download the 4-bit version first. Then open models/Mflux and review the 4-bit version’s structure; I think this will give you a clear idea.

2. As for .cache, this is managed by Hugging Face and doesn’t depend on ComfyUI. If you follow the simplest workflow (Quick MFlux Generation node + Save Image), the full Black Forest native model will be downloaded automatically into that cache. Unless you have modified the HF_HOME variable, it defaults to ~/.cache/huggingface in your home directory.
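To check where that cache lives, or to move it to a bigger drive, something like the following works (the external-drive path is just an example):

    # Default Hugging Face cache location on macOS/Linux
    ls ~/.cache/huggingface/hub

    # Redirect future downloads; add to ~/.zshrc or ~/.bashrc to persist
    export HF_HOME=/Volumes/External/huggingface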

3. Currently, using a LoRA with quantized models results in errors, unless you apply the LoRA to the full model while quantizing it with the Mflux Custom Models node. For details, please refer to my README, where I’ve included a brief guide.
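If you'd rather bake the LoRA in from the command line, I believe recent mflux versions accept LoRA arguments at save time, though I haven't verified this; an untested sketch, and the flag names are an assumption on my part:

    # Assumption: mflux-save accepts --lora-paths / --lora-scales
    mflux-save \
        --path "/Users/you/Desktop/dev_8bit_lora" \
        --model dev \
        --quantize 8 \
        --lora-paths "/Users/you/loras/my_lora.safetensors" \
        --lora-scales 1.0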

samgimagery commented 1 month ago

Hello, reading this I’d like to share how I got mine to work. I guess the easiest way was to use MFLUX directly and quantise the dev model from Hugging Face to my desktop, simply using this command:

    mflux-save \
        --path "/Users/you/Desktop/dev_8bit" \
        --model dev \
        --quantize 8

This will download the model, quantise it to 8-bit if you choose, and create the correct folder architecture on your desktop or wherever you point --path. You might have to log in first to allow access to Hugging Face. Then you can just move the folder to the correct location, and you can also remove the full model from .cache to save some space if desired.
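For the final move, a sketch assuming ComfyUI lives in your home directory (adjust the paths to your setup); the folder name you choose is what will show up in the Mflux Models Loader list:

    mkdir -p ~/ComfyUI/models/Mflux
    mv ~/Desktop/dev_8bit ~/ComfyUI/models/Mflux/MFLUX.1-dev-8-bit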

raysers commented 1 month ago

Hello, @sanctimon, I've added the 8-bit version to the downloader, so now you can directly download it using the MfluxModelsDownloader node. I hope this helps!

I also reviewed your previous reply carefully. You mentioned that you could place models in your unet folder and use them successfully, which suggests those models are single safetensors files. That is a different format from MFLUX models, which are diffusers-style and organized as a folder of files.

I made a mistake earlier: I assumed some users already had complete MFLUX-compatible models in ComfyUI and advised them to move those into models/Mflux to avoid the .cache download. However, given how FLUX models are commonly distributed, even users who already have a full model will most likely have a single safetensors file rather than the folder format MFLUX requires, so moving it into models/Mflux would not work as intended. I’ve removed this potentially misleading information from the README.

Good luck!

tolozine commented 4 weeks ago

@raysers How can I change the model path? My models are on another disk drive. How do I point the plugin at them?

raysers commented 4 weeks ago

> How can I change the model path? My models are on another disk drive. How do I point the plugin at them?

Are you in Shanghai? If so, I’ll be so bold as to answer in Chinese.

The model path here is preset in the code, so for now models cannot be loaded from any other location.

Also, could you tell me whether your previous models are single safetensors or GGUF files? If they are, they cannot be used with this plugin.

The models MFLUX uses are similar to diffusers models: an entire directory of files, not a single safetensors file.
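That said, since the loader only looks inside models/Mflux, a symbolic link from that folder to your external drive might work as a stopgap; this is an untested assumption on my part:

    # Hypothetical workaround: make models/Mflux point at an external drive
    rmdir ComfyUI/models/Mflux                      # only if it is empty
    ln -s /Volumes/External/Mflux ComfyUI/models/Mflux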

tolozine commented 3 weeks ago

I have safetensors files, as well as the model directories downloaded through Mflux. Could an extra-paths mechanism be used to load them? My Mac’s storage is too small, so I’ve moved the model files to an external drive.

raysers commented 3 weeks ago

@tolozine No problem, I can add a manually specified path, like the official tool offers, as an alternative option. Actually, I’ve always used an external drive myself; even my ComfyUI is installed on one, and my HF_HOME variable also points to the external drive, so I had never run into the low-space problem.

Alternatively, I’ll test how to get extra model paths working; if I solve that, you could simply edit ComfyUI’s configuration to map a folder on your drive to models/Mflux.
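If that pans out, the mapping would live in ComfyUI’s extra_model_paths.yaml; a hypothetical sketch, since an "Mflux" entry only takes effect if the plugin registers that folder name with ComfyUI:

    # Hypothetical entry appended to ComfyUI/extra_model_paths.yaml
    cat >> ComfyUI/extra_model_paths.yaml <<'EOF'
    mflux_external:
        base_path: /Volumes/External/
        Mflux: models/Mflux/
    EOF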