pinokiofactory / flux-webui


Did not find branch or tag 'test-clear-memory-cpu-offload' #3

Open · skydiablo opened this issue 1 month ago

skydiablo commented 1 month ago

install error:

  Cloning https://github.com/huggingface/accelerate.git (to revision test-clear-memory-cpu-offload) to /home/volker/pinokio/cache/TMPDIR/pip-req-build-as5l700j
  Running command git clone --filter=blob:none --quiet https://github.com/huggingface/accelerate.git /home/volker/pinokio/cache/TMPDIR/pip-req-build-as5l700j

  WARNING: Did not find branch or tag 'test-clear-memory-cpu-offload', assuming revision or ref.
  Running command git checkout -q test-clear-memory-cpu-offload
  error: pathspec 'test-clear-memory-cpu-offload' did not match any file(s) known to git
  error: subprocess-exited-with-error

  × git checkout -q test-clear-memory-cpu-offload did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git checkout -q test-clear-memory-cpu-offload did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
MeinDeutschkurs commented 1 month ago

Same here. But it says 'NVIDIA only, for now':

https://github.com/pinokiofactory/flux-webui/blob/main/pinokio.js

title: "flux-webui",
  description: "[NVIDIA ONLY, FOR NOW] Minimal Flux Web UI powered by Gradio & Diffusers (Flux Schnell + Flux Merged)",

I'm on an M1 Max with 64 GB.

vodkadrunkinski commented 1 month ago

Having the same issue.
The git+https://github.com/huggingface/accelerate.git@test-clear-memory-cpu-offload branch doesn't exist.

I believe that branch got rolled into the new release. I got it working by replacing the above line in requirements.txt with: git+https://github.com/huggingface/accelerate.git@v0.33.0

Then use the following install line: pip install torch==2.4.0 torchvision==0.19 torchaudio==2.4.0 xformers --index-url https://download.pytorch.org/whl/cu121
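
In other words, the one-line change to requirements.txt is (shown as a diff for illustration):

-git+https://github.com/huggingface/accelerate.git@test-clear-memory-cpu-offload
+git+https://github.com/huggingface/accelerate.git@v0.33.0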

MeinDeutschkurs commented 1 month ago

And if MPS is to be supported, here is a code snippet that works:

import torch
from diffusers import FluxPipeline
import diffusers
import argparse

# Patch the rope helper so that, on MPS, rotary position embeddings are
# computed on the CPU and moved back afterwards (rope uses float64 math,
# which the MPS backend does not support)
flux_rope = diffusers.models.transformers.transformer_flux.rope
def new_flux_rope(pos: torch.Tensor, dim: int, theta: int) -> torch.Tensor:
    assert dim % 2 == 0, "The dimension must be even."
    if pos.device.type == "mps":
        return flux_rope(pos.to("cpu"), dim, theta).to(device=pos.device)
    else:
        return flux_rope(pos, dim, theta)

diffusers.models.transformers.transformer_flux.rope = new_flux_rope

def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("--prompt", type=str, required=True, help="Text prompt for image generation")
    parser.add_argument("--width", type=int, default=1024, help="Width of the generated image")
    parser.add_argument("--height", type=int, default=1024, help="Height of the generated image")
    parser.add_argument("--ratio", type=str, default=None, help="Aspect ratio for the image (e.g., 1:1, 3:4, 4:3)")
    parser.add_argument("--o", type=str, default="./output.png", help="Output path for the generated image")
    return parser.parse_args()

def round_to_nearest(value: int, multiple: int) -> int:
    # Rounds up to the nearest multiple (ceiling)
    return (value + multiple - 1) // multiple * multiple

def calculate_dimensions(ratio_str: str, base_size: int = 1280) -> tuple[int, int]:
    try:
        width_ratio, height_ratio = map(int, ratio_str.split(':'))
        total_ratio = width_ratio + height_ratio
        width = int(base_size * (width_ratio / total_ratio))
        height = int(base_size * (height_ratio / total_ratio))

        # Round dimensions to be divisible by 8
        width = round_to_nearest(width, 8)
        height = round_to_nearest(height, 8)

        return width, height
    except ValueError:
        raise ValueError("Invalid ratio format. Use 'width:height' format.")
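
# Worked example: --ratio 3:4 with the default base_size of 1280 gives
# width = int(1280 * 3/7) = 548 -> rounded up to 552, and
# height = int(1280 * 4/7) = 731 -> rounded up to 736 (both multiples of 8).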

def main():
    args = parse_arguments()

    if args.ratio:
        args.width, args.height = calculate_dimensions(args.ratio)

    # Load the Flux Schnell model (revision 'refs/pr/1' pointed at the
    # diffusers-format weights at the time)
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell",
        revision='refs/pr/1',
        torch_dtype=torch.bfloat16
    ).to("mps")

    # Generate the image
    out = pipe(
        prompt=args.prompt,
        guidance_scale=3.5,
        height=args.height,
        width=args.width,
        num_inference_steps=4,
        max_sequence_length=256,
    ).images[0]

    # Save the generated image
    out.save(args.o)
    print(f"Image generated and saved as {args.o}")

if __name__ == "__main__":
    main()
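
Assuming the script is saved as, say, flux_schnell_mps.py (the filename is arbitrary), a sample invocation would be:

python flux_schnell_mps.py --prompt "a lighthouse at dawn" --ratio 3:4 --o lighthouse.png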
6Morpheus6 commented 1 month ago

@skydiablo @vodkadrunkinski @MeinDeutschkurs Flux got fixed a few hours ago and now runs for the majority of users. It also works on Macs. If it failed for you, please delete and reinstall flux-webui.

MeinDeutschkurs commented 1 month ago

Great! On macOS, it runs smoothly, without any trouble.

The readme.md could include a tip about which Python version to use.

This issue can be closed.

skydiablo commented 1 month ago

thx! working well!