FloricSpacer / AbyssDiver


Local Stable Diffusion Generation Support #129

Open SPOOKEXE opened 2 weeks ago

SPOOKEXE commented 2 weeks ago

Not really an issue, since I've already coded it, but having the option to generate locally is helpful;


Using this stable diffusion webui repository: https://github.com/Panchovix/stable-diffusion-webui-reForge/releases

Include "--api" in the WebUI's command-line arguments, otherwise the API endpoints won't be available.

Below is the code used to hook it up (note that I edited the compiled game and not this builder; I also had to use a Python proxy because the stable diffusion webui API has CORS whitelisting).

Setup:

  1. Have python installed
  2. Run Stable Diffusion WebUI
  3. Run "pip install fastapi httpx uvicorn" in the command prompt (the proxy needs all three)
  4. Run the proxy
  5. Edit the code in the html file (search for "setupDalleImageGenerator")
  6. Enter "sk-1" in the api key and submit
  7. Generate portrait
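
The flow set up above can also be driven from a standalone script. The sketch below (stdlib only, not part of the game code) mirrors the payload fields used in the JavaScript further down and posts them to the proxy started in step 4:

```python
import base64
import json
import urllib.request

# URL of the proxy started in step 4 (adjust if you changed host/port).
PROXY_URL = "http://127.0.0.1:8000/proxy/txt2img"

def build_payload(prompt: str) -> dict:
    # Mirrors the fields used in the edited setupDalleImageGenerator below.
    return {
        "prompt": prompt,
        "negative_prompt": "bad_quality, bad_anatomy, pixelated",
        "sampler_name": "DPM++ 2M",
        "batch_size": 1,
        "n_iter": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "cfg_scale": 7.0,
    }

def generate(prompt: str, out_path: str = "portrait.png") -> None:
    # POST the payload to the proxy and save the first returned image.
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The WebUI API returns images as plain base64 strings (no data: prefix).
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(data["images"][0]))

# Usage (with WebUI and proxy running):
# generate("portrait of a cursed adventurer, anime screencap style")
```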

python_proxy_sd_local.py

from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
import httpx

app = FastAPI()

# CORS settings to allow all origins, methods, and headers
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Change to specific domain if needed
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# URL of the local Stable Diffusion API
STABLE_DIFFUSION_API_URL = "http://localhost:7861/sdapi/v1/txt2img"

@app.post("/proxy/txt2img")
async def proxy_txt2img(request: Request):
    try:
        # Extract the JSON body from the incoming request
        body = await request.json()

        # Forward the request to the Stable Diffusion API
        async with httpx.AsyncClient() as client:
            response = await client.post(STABLE_DIFFUSION_API_URL, json=body)

        # Return the response from Stable Diffusion to the client
        return response.json()

    except Exception as e:
        return {"error": str(e)}

# Run the FastAPI server
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

Edited JavaScript Dalle function


setup.setupDalleImageGenerator = async function() {
    console.log("image generator");

    // API Endpoint for the local Stable Diffusion WebAPI
    const apiUrl = 'http://127.0.0.1:8000/proxy/txt2img'; // Replace with your actual local Stable Diffusion API URL if different.

    // Static part of the prompt
    let staticPrompt = "Create a highly detailed digital portrait in the style of anime screencap. The subject is a seasoned adventurer who bears distinct marks of various curses. The portrait should capture the character's resilience, with a focus on their unique traits that reflect their cursed nature. The background should evoke a sense of the mysterious and foreboding Abyss, with dark and atmospheric elements.";

    // Dynamically generated character description
    let characterDescription = setup.evaluateCharacterDescription(State.variables.mc); // Assuming $mc is stored in State.variables.mc

    // Get the notification element
    const notificationElement = document.getElementById('notification');

    // Concatenate the static prompt with the dynamic description
    const prompt = staticPrompt + characterDescription;

    try {
        const response = await fetch(apiUrl, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify({
                prompt: prompt,
                negative_prompt: "bad_quality, bad_anatomy, pixelated", // Optional: Add any negative prompts if needed
                sampler_name: "DPM++ 2M", // You can adjust this based on the sampler you want to use
                batch_size: 1,
                n_iter: 1,
                steps: 20, // Number of inference steps, adjust as needed
                width: 512, // Image width
                height: 512, // Image height
                cfg_scale: 7.0, // Classifier-Free Guidance Scale, adjust for more/less adherence to prompt
                seed: null // Optional: Use a specific seed for reproducibility
            })
        });

        if (!response.ok) {
            throw new Error('Failed to connect to the Stable Diffusion API. Please check your API endpoint and ensure the server is running.');
        }
        const data = await response.json();
        console.log(data); // Debugging: Inspect the structure of the response

        if (data.images && data.images.length > 0) {
            const base64Image = data.images[0]; // Assuming the images are returned as base64 strings
            console.log("Base64 Image Data: ", base64Image ? base64Image.substring(0, 100) : "undefined");
            setup.storeImage(base64Image)
                .then(() => console.log('Image successfully stored.'))
                .catch((error) => console.error('Failed to store image:', error));
        } else {
            console.error('No images returned:', data);
            throw new Error('No images returned from server. This might be due to an issue with the Stable Diffusion model or the server.');
        }
    } catch (error) {
        console.error('Error generating image:', error);
        notificationElement.textContent = 'Error generating image: ' + error.message;
        notificationElement.style.display = 'block';
    }
}
SPOOKEXE commented 2 weeks ago

"Below is the code used to hook it up (note that I edited the compiled game and not this builder, also I had to use a python proxy because the stable diffusion webui API has CORS whitelisting)."

SPOOKEXE commented 2 weeks ago

In the example code I provided, it will use whatever model is currently loaded in the stable diffusion webui. If you want to use a specific model instead, edit the request body and add the "model" key:

body: JSON.stringify({
              model: "MODEL_NAME_HERE",
              prompt: prompt,
              negative_prompt: "bad_quality, bad_anatomy, pixelated", // Optional: Add any negative prompts if needed
              sampler_name: "DPM++ 2M", // You can adjust this based on the sampler you want to use
              batch_size: 1,
              n_iter: 1,
              steps: 20, // Number of inference steps, adjust as needed
              width: 512, // Image width
              height: 512, // Image height
              cfg_scale: 7.0, // Classifier-Free Guidance Scale, adjust for more/less adherence to prompt
              seed: null // Optional: Use a specific seed for reproducibility
          })
SPOOKEXE commented 2 weeks ago

If you want to see which API endpoints are available in the local stable diffusion webui install, you can go to http://127.0.0.1:7860/docs. You will need to add a route for each one to the proxy, though, unless you find a way to connect to the WebUI directly.
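
Each extra WebUI endpoint has to be forwarded the same way as `/proxy/txt2img`. One way to keep that manageable is a single table mapping proxy paths to upstream URLs (a sketch; the endpoint names besides txt2img are examples to verify against the /docs listing):

```python
SD_BASE = "http://localhost:7861/sdapi/v1"

# Proxy path segment -> upstream WebUI endpoint. Add a row per endpoint.
ROUTES = {
    "txt2img": f"{SD_BASE}/txt2img",
    "img2img": f"{SD_BASE}/img2img",
    "progress": f"{SD_BASE}/progress",
}

def upstream_url(name: str) -> str:
    # Resolve a /proxy/<name> request to the matching WebUI URL,
    # refusing anything not explicitly whitelisted.
    try:
        return ROUTES[name]
    except KeyError:
        raise ValueError(f"endpoint not proxied: {name}") from None
```

In the FastAPI proxy, a catch-all route such as `@app.post("/proxy/{name}")` could then look the target up with `upstream_url(name)` and forward the body exactly as the existing txt2img handler does.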


Annonymus-v02 commented 2 weeks ago

If you want to contribute this code you should open a pull request, not an issue. But unless you commit to maintaining it, it's unlikely that SD support will be implemented: it would be a large effort with technologies the current maintainers don't know, for a significantly worse experience than the existing (though non-free) option, and it requires players to have the hardware and expertise to run a model locally.