easydiffusion / sdkit

sdkit (stable diffusion kit) is an easy-to-use library for using Stable Diffusion in your AI Art projects. It is fast, feature-packed, and memory-efficient.

Are there any plans to make an executable out of this project for a one-click deploy? #71

Open iamianM opened 6 months ago

iamianM commented 6 months ago

This would be a huge feature as it would take away the hassle of installing all the python packages and GPU setup.

cmdr2 commented 6 months ago

Yep, you're looking for https://github.com/easydiffusion/easydiffusion :)

iamianM commented 6 months ago

Sorry, I should've been more specific. I meant an exe for just starting the API. Easy Diffusion is actually how I found this project, but it doesn't offer a way to use just the API; everything is forced through the UI, and I'm hoping to connect my app to this API.

cmdr2 commented 6 months ago

Ah got it. If you have Easy Diffusion installed, you can run your python programs using the Developer Console.

For example, start Developer Console.cmd (or .sh) and then run python your_program.py inside it. Your program can import sdkit directly.

You don't need to start Easy Diffusion at all, or use its HTTP API.

Please let me know if that doesn't work for you, thanks!

MackNcD commented 6 months ago

You can always use a .bat to install all the modules at the correct versions. Just do a 'pip freeze', note your version of Python, and then use a .bat to install the modules (PyTorch separately), etc. cmdr2, I didn't want to open a whole issue for this (I can), but are textual embeddings available for sdkit? If so, which modules would need to be imported, and what arg would need to be passed into the generate_images fn? ty!
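The pin-and-reinstall flow described above can be sketched as two commands (file names here are just conventions, not anything the thread specifies; PyTorch would still be installed separately, as noted, so the right CUDA build is picked):

```shell
# On the machine where everything already works: capture exact package versions
python -m pip freeze > requirements.txt

# On a fresh machine (e.g. from a .bat), after installing PyTorch separately:
python -m pip install -r requirements.txt
```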

cmdr2 commented 6 months ago

@MackNcD Ah, the example for embeddings was missing. Thanks, I've added it - https://github.com/easydiffusion/sdkit/blob/main/examples/010-generate-custom_embedding.py

Embeddings are loaded just like any other model, by setting context.model_paths['embeddings'] to the path (or list of paths) to the embedding, and then calling load_model(context, 'embeddings') to load the embedding.

The name of the embedding file (without the extension) will be used as the embedding token, which you can then use in the prompt.

You can load more embeddings by using the same process. Important: Previously loaded embeddings will NOT be unloaded when you load more embeddings. This works okay because the embedding is only used if the required token is present in the prompt.
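The token naming rule above can be sketched in plain Python: the token is just the embedding file's name with the directory and extension stripped (the file path below is only an illustration):

```python
import os

def embedding_token(path):
    # the prompt token is the embedding file's base name without its extension
    return os.path.splitext(os.path.basename(path))[0]

# loading 'CyberRealistic_Negative-neg.pt' would make this token usable in prompts
print(embedding_token("models/CyberRealistic_Negative-neg.pt"))  # CyberRealistic_Negative-neg
```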

MackNcD commented 6 months ago

It worked! Thanks!

import os
import time

import sdkit
from sdkit.generate import generate_images
from sdkit.models import load_model

def secondary_function(user_input):
    context = sdkit.Context()
    current_directory = os.getcwd()

    # load the Stable Diffusion checkpoint
    relative_path_to_model = os.path.join(current_directory, 'cyberrealistic_v42.safetensors')
    context.model_paths['stable-diffusion'] = relative_path_to_model
    load_model(context, 'stable-diffusion')

    # load the embedding; its file name (minus extension) becomes the prompt token
    relative_path_to_embedding = os.path.join(current_directory, 'CyberRealistic_Negative-neg.pt')
    context.model_paths["embeddings"] = relative_path_to_embedding
    load_model(context, "embeddings")

    prompt = user_input
    print(f"prompt = {prompt}")
    negative_prompt = 'CyberRealistic_Negative-neg'
    num_steps = 30

    # derive a pseudo-random seed from the current time:
    # drop the decimal point and keep the last 7 digits
    random_number_from_time = str(time.time())
    limited_timestamp = random_number_from_time.replace('.', '')[-7:]
    seed = int(limited_timestamp)

    images = generate_images(context, prompt=prompt, negative_prompt=negative_prompt,
                             num_inference_steps=num_steps, seed=seed, width=512, height=512)

    timestamp = int(time.time())
    filename = f'output_{timestamp}.png'
    os.makedirs('imgs', exist_ok=True)
    img_path = f'imgs/{filename}'
    images[0].save(img_path)

    display_image(img_path)  # display_image is defined elsewhere in the app