ShenZheng2000 opened 2 weeks ago
It seems that you need to calculate depth first rather than feeding the raw image directly into the ControlNet pipeline:

```python
from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from PIL import Image
import numpy as np
import torch
from diffusers.utils import load_image

depth_estimator = pipeline('depth-estimation')

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")

# Estimate depth and turn the single-channel map into a 3-channel control image
image = depth_estimator(image)['depth']
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

image = pipe("Stormtrooper's lecture", image, num_inference_steps=20).images[0]
image.save('./images/stormtrooper_depth_out.png')
```
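Note also that `pipeline('depth-estimation')` silently falls back to a default checkpoint. If you want a specific depth model, pin it explicitly. A minimal sketch, assuming the Hugging Face port `LiheYoung/depth-anything-large-hf` is the checkpoint you want (substitute your own if not):

```python
from transformers import pipeline

# Pin the depth estimator instead of relying on the pipeline default.
# "LiheYoung/depth-anything-large-hf" is an assumption here; swap in
# whichever depth checkpoint you actually intend to use.
depth_estimator = pipeline('depth-estimation', model='LiheYoung/depth-anything-large-hf')
```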
Thank you for your response. My current code isn't working. Could you update the README for ControlNet with a working example (code and images) to help me debug?
Here is my code:

```python
from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from PIL import Image
import numpy as np
import torch
from diffusers.utils import load_image

# Paths
base_model_path = "runwayml/stable-diffusion-v1-5"
controlnet_path = "controlnet"  # Use a ControlNet model suited for depth

# Step 1: Estimate depth of the input image
depth_estimator = pipeline('depth-estimation')

# Load the input image
input_image_path = "bdd100k/images/100k/train_day/0a0a0b1a-7c39d841.jpg"
control_image = load_image(input_image_path)

# Estimate depth
depth_map = depth_estimator(control_image)['depth']
depth_map = np.array(depth_map)
depth_map = depth_map[:, :, None]  # Add a channel dimension
depth_map = np.concatenate([depth_map, depth_map, depth_map], axis=2)  # Make it 3-channel
depth_map = Image.fromarray(depth_map)

# Step 2: Set up the ControlNet pipeline with the depth model
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# Optimize the pipeline
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

# Define the prompt
prompt = "day to night"

# Step 3: Generate the image using the depth map as the control image
generator = torch.manual_seed(0)
output_image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=depth_map
).images[0]

# Save the output image
output_image.save("./output.png")
```
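One thing worth checking in the script above: SD 1.5 and its ControlNets were trained around 512 px, while BDD100K frames are 1280x720, so generating at the control image's native resolution can itself degrade quality. A minimal sketch of downscaling the depth map before the pipeline call (the 512x288 target is my assumption to preserve the 16:9 aspect ratio; any dimensions divisible by 8 work mechanically):

```python
# Downscale the control image toward SD 1.5's training resolution.
# 512x288 keeps BDD100K's 16:9 aspect ratio and is divisible by 8.
depth_map = depth_map.resize((512, 288), Image.BILINEAR)
```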
Here are my original image (left) and result image (right):
I have also tried changing `controlnet_path` to this:

```python
controlnet_path = "lllyasviel/sd-controlnet-depth"
```
This gives the following result, which is still bad.
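With `lllyasviel/sd-controlnet-depth`, one more thing to rule out is the prompt itself: ControlNet text-to-image expects a description of the target scene, and "day to night" reads as an instruction rather than a description. A minimal diagnostic sketch, assuming the pipeline from the script above (the prompt text and conditioning scale below are illustrative placeholders, not prescribed values):

```python
# Save the conditioning image to confirm the depth map looks sensible.
depth_map.save("./depth_control.png")

# Describe the desired scene instead of issuing an instruction.
prompt = "a city street at night, headlights and streetlamps, photorealistic"
output_image = pipe(
    prompt,
    image=depth_map,
    num_inference_steps=20,
    generator=torch.manual_seed(0),
    controlnet_conditioning_scale=1.0,  # try values in [0.5, 1.5]
).images[0]
output_image.save("./output_night.png")
```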
Here is the inference script I used for ControlNet image-to-image translation. Note that I already downloaded your `config.json` and `diffusion_pytorch_model.safetensors` and put them into `controlnet`. However, the result is very bad (screenshot below).

![image](https://github.com/LiheYoung/Depth-Anything/assets/69662345/32323509-7685-40ce-9052-fbfd58701aa5)
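For anyone reproducing this setup, a minimal sketch of pulling those two files into a local `controlnet/` directory with `huggingface_hub` (the `repo_id` below is a placeholder; point it at the repo that actually hosts the checkpoint):

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id -- substitute the repo hosting the ControlNet weights.
for filename in ("config.json", "diffusion_pytorch_model.safetensors"):
    hf_hub_download(
        repo_id="lllyasviel/sd-controlnet-depth",
        filename=filename,
        local_dir="controlnet",
    )
```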