LiheYoung / Depth-Anything

[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
https://depth-anything.github.io
Apache License 2.0

ControlNet result very bad. #203

ShenZheng2000 opened this issue 2 weeks ago

ShenZheng2000 commented 2 weeks ago

Here is the inference script I used for ControlNet image-to-image translation. Note that I have already downloaded your config.json and diffusion_pytorch_model.safetensors and put them into the controlnet directory.

from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import torch

base_model_path = "runwayml/stable-diffusion-v1-5"
controlnet_path = "controlnet"

controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# remove the following line if xformers is not installed or when using Torch 2.0
# pipe.enable_xformers_memory_efficient_attention()  # NOTE: commented out for now because torch < 2.0
# memory optimization
pipe.enable_model_cpu_offload()

control_image = load_image("bdd100k/images/100k/train_day/0a0a0b1a-7c39d841.jpg")
# prompt = "turn this into a night driving scene"
prompt = "day to night"

# generate image
generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=control_image
).images[0]
image.save("./output.png")

However, the result is very bad (screenshot below).

starrywintersky commented 2 weeks ago

It seems that you need to compute a depth map rather than feed the raw image directly into the ControlNet pipeline:

from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from PIL import Image
import numpy as np
import torch
from diffusers.utils import load_image

depth_estimator = pipeline('depth-estimation')

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")

# estimate depth, then turn the single-channel map into a 3-channel control image
image = depth_estimator(image)['depth']
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    safety_checker=None, torch_dtype=torch.float16
)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

image = pipe("Stormtrooper's lecture", image, num_inference_steps=20).images[0]
image.save('./images/stormtrooper_depth_out.png')
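Since this issue is on the Depth Anything repo, here is a minimal sketch of the same depth step using a Depth Anything checkpoint instead of the default estimator (the model id LiheYoung/depth-anything-small-hf and transformers support for it are assumptions on my part; use whichever checkpoint you have):

from transformers import pipeline
from PIL import Image
import numpy as np
from diffusers.utils import load_image

# assumed checkpoint; other Depth Anything HF checkpoints should work the same way
depth_estimator = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")
depth = np.array(depth_estimator(image)["depth"])        # single-channel PIL image -> array
depth = np.concatenate([depth[:, :, None]] * 3, axis=2)  # replicate to 3 channels
control_image = Image.fromarray(depth)                   # pass this as image= to the pipeline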

ShenZheng2000 commented 2 weeks ago

Thank you for your response. My current code isn't working. Could you update the README for ControlNet with a working example (code and images) to help me debug?

Here is my code

from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from PIL import Image
import numpy as np
import torch
from diffusers.utils import load_image

# Paths
base_model_path = "runwayml/stable-diffusion-v1-5"
controlnet_path = "controlnet"  # Use a ControlNet model suited for depth

# Step 1: Estimate depth of the input image
depth_estimator = pipeline('depth-estimation')

# Load the input image
input_image_path = "bdd100k/images/100k/train_day/0a0a0b1a-7c39d841.jpg"
control_image = load_image(input_image_path)

# Estimate depth
depth_map = depth_estimator(control_image)['depth']
depth_map = np.array(depth_map)
depth_map = depth_map[:, :, None]  # Add a channel dimension
depth_map = np.concatenate([depth_map, depth_map, depth_map], axis=2)  # Make it 3-channel
depth_map = Image.fromarray(depth_map)

# Step 2: Set up the ControlNet pipeline with the depth model
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# Optimize the pipeline
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

# Define the prompt
prompt = "day to night"

# Step 3: Generate the image using the depth map as the control image
generator = torch.manual_seed(0)
output_image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=depth_map
).images[0]

# Save the output image
output_image.save("./output.png")

Here are my original image (left) and result image (right).

I have also tried changing controlnet_path to this:

controlnet_path = "lllyasviel/sd-controlnet-depth"

and got the following result, which is still bad.
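One more thing worth checking, purely as a suggestion: Stable Diffusion 1.5 is not an instruction-following model, so a prompt like "day to night" is read as a caption, not a command, while the depth ControlNet only constrains the layout. Continuing from the script above (pipe, generator, and depth_map are defined there), a sketch with a descriptive scene prompt; the exact wording is my assumption:

# sketch: describe the target scene instead of giving an editing instruction
prompt = "a photo of a city street at night, streetlights, headlights, dark sky"
negative_prompt = "daytime, sun, overexposed, low quality"

output_image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=20,
    generator=generator,
    image=depth_map,
).images[0]
output_image.save("./output_night.png")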