Closed pragyavaishanav closed 7 months ago
I found the solution: the scheduler override doesn't work when you use depth maps. Use a different scheduler or just comment that line out and it will work.
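For context on what that scheduler line changes: `timestep_spacing="trailing"` controls which diffusion timesteps the sampler visits. A rough numpy sketch of the spacing logic, simplified from diffusers' `set_timesteps` (the `get_timesteps` helper here is hypothetical and `steps_offset` is ignored):

```python
import numpy as np

# Simplified sketch of how diffusers' EulerAncestralDiscreteScheduler
# spaces timesteps for "trailing" vs. "leading" (steps_offset omitted).
def get_timesteps(num_inference_steps, num_train_timesteps=1000, spacing="trailing"):
    if spacing == "trailing":
        step = num_train_timesteps / num_inference_steps
        # counts down from the final training step, so t=999 is included
        return np.arange(num_train_timesteps, 0, -step).round() - 1
    elif spacing == "leading":
        step = num_train_timesteps // num_inference_steps
        # spaced up from t=0, so the noisiest step falls short of 999
        return (np.arange(0, num_inference_steps) * step)[::-1]
    raise ValueError(f"unknown spacing: {spacing}")

trailing = get_timesteps(35, spacing="trailing")
leading = get_timesteps(35, spacing="leading")
print(trailing[0], leading[0])  # trailing starts at 999, leading at 952
```

The practical difference: with "trailing" the first (noisiest) sampling step is the final training timestep, which is what the zero123plus snippet relies on; commenting the override out falls back to whatever spacing the pipeline's default scheduler config uses.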
What version of diffusers are you using? The snippet works on my side.
Hello Authors, first of all thanks so much for the amazing work. I am trying to run the zero123plus pipeline with the depth ControlNet. When I pass `depth_image`, the input image's pixels/colors are completely disregarded by the model. The model still generates (roughly) consistent 6 images with respect to depth, but the initial input image's colors are nowhere to be seen. I tested with your blue and yellow kid chair and one of my own test models. I am sharing the pipeline code as well as the input and output results. It'd be a great help if you could look into this.

```python
import torch
from diffusers import ControlNetModel, DiffusionPipeline, EulerAncestralDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.1",
    custom_pipeline="sudo-ai/zero123plus-pipeline",
    torch_dtype=torch.float16,
)
pipeline.add_controlnet(
    ControlNetModel.from_pretrained(
        "sudo-ai/controlnet-zp11-depth-v1", torch_dtype=torch.float16
    ),
    conditioning_scale=0.75,
)
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)
pipeline.to("cuda:0")

# cImage: input conditioning image, dImage: depth map
result = pipeline(cImage, depth_image=dImage, num_inference_steps=35).images[0]
```