xxlong0 / Wonder3D

Single Image to 3D using Cross-Domain Diffusion for 3D Generation
https://www.xxlong.site/Wonder3D/
GNU Affero General Public License v3.0

About camera_embedding at inference #167

Open MeixiChenTracy opened 2 months ago


Thank you for your great work! I am trying to change the camera poses at inference time by passing a different camera_embedding to the pipeline. This gave me some unexpected results, so I tested the camera_embedding parameter a bit more and found further unexpected behaviour:

First, I passed in the default camera_embedding from mvdiffusion/pipelines/pipeline_mvdiffusion_image.py, and it works fine:

Code:

camera_embedding = torch.tensor(
            [[ 0.0000,  0.0000,  0.0000,  1.0000,  0.0000],
            [ 0.0000, -0.2362,  0.8125,  1.0000,  0.0000],
            [ 0.0000, -0.1686,  1.6934,  1.0000,  0.0000],
            [ 0.0000,  0.5220,  3.1406,  1.0000,  0.0000],
            [ 0.0000,  0.6904,  4.8359,  1.0000,  0.0000],
            [ 0.0000,  0.3733,  5.5859,  1.0000,  0.0000],
            [ 0.0000,  0.0000,  0.0000,  0.0000,  1.0000],
            [ 0.0000, -0.2362,  0.8125,  0.0000,  1.0000],
            [ 0.0000, -0.1686,  1.6934,  0.0000,  1.0000],
            [ 0.0000,  0.5220,  3.1406,  0.0000,  1.0000],
            [ 0.0000,  0.6904,  4.8359,  0.0000,  1.0000],
            [ 0.0000,  0.3733,  5.5859,  0.0000,  1.0000]], dtype=torch.float16)

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=camera_embedding,
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_orig.png')

Result: result_cameraemb_orig
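For reference, the same tensor can also be built programmatically instead of typed out row by row. This is only a sketch under my reading of the default values (my guess is that the five columns are [condition elevation, per-view elevation, azimuth in radians, normal-domain flag, color-domain flag], with the six views repeated once per domain) — please correct me if the column semantics differ:

```python
import torch

# Assumed column layout (a guess from the default tensor above):
# [cond_elevation, elevation, azimuth_rad, is_normal, is_color]
elevations = [0.0000, -0.2362, -0.1686, 0.5220, 0.6904, 0.3733]
azimuths   = [0.0000,  0.8125,  1.6934, 3.1406, 4.8359, 5.5859]

views = torch.tensor([[0.0, e, a] for e, a in zip(elevations, azimuths)],
                     dtype=torch.float16)                    # (6, 3)
normal_flag = torch.tensor([1.0, 0.0], dtype=torch.float16)  # normal domain
color_flag  = torch.tensor([0.0, 1.0], dtype=torch.float16)  # color domain

camera_embedding = torch.cat([
    torch.cat([views, normal_flag.expand(6, 2)], dim=-1),    # 6 normal views
    torch.cat([views, color_flag.expand(6, 2)], dim=-1),     # 6 color views
], dim=0)                                                    # (12, 5)
```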

Then I changed camera_embedding to the following, expecting to see the first image repeated 12 times identically — but got this different image instead:

Code:

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=torch.stack([
        camera_embedding[0], camera_embedding[0], camera_embedding[0], 
        camera_embedding[0], camera_embedding[0], camera_embedding[0], 
        camera_embedding[0], camera_embedding[0], camera_embedding[0], 
        camera_embedding[0], camera_embedding[0], camera_embedding[0], ]),
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_0.png')

Result: result_cameraemb_0
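As a side note, twelve copies of one row can be written more compactly with repeat; the following (shown on a small stand-in tensor) builds the same tensor as the torch.stack call above:

```python
import torch

# Stand-in for the full (12, 5) embedding; any rows work for the comparison.
camera_embedding = torch.tensor([[0.0,  0.0000, 0.0000, 1.0, 0.0],
                                 [0.0, -0.2362, 0.8125, 1.0, 0.0]],
                                dtype=torch.float16)

stacked  = torch.stack([camera_embedding[0]] * 12)   # 12 explicit copies
repeated = camera_embedding[0:1].repeat(12, 1)       # same rows in one call

assert torch.equal(stacked, repeated)
```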

I also tried some other combinations, and the results are worse:

Code:

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=torch.stack([
        camera_embedding[0], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], ]),
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_01.png')

Result: result_cameraemb_01

Code:

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=torch.stack([
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], ]),
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_1.png')

Result: result_cameraemb_1

Is this behaviour expected? Does it mean we are not supposed to change camera_embedding, and that the output of the inference step MUST be the 6 normal-map poses plus the 6 RGB poses?

P.S. My full inference script is as follows:

import torch
import requests
import random
from PIL import Image
import numpy as np
from torchvision.utils import make_grid, save_image
from diffusers import DiffusionPipeline

def seed_everything(seed=42):
    """
    Seed everything to make sure results are reproducible.
    """
    random.seed(seed)      
    np.random.seed(seed)     
    torch.manual_seed(seed)  
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

def load_wonder3d_pipeline():
    pipeline = DiffusionPipeline.from_pretrained(
        'flamehaze1115/wonder3d-v1.0',
        custom_pipeline='flamehaze1115/wonder3d-pipeline',
        torch_dtype=torch.float16
    )
    pipeline.unet.enable_xformers_memory_efficient_attention()
    if torch.cuda.is_available():
        pipeline.to('cuda:0')
    return pipeline

pipeline = load_wonder3d_pipeline()

cond = Image.open(requests.get("https://d.skis.ltd/nrp/sample-data/lysol.png", stream=True).raw)
cond = Image.fromarray(np.array(cond)[:, :, :3])

camera_embedding = torch.tensor(
            [[ 0.0000,  0.0000,  0.0000,  1.0000,  0.0000],
            [ 0.0000, -0.2362,  0.8125,  1.0000,  0.0000],
            [ 0.0000, -0.1686,  1.6934,  1.0000,  0.0000],
            [ 0.0000,  0.5220,  3.1406,  1.0000,  0.0000],
            [ 0.0000,  0.6904,  4.8359,  1.0000,  0.0000],
            [ 0.0000,  0.3733,  5.5859,  1.0000,  0.0000],
            [ 0.0000,  0.0000,  0.0000,  0.0000,  1.0000],
            [ 0.0000, -0.2362,  0.8125,  0.0000,  1.0000],
            [ 0.0000, -0.1686,  1.6934,  0.0000,  1.0000],
            [ 0.0000,  0.5220,  3.1406,  0.0000,  1.0000],
            [ 0.0000,  0.6904,  4.8359,  0.0000,  1.0000],
            [ 0.0000,  0.3733,  5.5859,  0.0000,  1.0000]], dtype=torch.float16)

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=camera_embedding,
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_orig.png')

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=torch.stack([
        camera_embedding[0], camera_embedding[0], camera_embedding[0], 
        camera_embedding[0], camera_embedding[0], camera_embedding[0], 
        camera_embedding[0], camera_embedding[0], camera_embedding[0], 
        camera_embedding[0], camera_embedding[0], camera_embedding[0], ]),
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_0.png')

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=torch.stack([
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], ]),
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_1.png')

seed_everything(0)
images = pipeline(
    cond, 
    num_inference_steps=20, 
    output_type='pt', 
    camera_embedding=torch.stack([
        camera_embedding[0], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], 
        camera_embedding[1], camera_embedding[1], camera_embedding[1], ]),
    guidance_scale=7.5).images
result = make_grid(images, nrow=6, padding=0, value_range=(0, 1))
save_image(result, 'result_cameraemb_01.png')

Thank you so much! I look forward to your kind reply.