When I run the command "python main2_4d.py --config configs/4d_svd.yaml input=data/anya_rgba.png save_path=anya",
the following error occurs:
File "/mnt/petrelfs/yangshuai/4drender/dreamgaussian4d/main2_4d.py", line 421, in <module>
gui.train(opt.iters_refine)
File "/mnt/petrelfs/yangshuai/4drender/dreamgaussian4d/main2_4d.py", line 367, in train
self.train_step()
File "/mnt/petrelfs/yangshuai/4drender/dreamgaussian4d/main2_4d.py", line 285, in train_step
refined_images = self.guidance_svd.refine(images, strength=strength).float()
File "/mnt/petrelfs/yangshuai/4drender/dreamgaussian4d/guidance/svd_utils.py", line 91, in refine
target = self.pipe(
File "/mnt/petrelfs/yangshuai/anaconda3/envs/threestudio/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
TypeError: __call__() got an unexpected keyword argument 'denoise_beg'
I checked the arguments of StableVideoDiffusionPipeline's __call__ in diffusers and found no keyword argument 'denoise_beg'. I wonder if the pretrained model (or the pipeline it expects) is different?
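One quick way to confirm whether the installed pipeline actually supports the keyword is to inspect its call signature. Below is a minimal sketch; `accepts_kwarg` and the stand-in `stock_call` are illustrative helpers, not code from the repo. With diffusers installed, you would pass `StableVideoDiffusionPipeline.__call__` instead of the stand-in:

```python
import inspect

def accepts_kwarg(fn, name):
    """Return True if the callable accepts `name` as a keyword argument."""
    try:
        params = inspect.signature(fn).parameters
    except (TypeError, ValueError):
        return False
    if name in params:
        return True
    # a **kwargs parameter would also accept any keyword
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())

# Stand-in for the stock pipeline's __call__ signature (illustrative only).
def stock_call(self, image, num_frames=14, num_inference_steps=25):
    pass

print(accepts_kwarg(stock_call, "num_frames"))   # True
print(accepts_kwarg(stock_call, "denoise_beg"))  # False
```

If this check returns False for 'denoise_beg' in your environment, the installed diffusers package is not the modified one the code expects, which would explain the TypeError.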
(original pipe-loading code:
_pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid",
    torch_dtype=torch.float16,
    variant="fp16",
))