XuejiFang closed this issue 1 year ago.
Are you running the code on the GPUs?
Yes, I ran the code on an RTX 3090 GPU, and the dashboard showed that the GPU was being used.
I haven't verified the code on a 3090 GPU, but you can debug it in the following ways: 1) check whether the torch version is `1.12.0+cu113`; 2) if yes, check whether `torch.cuda.is_available()` returns `True`; 3) then check the devices of the tensors `t` and `x` in the line `return tensor[t].view(shape).to(x)`, and debug where the error occurs.
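For step 3, a quick check like the following could be dropped in just before the failing line (the variable names mirror those in `_i`; the values here are stand-ins, not the repo's actual buffers):

```python
import torch

# Stand-ins for the variables inside _i (tensor = a diffusion coefficient
# buffer, t = timestep indices, x = the latent being denoised):
device = "cuda" if torch.cuda.is_available() else "cpu"
tensor = torch.linspace(1e-4, 2e-2, 1000)        # buffers are typically created on CPU
t = torch.tensor([10, 500], device=device)
x = torch.randn(2, 4, 8, 8, device=device)

# Print the devices; if tensor.device differs from t.device, the indexing
# tensor[t] is the cross-device operation that raises the RuntimeError.
print(tensor.device, t.device, x.device)
```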
`return tensor.to(x)[t].view(shape)` fixes it for me...
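For reference, here is a minimal sketch of the helper with that one-line fix applied (the `shape` computation is reconstructed as a broadcastable `(B, 1, 1, ...)` shape and may differ from the repo's exact code):

```python
import torch

def _i(tensor, t, x):
    """Gather per-timestep coefficients and reshape them to broadcast against x."""
    shape = (x.size(0),) + (1,) * (x.ndim - 1)
    # Move the coefficient table to x's device (and dtype) *before* indexing,
    # so the index t and the indexed tensor are on the same device.
    return tensor.to(x)[t].view(shape)

device = "cuda" if torch.cuda.is_available() else "cpu"
coeffs = torch.linspace(1e-4, 2e-2, 1000)         # CPU buffer, e.g. posterior_variance
t = torch.randint(0, 1000, (2,), device=device)   # sampled timesteps
x = torch.randn(2, 4, 16, 16, device=device)      # latent batch
out = _i(coeffs, t, x)
print(out.shape, out.device)                      # (2, 1, 1, 1) on x's device
```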
That works, thank you very much!
I configured the corresponding environment and ran the code, but I got the following error, which I don't understand, since I didn't modify your code and followed the README.md documentation.
```
[2023-06-20 10:30:44,328] INFO: Created a model with 1347M parameters
[mpeg4 @ 0xeeaa0c40] Application has requested 128 threads. Using a thread count greater than 16 is not recommended.
/root/miniconda3/envs/VideoComposer/lib/python3.8/site-packages/torchvision/transforms/functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
  warnings.warn(
[2023-06-20 10:30:47,417] INFO: GPU Memory used 12.86 GB
Traceback (most recent call last):
  File "run_net.py", line 36, in <module>
    main()
  File "run_net.py", line 31, in main
    inference_single(cfg.cfg_dict)
  File "/root/autodl-tmp/videocomposer/tools/videocomposer/inference_single.py", line 351, in inference_single
    worker(0, cfg)
  File "/root/autodl-tmp/videocomposer/tools/videocomposer/inference_single.py", line 695, in worker
    video_output = diffusion.ddim_sample_loop(
  File "/root/miniconda3/envs/VideoComposer/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/autodl-tmp/videocomposer/artist/ops/diffusion.py", line 236, in ddim_sample_loop
    xt, _ = self.ddim_sample(xt, t, model, model_kwargs, clamp, percentile, condition_fn, guide_scale, ddim_timesteps, eta)
  File "/root/miniconda3/envs/VideoComposer/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/autodl-tmp/videocomposer/artist/ops/diffusion.py", line 200, in ddim_sample
    _, _, _, x0 = self.p_mean_variance(xt, t, model, model_kwargs, clamp, percentile, guide_scale)
  File "/root/autodl-tmp/videocomposer/artist/ops/diffusion.py", line 166, in p_mean_variance
    var = _i(self.posterior_variance, t, xt)
  File "/root/autodl-tmp/videocomposer/artist/ops/diffusion.py", line 13, in _i
    return tensor[t].view(shape).to(x)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
```
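Reading the last two frames: `self.posterior_variance` (the `tensor` argument of `_i`) lives on CPU, while the timestep index `t` is on CUDA, and recent PyTorch builds refuse to index a CPU tensor with a CUDA index. A minimal sketch of the mismatch (the CUDA branch only runs when a GPU is present):

```python
import torch

table = torch.linspace(0.0, 1.0, 10)    # CPU buffer, like self.posterior_variance

if torch.cuda.is_available():
    t = torch.tensor([3], device="cuda")
    try:
        table[t]                        # CPU tensor indexed by a CUDA index
    except RuntimeError as e:
        print("mismatch:", e)
    # Consistent alternative: move the table to the index's device first.
    print(table.to(t.device)[t].device)
else:
    t = torch.tensor([3])
    print(table[t])                     # both on CPU: indexing is fine
```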