Vchitect / Latte

Latte: Latent Diffusion Transformer for Video Generation.
Apache License 2.0

run bash sample/t2v.sh error #27

Closed afezeriaWrnbbmm closed 4 months ago

afezeriaWrnbbmm commented 4 months ago

When running `bash sample/t2v.sh`, I get the error: "Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference." Where should I set it so that the pipeline uses the GPU?

maxin-cn commented 4 months ago

> When running `bash sample/t2v.sh`, I get the error: "Pipelines loaded with dtype=torch.float16 cannot run with cpu device. [...] Please, remove the torch_dtype=torch.float16 argument, or use another device for inference." Where should I set it so that the pipeline uses the GPU?

The code automatically detects whether a usable GPU is available on your machine. If it falls back to the CPU, please check your environment. See here: https://github.com/Vchitect/Latte/blob/dafec01d3cd915f4178a486cd8e2bf51650193db/sample/sample_t2v.py#L27
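The device-selection logic the maintainer points at can be sketched as below. This is a minimal illustration, not the exact code from `sample_t2v.py`; `pick_device` is a hypothetical helper that mirrors the usual `torch.cuda.is_available()` check, and the `torch` import is guarded so the sketch runs even on a machine without PyTorch installed.

```python
def pick_device(cuda_available: bool) -> str:
    """Prefer the GPU when CUDA is usable, otherwise fall back to CPU."""
    return "cuda" if cuda_available else "cpu"

try:
    import torch
    # This is the check the sampling script relies on: if it returns
    # False, the pipeline is placed on the CPU and fp16 inference fails.
    device = pick_device(torch.cuda.is_available())
except ImportError:
    device = pick_device(False)

print(device)
```

If this prints `cpu` on a GPU machine, the problem is usually the PyTorch install (e.g. a CPU-only build) rather than the Latte code.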

afezeriaWrnbbmm commented 4 months ago

(latte) root@nvidia3090:/home/nvidia3090/Latte/sample# nvidia-smi
Fri Feb 23 12:49:59 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.154.05             Driver Version: 535.154.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        Off | 00000000:04:00.0 Off |                  N/A |
|  0%    5C    P8              16W / 350W |   2816MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090        Off | 00000000:06:00.0 Off |                  N/A |
|  0%    8C    P8              28W / 350W |   2788MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce RTX 3090        Off | 00000000:07:00.0 Off |                  N/A |
|  0%    8C    P8              24W / 350W |   2770MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce RTX 3090        Off | 00000000:0C:00.0 Off |                  N/A |
|  0%    8C    P8              21W / 370W |   2786MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA GeForce RTX 3090        Off | 00000000:0D:00.0 Off |                  N/A |
|  0%    8C    P8              21W / 350W |   2776MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA GeForce RTX 3090        Off | 00000000:0E:00.0 Off |                  N/A |
|  0%    8C    P8              23W / 420W |   2786MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   1210829      C   python                                     2810MiB |
|    1   N/A  N/A   1210766      C   python                                     2782MiB |
|    2   N/A  N/A   1210696      C   python                                     2764MiB |
|    3   N/A  N/A   1210629      C   python                                     2780MiB |
|    4   N/A  N/A   1210479      C   python                                     2770MiB |
|    5   N/A  N/A   1210343      C   python                                     2780MiB |
+---------------------------------------------------------------------------------------+

My GPU should be fine...

maxin-cn commented 4 months ago

Could you run torch.cuda.is_available() to check your environment?
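The suggested check can be run as a small standalone script. This is a sketch, not part of the Latte codebase; `cuda_report` is a hypothetical helper, and the import guard lets it report a missing install instead of crashing.

```python
def cuda_report() -> str:
    """Return a one-line summary of the local torch/CUDA setup."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this environment"
    # A CPU-only torch build reports False here even when nvidia-smi
    # shows healthy GPUs, which is a common cause of this error.
    return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"

print(cuda_report())
```

If CUDA is reported as unavailable despite working GPUs, reinstalling PyTorch with a CUDA build that matches the driver is a typical fix.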

afezeriaWrnbbmm commented 4 months ago

Thank you for the explanation. I have already solved the problem.