Open Pythonpa opened 5 days ago
Hi, this may be because your machine does not have enough memory. You can use htop to monitor memory usage while inference runs.
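If you prefer to log memory from a script instead of watching htop interactively, here is a minimal sketch (Linux only, standard `/proc/meminfo` fields, no third-party packages) that reports total and available RAM, the same numbers htop shows:

```python
def read_meminfo(path="/proc/meminfo"):
    """Return /proc/meminfo fields as {name: kilobytes}."""
    info = {}
    with open(path) as f:
        for line in f:
            name, value = line.split(":", 1)
            # values look like "32000000 kB"; keep just the number
            info[name] = int(value.strip().split()[0])
    return info

if __name__ == "__main__":
    info = read_meminfo()
    total_gib = info["MemTotal"] / 1024 / 1024
    avail_gib = info["MemAvailable"] / 1024 / 1024
    print(f"RAM: {avail_gib:.1f} GiB available of {total_gib:.1f} GiB")
```

You can call `read_meminfo()` periodically during inference to see how close the process gets to exhausting RAM before it is killed.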
Got it, I'll give it another try.
Hello, I built a Docker image. When the container is running, `python inference.py --cfg configs/UniAnimate_infer.yaml` executes normally and produces the final driven video. ![image](https://github.com/ali-vilab/UniAnimate/assets/16030016/d3afe2c7-f28f-4ecb-b2b2-d3bbee8b4510)
However, when I run the long-video config with `python inference.py --cfg configs/UniAnimate_infer_long.yaml`, the process is killed right after the model finishes loading. What could be the cause? Am I hitting a memory bottleneck? Both yaml configs use a resolution of [512, 768], the same reference image, and the same motion sequence. My GPU is a 3090 with 24G VRAM.
My base image is Ubuntu 22.04 with CUDA 12.1 and PyTorch 2.2.2.
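A process that is "Killed" right after model loading, rather than raising a CUDA out-of-memory error, usually points to the Linux OOM killer (host RAM or a Docker cgroup memory limit) rather than the 24G of VRAM. Since you are inside a container, it may be worth checking whether a cgroup limit is in effect. Below is a minimal sketch, assuming the standard cgroup v1/v2 file locations on Linux:

```python
import os

# Standard kernel interfaces for the container's memory limit
CGROUP_LIMIT_FILES = [
    "/sys/fs/cgroup/memory.max",                    # cgroup v2
    "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
]

def cgroup_memory_limit():
    """Return the memory limit in bytes, or None if unlimited/unknown."""
    for path in CGROUP_LIMIT_FILES:
        if os.path.exists(path):
            raw = open(path).read().strip()
            if raw == "max":        # cgroup v2: no limit configured
                return None
            limit = int(raw)
            if limit >= 1 << 60:    # cgroup v1 sentinel for "unlimited"
                return None
            return limit
    return None

if __name__ == "__main__":
    limit = cgroup_memory_limit()
    if limit is None:
        print("No container memory limit detected")
    else:
        print(f"Container memory limit: {limit / 2**30:.1f} GiB")
```

If a limit shows up, raising it (e.g. via Docker's `--memory` flag when starting the container) or adding swap may let the long-video config finish; the host's `dmesg` output will also record any OOM-killer events.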