AiuniAI / Unique3D

Official implementation of Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image
https://wukailu.github.io/Unique3D/
MIT License
2.82k stars 215 forks

CUDA out of memory when running `python app/gradio_local.py --port 7860` #74

Open KiriuYamato opened 1 month ago

KiriuYamato commented 1 month ago

Running `python app/gradio_local.py --port 7860` fails with CUDA out of memory. My GPU is a 3080 (10 GB), on Ubuntu 22.04 with CUDA 12.1 and Python 3.11; the rest of the environment follows the requirements file.

I searched the issues and found the reference in #53, but my CUDA out of memory does not occur during mesh generation; it happens while running `python app/gradio_local.py --port 7860`. My guess is that the overflow happens while the models are being loaded.

#53 used a 3060 (12 GB), so I am not sure whether 10 GB meets the minimum requirement.

Can this run locally on a 3080 (10 GB)? Do you have any suggestions?
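One thing worth trying on a tight-memory card (my own suggestion, not something the maintainers have confirmed for this project) is PyTorch's allocator tuning. `PYTORCH_CUDA_ALLOC_CONF` is a documented PyTorch environment variable, and its `expandable_segments` option (PyTorch >= 2.0) can reduce fragmentation-related OOMs. A minimal sketch:

```python
import os

# Must be set before torch initializes CUDA, e.g. at the very top of
# app/gradio_local.py or in the shell before launching the app.
# expandable_segments (PyTorch >= 2.0) lets the caching allocator grow
# existing segments instead of failing when free memory is fragmented.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```

The shell equivalent is `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python app/gradio_local.py --port 7860`. Note this only mitigates fragmentation; if the loaded models plus working buffers genuinely exceed 10 GB, it will not help.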

NytePlus commented 1 month ago

+1, this also happens on a 4090 (24 GB). Can this project currently run on multiple GPUs? I noticed gradio_local.py defaults to running on cuda:0.

Traceback (most recent call last):
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/gradio/queueing.py", line 521, in process_events
    response = await route_utils.call_process_api(
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/gradio/blocks.py", line 1945, in process_api
    result = await self.call_function(
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/gradio/blocks.py", line 1513, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/gradio/utils.py", line 831, in wrapper
    response = f(*args, **kwargs)
  File "/data1/wcc/Unique3D/./app/gradio_3dgen.py", line 21, in generate3dv2
    new_meshes = geo_reconstruct(rgb_pils, None, front_pil, do_refine=do_refine, predict_normal=True, expansion_weight=expansion_weight, init_type=init_type)
  File "/data1/wcc/Unique3D/./scripts/multiview_inference.py", line 95, in geo_reconstruct
    vertices, faces = run_mesh_refine(vertices, faces, rm_normals, steps=100, start_edge_len=0.02, end_edge_len=0.005, decay=0.99, update_normal_interval=20, update_warmup=5, return_mesh=False, process_inputs=False, process_outputs=False)
  File "/data1/wcc/Unique3D/./mesh_reconstruction/refine.py", line 52, in run_mesh_refine
    debug_images = renderer.render(vertices,target_normal,faces)
  File "/data1/wcc/Unique3D/./mesh_reconstruction/render.py", line 52, in render
    col = dr.antialias(col, rast_out, vertices_clip, faces) #C,H,W,4
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/nvdiffrast/torch/ops.py", line 702, in antialias
    return _antialias_func.apply(color, rast, pos, tri, topology_hash, pos_gradient_boost)
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/wcc/anaconda3/envs/unique3d/lib/python3.10/site-packages/nvdiffrast/torch/ops.py", line 650, in forward
    out, work_buffer = _get_plugin().antialias_fwd(color, rast, pos, tri, topology_hash)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 514.00 MiB. GPU
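On the multi-GPU question: whether the pipeline can shard across cards is unclear from the source, but you can at least choose which physical GPU serves as cuda:0. `CUDA_VISIBLE_DEVICES` is the standard CUDA mechanism for this: the listed device is remapped to logical index 0, so code hard-coded to cuda:0 (as gradio_local.py reportedly is) lands on it. A sketch, assuming you want physical GPU 1:

```python
import os

# Set before torch (or any CUDA library) is imported. CUDA remaps the
# listed physical device to logical index 0, so "cuda:0" in the app
# code now refers to physical GPU 1.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```

The shell equivalent is `CUDA_VISIBLE_DEVICES=1 python app/gradio_local.py --port 7860`.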

wukailu commented 1 month ago

10 GB may be challenging; I am not sure it can run. A 4090 with 24 GB should work, though: the current gradio demo runs on a 4090 and is very stable as long as the input image resolution is not too high.
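Since the maintainer notes that input resolution drives memory use, downscaling the input image before uploading it is a cheap mitigation. The helper below is hypothetical (`shrink_for_vram` and the 512-pixel cap are my own choices, not part of Unique3D):

```python
from PIL import Image

def shrink_for_vram(img: Image.Image, max_side: int = 512) -> Image.Image:
    """Downscale so the longest side is at most max_side, keeping aspect ratio."""
    w, h = img.size
    if max(w, h) <= max_side:
        return img  # already small enough, no resampling needed
    scale = max_side / max(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
```

For example, a 2048x1024 input becomes 512x256; Unique3D then sees a quarter-resolution image, which should lower peak VRAM during the multiview and refinement stages.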