TencentARC / InstantMesh

InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models
Apache License 2.0

INTERNAL ASSERT FAILED after generating multiviews #97

Open CultureAddiction opened 5 months ago

CultureAddiction commented 5 months ago


The full error message is below.

This happens when I click the 'Generate' button in Gradio: it successfully produces the "processed image" and the "generated multiviews", but then fails at the next step.

I added some code to print the PyTorch and CUDA versions, and tried a trivial tensor addition to check that torch works ([1, 2, 3] + [4, 5, 6], which correctly returned [5, 7, 9]). So the PyTorch and CUDA versions do not seem to conflict, and CUDA itself appears to be fine.

What could be the problem? I am on a Linux server with RTX 2080 Ti GPUs, running the demo from the README. Does this error happen because I didn't train the model? I don't think so... Or could it be because I'm using an external (public share) link?
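For reference, the kind of sanity check described above can be sketched as follows. This is a minimal sketch, not the exact code from the issue; it falls back to CPU when CUDA is unavailable so it runs anywhere, and the tensor values match the test mentioned above:

```python
import torch

# Print the versions that matter for CUDA compatibility.
print("PyTorch version:", torch.__version__)
print("CUDA version used by PyTorch:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# Trivial tensor addition on the GPU if possible, else CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.tensor([1.0, 2.0, 3.0], device=device)
b = torch.tensor([4.0, 5.0, 6.0], device=device)
print(a + b)  # expected values: [5., 7., 9.]
```

A passing check like this only confirms that basic tensor ops work; it does not exercise every CUDA driver API path that a full model forward pass hits.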

(instantmesh) gpuadmin@sg6:~/YYY/low3D/InstantMesh$ nvidia-smi
Thu May 30 22:45:06 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.41.03              Driver Version: 530.41.03    CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf           Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2080 Ti    Off  | 00000000:3B:00.0 Off |                  N/A |
| 24%   26C    P8               1W / 250W |      1MiB / 11264MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 2080 Ti    Off  | 00000000:5E:00.0 Off |                  N/A |
| 25%   26C    P8               1W / 250W |      1MiB / 11264MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce RTX 2080 Ti    Off  | 00000000:D8:00.0 Off |                  N/A |
| 27%   26C    P8              18W / 250W |      1MiB / 11264MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

(instantmesh) gpuadmin@sg6:~/YYY/low3D/InstantMesh$ python app.py
PyTorch version: 2.1.0+cu121
CUDA version used by PyTorch: 12.1
CUDA available: True
tensor([5., 7., 9.], device='cuda:0')
Seed set to 0
Loading diffusion model ...
Loading pipeline components...:  62%| 5/8 [00:00<00:00, 13.48it/s]
The config attributes {'dropout': 0.0, 'reverse_transformer_layers_per_block': None} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading pipeline components...: 100%| 8/8 [00:01<00:00,  6.50it/s]
Loading reconstruction model ...
Some weights of ViTModel were not initialized from the model checkpoint at facebook/dino-vitb16 and are newly initialized: ['encoder.layer.8.adaLN_modulation.1.bias', 'encoder.layer.2.adaLN_modulation.1.bias', 'encoder.layer.10.adaLN_modulation.1.weight', 'encoder.layer.3.adaLN_modulation.1.weight', 'encoder.layer.3.adaLN_modulation.1.bias', 'encoder.layer.1.adaLN_modulation.1.bias', 'encoder.layer.4.adaLN_modulation.1.bias', 'encoder.layer.5.adaLN_modulation.1.weight', 'encoder.layer.6.adaLN_modulation.1.bias', 'encoder.layer.9.adaLN_modulation.1.bias', 'encoder.layer.4.adaLN_modulation.1.weight', 'encoder.layer.10.adaLN_modulation.1.bias', 'encoder.layer.2.adaLN_modulation.1.weight', 'encoder.layer.8.adaLN_modulation.1.weight', 'encoder.layer.5.adaLN_modulation.1.bias', 'encoder.layer.6.adaLN_modulation.1.weight', 'encoder.layer.7.adaLN_modulation.1.bias', 'encoder.layer.9.adaLN_modulation.1.weight', 'encoder.layer.1.adaLN_modulation.1.weight', 'encoder.layer.0.adaLN_modulation.1.bias', 'encoder.layer.0.adaLN_modulation.1.weight', 'encoder.layer.11.adaLN_modulation.1.bias', 'encoder.layer.11.adaLN_modulation.1.weight', 'encoder.layer.7.adaLN_modulation.1.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading Finished!
/home/gpuadmin/YYY/low3D/InstantMesh/app.py:298: GradioUnusedKwargWarning: You have unused kwarg parameters in Image, please remove them: {'sources': 'upload'}
  input_image = gr.Image(
Running on local URL: http://0.0.0.0:43839
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.

Running on public URL: https://422b1701ec826f78e8.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Seed set to 42
100%| 75/75 [00:17<00:00,  4.22it/s]
/tmp/tmpo96fiuyx.obj
  0%| 0/6 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/home/gpuadmin/YYY/low3D/InstantMesh/app.py", line 228, in make3d
    frame = model.forward_geometry(
  File "/home/gpuadmin/YYY/low3D/InstantMesh/src/models/lrm_mesh.py", line 280, in forward_geometry
    mesh_v, mesh_f, sdf, deformation, v_deformed, sdf_reg_loss = self.get_geometry_prediction(planes)
  File "/home/gpuadmin/YYY/low3D/InstantMesh/src/models/lrm_mesh.py", line 165, in get_geometry_prediction
    sdf, deformation, sdf_reg_loss, weight = self.get_sdf_deformation_prediction(planes)
  File "/home/gpuadmin/YYY/low3D/InstantMesh/src/models/lrm_mesh.py", line 110, in get_sdf_deformation_prediction
    sdf, deformation, weight = torch.utils.checkpoint.checkpoint(
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
    return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
    return fn(*args, **kwargs)
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
    return fn(*args, **kwargs)
  File "/home/gpuadmin/.miniconda3/envs/instantmesh/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 458, in checkpoint
    ret = function(*args, **kwargs)
  File "/home/gpuadmin/YYY/low3D/InstantMesh/src/models/renderer/synthesizer_mesh.py", line 132, in get_geometry_prediction
    sdf, deformation, weight = self.decoder.get_geometry_prediction(sampled_features, flexicubes_indices)
  File "/home/gpuadmin/YYY/low3D/InstantMesh/src/models/renderer/synthesizer_mesh.py", line 76, in get_geometry_prediction
    grid_features = torch.index_select(input=sampled_features, index=flexicubes_indices.reshape(-1), dim=1)
RuntimeError: handle_0 INTERNAL ASSERT FAILED at "../c10/cuda/driver_api.cpp":15, please report a bug to PyTorch.
^CKeyboard interruption in main thread... closing server.
Killing tunnel 0.0.0.0:43839 <> https://422b1701ec826f78e8.gradio.live
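The traceback ends inside a `torch.index_select` call on the GPU, but the assert comes from `c10/cuda/driver_api.cpp`, i.e. from the CUDA driver layer rather than from the op's arguments. A minimal CPU-side sketch of the same call shape (with made-up tensor sizes, not the real InstantMesh shapes) can help confirm the op itself is used correctly:

```python
import torch

# Illustrative shapes only: (batch, num_points, channels) features and
# an index tensor mimicking flexicubes_indices.
sampled_features = torch.randn(2, 8, 4)
flexicubes_indices = torch.randint(0, 8, (3, 8))

# Same call shape as in synthesizer_mesh.py: gather along dim=1 with a
# flattened index tensor.
grid_features = torch.index_select(
    input=sampled_features, index=flexicubes_indices.reshape(-1), dim=1
)
print(grid_features.shape)  # torch.Size([2, 24, 4])
```

If this shape of call works on CPU (and on GPU with a toy tensor) while the full pipeline still hits the internal assert, that points at the CUDA driver/runtime state rather than the indexing logic.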

otakudj commented 1 month ago

I have the same error. You can try changing the "sample steps" to ~50; then the error disappears, but the result is not as good. @CultureAddiction
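The workaround above amounts to capping the sampling step count before it reaches the diffusion pipeline. A hypothetical helper (the name `clamp_sample_steps` and the cap of 50 are assumptions from this comment, not part of the InstantMesh code) could look like:

```python
def clamp_sample_steps(requested: int, max_steps: int = 50) -> int:
    """Cap the Gradio 'sample steps' value at max_steps, per the
    workaround reported in this thread (75 steps triggered the crash)."""
    return min(requested, max_steps)

print(clamp_sample_steps(75))  # the failing run used 75 steps -> capped to 50
print(clamp_sample_steps(30))  # values under the cap pass through unchanged
```

Note this only avoids the crash; the underlying CUDA driver assert is not explained by it, and as noted the quality at ~50 steps may be worse.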