[Closed] ZhaoLongjiea closed this issue 1 month ago
There may be an error in one of the models. Can you try running the "run-small-instruct" target in the Makefile?
python3 chat_edit_3D.py --port 7862 --clean_FBends --load "Segmenting_cuda:0,\
ImageCaptioning_cuda:0,VisualQuestionAnswering_cuda:0,Text2Box_cuda:0,\
Inpainting_cuda:0,InstructPix2Pix_cuda:0"
Also, please tell me at which step this occurred: while training the atlas, or while performing edits?
Thank you for your reply. I have solved the problem; the key was the CUDA version. After upgrading CUDA to 11.8, the problem disappeared.
My device parameters are here:
- PyTorch version: 1.13.1+cu117
- CUDA version: 11.7
- CUDA is available: True
- CUDA device count: 1
- Current CUDA device: 0
- Device name: NVIDIA GeForce RTX 4090
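Since the fix turned out to be moving from CUDA 11.7 to 11.8, the mismatch can be caught up front with a simple version comparison. A minimal sketch (the `cuda_version_ok` helper and the "11.8" minimum are my own assumptions, not part of the project; in a live environment the version string would come from `torch.version.cuda`):

```python
# Hypothetical helper: check a CUDA version string against the minimum
# version that resolved this issue (11.8). Pure Python, no torch required.
def cuda_version_ok(version: str, minimum: str = "11.8") -> bool:
    """Return True if `version` (e.g. "11.7") is at least `minimum`."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(version) >= parse(minimum)

# Using the versions from this report directly:
print(cuda_version_ok("11.7"))  # reported setup -> False
print(cuda_version_ok("11.8"))  # after the upgrade -> True
```

Tuple comparison handles the numeric ordering correctly (e.g. "11.10" > "11.8"), which a plain string comparison would get wrong.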
)
  (visual_projection): Linear(in_features=1024, out_features=768, bias=False)
)
We cannot verify whether it has the correct type
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 17.53it/s]
Initializing Object Remove or Replace Editing
All the Available Functions: {'Segmenting': <main.Segmenting object at 0x7fbe101b09d0>, 'ImageCaptioning': <main.ImageCaptioning object at 0x7fbe101b3f40>, 'VisualQuestionAnswering': <main.VisualQuestionAnswering object at 0x7fbdff7d0070>, 'Text2Box': <main.Text2Box object at 0x7fbe0c133010>, 'Inpainting': <main.Inpainting object at 0x7fbdfecffa00>, 'InstructPix2Pix': <main.InstructPix2Pix object at 0x7fbdff7d0130>, 'ObjectSegmenting': <main.ObjectSegmenting object at 0x7fbdf5d40e80>, 'ObjectRemoveOrReplace': <main.ObjectRemoveOrReplace object at 0x7fbdf5d40fd0>, 'BackgroundRemoveOrExtractObject': <main.BackgroundRemoveOrExtractObject object at 0x7fbe0c2c5840>}
To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
    response = await route_utils.call_process_api(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
    result = await self.call_function(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
    return await future
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
    result = context.run(func, *args)
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/home/lzha0538/Inpainting/CE3D/chat_edit_3D.py", line 3021, in run_workspace_init
    foreground = self.models["VisualQuestionAnswering"].inference(
  File "/home/lzha0538/Inpainting/CE3D/chat_edit_3D.py", line 368, in inference
    out = self.model.generate(**inputs)
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/transformers/models/blip/modeling_blip.py", line 1397, in generate
    vision_outputs = self.vision_model(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/transformers/models/blip/modeling_blip.py", line 727, in forward
    encoder_outputs = self.encoder(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/transformers/models/blip/modeling_blip.py", line 666, in forward
    layer_outputs = encoder_layer(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/transformers/models/blip/modeling_blip.py", line 439, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lzha0538/miniconda3/envs/CE3D/lib/python3.10/site-packages/transformers/models/blip/modeling_blip.py", line 368, in forward
    attention_scores = torch.matmul(query_states, key_states.transpose(-1, -2))
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasGemmStridedBatchedExFix( handle, opa, opb, m, n, k, (void*)(&falpha), a, CUDA_R_16F, lda, stridea, b, CUDA_R_16F, ldb, strideb, (void*)(&fbeta), c, CUDA_R_16F, ldc, stridec, num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)`