vietanhdev / anylabeling

Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything, MobileSAM!!
https://anylabeling.nrl.ai
GNU General Public License v3.0

Failed to allocate memory for requested buffer of size #144


Abdulhadiasa commented 10 months ago

While labeling a dataset, I run into the error below whenever I try to add a point for Segment Anything (SAM). I tried the ViT-B, ViT-L, and ViT-H models and all of them fail the same way.

2023-09-05 10:42:40.812917403 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running ConvTranspose node. Name:'/output_upscaling/output_upscaling.0/ConvTranspose' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432

WARNING:root:Could not inference model
WARNING:root:[ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ConvTranspose node. Name:'/output_upscaling/output_upscaling.0/ConvTranspose' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432

Traceback (most recent call last):
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/segment_anything.py", line 232, in predict_shapes
    masks = self.model.predict_masks(image_embedding, self.marks)
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/sam_onnx.py", line 193, in predict_masks
    masks = self.run_decoder(
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/sam_onnx.py", line 126, in run_decoder
    masks, _, _ = self.decoder_session.run(None, decoder_inputs)
  File "/home/abed/Documents/label-tools/anylabeling/Anylabelling/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ConvTranspose node. Name:'/output_upscaling/output_upscaling.0/ConvTranspose' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 33554432
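If this really is GPU memory pressure, one possible workaround (a minimal sketch, not something AnyLabeling exposes out of the box) is to cap the CUDA execution provider's arena or to run the decoder session on CPU. The decoder path below is illustrative; in AnyLabeling the sessions are created inside sam_onnx.py.

import onnxruntime as ort

# Hypothetical path to an exported SAM decoder ONNX file (illustrative only).
decoder_path = "sam_vit_b_decoder.onnx"

# Option 1: keep CUDA, but cap the BFC arena and grow it only by what each
# allocation actually needs, instead of large power-of-two extensions.
cuda_options = {
    "device_id": 0,
    "gpu_mem_limit": 2 * 1024 * 1024 * 1024,      # 2 GiB cap; tune to available headroom
    "arena_extend_strategy": "kSameAsRequested",  # avoid oversized arena growth
}
session = ort.InferenceSession(
    decoder_path,
    providers=[("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"],
)

# Option 2: run the decoder on CPU only. The decoder is small compared to the
# image encoder, so this usually sidesteps the GPU allocation failure.
cpu_session = ort.InferenceSession(decoder_path, providers=["CPUExecutionProvider"])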

nvidia-smi shows that the model has been loaded into GPU memory:

Tue Sep  5 10:49:57 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.05              Driver Version: 535.86.05    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2070 ...    Off | 00000000:01:00.0  On |                  N/A |
|  0%   38C    P8              21W / 215W |   7964MiB /  8192MiB |     15%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A    501464      G   /usr/lib/xorg/Xorg                           22MiB |
|    0   N/A  N/A    501819      G   /usr/lib/xorg/Xorg                           45MiB |
|    0   N/A  N/A   3588976      C   ...anylabeling/Anylabelling/bin/python     7892MiB |
+---------------------------------------------------------------------------------------+
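For context: the card reports 7964 MiB of 8192 MiB in use, so only about 228 MiB is free, while the failing ConvTranspose asks for 33554432 bytes (32 MiB); with arena growth and fragmentation that can still exceed the remaining headroom. A quick way to check free memory from Python before loading the models (a sketch using the pynvml package, which is not part of AnyLabeling) could look like this:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0, the RTX 2070 above
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"free:  {mem.free  / 1024**2:.0f} MiB")
print(f"used:  {mem.used  / 1024**2:.0f} MiB")
print(f"total: {mem.total / 1024**2:.0f} MiB")
pynvml.nvmlShutdown()

Closing the other processes that nvidia-smi lists on the GPU would also free some of that memory before retrying.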