The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Apache License 2.0
RuntimeError: CUDA error: unspecified launch failure #296
Traceback (most recent call last):
File "xxxx/xxxx/data/sam2_video.py", line 70, in <module>
for out_frame_idx, out_obj_ids, out_mask_logits in predictor.propagate_in_video(inference_state):
File "xxxx/miniconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 57, in generator_context
response = gen.send(request)
^^^^^^^^^^^^^^^^^
File "xxxx/segment-anything-2/sam2/sam2_video_predictor.py", line 705, in propagate_in_video
current_out, pred_masks = self._run_single_frame_inference(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxx/segment-anything-2/sam2/sam2_video_predictor.py", line 849, in _run_single_frame_inference
current_out = self.track_step(
^^^^^^^^^^^^^^^^
File "xxxx/segment-anything-2/sam2/modeling/sam2_base.py", line 762, in track_step
sam_outputs = self._forward_sam_heads(
^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxx/segment-anything-2/sam2/modeling/sam2_base.py", line 334, in _forward_sam_heads
sparse_embeddings, dense_embeddings = self.sam_prompt_encoder(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/localscratch2/zzhuang/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/localscratch2/zzhuang/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxx/segment-anything-2/sam2/modeling/sam/prompt_encoder.py", line 169, in forward
point_embeddings = self._embed_points(coords, labels, pad=(boxes is None))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxx/segment-anything-2/sam2/modeling/sam/prompt_encoder.py", line 96, in _embed_points
point_embedding[labels == -1] += self.not_a_point_embed.weight
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
RuntimeError: CUDA error: unspecified launch failure
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
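Since CUDA errors are reported asynchronously, the line flagged in the traceback (`point_embedding[labels == -1] += ...`) may not be the kernel that actually failed. A first debugging step, as the error message suggests, is to force synchronous kernel launches. A minimal sketch (the variable must be set before the first CUDA call, so safest is before importing torch):

```python
import os

# Force synchronous CUDA kernel launches so the Python traceback points at
# the kernel that actually faulted. Must be set before the CUDA context is
# created -- in practice, before importing torch or sam2.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# ...then import torch / sam2 and re-run the video predictor loop as usual.
```

Equivalently, set it on the command line when re-running the failing script, e.g. `CUDA_LAUNCH_BLOCKING=1 python <path-to>/sam2_video.py`. Enabling device-side assertions via `TORCH_USE_CUDA_DSA` gives more detail, but requires a PyTorch build compiled with that flag.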