-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and f…
-
Hi,
Thanks for sharing this work. When I run the vitl example on an A100 GPU, the inference time settles at around 120 ms rather than the 13 ms stated in the repo. Is there a reason…
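One common cause of inflated GPU timings is measuring without warmup iterations or without flushing asynchronous CUDA work before reading the clock. A minimal, framework-agnostic timing sketch (the `sync` hook is where one would pass e.g. `torch.cuda.synchronize`; all names here are illustrative, not from the repo):

```python
import time

def benchmark(fn, warmup=3, iters=10, sync=None):
    """Return the average wall-clock time of fn() in milliseconds.

    warmup: untimed calls to trigger lazy initialization / JIT / cudnn autotune.
    sync:   optional callable (e.g. torch.cuda.synchronize) that blocks until
            queued GPU work finishes; without it, CUDA timings are misleading.
    """
    for _ in range(warmup):
        fn()
    if sync:
        sync()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    if sync:
        sync()
    return (time.perf_counter() - start) / iters * 1000.0  # ms per call
```

With PyTorch one would call something like `benchmark(lambda: model(x), sync=torch.cuda.synchronize)`; the first call is often 10-100x slower than steady state, which is why warmup matters.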
-
I am deploying Example 1: [Using Joint Inference Service in Helmet Detection Scenario](https://github.com/kubeedge/sedna/blob/main/examples/joint_inference/helmet_detection_inference/README.md).
edge…
-
Model: https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6
Fine-tuning a multimodal LLM is usually done on a custom dataset; here we show a demo that can be run directly.
Before starting fine-tuning, please make sure your environment is set up.
```bash
git clone https://github.com/modelscope/swift.git
cd swift
…
-
> I see you installed onnxruntime rather than onnxruntime-gpu, so GPU acceleration won't work.

But when I install onnxruntime-gpu, I get an error: grid_sample does not support 5D.
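As a quick sanity check after switching packages, you can confirm whether the CUDA execution provider is actually registered (`get_available_providers` is a real onnxruntime API; the import fallback below is only so the sketch runs anywhere):

```python
def cuda_available(providers):
    """True if the CUDA execution provider is in the given provider list."""
    return "CUDAExecutionProvider" in providers

try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = ["CPUExecutionProvider"]  # fallback when onnxruntime is absent

print("CUDA usable:", cuda_available(providers))
```

If `CUDAExecutionProvider` is missing even with onnxruntime-gpu installed, the usual culprit is having both `onnxruntime` and `onnxruntime-gpu` installed at once; uninstall both and reinstall only the GPU package.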
-
```
torchrun --standalone --nproc_per_node 1 scripts/inference.py --config configs/mvdit/inference/16x512x512.py
/root/miniconda3/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserW…
```
-
Hi, thanks for this fantastic work!
I'm running the demo but found it takes quite a long time (~1 min 30 s) to track a batch of points on the varanus data. Is there any way to speed it up, or am I just wonde…
-
Hi,
When I ran `python grounded_sam2_local_demo.py`,
the result was good with the prompt `text="car. road."`
![grounded_sam2_annotated_image_with_mask](https://github.com/user-attachments/assets/…
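For reference, GroundingDINO-style text prompts like the one above are conventionally lowercase category names, each terminated by a period. A small helper (hypothetical, not part of the repo) to build such prompts from a category list:

```python
def build_prompt(categories):
    """Join category names into a GroundingDINO-style prompt:
    lowercase, each category terminated by a single period."""
    return " ".join(c.strip().lower().rstrip(".") + "." for c in categories)

print(build_prompt(["Car", "road"]))  # → car. road.
```

Deviating from this format (missing periods, uppercase names) often degrades detection quality noticeably.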
-
Is it possible to specify the decoder format, such as h264 or h265, when jetson_utils opens an RTSP camera stream? If so, how should I do it in Python code?
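I believe jetson_utils' `videoSource` accepts command-line-style options via an `argv` list, and `--input-codec` is the flag that selects the decoder; a sketch under that assumption (URL and codec choice are placeholders):

```python
def rtsp_argv(codec="h264"):
    """Build the argv options that force a specific RTSP decode codec
    (assumes jetson_utils' --input-codec flag)."""
    return [f"--input-codec={codec}"]

try:
    from jetson_utils import videoSource  # only available on Jetson installs
    camera = videoSource("rtsp://user:pass@192.168.1.10:554/stream",
                         argv=rtsp_argv("h265"))
except ImportError:
    pass  # off-device: the argv list above is still what you would pass
```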
-
Would it be possible to save out all the rendered pictures before they are compressed into a movie?
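A generic approach (not specific to this repo) is to dump each rendered frame to its own numbered file and assemble the movie as a separate step:

```python
import os

def save_frames(frames, outdir, prefix="frame"):
    """Write each encoded frame (bytes) to its own numbered file before any
    video compression happens. Filenames: frame_0000.png, frame_0001.png, ...
    """
    os.makedirs(outdir, exist_ok=True)
    paths = []
    for i, data in enumerate(frames):
        path = os.path.join(outdir, f"{prefix}_{i:04d}.png")
        with open(path, "wb") as f:
            f.write(data)
        paths.append(path)
    return paths
```

The numbered files can then be turned into a movie afterwards, e.g. with `ffmpeg -framerate 30 -i frame_%04d.png out.mp4`, while the lossless originals stay on disk.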