-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and fou…
-
Hi @AlexeyAB, kindly help.
When I run `darknet_video.py` as `python darknet_video.py --input /darknet/Sample.mp4 --weights yolov1.weights --config_file cfg/yolov1/yolo.cfg --data_file ./…
-
Hi Kyle, thank you for such a nice paper!
I really learned a lot from your work.
Currently, I am trying to tune your model for inference on the ASD task
(for random video, with no annotation about an…
-
Your version of transformers forces LlamaFlashAttention2 in the constructor of LlamaDecoderLayer in transformers/models/llama/modeling_llama.py, which requires an Ampere or newer GPU to work. Just by using th…
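A hypothetical workaround sketch, not a confirmed fix from the library: recent transformers releases select the attention backend from the `attn_implementation` argument to `from_pretrained`, so forcing `"eager"` should avoid LlamaFlashAttention2 on pre-Ampere GPUs. The model id below is a placeholder.

```python
# Assumed workaround: pick the eager attention path so FlashAttention2
# (which needs Ampere, sm_80, or newer) is never constructed.
load_kwargs = {
    "attn_implementation": "eager",  # bypass LlamaFlashAttention2
}

# Illustrative call (model id is an assumption, not from the issue):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Llama-2-7b-hf", **load_kwargs
# )
```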
-
Hello,
I’ve been running some tests using the nano_llm.vision.video module with live camera streaming on the AGX Orin 64GB model,
with the following parameters:
--model Efficient-Large-Model/VI…
-
Using an NVIDIA Jetson AGX Orin Developer Kit.
```bash
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ docker/run.sh
$ cd build/aarch64/bin
# Downloa…
```
-
What needs to be changed if I want to do model inference on a video file instead of images? I am using cv2.VideoCapture(video_path) to read the video, then I run a while loop when the cap is…
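A minimal per-frame loop for this might look like the sketch below. `run_inference` is a placeholder for whatever single-image model call you already use, and the capture object is passed in so the same loop works with `cv2.VideoCapture(video_path)`:

```python
# Hypothetical sketch: frame-by-frame inference over a video instead of images.
# `capture` is any OpenCV-style capture (e.g. cv2.VideoCapture(video_path));
# `run_inference` is a placeholder for the single-image model call.
def process_video(capture, run_inference):
    results = []
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:  # end of file or decode error
            break
        results.append(run_inference(frame))
    capture.release()
    return results
```

With OpenCV this would be invoked as `process_video(cv2.VideoCapture(video_path), model_fn)`; checking the boolean returned by `read()` (rather than only `isOpened()`) is what cleanly detects the end of the file.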
-
1. Run the command "./video-viewer csi://0 rtsp://@:8554/stream_1" on the Jetson TX2 NX.
2. Run GStreamer on a Windows PC connected to the Jetson through a USB cable: "gst-launch-1.0 rtspsrc location=rtsp://192.168…
-
Model: codellama, Enum: OLLAMA
24.05.06 12:40:06: root: INFO : SOCKET inference MESSAGE: {'type': 'time', 'elapsed_time': '0.00'}
24.05.06 12:40:06: root: INFO : SOCKET inference MESSAGE: {'type…
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
How can we do real-time…