-
1. How did you obtain the subtitles for each image frame used in training? Does the training dataset come with them?
2. Are the subtitles different for different frames of the same video?
-
Dear NVIDIA Team,
I would like to request support for running TensorRT-LLM on the NVIDIA AGX Orin developer kit.
Thank you!
Best regards,
Shakhizat
-
[Video Demo](https://youtu.be/2BkH5hvQf6c)
[source code](https://github.com/Uzo2005/smartPasswords)
[website](http://20.51.223.32:80/)
My approach involved using an LLM (Gemini by Google) to extra…
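The description is cut off, so as illustration only, here is a minimal sketch of the kind of Gemini call such an approach might start from, using Google's `google-generativeai` Python SDK. The model name, prompt, and environment variable are assumptions, not taken from the linked repository.

```python
# Minimal sketch (not the project's actual code) of calling Gemini,
# the LLM the post says the approach is built on.
import os
import google.generativeai as genai

# Assumes an API key is provided via the environment.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Placeholder model name; the repository may use a different one.
model = genai.GenerativeModel("gemini-1.5-flash")

# Hypothetical prompt: the post is truncated, so the real extraction task is unknown.
response = model.generate_content("Summarize the key traits of a memorable, strong password.")
print(response.text)
```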
-
### Your current environment
```python
from PIL import Image
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info

MODEL_PATH = '/w…
```
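The snippet above is truncated; for context, a minimal sketch of how these imports are typically wired together for Qwen2-VL offline inference with vLLM, following the pattern in the vLLM/Qwen examples. The model path, image file, and prompt below are placeholders, not the reporter's actual values.

```python
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info

MODEL_PATH = "Qwen/Qwen2-VL-7B-Instruct"  # placeholder; the original path is truncated

llm = LLM(model=MODEL_PATH, limit_mm_per_prompt={"image": 1})
processor = AutoProcessor.from_pretrained(MODEL_PATH)

# A chat message carrying one image and a text question.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "demo.jpg"},  # placeholder image path
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Render the chat template, then pull out the vision inputs.
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _video_inputs = process_vision_info(messages)

outputs = llm.generate(
    [{"prompt": prompt, "multi_modal_data": {"image": image_inputs}}],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```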
-
Hi @huangb23,
Thanks for sharing the code for this great work!
Can you please share the inference code to generate the Stage 3 Dataset from ActivityNet/DiDeMo? Specifically, the inference configuration…
-
### All output of fabric is being parsed by the LM. Is this normal?
After reading the documentation, I am still not clear on how to save the YouTube transcript as a .txt file. `fabric -y https://www.youtub…
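Is plain shell redirection, e.g. `fabric -y "https://www.youtube.com/watch?v=…" > transcript.txt`, enough to save just the transcript (assuming `-y` with no pattern writes the raw transcript to stdout), or does that still send everything through the model?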
-
I noticed the recent integration of LLMs, local or remote, including RAG. This is a great feature.
The most requested feature nowadays is a real-time recommendation system, based on user events, li…
-
I wanted this feature when I was using AgentOps, so I would love to propose implementing a notification system for AgentOps to enhance user awareness of critical events and improve monitoring capabili…
-
It would be amazing if there were APIs exposing OpenRecall content for use as RAG context for another LLM (e.g. Ollama, Dify, or ChatGPT GPTs using functions), to enable asking "what's the last email I se…
-
Hello. I am interested in your project and have been conducting various experiments with it. However, I encountered the following error and do not know how to resolve it. This error occurs in pipeline…