-
### Bug Description
When I run a video action call to LLM Vision, I get an error in the traces (and logs):
```
Stopped because an error was encountered at 29 October 2024 at 19:35:44 (runtime: 11.0…
```
-
### Describe the bug
- Cloned the repo
- Installed everything needed
- Created the Modelfile (minimal client sketch after this list):
  FROM qwen2.5-coder:7b
  PARAMETER num_ctx 32768
- Ran the query in PowerShell, but either I don't see o…
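For reference, a minimal sketch of the same query through the ollama Python client; the model name qwen-coder-32k and the prompt are illustrative assumptions, and an ollama server is assumed to be running:
```
# Sketch only: assumes a model named "qwen-coder-32k" was created from the
# Modelfile above, e.g. `ollama create qwen-coder-32k -f Modelfile`.
import ollama

resp = ollama.chat(
    model="qwen-coder-32k",
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    options={"num_ctx": 32768},  # same context-window override as the Modelfile
)
print(resp["message"]["content"])
```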
-
```
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor, Qwen2VLProcessor
from qwen_vl_utils import process_vision_info
from awq.models.q…
```
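For context, the usual end-to-end Qwen2-VL inference flow (per the model card's documented API) looks roughly like the sketch below; the model id and image path are placeholders, and an AWQ checkpoint would slot in the same way:
```
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # placeholder; swap in the AWQ checkpoint
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/image.jpg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
trimmed = out[:, inputs.input_ids.shape[1]:]  # drop the prompt tokens from the output
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```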
-
### Project Name
LLM & RAG Chatbot for Online Shopping Website
### Description
**RAG App Description:**
This RAG app is an AI-powered chatbot designed to assist customers interested in health an…
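A minimal retrieve-then-generate sketch of the pattern described above; the product snippets are invented placeholders, sentence-transformers is assumed for embeddings, and generation is left to whichever chat LLM the project uses:
```
# Sketch only: toy catalog, real retrieval, generation left as a prompt.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Whey protein isolate, 25 g protein per serving, unflavored.",
    "Vitamin D3 softgels, 2000 IU, 90-count bottle.",
    "Resistance bands set, five tension levels.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, convert_to_tensor=True)

def retrieve(query, k=2):
    # Return the k catalog snippets most similar to the query.
    q_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=k)[0]
    return [docs[h["corpus_id"]] for h in hits]

question = "Which product helps with protein intake?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this catalog context:\n{context}\n\nQuestion: {question}"
print(prompt)  # `prompt` would then go to the chat model of choice
```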
-
CUDA_VISIBLE_DEVICES=0,1 python video_audio_demo.py --model_path VITA/VITA_ckpt --image_path asset/vita_log2.png --model_type mixtral-8x7b --conv_mode mixtral_two --audio_path asset/q1.wav
cannot o…
-
I was pretty amazed by SAM 2 when it came out, given all the work I do with video. My company works a ton with it, and we decided to take a crack at optimizing it; we made it run 2x faster than th…
-
I was thinking about doing something similar!
Here's what I propose adding:
- Upscaling with ESRGAN and similar models for better quality (see the sketch after this list), e.g. https://github.com/MattyMroz/ESRGAN_Upscale
- TTS usi…
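Not the ESRGAN pipeline itself, but a minimal stand-in sketch of the upscaling step using OpenCV's dnn_superres module; the opencv-contrib-python dependency and the ESPCN_x4.pb model file are assumptions here:
```
# Stand-in sketch: OpenCV's dnn_superres in place of a full ESRGAN pipeline.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")    # path to the pre-trained super-resolution model
sr.setModel("espcn", 4)        # model name and scale must match the file
frame = cv2.imread("input_frame.png")
upscaled = sr.upsample(frame)  # 4x-upscaled image
cv2.imwrite("output_frame.png", upscaled)
```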
-
**Describe the bug**
When deploying LLaVA-NeXT-Video-34B-hf, I find that the configuration key passed to transformers is "llava_next_video", while the accurate key in transformers is "llava-next-video…
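A small diagnostic sketch to check which model_type string the installed transformers version actually registers (CONFIG_MAPPING is transformers' internal registry of config keys):
```
# Diagnostic sketch: print which spelling of the key is registered.
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

for key in ("llava_next_video", "llava-next-video"):
    print(key, "registered:", key in CONFIG_MAPPING)
```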
-
Hi Dustin,
Great job on Live Llava 2.0 (VILA + Multimodal NanoDB) for Jetson Orin. Is it possible to run all the jetson-container images offline instead of downloading from Hugging Face every time?
Tried…
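One possible approach, sketched below under the assumption that the containers use the standard Hugging Face cache: pre-download the model once with huggingface_hub, then set the offline environment variables so later runs never touch the network. The model id is illustrative only.
```
# Sketch: one-time download into the local HF cache, then force offline mode.
import os
from huggingface_hub import snapshot_download

local_dir = snapshot_download("Efficient-Large-Model/VILA1.5-3b")  # illustrative model id
os.environ["HF_HUB_OFFLINE"] = "1"         # subsequent runs read only from the cache
os.environ["TRANSFORMERS_OFFLINE"] = "1"
print("cached at:", local_dir)
```
Mounting the cache directory into the container would then avoid repeated downloads.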
-
You will see the problem in the text below. This is with gpt-4o and version 0.5 of Agent Zero, but I have similar issues with other models.
User message ('e' to leave):
> Write a college level …