-
### Describe the bug
I followed the documentation and the video, installed via Docker, and configured the .env file, but I am still getting errors. Please find attached the screenshots for reference.
![a11]…
-
I made some changes to the model (3D convs) and trained the small one with 128 tokens on 128p, 16-frame videos pre-compressed with CogVideoX's VAE, using MSE loss.
Turned out better than I expected consi…
-
It does not process videos that are over 2 hours long. Do you know how I can fix this? I am using the OpenAI model GPT-4 Turbo. This is the message I get:
Error in workflow 'YouTube Transcript Analysis': LLM…
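If the failure is the transcript exceeding the model's context window (a common cause with multi-hour videos), one possible workaround is to split the transcript into overlapping chunks and analyze each chunk separately, then combine the per-chunk summaries. A minimal sketch, assuming plain-text transcript input; `chunk_text` and its size parameters are hypothetical, and the character budget is only a rough proxy for tokens:

```python
def chunk_text(text: str, max_chars: int = 12000, overlap: int = 500) -> list:
    """Split a long transcript into overlapping chunks.

    Each chunk is at most `max_chars` characters; consecutive chunks
    share `overlap` characters so sentences cut at a boundary still
    appear whole in the next chunk.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back to create the overlap
    return chunks

# Each chunk can then be sent to the LLM as its own request, and the
# partial results merged in a final summarization pass.
```

Whether this fits depends on how the workflow tool exposes the transcript; if it only passes the full text in one call, the split has to happen upstream.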
-
### System Info
Single RTX 3090 with 24GB of video memory
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially…
-
There is no way to find a button to switch models (as shown, for example, in a video), and Llm.CLAUDE_3_5_SONNET_2024_06_20 is called by default every time.
-
## ❓ General Questions
I tried to compile TVM and MLC-LLM on a Jetson Orin AGX (JetPack 6, CUDA 12.2) in order to run inference with Phi-3.5-vision. However, I found that Phi-3 processes images much more slowly than the Hugging Face …
-
Hi,
I have finetuned Qwen2-VL using Llama-Factory.
I successfully quantized the fine-tuned model as shown below:
```python
from transformers import Qwen2VLProcessor
from auto_gptq import BaseQuantizeC…
-
"How can we process raw video data to generate a JSON file in the format `{"video": video_path, "text": text_prompt}`?"
-
### Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
When using a 34B LLM, I cannot fit the entire model on a single GPU, so I use device_map='auto' to spread parts of it across multiple GPUs. However, I found that inference takes far too long. How can I solve this problem…
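One common cause of very slow generation with `device_map="auto"` is that Accelerate's conservative memory estimate silently offloads some layers to CPU or disk; passing an explicit `max_memory` budget per GPU can keep all layers on the GPUs. A hedged sketch, assuming `transformers`/`accelerate` are installed; the model id and the 22 GiB budget are placeholders you would adjust for your cards:

```python
def per_gpu_max_memory(num_gpus: int, gib_per_gpu: int = 22) -> dict:
    """Build the max_memory mapping passed to from_pretrained().

    An explicit per-device budget discourages accelerate from
    offloading layers to CPU/disk, a frequent cause of slow
    multi-GPU inference with device_map="auto".
    """
    return {i: f"{gib_per_gpu}GiB" for i in range(num_gpus)}

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM

    model_id = "your-org/your-34b-model"  # hypothetical placeholder
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision halves the memory footprint
        device_map="auto",
        max_memory=per_gpu_max_memory(torch.cuda.device_count()),
    )
```

If the model genuinely does not fit on the available GPUs even in fp16, quantization (e.g. 8-bit or 4-bit loading) is the usual next step rather than CPU offload.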