-
2024-09-03 16:04:29.288 | INFO | __main__::783 - Starting video generation
2024-09-03 16:04:29.308 | INFO | __main__::784 - {
"video_subject": "How parents can teach their children to say no",
"video_script": "",
"video_terms":…
-
Thank you for your excellent work. I have a few questions I hope you can answer. I would like to apply TFVTG to my custom video dataset to test its video temporal grounding capability. What sh…
-
### Common description
I have encountered an issue where Baml parses the response from the LLM incorrectly. Specifically, it seems that one block type is being incorrectly converted into another bloc…
-
@songhappy / @shane-huang: Could you please share the code or the steps you used to run LanguageBind/Video-LLaVA-7B-hf on IPEX-LLM a few months back?
We have a customer who wants to use video-llava runni…
-
**Describe the bug**
When deploying LLaVA-NeXT-Video-34B-hf, I find that the configuration key passed to transformers is "llava_next_video", while the correct key in transformers is "llava-next-video…
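Until the mismatch is resolved upstream, one possible local workaround is to rewrite the offending key in the checkpoint's `config.json` before loading. This is only a sketch under the assumption that the mismatch is confined to the `model_type` string; the function name `normalize_model_type` and the expected value are illustrative, not confirmed against the actual checkpoint or transformers version:

```python
import json
import tempfile
from pathlib import Path

def normalize_model_type(config_path, expected="llava-next-video"):
    """Rewrite the model_type key in a local config.json so it matches the
    identifier the installed transformers version expects (illustrative
    workaround; the expected value here is an assumption)."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    if cfg.get("model_type") != expected:
        cfg["model_type"] = expected
        path.write_text(json.dumps(cfg, indent=2))
    return cfg

# Demo on a throwaway config file carrying the underscore variant of the key:
tmp = Path(tempfile.mkdtemp()) / "config.json"
tmp.write_text(json.dumps({"model_type": "llava_next_video"}))
print(normalize_model_type(tmp)["model_type"])  # → llava-next-video
```

Patching the file on disk (rather than the in-memory config) matters here because `AutoConfig`/`AutoModel` dispatch on the `model_type` string before any model object exists.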
-
### Reference Issues
None yet.
### Summary
Hi there! I am a big fan of kotaemon and would love to integrate Not Diamond into it. In case you’re unfamiliar with Not Diamond, it automatically r…
-
You will see the problem in the text below. This is with gpt-4o and version 0.5 of Agent Zero, but I have similar issues with other models.
User message ('e' to leave):
> Write a college level …
-
Judging from the videos and docs, this looks like a great plugin that I will surely use in some way (I haven't actually used it yet). I'm a heavy InDesign user, and the possibilities for amending content seem in…
-
I was pretty amazed by SAM 2 when it came out, given all the work I do with video. My company uses it heavily, and we decided to take a crack at optimizing it; we made it run 2x faster than th…
-
Is streaming output not supported?