-
Hello author, I am currently working on using the IEMOCAP dataset with a multi-label approach on your architecture, with audio, video, and text as input. But I ran into some problems with your code; here are t…
-
_The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense._
---
### 1. Issue…
-
Is it possible to run the AWQ models using the `run_vila.py` script?
I ran the following command:
```
python -W ignore llava/eval/run_vila.py \
--model-path Efficient-Large-Model/VILA1.5-3…
-
Hi, motivated by the awesome MiniGPT4, we are excited to present Video-LLaMA (https://github.com/DAMO-NLP-SG/Video-LLaMA), a modular video-language pre-training framework that empowers instruction-fol…
-
```meta
Time: 2024-08-12 6:00PM Eastern
UTCTime: 2024-08-12 22:00 UTC
Duration: 2h
Location: ATL BitLab, 684 John Wesley Dobbs Ave NE, Unit A1, Atlanta, GA 30312
```
![aitl-ai-builders-august]…
-
```meta
Time: 2024-07-08 6:00PM Eastern
UTCTime: 2024-07-08 22:00 UTC
Duration: 2h
Location: ATL BitLab, 684 John Wesley Dobbs Ave NE, Unit A1, Atlanta, GA 30312
```
![aitl-ai-builders-july](h…
-
GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation
https://arxiv.org/abs/2410.11841
-
### Brand Name
Activeloop
### Website
https://www.activeloop.ai/
### Popularity Metric
Activeloop provides Deep Lake - an Enterprise-Grade Database for AI. Deep Lake is a multi-modal database for…
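As a rough illustration of what working with Deep Lake looks like, here is a minimal sketch using the open-source `deeplake` Python client; the public dataset path `hub://activeloop/mnist-train` is Activeloop's standard example and is an assumption here, not something stated in this entry:
```python
# Minimal sketch, assuming the open-source `deeplake` client (pip install deeplake).
# The dataset path is Activeloop's public MNIST example, used only for illustration.
import deeplake

# Open a hosted Deep Lake dataset (read-only by default for public datasets).
ds = deeplake.load("hub://activeloop/mnist-train")

print(list(ds.tensors))        # names of the tensors stored in the dataset, e.g. 'images', 'labels'
image = ds.images[0].numpy()   # fetch a single sample lazily as a NumPy array
label = ds.labels[0].numpy()
print(image.shape, label)
```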
-
I have built my own demo file. After uploading one video, it gives a blank output. Could anyone help me out?
-------------------------------Here's the demo file-------------------------
from argpars…
-
Hi authors, thanks for your amazing work, which contributes a lot to long video understanding!
I'm reproducing your experiments on LLaVA-NeXT-Video. I've met some problems and would like to know how you …