magic-research / PLLaVA
Official repository for the paper PLLaVA
568 stars, 37 forks
Issues
#85 How to train train_tcllava_7b.sh on multiple machines? (Wiselnn570, opened 2 days ago, 0 comments)
#84 flash-attn (AAwilliam, opened 5 days ago, 0 comments)
#83 Does PLLaVA possess the capability to follow instructions? (Pride-Huang, opened 2 weeks ago, 0 comments)
#82 TGIF-QA evaluation crashing (geomlyd, opened 2 weeks ago, 0 comments)
#81 Why is the output blank? (BOYJZ, opened 3 weeks ago, 3 comments)
#80 Why is the answer to this demo blank? (DoigtByou, opened 1 month ago, 2 comments)
#79 Environment setup failed (chenxinhua, opened 1 month ago, 0 comments)
#78 Image token processing malfunction (Stevetich, opened 1 month ago, 0 comments)
#77 Enable inference API from demo (zhanwenchen, closed 2 months ago, 0 comments)
#76 No module named dataset (liguopeng0923, closed 2 months ago, 1 comment)
#75 Is the inference done using only one <image> token? (gowreesh-mago, opened 2 months ago, 2 comments)
#74 Question about the training data (Kendrick-Powehi-Z, opened 3 months ago, 0 comments)
#73 Eval results (liguopeng0923, closed 2 months ago, 6 comments)
#72 Training dataset keys_indexfile (linxid, opened 3 months ago, 0 comments)
#71 Training dataset (linxid, closed 3 months ago, 1 comment)
#70 High video loss (TonyXuQAQ, opened 3 months ago, 0 comments)
#69 Which train_corpus was used to train pllava-13b? (gaowei724, opened 3 months ago, 0 comments)
#68 Instruction-following ability is weak (lmx760581375, opened 3 months ago, 7 comments)
#67 [TRAINING DATA] The number of video_qa entries in magic_jsons is larger than the number reported in the PLLaVA paper (YeeShih, closed 3 months ago, 0 comments)
#66 Training loss curve (hukkai, opened 3 months ago, 2 comments)
#65 The load_video function does not seem right in eval_utils.py (ahmadmobeen, opened 3 months ago, 0 comments)
#64 What prompt is used to evaluate model answers on the VideoChatGPT generation benchmark? (ADiko1997, opened 3 months ago, 0 comments)
#63 The inference result is a mess (lucasjinreal, opened 3 months ago, 2 comments)
#62 RuntimeError: FlashAttention only supports Ampere GPUs or newer (zhanwenchen, opened 3 months ago, 1 comment)
#61 Inference error (lucasjinreal, opened 3 months ago, 4 comments)
#60 How is the <image> token replaced in PLLaVA? (lucasjinreal, opened 3 months ago, 1 comment)
#59 About the dataset (lucasjinreal, opened 3 months ago, 0 comments)
#58 About the LoRA config alpha and rank (lucasjinreal, opened 3 months ago, 2 comments)
#57 How to prepare the eval dataset (linxid, opened 3 months ago, 4 comments)
#56 How to finetune based on your finetuned result (liuao743, opened 3 months ago, 2 comments)
#55 The Avg. results in the PLLaVA paper are wrong (huangshiyu13, opened 4 months ago, 2 comments)
#54 How to train only the projector? (gaowei724, closed 4 months ago, 2 comments)
#53 What is the 7B model's score on MVBench? (MonolithFoundation, opened 4 months ago, 1 comment)
#52 Difference between videochat2 jsons and magic jsons (OliverHxh, opened 4 months ago, 2 comments)
#51 Training time consumption (hmxiong, closed 3 months ago, 2 comments)
#50 Eval on finetuned model (liuao743, closed 4 months ago, 8 comments)
#49 Checkpoints for trained models? (serwansj, closed 4 months ago, 1 comment)
#48 Multi-GPU inference for 34B model (serwansj, opened 4 months ago, 1 comment)
#47 Clarifying ambiguity in training data (patrick-tssn, opened 4 months ago, 3 comments)
#46 How can we quantize the 34B model for inference? (ApoorvFrontera, opened 4 months ago, 3 comments)
#45 No output text when evaluating my own pre-trained model (gaowei724, closed 4 months ago, 5 comments)
#44 It seems the most effective way to improve performance is to increase the size of the LLM (xmy0916, opened 4 months ago, 0 comments)
#43 Finetune problem (AshOneN, opened 4 months ago, 3 comments)
#42 Possible error in docstring for "pixel_values" (tomyoung903, opened 4 months ago, 0 comments)
#41 Fix bug for demo (patrick-tssn, opened 4 months ago, 0 comments)
#40 Bug in tasks/eval/eval_utils.py (patrick-tssn, opened 4 months ago, 1 comment)
#39 Finetuning my own tasks based on pllava-7b fails with an "image_to_overwrite.sum() != image_features.shape[:-1].numel()" assertion (gaowei724, closed 4 months ago, 1 comment)
#38 Can PLLaVA caption rectangular videos without cropping? (tomyoung903, opened 4 months ago, 1 comment)
#37 CLI inference (YepJin, opened 4 months ago, 0 comments)
#36 Dataset request (mingzeG, opened 4 months ago, 2 comments)