# OpenGVLab / LAMM

[NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents

https://openlamm.github.io/ · 284 stars · 15 forks
## Issues
| # | Title | Author | State | When | Comments |
|---|-------|--------|-------|------|----------|
| #80 | How to evaluate 3D tasks of Octavius? | TangYuan96 | closed | 3 weeks ago | 0 |
| #79 | some question about training octavius | herkerser | opened | 3 weeks ago | 1 |
| #78 | How much time is required to train Octavius on 3D instruction dataset, as specified in the paper? | TangYuan96 | closed | 3 weeks ago | 0 |
| #77 | no multimodal branch in lightLLM | KimWu1994 | opened | 1 month ago | 0 |
| #76 | Ch3Ef dataset is not avaliable | Kizna1ver | opened | 2 months ago | 1 |
| #75 | petrel_client | KimWu1994 | opened | 2 months ago | 1 |
| #74 | COCO Detection Instructions on HF Datasets | adymaharana | opened | 2 months ago | 0 |
| #73 | Ch ef v2 | Coach257 | closed | 3 months ago | 0 |
| #72 | how to deal with multi-turn dialogue for octivius? | joez17 | opened | 4 months ago | 0 |
| #71 | Multi image | Coach257 | closed | 5 months ago | 0 |
| #70 | feat: update news | orangegk | closed | 5 months ago | 0 |
| #69 | LAMM1.5 | Coach257 | closed | 5 months ago | 1 |
| #68 | Code Stop | LanShanPi | opened | 5 months ago | 0 |
| #67 | Code stop | LanShanPi | closed | 5 months ago | 0 |
| #66 | Using PPL on LAMM, discoverd 2 bugs, and result is much worse than direct inference type | AlexWang1900 | closed | 3 months ago | 2 |
| #65 | I am interested in testing Octavius? Is there a tutorial to use it like LAMM with cli_demo.py? | drahmad89 | opened | 5 months ago | 1 |
| #64 | Some benchmarks are missing from the leaderboards | zhimin-z | closed | 5 months ago | 8 |
| #63 | Is ScicenQA's meta filename mismatched? | sunwhw | closed | 6 months ago | 1 |
| #62 | Mismatched results between paper and leaderboard | zhimin-z | closed | 7 months ago | 2 |
| #61 | Are the leaderboard results from `0-shot` settings? | zhimin-z | closed | 7 months ago | 1 |
| #60 | :question: Could you specify all multitasks Octavius was trained on? | simon-lund | opened | 7 months ago | 0 |
| #59 | omnibenchmark dataset | sunwhw | closed | 7 months ago | 3 |
| #58 | :bug: Broken link in benchmark documentation | simon-lund | closed | 7 months ago | 1 |
| #57 | how to ensure the fairness and stability of this test | sunwhw | closed | 7 months ago | 4 |
| #56 | What does `failed` mean in the test? | zhimin-z | closed | 7 months ago | 3 |
| #55 | What are the metrics of the six recipes of Desiderata? | zhimin-z | closed | 7 months ago | 4 |
| #54 | What are the metrics of `Omnibenchmark`, `ScienceQA`, `MMBench`, `SEED`, and `MME` benchmarks? | zhimin-z | closed | 7 months ago | 3 |
| #53 | Could you provide LLaVA-v1.5 7b/13b results on ChEF leaderboard? | YuqiHUO | closed | 3 months ago | 2 |
| #52 | performance on evaluating instructblip-vicuna7b and instructblip-flant5xxl | sunwhw | closed | 7 months ago | 3 |
| #51 | Update README.md | eltociear | closed | 5 months ago | 0 |
| #50 | Updates | Coach257 | closed | 8 months ago | 3 |
| #49 | release ChEF and Octavius | Coach257 | closed | 8 months ago | 0 |
| #48 | How to change options to decrease calculation consumption during evaluation_3d ? | Xiaolong-RRL | closed | 6 months ago | 2 |
| #47 | AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' | Xiaolong-RRL | closed | 8 months ago | 1 |
| #46 | update requirements | wangjiongw | closed | 8 months ago | 0 |
| #45 | update news in README & fix typo | wangjiongw | closed | 8 months ago | 0 |
| #44 | Zero-shot performances on ScanQA with Vicuna-7B | UnderTheMangoTree | closed | 5 months ago | 2 |
| #43 | fix some typo in ``.gitignore`` | lighten001 | closed | 10 months ago | 0 |
| #42 | update readme & enable llama2 support | wangjiongw | closed | 10 months ago | 0 |
| #41 | Zero-Shot results on 3D_Benchmark and official ckpt | hmxiong | closed | 9 months ago | 5 |
| #40 | add support for lightllm when inference | lighten001 | closed | 10 months ago | 0 |
| #39 | The output is garbled | hmxiong | closed | 10 months ago | 2 |
| #38 | what the differences among instruct_98K, instruct_140K, instruct_186K? | peiliu0408 | closed | 10 months ago | 1 |
| #37 | merge lora parameters when runing inference code | lighten001 | closed | 10 months ago | 0 |
| #36 | update readme | wangjiongw | closed | 11 months ago | 0 |
| #35 | add flash attention support in training to save memory and speed up | lighten001 | closed | 10 months ago | 0 |
| #34 | update scripts & inference codes; fix bugs& update results | wangjiongw | closed | 11 months ago | 0 |
| #33 | fix save_model func in src/model/agent.py | lighten001 | closed | 11 months ago | 0 |
| #32 | the save_model func in src/model/agent.py may cause some bugs when using deepspeed ZeRO3 | lighten001 | closed | 11 months ago | 1 |
| #31 | fix: fix a bug of inference_2d about fixed batch_size = 1 | Zhoues | closed | 12 months ago | 1 |