ylsung / VL_adapter
PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022)
MIT License · 204 stars · 16 forks
Issues
#19  A question about zero-grad settings in VL-adapter's multitask.py file. (open, y2sman, 6 months ago, 0 comments)
#18  Where should I set a different $d$ for the Single Adapter? (closed, chenxshuo, 1 year ago, 1 comment)
#17  How did you feed each VQA v2 question and its answer list to VL-T5 or VL-BART? (open, sanyalsunny111, 1 year ago, 1 comment)
#16  No such file or directory: '/root/VL_adapter/datasets/vqa/train.json' (open, CuddleSabe, 1 year ago, 0 comments)
#15  Where is the entrance to the setting of half-shared adapters? (closed, chenxshuo, 1 year ago, 2 comments)
#14  Do you plan to also release the adapted checkpoints for each experiment? (closed, chenxshuo, 1 year ago, 4 comments)
#13  COCO Cap. Karpathy test CIDEr score is super low (closed, yushuinanrong, 1 year ago, 2 comments)
#12  training hangs every few seconds (closed, yushuinanrong, 1 year ago, 0 comments)
#11  CVE-2007-4559 Patch (open, TrellixVulnTeam, 1 year ago, 0 comments)
#10  unsuccessful clip feature extraction (open, JaniceLC, 2 years ago, 1 comment)
#9  About boxes in video (closed, czy-orange, 2 years ago, 2 comments)
#8  DDP degrades the performance (open, prote376, 2 years ago, 10 comments)
#7  Could you release pretrained models? (closed, zeyofu, 1 year ago, 1 comment)
#6  Feature extraction (closed, ylsung, 2 years ago, 0 comments)
#5  Question about the design choice of the unified framework. (closed, JacobYuan7, 2 years ago, 1 comment)
#4  Clip feature extraction (closed, kittitouchar, 2 years ago, 2 comments)
#3  train problem (closed, Twilighter9527, 2 years ago, 3 comments)
#2  FileNotFoundError (closed, kmzcy, 2 years ago, 4 comments)
#1  When do you expect to release the code? (closed, xhyandwyy, 2 years ago, 2 comments)