dragen1860 / MAML-Pytorch
Elegant PyTorch implementation of the paper "Model-Agnostic Meta-Learning (MAML)"
MIT License · 2.28k stars · 421 forks
Issues
#76 Training on my own dataset (qiongjiekucha, closed 6 months ago, 0 comments)
#75 Accuracy does not change (shiyao1999, opened 8 months ago, 4 comments)
#74 What is the backup file for, and what is the reference navie5 in navie5? (kevinwanglu, opened 11 months ago, 0 comments)
#73 About accuracy (feisichen, opened 1 year ago, 6 comments)
#72 Using the Learner object for my project; loss not behaving at its best (Metabloggism, opened 1 year ago, 0 comments)
#71 debug (swaggyP9527, opened 1 year ago, 0 comments)
#70 Where are the model weight files saved, and where is that code? (XingshiXu, opened 2 years ago, 1 comment)
#69 Why is the code for Learner so complicated? (fikry102, opened 2 years ago, 1 comment)
#68 Why is `for epoch in range(args.epoch // 10000):` (Waterkin, opened 2 years ago, 2 comments)
#67 Can you share your environment requirements? (bbruceyuan, closed 3 months ago, 0 comments)
#66 About training and testing (zkk-web, opened 2 years ago, 6 comments)
#65 Hello, I have two questions about the code; could you please advise? Thanks (CUITCHENSIYU, opened 2 years ago, 3 comments)
#64 Asking about the inner and outer loop (tranquangchung, opened 2 years ago, 0 comments)
#63 create_graph parameter is False, hence first-order MAML? (gunnxx, opened 3 years ago, 1 comment)
#62 About dataset splitting (raymond00000, opened 3 years ago, 0 comments)
#61 Incorrect losses_q (zilin129, opened 3 years ago, 3 comments)
#60 Can you please add a 1-D CNN model to the Learner? (zilin129, opened 3 years ago, 0 comments)
#59 Does the Hessian really get computed? (tanghl1994, opened 3 years ago, 1 comment)
#58 Omniglot dataset download error (vegetarianfish, closed 3 years ago, 1 comment)
#57 Why use a custom grad clip function? (QasimWani, opened 3 years ago, 0 comments)
#56 Overlap between meta-training and meta-testing? (atseng17, closed 3 years ago, 1 comment)
#55 Something wrong in the config of the last max pooling layer (MrDavidG, opened 4 years ago, 1 comment)
#54 Something about the computational graph (kongbia, opened 4 years ago, 0 comments)
#53 How to save and load the model (asker-github, closed 4 years ago, 0 comments)
#52 Question about the variable losses_q (sailist, closed 4 years ago, 1 comment)
#51 Why not "loss_q = sum(losses_q) / task_num"? (geekyutao, closed 4 years ago, 2 comments)
#50 Performance on Omniglot is slightly lower than the paper reports (YaxinLi0-0, opened 4 years ago, 3 comments)
#49 What's the difference between task_num and N-way? (luciaL, opened 4 years ago, 0 comments)
#48 for step, (x_spt, y_spt, x_qry, y_qry) in enumerate(db): (Peterdingpeng, opened 4 years ago, 1 comment)
#47 Some ideas about parameter updating (zhaoyu-li, opened 4 years ago, 2 comments)
#46 Only use Conv and Linear (CongWeilin, opened 4 years ago, 1 comment)
#45 RuntimeError: Expected object of scalar type Long but got scalar type Int for argument #2 'target' (Peterdingpeng, opened 4 years ago, 1 comment)
#44 Why must requires_grad be False for running_mean/running_var? (arxrean, opened 4 years ago, 0 comments)
#43 Some questions about the code (HongduanTian, opened 4 years ago, 0 comments)
#42 Measuring model performance (kk2491, opened 4 years ago, 17 comments)
#41 Accuracy format (eghouti, opened 4 years ago, 1 comment)
#40 Questions with respect to no_grad() (d12306, opened 4 years ago, 1 comment)
#39 Conv-ReLU-BN issue (mileyan, opened 4 years ago, 1 comment)
#38 Multiple GPU (dipanjan06, opened 4 years ago, 0 comments)
#37 OmniglotNShot (li829, opened 4 years ago, 1 comment)
#36 Re-sampling tasks after each epoch increases performance (ShawnLixx, opened 4 years ago, 3 comments)
#35 Result is not deterministic because of iterating over a Python dictionary (ShawnLixx, opened 4 years ago, 3 comments)
#34 How to use the trained MAML model to predict one image at a time? (vatsalsaglani, opened 4 years ago, 6 comments)
#33 Running the code fails on db = DataLoader(mini, args.task_num, shuffle=True, num_workers=1, pin_memory=True) (hongshengxin, opened 4 years ago, 5 comments)
#32 2nd-order or 1st-order approximation? (Vampire-Vx, opened 4 years ago, 2 comments)
#31 learning_rate (dawn2034, opened 5 years ago, 0 comments)
#30 Should I fine-tune the model on each new task? (AaboutL, closed 5 years ago, 2 comments)
#29 Maybe there is an error in learner.py? (crashmoon, closed 5 years ago, 1 comment)
#28 Confused (kimnoic, closed 5 years ago, 0 comments)
#27 Why not set bn_training=False when testing the net on x_qry in the fine-tuning phase? (boy-be-ambitious, opened 5 years ago, 2 comments)
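Several issues above (#63, #59, #32) circle the same question: whether computing inner-loop gradients without `create_graph=True` silently turns MAML into its first-order approximation. A minimal sketch of the distinction, using a toy scalar model rather than this repository's Learner (all names and values here are illustrative):

```python
import torch

# Toy parameter and a single (input, target) pair; illustrative only.
w = torch.tensor([1.0], requires_grad=True)
x, y = torch.tensor([2.0]), torch.tensor([7.0])

def loss_fn(weight):
    # Simple squared-error loss of a linear model weight * x.
    return ((weight * x - y) ** 2).mean()

# Inner-loop adaptation: one SGD step on the support loss.
inner_loss = loss_fn(w)

# Second-order MAML: create_graph=True records the gradient computation
# itself in the autograd graph, so the outer backward pass can
# differentiate *through* the inner update (Hessian-vector term).
grad = torch.autograd.grad(inner_loss, w, create_graph=True)[0]
w_adapted = w - 0.1 * grad

# Outer (meta) loss, evaluated at the adapted parameters.
outer_loss = loss_fn(w_adapted)
outer_loss.backward()
print(w.grad)

# With create_graph=False (first-order MAML / FOMAML), `grad` would be
# a constant from autograd's point of view, and w.grad would lose the
# second-order term that differentiates through the update.
```

With `create_graph=False` the adapted weight is treated as `w` plus a constant, so the meta-gradient here would be -4.0 instead of the full second-order value; the two variants genuinely compute different updates.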