-
Hi --
This is a really great implementation and improvement of MAML!
I'm curious whether it's actually a good idea to let the network meta-learn a nonzero bias initialization for the linea…
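For context, the alternative being asked about could look like the following sketch: keeping the output layer's bias at zero and excluding it from meta-learning. This is illustrative only (the layer shape and names are assumptions, not the repo's actual model):

```python
import torch
import torch.nn as nn

# Hypothetical classifier head: zero the bias and freeze it so the
# meta-learner cannot learn a nonzero bias initialization for it.
head = nn.Linear(64, 5)
nn.init.zeros_(head.bias)
head.bias.requires_grad_(False)  # excluded from the meta-update
```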
-
The current [README](https://github.com/cyberark/conjur#conjur) begins with a long bullet list, which reads like a word salad of jargon and buzzwords. The very first feature we sell is MAML, our con…
-
Thank you for releasing the code.
I notice that the function
`def forward(self, input, num_step, params=None, training=False, backup_running_statistics=False)`
takes a `training` flag. However…
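My understanding of what such a flag usually controls (a sketch of assumed semantics, not the repo's actual implementation): in training mode the layer normalizes with batch statistics, while in evaluation mode it falls back to the stored running statistics.

```python
import torch

def batch_norm(x, running_mean, running_var, training, eps=1e-5):
    # Sketch: `training` selects batch statistics vs. stored running statistics.
    if training:
        mean = x.mean(0)
        var = x.var(0, unbiased=False)
    else:
        mean, var = running_mean, running_var
    return (x - mean) / torch.sqrt(var + eps)
```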
-
Hi Antreas.
Thanks for the great work on MAML++! It will be very helpful!
I have the following questions.
1. The LSLR mentioned in the paper is not reflected in the code. The learning rate of each l…
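To clarify what I mean by LSLR: the MAML++ paper keeps a learnable learning rate per layer and per inner-loop step. A minimal sketch of that idea (names here are illustrative, not the repo's API):

```python
import torch

class LSLR:
    """Per-layer, per-step learned learning rates (LSLR sketch)."""

    def __init__(self, param_names, num_inner_steps, init_lr=0.01):
        # One learnable lr per (layer, inner step); these would be
        # optimized by the outer (meta) loop.
        self.lrs = {
            name: torch.nn.Parameter(init_lr * torch.ones(num_inner_steps))
            for name in param_names
        }

    def update(self, params, grads, step):
        # Inner-loop SGD update using the learned lr for this layer/step.
        return {
            name: p - self.lrs[name][step] * grads[name]
            for name, p in params.items()
        }
```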
-
Which of these is the correct approach:
```
# Accumulate gradients wrt meta-params for each task
qry_loss_t.backward() # note this is more memory efficient (as it remov…
-
I'd like to use the Adafactor scheduler that the Hugging Face code has (their code does not work for CNNs).
My questions are as follows:
a) How do I use schedulers with pytorch-optimizer?
b) Can we ad…
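Regarding (a), my understanding is that the built-in `torch.optim.lr_scheduler` classes work with any optimizer that subclasses `torch.optim.Optimizer`, which (as far as I know) the pytorch-optimizer implementations do. A minimal sketch, using `Adam` as a stand-in for any such optimizer:

```python
import torch

model = torch.nn.Linear(10, 2)
# Any optimizer subclassing torch.optim.Optimizer works here;
# Adam stands in for a pytorch-optimizer implementation.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

for epoch in range(20):
    opt.step()    # normally called after loss.backward()
    sched.step()  # halves the lr every 10 epochs
```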
-
Hi, thanks for your code, which has helped me a lot.
But I have a question about the AntVel env: when I try to train MAML in this environment, I can't get a good result, although it works well in the …
-
### 🚀 The feature, motivation and pitch
Repro:
```
with enable_torch_dispatch_mode(FakeTensorMode(inner=None)):
    torch.tensor([torch.ones([], dtype=torch.int64), 0],)
```
> NotImplementedErro…
-
I propose updating the C# language specification to include a mention of a `` element in documentation. This "inline" element identifies content that is not yet finalized and/or needs further review.
…
-
Hi, thanks for your work on this library!
Using a weight-normalized network in higher's inner loop raises the following error:
```
load from omniglot.npy.
DB: train (1200, 20, 1, 28, 28) test …