-
Thanks for releasing the code for the meta-rPPG paper! Very interesting paper!
I am wondering where the code for the **learning phase** described in Algorithm 1 (Training of Meta-Learner) is. I…
-
Hi Yujia, why did you learn lambda and alpha in log space in r2d2.py? In the implementation of the paper "META-LEARNING WITH DIFFERENTIABLE CLOSED-FORM SOLVERS", the authors seem to just learn lam…
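If it helps other readers: a common reason for parametrizing such scalars in log space is that `exp(log_lambda)` is always strictly positive, so the regularizer and the scale can never be pushed to invalid values during meta-training, and their optimization is better conditioned across orders of magnitude. A minimal sketch of that pattern, not the actual r2d2.py code (the names `log_lambda`/`log_alpha` and the toy ridge head are illustrative):

```python
import torch
import torch.nn as nn

class ToyRidgeHead(nn.Module):
    """Toy ridge-regression head with lambda and alpha learned in log space."""
    def __init__(self, feat_dim: int):
        super().__init__()
        # Store the logarithms; exp(.) keeps both scalars strictly positive
        # no matter where the outer-loop optimizer moves the raw parameters.
        self.log_lambda = nn.Parameter(torch.zeros(1))  # lambda starts at 1.0
        self.log_alpha = nn.Parameter(torch.zeros(1))   # alpha starts at 1.0
        self.feat_dim = feat_dim

    def forward(self, X, Y, X_query):
        lam = self.log_lambda.exp()
        alpha = self.log_alpha.exp()
        # Closed-form ridge solution W = (X^T X + lam * I)^(-1) X^T Y,
        # with a learned scale alpha applied to the query logits.
        A = X.t() @ X + lam * torch.eye(self.feat_dim, device=X.device)
        W = torch.linalg.solve(A, X.t() @ Y)
        return alpha * (X_query @ W)
```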
-
Hi authors, many thanks for the released code; it helps me better understand your excellent work. A question: the results reported in your paper on the MiniImagenet dataset for 1-shot 5-way using 4…
-
Hi,
Thanks for sharing your code. I wonder if you could also share your training curves (training loss, training acc, val loss, val acc), since mine look strange.
-
```
python ensembles/train.py --model.model_name=deep_robust10 --data.dataset=mini_imagenet --model.backbone=deep --ens.num_heads=10 --ens.relation_type=robust
!!!!!! Starting ephoch 0 !!!!!!
0%| …
```
-
Hi, it's a pleasure to read your code. However, I have some doubts, and I would be very grateful if you could answer them.
1. For a task i, why repeat the training K times, and doesn't …
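(In case it clarifies the question for others: repeating the adaptation K times per task is the usual inner-loop pattern in optimization-based meta-learning. A generic sketch of that idea, not this repository's code, with illustrative names:)

```python
import copy
import torch

def adapt_to_task(model, loss_fn, support_x, support_y, k_steps=5, inner_lr=0.01):
    """Run K gradient steps on one task's support set (generic inner loop)."""
    adapted = copy.deepcopy(model)            # leave the meta-parameters untouched
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(k_steps):                  # the "repeat K times" part
        opt.zero_grad()
        loss_fn(adapted(support_x), support_y).backward()
        opt.step()
    return adapted
```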
-
Thanks for your great work!
Could you give me some instructions on the Data Preparation step for the COCO dataset?
-
I am getting this error from the meta_dataset.py file:
```python
chosen_class_inds = random.sample(
all_class_inds, self.n_train + self.n_test)
```
I am training a single model on cub d…
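One thing that may be worth checking (purely a guess about this setup): `random.sample` raises a `ValueError` whenever the requested count exceeds the population, i.e. when `self.n_train + self.n_test` is larger than `len(all_class_inds)` for the split being loaded. A self-contained illustration with made-up numbers:

```python
import random

all_class_inds = list(range(50))   # e.g. classes actually found on disk
n_train, n_test = 100, 50          # hypothetical split sizes from a config

try:
    chosen_class_inds = random.sample(all_class_inds, n_train + n_test)
except ValueError as err:
    # "Sample larger than population or is negative": asked for 150 of 50 classes
    print(err)
```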
-
In fewshot_imprinted.py:
```python
# Reverse the last imprinting (Few shot setting only not Continual Learning setup yet)
model.reverse_imprinting()
```
To reuse the original weight for each iterat…
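For context, a rough sketch of what such a reverse step typically looks like when imprinting appends normalized class prototypes to a cosine classifier and the base weights are restored afterwards (this is an assumption about the code; the class and method bodies below are illustrative, only the `reverse_imprinting` name mirrors the call above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImprintedClassifier(nn.Module):
    """Cosine classifier whose weights can be imprinted and then reverted."""
    def __init__(self, feat_dim: int, n_base_classes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_base_classes, feat_dim))

    def imprint(self, novel_prototypes):
        # Remember the base weights, then append L2-normalized prototypes
        # as the rows for the new few-shot classes.
        self._backup = self.weight.data.clone()
        novel = F.normalize(novel_prototypes, dim=1)
        self.weight = nn.Parameter(torch.cat([self.weight.data, novel], dim=0))

    def reverse_imprinting(self):
        # Drop the imprinted rows and restore the original base weights,
        # so every few-shot iteration starts from the same classifier.
        self.weight = nn.Parameter(self._backup)

    def forward(self, feats):
        return F.normalize(feats, dim=1) @ F.normalize(self.weight, dim=1).t()
```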
-
Hi guys,
first of all, congrats on this great model! I do multilingual tweet classification and it performs stunningly well in the monolingual cases. I freeze the model and use a linear layer on top of the …
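For readers with the same setup, a minimal sketch of the frozen-encoder-plus-linear-head arrangement described above (the encoder is a placeholder for whatever pretrained model is being frozen; none of this is the repo's API):

```python
import torch
import torch.nn as nn

class FrozenEncoderClassifier(nn.Module):
    """Linear classification head on top of a frozen pretrained encoder."""
    def __init__(self, encoder: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # freeze: only the head is trained
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        with torch.no_grad():                # no gradients through the encoder
            feats = self.encoder(x)
        return self.head(feats)

# Only the head's parameters go to the optimizer, e.g.:
# optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```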