-
Hi, following the paper and the code I got everything running, but I have a question about the median. The median is produced in both the collect and test stages: it guides fine-tuning to iterate on the 50% quantile, and it serves as the point forecast during test-time inference. The part of the code that generates the median is:
true = np.array(true)[96-pred_len:]
gen_samples = np.array(pred…
-
test.py run result:
model load successfully
Adaptive adjacency matrix generation is skipped as 'nodevec1' or 'nodevec2' is not initialized.
Evaluate best model on test data for horizon 1, Test MAE:…
-
Thank you for your efforts, but I have a question about the MAE code.
https://github.com/lucidrains/vit-pytorch/blob/dc57c75478c98241fd232a64a7bb4c23c5861730/vit_pytorch/mae.py#L91
MSE loss was ca…
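At the linked line, the reconstruction loss is an MSE computed only over the masked patches. A minimal NumPy sketch of that idea (the shapes and names are illustrative, not the library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
num_patches, patch_dim, num_masked = 16, 32, 12

patches = rng.normal(size=(num_patches, patch_dim))  # ground-truth patches
pred = rng.normal(size=(num_masked, patch_dim))      # decoder predictions for masked positions
masked_idx = rng.choice(num_patches, num_masked, replace=False)

# MSE restricted to the masked patches, as in MAE's reconstruction loss;
# unmasked patches contribute nothing to the loss.
recon_loss = np.mean((pred - patches[masked_idx]) ** 2)
```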
-
Great work!
However, I also wonder whether the reconstruction loss (MSE loss) in MAE is missing from the code; I couldn't find it.
Does MDT compute the noising prediction MSE loss twice in training pr…
-
Running `eurosat_finetune`, I get the error:
```
model = models_vit_tensor.__dict__[args.model](drop_path_rate=args.drop_path,
KeyError: 'mae_vit_base_patch8_128'
```
Adding `print(list(models…
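A quick way to debug this kind of `KeyError` is to list what the module actually registers before indexing into it. A hypothetical sketch with a stand-in registry (the real lookup is `models_vit_tensor.__dict__[args.model]`):

```python
# Stand-in factory and registry mirroring models_vit_tensor.__dict__;
# the real module exposes model factory functions keyed by name.
def vit_base_patch8_128():
    return "vit_base_patch8_128"  # placeholder, not the real model

registry = {"vit_base_patch8_128": vit_base_patch8_128}

name = "mae_vit_base_patch8_128"  # the name from the traceback
if name not in registry:
    # Printing the registered names often reveals a naming mismatch,
    # e.g. a pretraining "mae_" prefix that the fine-tuning models lack.
    print(sorted(k for k, v in registry.items() if callable(v)))
```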
-
I would like to ask how the MAE metric is calculated; I did not see the relevant calculation in the code you released.
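In case it helps, MAE is just the mean of the absolute errors between predictions and ground truth; a minimal NumPy sketch (the `mae` helper is illustrative, not the repo's code):

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error: average of |pred - true|."""
    pred, true = np.asarray(pred), np.asarray(true)
    return np.mean(np.abs(pred - true))

print(mae([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # prints 0.5
```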
-
With my own dataset, the training loss stays around 4.7; although the final MAE drops to 1.2, that is still higher than EFFICIENTPHYS. Is this normal?
-
The MOFA Bioc package (see https://microbiome.github.io/OMA/multi-assay-analyses.html) requires a somewhat multi-faceted process for MAE data -> assess whether we could support a PR or wrapper to enable MAE …
-
Maybe with TrainingTimeoutCallback()