-
Hi
Thanks for your efforts! May I ask what the final hyperparameters were for the model and for training? Since I ran with the default settings, the performance is less exciting. And can I know when the pretr…
-
Hi, I would like to know whether there could be some potential improvements from tuning these args a bit: https://huggingface.co/Maple728/TimeMoE-50M/blob/main/modeling_time_moe.py#L54
I don't know if in the ge…
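In case it helps, here is a minimal sketch of how those config-level args could be inspected and overridden when loading the checkpoint with the standard `transformers` API; the specific attribute to change is left as a placeholder, since the real names live in the linked modeling/config file:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Load the remote config so the attributes defined around modeling_time_moe.py#L54
# can be inspected and overridden before instantiating the model.
config = AutoConfig.from_pretrained("Maple728/TimeMoE-50M", trust_remote_code=True)
print(config)  # shows the current values of the tunable args

# Hypothetical override; check the printed config for the real attribute names
# before changing anything.
# config.some_arg = new_value

model = AutoModelForCausalLM.from_pretrained(
    "Maple728/TimeMoE-50M",
    config=config,
    trust_remote_code=True,
)
```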
-
Although Optuna is used for hyperparameter tuning in the project, the tuning results may vary across different environments. How can the reproducibility of hyperparameter tuning be ensured?
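One common way to make an Optuna study reproducible is to fix the sampler seed and pin the environment (library versions, hardware, number of workers). A minimal sketch, with a toy objective standing in for the project's real training/evaluation loop:

```python
import optuna

def objective(trial):
    # Toy placeholder objective; replace with the project's actual training/eval loop.
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2

# Fixing the sampler seed makes the sequence of suggested trials deterministic for a
# single-worker study; results can still drift across library versions, hardware, or
# parallel workers, so pin those as well when comparing environments.
sampler = optuna.samplers.TPESampler(seed=42)
study = optuna.create_study(direction="minimize", sampler=sampler)
study.optimize(objective, n_trials=50)
print(study.best_params)
```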
-
Thank you for this wonderful repo!
I have a quick question: are the hyperparameters listed in the README optimized for both finetuning on a specific task and finetuning to support a new language (…
-
Dear all,
Reading the Surreal-GAN paper on aging (Yang et al., Nat Med, 2024), I am now trying to apply Surreal-GAN to our data.
As a hyperparameter, the candidates for lambda were set to 0.1, 0.2, 0.4, 0.…
-
In losses.py:
total_loss = (self.alpha1 * smooth_l1_l + self.alpha2 * perc_l +
              self.alpha3 * hist_l + self.alpha5 * psnr_l +
              self.alpha6 * color_l + se…
-
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-4.18.0-513.11.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- Safetensors version: 0.4.5
- Accelerate vers…
-
Perform hyperparameter tuning and benchmark the performance:
- We use grid or random search to perform the tuning (note that gradient descent is more efficient but suboptimal; you should ask the inst…); a minimal sketch of both search strategies follows below.
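A minimal, self-contained sketch of grid and random search using scikit-learn; the dataset, estimator, and the `C` parameter range are placeholders rather than the actual model and search space used in the benchmark:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# Placeholder data and estimator; swap in the actual model and dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
estimator = LogisticRegression(max_iter=1000)

# Grid search: exhaustively evaluates every combination in the grid.
grid = GridSearchCV(estimator, {"C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print("grid search best:", grid.best_params_, grid.best_score_)

# Random search: samples a fixed budget of configurations from distributions.
rand = RandomizedSearchCV(
    estimator,
    {"C": loguniform(1e-3, 1e2)},
    n_iter=20,
    cv=5,
    random_state=0,
)
rand.fit(X, y)
print("random search best:", rand.best_params_, rand.best_score_)
```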
-
Hyperparameter tuning requires that tunable parameters be identified for each step and assigned functions that generate suitable grid ranges. The current implementation provides `hom_degree()` and `ma…
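As an illustration of the pattern described above, a minimal sketch of per-parameter grid-range generators; `hom_degree` is the only name taken from the text, and its returned range, the registry, and `build_grid` are illustrative assumptions:

```python
from typing import Callable, Dict, List

def hom_degree() -> List[int]:
    """Candidate values for the hom_degree parameter (assumed range)."""
    return [1, 2, 3, 4]

# Registry mapping each tunable parameter to the function that generates its grid range.
GRID_GENERATORS: Dict[str, Callable[[], list]] = {
    "hom_degree": hom_degree,
}

def build_grid() -> Dict[str, list]:
    """Materialize the full search grid from the registered generators."""
    return {name: gen() for name, gen in GRID_GENERATORS.items()}

if __name__ == "__main__":
    print(build_grid())  # {'hom_degree': [1, 2, 3, 4]}
```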