-
Hello,
With reference to my issue #336, it is clear that `fineSize` and `loadSize` always matter: training/testing results depend heavily on exactly which values you choose for them.
By ta…
-
Kudos on the great work so far. I saw the hyper-parameters for the mainstream models, but is there any resource for finding the optimal hyper-parameters for models like Qwen without extensive trial and error…
-
I tried to AD `aug_elbo` in the `NegBinomialLikelihood` example (unnecessary bits removed), purposely avoiding ParameterHandling.jl and using only `ForwardDiff.gradient`:
```julia
# …
```
-
When using optuna (on its own), it is possible to "store" the current study and later restart it or run more trials on it. Currently this is done by storing the study on an SQL "server" which you give optuna …
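For reference, a minimal sketch of that workflow, assuming a local SQLite file as the storage backend (any SQLAlchemy-compatible URL, e.g. a real SQL server, works the same way; the study name and file path are made up):

```python
import optuna

# Toy objective: minimize (x - 2)^2 over one float hyperparameter.
def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2

# Storing the study in an SQL backend makes it resumable: rerunning this
# script reopens the same study and appends trials instead of starting over.
study = optuna.create_study(
    study_name="resumable_study",         # hypothetical name
    storage="sqlite:///optuna_study.db",  # hypothetical path; any SQL URL works
    load_if_exists=True,                  # reopen rather than fail if it exists
)
study.optimize(objective, n_trials=20)
print(study.best_params)
```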
-
### What?
- Let's try hyperparameter tuning with our team's current SOTA, Swin-L! (a config sketch follows the list below)
- fold 1 default: LB Score 0.8189
- max_epoch = 80
- optimizer = AdamW
- loss = CrossEntropyLoss
- lr_config = CosineRestart
- seed = …
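A minimal sketch of how the defaults above might look in an MMSegmentation/MMCV-style (1.x) config; every value not in the list (learning rate, weight decay, restart periods) is an assumed placeholder:

```python
# Assumed MMCV 1.x config fragment mirroring the listed defaults.
runner = dict(type='EpochBasedRunner', max_epochs=80)  # max_epoch = 80

optimizer = dict(type='AdamW', lr=6e-5, weight_decay=0.01)  # lr/wd assumed

# loss = CrossEntropyLoss is usually set inside the model's decode head.
model = dict(
    decode_head=dict(
        loss_decode=dict(type='CrossEntropyLoss', use_sigmoid=False,
                         loss_weight=1.0),
    ),
)

# lr_config = CosineRestart; the restart schedule below is an assumption.
lr_config = dict(
    policy='CosineRestart',
    periods=[40, 40],            # assumed restarts summing to max_epochs
    restart_weights=[1.0, 0.5],
    min_lr=1e-6,
)
```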
-
Hi there,
I am looking for the code related to hyperparameter tuning for the DeepSurv method in the following example, but I am not able to find it.
https://github.com/havakv/pycox/blob/master/…
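In case it helps, a minimal sketch of what such tuning could look like, assuming the usual setup from the pycox notebooks (`CoxPH` as the DeepSurv model, `x_train`, `y_train = (durations, events)`, and a validation split `x_val`, `y_val`); the grid values are illustrative only:

```python
import torchtuples as tt
from pycox.models import CoxPH  # CoxPH is pycox's DeepSurv implementation
from pycox.evaluation import EvalSurv

def fit_and_score(lr, dropout):
    # Build the net and model as in the pycox CoxPH (DeepSurv) notebook.
    net = tt.practical.MLPVanilla(x_train.shape[1], [32, 32], out_features=1,
                                  batch_norm=True, dropout=dropout)
    model = CoxPH(net, tt.optim.Adam)
    model.optimizer.set_lr(lr)
    model.fit(x_train, y_train, batch_size=256, epochs=100,
              callbacks=[tt.callbacks.EarlyStopping()],
              val_data=(x_val, y_val), verbose=False)
    # Validation concordance as the tuning criterion.
    _ = model.compute_baseline_hazards()
    surv = model.predict_surv_df(x_val)
    ev = EvalSurv(surv, y_val[0], y_val[1], censor_surv='km')
    return ev.concordance_td()

# Tiny illustrative grid over learning rate and dropout.
results = {(lr, dr): fit_and_score(lr, dr)
           for lr in (1e-3, 1e-2) for dr in (0.1, 0.4)}
best = max(results, key=results.get)
print("best (lr, dropout):", best, "c-index:", results[best])
```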
-
#### Description
See #133, which covers manual hyperparameter tuning by the user. This issue directly extends #133, since FeatureEngineerer and Intents may both change the pipeline (number of steps, number of columns). Consequ…
-
### Feature request
Microsoft has introduced their [microsoft/LongRoPE](https://github.com/microsoft/LongRoPE) implementation. Unlike plug-and-play solutions, LongRoPE requires hyperparameter tunin…
-
/kind feature
**Why you need this feature:**
Kubeflow currently doesn't have a unified metadata/artifact management story beyond what's supported in KFP. For example, the concept of an "ML experime…