-
Using the full posterior distribution, with the hyperparameters treated as unknown variables, is known to give better results in Bayesian optimization (see https://arxiv.org/pdf/1206.2944.pdf).
A user could …
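A minimal numpy-only sketch of the idea above: instead of optimising a single point estimate of the GP length-scale, weight several candidate values by their (unnormalised) posterior and average the acquisition function over them. All names and values here are illustrative, not taken from the linked paper's code.

```python
import numpy as np

def rbf(X1, X2, ls):
    # Squared-exponential kernel for 1-D inputs.
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, ls, noise=1e-4):
    # Standard GP posterior mean/variance at test points Xs.
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ls)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def log_marginal(X, y, ls, noise=1e-4):
    # Log marginal likelihood of the data under length-scale ls.
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet)

X = np.array([0.1, 0.4, 0.9])          # observed inputs (toy data)
y = np.sin(3.0 * X)                    # observed values
Xs = np.linspace(0.0, 1.0, 101)        # candidate points

ls_grid = np.array([0.05, 0.1, 0.2, 0.4, 0.8])   # candidate length-scales
logp = np.array([log_marginal(X, y, l) for l in ls_grid])
w = np.exp(logp - logp.max())
w /= w.sum()                           # approximate posterior weights

acq = np.zeros_like(Xs)
for l, wi in zip(ls_grid, w):
    mu, var = gp_posterior(X, y, Xs, l)
    acq += wi * (mu + 2.0 * np.sqrt(var))   # UCB marginalised over length-scales
x_next = Xs[np.argmax(acq)]            # next point to evaluate
```

The grid over length-scales stands in for the MCMC samples used in the paper; the structure of the loop (average the acquisition over hyperparameter samples) is the same.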
-
Hi,
Thank you for sharing your work! I have a few questions regarding the grid search in the VTAB-1k benchmark and would greatly appreciate it if you could provide more details:
1. Did you use a…
-
Currently, it is not possible to update the estimator hyperparameters with the hyperparameters passed to TrainingStep if a Placeholder is used as input. The merging of hyperparameters can only be done…
-
Hi, sorry if this is a trivial question: how can I fix certain hyperparameters before the optimization? For example, my prior mean function is constant with a certain value, which should be fixed…
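One common pattern, sketched here with plain numpy rather than any particular GP library: hold the constant prior mean at its known value and optimise only the remaining hyperparameters (here, just the kernel length-scale) against the marginal likelihood. All names are illustrative.

```python
import numpy as np

def neg_log_ml(X, y, ls, m0, noise=1e-3):
    # Negative log marginal likelihood with the prior mean FIXED at m0:
    # the residual y - m0 is modelled, and m0 is never optimised.
    r = y - m0
    d = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d / ls**2) + noise * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (r @ np.linalg.solve(K, r) + logdet)

X = np.linspace(0.0, 1.0, 15)
m0 = 0.5                               # known, fixed prior mean value
y = m0 + 0.3 * np.sin(6.0 * X)         # toy observations around that mean

# Optimise only the length-scale (a simple log-spaced grid search here).
ls_grid = np.exp(np.linspace(np.log(0.01), np.log(1.0), 40))
best_ls = min(ls_grid, key=lambda l: neg_log_ml(X, y, l, m0))
```

In most GP libraries the same effect is achieved by marking the parameter as fixed/non-trainable so the optimiser skips it; the key point is that the fixed value enters the likelihood but not the search space.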
-
Is it possible to optimize the hyperparameters in MATLAB?
Or how can I just calculate them before the regression?
-
The hyperparameter settings (batch size and learning rate) in the paper seem inconsistent with the code. Which setting reproduces the performance (80.6 acc) reported in the paper?
-
I don't understand why we need to use the llm_attention weights as the weights for the CLIP attention. I tried setting vis_attn to all ones, but the difference is minimal (maybe need to adjus…
-
Hello,
I believe this is the GitHub repo for the paper "Benchmarks for Deep Off-Policy Evaluation".
Do you have any plans to release the **hyperparameters & setups** used for baselines results?
…
-
Currently, the `expand.grid` method to inject hyperparameters into a tune-enabled base learner does not accommodate multiple hidden layers. For example, when setting hyperparameters, the `hidden_uni…
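One workaround in spirit, sketched in Python with `itertools.product` (the grid and parameter names are hypothetical): treat each layer configuration as a single tuple-valued setting, so one grid cell carries the whole architecture rather than one scalar per column.

```python
from itertools import product

# Hypothetical search grid: hidden_layers is tuple-valued, so the grid
# expands over whole architectures instead of individual layer sizes.
grid = {
    "hidden_layers": [(64,), (64, 32), (128, 64, 32)],
    "learning_rate": [1e-3, 1e-2],
}

# Cartesian product over the grid values, one dict per configuration.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
# 3 architectures x 2 learning rates = 6 configurations
```

The same tuple-as-one-value trick works in most grid-search frontends that otherwise flatten everything to scalar columns.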
-
Thank you very much for open-sourcing this framework, which I think greatly facilitates the development of knowledge tracing. I would like to ask whether it is possible to publish the optimal hyperparamete…