-
When running a solver from the CLI with a specific set of hyperparameters (say, a learning rate or the l1_ratio), the current BenchOpt CLI syntax is a bit heavy-handed:
```
benchopt run -s MySo…
```
-
Hello,
I was able to run your code successfully. Now my aim is to use the OFA loss for a custom dataset, so how should I approach it? Should I train the teacher from scratch or shoul…
-
Reported by a student of mine:
"One problem we encountered with OpenML is that we had to delete every run for the spectrometer dataset while tuning the hyperparameters for TabuSearch. OpenML doesn’…
-
Hi,
It seems like the current [retrain.py](https://github.com/melloddy/SparseChem/blob/master/examples/chembl/retrain.py) code is not compatible with hyperparameters.json specifications we have fr…
-
Trying the example 9.9 vs 9.11 query on an M1 Max 64GB, Python 3.11.2. I haven't run the original entropix logo, but the model output doesn't seem right. Only "I" at the beginning gets printed as …
-
Hi there,
I am trying to replicate my own Funsearch experiments on a much smaller scale than that presented in the original paper, and am trying to tune the hyperparameters.
As detailed…
-
Hello, I'm an undergraduate trying to run the code from your paper "Attentional Encoder Network for Targeted Sentiment Classification". Thank you so much for your work, but I am having trouble getting…
-
Hi, could I know your hyperparameters when training with DPO:
batch size, beta, learning rate?
I train on 8 A100s with batch size per device = 16 (as you said in the paper, bz=128), but it is out of me…
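For what it's worth, the arithmetic above is consistent: 8 devices × 16 per device = 128. A minimal sketch (hypothetical numbers, not from the paper) of how a smaller per-device batch combined with gradient accumulation can reach the same effective batch size when memory is tight:

```python
# Hypothetical sketch: effective batch size = devices * per_device * grad_accum.
# With 8 GPUs and per-device batch 16, the effective batch is 8 * 16 = 128.
# If that runs out of memory, a smaller per-device batch plus gradient
# accumulation reaches the same effective batch size.
devices = 8        # number of A100 GPUs
per_device = 4     # reduced per-device batch (assumption, smaller than 16)
grad_accum = 4     # gradient accumulation steps (assumption)

effective = devices * per_device * grad_accum
print(effective)  # → 128
```

The specific split between per-device batch and accumulation steps is a judgment call; only the product needs to match the paper's bz=128.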
-
At the moment we've got a class [TunableMethod](https://github.com/pints-team/pints/blob/master/pints/_core.py#L326-L365) that samplers, optimisers, and anything else can implement, which provides tw…
-
# 🐛 Bug
I found some unexpected interactions between `ard_num_dims` and the shapes of priors for kernels -- a few settings where, if I sample from a hyperparameter prior, I don't get a tensor the same …
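A minimal sketch of the kind of shape mismatch being described, using plain `torch.distributions` rather than GPyTorch itself (so this does not reproduce GPyTorch's exact behaviour, only the underlying broadcasting issue):

```python
import torch

# Illustration only (plain torch, not GPyTorch): with ard_num_dims = d,
# an ARD kernel stores a lengthscale of shape (1, d), but a prior built
# with scalar parameters samples tensors of shape () -- a mismatch.
d = 3
lengthscale = torch.ones(1, d)      # shape (1, 3), as an ARD kernel would store

scalar_prior = torch.distributions.Gamma(3.0, 6.0)
print(scalar_prior.sample().shape)  # torch.Size([]) -- not (1, 3)

# Giving the prior a batch shape matching the hyperparameter
# makes its samples come out in the expected shape:
ard_prior = torch.distributions.Gamma(3.0 * torch.ones(1, d), 6.0)
print(ard_prior.sample().shape)     # torch.Size([1, 3])
```

Whether GPyTorch should broadcast the scalar-prior case automatically, or require the prior's batch shape to match the hyperparameter, is presumably the design question behind the bug.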