-
My reproduction of the results at location 9 of the NQ dataset from the LongLLMLingua paper, using the prompt compressor, shows a large discrepancy from the original results. My hyperparameters are …
-
There should be a main file that runs the walk-generation and embedding-training steps of the pipeline without the clutter involved in testing the embeddings.
This main would take a configuratio…
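A minimal sketch of what such an entry point could look like, assuming a JSON config file and hypothetical `generate_walks` / `train_embeddings` stage functions (stand-ins here; the real project would import its own implementations):

```python
import json

def generate_walks(config):
    """Stub: produce one placeholder walk per requested walk."""
    return [f"walk-{i}" for i in range(config["num_walks"])]

def train_embeddings(walks, config):
    """Stub: pretend to train and report basic run statistics."""
    return {"embedding_dim": config["embedding_dim"], "num_walks": len(walks)}

def run_pipeline(config_path):
    """Load a JSON config, then run walk generation followed by embedding training."""
    with open(config_path) as f:
        config = json.load(f)
    walks = generate_walks(config)
    return train_embeddings(walks, config)
```

Callers would then only need `run_pipeline("pipeline_config.json")`, keeping the embedding-evaluation code entirely out of this file.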
-
Hi,
I would like to reproduce the TinyImageNet results, but I do not see the best hyperparameters for TinyImageNet. Could you please add them?
Thanks in advance,
-
Currently, every optimizer comes with a `config` specific to that optimizer that manages its hyperparameters.
This is done for the following reasons:
* A lot of hyp…
-
Awesome work!
Could you provide more detail on the required computational budget (e.g., the type and number of GPUs, and the training time) for each benchmark?
Best,
Wonkwang.
-
Hello! This is great work, and thanks for sharing the code!
I want to reproduce the BERT-EMD4 result, but there are so many hyperparameters that it is difficult to reproduce the experimental resul…
-
/home/anaconda3/envs/cogltx/lib/python3.7/site-packages/pytorch_lightning/utilities/warnings.py:18: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck.…
-
I am trying to use Tune with LightGBMTrainer to find the best hyperparameters for a test model, but when the experiment finishes I cannot find any checkpoint in the experiment directory.
Ray 2.10.0 on Mac…
-
Your work is very effective, but I have some questions about the baseline approach. I tried different hyperparameters to adjust supervised contrastive learning or unsupervised contrastive lea…
-
**Describe the use case example you want to see**
A notebook example describing how to tune hyperparameters while training with SM Training Compiler on TensorFlow 2.9. SM Training Compiler recent…