-
python run_zerogen.py --alpha ${ALPHA} --beta ${BETA} --eta ${ETA} --k ${K} --condition_method add \
--task ${TASK} --decoding_len ${LENGTH} --alpha_scale --alpha_activasize ${…
-
Hyper-parameters in the [training script](https://github.com/sanyalsunny111/LLM-Inheritune/blob/26fff62c50f5809ac12a523ba4fc7e9f6e6444b5/lit-gpt/Training/train.py#L45) are inconsistent with the paper, inc…
-
Hey, I found out that the default latent-space shape for DMC Proprio is 1024, which is much larger than the shape of the observations. Can you explain why?
-
Hi ;)
For comparability reasons, it would be beneficial for the community to have insight into the full hyper-parameter setups.
I am especially interested in the LaViLa captioning config to use w…
-
I am trying to tune the hyperparameters with Optuna, so an objective for the optimization should be assigned. The AICc is currently considered, though advice about how to reasonably and conv…
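For reference, a minimal sketch of an AICc computation that could be returned from an Optuna objective and minimized. The least-squares form of the AIC and the names `rss`, `n`, and `k` are assumptions for illustration, not taken from the post's model:

```python
import math

def aicc(rss: float, n: int, k: int) -> float:
    """Corrected Akaike Information Criterion for a least-squares fit.

    rss: residual sum of squares, n: number of samples,
    k: number of fitted parameters. Assumes n > k + 1.
    """
    aic = n * math.log(rss / n) + 2 * k           # least-squares form of AIC
    correction = (2 * k * (k + 1)) / (n - k - 1)  # small-sample correction
    return aic + correction

# In an Optuna study this would be the value the objective returns, e.g.:
#   def objective(trial):
#       ...fit the model with trial-suggested hyperparameters...
#       return aicc(rss, n, k)
#   study = optuna.create_study(direction="minimize")
```

Since lower AICc is better, the study direction must be `"minimize"`.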
-
The newly added values on the description page appear to be incomplete for the Hyper.
When monitoring the values coming in on the actual MQTT feed, there are many fields contained there that …
-
Hi, thank you for sharing the code for your nice work.
Since CoMPILE will be used as a baseline method for comparison, I want to ask about the hyperparameters you used. Does `python train.py -d data` work…
-
I ran your code on my computer, but I didn't get results as good as in your figure (accuracy about 80%). Do I need to reset the hyper-parameters?
-
Hello everybody
I'm using cross-validation (CV) for a classification problem. I split my data into train and test sets, used the train set for the CV model, and the test set for the inference step.
My code for CV and ea…
-
Hello,
Thanks for sharing this code.
I was wondering if you could also share the hyper-parameter settings you used to obtain the training curve on the front page: 94% cluster accuracy a…