-
### ⚠️ Please check that this feature request hasn't been suggested before.
- [X] I searched previous [Ideas in Discussions](https://github.com/OpenAccess-AI-Collective/axolotl/discussions/categories…
-
I am currently developing a model to classify a large transaction graph and want to test different graph architectures. For this purpose I want to perform hyperparameter optimization for the model traini…
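A minimal random-search sketch of this kind of setup, assuming hypothetical hyperparameters (`hidden_dim`, `num_layers`, `lr`) and a stand-in objective — none of these names come from the original post, and in practice `evaluate` would train the graph model and return its validation score.

```python
import math
import random

# Illustrative search space -- the parameter names and ranges are
# assumptions, not taken from the original post.
SPACE = {
    "hidden_dim": [64, 128, 256],
    "num_layers": [2, 3, 4],
    "lr": (1e-4, 1e-1),  # sampled log-uniformly
}

def sample_config(rng):
    """Draw one random configuration from SPACE."""
    lo, hi = SPACE["lr"]
    return {
        "hidden_dim": rng.choice(SPACE["hidden_dim"]),
        "num_layers": rng.choice(SPACE["num_layers"]),
        "lr": 10 ** rng.uniform(math.log10(lo), math.log10(hi)),
    }

def evaluate(config):
    """Stand-in for one training run; a real objective would train the
    graph model with `config` and return its validation score."""
    return -abs(config["lr"] - 1e-2) - 0.001 * config["num_layers"]

def random_search(n_trials=20, seed=0):
    """Keep the best-scoring configuration over n_trials random draws."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Random search is only the simplest baseline; the same `sample_config`/`evaluate` split carries over directly to smarter searchers such as Optuna's samplers.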
-
Hi, thank you so much for developing such a nice framework!
I'm new to model training and I'm trying to train the model on a dataset with about 6000 vocalizations, considering there are around 10 types…
-
**Is your feature request related to a problem? Please describe.**
Many DL models need extensive hyperparameter optimization in order to find the best-performing model. Since the `v0.1` version #142 …
-
When tuning hyperparameters for non time-series data, normally one would split the dataset into a training set, a validation set, and a test set. The validation set is then used to test which set of hyperpar…
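The split-then-select procedure above can be sketched in plain Python; the dataset, fractions, and `fit`/`score` callables here are illustrative assumptions, not part of the original question.

```python
import random

def split_dataset(data, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle, then split into train/val/test. Shuffling is safe here
    only because the data is NOT a time series (no temporal leakage)."""
    items = list(data)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

def select_hyperparams(candidates, train, val, fit, score):
    """Fit each candidate on the training set and keep the one scoring
    best on the validation set. The held-out test set should be touched
    only once, after this choice, for the final unbiased estimate."""
    return max(candidates, key=lambda hp: score(fit(train, hp), val))
```

For time-series data, the shuffle in `split_dataset` would have to be replaced by a chronological split (or rolling-origin validation) to avoid leaking future information into training.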
-
Hi @eleurent, thank you so much for the contribution. I need to know how you figured out the DQN hyperparameters in the highway run env. Did you use Optuna for optimizing the hyperparameter…
-
**What the problem is**:
In the current implementation of GASearchCV, I find it cumbersome to manually define a wide range of parameters and potential values for optimization without guidance on whi…
-
This issue tracks the progress of developing the LLM Hyperparameters Tuning API in Katib. The API aims to provide an easy-to-use interface for tuning the hyperparameters of large language models, leve…
-
I am trying to reproduce the results for SplitCIFAR-100 using the exact same hyperparameters mentioned in the code. I am getting an average accuracy of 59.42 and forgetting of 6.41.
In the paper, average …
-
Hello,
I'm working with a very large dataset consisting of 7.5 million rows and 18 columns, which represents customer purchase behavior. I initially used UMAP for dimensionality reduction and attem…