-
Hi,
I want to use only the hinge feature in MAXENT, so I set MAXENT.hinge = TRUE and set the other four features to FALSE. However, the tuning results were unchanged. …
-
Thanks for your excellent work!
I encountered a bug when running `MODEL=facebook/opt-1.3b TASK=RTE EPOCH=5 MODE=random_masking LR=1e-2 MASKING_PROB=0.9999 LOCAL_HOST=0 SEED=0 bash run.sh`:
```
Traceba…
-
### Feature request
This request aims to introduce functionality to delete specific adapter layers integrated with PEFT (Parameter-Efficient Fine-Tuning) within the Hugging Face Transformers librar…
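The requested behavior could be sketched roughly as follows, under the assumption that a model exposes its adapter layers by qualified name. The mapping and names below are purely illustrative, not the actual PEFT or Transformers API:

```python
# Hypothetical sketch: a model represented as a mapping of
# layer name -> module, where adapter layers carry the adapter's
# name in their path (as LoRA layers typically do).
def delete_adapter_layers(layers, adapter_name):
    """Return a copy of `layers` with every layer whose qualified
    name contains `adapter_name` removed, leaving base layers intact."""
    return {name: mod for name, mod in layers.items()
            if adapter_name not in name}

# Toy model with two injected LoRA layers.
model_layers = {
    "encoder.0.attn": "base",
    "encoder.0.attn.lora_A.default": "adapter",
    "encoder.0.attn.lora_B.default": "adapter",
}
pruned = delete_adapter_layers(model_layers, "lora")
# pruned keeps only "encoder.0.attn"
```

A real implementation would also need to re-wire each wrapped base module back into its parent, which is where the library-level support requested here comes in.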
-
Hello, we did not pass the pre-trained_model_path parameter during fine-tuning, so the code (fine-tuning/run_classifier.py) initializes the model parameters randomly. We fine-tuned this model on the .t…
-
I want to tune the hyperparameters of the model so that it is best suited to my data. How do I do that?
-
Write functionality to optimize model parameters for a particular dataset, à la `scikit-learn`.
-
Perform hyperparameter tuning and benchmark the performance:
- We use grid search or random search to perform the tuning. (Note that gradient descent is more efficient but suboptimal; you should ask the inst…
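Grid search in this sense can be sketched with the standard library alone. The `score_fn` callback below is a hypothetical stand-in for training and validating a model with the given parameters:

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustively evaluate every combination in `param_grid` and
    return the best-scoring parameters.

    `score_fn` is an assumed evaluation callback: it receives a dict
    of parameter values and returns a validation score to maximize."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy usage: a known function stands in for a real validation score.
best, score = grid_search(
    {"lr": [0.01, 0.1, 1.0], "depth": [2, 4]},
    lambda p: -(p["lr"] - 0.1) ** 2 - (p["depth"] - 4) ** 2,
)
# best == {"lr": 0.1, "depth": 4}
```

Random search follows the same shape but samples combinations instead of enumerating them, which scales better when the grid is large.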
-
For fine-tuning GLIGEN*, around which step does it typically converge? This information would be helpful for my further research work.
Are the parameter settings consistent?
-
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
### Describe the workflow you want to enable
In the [GaussianProcessRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html), `alpha` sta…
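For context on what `alpha` controls: it is added to the diagonal of the training kernel matrix before the factorization, acting like extra observation noise (Tikhonov regularization). A minimal pure-Python illustration of that role, not the scikit-learn internals themselves:

```python
def regularize_kernel(K, alpha):
    """Return K + alpha * I, i.e. `alpha` added to the diagonal of the
    kernel matrix K. This mirrors the role of GaussianProcessRegressor's
    `alpha`: larger values assume more noise and yield a smoother fit."""
    n = len(K)
    return [[K[i][j] + (alpha if i == j else 0.0) for j in range(n)]
            for i in range(n)]

K = [[1.0, 0.5],
     [0.5, 1.0]]
# regularize_kernel(K, 0.1) -> [[1.1, 0.5], [0.5, 1.1]]
```

In scikit-learn, `alpha` may also be an array with one value per training sample, allowing heteroscedastic noise.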