-
I would suggest exposing the tuners as `sklearn`-compatible tuning wrappers, e.g.,
`HyperactiveCV(sklearn_estimator, config)`,
or
`HyperactiveCV(sklearn_estimator, hyperopt_tuning_algo, conf…
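To make the idea concrete, here is a minimal sketch of what such a wrapper could look like. `HyperactiveCV`, its `param_grid` argument, and the `ConstantRegressor` used for the demo are all illustrative assumptions for this proposal, not the library's actual API; the real tuner would plug in Hyperactive's optimizers instead of the exhaustive loop shown.

```python
import itertools


class HyperactiveCV:
    """Hypothetical sklearn-style tuning wrapper (illustrative sketch only).

    Evaluates each parameter combination on a simple holdout split and
    keeps the best-scoring refitted estimator, exposing the familiar
    `fit`/`predict` plus `best_params_`/`best_estimator_` attributes.
    """

    def __init__(self, estimator, param_grid):
        self.estimator = estimator
        self.param_grid = param_grid

    def fit(self, X, y):
        keys = list(self.param_grid)
        best_score = float("-inf")
        # Simple holdout: first 75% for training, last 25% for scoring.
        split = max(1, int(len(X) * 0.75))
        X_tr, X_val = X[:split], X[split:]
        y_tr, y_val = y[:split], y[split:]
        for values in itertools.product(*(self.param_grid[k] for k in keys)):
            params = dict(zip(keys, values))
            candidate = self.estimator.__class__(**params)
            candidate.fit(X_tr, y_tr)
            score = candidate.score(X_val, y_val)
            if score > best_score:
                best_score = score
                self.best_params_ = params
                self.best_estimator_ = candidate
        self.best_score_ = best_score
        return self

    def predict(self, X):
        return self.best_estimator_.predict(X)


class ConstantRegressor:
    """Toy estimator for the demo: always predicts the constant `c`."""

    def __init__(self, c=0.0):
        self.c = c

    def fit(self, X, y):
        return self

    def predict(self, X):
        return [self.c] * len(X)

    def score(self, X, y):
        # Negative squared error, so higher is better.
        return -sum((yi - self.c) ** 2 for yi in y)


tuner = HyperactiveCV(ConstantRegressor(), {"c": [0.0, 1.0, 2.0]})
tuner.fit(list(range(8)), [1.0] * 8)
```

Because the wrapper mimics the estimator interface, it could also be dropped into sklearn pipelines or nested cross-validation, which is the main benefit of this design over a standalone tuning loop.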
-
Hello, I have followed `distributed_train.py` and finished training a Florence base-ft model. However, when I tried to use it for inference, I hit an error during the model-loading stage using the following c…
-
## Environment
python 3.11.9
cuda 11.8
torch 2.4.0+cu118
PyTorch information
-------------------
PyTorch version: 2.4.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM…
-
### Feature request
Adding generation configurations to the parameters that can be tuned in a `Trainer`.
### Motivation
When defining the Optuna hyper-parameter space, I would like to invest…
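A sketch of how generation settings could be mixed into an Optuna search space for `Trainer.hyperparameter_search`. The keys `generation_num_beams` and `generation_max_length` mirror fields of `Seq2SeqTrainingArguments`; whether the `Trainer` actually applies them per trial is exactly what this feature request asks for. The `_StubTrial` below is a hypothetical stand-in for `optuna.trial.Trial`, used only to keep the sketch self-contained.

```python
import random


def hp_space(trial):
    """Optuna-style search space mixing optimizer and generation settings."""
    return {
        "learning_rate": trial.suggest_float(
            "learning_rate", 1e-5, 1e-3, log=True),
        # Generation parameters the Trainer would need to honor per trial:
        "generation_num_beams": trial.suggest_int("generation_num_beams", 1, 8),
        "generation_max_length": trial.suggest_categorical(
            "generation_max_length", [64, 128, 256]),
    }


class _StubTrial:
    """Minimal stand-in for optuna.trial.Trial, for illustration only."""

    def suggest_float(self, name, low, high, log=False):
        return random.uniform(low, high)

    def suggest_int(self, name, low, high):
        return random.randint(low, high)

    def suggest_categorical(self, name, choices):
        return random.choice(choices)


space = hp_space(_StubTrial())
```

In a real run, one would pass the function directly, e.g. `trainer.hyperparameter_search(hp_space=hp_space, backend="optuna", direction="minimize")`, and the requested feature would forward the sampled generation keys into each trial's training arguments.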
-
Add multi-GPU support with `accelerate`. This requires wrapping the model as well as the data with the `accelerate` preparation call.
-
Personally, I find it hard to accurately place the start/end of lyrics even at 0.7x speed, so I would appreciate an option, possibly appearing after/during the timing process, which let …
-
### Feature request
Currently the Owl-vit models support inference and CLIP-style contrastive pre-training, but don't provide a way to train (or fine-tune) the detection part of the model. According …
-
Hey, I propose that some kind of log-level configuration logic be added to at least the `gosqs` part of the application.
There are two logs that are especially bothering me:
https://github.com/…
-
Above 2740 MHz, it seems that one cannot transmit on one PortaPack and receive on the other. One unit is H2R4+, and the other is H2R4+/r9.
I don't know yet if it's a transmit or receive issue, or s…
-
Hello dear Tongyi SpeechTeam
I am interested in controllable generation via instruction, and I want to fine-tune the model with my own version of the data. Based on this, my question is: are there …