gallantlab / himalaya

Multiple-target linear models - CPU/GPU
https://gallantlab.github.io/himalaya
BSD 3-Clause "New" or "Revised" License

MultipleKernelRidgeCV deltas shared over all targets? #36

Open cnnmat opened 2 years ago

cnnmat commented 2 years ago

Is it somehow possible to have the deltas of the model optimized for the best fit across all targets when using multiple-kernel ridge with the scikit-learn API? That is, the deltas_ output (or another output) of MultipleKernelRidgeCV would be an array of shape (n_kernels, 1) holding the best log kernel weights shared by all targets, instead of an array of shape (n_kernels, n_targets) holding the best log kernel weights for each target.

I still have several targets but would be interested in the "shared over all targets" result.

I tried passing local_alpha=False in the solver_params of MultipleKernelRidgeCV, but I don't find any optimized alpha or delta in the fitted model (the best_alphas_ and deltas_ attributes are missing as well).
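
For reference, the setup described above would look roughly like this (a sketch; the use of precomputed kernels and the value of n_iter are illustrative, not the reporter's exact configuration):

```python
# Sketch of the configuration described above; n_iter and the choice of
# precomputed kernels are illustrative placeholders.
from himalaya.kernel_ridge import MultipleKernelRidgeCV

model = MultipleKernelRidgeCV(
    kernels="precomputed",
    solver="random_search",
    solver_params=dict(n_iter=20, local_alpha=False),
)
```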

Thank you in advance for your help :)

TomDLT commented 2 years ago

I agree optimizing a single delta for all targets would be useful. ~~It is currently not implemented, but I will add it to the todo list.~~

cnnmat commented 2 years ago

Great, thank you very much! :)

TomDLT commented 2 years ago

Actually, I just checked: using local_alpha=False does select the same alpha/delta for all targets, as found in model.best_alphas_ and model.deltas_. My previous answer assumed that it only shared the alphas and not the deltas, but it shares both. (The option is only available with solver="random_search", though.)
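
To make this concrete, here is a minimal end-to-end sketch with random data and two precomputed linear kernels (all names, shapes, and parameter values are illustrative):

```python
# Minimal sketch: shared alpha/deltas across targets via local_alpha=False.
import numpy as np
from himalaya.kernel_ridge import MultipleKernelRidgeCV

rng = np.random.RandomState(0)
n_samples, n_targets = 100, 20
X1 = rng.randn(n_samples, 30)  # first feature space
X2 = rng.randn(n_samples, 40)  # second feature space
Y = rng.randn(n_samples, n_targets)

# Stack precomputed linear kernels: shape (n_kernels, n_samples, n_samples).
Ks = np.stack([X1 @ X1.T, X2 @ X2.T])

model = MultipleKernelRidgeCV(
    kernels="precomputed",
    solver="random_search",  # local_alpha is only supported by this solver
    solver_params=dict(n_iter=20, local_alpha=False),
)
model.fit(Ks, Y)

# With local_alpha=False, every target should receive the same alpha and the
# same log kernel weights, so the columns of deltas_ should all be equal.
print(model.best_alphas_)
print(model.deltas_)
```

With the default local_alpha=True, each column of deltas_ is instead optimized independently for its target.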

> the best_alphas_ and deltas_ attributes are missing as well

Are you sure you fitted the model the second time?