hep07 opened this issue 1 year ago
Hi @hep07,
sorry for the super late response.
I'm not sure I understand what you're trying to achieve with this code sample. After a quick look, I think you're calculating some version of the gradient (`dfit_bwd`) and then differentiating it with `torch.autograd.grad`?
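For reference, that double-differentiation pattern looks roughly like this in plain PyTorch (a generic sketch, not the code from the issue):

```python
import torch

x = torch.randn(5, requires_grad=True)
loss = (x ** 2).sum()

# First-order gradient; create_graph=True keeps the autograd graph
# so the gradient can itself be differentiated again.
(grad,) = torch.autograd.grad(loss, x, create_graph=True)

# Differentiate a scalar function of that gradient (here its squared norm).
(grad2,) = torch.autograd.grad(grad.pow(2).sum(), x)
print(grad2)  # equals 8 * x, since grad = 2x and d/dx sum(4x^2) = 8x
```

The key detail is `create_graph=True` on the first call; without it, the second `torch.autograd.grad` call has no graph to differentiate through.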
Are you also having an issue with the `StochasticNystromCompReg` objective for hyperparameter optimization? Would you have a short reproducing code sample for that problem?
Giacomo
Hi, I am trying to use a stochastic objective function in hopt to do gradient-based hyperparameter optimization. I tried running it, and the first iteration takes forever for some reason. My falkon solver now runs without problems. I took a look at the code and wrote a small replication script based on how `stoch_new_compreg.py` is implemented. Did I do anything wrong in the following script?
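That replication script is not reproduced here; as a reference point, a minimal self-contained sketch of the overall setup might look like the following. This is plain PyTorch: `ToyObjective`, its constructor, and its loss are hypothetical placeholders standing in for `StochasticNystromCompReg` (implemented in `stoch_new_compreg.py`), not the actual Falkon API.

```python
import torch

# Hypothetical stand-in objective: a torch.nn.Module whose forward
# returns a scalar loss, with the hyperparameters (Nystrom centers and
# a log-penalty) registered as trainable parameters.
class ToyObjective(torch.nn.Module):
    def __init__(self, centers_init: torch.Tensor, penalty_init: float):
        super().__init__()
        self.centers = torch.nn.Parameter(centers_init)
        self.log_penalty = torch.nn.Parameter(torch.tensor(penalty_init).log())

    def forward(self, X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
        penalty = self.log_penalty.exp()
        # Dummy data-fit term (distance of each point to its closest
        # center), plus a penalty term; purely illustrative.
        preds = -torch.cdist(X, self.centers).min(dim=1).values
        fit = (preds.unsqueeze(1) - Y).pow(2).mean()
        return fit + penalty * self.centers.pow(2).sum()

X, Y = torch.randn(200, 3), torch.randn(200, 1)
model = ToyObjective(centers_init=torch.randn(10, 3), penalty_init=1e-3)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for epoch in range(100):
    opt.zero_grad()
    loss = model(X, Y)   # objective value for the current hyperparameters
    loss.backward()      # gradients w.r.t. centers and log-penalty
    opt.step()
```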
I am also wondering whether, if we implement the gradient computation this way, we would be unable to use multiple GPUs in the backward pass. Am I right?
Thanks!