Closed: hartytp closed this issue 6 months ago
When we do this we should update the MLE fitter class, which currently disables all parameter rescaling in a horribly hacky way!
It would be nice to have a quick means of disabling rescaling globally for a model when debugging fitting issues.
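As a rough illustration of what such a global switch could look like (a hypothetical sketch only; `RESCALING_ENABLED` and `maybe_rescale` are illustrative names, not the current API):

```python
# Hypothetical sketch: a module-level switch that disables rescaling
# globally while debugging fitting issues. Names are illustrative.
RESCALING_ENABLED = True  # flip to False when debugging fits


def maybe_rescale(x, x_scale):
    """Divide the x-axis data by x_scale unless rescaling is disabled."""
    if not RESCALING_ENABLED:
        return list(x)
    return [value / x_scale for value in x]
```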
Issue: we currently decide whether a parameter can be rescaled by having its `scale_func` return `None`. We only rescale parameters if all parameters can be rescaled.

Proposal: whether or not we can rescale is a property of the model, so it would make more sense for `can_rescale` to be a property of the model rather than of its parameters.

Issue: we currently treat rescaling in `x` and `y` the same way: all parameters are either rescaled in both axes or no rescaling is done at all. This is not ideal. For example, if a model does not have a y-axis scale-factor parameter, we can't rescale it at all. Quantum logic operations usually don't need y-axis rescaling (populations are always between 0 and 1 already) but often do want x-axis rescaling (e.g. frequency in GHz). The current scheme relies on us having a y-axis scale parameter, like readout levels, to absorb the y-axis scale factor into. The current Molmer-Sorensen interaction model doesn't have readout levels, so it can't be rescaled at all.

Proposal: models have a `can_rescale` method which returns a tuple of `(can_rescale_x, can_rescale_y)`.
Issue: the whole way we treat rescaling at present is a bit convoluted (see e.g. the `get_scaled_model` stuff). It would be good to simplify this.

Proposal A: the simplest thing to do would be to implement rescaling inside `Fitter._fit`. That keeps all the rescaling local to the fitter and means we don't need to worry about changing model parameter properties.

The only downside I can see of this approach is that we would no longer automatically rescale the input data in the heuristics. That's probably not actually a problem: unlike the optimizers, which are very sensitive to numerics (e.g. when deciding appropriate step sizes to use), the heuristics are typically quite simple and just need to be accurate enough to get us into the right solution basin. That's a much lower bar. I suspect this is unlikely to be problematic in any real-world case. If it does fail in some odd corner case, it's pretty easy for the user to use `rescale_model_x` to do the rescaling manually. Given how much of a corner case this is, we should not accept a significant increase in code complexity to tolerate it. I suggest we start by doing this, but fall back to Proposal B below if we do find issues.
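Proposal A could look something like the following sketch (the `LineModel`, `unscale_parameters`, and the trivial optimizer are all hypothetical stand-ins, assumed only for illustration; the real `Fitter._fit` signature will differ):

```python
# Sketch of Proposal A: rescaling lives entirely inside the fitter, so
# model parameter properties never need to be mutated. Illustrative names.
class LineModel:
    """Toy model y = a * x: the slope scales as y_scale / x_scale."""

    def can_rescale(self):
        return True, True

    def unscale_parameters(self, params, x_scale, y_scale):
        """Map parameters fitted in scaled units back to user units."""
        return {"a": params["a"] * y_scale / x_scale}


def fit_with_local_rescaling(x, y, model, optimize):
    """Rescale data, optimize in scaled units, undo the scaling after."""
    can_x, can_y = model.can_rescale()
    x_scale = max(abs(v) for v in x) if can_x else 1.0
    y_scale = max(abs(v) for v in y) if can_y else 1.0
    x_scaled = [v / x_scale for v in x]
    y_scaled = [v / y_scale for v in y]

    # In this proposal, heuristics would see the *unscaled* data; only
    # the numerically sensitive optimizer sees the scaled data.
    scaled_params = optimize(x_scaled, y_scaled)
    return model.unscale_parameters(scaled_params, x_scale, y_scale)
```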
Proposal B: the current complexity largely comes from the fact that some containers have inner parameters: parameters which aren't exposed to the user but are used internally. Since this isn't properly supported, we have to have some extra flexibility to let the models rescale these. I propose we make inner parameters a first-class part of the toolkit. Then we can simply have something like a `set_rescaling_enabled` method which turns scaling on/off for the parameters, and scrap the whole `get_scaled_model` machinery.
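Proposal B might be sketched like this (hypothetical `Parameter`/`Model` classes and the `internal` flag are assumptions for illustration, not the existing toolkit classes):

```python
# Sketch of Proposal B: inner parameters are first-class, and a single
# set_rescaling_enabled method toggles rescaling for every parameter,
# replacing the get_scaled_model machinery. All names are illustrative.
class Parameter:
    def __init__(self, name, scale_func=None, internal=False):
        self.name = name
        self.scale_func = scale_func
        self.internal = internal  # inner parameter: hidden from the user
        # A parameter supports rescaling only if it has a scale_func.
        self.rescaling_enabled = scale_func is not None


class Model:
    def __init__(self, parameters):
        self.parameters = parameters

    def set_rescaling_enabled(self, enabled):
        """Turn rescaling on/off for all parameters that support it."""
        for param in self.parameters:
            param.rescaling_enabled = enabled and param.scale_func is not None
```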