The SB backend must run with float dtypes because the values of the oscillators are in [-1, 1]. During the tests carried out for #61, it appeared that some key PyTorch functions are not defined for `float16`. Thus, option 2 (see below) would be the best one, with `torch.float32` and `torch.float64` being the only two accepted dtypes.
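For context, a minimal sketch of the kind of check this implies (the helper name and error message are made up, not part of the library):

```python
import torch

# Only float32 and float64 are usable for the SB computation: float16 lacks
# support for some of the required PyTorch operations (see #61).
ACCEPTED_COMPUTATION_DTYPES = (torch.float32, torch.float64)

def validate_computation_dtype(dtype: torch.dtype) -> torch.dtype:
    """Hypothetical helper rejecting anything but float32/float64."""
    if dtype not in ACCEPTED_COMPUTATION_DTYPES:
        raise ValueError(
            f"SB computation requires torch.float32 or torch.float64, got {dtype}."
        )
    return dtype
```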
Currently, the oscillators in the SB optimizer have the same dtype as the `IsingCore` model, which itself inherits its dtype from the polynomial model defined by the user. Although it makes sense to create a polynomial model with an integer dtype (e.g. `torch.int8` or `torch.int32`) and to cast the SB results back to this integer dtype to allow a full-integer computation, it is counterproductive to use this very dtype for the SB optimization itself, because the oscillators' range of values is [-1, 1], which does not work with integer values.
Thus, it would be nice to allow the user to choose a dtype for the model and a dtype for the optimization.
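As a minimal illustration in plain PyTorch (all variable names here are made up) of keeping a float dtype for the oscillators while casting the results back to the model's dtype:

```python
import torch

model_dtype = torch.int32          # dtype the user chose for the polynomial model
computation_dtype = torch.float32  # float dtype used for the SB oscillators

# Oscillators must live in [-1, 1], hence the float dtype.
oscillators = torch.empty(8, dtype=computation_dtype).uniform_(-1.0, 1.0)

# Once SB has converged, the spins are the signs of the oscillators and can be
# cast back to the model's (integer) dtype for a full-integer evaluation.
spins = torch.sign(oscillators).to(model_dtype)
```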
Several options are available to remedy this problem:
Option 1: int to float mapping
The dtype provided in the `sb.optimize`, `sb.minimize` and `sb.maximize` functions is used for the model, and the float dtype used for the SB computation is derived from it (see the sketch below).
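A minimal sketch of what such a mapping could look like (the name and the exact int-to-float pairs are assumptions, not the library's actual mapping):

```python
import torch

# Hypothetical int-to-float mapping: the user-provided dtype is kept for the
# model, and the float dtype for the SB computation is derived from it.
COMPUTATION_DTYPE_MAPPING = {
    torch.int8: torch.float32,
    torch.int16: torch.float32,
    torch.int32: torch.float32,
    torch.int64: torch.float64,
    torch.float32: torch.float32,
    torch.float64: torch.float64,
}

def derive_computation_dtype(model_dtype: torch.dtype) -> torch.dtype:
    return COMPUTATION_DTYPE_MAPPING.get(model_dtype, torch.float32)
```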
Option 2: dtype is only for SB computation
The dtype passed is only used for the SB computation (a float dtype is required). If the model to optimize is created first, it can have any dtype, but the equivalent Ising model will have its own dtype. If the polynomial is directly provided to the `sb.maximize` or `sb.minimize` function, its dtype will also be the SB computation one (see the sketch below).
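A rough sketch of that behaviour, assuming a hypothetical `run_sb` entry point (not the library's actual code): the Ising coefficients are cast to the computation dtype, and the spins are cast back afterwards.

```python
import torch

def run_sb(ising_coefficients: torch.Tensor, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    """Hypothetical optimizer entry point: `dtype` only drives the SB computation."""
    J = ising_coefficients.to(dtype)                # Ising model cast to the computation dtype
    oscillators = torch.empty(J.shape[0], dtype=dtype).uniform_(-1.0, 1.0)
    # ... simulated bifurcation update loop omitted ...
    spins = torch.sign(oscillators)
    return spins.to(ising_coefficients.dtype)       # results cast back to the model's own dtype
```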
Option 3: use two parameters in functions
The optimization functions take two parameters, `model_dtype` and `computation_dtype`, which are respectively used to create the model and to run SB (see the sketch below).
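The resulting signature could look roughly like this (only `model_dtype` and `computation_dtype` come from this proposal; everything else in the sketch is assumed):

```python
import torch

def maximize(
    *polynomial,                                     # polynomial coefficients (illustrative)
    model_dtype: torch.dtype = torch.float32,        # dtype used to build the polynomial / Ising model
    computation_dtype: torch.dtype = torch.float32,  # float dtype used to run SB
):
    ...
```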