bqth29 / simulated-bifurcation-algorithm

Python CPU/GPU implementation of the Simulated Bifurcation (SB) algorithm to solve quadratic optimization problems (QUBO, Ising, TSP, optimal asset allocations for a portfolio, etc.).
MIT License

SB Optimizer computation dtype v. Model dtype #41

Open bqth29 opened 1 year ago

bqth29 commented 1 year ago

Currently, the oscillators in the SB optimizer have the same dtype as the IsingCore model, which itself inherits its dtype from the polynomial model defined by the user. Although it makes sense to create a polynomial model with an integer dtype (int32, int64, ...) and to cast the SB results to this integer dtype to allow full-integer computation, it is counter-productive to use this very dtype for the SB optimization itself, because the oscillators' range of values is [-1, 1], which does not work with integer values.
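
As a quick illustration of the problem (plain PyTorch, not the library's internals): oscillator values in [-1, 1] collapse when stored in an integer tensor.

```python
import torch

# With a float dtype, oscillator positions in [-1, 1] evolve smoothly.
x_float = torch.tensor([-0.8, -0.3, 0.4, 0.9], dtype=torch.float32)
print(x_float * 0.5)  # tensor([-0.4000, -0.1500,  0.2000,  0.4500])

# With an integer dtype, every value strictly between -1 and 1 is
# truncated to 0, so the continuous bifurcation dynamics cannot be simulated.
x_int = x_float.to(torch.int32)
print(x_int)  # tensor([0, 0, 0, 0], dtype=torch.int32) -- all information lost
```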

Thus, it would be nice to allow the user to choose one dtype for the model and another for the optimization.

Several options are available to remedy this problem:

Option 1: int to float mapping

The dtype provided to the sb.optimize, sb.minimize and sb.maximize functions is used for the model, and the SB computation dtype is derived from it:
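
A minimal sketch of what such a derivation could look like (the mapping below is an assumption for illustration, not the library's actual behavior): each model dtype is paired with a float dtype of sufficient precision for the SB computation.

```python
import torch

# Hypothetical mapping from the user-provided model dtype to the
# float dtype the SB optimizer would run with internally.
_COMPUTATION_DTYPE = {
    torch.int8: torch.float32,
    torch.int16: torch.float32,
    torch.int32: torch.float32,
    torch.int64: torch.float64,
    torch.float32: torch.float32,
    torch.float64: torch.float64,
}

def derive_computation_dtype(model_dtype: torch.dtype) -> torch.dtype:
    """Return the float dtype to use for SB given the model's dtype."""
    try:
        return _COMPUTATION_DTYPE[model_dtype]
    except KeyError:
        raise ValueError(f"Unsupported model dtype: {model_dtype}")
```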

Option 2: dtype is only for SB computation

The dtype passed is only used for the SB computation (a float dtype is required). If the model to optimize is created first, it can have any dtype, but the equivalent Ising model will have its own dtype. If the polynomial is provided directly to the sb.maximize or sb.minimize function, its dtype will also be the SB computation dtype.
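
From the caller's side, option 2 could look roughly like the sketch below (the wrapper and its dtype handling are assumptions; only the idea of a float-only computation dtype comes from the issue): SB runs in the passed float dtype, and the results are cast back to the model's dtype afterwards.

```python
import torch

def optimize_with_computation_dtype(matrix, computation_dtype=torch.float32):
    """Hypothetical wrapper: run SB in a float dtype, then cast the
    results back to the dtype of the user's model."""
    model_dtype = matrix.dtype          # whatever dtype the user built the model with
    q = matrix.to(computation_dtype)    # SB always runs on float tensors
    # ... the SB oscillator dynamics would run here on q ...
    spins = torch.sign(torch.randn(q.shape[0], dtype=computation_dtype))  # placeholder result
    return spins.to(model_dtype)        # results come back in the model dtype
```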

Option 3: use two parameters in functions

The optimization functions take two parameters, model_dtype and computation_dtype, which are used to create the model and to run SB, respectively.
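
Option 3 could be expressed with a signature along these lines (purely illustrative; the parameter names follow the issue, everything else is assumed):

```python
import torch

def minimize(
    matrix: torch.Tensor,
    model_dtype: torch.dtype = torch.float32,        # dtype of the polynomial/Ising model
    computation_dtype: torch.dtype = torch.float32,  # float dtype the SB backend runs with
):
    """Hypothetical signature separating the model dtype from the SB dtype."""
    ...
```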

bqth29 commented 8 months ago

The SB backend must run with float dtypes because the values of the oscillators are in [-1, 1]. During tests carried out for #61, it appeared that some key PyTorch functions are not defined for float16. Thus, option 2 would be the best one, with torch.float32 and torch.float64 being the only two accepted dtypes.
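
Restricting the accepted dtypes could then be a simple guard at the entry point (a sketch, assuming option 2 is retained; the function name is hypothetical):

```python
import torch

ACCEPTED_DTYPES = (torch.float32, torch.float64)

def check_computation_dtype(dtype: torch.dtype) -> None:
    """Reject any dtype the SB backend cannot run with."""
    if dtype not in ACCEPTED_DTYPES:
        raise ValueError(
            f"SB computation requires torch.float32 or torch.float64, got {dtype}."
        )
```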