bqth29 / simulated-bifurcation-algorithm

Python CPU/GPU implementation of the Simulated Bifurcation (SB) algorithm to solve quadratic optimization problems (QUBO, Ising, TSP, optimal asset allocations for a portfolio, etc.).

RuntimeError: expected scalar type Double but found Float #46

Closed MarMarhoun closed 11 months ago

MarMarhoun commented 11 months ago

I used the following code to optimize the proposed dataset:

import json
import random
from datetime import datetime as dt, timedelta as td

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import torch
import yfinance as yf  # historical market data download

import simulated_bifurcation as sb

asset_name = 'AAPL'

def generate_weights(data):
    # Get the number of columns in the data
    num_cols = data.shape[1]

    # Generate a square (num_cols x num_cols) matrix of random weights between 0 and 1
    weights = np.random.rand(num_cols, num_cols)

    # Normalize the weights so they sum to 1
    normalized_weights = weights / np.sum(weights)

    return normalized_weights

def generate_normalized_weights(data):
    # Get the number of rows in the data
    num_rows = data.shape[0]

    # Generate random weights between 0 and 1
    weights = np.random.rand(num_rows, 1)

    # Normalize the weights
    normalized_weights = weights / np.sum(weights)
    print(normalized_weights.shape)

    return normalized_weights.flatten()

data = yf.download(tickers=asset_name, period='1y', interval='1d')
data
m_sb = torch.DoubleTensor(generate_weights(data))
m_sb = m_sb.double()
m_sb
sb.set_env(time_step=.1, pressure_slope=.01, heat_coefficient=.06)
best_vector, best_value = sb.maximize(m_sb, #domain='int10',
                                      agents=100, device='cuda',
                                      max_steps=10000, sampling_period=30, ballistic=True,
                                      convergence_threshold=50, use_window=True, heated=True, best_only=True)

I ran into the following error. Any suggestions on how to fix it?

🔁 Iterations       :   0%|          | 0/10000 [00:00<?, ? steps/s]
🏁 Bifurcated agents:   0%|          | 0/100 [00:00<?, ? agents/s]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-65-480d0b2ec372> in <cell line: 2>()
      1 sb.set_env(time_step=.1, pressure_slope=.01, heat_coefficient=.06)
----> 2 best_vector, best_value = sb.maximize(matrix, #domain='int10',
      3                                       agents=100, device='cuda',
      4                                       max_steps=10000, sampling_period=30, ballistic= True,
      5                                       convergence_threshold=50, use_window=True, heated=True, best_only=True)

7 frames
/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/optimizer/stop_window.py in __compare_energies(self, sampled_spins)
    109 
    110     def __compare_energies(self, sampled_spins: torch.Tensor) -> None:
--> 111         energies = torch.nn.functional.bilinear(
    112             sampled_spins.t(), sampled_spins.t(), torch.unsqueeze(self.ising_tensor, 0)
    113         ).reshape(self.n_agents)

RuntimeError: expected scalar type Double but found Float
bqth29 commented 11 months ago

Hi @MarMarhoun,

It appears that the problem comes from the dtype of your input matrix when the early stopping check is performed by the stop window.
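For context, the stop window computes the agents' energies with torch.nn.functional.bilinear (see the traceback above), and that function requires all of its tensor arguments to share the same dtype. Here is a minimal sketch of such a mismatch; the shapes and the assignment of float32 to the spins versus float64 to the coupling tensor are illustrative assumptions, not the library's exact internals:

import torch

dimension, n_agents = 5, 100
sampled_spins = torch.ones(n_agents, dimension, dtype=torch.float32)      # spins in float32 (assumed)
ising_tensor = torch.ones(1, dimension, dimension, dtype=torch.float64)   # built from a float64 input matrix

# Raises a RuntimeError because the spins and the Ising tensor do not share the same dtype
energies = torch.nn.functional.bilinear(sampled_spins, sampled_spins, ising_tensor)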

Here are some tricks you could use to overcome the problem:

  1. Force the dtype used for the SB computation by adding dtype=torch.float32 as a parameter of sb.maximize
  2. Simply cast your matrix to float32 beforehand: m_sb = m_sb.to(dtype=torch.float32) (see the sketch after this list)
  3. (if the previous suggestions did not work) Disable the stop window by setting use_window=False
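A minimal sketch of suggestions 1 and 2 applied to the call from the original snippet; either change on its own should be enough, and the remaining keyword arguments are unchanged:

# Suggestion 2: cast the matrix to float32 before calling the optimizer
m_sb = m_sb.to(dtype=torch.float32)

# Suggestion 1: alternatively, force the computation dtype directly in the call
best_vector, best_value = sb.maximize(m_sb,
                                      dtype=torch.float32,
                                      agents=100, device='cuda',
                                      max_steps=10000, sampling_period=30, ballistic=True,
                                      convergence_threshold=50, use_window=True, heated=True, best_only=True)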

Hope this helps.

MarMarhoun commented 11 months ago

@bqth29 It works. Thank you for your support.