dwavesystems / dwave-networkx

D-Wave Systems extension of the NetworkX Python package for graphs and graph algorithms.
https://docs.ocean.dwavesys.com/projects/dwave-networkx/en/latest
Apache License 2.0

TSP - why inconsistent? #103

Open neverwiredhouse opened 5 years ago

neverwiredhouse commented 5 years ago

I am curious what causes the quantum computer to get the answer wrong almost every time. It seems like it's not ready yet, but I'm mostly just curious about the difficulties this problem presents.

sploiber commented 5 years ago

Hello @neverwiredhouse, could you please share an example in which you found a wrong answer?

neverwiredhouse commented 5 years ago

@sploiber, no problem. Thanks for the fast reply. Here is a gist

It's pretty simple. I try the "exact" method, which runs locally on the PC, and then the quantum method, and the answer is always random.

sploiber commented 5 years ago

Hello @neverwiredhouse, yes indeed, I am also seeing strange behavior from the QPU example. I will write it up below, and I will also document the overall picture thoroughly so we're sure we're on the same page. First, here's the exact solver code:

import dwave_networkx as dnx
import networkx as nx
from dimod import ExactSolver
G = nx.complete_graph(4)
G.add_weighted_edges_from({(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 3), (1, 3, 4), (2, 3, 5)})
sampler = ExactSolver()
answer = dnx.traveling_salesman(G, sampler)
print(answer)

Running in Python 3.7, on a Mac with Anaconda, I get the answer [2, 1, 0, 3]. If I add up the edge weights along that route (2->1, 1->0, 0->3, and 3->2), I get a distance of 12. This does seem to be the minimum, so ExactSolver appears to be working.

If I run with SimulatedAnnealingSampler instead, I sometimes get different paths, but the distance is always 12. This is consistent: the other solver is finding a valid answer, just along a different path.
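A quick way to check that total, continuing from the G defined above (a sketch; it treats the returned route as a closed cycle):

def tour_length(G, route, weight="weight"):
    # sum the edge weights around the cycle, including the edge that closes the loop
    return sum(G[u][v][weight] for u, v in zip(route, route[1:] + route[:1]))

print(tour_length(G, [2, 1, 0, 3]))  # 3 + 1 + 3 + 5 = 12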

Now, looking at the code that uses the QPU:

import dwave_networkx as dnx
import networkx as nx
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite
G = nx.complete_graph(4)
G.add_weighted_edges_from({(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 3), (1, 3, 4), (2, 3, 5)})
sampler = EmbeddingComposite(DWaveSampler())
answer = dnx.traveling_salesman(G, sampler, 1)
print(answer)

I do indeed get unpredictable results: sometimes long vectors, sometimes vectors of length one. I'll keep looking into this today.
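One way to dig further (a sketch; it assumes QPU access and samples the underlying QUBO directly so the chain-break statistics attached by EmbeddingComposite can be inspected):

import dwave_networkx as dnx
import networkx as nx
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite

G = nx.complete_graph(4)
G.add_weighted_edges_from({(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 3), (1, 3, 4), (2, 3, 5)})

# sample the TSP QUBO directly instead of going through traveling_salesman
Q = dnx.algorithms.traveling_salesman_qubo(G)
sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample_qubo(Q, num_reads=100)

# a high chain-break fraction suggests the embedded chains are not holding together
print(sampleset.record.chain_break_fraction)
print(sampleset.first)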

arcondello commented 5 years ago

For

import dwave_networkx as dnx
import networkx as nx
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite
G = nx.complete_graph(4)
G.add_weighted_edges_from({(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 3), (1, 3, 4), (2, 3, 5)})

We can take a look at the actual qubo by running

Q = dnx.algorithms.traveling_salesman_qubo(G)

I won't print the actual output here, because it is long, but

>>> len(set().union(*Q))  # number of variables/nodes
16
>>> len(Q)  # number of interactions/edges
256

So the first thing to notice is that the QUBO is fully connected and has n^2 variables (where n is the number of cities). However, while writing this reply I noticed that a lot of the edges have weight 0 and could be removed, so I opened issue #105.
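For reference, those zero-weight entries can be counted directly from the Q built above (a quick sketch; output not shown):

>>> sum(bias == 0 for bias in Q.values())  # number of QUBO entries with zero bias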

The next thing we're interested in is what the energy range will be when it is run on the QPU. So we need to convert to Ising

>>> import dimod
>>> h, J, off = dimod.BinaryQuadraticModel.from_qubo(Q).to_ising()

This has the same number of variables/nodes and edges/interactions but we can look at the biases

>>> set(h.values())  # the unique linear biases
{8.5, 9.0, 9.5, 10.0}
>>> set(J.values())  # the unique quadratic biases
{0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 2.0}

The system scales h and J down into [-2, 2] and [-1, 1] respectively; you can see this by running

>>> qpu_sampler = DWaveSampler(solver=dict(qpu=True))
>>> qpu_sampler.properties['h_range']
[-2.0, 2.0]
>>> qpu_sampler.properties['J_range']
[-1.0, 1.0]

So that means our biases are scaled down. Combined with the long chains caused by the full connectivity above, I think the biases are just getting washed out. Addressing the issue above will help by reducing the chain length, but it will not address the bias range problem.
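To make that concrete, here is a back-of-the-envelope version of that scaling (a sketch; it assumes the auto-scaler simply divides all biases by the smallest factor that brings h into [-2, 2] and J into [-1, 1]):

>>> scale = max(max(abs(b) for b in h.values()) / 2.0,  # 10.0 / 2 = 5.0
...             max(abs(b) for b in J.values()) / 1.0)  # 2.0 / 1 = 2.0
>>> scale
5.0
>>> 0.25 / scale  # the smallest nonzero quadratic bias after scaling
0.05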

derekwisong commented 5 years ago

I'm not quite sure whether this is the same issue, but I also see instances of None in the resulting path for the graph used in the earlier comments.

>>> import dimod
>>> import networkx as nx
>>> from dwave_networkx import traveling_salesperson
>>> G = nx.complete_graph(4)
>>> G.add_weighted_edges_from({(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 3), (1, 3, 4), (2, 3, 5)})
>>> traveling_salesperson(G, sampler=dimod.SimulatedAnnealingSampler())
[0, 1, None, 2]

>>> G = nx.complete_graph(4)
>>> G.add_weighted_edges_from({(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 3), (1, 3, 4), (2, 3, 5)})
>>> traveling_salesperson(G, sampler=dimod.ExactSolver())
[1, None, 2, 0]

vgoliber commented 4 years ago

@derekwisong I've taken a look at your question and code. We've made some updates to the code to make sure we don't have any instances of None in the final route, and hopefully those will be merged soon.
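For context on where those None entries come from: roughly speaking, the samples are decoded from a grid of (city, position) indicator variables, and a sample that violates the one-hot constraints leaves gaps in the route. A simplified sketch of that decoding (not the library's exact code):

def decode_route(sample, n):
    # a valid sample turns on exactly one (city, position) variable per position
    route = [None] * n
    for (city, pos), value in sample.items():
        if value:
            route[pos] = city
    return route

print(decode_route({(0, 0): 1, (1, 1): 1, (2, 2): 1, (3, 3): 1}, 4))  # [0, 1, 2, 3]
print(decode_route({(0, 0): 1, (1, 1): 1, (3, 3): 1}, 4))             # [0, 1, None, 3] -- position 2 left unfilled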

vgoliber commented 4 years ago

To follow up on the earlier issue of inconsistent results:

We just merged some adjustments to the QUBO created by this code. Using the updated version, I was able to get consistent results between ExactSolver and the D-Wave sampler by setting chain_strength=30 and num_reads=100, along with lagrange=15.0. With these settings, I obtained an optimal TSP route for the above graph in 16 out of 17 runs on the D-Wave sampler.
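For anyone reproducing this, those settings would be passed along the following lines (a sketch; chain_strength and num_reads are forwarded to the sampler as keyword arguments):

import dwave_networkx as dnx
import networkx as nx
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite

G = nx.complete_graph(4)
G.add_weighted_edges_from({(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 3), (1, 3, 4), (2, 3, 5)})

sampler = EmbeddingComposite(DWaveSampler())
# lagrange weights the tour constraints; the remaining keywords go to the sampler
route = dnx.traveling_salesperson(G, sampler, lagrange=15.0,
                                  chain_strength=30, num_reads=100)
print(route)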