Open shaesaert opened 5 years ago
`idx_diag = [i for i in range(len(a.data)) if a.coords[axis1][i] == a.coords[axis2][i]]` — this line is causing most of the time delay.
I tested the LP problem for MDPs of different sizes. These are the results:
states | 307 | 325 | 381 | 421 | 463 |
---|---|---|---|---|---|
init solver | 14.53 | 9.63 | 19.085 | 32.67 | 48.4812 |
run solver | 0.174 | 0.58 | 0.3478 | 0.2754 | 0.5939 |
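For what it's worth, the gap between the per-element loop and a single vectorized comparison is easy to see on synthetic data (a minimal sketch: random coordinates standing in for the real MDP tensor, and the sizes are made up):

```python
import time

import numpy as np

# Hypothetical stand-in for the real MDP tensor: coordinate rows of a
# 4-dimensional sparse array with 100_000 stored entries.
rng = np.random.default_rng(0)
nnz = 100_000
coords = rng.integers(0, 300, size=(4, nnz))
axis1, axis2 = 0, 2

# Original approach: Python-level loop over every stored entry.
t0 = time.perf_counter()
idx_loop = [i for i in range(nnz) if coords[axis1][i] == coords[axis2][i]]
t_loop = time.perf_counter() - t0

# Vectorized approach: one boolean comparison over the coordinate rows.
t0 = time.perf_counter()
idx_vec = coords[axis1] == coords[axis2]
t_vec = time.perf_counter() - t0

# Both select exactly the same entries.
assert np.array_equal(np.flatnonzero(idx_vec), np.asarray(idx_loop))
print(f'loop: {t_loop:.4f}s, vectorized: {t_vec:.6f}s')
```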
Interesting, could you try running it with the slow function replaced by the following, which avoids the for loop:
```python
import numpy as np
import sparse


def diagonal(a, axis1, axis2):
    '''Perform a diagonal operation on tensor a over axes axis1 and axis2.
    Ex: diagonal over x,z on A_xyzw gives B_ywx (the diagonal dimension is
    appended at the end). Analogous to np.diagonal.'''
    if a.shape[axis1] != a.shape[axis2]:
        raise ValueError('dimensions must agree for diagonal')
    new_axis_order = np.array(
        [axis for axis in range(len(a.shape)) if axis != axis1 and axis != axis2]
        + [axis1]
    )
    new_shape = [a.shape[axis] for axis in new_axis_order]
    # Boolean mask over the stored entries instead of a Python-level loop.
    idx_diag = a.coords[axis1] == a.coords[axis2]
    crds = a.coords[:, idx_diag]
    return sparse.COO(crds[new_axis_order, :], a.data[idx_diag], new_shape)
```
If this doesn't speed things up, we might have to find a faster, more direct way to code the LP, but that approach was already very complicated.
You can also try installing the latest version of sparse; the `diagonal` function is available there, and it runs in Numba, which compiles it to machine code. If you could share your script for generating the LPs, I could experiment a little as well.
Dear Petter,
This issue should have been posted in mdp_network. I have added a pull request to mdp_network with the changes that I made to test the performance.
With kind regards, Sofie
The function `diagonal` from `best.models.pomdp_sparse_utils` takes a substantial amount of time to run. It currently accounts for 95% of the time spent solving the LP problems.