goat-community / goat

This is the home of Geo Open Accessibility Tool (GOAT)
GNU General Public License v3.0

Refactor and performance improvement network to grid for isochrone #1761

Closed · EPajares closed this 1 year ago

EPajares commented 1 year ago

Goal of this issue

The isochrone calculation currently has three bottlenecks: network reading, bringing travel times to the grid, and amenity intersection. This issue should explore how we can speed up, in particular, the three functions used for bringing travel times to the grid.

The following steps are suggested:

Resources

All functions are here: https://github.com/goat-community/goat/blob/feature/isochrone-improvements/app/api/src/core/isochrone.py

I would suggest using Numba for this and making use of NumPy arrays wherever possible.
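As a rough illustration of that suggestion (a sketch of the pattern, not code from the repository; nearest_node is a made-up example function), a hot loop over NumPy arrays can be JIT-compiled with Numba like this:

```python
import numpy as np
from numba import njit

@njit(cache=True)
def nearest_node(node_coords, x, y):
    # Plain loops like this are slow in CPython but compile to fast
    # machine code with Numba, as long as the inputs are NumPy arrays.
    best_i = 0
    best_d = np.inf
    for i in range(node_coords.shape[0]):
        dx = node_coords[i, 0] - x
        dy = node_coords[i, 1] - y
        d = dx * dx + dy * dy
        if d < best_d:
            best_d = d
            best_i = i
    return best_i

node_coords = np.random.rand(10_000, 2)
print(nearest_node(node_coords, 0.5, 0.5))
```

The first call pays the compilation cost; subsequent calls run at compiled speed, which is why keeping the data in NumPy arrays rather than lists of Python objects matters.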

Deliverables

Refactored functions with major speed improvements.

Branch to derive from

https://github.com/goat-community/goat/tree/feature/isochrone-improvements

metemaddar commented 1 year ago

Starting with filter_nodes, there are a few options:

  1. pandas
  2. numba
  3. cython

Setting Cython aside for now: to use Numba or pandas, we need the data to be in NumPy arrays. Here we can define node_coords as a NumPy array: https://github.com/goat-community/goat/blob/939935e5ad91da3af243afdd3a91f0902769b378/app/api/src/core/isochrone.py#L160

But here, when we create node_coord_list, we have to combine a Python list and a NumPy array: https://github.com/goat-community/goat/blob/939935e5ad91da3af243afdd3a91f0902769b378/app/api/src/core/isochrone.py#L375

interpolated_coords is a list of coordinates created by split_edges(). Because we cannot define a dynamically shaped array in NumPy, we had to use a plain Python list, which has high overhead. Even in C++, a dynamically sized array would carry overhead.

Predict the size of interpolated_coords

One approach is to predict the size of interpolated_coords, which is created by split_edges. How is that possible? We need a function that predicts an upper bound for this size. Then we can preallocate the array, know how much of it is populated, and pass that to the next functions, or just slice the array to drop the unused part.

To determine the upper bound for the coordinates, we can use the lengths of the edges and the number of geometry points, and assume the worst case, in which every segment of an edge leaves a remainder when its length is divided by the pixel size. So we first calculate the smallest possible count by dividing the full length by the pixel size, and then add the remainders, which amount to the full number of geometry points.
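A small sketch of that bound, with hypothetical names (upper_bound_coords is not an existing function in the repo; the formula just follows the reasoning above):

```python
import numpy as np

def upper_bound_coords(edge_lengths, geom_counts, pixel_size):
    # Minimum number of interpolated points: total edge length divided
    # by the pixel size. Worst case: every segment between two geometry
    # points leaves a remainder, adding at most one point per geometry
    # point, i.e. the total geometry count.
    minimum = int(np.sum(edge_lengths) // pixel_size)
    return minimum + int(np.sum(geom_counts))

# Preallocate once, track how much is filled, then slice off the tail.
edge_lengths = np.array([120.0, 35.5, 80.2])
geom_counts = np.array([3, 2, 4])
buf = np.empty((upper_bound_coords(edge_lengths, geom_counts, 10.0), 2))
n_filled = 0  # split_edges() would advance this while writing coords
interpolated_coords = buf[:n_filled]
```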

metemaddar commented 1 year ago

Data structure for geoms

For split_edges and many other parts we need to walk over the edges, but the current edges data structure is not fast: it is a list of lists, which carries overhead. An alternative data structure for the geometries is a flat array plus an address array. How can we prepare it? This should be done on the database side. First we need to take all geometry points, in edge order, in a single array. For example, for data like this:

| Edges | Geom       |
| ----- | ---------- |
| edge1 | p1, p2, p3 |
| edge2 | p4, p5     |
| edge3 | p6, p7, p8 |
| edge4 | p9, p10    |
We need a single NumPy array (named edges_geom):

| edges_geom |
| ---------- |
| p1         |
| p2         |
| p3         |
| p4         |
| p5         |
| p6         |
| p7         |
| p8         |
| p9         |
| p10        |
Also, we need to change the main query to return the point count of each edge instead of the geometries:

| Edges | Geom       | Count(Geom) |
| ----- | ---------- | ----------- |
| edge1 | p1, p2, p3 | 3           |
| edge2 | p4, p5     | 2           |
| edge3 | p6, p7, p8 | 3           |
| edge4 | p9, p10    | 2           |

In fact, just this:

| Edges | Count(Geom) |
| ----- | ----------- |
| edge1 | 3           |
| edge2 | 2           |
| edge3 | 3           |
| edge4 | 2           |
After getting the point counts we can convert them to addresses (geom_address) using NumPy's cumsum(), with a zero prepended as the first row (so addresses start from zero):

| i | geom_address |
| - | ------------ |
| 0 | 0            |
| 1 | 3            |
| 2 | 5            |
| 3 | 8            |
| 4 | 10           |

So the geometry points of edge i can be accessed as edges_geom[geom_address[i]:geom_address[i+1]]. This lets us get rid of the list[list] type for the geometries.
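A small sketch of this layout with the example data above (illustrative only, not the repo's implementation):

```python
import numpy as np

# Per-edge point counts, as returned by the modified query.
point_counts = np.array([3, 2, 3, 2])

# All geometry points flattened into one array, in edge order.
# Integers 1..10 stand in for p1..p10 here.
edges_geom = np.arange(1, 11)

# Addresses: cumulative sums with a leading zero -> [0, 3, 5, 8, 10].
geom_address = np.concatenate(([0], np.cumsum(point_counts)))

def edge_points(i):
    # Points of edge i, with no list-of-lists indirection.
    return edges_geom[geom_address[i]:geom_address[i + 1]]

print(edge_points(0))  # [1 2 3] -> p1, p2, p3
print(edge_points(2))  # [6 7 8] -> p6, p7, p8
```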

@EPajares, can we have a query that returns the geometries as a single array, in the order of the edges?

metemaddar commented 1 year ago

The two queries can be run in parallel, in two threads.
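For example, with Python's standard library (fetch_edges and fetch_geoms are hypothetical placeholders for the two database calls):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_edges():
    # Placeholder for the query returning edges with point counts.
    return []

def fetch_geoms():
    # Placeholder for the query returning the flat geometry array.
    return []

# Both queries are I/O-bound, so threads can overlap their waiting time.
with ThreadPoolExecutor(max_workers=2) as pool:
    edges_future = pool.submit(fetch_edges)
    geoms_future = pool.submit(fetch_geoms)
    edges = edges_future.result()
    geoms = geoms_future.result()
```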

metemaddar commented 1 year ago

Since cumsum() doesn't add a 0 at the start, we can have the zero as the first row, if that's possible at the SQL level.

Edit: We can have it.

metemaddar commented 1 year ago

The new data structure for the geometries was implemented and reduced the execution time from 0.09 s to 0.02 s with Numba compilation. An UnorderedMap was also implemented to replace the dictionary in remap_edges.
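As an illustration of the remapping idea (a sketch only: the issue's UnorderedMap is a custom structure, but Numba's typed Dict plays the same role of a hash map usable inside compiled code):

```python
import numpy as np
from numba import njit, types
from numba.typed import Dict

@njit(cache=True)
def remap_ids(ids):
    # Map arbitrary node/edge ids to contiguous indices 0..n-1,
    # entirely inside compiled code (no Python dict involved).
    mapping = Dict.empty(key_type=types.int64, value_type=types.int64)
    out = np.empty(ids.shape[0], dtype=np.int64)
    next_idx = 0
    for i in range(ids.shape[0]):
        key = ids[i]
        if key in mapping:
            out[i] = mapping[key]
        else:
            mapping[key] = next_idx
            out[i] = next_idx
            next_idx += 1
    return out

print(remap_ids(np.array([907, 12, 907, 55], dtype=np.int64)))  # [0 1 0 2]
```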

Re-implement the adjacency list

I think we can improve Dijkstra by re-implementing the adjacency list. To create the adjacency list we can use edge_source and edge_target.

  1. First, find forward_adjacency and backward_adjacency by searching for each node id in edge_source and edge_target.
  2. Compute Dijkstra using the forward and reversed adjacency: when traversing a forward edge the cost is cost, and when traversing a reversed edge the cost is reversed_cost.

This lets us use NumPy's where() to filter forward/backward edges.
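A toy sketch of that lookup (names and values are illustrative):

```python
import numpy as np

# A tiny directed graph: edge i goes edge_source[i] -> edge_target[i].
edge_source = np.array([0, 0, 1, 2])
edge_target = np.array([1, 2, 2, 3])
cost = np.array([1.0, 4.0, 2.0, 1.0])
reversed_cost = np.array([1.0, 4.0, 2.0, 1.0])

def forward_edges(node):
    # Edges leaving `node`; traverse them with `cost`.
    return np.where(edge_source == node)[0]

def backward_edges(node):
    # Edges entering `node`; traverse them with `reversed_cost`.
    return np.where(edge_target == node)[0]

for e in forward_edges(0):
    print("0 ->", edge_target[e], "cost", cost[e])
for e in backward_edges(2):
    print("2 <-", edge_source[e], "reversed_cost", reversed_cost[e])
```

Note that each np.where() call scans the whole edge array, so inside a Dijkstra loop this is O(V·E) overall, which may explain the slowdown reported below.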

EPajares commented 1 year ago

This sounds worth trying. For me it is hard to say at the moment if this will work. It sounds reasonable though.

metemaddar commented 1 year ago

@EPajares Thanks. It increased the calculation time (by about 0.01 s). I think that may be because np.where() is slow for this kind of search. I don't know how to measure it for now, but I will try implementing another approach instead of where() and test it.
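One way to measure the lookup in isolation (a sketch using timeit, not code from the repo):

```python
import timeit
import numpy as np

edge_source = np.random.randint(0, 10_000, size=50_000)

# Time a single-node adjacency lookup via np.where: each call scans
# the full 50k-element array, regardless of how many edges match.
t = timeit.timeit(lambda: np.where(edge_source == 42)[0], number=1_000)
print(f"np.where lookup: {t / 1_000:.6f} s per call")
```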

metemaddar commented 1 year ago

feature/isochrone-improvements (original):

Remap edges time is:             0.023065805435180664 s
Adj construct time is:           0.0035555362701416016 s
Dijkstra time is:                0.005146980285644531 s
Split Edges time is:             0.016598939895629883 s
Filter_nodes time is:            0.01898670196533203 s
isochrone calculation time:      0.08478617668151855 s

Remap edges time is:             0.0365447998046875 s
Adj construct time is:           0.004857301712036133 s
Dijkstra time is:                0.006662607192993164 s
Split Edges time is:             0.019170045852661133 s
Filter_nodes time is:            0.014683246612548828 s
isochrone calculation time:      0.10410380363464355 s

After converting all data types to arrays, implementing the UnorderedMap, and refactoring Dijkstra (modifying the UnorderedMap and removing sort()), Remap Edges improved:

Convert geom array time:         0.010674238204956055 s
Remap edges time:                0.02243494987487793 s
Dijkstra time:                   0.005245685577392578 s
Split Edges time:                0.0009660720825195312 s
Coords concatenations time:      0.0002956390380859375 s
Filter nodes time:               0.0006384849548339844 s
Compute Isochrone time:          0.06448602676391602 s

Convert geom array time:         0.01153421401977539 s
Remap edges time:                0.01858377456665039 s
Dijkstra time:                   0.005613088607788086 s
Split Edges time:                0.0011539459228515625 s
Coords concatenations time:      0.00043010711669921875 s
Filter nodes time:               0.0005915164947509766 s
Compute Isochrone time:          0.07920241355895996 s


p4b-bro[bot] commented 11 months ago

This task/issue closed on Tue Jun 06 2023 ✅