jorgensd opened this issue 2 years ago
In general, anything that replaces a `std::map` with an O(1) lookup and contiguous memory is good. The main question is probably how to compute the data efficiently in the first place.
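For the construction side, one option is a single sort followed by a counting pass. Below is a minimal sketch, assuming the shared-edge data is available as (local edge, rank) pairs; the function name and signature are hypothetical, not an existing dolfinx API:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical sketch: build CSR-style (offsets, data) arrays from
// (local_edge, rank) pairs, assuming edges are numbered 0..num_edges-1.
std::pair<std::vector<std::int32_t>, std::vector<std::int32_t>>
build_edge_ranks(std::vector<std::pair<std::int32_t, std::int32_t>> pairs,
                 std::int32_t num_edges)
{
  // Group the pairs by edge: one O(n log n) sort up front replaces
  // repeated logarithmic map lookups later.
  std::sort(pairs.begin(), pairs.end());

  std::vector<std::int32_t> offsets(num_edges + 1, 0);
  std::vector<std::int32_t> data;
  data.reserve(pairs.size());

  // Counting pass: offsets[e + 1] accumulates the number of ranks
  // sharing edge e; data is already grouped by edge after the sort.
  for (auto [edge, rank] : pairs)
  {
    ++offsets[edge + 1];
    data.push_back(rank);
  }

  // Prefix sum turns the counts into offsets.
  for (std::int32_t e = 0; e < num_edges; ++e)
    offsets[e + 1] += offsets[e];

  return {std::move(offsets), std::move(data)};
}
```

This does the O(n log n) work once, after which every query is an O(1) index into contiguous arrays.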
This is a good idea; let's do it once the current work on `MeshTag` refinement is in `main`.
I wonder if we should rather go for something along the lines of what is done in https://github.com/FEniCS/dolfinx/pull/3396, where we use a sorted `std::vector<std::pair<std::int32_t, std::int64_t>>` and `std::lower_bound` for searches.

Any thoughts on this?
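For reference, the lookup pattern would look roughly like this. A sketch under the assumption that the vector holds (local index, global value) pairs sorted by the first element; the function name is illustrative:

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

// Illustrative binary search over a sorted std::vector of pairs:
// still O(log n) per query, but over contiguous, cache-friendly storage.
std::optional<std::int64_t>
find_global(const std::vector<std::pair<std::int32_t, std::int64_t>>& map,
            std::int32_t local)
{
  auto it = std::lower_bound(map.begin(), map.end(), local,
                             [](const auto& p, std::int32_t key)
                             { return p.first < key; });
  if (it != map.end() && it->first == local)
    return it->second;
  return std::nullopt;
}
```

Each query is still logarithmic, but binary search over a flat array is typically much faster in practice than chasing `std::map` nodes scattered across the heap.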
When computing refinement data, we currently create a `std::map` from the local edge numbers to the processes sharing that edge (https://github.com/FEniCS/dolfinx/blob/main/cpp/dolfinx/refinement/utils.h#L38-L39). In turn we use this map multiple times with `map.find()`, which has logarithmic complexity:

- https://github.com/FEniCS/dolfinx/blob/main/cpp/dolfinx/refinement/plaza.cpp#L645
- https://github.com/FEniCS/dolfinx/blob/main/cpp/dolfinx/refinement/plaza.cpp#L87-L88
- https://github.com/FEniCS/dolfinx/blob/main/cpp/dolfinx/refinement/utils.cpp#L256-L257

I wonder if it would be faster to use an `AdjacencyList<std::int32_t>` which has `num_local_edges` as its number of nodes, with each link containing the neighborhood ranks. Then every find would be replaced with a direct lookup, as in the sketch below.

Any thoughts @garth-wells ?
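A minimal sketch of the proposed access pattern; the `EdgeRanks` struct here is a hypothetical stand-in for `graph::AdjacencyList<std::int32_t>`, just to show the lookup:

```cpp
#include <cstdint>
#include <span>
#include <vector>

// Hypothetical stand-in for graph::AdjacencyList<std::int32_t>:
// flattened neighborhood ranks plus per-edge offsets.
struct EdgeRanks
{
  std::vector<std::int32_t> data;    // ranks, grouped by edge
  std::vector<std::int32_t> offsets; // size == num_local_edges + 1

  // O(1) lookup replacing shared_edges.find(edge).
  std::span<const std::int32_t> links(std::int32_t edge) const
  {
    return std::span<const std::int32_t>(data.data() + offsets[edge],
                                         offsets[edge + 1] - offsets[edge]);
  }
};

// Usage: an edge shared with no other process has an empty link range,
// so the `find(...) != end()` check becomes an empty() test.
bool is_shared(const EdgeRanks& edge_ranks, std::int32_t edge)
{
  return !edge_ranks.links(edge).empty();
}
```

Every query then reduces to two offset reads into contiguous storage, with no tree traversal and no pointer chasing.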