ika-rwth-aachen / mqtt_client

ROS / ROS 2 C++ Node for bi-directionally bridging messages between ROS and MQTT
MIT License

Topic mappings in std::map instead of std::unordered_map #37

Open mvccogo opened 1 year ago

mvccogo commented 1 year ago

Greetings,

Is there a reason why the mappings are being stored in a std::map and not in a std::unordered_map? I can think of some reasons:

  1. Memory constraints
  2. Order of the mappings is necessary
  3. Number of mappings is too small
lreiher commented 1 year ago

I don't believe there is any reason for std::map. The most obvious one would be that the mappings need to be ordered. I don't believe the ordering property is implicitly used anywhere, though, so that argument doesn't apply.

std::unordered_map might offer slightly faster lookups at the cost of a little extra memory. In practice, however, I don't see this becoming relevant in the context of the mqtt_client. The mappings are mainly used for setting up the subscribers/publishers and only scale with the number of topics one wants to bridge, so I doubt that there will be any noticeable performance difference.

If you have a good reason for switching to std::unordered_map though, I'd be happy to approve a PR!
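For reference, the swap under discussion is essentially a one-line type change wherever the mapping containers are declared. A minimal sketch (the struct and member names here are hypothetical, not the actual mqtt_client identifiers):

```cpp
#include <string>
#include <unordered_map>

// Hypothetical mapping entry -- the actual mqtt_client structs differ.
struct Ros2MqttMapping {
  std::string mqtt_topic;  // MQTT topic to publish to
  int qos = 0;             // MQTT quality-of-service level
};

// Current:  std::map<std::string, Ros2MqttMapping>           -> ordered, O(log n) lookup
// Proposed: std::unordered_map<std::string, Ros2MqttMapping> -> hashed, O(1) average lookup
std::unordered_map<std::string, Ros2MqttMapping> ros2mqtt_;
```

Since iteration order is never relied upon when setting up publishers/subscribers, the two containers are interchangeable here.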

mvccogo commented 1 year ago

Thanks for the explanation. I don't know if anyone else is using the bridge this way, but in our case topics are mapped for each agent of our virtual arena. Because of that, the map could indeed grow a lot larger at runtime as new agents are spawned/registered.

This post does a quick benchmark between ordered and unordered maps.

lreiher commented 1 year ago

How many agents are we talking here? I would assume that it's still a matter of microseconds, let alone milliseconds, of difference between the two implementations. I feel like this somewhat tends towards premature optimization, but would still accept the change in your PR.

mvccogo commented 1 year ago

Yes, this might be really close to premature optimization territory. I opened the issue mostly because it is a simple change (no refactoring needed). The goal is to handle around 80k to 100k agents; we are probably going to hit other constraints first, so the bridge really needs to be as performant as possible.