(My understanding is that for Virtual it is a "nice to have" feature for "show time", while in System it is a "must have"... nevertheless, an example would probably be nice.)
You can send the maps/markers either from the robots or from the teambase. Each of them has the `/cloud` etc. topics available on its ROS master (that's what the latest commits you saw in the robot models do: they add further processing of these topics and relay them to the simulator). In theory, you should find the clouds/markers in the simulator on the Ignition topics `/model/NAME/cloud` (for a robot called NAME). However, there is no way to visualize them there; you can just check that something is arriving using `ign topic -e -t /model/NAME/cloud | head -c 1000`.
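For reference, here is a minimal sketch of a node that publishes a map on `/cloud`. The topic name `/cloud` and the `darpa` frame come from this thread; the message type `sensor_msgs/PointCloud2` and the publishing period are assumptions on my part, not a confirmed spec:

```python
#!/usr/bin/env python
# Hedged sketch: publish a (dummy) map as sensor_msgs/PointCloud2 on /cloud.
# The topic name /cloud and the `darpa` frame come from this thread; the
# PointCloud2 message type is an assumption, not a confirmed spec.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs.point_cloud2 import create_cloud_xyz32
from std_msgs.msg import Header

rospy.init_node('cloud_publisher_example')
pub = rospy.Publisher('/cloud', PointCloud2, queue_size=1, latch=True)

def publish_map(event=None):
    header = Header(stamp=rospy.Time.now(), frame_id='darpa')
    # Replace these dummy points with the occupied cells of your real map.
    points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    pub.publish(create_cloud_xyz32(header, points))

publish_map()
# Maps are forwarded only once every 10 s of sim time (see below), so there
# is no point in publishing much faster than that.
rospy.Timer(rospy.Duration(10), publish_map)
rospy.spin()
```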
Cloudsim can visualize at least the clouds you send. If you have a simulation running and open the web visualization, there are a few new options under the `NAME` item for each robot. If you unroll the menu, you'll see items for the robot's cameras and lidars, and also a robot map. If you turn this map on and wait, you should see little blue dots representing the map you are sending. While testing this, be aware that the maps are only sent once every 10 s of sim time, so if the simulation is running at 1% real time, you could wait up to 1000 seconds of real time for the map to appear.
You can, of course, use these topics for RViz visualizations of your locally running simulations. Just open an RViz instance pointed at the ROS master of one of the robots, and the `/cloud` etc. topics should be available there for visualization. I guess DARPA plans to either somehow utilize the RViz visualizations, or write Ignition plugins allowing visualization of all these data in the simulator.
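If you want a quick check on the ROS side without opening RViz, here is a small sketch of a subscriber that confirms clouds are flowing (run it with `ROS_MASTER_URI` pointing at one of the robots; only the `/cloud` topic name is taken from this thread):

```python
#!/usr/bin/env python
# Hedged sketch: log basic stats of clouds arriving on /cloud as a sanity check.
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    rospy.loginfo('cloud: %d points in frame %s',
                  msg.width * msg.height, msg.header.frame_id)

rospy.init_node('cloud_check')
rospy.Subscriber('/cloud', PointCloud2, on_cloud)
rospy.spin()
```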
Regarding the frame in which the cloud comes: it is mostly ignored. None of the nodes that post-process the clouds or markers does any kind of geometric transformation (they do not even subscribe to TF). The requested `darpa` frame (the API says `DARPA`, but I think that's wrong) should be identical to `artifact_origin`, so if you want to visualize the clouds together with other data from your robots, you can add a static identity transformation between `darpa` and `artifact_origin` (assuming your robots are TF children of the `artifact_origin` frame once they are localized).
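The identity transform can be published e.g. like this (a sketch using `tf2_ros`; the frame names are the ones discussed above):

```python
#!/usr/bin/env python
# Hedged sketch: publish a static identity transform artifact_origin -> darpa
# so that clouds stamped in the `darpa` frame line up with your robots' TF tree.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('darpa_frame_alias')
br = tf2_ros.StaticTransformBroadcaster()
t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'artifact_origin'
t.child_frame_id = 'darpa'
t.transform.rotation.w = 1.0  # identity rotation, zero translation
br.sendTransform(t)
rospy.spin()
```

Or equivalently, without writing any code: `rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 artifact_origin darpa`.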
@peci1, thank you very much for the detailed description. That makes it much clearer.
We will try to add it to our solution, but I'm not sure whether this add-on will make it into the final solution, because the time for testing before the closing of the submission window is very short. :-(
Got it working in principle. So issue closed.
tl;dr: Can anybody give us a how-to or an example of how the data has to be provided to the mapping server, and how one can see the results in Cloudsim?
Long form: We just got the email with the reminder about the closing of the submission window for the Finals Prize Round. There is also a remark in this email that we shall also provide map and telemetry data for the mapping server. I never noticed that there was such a requirement to be fulfilled. Getting this 3 days before the closing of the submission window is not good.
I noticed that there is some kind of mapping server, but on the other hand, a number of robots were updated with this functionality only a few days ago. (Is this release from 16th August already available in the Docker image? Is it available on Cloudsim?)
The description of the mapping server (mapping server) gives some information, but to me it is still unclear how it can be used.
So, we would really appreciate any short description, how-to, or example of how the mapping server is to be used.