severin-lemaignan opened 10 years ago
A probably sufficient solution for 2D maps would be to render the scene from the top, at a resolution high enough to give a decent rendering of a gridmap with walls. This is quite efficient I believe (we could even set the camera to only render wireframes, as only the boundaries are relevant), and very easy to implement, right? Some post-processing would allow converting it into e.g. a ROS costmap for navigation. Some convenience API that sets the rendering parameters to create such a map with the right scale, etc. could prove useful.
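To give an idea of that post-processing step, here is a minimal sketch (not existing MORSE code) that turns a top-down render saved as an image into an occupancy grid following the ROS costmap convention. The file name, pixels-per-metre scale and threshold below are assumptions:

```python
# Minimal sketch: convert a top-down orthographic render into an occupancy grid.
# Assumptions: render saved as "topdown_render.png", camera scale of 50 px/m.
import numpy as np
from PIL import Image

PIXELS_PER_METER = 50        # must match the orthographic camera scale
OCCUPIED_THRESHOLD = 128     # pixels darker than this are treated as obstacles

# load the rendered image as a grayscale array
img = np.asarray(Image.open("topdown_render.png").convert("L"))

# ROS OccupancyGrid convention: 0 = free, 100 = occupied
grid = np.where(img < OCCUPIED_THRESHOLD, 100, 0).astype(np.int8)

resolution = 1.0 / PIXELS_PER_METER   # metres per cell
print("grid:", grid.shape, "resolution:", resolution, "m/cell")
```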
Here's an example rendered by BGE:
Using a "sensor" to generate a map of the environment for this purpose seems strange to me.
We have the whole model of the world, whether in Blender format, Collada or whatever you want. A tool can read this description and do the needed transformations, which will be relatively specific to each model.
Another option is to extract it at runtime: we have all the bounding boxes, and we can simply "synchronise" the non-static objects (in the same way we do for multi-robot simulation, or even now in the case of multi-scene for multi-camera resolution). We can assume that the bounding boxes won't change as long as we are not using soft bodies.
If more information is needed (more semantic information), we can add some properties and synchronise them as well.
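For illustration, here is a rough sketch of the bounding-box extraction using the plain Blender Python API (bpy), run against the .blend scene rather than inside the game engine; this is not MORSE code, just one way the information could be pulled out:

```python
# Rough sketch: dump world-space axis-aligned bounding boxes for every
# mesh object in the scene, using the plain bpy API.
# Note: Blender <= 2.7x uses '*' for matrix-vector products ('@' on 2.8+).
import bpy
from mathutils import Vector

def world_aabb(obj):
    """Return (min_corner, max_corner) of obj's bounding box in world coordinates."""
    corners = [obj.matrix_world * Vector(c) for c in obj.bound_box]
    xs = [v.x for v in corners]
    ys = [v.y for v in corners]
    zs = [v.z for v in corners]
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

for obj in bpy.data.objects:
    if obj.type == 'MESH':
        lo, hi = world_aabb(obj)
        print(obj.name, lo, hi)
```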
I'm not quite sure I understood all of @adegroote's suggestion correctly, but here's my view:
Just my two cents.
Scripts that control aspects of the simulation would benefit from accessing a good model of the environment, like a 2D occupancy grid or, even better, an octomap.
One typical application could be crowd/swarm/flock simulation: if you want to simulate a crowd of humans moving in your simulation, the crowd simulator (typically an external application that interfaces with MORSE through the socket API) needs a dynamic model of the simulated environment (where the walls, the objects like furniture, the robots, etc. are).
This could also be useful to introspect/monitor the simulation, or to extract metrics on the robots' behaviour.
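To make the use case concrete, here is a purely hypothetical sketch of how an external client (e.g. a crowd simulator) could consume such a model through pymorse. The `environment` pseudo-component and its `occupancy_grid` stream do not exist in MORSE today; only the connection pattern is real:

```python
# Hypothetical sketch: the 'environment' component and 'occupancy_grid'
# stream below do NOT exist in MORSE; they illustrate the requested feature.
from pymorse import Morse

with Morse() as sim:
    grid_msg = sim.environment.occupancy_grid.get()   # hypothetical stream
    width = grid_msg["width"]                          # hypothetical fields
    height = grid_msg["height"]
    resolution = grid_msg["resolution"]                # metres per cell
    print("received a %dx%d grid at %.2f m/cell" % (width, height, resolution))
```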
The means to implement such a feature are not clear. The "naive" approach (using a laser scanner/Kinect/Velodyne sensor, taking samples at various random locations in the scene, and interpolating them into a dense map) seems inefficient, but may still be useful for prototyping.
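To prototype that naive approach, the aggregation step could look like the following sketch (plain numpy, with fake data in place of real MORSE laser scans; grid size, resolution and conventions are assumptions):

```python
# Sketch of the "naive" aggregation: given laser scans taken at known poses,
# rasterise each beam's end point into a 2D occupancy grid.
import numpy as np

RESOLUTION = 0.05   # metres per cell (assumption)
SIZE = 400          # 20 m x 20 m grid centred on the origin (assumption)
grid = np.zeros((SIZE, SIZE), dtype=np.int8)

def integrate_scan(x, y, yaw, ranges, fov=np.pi, max_range=30.0):
    """Mark the end point of each laser beam as occupied."""
    angles = yaw + np.linspace(-fov / 2, fov / 2, len(ranges))
    for r, a in zip(ranges, angles):
        if r >= max_range:          # no return: nothing to mark
            continue
        hx, hy = x + r * np.cos(a), y + r * np.sin(a)
        i = int(hy / RESOLUTION) + SIZE // 2
        j = int(hx / RESOLUTION) + SIZE // 2
        if 0 <= i < SIZE and 0 <= j < SIZE:
            grid[i, j] = 100        # ROS costmap convention: 100 = occupied

# example with fake data: a semicircular "wall" 5 m from the sensor
integrate_scan(0.0, 0.0, 0.0, np.full(181, 5.0))
print(int((grid == 100).sum()), "occupied cells")
```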
A better implementation is likely to require some dedicated OpenGL shaders. It could also be interesting to investigate projects like "Blender for Architecture", which may have tools to take slices of a scene.
This feature request is part of the MORSE for HRI 2014 workshop outcomes.