ComplexSysSolutions closed this issue 2 years ago.
I made an attempt to do just this two years ago. I added two upward-pointing Pico Flexx cameras to the front and back of the robot to prevent collisions with the top part of our robot. The images below show one of the cameras.
Then, I modified the move_base configuration on the MiR internal PC to add the point cloud topics as observation_sources. This kind of worked, but unfortunately not as well as I had hoped. There were two problems with this approach:
On the other hand, the MiR supports a very similar use case with its additional "top camera", so it must be possible somehow. But since this was not a top priority for me, I did not investigate further and just gave up after some time. I'm not saying that it is impossible.
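For illustration, the change described above would look roughly like the following in a costmap configuration. This is only a sketch: the topic names, camera names, and height limits are assumptions, not the actual MiR config.

```yaml
# costmap_common_params.yaml (sketch, not the real MiR configuration)
obstacle_layer:
  observation_sources: front_top_camera back_top_camera
  front_top_camera:
    topic: /front_top_camera/points     # hypothetical Pico Flexx topic name
    data_type: PointCloud2
    marking: true
    clearing: true
    min_obstacle_height: 0.1
    max_obstacle_height: 2.0
  back_top_camera:
    topic: /back_top_camera/points      # hypothetical
    data_type: PointCloud2
    marking: true
    clearing: true
    min_obstacle_height: 0.1
    max_obstacle_height: 2.0
```

These are standard costmap_2d observation-source parameters; on the real robot you would have to merge them into whatever configuration MiR ships.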
If you want to "try before you buy", you could try including additional 3D cameras in the Gazebo simulation and the move_base config in this repo. This would at least let you figure out solutions to problem 2. Note that I wrote the move_base config in this repo just to be able to run it in Gazebo; it's not identical to the move_base config on the actual robot. Navigation on the actual robot works even better than with my config.
If you want, I can provide you the code that I wrote back then.
Thank you very much @mintar for the detailed explanation. It really helps.
I'll have a more thorough look at the simulation once I have a chance. We are mostly concerned with detecting dynamic obstacles at the payload level (somewhat similar to your use case), so there may be a way to have some of these detected points decay over time, though this is hard when you don't have direct access to the mapping and move_base nodes on the MiR.
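As a minimal sketch of the "payload level" idea: before feeding an external camera's cloud to any costmap, you could keep only points in the height band the payload actually occupies, so the floor and ceiling don't generate obstacles. Plain Python with a hypothetical point format ((x, y, z) tuples in metres, base frame); the band limits are made-up values.

```python
# Hypothetical helper: filter a point cloud (list of (x, y, z) tuples in the
# robot's base frame, in metres) down to a height band around the payload,
# so only obstacles that could actually hit the top structure remain.

def filter_payload_band(points, z_min=0.8, z_max=1.6):
    """Keep only points whose height z falls inside [z_min, z_max]."""
    return [(x, y, z) for (x, y, z) in points if z_min <= z <= z_max]

# Example: one floor point, one payload-level point, one ceiling point.
cloud = [(1.0, 0.0, 0.05), (1.2, 0.1, 1.1), (0.9, -0.2, 2.4)]
print(filter_payload_band(cloud))  # → [(1.2, 0.1, 1.1)]
```

In a real ROS node you would apply the same predicate to a `sensor_msgs/PointCloud2` (e.g. via `sensor_msgs.point_cloud2.read_points`) before republishing; alternatively, the costmap's own `min_obstacle_height`/`max_obstacle_height` parameters achieve a similar cut if you can edit that config.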
If you would be able to share your code of your previous experiment that would be much appreciated.
I'm sorry I didn't reply to this earlier. If you still need the code, let me know and I'll prepare something for you. It's a bit of work for me to dig through it all and put it into a presentable format, so only tell me if you really need it, not as a "nice to have".
Thanks @mintar. I'm starting to put together the equipment to do testing. Let me look into this in more detail and see if I can figure it out myself. If I can't, I'll be sure to reach out. Thanks again.
Hi @mintar. I have a MiR250 that I am trying to control through ROS (Noetic). You mentioned having accessed move_base inside the MiR computer; how is that done? Also, is there any way to set the robot to ignore the SICK configurations and drive close to walls?
Hi @ravescovi, since your questions are a bit different from this issue, I've created a new issue for that and will reply there: #94
The MiR100 only appears to directly support the use of two 3D cameras, which provides a limited field of view around the robot. I was wondering whether it would be possible to add additional 3D cameras to a host PC running this library and publish this data to a topic on the MiR PC, so that the robot can use it for obstacle avoidance.
We are currently evaluating the MiR100 for our project so I unfortunately can't test this myself. Thank you.
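One common pattern for getting external sensor data into a robot's ROS graph (a generic sketch, not MiR-specific documentation) is to run the extra camera drivers on the host PC but point them at the robot's ROS master, so their topics become visible on the robot side. The IP addresses below are hypothetical placeholders; whether the MiR's internal move_base can then be configured to consume those topics is the separate problem discussed above.

```shell
# Sketch only: replace the IP addresses with your actual network setup.
MIR_PC=192.168.12.20      # MiR internal PC, where the ROS master runs (hypothetical)
HOST_PC=192.168.12.201    # external host PC carrying the extra cameras (hypothetical)

# Nodes started in this shell register with the MiR's ROS master, so their
# point cloud topics appear on the MiR side of the network.
export ROS_MASTER_URI="http://${MIR_PC}:11311"
export ROS_IP="${HOST_PC}"
echo "Publishing into master at ${ROS_MASTER_URI}"
```

After sourcing this, you would launch the camera driver from the same shell; both machines also need to resolve each other's addresses for the topic transport to work.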