Open xiaoyuch24 opened 6 years ago
I do not know what to do about the 2 seconds of your main simulation. Do you have the best hardware? Does it matter whether you have a dedicated graphics card? Did you try OpenSceneGraph instead of Qt?
As for communication between OpenRAVE and ROS: the Machinekit people had a similar problem (https://github.com/luminize/ros_hello_machinekit). Maybe you can learn something from it.
Hi everyone,
I am currently working on a project with the Fetch robot in OpenRAVE. A camera (640×480) and a 'BaseFlashLidar3D' (182×137) are attached to the robot. I need to get the RGB image data and the point cloud data from the sensors and publish them to ROS topics.
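For reference, the publishing side of this can be done with numpy alone: ROS image and point cloud messages carry flat little-endian byte buffers, so the sensor arrays just need to be serialized. The sketch below is a hypothetical illustration using the sensor sizes above (it assumes an rgb8 image and an x/y/z float32 point layout; the actual OpenRAVE and rospy calls are omitted):

```python
import numpy as np

def pack_image(rgb):
    """rgb: (H, W, 3) uint8 array -> bytes for a sensor_msgs/Image
    'data' field with encoding 'rgb8' (row-major, step = W * 3)."""
    assert rgb.dtype == np.uint8 and rgb.ndim == 3 and rgb.shape[2] == 3
    return rgb.tobytes()

def pack_points(xyz):
    """xyz: (N, 3) float32 array -> bytes for a sensor_msgs/PointCloud2
    'data' field, assuming three float32 fields x, y, z (point_step = 12)."""
    return np.ascontiguousarray(xyz, dtype=np.float32).tobytes()

# Dummy buffers matching the sensor resolutions in the question.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cloud = np.zeros((182 * 137, 3), dtype=np.float32)
print(len(pack_image(frame)))   # 480 * 640 * 3 = 921600 bytes
print(len(pack_points(cloud)))  # 24934 * 3 * 4 = 299208 bytes
```

The byte strings would then be assigned to the `data` field of the corresponding ROS message before publishing.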
To achieve this, I do the following (I call this process one simulation):
simulation {
The above simulation costs around 2 seconds per run. However, I am working on online robot planning, where I need to run thousands of simulations, so I really need to get the data as fast as possible, ideally around 0.2 s. The other issue with the above method is that, after a long run, it gets stuck at some point when I do either power off or renderdata off, and returns the error shown below:
Error in `python': double free or corruption (out): 0x00007f6fda79aa20 Aborted (core dumped)
Besides, I am wondering why I have to call env.StepSimulation at all. This is also very time-consuming for me!
To get the sensor data faster, I already tried turning both sensors on at the very beginning and never turning them off. However, if I keep the 3D sensor on, the 'qtcoin' viewer becomes very slow; in that state the point cloud data is good, but the RGB image data is delayed for a long time.
Is there any faster way to get the sensor data and publish it to ROS topics? Is there a way to get the sensor data without opening the 'qtcoin' viewer?
I am using ROS Indigo under Ubuntu 14.04 and OpenRAVE 0.9.0.
Thank you very much!
Best, Yuchen