Original comment by Brian Gerkey (Bitbucket: Brian Gerkey, GitHub: gerkey).
Simulation is meant to be a stand-in for the physical system, with which you can't run synchronously. So we don't support writing ROS controller nodes that run that way. If you really want to run synchronously with the simulation, then your code needs to be inside a Gazebo plugin, where you can do whatever you want while simulation time is frozen.
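For illustration, a minimal sketch of such a plugin, assuming the standard Gazebo ModelPlugin API; the class and joint names are placeholders, not anything from drcsim:

```cpp
#include <boost/bind.hpp>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>

namespace gazebo
{
  class SyncController : public ModelPlugin
  {
    public: void Load(physics::ModelPtr _model, sdf::ElementPtr /*_sdf*/)
    {
      this->model = _model;
      // Run once per physics iteration, before the world advances.
      this->updateConnection = event::Events::ConnectWorldUpdateBegin(
          boost::bind(&SyncController::OnUpdate, this));
    }

    public: void OnUpdate()
    {
      // Simulation time is effectively frozen until this callback
      // returns, so a control law evaluated here runs synchronously
      // with the physics, regardless of the real-time factor.
      double torque = 0.0;  // ... compute control action ...
      this->model->GetJoint("some_joint")->SetForce(0, torque);
    }

    private: physics::ModelPtr model;
    private: event::ConnectionPtr updateConnection;
  };

  GZ_REGISTER_MODEL_PLUGIN(SyncController)
}
```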
On a related note, drcsim 2.0, expected soon, will include UDP-based control via ROS that our internal testing suggests will support pretty high control rates (higher than your 200 Hz).
Original comment by Jesper Smith (Bitbucket: jespersmith).
Let me rephrase the problem a bit. On our physical robots, we run a real-time OS and our control loop is guaranteed to run at (in our case) x Hz, with the restriction that the loop finishes in less than 1/x seconds. It is also guaranteed that sensors are read and control signals applied at every tick.
The simulation is not guaranteed to run in real time, especially not on all our development computers. That means that, even if we use a real-time system to guarantee an x Hz loop rate for our control, the effective loop rate in the simulation is not constant. Furthermore, UDP packets are not guaranteed to be delivered, and do not necessarily arrive in order.
So maybe synchronous control is a misnomer, but what we want is a guaranteed loop rate for our controller with respect to simulation time.
Our algorithms are based on force control of the robot instead of position control, so we cannot use the built-in PID controllers (which I assume are updated at a constant loop rate with respect to the simulation).
Original comment by Isura (Bitbucket: isura).
Does this mean that a type of real-time/deterministic controller will be included in the next release? To my knowledge, the older PR2 controller manager implementation did not have the issue mentioned by Jesper.
Original comment by John Hsu (Bitbucket: hsu, GitHub: hsu).
Hi Jesper,
Is your controller tolerant to any kind of jitter in the update rate at all? Is there a way to make it tolerant to jitter in the update loop, so that a command update can be late once in a while? If so, can you characterize what kind of latency is tolerable? If not, we'll have to find ways to create synchronous communication between the sim machine and the field computer.
John
Original comment by Twan Koolen (Bitbucket: tkoolen).
John,
As our software is currently implemented, we are not tolerant to any kind of jitter. With some work we could make it tolerant to a little jitter. However, more critical is a guaranteed maximum delay between the simulated time at which the controller receives state information and the simulated time at which the control action based on that state is applied in the simulator. For more detail on that request, please see our DRC forum post: https://www.theroboticschallenge.org/forum/default.aspx?Function_fst=ViewThread&ThreadID_fin=161
Also it's pretty critical that state readings are received in order and that control actions are applied in order.
Both the maximum-lag guarantee and the ordering can be achieved with synchronous communication between the simulation and the field computer. A simple direct TCP socket connection between the simulator and the field computer would achieve this easily. That might make it harder to get the sim to run in real time, but there are more fundamental issues going on (such as numerical instability of the dynamics engine) that will likely keep the sim from running in real time anyway.
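A minimal sketch of that lock-step exchange from the controller side, assuming Boost.Asio; the address, port, and fixed-size packet layout are made up, and real serialization is elided:

```cpp
#include <boost/asio.hpp>

int main()
{
  using boost::asio::ip::tcp;
  boost::asio::io_service io;
  tcp::socket socket(io);
  socket.connect(tcp::endpoint(
      boost::asio::ip::address::from_string("127.0.0.1"), 5555));

  double state[32];    // placeholder state packet
  double command[32];  // placeholder command packet
  for (;;)
  {
    // TCP delivers in order and retransmits losses, so every state
    // message is answered by exactly one command for the same tick,
    // giving both the ordering and the bounded-lag property.
    boost::asio::read(socket, boost::asio::buffer(state));
    // ... compute command from state ...
    boost::asio::write(socket, boost::asio::buffer(command));
  }
  return 0;
}
```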
Original comment by Jesper Smith (Bitbucket: jespersmith).
We created a plugin for our use that does basically what we want. It uses protobuf for communication over a simple Boost TCP server. Commands are received asynchronously, while state updates are sent synchronously. If no packet has been received, the plugin reuses the torque commands from the previous control tick.
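A rough sketch of that per-tick logic (not the actual GazeboStateCommunicator.cc; RobotState and RobotCommand are the protobuf types named below, and the helper methods are hypothetical names standing in for the Boost TCP server code):

```cpp
class StateCommunicator
{
  public: void OnUpdate()
  {
    RobotState state = this->ReadJointState();
    this->SendState(state);                 // synchronous state update

    RobotCommand incoming;
    if (this->TryReceiveCommand(incoming))  // non-blocking receive
      this->lastCommand = incoming;

    // On a missed packet, the previous tick's torques are applied again.
    this->ApplyTorques(this->lastCommand);
  }

  private: RobotCommand lastCommand;
  // Hypothetical helpers wrapping the Gazebo and socket calls:
  private: RobotState ReadJointState();
  private: void SendState(const RobotState& state);
  private: bool TryReceiveCommand(RobotCommand& command);
  private: void ApplyTorques(const RobotCommand& command);
};
```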
We have tried to make it somewhat efficient (using unordered_maps instead of getJoint to resolve joint names, for example), but we haven't profiled or tested it thoroughly yet.
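One possible shape of that joint-name caching (hypothetical, not the actual code): resolve every JointPtr once at load time so the per-tick hot path is a hash lookup instead of a model-wide search.

```cpp
#include <string>
#include <boost/unordered_map.hpp>
#include <gazebo/physics/physics.hh>

boost::unordered_map<std::string, gazebo::physics::JointPtr> jointMap;

// Build the name -> joint table once, at plugin load time.
void CacheJoints(const gazebo::physics::ModelPtr& model)
{
  gazebo::physics::Joint_V joints = model->GetJoints();
  for (size_t i = 0; i < joints.size(); ++i)
    jointMap[joints[i]->GetName()] = joints[i];
}

// Per tick: jointMap["joint_name"]->SetForce(0, torque);
```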
For debugging, several cheat options are available; these can be disabled using defines in the source file.
The plugin itself is in csrc/GazeboStateCommunicator.cc, listening on port 1234.
It works in tandem with a simple world controller plugin to pause, rewind and resume the simulation. This plugin is in csrc/SimpleWorldPlugin.cc, listening on port 1235.
The protocol buffers are defined in RobotCommand.proto for the torque commands, RobotState.proto for the robot state messages, and SimpleWorldControl.proto for the Gazebo commands. Messages are written to the TCP socket using writeDelimitedTo (included in the Java protobuf library; a simple C++ implementation is in ProtoParser.cc) and read using readDelimitedFrom (also with a simple C++ implementation in ProtoParser.cc).
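For reference, a common C++ sketch of this delimited framing, i.e. a varint length prefix followed by the message bytes; this shows the well-known technique, not necessarily the exact code in ProtoParser.cc:

```cpp
#include <google/protobuf/message_lite.h>
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/io/zero_copy_stream.h>

bool writeDelimitedTo(const google::protobuf::MessageLite& message,
                      google::protobuf::io::ZeroCopyOutputStream* rawOutput)
{
  google::protobuf::io::CodedOutputStream output(rawOutput);
  const int size = message.ByteSize();      // also caches sizes
  output.WriteVarint32(size);               // length prefix
  message.SerializeWithCachedSizes(&output);
  return !output.HadError();
}

bool readDelimitedFrom(google::protobuf::io::ZeroCopyInputStream* rawInput,
                       google::protobuf::MessageLite* message)
{
  google::protobuf::io::CodedInputStream input(rawInput);
  google::protobuf::uint32 size;
  if (!input.ReadVarint32(&size)) return false;   // length prefix
  // Restrict parsing to exactly one message's worth of bytes.
  google::protobuf::io::CodedInputStream::Limit limit = input.PushLimit(size);
  if (!message->MergeFromCodedStream(&input)) return false;
  if (!input.ConsumedEntireMessage()) return false;
  input.PopLimit(limit);
  return true;
}
```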
We made some launch files in launch/ to start the qualifications.
I've shared the source on Bitbucket. The Java classes in src are for our own controller and do not work stand-alone (but they might give an example of how to use the plugin).
Original comment by Jesper Smith (Bitbucket: jespersmith).
Just saw that the repo was private; it is public now.
Original report (archived issue) by Jesper Smith (Bitbucket: jespersmith).
Currently, the way to control the simulation is to publish ROS topics, which the simulation then reads and executes. Especially for our balance control, it is important to have fast and synchronous control (~200 Hz) of the robot. By synchronous control, we mean that the frequency of the control loop is guaranteed to be fixed with respect to the simulated world time.
If the simulation software or the control software runs faster or slower for whatever reason, the controller frequency changes in the simulated world.
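For illustration, a typical ROS control node paces itself with ros::Rate, as in the sketch below; the topic name and message type are placeholders. Whether the rate tracks wall time or the simulated /clock, topic transport remains asynchronous, so there is no guarantee about when a command lands relative to a physics step.

```cpp
#include <ros/ros.h>
#include <std_msgs/Float64.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "torque_publisher");
  ros::NodeHandle node;
  ros::Publisher pub =
      node.advertise<std_msgs::Float64>("joint_effort_command", 1);

  ros::Rate rate(200);  // 200 Hz target; not fixed in sim-world time
  while (ros::ok())
  {
    std_msgs::Float64 torque;
    torque.data = 0.0;  // ... compute control action ...
    pub.publish(torque);
    rate.sleep();       // paces the loop, but delivery is asynchronous
  }
  return 0;
}
```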