bulletphysics / bullet3

Bullet Physics SDK: real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning etc.
http://bulletphysics.org

pybullet transform debug tools #1136

Closed ahundt closed 5 years ago

ahundt commented 7 years ago

Two tools would be extremely helpful for pybullet:

  1. A way to add arbitrary poses/coordinate systems with optional visualization, like an unlimited version of CoordinateSystemDemo.
  2. A useful companion would be a way to get arbitrary transforms between any two objects, including what is loaded from files like URDF.

Is there a transform object with a vector + quaternion or are position and orientation always stored separately at the moment?

Here is my first pass at an API, which will need some refinement:

import pybullet as p

# create/modify a pose object
id = p.setPoseObject(transform,
                     itemUniqueId=None,  # create by default
                     relativeToId=WorldFrameId,
                     parentFrameId=WorldFrameId,
                     ballOpacity=1,
                     triadOpacity=1,
                     name="PoseObject####",
                     color="yellow")

# move any object to an arbitrary position relative to an arbitrary frame
success = p.setTransform(itemUniqueId,
                         transform,
                         relativeToId=WorldFrameId)

# change the frame within which the specified frame moves
success = p.setParentFrame(itemUniqueId, parentUniqueId)

# get the id of the parent frame
parentId = p.getParent(itemUniqueId)

# get the transform between two arbitrary objects
transform = p.getTransform(itemUniqueId, relativeToId=WorldFrameId)

# get separate v, q (maybe not necessary if the transform above is [v, q])
v, q = p.getTransformVectorQuaternion(itemUniqueId, relativeToId=WorldFrameId)

It would be important that any frame moves along with its parent, for example if you were setting a tool point on a gripper, the frame should move with the gripper. If the world frame is the parent, it should remain static.

Why is this important? It will be extremely useful for debugging, defining goals and visualizing them, etc.
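
(As an aside, feature 2 can already be approximated for base frames by composing existing pybullet calls; a minimal sketch, where bodyA and bodyB are hypothetical unique ids of two already-loaded bodies:)

import pybullet as p

# Pose of bodyB expressed in bodyA's frame: inv(T_world_A) * T_world_B.
# bodyA and bodyB are placeholders for unique ids of already-loaded bodies.
pos_a, orn_a = p.getBasePositionAndOrientation(bodyA)
pos_b, orn_b = p.getBasePositionAndOrientation(bodyB)
inv_pos_a, inv_orn_a = p.invertTransform(pos_a, orn_a)
rel_pos, rel_orn = p.multiplyTransforms(inv_pos_a, inv_orn_a, pos_b, orn_b)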

erwincoumans commented 7 years ago

I was planning to allow anchoring the user debug items (lines/text) to a specific body unique id and link index. That means there will be additional parameters and APIs related to addUserDebugText, addUserDebugLine, and removeUserDebugItem.

Such additions would include specifying a parent (object unique id/link index) and a coordinate system relative to the parent. Once that is in place, we can also query the Cartesian/world coordinates of the user debug item. You may not always want to visualize the debug item, so we can allow user debug items without graphical output. It may also be useful to add some generic linear algebra tools to compute relative transforms given two transforms, etc.

It looks like that would fit your proposed API too. Also, MuJoCo XML files (loaded using pybullet.loadMJCF) have similar functionality called 'site' (see http://www.mujoco.org/book).

Of course you can already add additional child links in the URDF file, without visual and without collision shapes, using a fixed joint. This will give you similar functionality to sites. Have you considered that (for feature #1)?
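
For reference, a minimal sketch of that workaround; the extra link has no visual or collision geometry and simply rides along with its parent (the link names and offset below are assumptions, not part of the stock model):

<!-- a site-like frame: an extra child link attached with a fixed joint -->
<link name="tool_site"/>
<joint name="tool_site_joint" type="fixed">
  <parent link="lbr_iiwa_link_7"/>
  <child link="tool_site"/>
  <origin xyz="0 0 0.05" rpy="0 0 0"/>
</joint>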

ahundt commented 7 years ago

Cool, thanks for the suggested workaround! That should work when the sites don't need to change.

In addition to MuJoCo, I think V-REP has some nice design elements in the UI and API, plus it uses Bullet as the default physics engine, haha.

erwincoumans commented 7 years ago

V-REP (and Gazebo) use the old Bullet 2.x maximal-coordinate method. pybullet (and OpenAI Roboschool) uses the newer Bullet btMultiBody with the Featherstone ABA method, which is more suitable for robotics. pybullet / the Bullet C-API also has inverse kinematics and inverse dynamics, and graphics rendering on CPU (for cloud simulation) and OpenGL3 etc. At the moment our inverse kinematics is based on damped least squares with improvements using the null space etc. It would indeed be useful to extend the inverse kinematics with constraints and solve the IK with constraints using some optimization method (like the DART sim library does). Note that several of Karen Liu's PhD graduates who worked on DART joined Google and I'm working with them.

ahundt commented 7 years ago

Thanks for the details, I'm looking forward to what's coming, that sounds awesome! I mentioned V-REP for the convenient UI and programmatic setup/scripting tools, and I agree the physics implementation difference is definitely important.

erwincoumans commented 7 years ago

> I'm looking forward to what's coming, that sounds awesome!

Ah wait, what specific part(s) exactly do you look forward to the most?

ahundt commented 7 years ago

Context would likely be most helpful. I was at X last summer, and I'm starting out with reinforcement learning for grasping because of the grasping dataset, so I'm looking to complement that. I have a few ideas on a method of predicting objectives, constraints, weights, etc. to train grasping to work more efficiently at runtime than existing papers, and to train grasping of specific objects. I was also hoping to look at transfer learning between simulation and real data, using simulation to augment the existing dataset. Eventually I hope to move from grasping to full end-to-end basic tasks, first trained in simulation, since I don't have 20 robots like Google. :-)

So, for those reasons I was hoping for this idealized laundry list, so don't worry if they're not likely to happen:

  1. Maybe just V-REP, Gazebo, or a more fully featured rendering engine integration could simplify much of this list?
    • I think V-REP and Gazebo might be able to run headless on a server.
    • Ensuring the developers of Gazebo & V-REP are aware of Bullet's new features could help too
  2. A "mimic joint"-like constraint for the 2-finger physical Robotiq gripper (one possible workaround is sketched after this list)
    • same UR5 as #1140 @cpaxton; apparently this is resolved, but he might have better suggestions for this list overall
  3. Transform Debug tools mentioned originally here (I'll likely use v-rep for now)
  4. Something equivalent to Tasks' more complete QP functionality
    • I'm deciding among: (1) using what's in pybullet now and waiting for improvements, (2) a quick graft of Tasks, or (2+) deeper integration of Tasks, but only if it might make sense to merge. Any suggestion?
  5. Better visual fidelity (or maybe just v-rep or gazebo integration?)
  6. Adding colored depth sensor data (point cloud/mesh/SLAM results too) with a way to update rapidly
    • not on your list, I'll probably use V-REP for this at the moment
    • If something like this ends up on your roadmap, look at Pangolin
  7. Camera Intrinsics
  8. Collision constraints
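
On item 2, a hedged sketch of one possible approach (not an official API): pybullet's gear constraint can couple two finger joints so they mirror each other, which approximates a URDF mimic joint. The gripper model and joint indices below are hypothetical placeholders:

import pybullet as p

# Couple two finger joints with a gear constraint (gearRatio=-1 mirrors the motion).
# 'gripper.urdf' and the joint indices are placeholders for an actual loaded model.
p.connect(p.DIRECT)
gripper = p.loadURDF("gripper.urdf")  # hypothetical model
left_finger, right_finger = 0, 2      # hypothetical joint indices
c = p.createConstraint(gripper, left_finger,
                       gripper, right_finger,
                       jointType=p.JOINT_GEAR,
                       jointAxis=[1, 0, 0],
                       parentFramePosition=[0, 0, 0],
                       childFramePosition=[0, 0, 0])
p.changeConstraint(c, gearRatio=-1, maxForce=100)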

By the way, are the inertial params etc. in the URDF for the KUKA 14 kg model accurate? I have one of those too, but I wouldn't want unexpected surprises if I send it joint torques from Bullet, haha.

erwincoumans commented 7 years ago

Thanks for the list, that helps in understanding what you (and likely others) need.

On (1) and (5): Can you be more specific about how Gazebo (and V-REP) rendering is better, and what rendering features are missing? Bullet's OpenGL3 and TinyRenderer are shader based, so it should be possible to improve them: the existing renderers support shadows (shadow map), texture mapping, a single light source, etc., but many models don't even have texture maps assigned.

Aside from improving the internal renderers (TinyRenderer/OpenGL3Renderer), we may integrate some better renderer, such as G3D, Mitsuba, Blender Cycles or AMD RadeonProRender: see https://github.com/GPUOpen-LibrariesAndSDKs/RadeonProRender-Baikal

(6) We provide the z-buffer and segmentation mask (visible object uid); how does a z-buffer differ from colored depth sensor data? See the attached rendering image.
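
(For reference, a minimal sketch of grabbing those buffers; it assumes an already-connected simulation, and the camera placement is a placeholder:)

import pybullet as p

# getCameraImage returns (width, height, rgb, depth, segmentation); the depth
# image is the z-buffer and the segmentation mask holds visible object uids.
view = p.computeViewMatrix(cameraEyePosition=[1, 1, 1],
                           cameraTargetPosition=[0, 0, 0],
                           cameraUpVector=[0, 0, 1])
proj = p.computeProjectionMatrixFOV(fov=60, aspect=320.0 / 240.0, nearVal=0.01, farVal=10.0)
width, height, rgb, depth, seg = p.getCameraImage(320, 240, viewMatrix=view, projectionMatrix=proj)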

(7) What kind of camera intrinsics would you like to see? (I recall someone worked on this within Google; we haven't exposed this in the open source version.)

(8) What and where do you use those collision constraints?

The KUKA inertial parameters are not really very accurate. Although we measured better values using system identification, we didn't end up using them, since we use position control on the KUKAs.

ahundt commented 7 years ago

> Thanks for the list, that helps in understanding what you (and likely others) need.

I'm very happy you're interested in feedback, thanks a lot for your consideration. I also know some of my particular interests may be in a niche that doesn't end up working out, so no worries if this doesn't match up with your project goals.

> On (1) and (5): Can you be more specific about how Gazebo (and V-REP) rendering is better, and what rendering features are missing? Bullet's OpenGL3 and TinyRenderer are shader based, so it should be possible to improve them: the existing renderers support shadows (shadow map), texture mapping, a single light source, etc., but many models don't even have texture maps assigned.

What you describe sounds like it could improve the appearance substantially. I have a high-level familiarity with each of those techniques, but I don't actually have a lot of rendering expertise. I also forgot to mention AirSim (https://github.com/Microsoft/AirSim): that level of quality, but indoors, or ray-tracing-based rendering would be ideal. I'd prefer open source though; I think AirSim uses UE4.

> Aside from improving the internal renderers (TinyRenderer/OpenGL3Renderer), we may integrate some better renderer, such as G3D, Mitsuba, Blender Cycles or AMD RadeonProRender: see https://github.com/GPUOpen-LibrariesAndSDKs/RadeonProRender-Baikal

I checked each site (but without looking at the APIs), and Blender Cycles looks like a great option for this: the rendering looks quite good, and it could be especially good considering it comes with the Apache v2 license.

I'll go off topic/pie in the sky for a second, forgive me. Apparently someone did rendering via TensorFlow (rendering with tensorflow code, rendering with tensorflow video). Is it possible everything, including rendering and physics, could be done in such a way that it is differentiable and could be directly incorporated into the gradients of the machine learning model? /pieinsky

The other part is setup, editing, and scripting of experiments. This ties back to your comment in the Python API discussion (https://github.com/bulletphysics/bullet3/issues/1138#issuecomment-302955419): V-REP, for example, lets a user create a scene with a mix of programmatic creation and creation with the V-REP UI. It's a tool I like and have used the most, but pybullet is compelling enough that I'm starting to use it where I can.

I was wondering what impact different approaches could have on the time to reach a mature RL simulation tool.

These aren't the only approaches and I'm sure the answer will vary based on specific goals and/or technical reasons. :-)

> (6) We provide the z-buffer and segmentation mask (visible object uid); how does a z-buffer differ from colored depth sensor data? See the attached rendering image.

I'll definitely be using that feature! However, I meant putting real sensor data into pybullet for dataset augmentation: taking real sensor data and putting it into pybullet as a point cloud (first step) or as a mesh via marching cubes or SLAM (later step), re-rendering the scene there, running a simulation inside, and pulling the augmented images back out.

That segmentation you have is amazingly convenient, by the way, thank you! I might want to add a way to assign the ids in two additional ways aside from unique object ids (a small remapping sketch follows this list):

  1. Specify specific ids for each object, for example to match existing real-world dataset classes like those in PASCAL VOC 2012, COCO, or COCO + coco-stuff.
  2. Assign ids for class annotation, instance annotation, or both. The other case, assigning ids persistently over simulation time, is covered by the existing unique ids, I assume.
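
A minimal sketch of item 1 as a post-processing step, assuming the segmentation buffer from getCameraImage and a hypothetical user-supplied uid-to-class mapping:

import numpy as np

# Remap pybullet object uids in the segmentation mask to dataset class ids.
# 'seg', 'width' and 'height' come from getCameraImage; the mapping is made up.
uid_to_class = {2: 15, 3: 20}          # hypothetical object uid -> class id (e.g. VOC)
seg = np.reshape(seg, (height, width))
class_mask = np.zeros_like(seg)
for uid, cls in uid_to_class.items():
    class_mask[seg == uid] = cls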

> (7) What kind of camera intrinsics would you like to see? (I recall someone worked on this within Google; we haven't exposed this in the open source version.)

A basic pinhole model plus setting the camera resolution would be great to start; this page has a pinhole model with some nice sliders: https://ksimek.github.io/2013/08/13/intrinsic/. More advanced and lower priority would be configuring distortion, noise, etc. I personally wouldn't worry about more complicated intrinsics models until proven necessary.
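
In the meantime, a minimal sketch of approximating a pinhole camera with the existing API, assuming square pixels and a centered principal point (the focal length value is a placeholder):

import math
import pybullet as p

# Derive the vertical field of view from the pinhole focal length fy (in pixels).
width, height = 640, 480
fy = 600.0  # placeholder focal length
fov_deg = 2.0 * math.degrees(math.atan(height / (2.0 * fy)))
proj = p.computeProjectionMatrixFOV(fov=fov_deg, aspect=width / height,
                                    nearVal=0.01, farVal=10.0)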

> (8) What and where do you use those collision constraints?

I don't use collision constraints yet, but I'm planning to use them to selectively enable/disable which regions of the scene and which regions of the robot will interact with each other. That is how I plan to make use of the redundancy of robots when it is available, for example so the elbow doesn't hit something.

> The KUKA inertial parameters are not really very accurate. Although we measured better values using system identification, we didn't end up using them, since we use position control on the KUKAs.

Cool, are those typically easy to figure out, or does it involve tricky physical aspects like timing? (Tasks implements this but I haven't run it.)

erwincoumans commented 7 years ago

This is a start, allowing lines and text to be rendered in a local frame (objectUniqueId, linkIndex): https://github.com/bulletphysics/bullet3/pull/1145/commits/db008ab3c215f4550cfe6914f331bd87626d1edf

import pybullet as p
import time

p.connect(p.GUI)
p.loadURDF("plane.urdf")
kuka = p.loadURDF("kuka_iiwa/model.urdf")
# text label plus red/green/blue axis lines, all anchored to link 6 of the KUKA
p.addUserDebugText("tip", [0, 0, 0.05], textColorRGB=[1, 0, 0], textSize=1.5, trackObjectUniqueId=kuka, trackLinkIndex=6)
p.addUserDebugLine([0, 0, 0], [0.1, 0, 0], [1, 0, 0], trackObjectUniqueId=kuka, trackLinkIndex=6)
p.addUserDebugLine([0, 0, 0], [0, 0.1, 0], [0, 1, 0], trackObjectUniqueId=kuka, trackLinkIndex=6)
p.addUserDebugLine([0, 0, 0], [0, 0, 0.1], [0, 0, 1], trackObjectUniqueId=kuka, trackLinkIndex=6)
p.setRealTimeSimulation(1)
while (True):
    time.sleep(0.01)

ahundt commented 7 years ago

Sweet, it showed up in the right spot for me on Mac.

erwincoumans commented 7 years ago

Cool, I'll add some examples and do some more testing/tweaking (rename trackObjectUniqueId/trackLinkIndex to parentObjectUniqueId/parentLinkIndex for more consistent naming).

ahundt commented 7 years ago

If the joints rotate, should the frame rotate accordingly?

erwincoumans commented 7 years ago

Yes, attached is an example. I will rename the API before merging, and include the modified example.

import pybullet as p
import time

p.connect(p.GUI)
p.loadURDF("plane.urdf")
kuka = p.loadURDF("kuka_iiwa/model.urdf")
p.addUserDebugText("tip", [0, 0, 0.05], textColorRGB=[1, 0, 0], textSize=1.5, trackObjectUniqueId=kuka, trackLinkIndex=6)
p.addUserDebugLine([0, 0, 0], [0.1, 0, 0], [1, 0, 0], trackObjectUniqueId=kuka, trackLinkIndex=6)
p.addUserDebugLine([0, 0, 0], [0, 0.1, 0], [0, 1, 0], trackObjectUniqueId=kuka, trackLinkIndex=6)
p.addUserDebugLine([0, 0, 0], [0, 0, 0.1], [0, 0, 1], trackObjectUniqueId=kuka, trackLinkIndex=6)
p.setRealTimeSimulation(0)
while (True):
    time.sleep(0.01)
    p.stepSimulation()  # stepping manually, so the frames update as the joints move

erwincoumans commented 7 years ago

Just updated and included the example: https://github.com/erwincoumans/bullet3/blob/master/examples/pybullet/examples/debugDrawItems.py. Note that you can press 'w', 'a' and 'l' in the example browser for additional debugging ('w' for wireframe, 'a' for axis-aligned bounding boxes and 'l' for link info).

ahundt commented 7 years ago

works for me!

ahundt commented 7 years ago

The API design might eventually benefit from a bit of simplification, but I can get going with this.

erwincoumans commented 7 years ago

Heh, unfortunately simplification of the pybullet API may not happen (any time soon), since I care a lot about backward compatibility (not breaking people's scripts etc.)... But if you have some simplification suggestions, please share them here. The GUI windows ('example browser') will be revamped, while maintaining backward compatibility: likely replacing GWEN with ImGui, replacing the modified glew with glad, and adding buttons to pause and restart the simulation, plus buttons for wireframe/bounding-box rendering etc.

ahundt commented 7 years ago

My original post which opened this thread is a decent example; I'll extend that a bit.

Specifically for the example you just posted, I think the parentObjectUniqueId/parentLinkIndex pair could be reduced to a single id, with an accessor function to get one as part of another.

Just a concept, probably needs an additional iteration:

import pybullet as p
import numpy as np
# these eigen calls could be numpy equivalents
import eigen as e

jointNum = 6
kuka = p.loadURDF("kuka_iiwa/model.urdf")

# this function makes many other APIs simpler since they now only need 1 UID
linkUID = p.getRobotLinkID(kuka, jointNum)
lineStart = np.array([0,0,0.1])
lineEnd = np.array([0,0,1])

# line API, only one UID parameter required;
# a second optional one could specify the frame the line
# is relative to, separately from the definition of which frame is the parent
p.addUserDebugLine(lineStart,
                   lineEnd,
                   parentObjectUniqueId=linkUID,
                   relativeToId=linkUID)  # either parent or world frame by default

# Frame API
transform = e.geometry.transform.identity()

# create/modify a pose object, only one UID parameter required
poseID = p.setPoseObject(transform,
                         itemUniqueId=None,  # create by default
                         parentFrameId=kuka,  # world frame by default
                         relativeToId=kuka,  # either parent or world frame by default
                         ballOpacity=1,
                         triadOpacity=1,
                         scale=1,
                         name="PoseObject####",
                         color="yellow")

# move any object to an arbitrary position relative to an arbitrary frame
success = p.setTransform(poseID,
                         transform,
                         relativeToId=WorldFrameId)

# change the frame within which the specified frame moves
success = p.setParentFrame(poseID, parentUniqueId)

# get the id of the parent frame
parentId = p.getParent(poseID)

# get the transform between two arbitrary objects
transform = p.getTransform(itemUniqueId, relativeToId=WorldFrameId)

# get separate v, q (maybe not necessary if the transform above is [v, q])
v, q = p.getTransformVectorQuaternion(itemUniqueId, relativeToId=WorldFrameId)

Perhaps old APIs can be left in place but deprecated?

It could also be reasonable to at least require numpy. I think a user in the various problem domains pybullet is designed for would want the performance benefits and almost certainly already have it.

erwincoumans commented 7 years ago

I see some convenience in a single id, but we picked the tuple (object unique id, link index) and prefer to stick with that. If you don't provide the object UID/link index, it will be in world space.

On the other topic, there is already the option to get base and link frames. The new APIs would involve:

1) changeDebugDrawText/Line to change text, line from/to, and transform (position/orientation)
2) getTransform(debugItem)
3) some helper functions to multiply transforms, compute inverses, etc.

I don't plan on making this a full matrix library, and prefer to use numpy, eigen or tensorflow for this, but some basic methods would be good.

erwincoumans commented 7 years ago

Of course you can make some mapping between a single id and object uid/link index, both ways, and somehow expand/compress between APIs, if you really need it. Debug items (lines, text) are already a single id of course (vaguely similar to 'sites').

But I think there is no real drawback in using two separate integers. By the way, some APIs have array functionality, for example setJointMotorControlArray, where you can access a subset/list of links to control.
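
For what it's worth, a minimal sketch of such a two-way mapping, packing the (object unique id, link index) tuple into a single integer; the 16-bit width is an arbitrary assumption:

LINK_BITS = 16  # assumes link indices fit in 16 bits

def pack_id(body_uid, link_index):
    # link index is -1 for the base, so offset by 1 to keep it non-negative
    return (body_uid << LINK_BITS) | (link_index + 1)

def unpack_id(packed):
    return packed >> LINK_BITS, (packed & ((1 << LINK_BITS) - 1)) - 1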

ahundt commented 7 years ago

Sorry, I lost track of this issue, but I think doing what I need would take a lot more effort than it does in V-REP, for example (ignoring the 1 vs. 2 param id difference):

id = simCreateDummy()
world = -1  # -1 means relative to the world frame
simSetObjectPosition(id, world, [1, 2, 3])
simSetObjectQuaternion(id, world, [1, 0, 0, 0])

It would be extremely convenient to both set and get a persistent debug pose, with position & orientation, relative to any other entity and with any parent. Defining the transform should really be a single function call; I don't think addUserDebugLine really covers this yet.

It would also be desirable to do this in bulk, and yes I have a specific use case in mind. :-)

erwincoumans commented 7 years ago

Well, addUserDebugText does exactly this; it is a frame relative to any object/parent link, and it is one function call: pybullet.addUserDebugText("optionalText", [relativePos], [relativeOrn], parentObjectUniqueId=..., parentLinkIndex=...).

Is the problem that you cannot easily access the actual world space coordinates of that user debug item? It is pretty trivial using pybullet.multiplyTransforms.
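
(For completeness, a minimal sketch of that computation; kuka and the local offsets are placeholders for an actual body and debug item:)

import pybullet as p

# World pose of an item attached to a link: T_world_link * T_link_item.
link_state = p.getLinkState(kuka, 6)
link_pos, link_orn = link_state[4], link_state[5]  # URDF link frame in world coordinates
local_pos, local_orn = [0, 0, 0.05], [0, 0, 0, 1]  # item pose relative to the link
world_pos, world_orn = p.multiplyTransforms(link_pos, link_orn, local_pos, local_orn)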

We can look into bulk versions, and possibly add persistent debug items without the text, where you can query for its world space position/orientation.

ahundt commented 7 years ago

> addUserDebugText does exactly this

The text kind of floats around in the demo, so the orientation is visually unknown; or perhaps it is fixed in the world frame?

Sorry if this is mentioned redundantly: an addUserDebugTransform with options, each defaulting to None, for xyz colored axes, a sphere (a cube works too) with size and color, and text would be ideal. Here is the V-REP version, called dummies:

[image: V-REP dummy object]

They're super useful!

> Is the problem that you cannot easily access the actual world space coordinates of that user debug item? It is pretty trivial using pybullet.multiplyTransforms.

Both world space coordinates and the coordinates relative to any other object/link. I agree it is definitely very easy, but it is perhaps worth automating on the backend with a relativeTo option. That'd simplify user code and prevent user error. Always a win :-)

> We can look into bulk versions, and possibly add persistent debug items without the text, where you can query for its world space position/orientation.

That would be great! Actually, for the bulk version, having it work like other collective entities, with a uniqueId and pointIndex, might be a good way to do it, so they can be collectively bulk-deleted too.

ahundt commented 6 years ago

#1258 is good for debug viewing, but I'm still interested in being able to query these transforms at any time and create them in bulk.

erwincoumans commented 5 years ago

We don't have the resources to work on this. If a volunteer shows up with a small patch, we'll consider merging it. Until then, closing old issues.