Issue Overview
In the current design, the transform graph must perfectly match the entity path hierarchy.
However, there are situations where the transform-graph representation in the user's data does not match the path hierarchy we want to use for logging. For example, a URDF file may contain several linkages that logically exist to match the mechanical design of the robot but are otherwise not interesting to the user.
At present the user must choose between two sub-optimal options:
Proposal: "Derived" transforms.
The idea is that the value of the transform for that link in the path hierarchy, rather than being fetched directly from the store, would be derived from some other function at render time. Specifically, this must be a function of time (along with some other data logged to the store). Crucially, however, the high-level transform semantics are unchanged: the transform (regardless of how it is derived) still represents the transform between the two spaces defined by the path hierarchy, and is considered fixed for a given instant in time.

As such, cross-space projection can be totally independent of the resolver; it just needs to know that at a given timepoint on a given timeline, the transform was derived as having a particular value. Likewise, the means of derivation need not play any role from a space-partitioning or view-construction perspective: the mere fact that a "derived transform" has been logged to a path allows us to do all of the same path-hierarchy analysis we already perform.
The current transforms already fit into this model. They simply use a "default resolver" that returns the latest-at value of the transform logged to that path.
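To make the resolver idea concrete, here is a minimal sketch of what the abstraction could look like. The `TransformResolver` protocol, `DefaultResolver` class, and `store.latest_at` query are all hypothetical names for illustration, not actual Rerun internals:

```python
from typing import Any, Optional, Protocol


class TransformResolver(Protocol):
    """Anything that can answer: what was this entity's transform at this time?

    How the value is derived is an implementation detail; cross-space projection
    only ever sees the resolved value for a given (timeline, timepoint).
    """

    def resolve(self, entity_path: str, timepoint: int) -> Optional[Any]: ...


class DefaultResolver:
    """The current behavior: latest-at lookup of the transform logged to the path itself."""

    def __init__(self, store: Any) -> None:
        self.store = store

    def resolve(self, entity_path: str, timepoint: int) -> Optional[Any]:
        # `store.latest_at` stands in for a latest-at query against the data store.
        return self.store.latest_at(entity_path, timepoint)
```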
Two immediate mechanisms for deriving transforms come to mind as being useful...
"graph-lookup" transform
First, raw transforms would be logged with a graph-centric representation.
For example, adopting the convention of the ROS tf system, transforms would be logged to a path derived from their source frame (e.g. `tf/<child_frame_id>`) and include a link to their destination frame (e.g. `tf/<parent_frame_id>`). Then, a given path can be logged as a derived "graph lookup" transform:
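As a purely illustrative sketch of what such calls might look like, none of these component names exist in the Rerun SDK and they simply mirror the entity-reference style of the parametric example further down:

```python
# Hypothetical: each tf edge lives under its child frame and records a link to its parent frame.
rr.log("tf/hand", rr.components.FrameTransform(parent="tf/wrist", translation=(0.0, 0.0, 0.1)))

# Hypothetical: the user-facing path is bound to a derived "graph lookup" transform that
# names the frames to connect; resolution walks the logged topology at query time.
rr.log("robot/arm/hand", rr.components.GraphLookupTransform(source="tf/base", dest="tf/hand"))
```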
At resolution time, this would then do a graph walk based on the logged topology.
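A rough sketch of that resolution step, assuming a hypothetical latest-at query that returns the logged edge with `parent` and `matrix` fields, and ignoring the general case where the two frames only meet at a common ancestor:

```python
import numpy as np


def resolve_graph_lookup(store, source: str, dest: str, timepoint: int) -> np.ndarray:
    """Compose source_from_dest by walking parent links from `dest` up to `source`."""
    source_from_dest = np.eye(4)
    frame = dest
    while frame != source:
        edge = store.latest_at(f"tf/{frame}", timepoint)  # hypothetical latest-at query
        if edge is None:
            raise LookupError(f"no transform logged for frame {frame!r} at time {timepoint}")
        source_from_dest = edge.matrix @ source_from_dest  # parent_from_child composition
        frame = edge.parent
    return source_from_dest
```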
Parametric transform
Much simpler than graph-based resolution is simple support for parametric transforms.
Once we have derived transforms, we can derive transforms from things like scalar values representing joint rotations or linear actuator offsets.
Example:
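The parametric transform could potentially be done using some kind of entity reference:

```python
# Log the joint angle as plain data...
rr.log("joint/arm", rr.components.Angle(radians=arm_rotation_rad))
# ...then derive the link's transform from it via a (proposed) reference to that entity.
rr.log("robot/arm", rr.Transform3D(axis=(1, 0, 0), angle=rr.components.Angle.Reference("joint/arm")))
```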
This avoids the need for something like the "robot state publisher" in ROS.
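For completeness, a sketch of what resolving that example could look like at query time; the `store.latest_at` call and the component layout are assumptions, not actual Rerun internals:

```python
import numpy as np


def resolve_arm_transform(store, timepoint: int) -> np.ndarray:
    """Derive robot/arm's transform from the referenced joint angle at `timepoint`."""
    angle = store.latest_at("joint/arm", timepoint)  # hypothetical latest-at query for the Angle
    axis = np.array([1.0, 0.0, 0.0])
    # Rodrigues' formula: rotation of `angle` radians about the (unit) `axis`.
    k = np.array([
        [0.0, -axis[2], axis[1]],
        [axis[2], 0.0, -axis[0]],
        [-axis[1], axis[0], 0.0],
    ])
    rotation = np.eye(3) + np.sin(angle) * k + (1.0 - np.cos(angle)) * (k @ k)
    transform = np.eye(4)
    transform[:3, :3] = rotation
    return transform
```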