cgokmen opened 2 years ago
Good point. We can have a camera object to store the viewpoint info, and the render function takes the camera object.
Note that we need to store the current viewpoint and previous viewpoint to render optical flow and scene flow information.
@fxia22 In that case our optical flow/scene flow setup could be quite buggy unless some code takes special care to reset the previous viewpoint only when the simulator timestep has changed.
(I'm saying this because, currently, many things can call the render function within a single simulation step.)
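A minimal sketch of the kind of special-casing I mean, assuming a hypothetical tracker class (none of these names are from the actual codebase): the previous viewpoint is rolled over only when the simulator's step counter advances, so repeated render calls within the same step don't clobber it.

```python
# Hypothetical sketch, not the actual renderer API: keep the previous
# viewpoint for optical/scene flow pinned to the last simulator step,
# no matter how many times render is called within the current step.

class FlowViewpointTracker:
    def __init__(self):
        self.prev_viewpoint = None   # viewpoint as of the previous sim step
        self.curr_viewpoint = None   # latest viewpoint in the current sim step
        self._last_step = -1         # sim step we last rolled over on

    def on_render(self, viewpoint, sim_step):
        if sim_step != self._last_step:
            # A new simulator step has begun: the old "current" viewpoint
            # becomes the "previous" one exactly once per step.
            self.prev_viewpoint = self.curr_viewpoint
            self._last_step = sim_step
        # Within the same step, later render calls only overwrite the
        # current viewpoint; prev_viewpoint stays fixed.
        self.curr_viewpoint = viewpoint
```

Without the step-counter guard, every render call would shift the "previous" viewpoint, and the flow computed between consecutive render calls in the same step would be meaningless.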
Currently, the renderer is used by first setting the camera and then calling render.
The renderer thus stores its camera state (e.g. the V and P matrices) and uses it when rendering. But within one simulation step there is not a single camera angle: the renderer's camera can be changed multiple times per simulator step depending on which viewers are enabled, how many robots are in the scene, and so on. Relying on this global camera state is therefore dangerous and bug-prone.
Instead, we should be able to simply construct a camera object, pass it to the render call, and render from the given viewpoint.
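A hypothetical sketch of what that could look like. The `Camera` class, the look-at math, and the `render()` signature here are all assumptions for illustration, not the renderer's actual interface:

```python
# Hypothetical sketch: a stateless, camera-object-based render API.
# render() derives the view matrix from its camera argument alone and
# never touches global renderer camera state.
import math
from dataclasses import dataclass, field
from typing import List

Vec3 = List[float]

def _sub(a, b): return [a[i] - b[i] for i in range(3)]
def _dot(a, b): return sum(a[i] * b[i] for i in range(3))
def _norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]
def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

@dataclass
class Camera:
    eye: Vec3
    target: Vec3
    up: Vec3 = field(default_factory=lambda: [0.0, 0.0, 1.0])

    def view_matrix(self):
        """Right-handed look-at view matrix (row-major 4x4)."""
        f = _norm(_sub(self.target, self.eye))  # forward
        s = _norm(_cross(f, self.up))           # right
        u = _cross(s, f)                        # true up
        return [
            [s[0],  s[1],  s[2],  -_dot(s, self.eye)],
            [u[0],  u[1],  u[2],  -_dot(u, self.eye)],
            [-f[0], -f[1], -f[2],  _dot(f, self.eye)],
            [0.0,   0.0,   0.0,    1.0],
        ]

def render(camera: Camera):
    """Sketch: render from the given camera, using no global state."""
    V = camera.view_matrix()
    # ... a real renderer would rasterize here using V (and a P matrix) ...
    return V
```

Because each call carries its own viewpoint, two different viewers (or the flow pipeline with its current and previous cameras) can render in the same simulation step without stepping on each other's state.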