brettle opened this issue 3 months ago
Further investigation reveals that the Viewpoint's perspective is updated after each step, but it is updated relative to the perspective that the Viewpoint had when tracking was initiated. As a result, to get the effect I expected (where the Viewpoint has the same position and rotation as the tracked solid), one needs to copy the translation and rotation fields of the tracked Solid into the position and orientation fields of the Viewpoint and then have the simulation running while you update the translation and rotation fields of the tracked Solid. Is that the way it is supposed to work?
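To illustrate the behavior described above, here is a minimal plain-Python sketch (not the actual Webots API; the function names are hypothetical) of offset-preserving tracking: the Viewpoint keeps whatever relative offset it had from the Solid when tracking was initiated, so making the offset zero at that moment gives the "same position as the tracked solid" effect.

```python
# Hypothetical mock of the tracking behavior, translation only
# (rotation composition omitted for brevity).

def start_tracking(viewpoint_pos, solid_pos):
    """Capture the Viewpoint's offset from the Solid at tracking start."""
    return [v - s for v, s in zip(viewpoint_pos, solid_pos)]

def update_viewpoint(solid_pos, offset):
    """Each simulation step, re-apply the captured offset so the
    Viewpoint follows the Solid."""
    return [s + o for s, o in zip(solid_pos, offset)]

# Copying the Solid's translation into the Viewpoint's position before
# tracking starts makes the offset zero, so the Viewpoint then coincides
# with the Solid as it moves:
offset = start_tracking([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])
print(update_viewpoint([3.0, 4.0, 0.5], offset))  # [3.0, 4.0, 0.5]
```

If the Viewpoint is instead left where the user placed it, the captured offset is nonzero and the camera follows the Solid from that fixed relative vantage point.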
Yes, that's the way it is supposed to work. It allows setting the viewpoint with some offset with respect to the position and orientation of the robot, which is often useful.
I can see how that would be useful, but I think that can be achieved while providing a user experience more in line with what I was expecting. Thoughts on the following possible changes?
If the current behavior is to be maintained, the associated documentation should probably be made clearer.
I believe there are use cases where the current behavior is useful. For example, if you watch a soccer game, you want to set the viewpoint to some position and orientation and instruct it to follow the ball. You don't want to change the viewpoint that you just set. Generally, changing anything set by the user is usually considered bad UX. So, I would not apply your first suggestion. The second one probably makes more sense and could be implemented.
I should have been clearer in my description of my first proposal above. I'm proposing that when the user initiates tracking, the Viewpoint should be set as follows:
To be clear, the above would only occur when the user initiates tracking. Since the user has requested the tracking, I don't see changing the Viewpoint in response as a bad UX. From a UX perspective, this behavior is the one that provides the "least surprise", imo.
To return to your soccer example, if you want to track the ball but not have it centered in the view, it seems easier to first track the ball and then adjust the Viewpoint than it would be to first set the Viewpoint and then track the ball.
It is difficult for me to evaluate this without being able to actually test it. Maybe you can go ahead with some implementation and we can test it to see whether it seems better than the current system.
Describe the Bug
Viewpoint tracking does not appear to be working.
Steps to Reproduce
Viewpoint.follow set to "robot:camera" and Viewpoint.followType set to "Mounted Shot".

Expected behavior
The view in the 3D window should be from the perspective of the camera.
Screenshots
Instead, the view looks like this:
System
Additional context
Running the simulation doesn't seem to have any effect on the issue. I'm seeing this in R2023b and the current R2024a master.