jwnimmer-tri opened 1 year ago
Some of the camera intrinsics have non-obvious mappings in Blender.

In Blender, we create the effect of non-uniform focal lengths using two mechanisms:

Currently, we explicitly set the vertical field of view. We converged on this via a bit of trial and error (as documented in this review conversation). Setting `camera.data.angle_x = params.fov_x` did not work.
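For reference, the vertical field of view implied by a horizontal one follows from the pinhole model. This is a hypothetical helper for illustration (not Drake's actual code), assuming square pixels (fx == fy):

```python
import math

def fov_y_from_fov_x(width, height, fov_x):
    """Derive the vertical FOV implied by a horizontal FOV under a
    pinhole model with square pixels (fx == fy):
    fx = width / (2 * tan(fov_x / 2)); fov_y = 2 * atan(height / (2 * fx)).
    """
    fx = width / (2.0 * math.tan(fov_x / 2.0))
    return 2.0 * math.atan2(height, 2.0 * fx)
```

For a square image the two angles coincide; for a landscape image the vertical FOV is strictly smaller.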
Further Blender investigation suggests that three other parameters can contribute to this topic:
Drake's camera parameters interconnect the fields of view, focal lengths, and image dimensions. Blender has the image dimensions (a render setting), but it also has sensor dimensions (the `sensor_width` and `sensor_height` indicated above). If the sensor aspect ratio doesn't match the camera's intrinsics, simple operations can lead to surprising outcomes.
The test we currently have works for a square image. However, prodding around inside of Blender shows that the value of `sensor_fit` (combined with the ratio of the sensor dimensions) can significantly impact the final rendering, playing havoc with the baseline focal length. We don't actively do anything with the sensor dimensions, but we should.
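A cheap guard for that mismatch might look like the sketch below; `cam_data` stands in for any object exposing Blender-style `sensor_width` / `sensor_height` attributes:

```python
def sensor_matches_image(cam_data, image_width, image_height, tol=1e-6):
    """Report whether the sensor aspect ratio agrees with the output
    image aspect ratio; when it doesn't, sensor_fit starts to matter."""
    sensor_aspect = cam_data.sensor_width / cam_data.sensor_height
    image_aspect = image_width / image_height
    return abs(sensor_aspect - image_aspect) <= tol
```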
The current pixel aspect ratio logic, while a bit counter-intuitive, seems robust and correct (assuming the baseline field of view is correct).
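For context, the Blender knobs involved are the render pixel-aspect settings (`render.pixel_aspect_x` / `render.pixel_aspect_y`). Below is a sketch of one possible mapping from anisotropic focal lengths; which axis gets stretched is my assumption for illustration, not taken from Drake's actual logic:

```python
def pixel_aspect_from_focal_lengths(fx, fy):
    """Map fx/fy anisotropy onto a (pixel_aspect_x, pixel_aspect_y)
    pair, keeping both components >= 1 (Blender clamps these to a
    minimum of 1). Which axis is stretched is an assumption here."""
    if fy >= fx:
        return (fy / fx, 1.0)
    return (1.0, fx / fy)
```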
Blender fun and games:

I defined the following two functions in a Blender console session:
```python
def cam(w, h, fov_x=None, fov_y=None):
    c.sensor_width = w
    c.sensor_height = h
    if fov_x is not None:
        c.angle_x = fov_x
    elif fov_y is not None:
        c.angle_y = fov_y
```
and
```python
def cam_fix(w, h, fov_x=None, fov_y=None):
    c.sensor_height = h
    c.sensor_width = w
    if fov_x is not None:
        c.sensor_fit = 'HORIZONTAL'
        c.angle_x = fov_x
    elif fov_y is not None:
        c.sensor_fit = 'VERTICAL'
        c.angle_y = fov_y
```
For a square image output aspect ratio, I executed the following (with the indicated results):
```python
c = bpy.data.objects.get("Camera").data
cam(32, 32, pi / 2)        # Expected scene framing.
cam(32, 32, None, pi / 2)  # Expected scene framing.
cam(32, 24, pi / 2)        # Expected scene framing.
cam(32, 24, None, pi / 2)  # Focal length decreased; scene appears to draw away.
cam(24, 32, pi / 2)        # Expected scene framing.
cam(24, 32, None, pi / 2)  # Focal length increased; scene appears to draw closer.
cam_fix(32, 32, pi / 2)        # Expected scene framing.
cam_fix(32, 32, None, pi / 2)  # Expected scene framing.
cam_fix(32, 24, pi / 2)        # Expected scene framing.
cam_fix(32, 24, None, pi / 2)  # Expected scene framing.
cam_fix(24, 32, pi / 2)        # Expected scene framing.
cam_fix(24, 32, None, pi / 2)  # Expected scene framing.
```
The result is different if the output image is rectangular. Simply declaring the `sensor_fit` is insufficient. Further investigation is required.
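One candidate fix, as an untested sketch (not verified across Blender versions): pin the sensor aspect ratio to the render aspect ratio so that `sensor_fit` has nothing left to reconcile. `cam_data` is any object with Blender-style sensor attributes; 36 mm is Blender's default sensor width.

```python
def fit_sensor_to_render(cam_data, render_w, render_h, base_mm=36.0):
    """Force sensor aspect == render aspect by fixing the sensor width
    and deriving the height. A sketch of the idea, not verified."""
    cam_data.sensor_width = base_mm
    cam_data.sensor_height = base_mm * render_h / render_w
```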
But as far as this issue goes, we should make sure we test output images with various aspect ratios, as well as anisotropic aspect ratios.
We also need to investigate the sensor aspect ratio output in our glTF files (e.g., test code). Where does the aspect ratio come from, and does it inform the sensor dimensions in Blender? Should we be doing something explicit about that on the Drake side?
Our current tests use a single "camera model" (width, height, center, fov, etc.). We should expand our test coverage to ensure that any model the user specifies ends up being correctly applied.
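As a sketch of what that expanded coverage might iterate over (the names and values here are illustrative, not Drake's actual test fixtures):

```python
import itertools

def camera_model_grid():
    """Cross image aspect ratios with focal-length anisotropy to get a
    small matrix of camera models worth rendering in the tests."""
    sizes = [(640, 640), (640, 480), (480, 640)]  # square, wide, tall
    fy_over_fx = [1.0, 0.75, 1.5]                 # isotropic and not
    return [
        {"width": w, "height": h, "fx": 500.0, "fy": 500.0 * r}
        for (w, h), r in itertools.product(sizes, fy_over_fx)
    ]
```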