SMPTE / ris-osvp-metadata

Creative Commons Attribution 4.0 International

Lens Data Misc - Entrance Pupil, CoC, focal adapters... #9

Open revisionfx opened 1 year ago

revisionfx commented 1 year ago

I have a few questions (motivated by partial ignorance rather than suggestions):

1) In a table like the one linked below, the CoC is hard-coded to produce the DOF table.

cooke-cinematography-lens-depth-of-field-chart.pdf (cookeoptics.com)

On the VES list, people are very interested in the DOF effect (as well as stylization like aperture shapes for bokeh)... and some suggest the "CoC" is what should be transported to effects tools...

Is it possible to derive the constant CoC used to generate this sort of table (linked above) from other variables, if we have the Min and Max focus (I know it's complicated, as the halfway point is not in the center and not always a 2:1 power function of distance) and FD? Is it just a function curve that stops at half-"infinity" (the hyperfocal distance)? In other words, could it be approximated and provided as a curve by the lens vendor?

In effects parlance, the bokeh ball has a maximum size... which could be derived from something similar to CoC (a non-constant value), I guess?
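For reference, the usual model behind such published DOF charts can be sketched with thin-lens optics (this is my assumption of how the tables are generated, not Cooke's stated method; the 0.025 mm CoC criterion below is a commonly assumed Super35 convention, not a value from the chart):

```python
# Thin-lens sketch of blur-circle diameter vs. subject distance, for a lens
# focused at s_focus. All distances in millimetres. Real cine lenses deviate
# from this idealized model, especially close-up and wide open.

def blur_diameter_mm(f, n, s_focus, s_subject):
    """Diameter of the blur circle on the sensor for a point at s_subject."""
    aperture = f / n                 # entrance-pupil diameter
    mag = f / (s_focus - f)          # magnification at the focus plane
    return aperture * mag * abs(s_subject - s_focus) / s_subject

def hyperfocal_mm(f, n, coc):
    """Focus here and everything from roughly H/2 to infinity stays within coc."""
    return f * f / (n * coc) + f

# Example: a 50 mm lens at T2.8 focused at 3 m, 0.025 mm CoC criterion.
print(f"hyperfocal: {hyperfocal_mm(50, 2.8, 0.025) / 1000:.1f} m")
print(f"blur at 5 m: {blur_diameter_mm(50, 2.8, 3000, 5000):.3f} mm")
```

The non-constant "bokeh ball size" mentioned above would be `blur_diameter_mm` evaluated away from the focus plane; the chart's constant CoC is just the threshold below which that blur is deemed acceptably sharp.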

///

2) I am also not clear about the entrance pupil point. I understand the idea; I have seen implementations and had to read about it years ago, since it's similar to the no-parallax point I used in panorama stitching. And I am OK with the CG camera perspective transform (image plane) being located based on that virtual aperture location. It's not that straightforward in practice, and I assume it will vary based on FD at least... would this be approximated with just a function curve too?

As in, could it be parametrized with a few points? I am wary of any process that requires shooting grids/checkerboards. I understand it might be in /i data.

///

3) For a lens adapter affecting the flange distance: I saw somewhere, maybe on an ASWF list, someone posting a link to lens/camera data in JSON schema form that included the adapter transform. I can't find that reference to forward here.

This reminds me: if you have ever used one of these with a focal reducer, it will often shift the center (critical when dealing with fisheyes). Even putting the same lens on the same camera might generate two different centers at the scale of photosites (a known issue for people doing stereo), and even the physical aperture hole might not be exactly circular or aligned. I am not an optical lens designer and this is out of scope here, but I see that active machine-vision benches use a micro-translation stage to calibrate the center. I always wondered if someone could make a simple lens-illuminating cap with a micro-controller (knobs) to move the light dot to align the center... where in-camera (or in an attached monitor), for example, colored pixels are generated to visually align, or at least to capture micro-offsets of the center. I understand this would generate a blurry circle if placed on an illumination cap, which is fine. Without such a scheme, any tech-marketing mention of sub-pixel accuracy is not substantiated. I have seen an ad-hoc technique, tutorialized by SynthEyes, that collects a greyscale gradient by shooting a flat illuminated surface.

4) For lens softness at the edges, vignetting, and distortion: are you missing a base definition that would simply be two circles, where the internal circle is the defined, known "good coverage" (as Cine-Lens calls it)? That should be part of the lens specs but is not necessarily what is imaged. For example, if you use a certain lens designed for full-frame 35mm on a larger sensor, it will image something outside of the good coverage circle (which is what the lens vendor should spec). This extra material might still be more useful in post than just cropping, and is sometimes even a look... (e.g. shooting with a full-frame lens on a RED Monstro, you might have only 7K of 8K of good coverage, as the sensor is larger than full frame).
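The two-circle idea above boils down to comparing the sensor's half-diagonal against the good-coverage radius. A minimal sketch, using approximate published sensor dimensions (treat the numbers as illustrative, not vendor specs):

```python
# Check whether a sensor fits entirely inside a lens's "good coverage" circle.
# Dimensions are in millimetres and are approximate published values.

import math

def sensor_corner_radius_mm(width_mm, height_mm):
    """Distance from the sensor center to a corner (half the diagonal)."""
    return math.hypot(width_mm, height_mm) / 2.0

def covered(good_circle_diameter_mm, width_mm, height_mm):
    """True if the whole sensor sits inside the good-coverage circle."""
    return sensor_corner_radius_mm(width_mm, height_mm) <= good_circle_diameter_mm / 2.0

# Full-frame lens (~43.3 mm image circle) on a RED Monstro 8K VV
# (~40.96 x 21.60 mm sensor): the corners fall outside good coverage.
print(covered(43.3, 40.96, 21.60))
```

Anything imaged between the good-coverage circle and the outer (total illumination) circle is the "extra stuff" described above.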

Pierre

revisionfx commented 1 year ago

I quote here from VES meta-data list for reference:

"It is definitely an ill-defined term in that there are two completely different but equally valid definitions. But I would say in the context of “reproducing virtual lenses” and focus characteristics the definition which is “The diameter of the Out of Focus circle as projected on the sensor” (aka the size of the bokeh circle projected on the sensor) is the relevant one of the two, not the “what the most focused point of light looks like” (although that definition would also ideally be sensor independent).

Your cited concerns are more relevant to the definition of what constitutes the “Hyperfocal” threshold which I agree is entirely subjective, sensor dependent and therefore irrelevant to modeling lenses but it’s also completely different from either definition of CoC. The only commonality between “Circle of Confusion” and “Hyperfocal” is that CoC is one of the variables one needs to know to calculate Hyperfocal Distances regardless of what definition of Hyperfocal you pick. A CoC (in a simplified model) is independent of whether you have a Bayer Pattern, small pixel pitch, big pixels, micro lenses etc..

In VFX artist terms I think CoC is an ideal unit to use and it’s easy to define and implement no matter what focus method you use whether that’s in the renderer or as a post process. For a single focal plane @ focus distance it would be:

Bokeh Kernel Size (in screen space pixels) = CoC Diameter (in Real World Units) / Sensor Pixel Pitch (in Real World Units).

Those measurements could be done on a Lens Projector independent of the camera sensor and then fed into your defocus kernel/raytracer. Modeling the aliasing/softening of a particular sensor pattern would be a problem for the sampling algorithm independent of the lens model or a separate kernel layered on top if done in 2D comp. With a proper CoC model you should be able to use the same CoC values applied to emulated 16k film grain or a 1080p bayer sensor.

e.g. If the CoC model tells you it’s 0.2mm diameter @ 10m with focus at 5m then the defocus kernel on a RED Helium 8K (274px / mm) = 54.7px bokeh vs Alexa LF (121px/mm) = 24.2px bokeh. Those are nice simple unambiguous values that a VFX artist can easily understand and apply as needed. And passing it around as a metric unit also means that if you measure CoC on an Alexa I don’t need to know that because you told me in mm independent of the camera (maybe you even did it in a lens simulator) and that’s one less piece of information I need to apply the model.

You could also store it easily as a 3 dimensional function: coc(Subject Distance, Focus Distance) or you could add 2 more dimensions to your function at varying levels of precision: coc(Subject Distance, Focus Distance, ImageCircleX, ImageCircleY) or return two basic values for a simplified lens model: coc,vignette(Subject Distance, Focus Distance, ImageCircleX, ImageCircleY) for basic cat-eye vignetting. You could also do Chromatic Aberration coc(Subject Distance, Focus Distance, wavelength) then apply a kernel of CoC size for each wavelength fed.

Gavin Greenwalt

"

JGoldstone commented 11 months ago

@revisionfx in re: your point 2 of the topmost post, the entrance pupil is the pinhole of a virtual camera; but you need the pinhole focal length to find out where the pinhole camera's image plane would be. For /i, one can use the formula given in the Cooke "Camera and Lens Definitions for VFX" paper, but only if one has the distance between the front nodal point (which is NOT the same thing as the entrance pupil) and the image plane. In this text:

[image: excerpt from the Cooke paper containing "…we can calculate…"]

one can be forgiven for thinking the 'we' in '…we can calculate…' is inclusive of the reader of the document. Reader, it is not; that's Cooke using the 'royal we'. Only the lens designer knows where that front nodal plane is for any given combination of lens ring positions.

To me, the 'right thing' would be for the same system delivering dynamic lens data from the lens (focus distance, t-stop, nominal focal length, effective focal length, entrance pupil offset) to grow by one metadatum: pinhole focal length. That's more direct than delivering the distance of the front node position relative to either the object or image plane, and has less chance of causing a misunderstanding in someone who hasn't, e.g., read the Cooke paper. (I will forever be grateful to Kees Van Oostrum, former ASC President, who is now taking Cooke forward; were it not for his support this paper would not exist.)

revisionfx commented 10 months ago

Zeiss, via /i3 support, has entrance-pupil metadata... I don't disagree with what you say. The info is also available for photo lenses here: https://www.photonstophotos.net/GeneralTopics/Lenses/OpticalBench/OpticalBench.htm (in that UI, P and P' mark the pupils). For zoom lenses, use the button next to the scenario to see different samples, and press the pupils button to display their location inside the lens body (or outside, in some cases).

trandzik commented 8 months ago

@JGoldstone this is very interesting, thanks for sharing. If I understand it correctly, one cannot simply obtain a proper focal length for the pinhole camera model from Cooke's /i data (since it doesn't provide the front node position required in the mentioned equation)... Is that correct?

Anyway, it would be great to have the proper focal length for the pinhole camera model available directly in /i metadata - just like you said. I was surprised to see it isn't already there, and even more surprised that one can't even calculate it from the given data.

JGoldstone commented 8 months ago

Actually, I believe it can be derived from the given data, but there was a terminological inconsistency that made me think otherwise. Trying to find it now, looking in my notes...

OK. "Cooke /i Technology Part III" from 2021, §14, says:

"The basic pinhole camera model is defined by the following parameters:

A little later, in §§17-18, it shows how a normalized focus setting dn is derived as the quotient of a static quantity (the lens minimum focus setting) and a dynamic quantity (the current focus setting). With the normalized focus setting in hand, a cubic polynomial in dn at the bottom of §17, with coefficients f0 through f3 being measured per-lens values, gives the principal distance f, a.k.a. … pinhole focal length.

The Cooke "Camera and Lens Definitions for VFX" paper mentions pinhole focal length six times, and principal points and principal planes once each (in the context of effective focal length), but never equates principal distance with effective focal length.
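The derivation described above is a two-step evaluation, sketched below. The coefficient values are made-up placeholders: the real f0..f3 are per-lens measured quantities delivered in the lens data, and the exact /i3 wire format is not reproduced here.

```python
# Sketch of the /i3 pinhole-focal-length derivation summarized above
# ("Cooke /i Technology Part III", SS17-18): a normalized focus setting dn
# feeds a cubic polynomial whose coefficients are measured per lens.

def normalized_focus(min_focus_mm, current_focus_mm):
    """dn: static minimum focus setting divided by the dynamic current focus."""
    return min_focus_mm / current_focus_mm

def pinhole_focal_length_mm(dn, f0, f1, f2, f3):
    """Principal distance f (a.k.a. pinhole focal length), cubic in dn."""
    return f0 + f1 * dn + f2 * dn**2 + f3 * dn**3

# Hypothetical 50 mm prime with 0.45 m minimum focus, focused at 3 m.
# Coefficients are illustrative placeholders, not real calibration data.
dn = normalized_focus(450.0, 3000.0)
f = pinhole_focal_length_mm(dn, 51.2, -1.8, 0.9, -0.2)
print(f"dn = {dn:.3f}, pinhole focal length = {f:.2f} mm")
```

Note how f varies with focus setting: even a "50 mm" prime does not have a constant pinhole focal length, which is exactly why a nominal focal length alone is not enough.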

revisionfx commented 8 months ago

It has Focal Length (e.g. 50mm) as a nominal value (a particular lens could be 49.7mm and another of the same model 50.1mm). I attached a simple old file which has wrong values (Lockit was not working very well back then) - note how focus changes further down. (42614124 is Infinity, I guess? Pupil Distance is an int that flips between 9 and 10, ...) Horizontal FOV is an ambiguous value - is it just the image-circle hint from the lens vendor about the lens being in focus within that range (as opposed to the whole coverage, which is a function of the sensor's physical dimensions)?

I can't reboot my older QNAP (it goes into a perpetual reboot) with my /i3 sample files. Does anyone have a sample file with zoom in it from a non-anamorphic lens (Cooke does not make regular zooms: Zeiss, Angenieux, Fujinon, ...)? I don't remember what the nominal focal length would be then (two args?) and whether zoom is the actual Focal Length.

Some of the confusion is about the location of the pinhole (if one wants to locate the pinhole at the non-static entrance pupil point). CookeS3_50mm_test1.xlsx

https://cookeoptics.com/wp-content/uploads/2023/07/Cooke-Camera-Lens-Definitions-for-VFX-210723.pdf

trandzik commented 8 months ago

@JGoldstone @revisionfx thank you very much for the explanation and attached documents - great stuff. I have never worked with /i3 data, but it certainly seems very interesting (any sample data would be great if someone has some).

Now that I understand how one can obtain the parameters for the pinhole camera model (f, cx, cy), along with distortion coefficients (k1, k2, k3, p1, p2), using the mentioned function of the focus setting, I started wondering how Cooke handles changes in these data for zoom lenses. Or is this standard designed to work only for prime lenses?

In zoom lenses these data would have to change across two axes (meaning they should be calculated as a function of both focal length and focus setting) and therefore arranged in some sort of matrix, right? Or do you think the zoom lens would report different lens data as its focal length changes (meaning the complete set of 42 floats forming the data would dynamically change with changes in focal length)? The document doesn't seem to provide answers in this area, so I am a bit confused.
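One plausible arrangement of the matrix idea above (purely an assumption on my part - nothing in the Cooke document specifies this layout) would be to calibrate each value on a (focal length, focus setting) grid and interpolate at runtime:

```python
# Sketch: store a per-lens calibration value on a (focal length, focus
# setting) grid and bilinearly interpolate between samples. Grid values
# are made-up placeholders for, e.g., a distortion coefficient k1.

from bisect import bisect_right

def interp2(xs, ys, grid, x, y):
    """Bilinear interpolation of grid[i][j] sampled at xs[i], ys[j]."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    a = grid[i][j] * (1 - ty) + grid[i][j + 1] * ty
    b = grid[i + 1][j] * (1 - ty) + grid[i + 1][j + 1] * ty
    return a * (1 - tx) + b * tx

focal_mm = [20.0, 50.0, 100.0]       # zoom positions sampled at calibration
focus_mm = [500.0, 2000.0, 10000.0]  # focus settings sampled at calibration
k1_grid = [[-0.10, -0.08, -0.07],
           [-0.04, -0.03, -0.03],
           [-0.01, -0.01, -0.01]]

# Query at 35 mm zoom, 1 m focus - between calibration samples on both axes.
print(interp2(focal_mm, focus_mm, k1_grid, 35.0, 1000.0))
```

The alternative reading of the standard (the lens streaming a fresh set of the 42 floats as the zoom ring moves) would make this table the lens manufacturer's internal problem rather than the consumer's.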

Also, I am not sure if this is the right place to ask, but since you seem to know a lot about this stuff, I was wondering what you think about this sentence from the Cooke website: "ZEISS eXtended Data is included in Cooke /i Technology providing shading and distortion data for post-production." https://cookeoptics.com/i-cubed-technology/

To me, it seems like ZEISS XD lenses (which originally introduced the "fork" of /i data, adding distortion coefficients and shading characteristics to the custom XD protocol) should now correctly report shading and distortion data through the latest standard /i3. This would basically make the whole XD "fork" obsolete (I think this would be a good thing, as having such forks in the /i protocol doesn't really help with standardization). However, I can't seem to find an answer to this, so I should probably contact ZEISS - but I thought maybe you have worked with some of their XD lenses and have answers. Thanks

revisionfx commented 8 months ago

It would be cool to have a sample lens-data dump from a zoom lens going over its range; I don't have that. In theory, the focal length would vary dynamically for a zoom lens, and yes, on a cine-camera varifocal an object is supposed to stay in focus as you zoom in and out. There are shading and distortion data out of Cooke lens data. Not wanting to attach to their tech marketing, but Cooke will say they measure each lens individually (as a nominal 28mm can be 27.7 for one lens and 28.3 for another), and Zeiss will say they have tighter manufacturing tolerances, so they can work from a mathematical model simulation for a given lens model.

Given the type of work you do, one idea would be to ask eztrack.studio for sample files, maybe? And I stand corrected: it looks like Cooke regular zoom lenses came back last year (shows how much attention I pay): [ Varotal/i FF - Cooke Optics ]. I am not clear whether they have modeled these zoom lenses yet. Asking other vendors who say they support the /i3 protocol would be useful too: https://www.angenieux.com/wp-content/uploads/2016/01/20151201-ASU-WEB-1.pdf