SMPTE / ris-osvp-metadata

Creative Commons Attribution 4.0 International

Capture Rate #5

Open revisionfx opened 1 year ago

revisionfx commented 1 year ago

Thanks for adding this. For high-speed video the capture rate is often different from the playback rate stored as metadata in the recorded video file (also because there is no SMPTE FPS above 120); the latter could be added as well, since it is already in the media file. Just a note here: a number of small cameras (phones, webcams, ...) are actually VFR (Variable Frame Rate). The file has a playback rate, but each frame carries its own timestamp, so at a nominal 59.94 you might only have 57 frames actually recorded in a second.
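To make the VFR point concrete, here is a minimal sketch (with hypothetical timestamp data, not from any real file) comparing a file's nominal playback rate against the rate implied by per-frame timestamps:

```python
# Sketch: a VFR file advertises a nominal playback rate, but the per-frame
# timestamps tell you how many frames were actually captured.
# The timestamp list below is invented example data.

nominal_fps = 59.94

# Per-frame presentation timestamps in seconds; here only ~57 frames were
# actually recorded over one second of wall-clock time.
timestamps = [i / 57.0 for i in range(57)]

# Effective rate = (frame count - 1) / elapsed time from first to last frame.
elapsed = timestamps[-1] - timestamps[0]
effective_fps = (len(timestamps) - 1) / elapsed

print(f"nominal {nominal_fps:.2f} fps, effective {effective_fps:.2f} fps")
```

With real footage the gaps between timestamps are uneven, so the effective rate can drift over the clip; the per-frame timestamps, not the container's playback rate, are what you would trust for matching.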

Additional side note: when matching metadata, I often see that cameras which are not good at multi-threading tend to record (sample other sensors, e.g. inertial sensors) after the frame record is done. I have seen a 1.5-frame offset between metadata and the center of the frame capture interval. Similarly with sound: it's often half a frame off if you consider the center of a frame capture interval (shutter speed) to be at 0.5, while the sound is captured at a much higher rate over the full frame interval.
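A small sketch of that half-frame sound offset, with illustrative numbers (the frame rate and shutter time here are assumptions, not from any particular camera): a frame's "time" is taken as the center of its exposure, while the audio for that frame spans the full frame interval, so the audio window's center lands later.

```python
# Sketch: offset between the center of a frame's exposure and the center of
# the audio captured over the full frame interval. Illustrative values only.

fps = 24.0
frame_duration = 1.0 / fps
frame_start = 10.0            # some frame boundary, in seconds
exposure_time = 1.0 / 250.0   # a fairly fast shutter, opening at frame start

# Frame "time" = center of the exposure (shutter open) interval.
exposure_center = frame_start + exposure_time / 2.0

# Audio for this frame covers the whole frame interval, so its center is
# at the midpoint of the interval.
audio_center = frame_start + frame_duration / 2.0

offset_in_frames = (audio_center - exposure_center) / frame_duration
print(f"audio center trails exposure center by {offset_in_frames:.2f} frames")
```

With a short shutter the offset approaches the half-frame figure mentioned above; with longer exposures it shrinks.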

The other thing not discussed here, related to this, is line-locked sync, or even pixel sync if the two cameras are the same model. You need to go to an IP time model (much more precise than FPS-level sync) to sync cameras that way. I would consider support for that a good-to-have. For example, I have a set of Panasonic block cameras over Ethernet in my home studio and they line-lock very well. This is really important for stereo, for example.

JGoldstone commented 9 months ago

...hoping SMPTE will consider this one-page excerpt from a 58-page document as some sort of 'fair use'...

In RDD 55 we define a new essence container carrying 'Supplemental Data' that can be interleaved with the traditional essence containers for image, sound, etc. ... The important thing is that it is not tied to frame boundaries per se; rather, both the frame-based image and sound and ... essence and the Supplemental Data contain PTP timestamps.

[image]

If you are a member of the RIS for OSVP you should have access to the two RDDs (54 for ARRIRAW-specific stuff, 55 for general ARRI camera metadata).

[image]

Note that these are DRAFT (and watermarked) RDDs, but the process of going from that late a draft to the final document was mostly about typo finding and way, way too much time fighting Word on figures...

When you said 'IP time model', did you mean PTP? Or if not, what, and how does it compare with PTP?

For the most accurate PTP sync (sub-nanosecond, like they have at CERN) you pretty much need to be wired. I am curious as to how well one can do with PTP sync in a wireless environment on an OSVP set.

revisionfx commented 9 months ago

PTP-like (nanoseconds, as opposed to NTP's milliseconds, I guess; not to imply exactly 1 ns, just better than ms), just like most high-end industrial/computer-vision cameras do, with one camera acting as master and timestamping the network packets. I only did it over a wired Ethernet network with vendor-supported cameras of the same model, so I never looked too deeply into it, but I think it is possible with WiFi antennas now. My main concern is line sync (more precision than timecode frame sync), and even being able to represent pixel sync when it's the same camera model. Applications might be: stereo video, exact audio sync, multi-view such as some 360 camera setups, volumetric capture, and virtual-set sync with camera motion and alignment of different samplers. E.g., from Z-Cam marketing: "Pixel-level multicamera sync can be achieved for up to 100 devices—requires optional cable(s)."
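To illustrate why nanosecond-scale timestamps help here, a minimal sketch (invented timestamps, not real camera data) of pairing frames from two cameras by PTP-style nanosecond timestamps rather than by timecode frame numbers:

```python
# Sketch: match frames from two cameras by nanosecond timestamps within a
# tolerance, instead of relying on frame-rate/timecode alignment.
# All timestamps below are hypothetical example data.

NS_PER_S = 1_000_000_000

def pair_frames(ts_a, ts_b, tolerance_ns):
    """Greedily pair two sorted nanosecond-timestamp lists within a tolerance."""
    pairs = []
    j = 0
    for ta in ts_a:
        # Skip B frames that are too early to ever match this A frame.
        while j < len(ts_b) and ts_b[j] < ta - tolerance_ns:
            j += 1
        if j < len(ts_b) and abs(ts_b[j] - ta) <= tolerance_ns:
            pairs.append((ta, ts_b[j]))
            j += 1
    return pairs

# Two cameras at ~24 fps; camera B's clock trails A's by 200 microseconds,
# which is far below a frame interval but visible at nanosecond resolution.
fps = 24
cam_a = [i * NS_PER_S // fps for i in range(10)]
cam_b = [t + 200_000 for t in cam_a]

# Tolerance of half a frame interval pairs each A frame with its B neighbor.
pairs = pair_frames(cam_a, cam_b, tolerance_ns=NS_PER_S // (2 * fps))
print(len(pairs), "matched frame pairs")
```

The residual 200 µs offset survives in the paired timestamps, so a consumer can see exactly how far apart the two sensors fired; frame-level timecode would report the cameras as perfectly in sync.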