JPC-AV / JPC_AV_videoQC

AV processing scripts for the Johnson Publishing Company archive
GNU General Public License v3.0

encoding settings tag #31

Open BleakleyMcD opened 7 months ago

BleakleyMcD commented 7 months ago

make a better encoding settings value. see end of discussion in issue #30

@eddycolloton @EmCNabs @dericed @chialinchou1

eddycolloton commented 7 months ago

Encoder Settings Examples

Here's some example encoder settings from NMAAHC files: C=Color, S=Analog, VS= NTSC, A=4:3, T=Sony SVO-5800, T=Blackmagic UltraStudio 4K Mini SN123456, ffmpeg vrecord; in-house, O=FFV1mkv, W=10-bit, M=YUV422p10, N=Emily Nabasny

O=VHS, C=Color, S=Analog, VS= NTSC, F=24, A=4:3, R=720x486, T=Sony SVO-5800, O=FFV1mkv, C=Color, V=Composite, S=Analog Stereo, F=24, A=4:3, W=10-bit, R=720x486, M=YUV422p10, T=Blackmagic UltraStudio 4K Mini SN123456, ffmpeg vrecord; in-house, O=FFV1mkv, W=10-bit, R=720x486, M=YUV422p10, N=Emily Nabasny

O=VHS, C=Color, S=Analog, VS= NTSC, F=24, A=4:3, R=640×480, T=Sony SVO-5800, O=FFV1mkv, C=Color, V=Composite, S=Analog Stereo, F=24, A=4:3, W=10-bit, R=640×480, M=YUV422p10, T=Blackmagic UltraStudio 4K Mini SN123456, ffmpeg vrecord; in-house, O=FFV1mkv, W=10-bit, R640x480, MYUV422p10 N=AJ Lawrence

(I've reproduced the last set "as is" but I believe the last line ought to be R=640x480, M=YUV422p10, N=AJ Lawrence)

The existing method has the obvious benefit of drawing on standards, plus it is efficient in that you get a lot of info in just a few characters. Another benefit of sticking with this and modifying it slightly is keeping continuity with metadata in existing nmaahc files.

But it is pretty complicated.

Questions for you:


For reference, here is the "key" from p14 + p15 of BWF Embed Guidelines:

Each variable within a string is separated by a comma-space and each line should end with a carriage return and line feed.

Summary of subelements:

  • A=coding algorithm
  • F=sampling frequency
  • B=bit rate (only for MPEG)
  • W=word length/bit depth
  • M=mode/sound field
  • T=free ASCII text string; contains no commas but semicolons may be used

Detail on subelement syntax:

  • A = Coding Algorithm <ANALOG, PCM, MPEG1L1, MPEG1L2, MPEG1L3, MPEG2L1, MPEG2L2, MPEG2L3>
  • F=Sampling frequency <11000, 22050, 24000, 32000, 44100, 48000, 96000, 176400, 192000, 384000, 768000> Implied unit of measure [Hz]
  • B (ONLY FOR MPEG ENCODING) = Bit-rate <any bit-rate allowed in MPEG 2 (ISO/IEC13818-3)>, Implied unit of measure [kbit/s per channel]
  • W= Word Length/Bit Depth <8, 12, 14, 16, 18, 20, 22, 24, 32> Implied unit of measure [bits]
  • M=Mode/Sound Field <mono, stereo, dual-mono, joint-stereo, multitrack, multichannel, streams >
  • T=Text, free string <a free ASCII-text string for in house use. This string should contain no commas (ASCII 2Chex). Examples of the contents: ID-No; codec type; A/D type; track number for multitrack recordings, description of channel layout for multichannel audio, number and arrangement of streams>

Sound Directions Example:

A=ANALOG,M=mono,T=Studer816; SN1007; 15 ips; open reel tape,
A=PCM,F=96000,W=24,M=mono,T=Pyramix1; SN16986,
A=PCM,F=96000,W=24,M=mono,T=Lynx; AES16; DIO,

(* see note below about EOL comma use)

Explanation: Line 1 reads: an analog, mono, open-reel tape played back on a Studer 816 tape machine with serial number 1007 at tape speed 15 ips. Line 2 reads: tape was digitized to PCM coding in mono mode at 96 kHz sampling frequency and 24 bits per sample on a Pyramix 1 DAW with serial number 16986. Line 3 reads: the audio was stored as a BWF file with PCM coding in mono mode at 96 kHz sampling frequency and 24 bits per sample using a Lynx AES16 digital input/output interface.

NOTE: These examples from the Sound Directions project include a comma (“,”) at the end of each line of text but the EOL comma is not included in EBU R98. FADGI is including the comma in this document to faithfully represent the Sound Directions example but FADGI does not require EOL commas.

eddycolloton commented 7 months ago

I found the FADGI DPX guidelines that I believe these are drawing from.

Looks like the nmaahc "keys" are not 1:1 with either of these guidelines though. Are they custom? Do we have the list somewhere?

Page 36-37 of DPX Embed Guidelines:

Values: The first line documents the source film reel, the second line contains data on the capture process and the third line contains data on the storage of the file. A new line is added when the coding history related to the file is changed. Each variable within a string is separated by a comma-space and each line should end with a carriage return and line feed. Each variable is optional, to be used when needed. The exception to this rule is that each row must start with “O=” to designate the start of a new row.
  • O=format (reversal, print, positive, negative, DPXv1, DPXv2, etc.)
  • G=gauge (super8mm, 8mm, 16mm, 35mm, etc.)
  • C=color (color, BW)
  • S=sound (silent, composite optical, composite mag, separate optical reel, separate mag reel, etc.)
  • D=summary of condition issues, especially if condition impacts visual quality of digitized image
  • F=frames per second
  • A=aspect ratio
  • L=timing, grading (one-light, scene)
  • W=bit depth (12-bit, 10-bit, 8-bit, etc.)
  • R=resolution (2K, 4K, 8K, etc.)
  • M=color model (RGB Log, etc.)
  • N=name of vendor or operator who scanned film (if applicable)
  • T=free ASCII text string; contains no commas but semicolons may be used.

Example:

O=positive, G=16mm, C=color, S=silent, F=24, A=4:3, D=warped
O=DPXv1, L=one-light, W=10-bit, R=2K, M=RGB Log, T=FilmScannerA; SN123456; in-house
O=DPXv1, W=10-bit, R=2K, M=RGB Log

[Explanation: Line 1 reads: a 16mm positive color print, with no associated soundtrack, at 24fps and 4:3 aspect ratio (1.375:1). The film was warped and impacted the visual quality of the image.
Line 2 reads: film was digitized to a DPX version 1 file. One-light grading was employed. The image is 10-bit at 2K resolution (2048x1556) with RGB Log color model. The film was digitized via FilmScannerA (anonymized name of film scanner), serial number 123456, which is an in-house film scanner. Line 3 reads: the file is stored as DPXv1, 10-bit 2K RGB log]

Example:

O=positive, G=35mm, C=BW, S=optical, F=24, A=4:3, T=composite optical
O=DPXv2, L=one-light, W=10-bit, R=4K, M=RGB Log, T=FilmScannerB; SN98765; SoftwareX; soundtrack in frame; offsite, N=ScanningVendor1
O=DPXv2, W=10-bit, R=4K, M=RGB Log

[Explanation: Line 1 reads: a 35mm positive black and white print, with a composite optical soundtrack, at 24fps and 4:3 aspect ratio (1.375:1). The film had some shrinkage.
Line 2 reads: film was digitized to a DPX version 2 file. One-light grading was employed. The image is 10-bit at 4K resolution (4096x3112) with RGB Log color model. The film was digitized via FilmScannerB (anonymized name of film scanner), serial number 98765 and processed through SoftwareX (anonymized name of workstation processing software). The optical soundtrack is captured within the image frame. The film was digitized offsite by ScanningVendor1 (anonymized name of digitization vendor). Line 3 reads: the file is stored as DPXv2, 10-bit 4K RGB log]

eddycolloton commented 7 months ago

I think ideally we could create 3 fields:

  1. source
  2. capture process
  3. storage file

Each field would have the subfields that correspond to the letters in the FADGI dpx guidelines. This way we're still capturing the same metadata and in the same standardized format, but in mkv tags (which I think can have sub elements?). We would also have reverse compatibility because we could always convert these fields back to the FADGI format if we need:

The first example from my post this morning, in the new format would be:

  • source

    • color: Color
    • coding algorithm: Analog
    • video standard: NTSC
    • aspect ratio: 4:3
    • free text: Sony SVO-5800
  • capture process

    • free text: Blackmagic UltraStudio 4K Mini SN123456, ffmpeg vrecord
    • name of vendor or operator: in-house
  • storage file

    • format: FFV1mkv
    • bit depth: 10-bit
    • color model: YUV422p10
    • name of vendor or operator: Emily Nabasny

BleakleyMcD commented 7 months ago

> [quoting @eddycolloton's "Encoder Settings Examples" comment above in full]

@eddycolloton yes it was a combination of the two FADGI standards (and maybe @EmCNabs found a non-FADGI one for video at some point?) but it was clearly hard to keep consistent and only recently standardized (for NMAAHC) in the work @EmCNabs is doing. But, it is a compromise and, as you noted, complicated.

BleakleyMcD commented 7 months ago

> [quoting @eddycolloton's FADGI DPX guidelines comment above in full]

@eddycolloton documented here - https://confluence.si.edu/display/NMAAH/Embedded+Metadata%3A+mkv+Video

and here - https://github.com/NMAAHC/documentation/blob/main/02_video_preservation/mkv_tags.md

but already outdated and incongruent between the two.

Moving fast - hard to keep up!

Latest version of mkvnote (https://github.com/NMAAHC/nmaahcmm/blob/mkvnote_expand/mkvnote) @dericed has final (final?) tags in key:value style using almost exclusively official MKV tags... the value for ENCODING_SETTINGS being currently under discussion.

(some_test_env) /Users/bleakley/github/nmaahc/nmaahcmm
≈:≈ ./mkvnote /Users/bleakley/Desktop/2012_79_2_49_1a_PM.mkv 

Note: Invalid profile or no profile specified with -p. You will be prompted to select a profile.

Available tag profiles and their respective tags:

1) JPC profile tags:
    - COLLECTION
    - TITLE
    - CATALOG_NUMBER
    - DESCRIPTION
    - DATE_DIGITIZED
    - ENCODING_SETTINGS
    - ENCODED_BY
    - ORIGINAL_MEDIA_TYPE
    - DATE_TAGGED
    - TERMS_OF_USE
    - _TECHNICAL_NOTES
    - _ORIGINAL_FPS

2) NMAAHC profile tags:
    - COLLECTION
    - TITLE
    - CATALOG_NUMBER
    - DESCRIPTION
    - DATE_DIGITIZED
    - ENCODING_SETTINGS
    - ENCODED_BY
    - ORIGINAL_MEDIA_TYPE
    - DATE_TAGGED
    - TERMS_OF_USE
    - _TECHNICAL_NOTES
    - _ORIGINAL_FPS
    - _TAGTAG

You must select a tag profile.
Press 1 for JPC, 2 for NMAAHC, or 0 to exit.

Enter your choice: 
BleakleyMcD commented 7 months ago

> I think ideally we could create 3 fields:
>
>   1. source
>   2. capture process
>   3. storage file

OK. That's an idea; it tracks with the FADGI BWF guidelines.

> Each field would have the subfields that correspond to the letters in the FADGI dpx guidelines. This way we're still capturing the same metadata and in the same standardized format, but in mkv tags (which I think can have sub elements?)

I think so too.

> We would also have reverse compatibility because we could always convert these fields back to the FADGI format if we need:

Debatable if it's better to throw everything in one tag or break it out into individual tags/sub-tags. The latter could be very laborious to fill out. @dericed had suggested ENCODING_SETTINGS templates...

  • format (reversal, print, positive, negative, DPXv1, DPXv2, etc.)
  • gauge (super8mm, 8mm,16mm, 35mm, etc.)
  • color (color, BW)
  • sound (silent, composite optical, composite mag, separate optical reel, separate mag reel, etc.)
  • video standard (nmaahc has been using "VS=" for NTSC. Can we use "color mode" for this, or do we need this field?) We can use color mode
  • summary of condition issues, especially if condition impacts visual quality of digitized image
  • frames per second
  • aspect ratio
  • timing, grading (one-light, scene)
  • bit depth (12-bit, 10-bit, 8-bit, etc.)
  • resolution (2K, 4K, 8K, etc.)
  • color model
  • name of vendor or operator who scanned film (if applicable)
  • free ASCII text string; contains no commas but semicolons may be used. --- this is "T=" in both DPX and BWF, but despite just being "free text" it is used pretty similarly across all of the examples in both guidelines. I'm leaving it as "free text" for now but maybe we should change it to "transfer details" or something?

The first example from my post this morning, in the new format would be:

  • source

    • color: Color
    • coding algorithm: Analog
    • video standard: NTSC
    • aspect ratio: 4:3
    • free text: Sony SVO-5800
  • capture process

    • free text: Blackmagic UltraStudio 4K Mini SN123456, ffmpeg vrecord
    • name of vendor or operator: in-house
  • storage file

    • format: FFV1mkv
    • bit depth: 10-bit
    • color model: YUV422p10
    • name of vendor or operator: Emily Nabasny

Some of these, such as "name of vendor or operator", are duplicative of other mkv tags listed in the tag profiles in mkvnote. I want ENCODING_SETTINGS to document the equipment/hardware used for capture. Most of 1, and I think all of 3, are either entered elsewhere or easily accessed from logs.

something similar to what you suggested:

very open to suggestions! but definitely want to keep it easy!

eddycolloton commented 7 months ago

Oh, that's a good point, a lot of this is duplicated information.

If the priority is to document the capture equipment, maybe we just want a signal flow text string?

New MKV tag could just be "capture signal flow". Looks like you have a syntax started here already, using semi-colons to separate devices.

Something like: capture device, serial number, signal type ; next device, next device's sn, next signal type ; etc...

For example: Sony SVO-5800, SN: 12345, composite ; DPS575, SN: 23456, SDI ; BlackMagic Ultra Jam, SN: 34567 ; vrecord, ffmpeg

This is easy enough to split programmatically with only the 1st subfield (capture device) being a required field, and SN and signal type being optional.
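That splitting could be sketched as follows; the function name and field labels here are hypothetical, not from the repo:

```python
# Sketch of parsing the proposed "capture signal flow" string.
# Devices are separated by ";", subfields by ",". Only the first subfield
# (capture device) is required; serial number and signal type are optional.

def parse_signal_flow(flow: str) -> list[dict]:
    devices = []
    for record in flow.split(";"):
        parts = [p.strip() for p in record.split(",")]
        device = {"device": parts[0]}
        for extra in parts[1:]:
            if extra.upper().startswith("SN"):
                device["serial_number"] = extra
            else:
                # anything else (signal type, software name) lands here;
                # a real implementation would need a smarter rule
                device["signal_type"] = extra
        devices.append(device)
    return devices

flow = ("Sony SVO-5800, SN: 12345, composite ; DPS575, SN: 23456, SDI ; "
        "BlackMagic Ultra Jam, SN: 34567 ; vrecord, ffmpeg")
parsed = parse_signal_flow(flow)
# parsed[0] -> {'device': 'Sony SVO-5800', 'serial_number': 'SN: 12345', 'signal_type': 'composite'}
```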

I'd be curious if there is any standardization in documenting signal flow? Seems doubtful.

This of course would be throwing out the fadgi/bwf stuff entirely...

BleakleyMcD commented 6 months ago

@eddycolloton ya that is more or less what I'm thinking. The FADGI stuff was (I think) a compromise for trying to fit all this information into the file header, either wav or dpx, with byte restrictions and without breaking the file. We have no such problem with mkv, which is kinda designed for exactly what we are trying to do.

I like your example. Maybe modify as such: Source VTR: SVO5800, SN 122345, composite ; Frame sync: DPS575 (or could be internal, referring back to the VTR), SN 23456 SDI ; Capture Device: Black Magic Ultra Jam, SN 34567, Thunderbolt (not sure what the best descriptor here is...) ; Computer: Mac Mini, SN 45678, OS 14.4, vrecord (version), ffmpeg

This would (hopefully) be something akin to a profile that could be copied/pasted for batches of tapes. The OS and vrecord versions may change a bit (maybe unnecessary?) but everything else would be largely static per format.

eddycolloton commented 6 months ago

Yeah I think this works. Do we want to keep the field name "Encoder Settings" or change it to "transfer signal flow" or something?

Would it be convenient to have some kind of spreadsheet macros form or google form or something to format this string?

eddycolloton commented 6 months ago

Just summarizing our meeting from today...

leaving the field name the same: encoder settings

Encoder Settings format would be: Source VTR: model name, serial number, video signal type (Composite, SDI, etc.), audio connector type (XLR, RCA, etc.) ; Frame sync/TBC/name TBD: model name (or could be internal, referring back to the VTR), serial number, SDI (note if audio is embedded) ; Capture Device: model name, serial number, data connection type (Thunderbolt/PCIe/SATA/etc.) ; Computer: model name, serial number, computer OS version, capture software (including version), encoding software (ffmpeg version not required)

We can either make all subfields required, and have the code expect 3 subfields per field, and expect all named fields above, or make it more flexible.

Now that we have outlined this, I've changed my mind from what I said in the meeting. I do not think we should make the subfields optional. I'm more in favor of starting with the check being strict, expecting all exact field names and requiring all subfields, and then we can make it more flexible if necessary.

eddycolloton commented 6 months ago

As of this commit: https://github.com/JPC-AV/JPC_AV_videoQC/commit/4919c715c41f6c30dc6f03cb5418490447f04990

The ffprobe_check checks for the following fields in ENCODER_SETTINGS: Source VTR, Frame sync, Capture Device, Computer (separated by ;)

It then checks for an expected number of subfields = {'Source VTR': 4, 'Frame sync': 3, 'Capture Device': 3, 'Computer': 5}

Lastly it checks to see if each field has a serial number matching any of the following formats (not case sensitive): ["SN ", "SN-", "SN##"]

@BleakleyMcD Give it a try when you can, and let me know when you would like to merge into the main branch.

If the JPC_AV files won't have anything in ENCODER_SETTINGS, the script will just report "No 'encoder settings' in ffprobe output"
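The checks described above could look something like this sketch; the linked commit is the authoritative implementation, and its exact parsing and serial-number matching may differ:

```python
import re

# Simplified approximation of the ffprobe_check ENCODER_SETTINGS validation.
EXPECTED_SUBFIELDS = {"Source VTR": 4, "Frame sync": 3, "Capture Device": 3, "Computer": 5}

# "SN ", "SN-", or SN followed directly by digits, case-insensitive
SN_PATTERN = re.compile(r"\bSN[ -]?\d+", re.IGNORECASE)

def check_encoder_settings(value: str) -> list[str]:
    """Return a list of problems found; an empty list means the value passed."""
    errors = []
    fields = {}
    for section in value.split(";"):
        name, _, rest = section.strip().partition(":")
        fields[name.strip()] = [s.strip() for s in rest.split(",")]
    for field, count in EXPECTED_SUBFIELDS.items():
        if field not in fields:
            errors.append(f"missing field: {field}")
        elif len(fields[field]) != count:
            errors.append(f"{field}: expected {count} subfields, found {len(fields[field])}")
        elif not SN_PATTERN.search(" ".join(fields[field])):
            errors.append(f"{field}: no serial number found")
    return errors

errors = check_encoder_settings(
    "Source VTR: SVO5800, SN 122345, composite, XLR ; "
    "Frame sync: DPS575, SN 23456, SDI ; "
    "Capture Device: Black Magic Ultra Jam, SN 34567, Thunderbolt ; "
    "Computer: Mac Mini, SN 45678, OS 14.4, vrecord (2024.01.01), ffmpeg")
# errors == [] for this well-formed value
```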

BleakleyMcD commented 6 months ago

@eddycolloton after hearing from Morgan, he suggests splitting the TBC field into two: TBC/Framesync and ADC.

If we use the 575 as a TBC/framesync and an ADC, we can put 575 for both. But if we use an internal TBC/framesync and a 575 to convert, then we can put “internal” for the TBC/framesync and 575 for the ADC.

That adds a lot more potential variability to the number of subfields, which makes the idea of choosing specific profiles for this tag that the script can populate and/or check against appealing...

BleakleyMcD commented 6 months ago

Dub Transcoder for using 295.

BleakleyMcD commented 6 months ago

@eddycolloton

After further thought and discussion, let's separate TBC and Framesync:

Source VTR: model name, serial number, video signal type (Composite, SDI, etc.), audio connector type (XLR, RCA, etc.) ; TBC: model name (or could be internal, referring back to the VTR), serial number, video signal type (Composite, SDI, etc.) ; Framesync: model name (or could be internal, referring back to the VTR), serial number, video signal type (Composite, SDI, etc.) ; ADC: model name, serial number, video signal type (Composite, SDI, etc.), note if audio is embedded ; Capture Device: model name, serial number, data connection type (Thunderbolt/PCIe/SATA/etc.) ; Computer: model name, serial number, computer OS version, capture software (including version), encoding software (ffmpeg version not required)

The Framesync and the ADC will pretty much always be the same when using the 575, which is our go-to ADC. But for some formats we may also use the 295 in the signal chain, in which case there would be yet another category like "transcoder" or "color processor". Anyway we can ignore the 295 option for now.

eddycolloton commented 6 months ago

After further thought and discussion, lets separate TBC and Framesync

Sounds good! I will add these fields to the checks

choosing specific profiles for this tag that the script can populate and/or check against appealing

Sure we can do this.

We'll need to consider where to store the different signal flow values we're testing against. Right now the config.yaml stores expected values that the video file is checked against. That file is editable, but we haven't yet automated changes to it. The "profiles" of the command_config.yaml say to run certain checks, but not which values to check.

There are plenty of good ways to do this of course; it's just a matter of choosing what is intuitive for us. We're already using yaml, so I'd like to store the fields in that format, plus it can protect against missing a comma or semicolon. Some options:

  1. We could store signal flow metadata in a separate config file, name the different signal flows, for example "JPC_AV SVHS", "NMAAHC Media Lab DigiBeta", etc. Then we add a signal flow field to the command_config, and build on our existing command_config editing functions.

  2. We already have the ability for multiple values to be "approved" in the config.yaml. So another approach would be for any acceptable signal flow value to be added to the config.yaml. This would obviously not protect against an approved signal flow being embedded in the wrong file, but it would prevent unapproved values (typos for instance) from being embedded.

  3. Or we make a new function that changes the config.yaml file. This might make the most sense in the long run, as I can imagine scenarios where you would want to check against different sets of approved metadata values.
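For illustration, option 1 could look something like the sketch below. The file name and layout are hypothetical; the profile values are the JPC_AV_SVHS examples used elsewhere in this thread.

```yaml
# signalflow_profiles.yaml (hypothetical file) -- named signal flow profiles;
# a command_config entry could reference one of these by name.
JPC_AV_SVHS:
  Source VTR: [SVO5800, SN 122345, composite]
  TBC: [SVO5800, SN 122345, composite]
  Framesync: [DPS575, SN 23456, SDI]
  ADC: [DPS575, SN 23456, SDI]
  Capture Device: [Black Magic Ultra Jam, SN 34567, Thunderbolt]
  Computer: [Mac Mini, SN 45678, OS 14.4, vrecord (2024.01.01), ffmpeg]
```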

DSohl commented 6 months ago

I know you said “ignore the 295 option for now,” but I thought I ought to chime in and say it’s a 275.


eddycolloton commented 6 months ago

I'm moving forward with option 3 from my prev comment. We need to be able to switch file name profiles too, so we will need to be able to automate config.yaml changes eventually.

lots of good progress: https://github.com/JPC-AV/JPC_AV_videoQC/commit/02534e648f12cd4ef97f88120a5c699e41ee9178

Adding 2 new args: -sn/--signalflow and -fn/--filename

These will ref python dictionaries stored alongside the command profiles in yaml_profiles.py

File name and signal flow "profiles" can then be read into the config.yaml and used for checking input files.

Something like: av-spexy -sn JPC_AV_SVHS /path/to/JPC_AV_05000

(bowser branch is behind main so it would be "python JPC_AV/process_file.py -sn JPC_AV_SVHS /path/to/JPC_AV_05000" for now)

File name check needed to be rewritten to account for a varying number of file name "fields". Still tinkering with it. Here are the approved values for a JPC_AV_SVHS signal flow and both file name profiles.

JPC_AV_SVHS = {
    "Source VTR": ["SVO5800", "SN 122345", "composite"],
    "TBC": ["SVO5800", "SN 122345", "composite"],
    "Framesync": ["DPS575", "SN 23456", "SDI"],
    "ADC": ["DPS575", "SN 23456", "SDI"],
    "Capture Device": ["Black Magic Ultra Jam", "SN 34567", "Thunderbolt"],
    "Computer": ["Mac Mini", "SN 45678", "OS 14.4", "vrecord (2024.01.01)", "ffmpeg"]
}

bowser_filename = {
    "Collection": "201279",
    "MediaType": "2",
    "ObjectID": r"\d{3}\d{1}[a-zA-Z]",
    "DigitalGeneration": "PM",
    "FileExtension": "mkv"
}

JPCAV_filename = {
    "Collection": "JPC",
    "MediaType": "AV",
    "ObjectID": r"\d{5}",
    "FileExtension": "mkv"
}
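As a sketch, a profile dict like these could be compiled into a single filename regex. Joining the fields with underscores is an assumption made here for illustration; the actual check may treat separators differently.

```python
import re

# Hypothetical helper: build a full-filename regex from a profile dict.
JPCAV_filename = {
    "Collection": "JPC",
    "MediaType": "AV",
    "ObjectID": r"\d{5}",
    "FileExtension": "mkv",
}

def build_filename_regex(profile: dict) -> re.Pattern:
    parts = []
    for key, value in profile.items():
        if key == "FileExtension":
            continue
        # ObjectID is already a regex; literal fields are escaped
        parts.append(value if key == "ObjectID" else re.escape(value))
    pattern = "_".join(parts) + r"\." + re.escape(profile["FileExtension"]) + "$"
    return re.compile(pattern)

regex = build_filename_regex(JPCAV_filename)
# matches names like "JPC_AV_05000.mkv"
```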

BleakleyMcD commented 6 months ago

Awesome. soon-to-be FADGI subgroup on video "coding history" documentation in mkv tags? I think so.

EmCNabs commented 6 months ago

Source VTR: model name, serial number, video signal type (Composite, SDI, etc.), audio connector type (XLR, RCA, etc.) ; TBC: model name(or could be internal, referring back to the VTR), serial number, video signal type (Composite, SDI, etc.) ; Framesync: model name(or could be internal, referring back to the VTR), serial number, video signal type (Composite, SDI, etc.) ; ADC: model name, serial number, video signal type (Composite, SDI, etc.) note if audio is embedded ; Capture Device: model name, serial number, data connection type (Thunderbolt/PCIe/SATA/etc) ; Computer: model name, serial number, computer os version, capture software (including version), encoding software (ffmpeg version not required)

Clarifying for my own brain & workflow documentation, the Audio Signal Type that will be documented for each piece of equipment is the Audio Out signal form, correct?

eddycolloton commented 6 months ago

Yes, I believe that is correct. Same as video signal type? Right @BleakleyMcD ?

BleakleyMcD commented 4 months ago

@EmCNabs @eddycolloton yes! correct! Right now we have it as the connector type (XLR, RCA, etc.) but we should switch it to the signal type, PCM... or something. Will query the community for ideas!

BleakleyMcD commented 4 months ago

Update:

Source VTR: Sony BVH3100, composite, analog ; TBC/Framesync: Sony BVH3100, 10525, composite, analog ; ADC: Leitch DPS575 with flash firmware h2.16, 15230, SDI, audio embedded ; Capture Device: Blackmagic Design UltraStudio 4K Extreme, s/n: B022159, Thunderbolt ; Computer: Mac Mini, H9HDW53JMV, OS 14, vrecord, ffmpeg

There isn't really a scenario where our TBC and framesync would be separate. You can buy a separate framesync for $1,000s but I don't know any setup that does that, certainly not ours.

Also, "analog" seems more in line with describing the signal flow than "XLR". The only other possible thing to add would be "balanced" or "unbalanced":

Source VTR: Sony BVH3100, composite, analog, balanced ;

TBD on that but will decide soon! In the meantime, can you make the updates in the script and any respective documentation @eddycolloton @EmCNabs @DSohl ?

Getting close to closing this immense comment thread!

eddycolloton commented 4 months ago

You are able to toggle the expected signal flow using the -sn/--signalflow flag.

Current JPC_AV_SVHS values are:

ENCODER_SETTINGS:
        Source VTR:
        - SVO5800
        - SN 122345
        - composite
        TBC:
        - SVO5800
        - SN 122345
        - composite
        Framesync:
        - DPS575
        - SN 23456
        - SDI
        ADC:
        - DPS575
        - SN 23456
        - SDI
        Capture Device:
        - Black Magic Ultra Jam
        - SN 34567
        - Thunderbolt
        Computer:
        - Mac Mini
        - SN 45678
        - OS 14.4
        - vrecord (2024.01.01)
        - ffmpeg

Are you saying the new ones should be:

ENCODER_SETTINGS:
        Source VTR:
        - SVO5800
        - SN 122345
        - composite, analog, balanced
        TBC/Framesync:
        - DPS575 with flash firmware h2.16
        - SN 15230
        - SDI, audio embedded
        ADC:
        - DPS575 with flash firmware h2.16
        - SN 15230
        - SDI, audio embedded
        Capture Device:
        - Black Magic Ultra Jam
        - SN B022159
        - Thunderbolt
        Computer:
        - Mac Mini
        - SN H9HDW53JMV
        - OS 14.4
        - vrecord (2024.01.01)
        - ffmpeg

Also, should I create a signal flow "profile" for the Sony BVH3100? With a Sony BVH3100 signal flow profile you can switch expected encoder setting values like this:

av-spex --dryrun --signalflow BVH_FLOW

If so what should it be called?

EmCNabs commented 4 months ago

Code for this also needs to be updated to look for the tag ENCODER_SETTINGS instead of ENCODING_SETTINGS, which is what it currently checks for. Or is it set up to look for both fields?

Some specified MediaTrace fields or values are missing or don't match:
DESCRIPTION metadata field not found
ENCODING_SETTINGS metadata field not found
TECHNICAL_NOTES metadata field not found
ORIGINAL_FPS metadata field not found
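One way to handle the transition is to accept both spellings while older files still carry the legacy tag. A minimal sketch, assuming the tags have already been parsed into a dict (`get_encoder_settings` is a hypothetical helper name, not a function in the actual script):

```python
def get_encoder_settings(tags):
    """Return the encoder settings value under either spelling, or None."""
    # Accept both the new key and the legacy ENCODING_SETTINGS key
    # (hypothetical helper, not the real av-spex code)
    for key in ("ENCODER_SETTINGS", "ENCODING_SETTINGS"):
        if key in tags:
            return tags[key]
    return None
```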

eddycolloton commented 4 months ago

Thanks Emily! I've updated the mediatrace fields in the config.yaml to say ENCODER_SETTINGS instead of ENCODING_SETTINGS

https://github.com/JPC-AV/JPC_AV_videoQC/commit/b2b394a52da1ae1a79f74e2fe6fa08c9c6bd447e

BleakleyMcD commented 3 months ago

> You are able to toggle expected signal flow using the -sn/--signalflow.
>
> Current JPC_AV_SVHS values are:
>
> ENCODER_SETTINGS:
>         Source VTR:
>         - SVO5800
>         - SN 122345
>         - composite
>         TBC:
>         - SVO5800
>         - SN 122345
>         - composite
>         Framesync:
>         - DPS575
>         - SN 23456
>         - SDI
>         ADC:
>         - DPS575
>         - SN 23456
>         - SDI
>         Capture Device:
>         - Black Magic Ultra Jam
>         - SN 34567
>         - Thunderbolt
>         Computer:
>         - Mac Mini
>         - SN 45678
>         - OS 14.4
>         - vrecord (2024.01.01)
>         - ffmpeg
>
> Are you saying the new ones should be:
>
> ENCODER_SETTINGS:
>         Source VTR:
>         - SVO5800
>         - SN 122345
>         - composite, analog, balanced
>         TBC/Framesync:
>         - DPS575 with flash firmware h2.16
>         - SN 15230
>         - SDI, audio embedded
>         ADC:
>         - DPS575 with flash firmware h2.16
>         - SN 15230
>         - SDI, audio embedded
>         Capture Device:
>         - Black Magic Ultra Jam
>         - SN B022159
>         - Thunderbolt
>         Computer:
>         - Mac Mini
>         - SN H9HDW53JMV
>         - OS 14.4
>         - vrecord (2024.01.01)
>         - ffmpeg
>
> Also, should I create a signal flow "profile" for the Sony BVH3100? With a Sony BVH3100 signal flow profile you can switch expected encoder setting values like this:
>
> av-spex --dryrun --signalflow BVH_FLOW
>
> If so what should it be called?

Yes, ENCODER_SETTINGS like that. Signal flow name: BVH3100

example here in the second table "example for JPC" https://confluence.si.edu/display/JPCAV/Embedded+metadata%3A+video

eddycolloton commented 3 months ago

Ok, I've updated the Encoder Settings check to look for these additional values, and I've added the BVH3100 profile, which you can activate w/ av-spex -sn BVH3100

https://github.com/JPC-AV/JPC_AV_videoQC/commit/f075bab05080fbc2d8c9539882860bb92b935ca1

It currently parses the string from the ffprobe output, because that is where the check was before we started using mediatrace. It shouldn't be too bad to move the check over to mediatrace if you would prefer, but since mediatrace is XML and the check is written for the .txt format, I'd like to wait until we've run the new version of the check on a handful of test files first. That way we can be sure we have it working right before I try to port it over to the mediatrace XML.

As of now, the check confirms that the correct number of subfields is present, and that an SN exists for each device. Again, this is sort of inherited from an earlier iteration. I presume we want the check to be stricter, looking for exact matches per subfield, which we can add soon.
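The loose check described above might look something like this (an illustrative sketch, not the actual av-spex code; `check_device` is a hypothetical name):

```python
def check_device(expected_subfields, found_subfields):
    """Loose check: same number of subfields, and some 'SN ...' entry present."""
    if len(found_subfields) != len(expected_subfields):
        return False
    # Only require that *an* SN subfield exists, not that it matches
    return any(sub.startswith("SN ") for sub in found_subfields)
```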

Approved values for the 2 signal flows are:

JPC_AV_SVHS = {
    "Source VTR": ["SVO5800", "SN 122345", "composite", "analog balanced"], 
    "TBC/Framesync": ["DPS575 with flash firmware h2.16", "SN 15230", "SDI", "audio embedded"], 
    "ADC": ["DPS575 with flash firmware h2.16", "SN 15230", "SDI"], 
    "Capture Device": ["Black Magic Ultra Jam", "SN B022159", "Thunderbolt"],
    "Computer": ["2023 Mac Mini", "Apple M2 Pro chip", "SN H9HDW53JMV", "OS 14.5", "vrecord v2023-08-07", "ffmpeg"]
}
BVH3100 = {
    "Source VTR": ["Sony BVH3100", "SN 10525", "composite", "analog balanced"],
    "TBC/Framesync": ["Sony BVH3100", "SN 10525", "composite", "analog balanced"],
    "ADC": ["Leitch DPS575 with flash firmware h2.16", "SN 15230", "SDI", "embedded"],
    "Capture Device": ["Blackmagic Design UltraStudio 4K Extreme", "SN B022159", "Thunderbolt"],
    "Computer": ["2023 Mac Mini", "Apple M2 Pro chip", "SN H9HDW53JMV", "OS 14.5", "vrecord v2023-08-07", "ffmpeg"]
}
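The -sn/--signalflow toggle could be wired up along these lines (an argparse sketch under assumptions; the real av-spex CLI may be structured differently, and the profile dicts are abbreviated here):

```python
import argparse

# Abbreviated stand-ins for the two approved profiles above
JPC_AV_SVHS = {"Source VTR": ["SVO5800", "SN 122345", "composite", "analog balanced"]}
BVH3100 = {"Source VTR": ["Sony BVH3100", "SN 10525", "composite", "analog balanced"]}
PROFILES = {"JPC_AV_SVHS": JPC_AV_SVHS, "BVH3100": BVH3100}

parser = argparse.ArgumentParser(prog="av-spex")
parser.add_argument("-sn", "--signalflow", choices=sorted(PROFILES),
                    default="JPC_AV_SVHS",
                    help="select the expected signal flow profile")

args = parser.parse_args(["-sn", "BVH3100"])
expected = PROFILES[args.signalflow]
```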

eddycolloton commented 3 months ago

I'm trying to make some test files with mkvnote, but I've set it up on a new computer and I'm getting a weird error.

@EmCNabs have you encountered this:

mkvnote /Users/eddycolloton/git/JPC_AV/sample_files/bowser_files/bowser_backup/2012_79_2_230_1a_PM/2012_79_2_230_1a_PM.mkv
A cataloging record is opened for /Users/eddycolloton/git/JPC_AV/sample_files/bowser_files/bowser_backup/2012_79_2_230_1a_PM/2012_79_2_230_1a_PM.mkv. Edit that and save it, click any key to continue... B)

But then when I press any key I just get: ^[[C^[[A^[[D^[[Bksm^M]^M^[^[^[^[;lm B^M)^M3^C

I end up having to Control-C, and my changes don't get saved.

If there's a known fix (probably user error on install/permissions), let me know, otherwise I'll see if I can find a mkvpropedit work around.
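For an mkvpropedit workaround: mkvpropedit can apply tags from a Matroska tags XML file via `mkvpropedit file.mkv --tags global:tags.xml`. A sketch of generating such a file with the standard library (the XML shape follows the Matroska tags format; the tag value here is just an example):

```python
import xml.etree.ElementTree as ET

def build_tags_xml(name, value):
    """Build a minimal Matroska tags XML document with one SimpleTag."""
    tags = ET.Element("Tags")
    simple = ET.SubElement(ET.SubElement(tags, "Tag"), "Simple")
    ET.SubElement(simple, "Name").text = name
    ET.SubElement(simple, "String").text = value
    return ET.tostring(tags, encoding="unicode")

xml_body = build_tags_xml(
    "ENCODER_SETTINGS",
    "Source VTR: Sony BVH3100, SN 10525, composite, analog balanced",
)
# Write xml_body to tags.xml, then apply it with:
#   mkvpropedit 2012_79_2_230_1a_PM.mkv --tags global:tags.xml
```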

eddycolloton commented 3 months ago

Hi @EmCNabs and @BleakleyMcD , as of this commit: https://github.com/JPC-AV/JPC_AV_videoQC/commit/8cc63cfff15989ab9ba847810e3c0f5ab527fb9b

Encoder Settings check is now in both MediaTrace and ffprobe checks, and looks for precise matches for every device and device subfield (model number, signal type, SN, etc.). Should work with the BVH3100 signal flow that Blake linked to here: https://confluence.si.edu/pages/viewpage.action?spaceKey=JPCAV&title=Embedded+metadata%3A+video

If any one subfield doesn't match, it will print the "device" (Source VTR, TBC/Framesync, ADC, etc.) and all of its subfields, like this:

Some specified MediaTrace fields or values are missing or don't match:
Source VTR ['some other device', 'SN 10525', 'composite', 'analog balanced']
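The strict per-subfield comparison and its failure output can be sketched like so (illustrative only; `find_mismatches` is a hypothetical name, not the function in the commit):

```python
def find_mismatches(expected, found):
    """Return one 'Device [subfields]' line per device that doesn't match exactly."""
    mismatches = []
    for device, subfields in expected.items():
        # Exact match required across every subfield for this device
        if found.get(device) != subfields:
            mismatches.append(f"{device} {found.get(device)}")
    return mismatches

expected = {"Source VTR": ["Sony BVH3100", "SN 10525", "composite", "analog balanced"]}
found = {"Source VTR": ["some other device", "SN 10525", "composite", "analog balanced"]}
for line in find_mismatches(expected, found):
    print(line)
```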