valadaptive / ntsc-rs

Free, open-source analog TV + VHS effect. Standalone application + plugin (After Effects, Premiere, and OpenFX).

clarification [parameters/ json] #226

Open abcnorio opened 3 days ago

abcnorio commented 3 days ago

Hello,

While trying to understand the parameters in order to write a script that inserts randomness based on pre-defined anchors (i.e. profiles), a few questions came up:

(1) Non-linear sliders

While looking into the JSON with the parameters, some values are extremely long, i.e. have a lot of digits after the decimal point.

e.g. head_switching_mid_line_jitter 0.0299999993294477

Is this just rounding? Looking at the slider behavior for such numbers, it becomes obvious that most are non-linear sliders: between 0 and 1 there are a lot of digits, and the precision decreases for 1 < x < 10 and 10 < x < 100. So is the assumption correct that you scale with exponents, i.e. each step of 1e-01 downwards adds another digit after the decimal point? Or do you use some special function for those value ranges? If it works as assumed above, one can create a sequence from it and just sample from that sequence.

At the moment the task is to sort out which parameters have which possible values/ranges, in order to sample from them later, either randomly or via a pre-defined probability distribution.

(2) Categories

Is it correct that categories are saved as integers? Like

chroma_lowpass_out
default = 2
possible values = {1,2,3} or {full, light, none}

Correct?

(3) T/F

TRUE/FALSE values are just saved as 0/1.

Correct?

(4) version

Version has the value 1 but is not in the GUI (could not find it); is this just for internal usage?

(5) Scaling vertical lines

These settings are not present in the JSON, correct? That would be:

scale T/F
lines INTEGER e.g. 480
method CATEGORY {1,2,3} or {nearest, bilinear, bicubic}

Correct assumption?

Thanks!

valadaptive commented 2 days ago
  1. Yes, those sliders are logarithmic. Here's the code in the GUI library I use that maps them to linear sliders. You might need to apply your own weighting to many or all of the settings if you want to generate a "uniform" random preset that doesn't look weird.

  2. Yes. You can see all the categories/dropdowns here. However, they start from 0 and not 1.

  3. No, they should be saved as true and false in the JSON.

  4. The settings will fail to parse if version != 1. This is solely used as a futureproofing measure in case I decide to make backwards-incompatible changes to the preset settings.

  5. This is correct. As the tooltip on the "Scale to" checkbox says, it is not saved as part of presets.
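For reference, the log mapping in point 1 can be sketched like this (a minimal Python illustration, not the library's actual code; the function names and the explicit lo/hi range are assumptions):

```python
import math

def slider_to_value(t: float, lo: float, hi: float) -> float:
    """Map a normalized slider position t in [0, 1] onto [lo, hi] logarithmically."""
    return lo * (hi / lo) ** t

def value_to_slider(v: float, lo: float, hi: float) -> float:
    """Inverse mapping: recover the normalized slider position from a value."""
    return math.log(v / lo) / math.log(hi / lo)

# Equal slider steps correspond to equal *ratios*, not equal differences,
# which is why values near the low end of the range carry many more
# digits after the decimal point.
print(slider_to_value(0.5, 1e-6, 1e2))  # geometric midpoint of the range
```

Note that a range containing 0 (like 0...1) cannot be purely logarithmic, so real implementations special-case zero or switch to a linear segment near it.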

abcnorio commented 2 days ago

Thanks for the clarifications! Very helpful. My plan is to use a profile as an anchor and insert only slight changes around it, not too far away, based on probabilities/weights. And not every setting should be changed. I will take one image, vary it accordingly, and see which settings may look weird, in order to set some meaningful restrictions (mostly based on visual inspection).
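A rough sketch of that plan in Python (the parameter names and the ±10% spread are made up for illustration, not taken from any real preset):

```python
import json
import random

# Hypothetical anchor profile; keys mimic the preset JSON keys discussed above.
anchor = {
    "head_switching_mid_line_jitter": 0.03,
    "snow_intensity": 0.003,
}

def jitter(value: float, spread: float = 0.1) -> float:
    """Perturb multiplicatively, so the step size matches the logarithmic sliders."""
    return value * random.uniform(1.0 - spread, 1.0 + spread)

random.seed(1)
variant = {key: jitter(val) for key, val in anchor.items()}
print(json.dumps(variant, indent=2))
```

A multiplicative perturbation keeps small values small, which fits parameters that live on a log scale better than adding a fixed offset would.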

abcnorio commented 2 days ago

btw - R is quite nice for creating a (here: short) sequence to draw from:

> create.log <- function(start=1e-6,end=1e2,length=1e2+1)
+ {
+   10^( seq(log10(start),log10(end),length.out=length) )
+ }
> set.seed(996677)
> dats <- create.log()
> dats
  [1] 1.000000e-06 1.202264e-06 1.445440e-06 1.737801e-06 2.089296e-06 2.511886e-06 3.019952e-06
  [8] 3.630781e-06 4.365158e-06 5.248075e-06 6.309573e-06 7.585776e-06 9.120108e-06 1.096478e-05
 [15] 1.318257e-05 1.584893e-05 1.905461e-05 2.290868e-05 2.754229e-05 3.311311e-05 3.981072e-05
 [22] 4.786301e-05 5.754399e-05 6.918310e-05 8.317638e-05 1.000000e-04 1.202264e-04 1.445440e-04
 [29] 1.737801e-04 2.089296e-04 2.511886e-04 3.019952e-04 3.630781e-04 4.365158e-04 5.248075e-04
 [36] 6.309573e-04 7.585776e-04 9.120108e-04 1.096478e-03 1.318257e-03 1.584893e-03 1.905461e-03
 [43] 2.290868e-03 2.754229e-03 3.311311e-03 3.981072e-03 4.786301e-03 5.754399e-03 6.918310e-03
 [50] 8.317638e-03 1.000000e-02 1.202264e-02 1.445440e-02 1.737801e-02 2.089296e-02 2.511886e-02
 [57] 3.019952e-02 3.630781e-02 4.365158e-02 5.248075e-02 6.309573e-02 7.585776e-02 9.120108e-02
 [64] 1.096478e-01 1.318257e-01 1.584893e-01 1.905461e-01 2.290868e-01 2.754229e-01 3.311311e-01
 [71] 3.981072e-01 4.786301e-01 5.754399e-01 6.918310e-01 8.317638e-01 1.000000e+00 1.202264e+00
 [78] 1.445440e+00 1.737801e+00 2.089296e+00 2.511886e+00 3.019952e+00 3.630781e+00 4.365158e+00
 [85] 5.248075e+00 6.309573e+00 7.585776e+00 9.120108e+00 1.096478e+01 1.318257e+01 1.584893e+01
 [92] 1.905461e+01 2.290868e+01 2.754229e+01 3.311311e+01 3.981072e+01 4.786301e+01 5.754399e+01
 [99] 6.918310e+01 8.317638e+01 1.000000e+02
> sample(dats, 10, replace=TRUE, prob=NULL)
 [1] 9.120108e-04 7.585776e-06 1.000000e+02 3.630781e-06 5.248075e-04 8.317638e-01 2.511886e-04
 [8] 1.445440e-06 5.754399e-03 2.089296e-04

It's rather easy to make the sequence longer so there are enough values to choose from randomly. The prob argument can be replaced by the values of a probability distribution (i.e. weights) of the same length as the sequence (which is just one simple approach; there are others...).
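For comparison, the same grid-and-weights idea in stdlib Python (the Gaussian weighting in log10 space around an anchor value is just one illustrative choice):

```python
import math
import random

def log_sequence(start: float = 1e-6, end: float = 1e2, length: int = 101) -> list:
    """Log-spaced grid, equivalent to the R create.log() above."""
    step = (math.log10(end) - math.log10(start)) / (length - 1)
    return [10 ** (math.log10(start) + i * step) for i in range(length)]

seq = log_sequence()

# Example weights: a Gaussian bump (in log10 space) centered on an anchor,
# so draws cluster near the anchor instead of being uniform over the grid.
anchor = 0.03
weights = [math.exp(-((math.log10(x) - math.log10(anchor)) ** 2) / 0.5) for x in seq]

random.seed(996677)
draws = random.choices(seq, weights=weights, k=10)
print(draws)
```

random.choices accepts relative weights directly, so the weights do not need to be normalized to sum to 1.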

btw - do you know whether there is by chance some theory or best practice about these parameters and which probability distributions they follow? Otherwise one can set that up manually according to one's wishes (and let the weights be estimated by some R function to create proper distribution parameters), but if there is empirical evidence, that is always the better basis for drawing.