Closed by GreatAttractor 1 year ago
One possible low-tech starting option would be to add command-line arguments to ImPPG for these use cases. People could then build shell, batch, etc. scripts around ImPPG as a start. It won't replace a more sophisticated API, of course (and will not be as fast). E.g., something like this could be crafted for the example above:
```shell
#!/usr/bin/env bash
# Process each input image twice, with two different L-R deconvolution sigmas.
mkdir -p output1 output2
find input/ -type f | while read -r file; do
    name=$(basename "$file")   # strip the input/ prefix so the output paths are valid
    imppg --input="$file" --lr-sigma=1.3 --output="output1/$name"
    imppg --input="$file" --lr-sigma=1.4 --output="output2/$name"
done
# Run ffmpeg commands here to make videos from the files in output1 and output2.
```
The above is just a rough idea.
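For the video step, a typical FFmpeg invocation could stack the two output sequences side by side. This is only an illustration: the glob patterns, frame rate, and codec settings below are assumptions, not anything prescribed by ImPPG.

```shell
# Encode the two output directories into one side-by-side comparison video.
# Glob patterns, frame rate and codec settings are illustrative assumptions.
ffmpeg -framerate 25 -pattern_type glob -i 'output1/*.png' \
       -framerate 25 -pattern_type glob -i 'output2/*.png' \
       -filter_complex "hstack=inputs=2" \
       -c:v libx264 -pix_fmt yuv420p side_by_side.mp4
```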
I've considered this approach, and might still add it as a secondary option, but I'm already well into the embedded Lua implementation (see the scripting branch). The main reason is that there would be quite a time overhead on each ImPPG startup if we want to use the OpenGL back end (initializing GLEW extensions, creating the context). Now multiply that by the few hundred to a thousand images to process (as I tend to have, e.g., for some time-lapses)...
Also, with Lua I'll be able to test some new functionality more easily: processing RGB images, and allowing multiple blended layers of unsharp mask with different parameters. Covering all of that would mean quite a proliferation of command-line options :)
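To make the contrast concrete, a script-driven batch run under the Lua approach might look roughly like the sketch below. This is a hypothetical illustration only: the function names (`imppg.load_image`, `imppg.new_settings`, `imppg.process`, `list_files`, `basename`) are placeholders, not the actual API of the scripting branch. The point is that a single long-lived process (and a single OpenGL context) handles every file.

```lua
-- Hypothetical sketch: these names are placeholders, not ImPPG's real Lua API.
-- One process, one OpenGL context, reused across all input files.
for _, path in ipairs(list_files("input/")) do        -- placeholder helper
    local image = imppg.load_image(path)              -- placeholder call
    local s1 = imppg.new_settings{ lr_sigma = 1.3 }   -- placeholder call
    local s2 = imppg.new_settings{ lr_sigma = 1.4 }
    imppg.process(image, s1):save("output1/" .. basename(path))
    imppg.process(image, s2):save("output2/" .. basename(path))
end
```

Compared with the shell loop, this avoids paying the GLEW/context initialization cost once per image, and settings objects make it easy to express things like several blended unsharp-mask layers without a flood of command-line flags.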
Ah indeed, yes, it would be much slower re-launching all those ImPPG processes. Great to hear you are working on the embedded Lua implementation. I will check it out.
Implemented in v1.9.0-beta (documentation). Image alignment is not yet exposed in the API.
Image alignment is now exposed in the API. Any new processing capabilities will be as well, as they're added.
ImPPG should be scriptable. This would especially help with trying out slightly different settings on large groups of files. E.g., “process this 500-file sequence with L-R sigma = 1.3, save in output1; then with sigma = 1.4, save in output2; then run FFmpeg on both to create videos for a side-by-side comparison”.