This project encodes behavior videos and, depending on the user's request, compresses them while ensuring that they are formatted for correct display across many devices. This may include common image preprocessing steps, such as gamma encoding, that are necessary for correct display but must be applied post hoc to our behavior videos.
When compressing, this package aims to produce the best result it can: the compression is often lossy, and the original videos are not kept, so this library will attempt to produce the highest-quality video for a target compression ratio. The speed of this compression is strictly secondary to the quality of the compression, as measured by the visual detail retained and the compression ratio. See this section for more details.
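For concreteness, the compression ratio discussed throughout is simply the original file size divided by the compressed file size. A minimal sketch (the helper below is illustrative only and not part of this package's interface):

```python
from pathlib import Path


def compression_ratio(original: Path, compressed: Path) -> float:
    """Original file size divided by compressed file size."""
    return original.stat().st_size / compressed.stat().st_size


# e.g. compression_ratio(Path("original.avi"), Path("compressed.mp4"))
```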
Additionally, this package should provide an easy-to-use interface for these encoding and compression tasks.
A surprising fact is that video encoders implementing the same algorithm, but written for different compute resources, do not have the same visual performance: for a given compression ratio, or similar settings, they do not retain the same amount of visual detail. This is also true for different presets of the same encoder and compute resource, even if the other settings are identical. For example, the presets `-preset fast` and `-preset veryslow` of the encoder `libx264` produce videos with the same compression ratio but differing visual quality.
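As an illustration of this kind of comparison, the sketch below encodes the same input twice with `libx264` at a fixed target bitrate, changing only the preset, so the outputs have roughly the same compression ratio but different visual quality. It assumes `ffmpeg` is on the PATH; the file names and bitrate are placeholders, not settings used by this package:

```python
import subprocess


def encode(preset: str, out_path: str) -> None:
    """Encode input.mp4 with libx264 at a fixed target bitrate, varying only the preset."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", "input.mp4",   # placeholder input video
            "-c:v", "libx264",   # CPU H.264 encoder
            "-preset", preset,   # e.g. "fast" or "veryslow"
            "-b:v", "2M",        # same target bitrate -> roughly the same compression ratio
            out_path,
        ],
        check=True,
    )


# Roughly equal file sizes, but the veryslow output retains more visual detail.
encode("fast", "out_fast.mp4")
encode("veryslow", "out_veryslow.mp4")
```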
This can be seen in the plot below, where the GPU encoder and the CPU encoders retain different amounts of visual detail, as assessed with the visual perception-based metric VMAF. Also note the difference between presets for the same encoder and compute resource: CPU Fast and CPU Slow.
This figure shows that for compression ratios greater than 100, it often makes sense to take your time and use a slow preset of a CPU-based encoder to retain as much visual information as possible for a given amount of compression.
While it may be tempting to select a faster preset, or a faster compute resource like the GPU, for the dramatic speedups shown below, doing so will degrade the quality of the resulting video.
Because the outputs of this package are permanent video artifacts, the compression is lossy, and the intent is to delete the originals, taking the CPU time to produce the highest-quality video possible may well be worth it.
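To check how much visual detail a particular encode retains, VMAF can be computed with an `ffmpeg` build that includes `libvmaf`. A minimal sketch, assuming such a build is on the PATH and using placeholder file names:

```python
import subprocess

result = subprocess.run(
    [
        "ffmpeg",
        "-i", "compressed.mp4",  # distorted (encoded) video first
        "-i", "original.mp4",    # reference video second
        "-lavfi", "libvmaf",     # compute VMAF between the two inputs
        "-f", "null", "-",       # decode and score only; discard output frames
    ],
    capture_output=True,
    text=True,
)
# ffmpeg logs the aggregate VMAF score to stderr at the end of the run.
print([line for line in result.stderr.splitlines() if "VMAF" in line])
```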
To develop the code, run

```bash
pip install -e .[dev]
```
There are several libraries used to run linters, check documentation, and run tests.

Use coverage to run the unit tests and report test coverage:

```bash
coverage run -m unittest discover && coverage report
```

Use interrogate to check that the code is thoroughly documented with docstrings:

```bash
interrogate .
```
Use flake8 to check that the code is up to standards (no unused imports, etc.):

```bash
flake8 .
```

Use black to automatically format the code to a PEP 8-compliant style:

```bash
black .
```

Use isort to automatically sort import statements:

```bash
isort .
```
For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use Angular style for commit messages. Roughly, they should follow the pattern:
```text
<type>(<scope>): <short summary>
```

where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:

- **build**: changes that affect the build system or external dependencies
- **ci**: changes to CI configuration files and scripts
- **docs**: documentation-only changes
- **feat**: a new feature
- **fix**: a bug fix
- **perf**: a code change that improves performance
- **refactor**: a code change that neither fixes a bug nor adds a feature
- **test**: adding missing tests or correcting existing tests
The table below, from semantic-release, shows which commit message gets you which release type when semantic-release runs (using the default configuration):

| Commit message | Release type |
| --- | --- |
| `fix(pencil): stop graphite breaking when too much pressure applied` | Patch Fix Release |
| `feat(pencil): add 'graphiteWidth' option` | Minor Feature Release |
| `perf(pencil): remove graphiteWidth option`<br><br>`BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons.` | Major Breaking Release<br>(Note that the `BREAKING CHANGE:` token must be in the footer of the commit) |
To generate the rst source files for the documentation, run

```bash
sphinx-apidoc -o doc_template/source/ src
```

Then to create the documentation HTML files, run

```bash
sphinx-build -b html doc_template/source/ doc_template/build/html
```
More info on sphinx installation can be found here.