Closed: Corruptinator closed this issue 5 years ago
This isn't the first time this feature has been requested, and it is indeed planned, but I wouldn't expect it for some time (maybe 6 months or so) as other features are completed.
It does seem FFmpeg is fully capable of handling this. The hurdle will be, as you say, color management and plugging it into OpenGL (where all the composition happens). I've been directed to some libraries that may help but I'll need to set aside time to research everything so it can be done "correctly". Tentatively, I'm thinking of doing a color release where I specifically focus on true color correction tools and processes (including HDR) for one release cycle. Again, hopefully within the next 6 months or so.
Understood, I knew this could take a while. Just figured I'd share what I found out about rendering HDR videos. That's all.
The core issue is the tip of the iceberg that all attempts at NLEs run into: pixel management and interchange.
Try compositing a red star with a slight blur over a cyan background. See the problem?
This cascades into bit depth, which no system is designed to deliver reliably, because they don't focus on creating an online vs. offline rendering pipe at an appropriate bit depth.
So before even considering HDR, let it be known with absolute certainty that the core system design would need to be reconsidered from the ground up before attempting it.
Pixel management.
@sobotka I'm not too concerned about potential rewrite/redesign work. If there are resources you can direct me to that explain how it should work, I'd love to look through them.
Easiest way to see the issue is to start with the simplest case which would be a red star, slightly blurred, over a cyan background.
The solution to that issue is the same core issue that leads to:
It’s deeper than this of course, and we can work through it, but at the simplest level it helps to wrap one’s head around the issues and see the problem clearly. Once a system is in place to deal with the above, HDR and any / all ingestible encodings are possible.
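To make the exercise concrete, here is a minimal sketch (my own illustration, not code from Olive or anything referenced in this thread) of what happens at the star's blurred edge, where the blur produces roughly a 50/50 mix of the red and the cyan. Mixing the sRGB-encoded values directly lands at 0.5 per channel, the dark fringe; decoding to linear, mixing, and re-encoding lands around 0.74, the bright edge you'd expect.

```cpp
// Minimal sketch: a 50/50 mix of a red and a cyan pixel, as a blur across the
// star's edge would produce, computed on sRGB-encoded values vs. linear values.
#include <cmath>
#include <cstdio>

// Standard sRGB decode/encode pair.
static double srgb_to_linear(double v) {
  return (v <= 0.04045) ? v / 12.92 : std::pow((v + 0.055) / 1.055, 2.4);
}
static double linear_to_srgb(double v) {
  return (v <= 0.0031308) ? v * 12.92 : 1.055 * std::pow(v, 1.0 / 2.4) - 0.055;
}

int main() {
  const double red[3]  = {1.0, 0.0, 0.0};
  const double cyan[3] = {0.0, 1.0, 1.0};
  for (int c = 0; c < 3; ++c) {
    // Broken: averaging the encoded values gives 0.5 per channel -> dark fringe.
    double nonlinear_mix = 0.5 * red[c] + 0.5 * cyan[c];
    // Correct: decode to linear, average, re-encode -> ~0.735 per channel.
    double linear_mix = linear_to_srgb(0.5 * srgb_to_linear(red[c]) +
                                       0.5 * srgb_to_linear(cyan[c]));
    std::printf("channel %d: nonlinear %.3f vs linear %.3f\n",
                c, nonlinear_mix, linear_mix);
  }
  return 0;
}
```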
Sadly there isn’t a simple book or such with all of the required information in it. If there is, I have yet to discover it.
The following ZIP file contains three images that you can test your issues with. The associated / unassociated file encodings will test your alpha handling, and the issue will appear in both even when your alpha is handled correctly, which it likely isn't.
Update: Thanks to the wonders of crap software encoding incorrectly, the first package had a broken TIFF. Here's the updated ZIP with the proper file encodings. For the record, TIFF should store associated alpha by default. PNG stores unassociated. The background doesn't matter here.
This project deals with 10-bit video shot with the Magic Lantern Canon camera hack: MLV-App. Maybe this project can be used to find some pointers?
Btw. @itsmattkc It's really impressive and fearless work you're doing here with Olive.
Bit of a history there.
The knowledge around colorimetry was woeful, and last time I looked, there were very few folks who understood how to composite correctly.
There isn’t any further need for “pointers” when it is quite clear how to achieve what is required. The only obstacle is to have people understand the core issues.
Performing the exercise outlined above is precisely how to see what the issue is.
@tin2tin Thank you!
@sobotka So the result is a strong black glow around the red star, which I'm guessing is the problematic result you expected. I notice Photoshop's is more of a mid-gray glow; is that more the result we want to achieve?
Olive doesn't do anything with color except convert frames to RGBA8888 for OpenGL. I haven't gotten around to any HDR-esque functionality yet. The compositing/blending is achieved with glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE), but otherwise I think it's a fairly blank canvas to work from.
No. Both results are dead wrong. Photoshop offers a hack around it, but the hack is limited and breaks easily. It is also inapplicable to image footage encodings such as those that are encountered in an NLE paradigm.
The Blending operations aren’t the core issue, although the proper format internally must always be associated aka premultiplied, for reasons we can get into in another thread. Unassociated alpha will never composite correctly.
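For reference, a minimal sketch of the per-pixel "over" operation on associated (premultiplied) alpha; note that with associated alpha the source RGB is added as-is rather than scaled by the source alpha.

```cpp
// Per-pixel "over" with associated (premultiplied) alpha: the source RGB is
// already weighted by its own coverage, so it is added as-is.
struct Rgba { float r, g, b, a; };

Rgba OverAssociated(const Rgba& src, const Rgba& dst) {
  const float k = 1.0f - src.a;
  return { src.r + dst.r * k,
           src.g + dst.g * k,
           src.b + dst.b * k,
           src.a + dst.a * k };
}
```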
If you have Blender kicking around, you can try it in the Compositor to see the correct result. Note I said Compositor, because the VSE is wrong as well.
The core issue is something known as linearization: the conversion from a nonlinear Transfer Function encoded state to a linear state that represents radiometric ratios of energy.
Why is this relevant to HDR? Because the ability to properly linearize footage depends on a float buffer. Why float? Because every image encoding function uses a different transfer function, and those transfer functions can not be decoded correctly to integer based buffers.
This is where the ability to correctly decode VLog, SLog, LogC, etc. comes into play.
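As a rough illustration of why integer buffers can't hold a proper decode (using a made-up log curve for the example, not any vendor's actual formula): a scene-referred log decode produces linear values well above 1.0, which an 8-bit integer buffer can only clip, while a float buffer carries them through intact.

```cpp
// Illustration only: a made-up log curve, NOT any vendor's actual formula.
// Real camera curves (LogC, SLog3, VLog, ...) differ, but share the property
// that the decoded scene-linear values extend well past 1.0.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

static double decode_fake_log(double code) {
  // code 0.5 -> 1.0 (mid), code 1.0 -> 16.0, code 0.0 -> 0.0625
  return std::pow(2.0, (code - 0.5) * 8.0);
}

int main() {
  for (double code : {0.0, 0.25, 0.5, 0.75, 1.0}) {
    double scene_linear = decode_fake_log(code);
    // An 8-bit integer buffer clips everything above 1.0 and crushes the rest
    // into a handful of steps; a float buffer keeps the decoded ratios intact.
    uint8_t as_int8 = static_cast<uint8_t>(
        std::lround(std::clamp(scene_linear, 0.0, 1.0) * 255.0));
    float as_float = static_cast<float>(scene_linear);
    std::printf("code %.2f -> linear %.4f | int8 %3u | float %.4f\n",
                code, scene_linear, as_int8, as_float);
  }
  return 0;
}
```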
The only way to understand this chain, is to see the simple broken results via that image. If I find time, I’ll try to demonstrate a LogC encoding and how it has an entirely different transfer function and why applications must do things properly.
Very, very, few do, and Libre / Open Source tools become too buried under layers of brokenness to actually be able to fix things.
The core issue is predicated on an always-on background thread pool rendering the operations to a buffer, along with the caveat that all colour handling is handled correctly into its respective buffers above that.
As you can see, incorrect handling means even the simplest dissolve or blur transition is being calculated entirely wrong, and results in broken output.
The good news is that if one understands the core issue early on in the development process, and implements correct handling from the ground up, the software has a hope of working correctly. This also asserts correct handling of HDR and other encoded image formats.
Not sure if it helps, but if you get the chance, take a look at this video. What I've learned from it is that regardless of the video editor, there is a setting called "BPC", or Bits Per Channel. We would need to change the color settings' BPC to 32-bit float to work with image sequences that are in 32-bit float:
To properly transform imagery, you must use a float buffer. By default, float buffers are 32 bit. ILM implemented a variant known as "half", which is 16 bit and supported on GPUs; that would be sufficient for offline rendering, with full 32 bit for online.
The TL;DR: It's more than simply bit depth and integer vs float.
Ah. I see.
Ahh, so is this (After Effects with "Linearize Working Space" enabled) the correct result?
Indeed, that is the correct output.
The downside of the entire Adobe suite, largely historical, is that it is a hack and assumes the sRGB transfer function. That means that all of colour handling in AE and Photoshop is completely wrong for motion picture work. Some notable shortcomings include:
The above issues have a direct impact on HDR encodings.
The key point to note here is that it takes architecture in its very early incarnation to be done correctly:
Once you see the problem, it becomes very clear that the handling requires careful thought to keep things performant. This typically means a background caching system that has a thread pool that will attack any region of frames marked as dirty, and render them to the output workprint cache.
Some bandwidth considerations come into play on the input media as well. If the native encoding is kept, then every time a change marks a region as dirty, a large number of cycles would need to be dedicated to properly transforming from the native encoding to the reference space, and then from that reference space state, manipulated and encoded for the workprint output cache. As such, it may make good sense to keep an always-in-reference buffer for each input, ready to go. This leads to dynamic serialization, volatile memory management, etc.
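A minimal sketch of that caching idea, with illustrative names rather than Olive's actual classes: a pool of worker threads drains a queue of frames marked dirty and renders them into a workprint cache, while playback only ever reads from the cache.

```cpp
// Minimal sketch (illustrative names, not Olive's actual classes): worker
// threads drain a queue of dirty frames and render them into a workprint
// cache; playback only ever reads from the cache.
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <unordered_map>
#include <vector>

struct Frame { std::vector<float> rgba; };  // scene-linear float pixels

class WorkprintCache {
 public:
  // render: the full graph for one frame (decode -> reference space -> effects).
  explicit WorkprintCache(std::function<Frame(int64_t)> render, int workers = 4)
      : render_(std::move(render)) {
    for (int i = 0; i < workers; ++i) pool_.emplace_back([this] { Work(); });
  }
  ~WorkprintCache() {
    { std::lock_guard<std::mutex> lk(m_); quit_ = true; }
    cv_.notify_all();
    for (auto& t : pool_) t.join();
  }
  // An edit touched this frame range: invalidate the cache and queue re-renders.
  void MarkDirty(int64_t first, int64_t last) {
    std::lock_guard<std::mutex> lk(m_);
    for (int64_t f = first; f <= last; ++f) { cache_.erase(f); dirty_.push_back(f); }
    cv_.notify_all();
  }
  // Playback path: copy out a cached frame if it has been rendered already.
  bool Get(int64_t frame, Frame* out) {
    std::lock_guard<std::mutex> lk(m_);
    auto it = cache_.find(frame);
    if (it == cache_.end()) return false;
    *out = it->second;
    return true;
  }

 private:
  void Work() {
    for (;;) {
      int64_t frame;
      {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return quit_ || !dirty_.empty(); });
        if (quit_) return;
        frame = dirty_.front();
        dirty_.pop_front();
      }
      Frame rendered = render_(frame);  // heavy work happens outside the lock
      std::lock_guard<std::mutex> lk(m_);
      cache_[frame] = std::move(rendered);
    }
  }

  std::function<Frame(int64_t)> render_;
  std::unordered_map<int64_t, Frame> cache_;
  std::deque<int64_t> dirty_;
  std::mutex m_;
  std::condition_variable cv_;
  bool quit_ = false;
  std::vector<std::thread> pool_;
};
```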
If the issue isn't tackled in a lower quality offline manner, the offline editorial changes and effects won't match up with the online highest quality non-realtime rendering, creating more work and / or a low quality output.
Tricky stuff once you dive in.
Whoa. Technical information that deep. Didn't think that HDR could lead towards uncharted areas, but it's definitely interesting to analyze and understand, if not at the margin of error.
I was wondering if it would help if HDR video files are provided for experimentation?
Any progress here @itsmattkc ? I can offer some guidance on where to start if you are interested in getting the core into ship shape.
@sobotka Not yet, I've been sidetracked with other issues, but I have been reading up to try and understand this properly before diving in. Here's my current understanding, and feel free to correct anything or provide more details if you wish:
Let me know if this understanding is correct or if I'm mistaken.
Digital images are stored with a non-linear gamma curve that is later corrected/inverted by the computer's display.
“Gamma” is an anachronistic term and typically should be applied when discussing CRTs. The ISO uses the term Color Component Transfer Function to describe this aspect of a colour transform. This is relevant because camera log-like curves are scene referred encodings, extending from zero to infinity. Equally, there are two forms of linearization: display linear and scene linear, as covered in that Visual Effects Society Handbook.
To compose in a linear working space, we want to correct/invert the source image's gamma curve to linear before compositing (and then convert the resulting image back to non-linear for display). Considerations need to be made for the specific curve of the source image and the curve of the display.
Transfer functions only cover one portion of a proper colour transform. The other two are converting the colours of the primary lights between the contexts, and the achromatic colour. In addition to this, the camera rendering transforms may have other elements wound into them.
As a general concept however, you have the right idea. There are an infinite number of transfer functions and primaries out in the wild, so the system must be flexible and configurable enough to apply different transforms to different imagery, and support a variety of output contexts, including the subject of this thread, HDR.
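As a small illustration of the "primary lights" part that the transfer function alone doesn't cover: converting linear RGB between two sets of primaries is a 3×3 matrix multiply. The values below are the linear BT.709 to BT.2020 matrix (per ITU-R BT.2087, rounded); other primary pairs use other matrices.

```cpp
// Linear RGB in one set of primaries re-expressed in another via a 3x3 matrix.
// Values: linear BT.709 -> BT.2020 (per ITU-R BT.2087, rounded). Must be
// applied to decoded (linear) values, never to transfer-function-encoded ones.
static const float kBt709ToBt2020[3][3] = {
    {0.6274f, 0.3293f, 0.0433f},
    {0.0691f, 0.9195f, 0.0114f},
    {0.0164f, 0.0880f, 0.8956f},
};

void ConvertPrimaries(const float in[3], float out[3], const float m[3][3]) {
  for (int r = 0; r < 3; ++r)
    out[r] = m[r][0] * in[0] + m[r][1] * in[1] + m[r][2] * in[2];
}
```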
I've been doing a little research into HDR video editing, and found out about ACES (Academy Color Encoding System). Apparently it's an open source color management standard used to change the color display; some of the options to change colorspace include Rec. 2020 (http://opencolorio.org/configurations/aces_1.0.3.html). ACES can be used in OpenColorIO, which is a color management system. Would it be possible to use OpenColorIO to implement ACES for an HDR video editing workflow? It could possibly be used in two or several ways: one to display the color format in the correct visible detail, and the other to convert the imported video's color format to another, such as Rec. 2020 to sRGB (HDR to SDR). Figured I'd point out what I've found so far. Not even sure if it helps.
OCIO is indeed a potential path forwards. The bottom line is that a colour management system is required, and given the availability of the only one that can handle camera encodings, HDR, and other needs of a nonlinear video editor, OCIO would appear to be the sole viable library available.
Implementing OCIO into the core solves the CMS handling of the system, but the actual architecture needs to deal with the caching and threaded workload, as well as serialization. OCIO integration asserts that any and all colour transforms can be accommodated, including ACES.
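As a rough sketch of what that integration could look like against the OCIO v1 API (the colour space, display, and view names are placeholders that depend on whichever config is loaded):

```cpp
// Rough sketch against the OCIO v1 API; colour space, display, and view names
// are placeholders that depend on whichever config is loaded.
#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// img: interleaved RGB float pixels, width * height of them.
void SourceToReferenceToDisplay(float* img, long width, long height,
                                const char* source_space,  // footage's encoding
                                const char* display,       // e.g. "sRGB"
                                const char* view) {        // e.g. "Film"
  OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();
  OCIO::PackedImageDesc desc(img, width, height, 3);

  // 1. Input transform: source encoding -> scene-linear reference space.
  config->getProcessor(source_space, OCIO::ROLE_SCENE_LINEAR)->apply(desc);

  // ... all compositing / effects happen here, on scene-linear data ...

  // 2. View transform: reference space -> the chosen Display/View for the monitor.
  OCIO::DisplayTransformRcPtr dt = OCIO::DisplayTransform::Create();
  dt->setInputColorSpaceName(OCIO::ROLE_SCENE_LINEAR);
  dt->setDisplay(display);
  dt->setView(view);
  config->getProcessor(dt)->apply(desc);
}
```

The point being that everything between the two steps operates on scene-linear reference data; the display/view transform is only ever the last hop to a particular monitor or deliverable.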
I'd wager over half of the existing issues are moot and come down to proper pixel management. I'd strongly encourage tackling this sooner rather than later, if you are going to.
It literally touches nearly every edge of the software.
I've been looking at OCIO and it does look quite good. I'd like to clarify a few things:
So I'm guessing for a linear workflow, we'd want to convert from the source color space to linear and then convert to the display color space? I'm assuming with a non-linear workflow it'd convert the source to sRGB instead of linear?
When you talk about caching, do you mean caching the frames that have been converted to linear color space assuming the conversion is too slow to occur in realtime?
So I'm guessing for a linear workflow, we'd want to convert from the source color space to linear and then convert to the display color space?
Yes. All pixel manipulations must be done on scene linear data, otherwise they are completely incorrect. #216 has a ground truth to broken nonlinear encodings for example, decent key pulls are wound up with it #356, convolutions / scaling that would impact #190, etc.
I’m assuming with a non-linear workflow it’d convert the source to sRGB instead of linear.
OCIO is built around a scene referred linear reference space of choice. No pixel manipulation is correct on nonlinear reference spaces, so it would be foolhardy to permit it. Education is wiser, and it results in a wiser culture.
When you talk about caching, do you mean caching the frames that have been converted to linear colour space assuming the conversion is too slow to occur in realtime?
You can’t avoid a background rendering system, simply because the cycles stack. Imagine one alpha over, then two, then three. On footage it's a losing battle, and the accumulation of blocking effects will always exceed the cycles of the CPU.
Further, performance choices can be made. If you consider N count of shots in a rough offline cut that involves some degree of pixel manipulation, every shot of N count needs to be taken to the reference space. For three shots that would be 3 × N × width × height. Then apply the manipulations per N, across N. That is a significant amount of processing, and it doesn't even begin to tackle more sophisticated blocking that would be taken to a compositor. It's always a losing battle, à la Blinn's Law.
So in essence, this means that image buffers should be “ready to manipulate” and cached in the reference space for use. The threaded rendering would be whirring and ready to crunch frames as the manipulations are made, rendering them down to a blit buffer “workprint” cache to hold the constant frame rate. Given that volatile memory is limited, this requires the caching system to segue into efficient serialization. From the scene linear reference cache, it is plausible that a CPU / GPU could perform the properly chosen view transform dynamically, but in instances where it can’t keep up, the cache would likely need to bake that in per display as well.
With the addition of proxies, the pixel management pipeline has now gotten more difficult to integrate. Related to #416 and #430.
Pixel transforms also relate to #415 and #419. I’ve likely missed a few.
My best advice is to either tackle the beast or not. The entire pixel pipeline only grows more complex to untangle later if not.
It’s entirely fine to simply want to be an iMovie, but if your goal is to aspire higher, postponing the pixel pipeline management is only making the problem more impossible to fix.
@sobotka It will probably be the next thing I do
So, I've read this thread literally a dozen times now, and I would love to pretend I understand it, but I definitely don't. Looking at the reference images, I can see the benefits of implementing all these conversion processes and such to achieve accurate effects whether you are doing an offline or online edit. (Especially the one from Nick Shaw)
That said, over and over again performance is raised as an issue. If this does end up significantly hampering performance, will there be a way to turn it off if you don't need these features / can accept inaccuracies?
Hopefully I'm woefully misunderstanding this complex topic, but for some people I'm sure speed can sometimes trump color accurate effects, and I hope this doesn't bog Olive down even as it unlocks a truly professional workload for Hollywood level color grading and everything else.
I know the ultimate aim of the project is to make a truly professional grade FOSS NLE, but I just hope we don't cripple the consumer and prosumer markets ability to use the software on low-tier or even mid-tier hardware in the process.
Just my two cents, sorry to barge in on a thing I definitely don't fully understand or even honestly half understand haha.
It’s up to folks to learn about the craft to understand just how godawful every damn attempt out there is. The loud folks or random developers stack garbage on top of awful, and before long, it is such a colossal pile of garbage code that it is impossible to fix. Olive and the folks using it need to decide what it should be and what they want to support.
Personally? I don’t mind. I’ll happily aid anyone keen on trying to do things correctly and help get the concepts down. I’m also equally happy to sit back and watch yet another NLE pop onto the scene only to become countless hours of worthless code.
So iMovie or something new? And let’s be honest, all of the Libre options can’t even hit iMovie as a baseline.
Nope.
Doesn’t work. Not really related to the topic at hand.
@sobotka a little intense, but the question I have is, does even Premiere handle this correctly? You were saying Adobe was using a hack.
a little intense
Try being the person attempting to demonstrate the issues for over a decade; you learn to cut to the chase.
It’s best to try the things; I prefer folks actually pushing the pixels have a hands-on understanding. Everything cascades from it. A wise leader of a project once said something to the effect “I prefer educated users over ‘smart’ software.” Good wisdom.
does even Premiere handle this correctly?
Yes, Adobe is a complete mess. It’s a hack on top of a kludge. One part history and another part needing to “integrate” with their other software. IIRC, only REC.709 was supported up to a version ago, and they have only recently hacked their software to barely support REC.2020 for HDR. “Support” in the loosest possible sense of the term; it’s junk.
With that said, you also have to remember that even in short films, you are typically conforming in another application for the grade, and as such, the software isn’t outputting a single pixel.
Resolve is better with their internal management, although it too isn’t ready for scene referred workflows without some careful coordination. Their ACES is still a mess, for example. Resolve will likely get there long before Adobe and Apple catch up, however.
Again though, the design decisions aren’t up to me or you; it’s all @itsmattkc’s effort. The walls all of the projects hit are the same ones though. Olive is in a unique position to have someone putting in the effort as well as not being buried under the colossal weight of garbage code yet.
Yeah, I hope it didn't come off like I was against proper pixel management. Just as an editor whose particular workflow doesn't require HDR or perfect color, I hope we don't end up with a piece of software that requires a Titan V just to have real-time playback with a few effects thrown on a clip.
Obviously everything is up to @itsmattkc and I didn't want to discourage you or him from working these tough issues out.
Yeah, I hope it didn't come off like I was against proper pixel management. Just as an editor whose particular workflow doesn't require HDR or perfect color, I hope we don't end up with a piece of software that requires a Titan V just to have real-time playback with a few effects thrown on a clip.
See, that's the problem. Colour/pixel management is not about "perfect color". It's about producing the right colour from the app, not some sort of high-end stuff that only very specialized users need. What every graphics application does is draw pixels. If it fails in that very basic aspect, then everything else is pointless. Imagine that you wanted to paint an orange pixel and your program painted yellow instead. Would you say it's OK because you don't need perfect color? Well, that's exactly what happens when you display an intense scene-referred orange through a not properly managed screen. When an application doesn't understand color, pixels are wrong. It might be just a tad off or incredibly wrong. Nobody wants that.
Regarding your concern about performance: I'd say that neglecting proper pixel management is potentially more likely to make things slower, because a ton of hacks will be needed to make the most basic things work. It happened before, and it will happen again if this is overlooked.
Yikes, I didn't think the discussion about implementing HDR would go this far. Is there any way it can be simplified? Maybe a separate branch so it doesn't affect the main Olive "master" branch? That way Olive can stay simple while HDR can be implemented appropriately without breaking Olive?
That way Olive can stay simple while HDR can be implemented appropriately without breaking Olive?
HDR simply makes broken code all the more apparently broken; there is no “stay simple.” It is really just “stay broken.”
Hmmm... Darn.
What is really exciting is that if Olive can address color now, then it will be among the few libre or proprietary applications actually doing it correctly. This, I believe, is an important point.
It amazes me that even Adobe gets this wrong if it is this important, affects everything so much, and has such a large effect on performance and on nearly every effect applied to a clip.
Premiere was the most common piece of software used when I was at university, and nearly every filmmaker I know who doesn't, like, work in Hollywood uses it.
I'm glad to hear it shouldn't destroy performance.
It amazes me that even Adobe gets this wrong if it is this important, affects everything so much, and has such a large effect on performance and on nearly every effect applied to a clip.
Premiere was the most common piece of software used when I was at university, and nearly every filmmaker I know who doesn't, like, work in Hollywood uses it.
I'm glad to hear it shouldn't destroy performance.
What Premiere used to do (though it's changing, because circumstances forced Adobe's hand to reconsider it) was default to rec.709. That meant things worked more or less properly when you produced stuff for rec.709 output. Since rec.709 is what HD video uses, for some time it was playing it safe to stick with it (and that's why most people found nothing wrong when using it), but now the landscape is changing, with HDR and rec.2020 becoming mainstream, and rec.709 doesn't cut it anymore for high-end video. That, plus a display-referred design that wasn't appropriate for wide dynamic range captures, makes Adobe Premiere pretty limited. You might find the claim that Adobe is weak in the field of colour suspicious; after all, everyone in the world is using their software. But just take a look at their colour management configuration: everything is designed around ICC profiles, a technology that works great in the press and DTP industries but is completely absent from any high-end video or compositing software. Adobe color management is mostly legacy stuff that is likely to break when faced with today's requirements.
Solid article that touches on the core of the pixel pipeline discussed here: https://medium.com/netflix-techblog/protecting-a-storys-future-with-history-and-science-e21a9fb54988
Interesting. It's amazing how the technology of color changes. And color outputs on dedicated monitors or viewers are an example of it:
"Common display standards include sRGB (internet, mobile), BT. 1886 (HD broadcast), Rec. 2020 (UHD and HDR broadcast), and P3 (digital cinema and HDR)"
I'm guessing one way to test out HDR would be to provide RAW footage or pictures. I do happen to have a DSLR that can take RAW pictures, and a game console that can record game footage in HDR. Maybe I could help out in testing the color grading process.
“Raw” isn’t a singular encoding characteristic, but rather a specific camera vendor encode. As cameras capture greater dynamic range, we will see the mandatory shift from a linearized, vendor specific camera encoding with specific primaries, to a vendor specific log encoding with vendor primaries.
TL;DR: It isn’t the specific encoding characteristic that is important, but how the data is handled in the internal pipeline. The article does an excellent job of describing the landscape.
Ah. I see.
Just happened to discover this video where color grading expert Dado Valentic goes into detail on how color grading is done in HDR and SDR. It's almost an hour long and the footage captured isn't the best, but figured I'd give a heads-up about it:
https://www.youtube.com/watch?v=jDtnmvkNW3I
(Timecodes just in case of interest:)
06:40 - How to Grading Session
08:20 - Natural capture in RAW, and process of HDR Gamut and Range
09:35 - Deciding on the contrast ["Contrast is more important than colors"]
12:00 - Using film in comparison to Digital
14:45 - Switch to HDR
30:20 - Questions and Answers
What I do like in this video is that the video is often edited in SDR first before switching the color profile to HDR, so that in SDR you can experiment with contrast before color. Then, when in HDR, you tweak the output energy of the picture.
Typically you master for a single target, then do what is known as a trim pass for other, lower grade formats. A proper pipeline, as an example, would master to an HDR target, then provide trim passes for SDR or other destinations.
That sounds about right actually.
Alrighty, firstly let me apologize for getting to this so late. I imagine the radio silence was not encouraging, but I have been spending time trying to wrap my head around OpenColorIO and the issues at hand.
OCIO has been implemented in the furtherocio branch, which has finally gotten us this output (using the nuke-default config):
(Tested with OCIO v1.0.9 compiled from http://opencolorio.org/downloads.html)
Preferences let you switch out OCIO configs during runtime. I assume a preference for the display transform should be added too. Am I correct in assuming OCIO's LUT will need to be updated for different clips depending on the color space of the source footage? Will the source footage's color space need to be a preference that users can set/override?
Also just to make sure, OCIO occurs after the alpha is associated. Is this correct behavior?
OCIO is currently implemented as an OpenGL shader which seems to work well with no noticeable performance impact (it can currently be disabled as well if anyone finds it does). Obviously compounding concurrent footage and effects will inevitably reach the peak of any computer's processing power - I'm thinking the bigger picture may be to have a background process caching the results of all shaders on a clip-level, not just the color transform. That way it could also benefit expensive effects like blurs and address #617 in the process. Let me know if this is off base or not.
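For anyone following along, this is roughly what the OCIO v1 GPU path involves (an illustrative sketch, not necessarily how the furtherocio branch wires it up): the processor emits a GLSL function plus a 3D LUT that gets uploaded as a texture and sampled by that function.

```cpp
// Illustrative OCIO v1 GPU path (not necessarily how the branch does it):
// the processor emits a GLSL function plus a 3D LUT to upload as a texture.
#include <OpenColorIO/OpenColorIO.h>
#include <string>
#include <vector>
namespace OCIO = OCIO_NAMESPACE;

struct GpuColorTransform {
  std::string glsl;          // append to the fragment shader; call OCIODisplay(color)
  std::vector<float> lut3d;  // upload as an RGB 3D texture, edge length 32
};

GpuColorTransform BuildGpuTransform(OCIO::ConstProcessorRcPtr processor) {
  OCIO::GpuShaderDesc desc;
  desc.setLanguage(OCIO::GPU_LANGUAGE_GLSL_1_3);
  desc.setFunctionName("OCIODisplay");
  desc.setLut3DEdgeLen(32);

  GpuColorTransform out;
  out.glsl = processor->getGpuShaderText(desc);
  out.lut3d.resize(3 * 32 * 32 * 32);
  processor->getGpuLut3D(out.lut3d.data(), desc);
  return out;
}
```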
First up, it's a heck of a bit of work doing this. See the shader comment for why. ;)
Preferences let you switch out OCIO configs during runtime. I assume a preference for the display transform should be added too.
Typically you'll need to expose:
The Display dictates the colorimetry of the particular display, so if you have a panel on a dual head system, you'd want to be able to select the appropriate display type for each display if they are different. The View selects the chosen view for the particular Display's colorimetry. The Look would be a high level look designed for the particular view.
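A sketch of pulling those three lists out of the active config with the OCIO v1 API to populate such a preference (the display and view names in the comments are just examples; they come from whatever config is loaded):

```cpp
// Sketch (OCIO v1): enumerate the Display / View / Look options of the active
// config so they can be exposed in the preferences UI described above.
#include <OpenColorIO/OpenColorIO.h>
#include <cstdio>
namespace OCIO = OCIO_NAMESPACE;

void ListDisplayOptions() {
  OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();
  for (int d = 0; d < config->getNumDisplays(); ++d) {
    const char* display = config->getDisplay(d);        // e.g. "sRGB", "Rec.2020"
    for (int v = 0; v < config->getNumViews(display); ++v) {
      const char* view = config->getView(display, v);   // e.g. "Film", "Raw"
      // Comma-separated list of looks attached to this display/view; may be empty.
      const char* looks = config->getDisplayLooks(display, view);
      std::printf("display=%s view=%s looks=%s\n", display, view, looks);
    }
  }
}
```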
Am I correct in assuming OCIO's LUT will need to be updated for different clips depending on the color space of the source footage?
100% correct. Imagine mixing Fuji FLog / FGamut with Sony SLog 3 and SGamut. What about a series of still images you want to bring in? Each strip requires not only an input transform, but also the look exposed. If you shoot something and develop a look for it, you probably want that loose idea displayed during editorial.
Will the source footage's color space need to be a preference that users can set/override?
Yes. Typically the best means is to do two things that OCIO supports. First, try to look for a transform label in the filename; OCIO has this capability. Something like overlay_srgb.tiff or scene1010_acesAP1.exr, etc. Then, failing that, default to whatever is set based on the predefined roles. That is according to OCIO v1; with OCIO v2 currently in development, further options may be exposed.
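A sketch of that two-step lookup with the OCIO v1 API (the helper name is mine; which role to fall back on is up to the application):

```cpp
// Sketch of the two-step lookup described above (OCIO v1): try to parse a
// colour space name out of the filename, then fall back to a predefined role.
#include <OpenColorIO/OpenColorIO.h>
#include <cstring>
namespace OCIO = OCIO_NAMESPACE;

const char* GuessInputColorSpace(const char* filename) {
  OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();
  // e.g. "overlay_srgb.tiff" -> "srgb", if the config defines such a space.
  const char* cs = config->parseColorSpaceFromString(filename);
  if (cs && *cs) return cs;
  // Fall back to a role; ROLE_DEFAULT is used here purely for illustration.
  return OCIO::ROLE_DEFAULT;
}
```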
Also just to make sure, OCIO occurs after the alpha is associated. Is this correct behavior?
This is a hell of a tricky question in truth because imagery comes in many combinations of linear and nonlinear encoded states. For nonlinear encodings with alpha, it is absolutely mandatory to disassociate the alpha prior to transform, then reassociate. Note that I said disassociate, assuming an associated alpha state. As you know, the RGB is valid in the case where alpha equals zero, and must be skipped. Olive wisely enforces associated alpha, and as such, it would be a simple call to disassociate. To properly associate from an unassociated image, you must multiply by the alpha, knocking out the RGB as it is not data in that context.
TL;DR It's safe to assume disassociate prior to transform if you want a global catch all approach. In theory, formats that are already linearized shouldn't require this, as the RGB represents the linear emissions already.
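A sketch of that catch-all approach on a packed RGBA float buffer (illustrative, not Olive code); the divide is skipped where alpha is zero, since the RGB there is valid emission and cannot be divided out.

```cpp
// Illustrative only: disassociate (assuming associated alpha going in), run
// the colour transform on RGB, then reassociate. Where alpha is zero the
// divide/multiply is skipped; the RGB there is valid emission data.
#include <cstddef>

void TransformWithAlpha(float* rgba, std::size_t pixel_count,
                        void (*transform_rgb)(float rgb[3])) {
  for (std::size_t i = 0; i < pixel_count; ++i) {
    float* px = rgba + i * 4;
    const float a = px[3];
    if (a > 0.0f) { px[0] /= a; px[1] /= a; px[2] /= a; }  // disassociate
    transform_rgb(px);                                     // nonlinear <-> linear
    if (a > 0.0f) { px[0] *= a; px[1] *= a; px[2] *= a; }  // reassociate
  }
}
```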
OCIO is currently implemented as an OpenGL shader which seems to work well with no noticeable performance impact (it can currently be disabled as well if anyone finds it does).
Note that for 3D LUTs, tetrahedral interpolation in OCIO v1 is only available via the CPU. That is, maintaining an offline and online path would help to keep things clear here. When doing the final online render[1], it should be via CPU for maximum quality.
Obviously compounding concurrent footage and effects will inevitably reach the peak of any computer's processing power - I'm thinking the bigger picture may be to have a background process caching the results of all shaders on a clip-level, not just the color transform.
Bingo. This is unavoidable and why I hinted that it's a rather massive and transformational change to the software. Unavoidable because it is safe to assume that certain ranges of the editorial will have compounded layers of blocking or temporary effects, or if used as an "always online" editor for quick projects, folks will be trying to use it as a Kitchen Sink NLE. The caching is a crucial part of any NLE, and why a majority of the open source offerings are catastrophic failures. It's also nontrivial to design one that works well.
Specifically with regards to OCIO, it's plausible that v2 will provide very speedy shaders for the view transforms, but even then, caching would be wise. The only issue is that if someone decides to flip the Display, View, or Look, the cache would be invalidated and re-applied. Hence having the cache operate at the reference space level makes it easier than flagging all of the frames as dirty and re-engaging the entire graph to re-render; the OCIO component would only need to be re-applied to the rendered reference space output.
That way it could also benefit expensive effects like blurs and address #617 in the process. Let me know if this is off base or not.
100% correct. Also note that designing a high level offline / online approach helps here as well. The offline variant can cut corners in the effects as required to minimize overhead, including fast path approximations. The online, on the other hand, can use the deepest possible bit depth and "most correct" approach. This would be relevant to other views, for something such as a Grading / Node view integrated with the shot view.
Hope this helps and doesn't make things more nightmarish. It's a huge task, and one that not many folks will appreciate until another member of the community steps up and showcases just how powerful it ends up being. It also means that plenty of the issues where folks want XXX feature might need to be shelved or dismissed.
Tough challenges ahead.
[1] Editorial / post production terminology historically has referred to the "online" being the highest quality, non real-time context, with "offline" being the lower quality real-time scenario. GPU / CPU path tracing rendering terminology flips that meaning around, and frankly it makes more sense with "offline" being the non-real-time highest quality rendering and "online" being the lower quality variant for working. Apple removed their older terminology page sadly, but here is a quick description of the offline vs online editorial approach. Graphics rendering discussed here.
Mentioned shader comments in the furtherocio branch if anyone's wondering.
Until now, I was unaware of colour/pixel management's importance to an NLE. A lot of amateur users (like myself and the OP) use Olive to cut, alter some colours, make transitions, and be done. Basically, using Olive like a libre iMovie clone.
After reading the linked comments, I agree that requests like the page-curl effect should be shelved in favor of proper pixel/colour management. Fortunately, the issues tab is not flooded with such requests yet.
Professional is the first word in Olive's description, so I hope Olive can somewhat live up to that word.
Been trying to compile the "furtherocio" branch, and this error shows up:
$ make
make -f Makefile.Debug
make[1]: Entering directory '/c/Github/olive'
g++ -c -fno-keep-inline-dllexport -g -std=gnu++11 -Wall -W -Wextra -fexceptions -mthreads -DUNICODE -D_UNICODE -DWIN32 -DMINGW_HAS_SECURE_API=1 -DQT_DEPRECATED_WARNINGS -DGITHASH=\"afa89ff\" -DQT_MULTIMEDIA_LIB -DQT_OPENGL_LIB -DQT_SVG_LIB -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_NETWORK_LIB -DQT_CORE_LIB -I. -I../../msys64/mingw64/include/QtMultimedia -I../../msys64/mingw64/include/QtOpenGL -I../../msys64/mingw64/include/QtSvg -I../../msys64/mingw64/include/QtWidgets -I../../msys64/mingw64/include/QtGui -I../../msys64/mingw64/include/QtNetwork -I../../msys64/mingw64/include/QtCore -Idebug -I\include -I../../msys64/mingw64/share/qt5/mkspecs/win32-g++ -o debug/renderfunctions.o rendering/renderfunctions.cpp
rendering/renderfunctions.cpp:33:10: fatal error: OpenColorIO/OpenColorIO.h: No such file or directory
#include <OpenColorIO/OpenColorIO.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[1]: *** [Makefile.Debug:14231: debug/renderfunctions.o] Error 1
make[1]: Leaving directory '/c/Github/olive'
make: *** [Makefile:38: debug] Error 2
I'm assuming I'm going to need to find the "OpenColorIO.h" file or somehow compile OpenColorIO, right?
This might be challenging to ask but would it be possible to implement a High Dynamic Range video editing workflow to Olive?
I was doing a bit of research into videos that are compiled/rendered in HDR. They're mostly HEVC/x265 video files with HDR metadata embedded. FFmpeg is capable of rendering videos in HDR using this command:
ffmpeg -i <infile> -c:v libx265 -tag:v hvc1 -crf 22 -pix_fmt yuv420p10le -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,10):max-cll=1000,400" <outfile>.mkv
Just for a useful rundown:
I was thinking that when exporting in HDR, the Export Window could be utilized like this:
(It's a photo-editing example, just to give an idea nonetheless...)
Assuming that Olive uses FFmpeg 3.4 and above, rendering and exporting videos in HDR sounds simple to do, at least using the command line above.
Another problem to tackle is that the video viewer would need a colorspace adjustment to be able to view HDR videos in near-similar colors on an SDR display. One way that can be done is using color-grading LUT files to quickly modify HDR videos so they display similar HDR colors in SDR, without spending money on an HDR reference monitor. Just a little useful information that could serve a purpose.
As I already said earlier, this is challenging to ask and you probably are very busy, but I just wanted to post this feature request so that the idea of implementing an HDR video editing workflow doesn't fade away. If this doesn't work for some reason, well, at least it was worth asking.
P.S. Olive would need the x265 encoder (FFmpeg) to save the videos in HEVC for HDR.