awawa-dev / HyperHDR

Highly optimized open source ambient lighting implementation based on modern digital video and audio stream analysis for Windows, macOS and Linux (x86 and Raspberry Pi / ARM).
http://www.hyperhdr.eu/
MIT License
1.13k stars · 117 forks

Implement standard smoothing filter structures #788

Closed jclsn closed 8 months ago

jclsn commented 8 months ago

Feature request

What problem does this feature solve?

There are standard filter structures which are widely used for smoothing. Using them would make it easier to understand how the filters work and possibly get rid of the delay introduced by the current filter implementations.

What does the proposed API look like?

Not applicable

How should this be implemented in your opinion?

There are off-the-shelf filter structures that can be used for this, e.g. moving-average filters, the Kalman filter, or other types of low-pass filters.
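For illustration, a minimal sketch of one such off-the-shelf structure, a simple moving average over recent brightness samples (all names are hypothetical, not HyperHDR code):

```cpp
#include <cstddef>
#include <deque>
#include <numeric>

// Hypothetical sketch: a simple moving-average smoother over the last
// N brightness samples. Each new sample pushes the oldest one out.
class MovingAverageFilter
{
public:
	explicit MovingAverageFilter(std::size_t windowSize) : _windowSize(windowSize) {}

	double process(double sample)
	{
		_window.push_back(sample);
		if (_window.size() > _windowSize)
			_window.pop_front();
		// Average of whatever is currently in the window.
		return std::accumulate(_window.begin(), _window.end(), 0.0) / _window.size();
	}

private:
	std::size_t _windowSize;
	std::deque<double> _window;
};
```

Such a filter introduces up to N-1 samples of group delay, which is the trade-off mentioned above.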

Are you willing to work on this yourself?

Yes, if the current design of the Smoothing class were made more extensible. There should be one member function and one SmoothingConfig per filter structure. Different filters require different parameters, and currently it is not trivial to implement new filters.

awawa-dev commented 8 months ago

Hi. You would have to propose an algorithm with a specific implementation. The current one can be described as "0-delay" because it immediately starts marching towards a new target. It's hard to say anything about samples here because they are aggregated. So you have: the current color, the target color (updated by the video frame that came in), and the target time (delta in ms from the user's smoothing configuration, reset when a new target color is set). Based on the target color and time, the current color is updated by the clock.
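A rough sketch of that clock-driven update for a single channel, with hypothetical names (not the actual Smoothing.cpp code):

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of the "0-delay" scheme described above: each timer tick moves the
// current value a fraction of the way toward the target, where the fraction
// is elapsed time over the user-configured target time. All names here are
// hypothetical; targetTimeMs is assumed to be > 0.
struct ChannelSmoother
{
	double current = 0.0;    // current output value (one color channel)
	double target = 0.0;     // latest value from the incoming video frame
	int64_t targetTimeMs;    // user-configured smoothing time in ms

	explicit ChannelSmoother(int64_t timeMs) : targetTimeMs(timeMs) {}

	// Called whenever a new frame arrives.
	void setTarget(double newTarget) { target = newTarget; }

	// Called by the clock with the milliseconds elapsed since the last tick.
	double tick(int64_t elapsedMs)
	{
		double k = std::min(1.0, double(elapsedMs) / double(targetTimeMs));
		current += (target - current) * k;
		return current;
	}
};
```

The "0-delay" property is visible here: the very first tick after a target change already moves the output, with no buffered samples in between.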

jclsn commented 8 months ago

There is no filter with zero delay. You would need at least two samples that you multiply with filter coefficients, and therefore a minimum of one sample of delay. I asked about this in the Hyperion channel once, and I think no one really knew how it was implemented over 10 years ago. I saw that you also added a weighted version to make it react faster to changes.

I might be wrong and thinking too much in audio-processing terms. Maybe there is not much to improve. Nonetheless, the color is not important for smoothing, only the brightness values, which would be the samples. We are also only talking about 30–120 Hz max here, which is a very narrow spectrum. Simple low-pass filtering would probably not be enough. Can you show me the code section where the algorithm is implemented? I would assume it to be a simple moving average with interpolation.

awawa-dev commented 8 months ago

I have indicated what I mean by this term. The new smoothing, when making large changes, immediately generates a jump that should be instantly noticeable to the user while still ensuring smoothness. At the same time, it eliminates the decay effect when changes in the scene are slight. The second aspect is the anti-flickering filter, especially for discrete changes, but I don't know if that is what you mean, because you're writing about Smoothing, and the anti-flickering filter is a separate component that only applies to certain cases at low brightness.

If the user experiences delays with the default smoothing settings, it is usually caused by other factors: most often a delay in the video source (e.g. too low a sampling rate) or an inefficient LED driver (e.g. in my setup there is a clearly noticeable difference between LEDs driven by HyperSPI and Philips Hue lamps; to a greater or lesser extent this may apply to all controllers using WiFi, Adalight at low speed, etc.).

I asked about this in the Hyperion channel once and I think no one really knew how it was implemented over 10 years ago.

Even though the original algorithm was called linear, in practice it had little to do with linearity. That behavior could only be observed when no new samples were arriving. Otherwise it turned into something exponential with a very unpleasant (for Ambilight) decay effect that seemed to last forever (in my tests, in real use, even up to 0.4–0.5 seconds on default settings). And as I have already written, it uses three states: the current color, the target color that has just arrived, and the target time defined by the user.

Nonetheless, the color is not important for smoothing, only the brightness values, which would be the samples.

You've hit the nail on the head. I was just about to rewrite the smoothing for that very reason. Operating on the RGB model does not describe well how the human eye perceives changes. We can use the LCH colorspace instead, and I have already done some work for the next version, v21 (v20 is closed for changes, in its final test phase).

Here is a good comparison (for example, set up a transition from green rgb(0,255,0) to blue rgb(0,0,255) and apply it): https://davidjohnstone.net/lch-lab-colour-gradient-picker There is no doubt that the selected color space can influence the smoothing process. Unfortunately, the best ones (LCH and OKLCH) are very expensive due to their use of trigonometric functions. I'm not sure whether a look-up table would have side effects.

jclsn commented 8 months ago

Yeah, on second thought, sure, the color has to be taken into account. Otherwise you will have smoothed brightness while the colors are still flickering.

Can you point me to the code section where the current filter is implemented?

jclsn commented 8 months ago

Anyway, I guess I had the wrong notion of the filter. I was thinking about this from an audio signal processing perspective.

awawa-dev commented 8 months ago

This filter https://www.hyperhdr.eu/2021/04/how-to-set-up-hyperhdr-part-i-basic.html#chapterFlickering ? Currently it's a rather crude solution; it works, but it's certainly not optimal.

jclsn commented 8 months ago

Tbh this doesn't tell me much. I don't know what the threshold or the step mean. There is no formula for how the filter works, just a showcase and the announcement of an "alternative linear" mode, which also doesn't explain what it actually does.

Filter structures are best described with a filter structure graph. Here are some examples: https://en.wikipedia.org/wiki/Digital_biquad_filter#Direct_form_1 Can you send me a link to the lines of code where it is implemented? Maybe I can come up with something based on what is currently there.

awawa-dev commented 8 months ago

OK, I will briefly describe the problem that made life very difficult for me and many users, and why this simple filter was implemented. I've explained this many times, but I can't find it now. Even if we capture a static scene, there are almost always some minimal disturbances due to, for example, the grabber's use of dithering or other filters. If we have a very bright scene, it is imperceptible to the eye if (254,255,0) suddenly appears for some averaged area instead of (255,255,0). But on a dark scene, there is a dramatic difference on the LEDs between (2,2,0) and (1,2,0), which we perceive as flashing and which can be seen in the videos.

Another intensifying factor is a high sampling frequency: the higher it is, the faster the flickering when smoothing runs without this filter. Therefore, threshold specifies the maximum LED brightness for which this filter is activated; anything below threshold passes through it. We then no longer allow a situation where (1,2,0) appears for a moment instead of (2,2,0) and then comes back. A change must be at least the step factor, e.g. 2, so (2,2,0) can change to (0,2,0). For averaged areas, this basically eliminates the problem of source flickering completely. But it also blocks changes when such a discrete change is not due to flickering, but simply because (2,2,0) has changed to (1,2,0) and should be updated. Ideally, we would detect whether a discrete oscillation is present and block only that, without blocking discrete changes on dark scenes when no flickering occurs. Currently there is a timeout that tries to handle this, but it introduces an unnecessary delay (though only for values below threshold, and only for discrete changes).
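Roughly, the threshold/step rule could be sketched like this per channel (hypothetical names; the actual Smoothing.cpp logic is more involved and also includes the timeout mentioned above):

```cpp
#include <cstdint>
#include <cstdlib>

// Sketch of the anti-flicker rule described above (names hypothetical):
// 'threshold' is the brightness below which the filter is active,
// 'step' is the minimum per-channel change allowed to pass through.
uint8_t antiFlicker(uint8_t previous, uint8_t incoming, uint8_t threshold, uint8_t step)
{
	// Bright values are not filtered: small jitter is invisible there.
	if (previous >= threshold && incoming >= threshold)
		return incoming;

	// Dark values: ignore changes smaller than 'step' as likely dithering noise.
	if (std::abs(int(incoming) - int(previous)) < int(step))
		return previous;

	return incoming;
}
```

With step = 2, the jitter between (2,2,0) and (1,2,0) is held at the previous value, while a change to (0,2,0) passes through, matching the behavior described above.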

It's implemented here https://github.com/awawa-dev/HyperHDR/blob/fc8357927db85be3f779184339a3ef6a95427334/sources/base/Smoothing.cpp#L184 and is waiting for improvement ;)

jclsn commented 8 months ago

Thank you for the explanation. I just tried to build it, but encountered an awful lot of Qt6 errors, and eventually the build failed. The log is so long I couldn't even post it here. Maybe it expects Qt5 libraries and can't handle the new Qt6 libraries shipped with Plasma 6.0?

https://0x0.st/HFXd.txt

awawa-dev commented 8 months ago

Switch to c++20 branch

jclsn commented 8 months ago

Shouldn't that be defined in the CMakeLists.txt?

awawa-dev commented 8 months ago

It's in CMakeLists.txt; check the last commit on that branch. On Linux it currently uses C++11 because we primarily target Qt5. Windows and macOS (M1) use Qt6 and a more modern C++ by default. After I release v20, I'll switch HyperHDR to C++20, because I miss some of its improvements over C++11.

awawa-dev commented 8 months ago

But actually it's a good idea to add a check: if Qt6 is selected, force C++20 on Linux too.

jclsn commented 8 months ago

I just added it myself; works for now. I just want to get the compile commands so I can jump through the code.

jclsn commented 8 months ago

Okay, I am starting to understand the API. I implemented a first weighted moving average. I am now aiming to implement a standard set of filters for smoothing and have changed the initial post accordingly.

I'm struggling a bit with adding new entries to the web server's selection menu. I would like to change the displayed input parameters in the smoothing section. Currently those are static, because both of the implemented filters use the same ones. For the exponential moving average I would only need an alpha value. I am also thinking about implementing a Kalman filter, which would need other parameters as well.

Currently the smoothing section is not very extensible in this regard. Could you maybe change the design so that there is one member function per smoothing type, and also make the set of smoothing-section parameters changeable?

This is how far I got. Maybe you can help me out there a bit. Once it is done for one filter, I will be able to do the rest. I invited you as collaborator for my fork.

https://github.com/jclsn/HyperHDR/commit/57493d6961030006db2fef29d9903cf56fa267a0

The extensibility work should probably go in a separate commit, but I am still a bit lost.

jclsn commented 7 months ago

I don't think a smoothing filter needs to be all that complex. This is really a mystery to me:

https://github.com/awawa-dev/HyperHDR/blob/fc8357927db85be3f779184339a3ef6a95427334/sources/base/Smoothing.cpp#L531

Could you maybe show me where the processing section is generated, so I can see if I can change the parameters according to the filter structure?

awawa-dev commented 7 months ago

You can change the LED driver to debug (to file) and add debugOutput(newTarget) in linearSetup; then you will see the difference.

jclsn commented 7 months ago

I don't understand

jclsn commented 7 months ago

The thing is: I can implement some filters. I am not used to complex C++ applications though, since I mainly code in C. Would be great if you could help me a bit out here!

awawa-dev commented 7 months ago

I don't understand

This is a real test of how each smoothing algorithm behaves in practice: how the transition is performed depending on the incoming colors, including full-range changes (0→255 and 255→0). debugOutput provides sample data (it overrides the grabber/effect source), and in the LED debug file you get the real output.

jclsn commented 7 months ago

Well, I am planning to implement completely different approaches. For this I need your help to change the processing section layout though!

jclsn commented 7 months ago

You completely misunderstood what I was asking I think.

awawa-dev commented 7 months ago

I think you will need to perform that test anyway. It's easy to set up, and it will also allow you to measure delay/latency (don't you want to lower it, after all?).

Incoming LED colors (object thread) => invokes UpdateLedValues => applies the anti-flickering filter and sets a new color target for smoothing

Timer (from the LED driver thread!) => invokes updateLeds() => performs smoothing and calculates the new colors => queueColors sends the colors to the LED driver

jclsn commented 7 months ago

Well, in order to measure my results, I first need to be able to implement the filter and tune it at runtime. For this I need to be able to add a set of filters to the processing selection section. I don't want to plug new values into the code and recompile every time I want to test something. Without this I will get demotivated very quickly.

So I was asking for your assistance in changing the processing section accordingly, or in making it easier to add new filters that require different parameters. If that were done for one filter (I sent you a very simple example above that only needs one parameter), I could do the same for more sophisticated filter structures myself.

Profiling can be done later...

awawa-dev commented 7 months ago

What exactly can I help you with? For https://en.wikipedia.org/wiki/Digital_biquad_filter we certainly do not currently store previous samples. The question is whether to allow the user to define the coefficients (I can see that you've already handled one new parameter on your fork) or to adopt certain constant values.

jclsn commented 7 months ago

I have posted a filter member function above which just needs this one alpha parameter. I have added the filter to the filter-selection drop-down menu, but I would like to change the parameter-selection section along with it, so that only the alpha is visible. This is currently not easily done; it would require a different SmoothingConfig constructor, I think, or overloading it somehow, if that is possible. This is where my C++ skills are not sufficient. You could probably do this with ease.
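For reference, a hedged sketch of such a single-alpha exponential moving average (hypothetical names, not necessarily matching the fork's code):

```cpp
// Sketch of an exponential moving average with a single alpha parameter.
// Names are hypothetical; the actual member function on the fork may differ.
class ExponentialMovingAverage
{
public:
	// alpha in (0, 1]: higher alpha reacts faster, lower alpha smooths more.
	explicit ExponentialMovingAverage(double alpha) : _alpha(alpha) {}

	double process(double sample)
	{
		if (!_initialized)
		{
			// Seed the state with the first sample to avoid a startup ramp.
			_state = sample;
			_initialized = true;
		}
		else
		{
			_state = _alpha * sample + (1.0 - _alpha) * _state;
		}
		return _state;
	}

private:
	double _alpha;
	double _state = 0.0;
	bool _initialized = false;
};
```

Unlike a windowed moving average, this needs no sample buffer, which is why alpha is the only parameter the UI would have to expose.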

After this is done, I can orient myself by the diff and do the same for other, more advanced filters like the Kalman filter, which is able to smooth input signals with minimal lag.

If any filter requires a buffer of delayed samples, I can implement that without your help.
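A scalar Kalman filter for a single brightness channel could be sketched like this (hypothetical names; the process and measurement noise values would become user-tunable parameters):

```cpp
// Sketch of a 1-D (scalar) Kalman filter with a constant-state model.
// All names are hypothetical, not taken from any HyperHDR branch.
class ScalarKalman
{
public:
	ScalarKalman(double processNoise, double measurementNoise)
		: _q(processNoise), _r(measurementNoise) {}

	double process(double measurement)
	{
		// Predict: the state is modeled as constant, so only uncertainty grows.
		_p += _q;

		// Update: blend the prediction and the measurement by the Kalman gain.
		double k = _p / (_p + _r);
		_x += k * (measurement - _x);
		_p *= (1.0 - k);
		return _x;
	}

private:
	double _q, _r;     // process and measurement noise
	double _x = 0.0;   // state estimate
	double _p = 1.0;   // estimate covariance
};
```

The q/r ratio controls the smoothing-versus-responsiveness trade-off, which is why this filter needs a different parameter set than the EMA.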

awawa-dev commented 7 months ago

If you want to hide parameters depending on the selected option, you should do so in the schema. For example, use dependencies as in the grabber configuration: https://github.com/awawa-dev/HyperHDR/blob/master/sources/base/schema/schema-videoGrabber.json

There is also a more advanced if-then-else method available, although I haven't used it so far: https://github.com/json-editor/json-editor?tab=readme-ov-file#if-then-else More advanced methods require coding in JavaScript (for example https://github.com/awawa-dev/HyperHDR/blob/master/www/js/grabber.js, since some grabber properties depend on the OS).

In the constructors and when reading the configuration from JSON, you can simply adopt some default values and just not use them when a given option does not require them. However, it will be difficult not to use, for example, frequency, because the LED driver depends heavily on it.
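To illustrate the dependencies approach, here is a hypothetical schema fragment in the style of the existing schemas, showing an alpha field that only appears when an "ema" smoothing type is selected (the property names are made up, not the real HyperHDR keys):

```json
{
	"smoothingType": {
		"type": "string",
		"enum": ["linear", "ema"],
		"default": "linear"
	},
	"alpha": {
		"type": "number",
		"minimum": 0.0,
		"maximum": 1.0,
		"default": 0.5,
		"options": {
			"dependencies": {
				"smoothingType": "ema"
			}
		}
	}
}
```

With json-editor, the `options.dependencies` block hides `alpha` unless `smoothingType` equals `"ema"`, so each filter can expose only its own parameters without any C++ changes.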