Closed morrolinux closed 2 years ago
It's not really a "bug"; it takes time for the audio to re-render. The alternative would be to just mute the audio/that track for the time it takes to re-render, which may be less disconcerting, but would also be inconvenient in other ways.
I'd advise looking for more efficient ways of doing this. I've never seen any video editor max out the CPU on a high-end machine just for moving a track on the timeline. As you can see, it makes syncing an audio track to the video an almost impossibly time-consuming job. This gets even worse if I edit a full video and then decide to ripple delete a clip at the beginning, as the re-render happens for all subsequent clips. It doesn't scale all that well from what I can tell.
@itsmattkc can the audio waveform data be cached? There is a max zoom for Olive (one frame * N Pixels), which is a huge amount of audio samples.
While importing footage, rip the audio at a specific resolution into a float buffer (obviously allocate/resize the array when creating a new media object). Obviously this also needs to be saved to disk and reloaded when the project is opened.
Drawing the audio waveform should be easy for the GUI from the pre-processed cached waveform.
When using FCPX, importing footage (just dragging and dropping) triggers audio waveform to be calculated, I'm assuming Apple are also caching the audio waveform at a lesser sample rate that matches the max zoom of the timeline.
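For illustration, the peak cache suggested above could be built by reducing the raw samples to (min, max) pairs per fixed-size bucket. This is a hypothetical sketch (the function name and bucket scheme are invented, not Olive's actual code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical peak cache: collapse raw samples into one (min, max) pair per
// bucket so the GUI can draw the waveform without touching the audio again.
std::vector<std::pair<float, float>> BuildPeakCache(const std::vector<float>& samples,
                                                    std::size_t samples_per_bucket) {
  std::vector<std::pair<float, float>> peaks;
  for (std::size_t i = 0; i < samples.size(); i += samples_per_bucket) {
    std::size_t end = std::min(i + samples_per_bucket, samples.size());
    float lo = samples[i], hi = samples[i];
    for (std::size_t j = i; j < end; ++j) {
      lo = std::min(lo, samples[j]);  // track the lowest sample in the bucket
      hi = std::max(hi, samples[j]);  // and the highest
    }
    peaks.emplace_back(lo, hi);
  }
  return peaks;
}
```

The GUI then draws one vertical line per bucket from min to max; "mipmapping" just means storing this at several bucket sizes so every zoom level has a near-matching resolution.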
@morrolinux If you give me some perf information to show what the worst offending functions are, I can look into speeding them up.
@alcomposer I think you're misunderstanding the issue, this isn't related to the visual audio waveform you see on the timeline (which is already cached and mipmapped for the best performance and fidelity), this is related to rendering the audio itself, which gets rendered/cached in advance before preview/playback. Though to be fair, I can't say for sure what the cause is until I get a perf report.
@itsmattkc Gotcha.
To throw a rather huge spanner in the works, I wonder if you even need to cache the audio output at all. Audio is rather low on the CPU side these days.
Video processing is many orders of magnitude more demanding. So split the caching for Video only, and deliver on demand the audio mix?
I mean, the reason why video has to cache is due to the fact that stacking effects will eventually stop video processing being RT. Audio in Olive shouldn't have that same issue?
As Olive isn't really a DAW, the tools for overloading the audio system aren't specialised enough in Olive to allow a user to push the audio rendering past RT anyway.
This is where interchange comes in, and why the work with OTIO is so important (on Olive's side and in any compatible DAW).
That said, audio processing is embarrassingly vectorisable. SSE would get you 4x the performance without any tricks.
Admittedly I don't know if you are already doing this?
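As a rough illustration of the kind of SSE win being suggested (a generic sketch, not code from Olive), a per-sample gain can be applied four floats at a time with SSE intrinsics:

```cpp
#include <xmmintrin.h>  // SSE intrinsics
#include <cassert>
#include <cstddef>

// Illustrative only: apply a gain to a float buffer four samples at a time.
// Production code would also want runtime CPU detection and aligned buffers.
void ApplyGainSSE(float* buf, std::size_t n, float gain) {
  const __m128 g = _mm_set1_ps(gain);        // broadcast gain to 4 lanes
  std::size_t i = 0;
  for (; i + 4 <= n; i += 4) {
    __m128 v = _mm_loadu_ps(buf + i);        // load 4 samples (unaligned-safe)
    _mm_storeu_ps(buf + i, _mm_mul_ps(v, g));
  }
  for (; i < n; ++i) buf[i] *= gain;         // scalar tail for leftover samples
}
```

In practice you'd first let the compiler auto-vectorise the plain scalar loop at -O2/-O3; hand-written intrinsics only pay off where the auto-vectoriser fails.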
@morrolinux If you give me some perf information to show what the worst offending functions are, I can look into speeding them up.
In this log: perf.zip (flame graph), perf.data.zip (full perf). Hopefully it's the right material to catch the issue.
@alcomposer There is an argument for not caching the audio, and I have thought about it before. The argument for caching, in my mind, is more of an accuracy thing than a performance thing: say you have a look-ahead compressor or a reverb where samples are dependent on the samples that come before it - with realtime audio, you may get different results if you start the playhead at different positions, cached audio is always "what you hear is what you get".
Admittedly, DAWs are almost all realtime, including those used for film (i.e. Pro Tools), so perhaps it's not a huge deal for end users. Particularly since audio is already converted to PCM on ingest (which was done because I remember having a lot of timing issues with codecs back in the 0.1 days).
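The determinism point can be shown with a toy stateful effect. This one-pole low-pass is a stand-in (not Olive code) for any effect whose output depends on earlier samples, such as a reverb tail or a compressor envelope:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy one-pole low-pass: each output sample depends on the previous output.
// Starting playback mid-stream with fresh state therefore produces different
// samples at the same timeline position than processing from the beginning.
std::vector<float> OnePole(const std::vector<float>& in, float a, float state = 0.0f) {
  std::vector<float> out(in.size());
  for (std::size_t i = 0; i < in.size(); ++i) {
    state += a * (in[i] - state);  // filter state carries history forward
    out[i] = state;
  }
  return out;
}
```

Starting the playhead two samples in with no accumulated state yields a different value at that timeline position than a full render would, which is exactly the "what you hear is what you get" argument for caching.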
That said, audio processing is embarrassingly vectorisable. SSE would get you 4x the performance without any tricks.
I use SIMD instructions (memcpy, memset, et al.) wherever possible, but I won't know exactly what the slowdown in the OP is until I look at the perf report.
@itsmattkc Also, I forgot: should the perf data I attached not be enough, I can also provide you with the project and media. Have a good one
@itsmattkc you're completely correct.
Have a look at what is going on, and if there is a bottleneck that can't be avoided with precached audio, revisit the design. Otherwise I see no reason to go RT.
From experience, starting and stopping a track with compression applied doesn't really cause any issues.
By the time a user has registered that sound is playing, the look-ahead limiter should already be well and truly locked in.
However DAWs also have track freezing, and bouncing in place. So that can mitigate such discontinuities (if they exist at all).
Yet I really don't think you need to worry too much about audio mixing. It's a huge area to cover, and I'm not sure an NLE is the best place to do a mix anyway.
So be reassured that if the caching lag is unavoidable, there could be a plan B (RT audio).
I would advise against caching the audio by default. Even if it were faster, it would probably never be fast enough to keep up with the user, and performance doesn't really scale for longer projects/tracks.
Let's suppose you just want to tune the volume of an audio track. Each time you make an adjustment you want to hear the change immediately. Having to wait even just a second is already too much, I think.
If you need to sync an audio track (e.g. a song, like in my attached video) you can't really wait seconds for the change to take effect each time you nudge the track left or right.
I get the point of being consistent with effects that depend on previous samples, like a reverb, but I'd rather have all the basic, very frequent operations be fast, even if it means having to place the play head a bit earlier on the track for such effects.
Besides, many DAWs do this in RT and also have a bounce-in-place feature like @alcomposer already pointed out, so you could probably follow this approach too in the future if needed.
I tried disabling "automatic caching" in the sequence settings but the audio still lags behind. Does that setting only disable video caching?
Yes, that setting only affects video.
@morrolinux Please try the latest commit.
Thanks for the follow-up. I've briefly tested on a ~25-minute track and it looks much faster. Unfortunately some other recent changes in the code have made all my previous projects incompatible with the latest build (the timeline shows as empty), so I can't really compare on the same project, but I'll test it thoroughly in the next few days while editing my new video.
Can you send the project file? I had tried to make any changes to the project format backwards compatible so that shouldn't have happened.
Sure pinebook.zip
Thanks, 8b3ea8c421c96f6546c2db520906c9f66de74813 should be able to load your project.
Indeed it works. So I tried the same stuff and it looks like the audio caching can't keep up with the video caching and the audio cuts off during playback-while-caching.
Also, as I said, it's tremendously improved, but still not super snappy and it's demanding on resources for every simple cut that I make:
I remain convinced that real time audio would be ideal for most situations. I see on some of your previous commits that you actually separated audio and video caching settings but I couldn't find any settings in the UI for disabling audio caching. Maybe I got this wrong?
I should mention, this is realtime audio. That's the performance gain you're experiencing. The audio cache in the latest commit is disabled by default. You can turn it back on in the Sequence's properties under the Parameter Editor.
So I tried the same stuff and it looks like the audio caching can't keep up with the video caching and the audio cuts off during playback-while-caching.
This is more likely a simple timing issue or bug than a performance issue.
Also, as I said, it's tremendously improved, but still not super snappy and it's demanding on resources for every simple cut that I make:
The high resource usage is most likely the waveform generation. Olive generates an accurate waveform, i.e. it doesn't simply show the footage waveform, it also shows the waveform post-effects. This obviously requires background rendering to achieve.
I'm guessing your CPU is significantly less powerful than the systems I usually test on, because while waveform generation is resource intensive, it's fairly quick on my end, even for long 40+ minute clips. Systems like yours may benefit from a setting to use "quick waveforms", i.e. just use a cached footage waveform for speed at the expense of accuracy.
EDIT: It also could be the hashing; try disabling auto-cache video and see if resource usage goes down.
I should mention, this is realtime audio. That's the performance gain you're experiencing. The audio cache in the latest commit is disabled by default.
Oh, that's odd. With real-time audio I was expecting to be able to adjust the volume during playback and hear the change while doing so; instead I can only hear the change in volume after a good few seconds of playback since making the change:
You can turn it back on in the Sequence's properties under the Parameter Editor.
I'm sorry, isn't this it?
This is more likely a simple timing issue or bug than a performance issue.
If that's the case then I guess it's probably related to the audio adjustments not happening "real-time" as well?
The high resource usage is most likely the waveform generation. Olive generates an accurate waveform, i.e. it doesn't simply show the footage waveform, it also shows the waveform post-effects. This obviously requires background rendering to achieve.
Of course, maybe it would be a good idea to check whether the waveform actually needs to be re-rendered or not.
When I split a clip and ripple delete it, I'm not making any change to the effects, so the waveform should just move along with the clip to the left, isn't that so? (see audio_resources.mov)
I'm guessing your CPU is significantly less powerful than the systems I usually test on, because while waveform generation is resource intensive, it's fairly quick on my end, even for long 40+ minute clips.
Well, I'm on an AMD Ryzen 9 3900X 12-Core Processor with 32 GB of RAM; those look like reasonable specs to me.
Systems like yours may benefit from a setting to use "quick waveforms", i.e. just use a cached footage waveform for speed at the expense of accuracy.
Is that something I can enable by myself or an idea for you to code in?
EDIT: It also could be the hashing; try disabling auto-cache video and see if resource usage goes down.
unfortunately not, it's the same
Oh, that's odd. With real-time audio I was expecting to be able to adjust the volume during playback and hear the change while doing so; instead I can only hear the change in volume after a good few seconds of playback since making the change:
Yes, during playback it renders ahead 2 seconds. It does so, in fact, to try to prevent the cutting out you mentioned earlier (though 93bdf409b9d168384f5d95420bcbf483cf545036 might have fixed a potential issue with that). You can adjust this interval here and experiment with it if you'd like: https://github.com/olive-editor/olive/blob/master/app/widget/viewer/viewer.cpp#L50
I'm sorry, isn't this it?
No, there's an extra setting if you select the Sequence node and look in the Parameter Editor. Enabling "Auto-Cache Audio" will restore the earlier functionality.
Is that something I can enable by myself or an idea for you to code in?
It has to be coded in; it isn't something that exists right now. But if you want to confirm it's definitely the waveforms, try setting this to false: https://github.com/olive-editor/olive/blob/master/app/render/previewautocacher.cpp#L37 It may be something other than the waveforms, especially since...
Well, I'm on an AMD Ryzen 9 3900X 12-Core Processor with 32 GB of RAM; those look like reasonable specs to me.
Your CPU is, if anything, significantly more powerful than mine. Additionally, waveforms would explain some of the jump in CPU, but not so much the lag in the UI. That might be something else and worth perfing.
Also do you usually compile yourself? If so, it might be worth testing the AppImage too.
Yes, during playback it renders ahead 2 seconds. It does so, in fact, to try to prevent the cutting out you mentioned earlier (though 93bdf40 might have fixed a potential issue with that). You can adjust this interval here and experiment with it if you'd like: https://github.com/olive-editor/olive/blob/master/app/widget/viewer/viewer.cpp#L50
Thanks for getting back to me so quickly. So, I've done some more testing and the audio cuts off also when video caching threads are not working. This was on your referenced commit 93bdf40
While doing so, I was staring at htop
and I noticed that during playback (if no video caching is happening) there's always just a single thread near 100% while all the others are doing nothing. So I asked myself: where is the audio renderer in all this? Is it working on the main thread, contributing to the spike I see, or is it just being scheduled too sporadically to even show up? If there's nothing to enforce thread priority such that audio samples are produced at least one frame ahead of the current play head position, the audio render thread might as well be scheduled in a way that lets the audio caching lag behind the play head. Maybe a thread synchronization construct like a semaphore could help with that.
Or maybe it's another issue entirely.
It's not like I've read the source code of other DAWs or video editors, so correct me if I'm wrong, but for "real-time", 2 seconds seems like a lot to me. If you can ensure that the audio is always computed 1 video frame ahead of the play head, you probably need to render no more than that at a time (which would be 1/30 of a second for a 30 fps sequence), and that would truly be real-time.
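One way the synchronization suggested above could look (purely a sketch; the struct and names are invented, not Olive's actual architecture) is a condition variable that blocks playback until the renderer is at least one frame ahead:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical scheme: the render thread keeps `rendered` at least
// kLeadFrames ahead of the playback position; the playback thread blocks
// until the frame it needs exists, instead of reading stale/empty audio.
struct AudioAhead {
  std::mutex m;
  std::condition_variable cv;
  long rendered = 0;                      // frames produced by the renderer
  static constexpr long kLeadFrames = 1;  // how far ahead playback requires

  void FrameRendered() {                  // called by the audio render thread
    std::lock_guard<std::mutex> lk(m);
    ++rendered;
    cv.notify_all();
  }
  void WaitForFrame(long frame) {         // called by the playback thread
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return rendered >= frame + kLeadFrames; });
  }
};
```

With this, audio can never lag the play head; the trade-off is that playback itself stalls when the renderer falls behind, rather than playing stale samples.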
It has to be coded in; it isn't something that exists right now. But if you want to confirm it's definitely the waveforms, try setting this to false: https://github.com/olive-editor/olive/blob/master/app/render/previewautocacher.cpp#L37 It may be something other than the waveforms, especially since...
Believe it or not, it was the waveform. Setting that to false "fixed" the all-cores, all-time-high CPU usage when ripple deleting.
Your CPU is, if anything, significantly more powerful than mine. Additionally, waveforms would explain some of the jump in CPU, but not so much the lag in the UI. That might be something else and worth perfing.
Indeed. The UI lag is always there, I'll provide you with a perf update on this.
Also do you usually compile yourself? If so, it might be worth testing the AppImage too.
Yes, also because I maintain my own fork of Olive for some nodes that I need, remember? But I'll try it out just to see if anything changes performance-wise.
Here is the perf data (and flame graph): perf.zip
I was basically just selecting clips for most of the time, I hope it contains some useful insight.
BTW, I've tested the AppImage but unfortunately it doesn't make a difference.
Indeed. The UI lag is always there, I'll provide you with a perf update on this.
I've written more optimizations into 5f859895a86843213a69e5c6cb5363d63fe269c5 that should help with the UI lag.
Believe it or not, it was the waveform. Setting that to false "fixed" the all-cores, all-time-high CPU usage when ripple deleting.
I seem to remember planning to do some optimizations for ripple deleting but never getting around to it. I think you'll find that operations like "rippling" or using the Q and W hotkeys won't trigger a waveform render because they've been optimized, which should be done to the ripple delete too.
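The optimization described here might look something like the following: a ripple only shifts clips in time without changing their contents, so cached waveform peaks can be kept and only the timeline offsets updated (a hypothetical structure with invented names, not Olive's actual cache):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical waveform cache. RippleDelete() shifts the offsets of all
// clips after the deletion point and leaves the cached peaks untouched,
// so no waveform re-render is triggered.
struct WaveCache {
  std::map<int, long> offset;               // clip id -> timeline offset
  std::map<int, std::vector<float>> peaks;  // clip id -> cached peak data
  int rerenders = 0;                        // count of full waveform renders

  void RippleDelete(long at, long length) {
    for (auto& clip : offset) {
      if (clip.second > at) clip.second -= length;  // shift, keep peaks
    }
    // rerenders intentionally unchanged: nothing is regenerated
  }
};
```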
I've written more optimizations into 5f85989 that should help with the UI lag.
Yes, I can select multiple clips with ease now :)
I seem to remember planning to do some optimizations for ripple deleting but never getting around to it. I think you'll find that operations like "rippling" or using the Q and W hotkeys won't trigger a waveform render because they've been optimized, which should be done to the ripple delete too.
IDK, let me know if I can be of any help investigating this issue.
Latest commit implements a more robust audio backend (had been planning to rewrite that for some time), let me know if that fixes the audio dropouts.
As far as I can tell, the last thing is to finish optimizing the ripple delete function. Do you think that's fair? I think optimizing to reduce the amount of waveform re-renders is a better goal than implementing "less accurate waveforms", particularly since your hardware seems fairly capable.
Latest commit implements a more robust audio backend (had been planning to rewrite that for some time), let me know if that fixes the audio dropouts.
It's getting better and better! I did a brief test and it looks like the audio doesn't drop out anymore when playback happens within the green zone. It tends to stutter (and lip sync breaks) when the video cache is building near the play head (I'll test this thoroughly in the next few days and get you some hopefully useful info), but overall a great improvement, I would say :)
As far as I can tell, the last thing is to finish optimizing the ripple delete function. Do you think that's fair? I think optimizing to reduce the amount of waveform re-renders is a better goal than implementing "less accurate waveforms", particularly since your hardware seems fairly capable.
Yes, absolutely. I have a feeling that waveform re-rendering is sometimes triggered when it's not needed, so that would be a logical step to take, I think. This weekend I'm not home, but I will be able to test on a more common setup (an 8th-gen i5 laptop); I'm curious to see how long the current waveform generation will actually take on that hardware.
It's getting better and better! I did a brief test and it looks like the audio doesn't drop out anymore when playback happens within the green zone. It tends to stutter (and lip sync breaks) when the video cache is building near the play head (I'll test this thoroughly in the next few days and get you some hopefully useful info), but overall a great improvement, I would say :)
Try adjusting the audio playback interval here: https://github.com/olive-editor/olive/blob/master/app/widget/viewer/viewer.cpp#L53 I set it to 1/8 of a second for more realtime feedback, but that might be too extreme. Larger sizes fix the issue on my end.
Try adjusting the audio playback interval here: https://github.com/olive-editor/olive/blob/master/app/widget/viewer/viewer.cpp#L53 I set it to 1/8 of a second for more realtime feedback, but that might be too extreme. Larger sizes fix the issue on my end.
I can confirm doing this improves things (tested on 84de151693d00cc48123f5f50de14328846f6667). I agree that 1/4 of a second should be the way to go, but on my end the audio glitches (probably a buffer underrun, the one where there aren't enough samples) most times at the beginning of playback. Of course, setting this to 1/2 of a second "fixes it", but my hardware should be able to handle much more than that. If you need to test anything in that regard, just reach out to me, no problem.
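The trade-off being tuned here comes down to simple arithmetic (a generic sketch, not Olive's actual buffer code): the interval fixes how many samples each playback buffer holds, and a shorter interval gives lower latency for parameter changes but leaves the renderer less time to fill each buffer before the device drains it.

```cpp
#include <cassert>
#include <cstddef>

// Samples per playback buffer for a given interval. At 48 kHz stereo,
// 1/8 s = 12000 samples and 1/2 s = 48000; an underrun happens when the
// renderer can't produce the next buffer before the device finishes this one.
std::size_t BufferSizeSamples(double interval_secs, int sample_rate, int channels) {
  return static_cast<std::size_t>(interval_secs * sample_rate) * channels;
}
```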
20cf77d8f2a06a15c8e379c4c7e30a065e40067f is really useful. With e02f50becd285ab98dcaaf407081f67d04c6c06b and related commits we can be sure that no waveform re-drawing is happening when ripple deleting, right? There's something going on but I'm not sure what; I'll attach a video.
When the background cacher finishes, the job disappears, but if I play the sequence it reports activity again (maybe it's just a matter of removing it from the task manager?). And when I ripple delete, I still get all cores up at 100%, and I think that activity is being missed by the task manager.
No, I haven't done anything to optimize ripple deleting yet.
I see multiple ALSA underruns in the terminal output:
...
[DEBUG] No arenas, creating new... ((null):0)
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
ALSA lib pcm.c:8424:(snd_pcm_recover) underrun occurred
[DEBUG] olive::ProjectSaveTask(0x5a8b1c0) took 5 ((null):0)
...
Is it related to this issue, or should I create a separate issue?
Are you actually hearing a buffer underrun? If not, you can safely ignore those.
Are you actually hearing a buffer underrun?
Yes, I hear underruns during playback.
This is solved in 3cb9caa35d56312e26eff2c0b0321489ecd18f88
Commit Hash: 6d576a3e55c5c240b7b8290e8d00c7f6823c2d03
Platform: Arch Linux
Summary: I first noticed this when splitting and ripple deleting a normal audio+video clip: if I immediately play the track after such an operation, the audio being played is the old pre-cut audio (or nothing at all) for a few seconds.
Then I've imported an audio only track and simply moving it along the timeline produces the same result.
You can see each time I move the audio clip all the cores ramp up to 100% usage and during that time, if I press play I can hear the audio corresponding to the old track position, then it becomes silent or stuttering for a few more seconds before it goes back to normal and starts playing the actual track at the correct position.
https://user-images.githubusercontent.com/20294254/132726737-7b9ca925-bbcb-4dfc-9bc2-39c807ef1e8d.mov
Additional Information / Output: Notice how, when I move the audio track to the right, the track still plays as if I hadn't moved it at all for the first few seconds.