olive-editor / olive

Free open-source non-linear video editor
https://olivevideoeditor.org/
GNU General Public License v3.0

[Feature Request] Basic Mixing Controls Pan / Level #944

Closed Reaper10 closed 3 years ago

Reaper10 commented 5 years ago

An audio mixer window would be great for live sound recording. You could also use it for automating your volume or pan.

https://www.youtube.com/watch?v=xNw0Iw03F2M&t=728s (Adobe Premiere Pro CS6 audio mixer)

You could add JACK audio to send audio to and from DAW software: http://jackaudio.org/ https://github.com/jackaudio

ItsJustTee commented 5 years ago

This is a great idea

DaniSeeh commented 5 years ago

Not really sure how relevant this is to the goals of Olive. Wouldn't Ardour already accomplish all of this and more than whatever could be achieved in Olive alone? Good interchange with Ardour might be the better solution.

frink commented 5 years ago

@DaniSeeh interoperability with Ardour is something that I've been talking a lot about on here. Totally agree that needs to be a big priority. But a basic mixing interface is probably needed inside Olive too. If you have a lot of different files with audio it can get a bit much to handle. I would suggest something basic like volume, mute, solo, pan and probably submixes as noted in the video above...

As much as I don't want to add audio processing into this, I think it probably needs to be there in order to serve all of the common use cases of a pro video editor...

sobotka commented 5 years ago

+1 for horrible idea.

Interchange. Keep dysfunctional audio linkage way, way, way, way, far, way, timezones, light years away.

frink commented 5 years ago

Devil's Advocate: If Olive is mixing the audio anyway don't we want to expose it...? Not more than we have to...

My suggestion is that we allow a little bit of mixing in Olive but encourage the export/import loop for anything more than basic level adjustment. The audio stuff should be dead simple and push users to look for more power outside of Olive. Grouping could be per track, keeping the mixer deliberately basic, with just enough functionality to help someone realize that they need a DAW...

In export, I'd want every source in its own track, already piped into mix groups that match the tracks in Olive. The reason for keeping the audio in separate tracks in Ardour per source is so that an individual source can quickly be manipulated separately from the others. The reason to have the bused groups is to manipulate the audio of the group together. I would also want the work-in-progress video sequence showing in Ardour as a single video timeline, for use in both ADR and scoring...

Conclusion: We need a little more video in Ardour and a little bit more Audio in Olive to make things a perfect interchange for audio and video editors.

sonejostudios commented 5 years ago

IMHO Olive just needs one thing: Jack Transport! http://jackaudio.org/ So it will be synced with Ardour, allowing high-end mixing, editing, LV2/VST plugins, complex routing, automation, etc etc...

Ardour is so mighty! And with Jack Transport, it is even possible to sync to every compatible audio software, like Hydrogen, qTractor, Carla, Mixbus, etc etc...

(for now Shotcut is the only NLE with Jack Transport, and Olive definitely needs it!)

sobotka commented 5 years ago

It is supported in ShotCut. Go use it if it is that useful.

frink commented 5 years ago

A Jack Transport WOULD sidestep a lot of the audio issues we have been discussing. BUT it introduces new issues that are just as concerning. The ideal solution for a Sound Designer IS NOT the ideal solution for a Film Editor. Both think about audio very differently. We really need to figure out the project workflow between these two paradigms. I'll try to take a stab at mapping this over the next two weeks and see if we can't get a few deeper requirements.

I wish GitHub issues would allow for an Epic hierarchy. Handling interoperability really is a mammoth epic!

sobotka commented 5 years ago

A Jack Transport WOULD sidestep a lot of the audio issues we have been discussing.

spit take

Does Baselight support JACK? Resolve? Media Composer? Lightworks?

Beyond that, the plot is lost when an NLE adds JACK. Has anyone even googled how motion pictures are scored? Don’t answer that.

Also note, Olive is currently vapourware, so temper “ideas” against reality.

Adding JACK is an unfathomably stupid idea, and if it is that crucial to your mission critical works you have done, feel free to list them here. One work? How about a half of a piece? Post a GitHub repository of all of the links to the works that required JACK via an NLE and it will get crosslinked here. I’ll wait...

Finally, go spread the love. If JACK transport is mission critical for the work everyone is going to link to here, feel free to interchange with ShotCut.

To be clear:

frink commented 5 years ago

@sobotka I'm not disagreeing with you. But I am adding a perspective for the audio guys who are posting here without knowing how this stuff works... JACK could be used to sidestep the discussions on what audio to put into Olive. That would be a very different workflow than anything I've ever seen, keeping Olive focused on video and Ardour on audio. But while it COULD work, it is an approach that nobody on either side will likely ever use because it's soooooooo weird. (Think using Radium as your main DAW... Could be done, but why pull out your hair..?)

Thus, it probably is a stupid idea...

Has anyone even googled how motion pictures are scored? Don’t answer that.

That is my point too. JACK doesn't fit the workflow of an NLE...

I don't think adding JACK helps anything for Olive. It's a nice-to-have that nobody will probably use in practice because JACK is a beast for the uninitiated... (see my last comment) I'm trying to explain this to those coming from the open source audio world who may not understand the video workflow.

Having JACK transport is about the most epically stupid idea ever given the tech burden to upside.

I agree. While JACK is really easy to add, it is NOT the right solution for any NLE...

For film scoring, XJadeo makes more sense.

For other solutions, a traditional export workflow makes more sense than some hybrid weirdness. Some might think that a JACK Transport should be used for "tracking" video along with audio when shooting a live event where the audio is the prime component of the production and video is secondary. But live multi-tracking of video and audio has always been done by two separate systems with two separate editors and two separate approval workflows. If you want to, OBS can be connected to a JACK transport for these weird hybrid workflows. (But why...?)

Bottom line: These scenarios are completely out of scope for Olive...

Having some rudimentary audio controls is a no-brainer for discussion. That’s mandatory.

There have been enough weird issues about how audio works that I believe some diagram explaining the expected use cases of audio is necessary to keep the issues feed free from endless dialog about audio in the NLE. There have been over half a dozen legitimate use cases for audio brought up in the discussion over the last few months (I've been in a lot of them...) and I think it would be good to summarize what has been discussed so we don't keep rehashing.

As I see it, there are a few legitimate things that can be useful for audio in the NLE:

Avoiding the Need for a Mixer Window: It occurs to me as I write this that a better way to handle audio would be to bus it to several stems which could be quickly output, rather than generating huge 147-track projects for the editors who add basic Foley to their scene with filler music and dialog. This is the main reason for the mute/solo/pan that I've suggested... (In the traditional workflow the dialog is output to the left and filler music to the right for the composition reference track...)

Based on this here's an interface suggestion:

This is rough and off the top of my head...

My thought is that the NLE needs to stay editor centric but that many of the complex mixing operations need to be available. If we keep the keyframe paradigm for automation then it makes more sense to keep all audio functions within the same paradigm as well.
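
The keyframe idea above can be made concrete with a tiny sketch. This is purely illustrative (the `gain_at` helper is hypothetical, and linear easing is an assumption; Olive's actual curves may differ): volume automation reduces to interpolating over (time, gain_dB) keyframes.

```python
import bisect

def gain_at(keyframes, t):
    """Interpolate a gain (dB) from (time, gain_db) keyframes.

    Hypothetical sketch: assumes keyframes are sorted by time and
    uses linear easing between them; values clamp at the ends.
    """
    times = [k[0] for k in keyframes]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return keyframes[0][1]       # before the first keyframe
    if i == len(keyframes):
        return keyframes[-1][1]      # after the last keyframe
    (t0, g0), (t1, g1) = keyframes[i - 1], keyframes[i]
    return g0 + (g1 - g0) * (t - t0) / (t1 - t0)

# e.g. a fade from 0 dB at t=0 to -12 dB at t=2 gives -6 dB at t=1
```

The point is that every audio control expressed this way inherits the editor's existing keyframe UI for free.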

So this feature, the Mixer, could very well live in the timeline window instead of a separate mixer.

What do you think?

unfa commented 5 years ago

(incoming video response)

sonejostudios commented 5 years ago

hey guys, hey sobotka, I really don't understand what is so wrong about having the possibility to sync the timeline of Olive with the timeline of Ardour ( via Jack, aka. Jack Transport). I'm not talking about Jack audio output (even if I would like it though). Jack Transport is a really really small feature with huge possibilities, and if you don't like it, just don't use it.

I studied film and I've been running my own recording studio for 8 years, so I have a lot to do with film and (music) recording, editing and mixing. Besides doing proper audio mixing/mastering of the video's audio tracks in Ardour, I'd like to show you a use case I see very often.

I have a video/audio recording of a whole live concert (1h) of a band, with e.g. 5 cameras and 25 audio tracks. The video tracks need to be edited, the audio tracks need to be mixed and mastered. Now the band needs to do some cutting between the songs, and also inside the songs, on time, and make some corrections to the arrangements. The only professional way I see here is to work with a professional video editor on the video side and with a professional DAW on the audio side, with a synchronized timeline, so one can cut and move things on both sides at exactly the same point in both timelines. Especially because Ardour offers a bpm snapping grid, so the cutting/editing/moving of audio AND video can be done on the beats. Every other method here is a pain, trust me! After the editing (on both sides) is done and the audio tracks are mixed properly, the only thing left to do is the audio master, which is then re-imported into the NLE and rendered together.

There are also a lot of other use cases, like writing a song (with ardour) for a video, which needs to be re-edited on the beats after the song is finished. Or simply because the pictures and the arrangement/composition of the song need to be decided together.

sobotka commented 5 years ago

I studied film and I've been running my own recording studio for 8 years, so I have a lot to do with film and (music) recording, editing and mixing.

Great to have you around.

The only professional way I see here is to work with a professional video editor on the video side and with a professional DAW on the audio side, with a synchronized timeline, so one can cut and move things on both side at exactly the same point in both timelines

Because your edit should be completed. Aka there's no reason to be in the editor at this point; if you need to do a live recording, you are scoring against a timecoded striped video that has already completed editorial.

If you are simply touching up and remixing etc., the audio is already picture-locked, and you are free to remix and simply conform the new audio against the finished edit.

There are also a lot of other use cases, like writing a song (with ardour) for a video, which needs to be re-edited on the beats after the song is finished.

You don't work a music video until you have a locked soundtrack. Even if you did, you'd rework the cuts to the music, not the other way around. If for some strange reason you had to rework the sound to the cut, you'd lock the cut and score accordingly. In neither case are you flip flopping with a live picture editorial.

Everything other method here is a pain, trust me!

You'd have an overwhelming body of evidence to overcome to support the claim, as the process has been around since the very early days of cinema.

Make sense?

unfa commented 5 years ago

JACK Transport is not a way to share timelines between software - it's only a way to make sure their playheads are in the same spot at all times.

You still have to do all your editing twice if you want to have the timelines in sync.

I guess a real solution would be OTIO support in Olive and Ardour.

But aside from that - all you actually need is multitrack audio export in 32-bit float from Olive - after you've done your cuts you can mix it in Ardour (or any other DAW) and finally - mux the audio with the video.
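
For context on how small that export step really is, here is a rough stdlib-only sketch of writing one stem as a 32-bit float WAV (format tag 3, IEEE float). The `write_float32_wav` helper and the per-stem sample data are hypothetical, not Olive code:

```python
import struct

def write_float32_wav(path, samples, sample_rate=48000):
    """Write a mono 32-bit float WAV (fmt tag 3 = WAVE_FORMAT_IEEE_FLOAT)."""
    data = b"".join(struct.pack("<f", s) for s in samples)
    with open(path, "wb") as f:
        f.write(b"RIFF" + struct.pack("<I", 36 + len(data)) + b"WAVE")
        f.write(b"fmt " + struct.pack(
            "<IHHIIHH",
            16,                # fmt chunk size
            3,                 # IEEE float, not PCM
            1,                 # channels (one mono stem per file)
            sample_rate,
            sample_rate * 4,   # byte rate: mono * 4 bytes/sample
            4,                 # block align
            32))               # bits per sample
        f.write(b"data" + struct.pack("<I", len(data)) + data)

# one file per track/stem, e.g.:
# write_float32_wav("dialog_stem.wav", dialog_samples)
```

A real exporter would stream from the timeline rather than hold samples in memory, but the container itself is this simple.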

JACK Transport doesn't help with editing at all - it will only help you with playback. This might be helpful if you're done with your editing in Olive and just want to mix audio as you play with your color correction and/or compositing - but why would you split your attention between the two, needlessly making yourself fail on both ends? Do each step separately for best results.

I've tried working on animation and audio in Blender + Ardour with JACK sync. It's a gimmick, it's not a good idea. It's way easier, faster and less error-prone to do these steps separately.

Maybe for some quick sketching - but for quick sketching you can even record voice-over (and even SFX) in Olive, so why would you need Ardour for that?

Here I ramble on about this in detail:

https://youtu.be/9G6ZRPJs0SY

frink commented 5 years ago

JACK Transport is definitely unwise for the reasons @sobotka and @unfa have mentioned.

...all you actually need is multitrack audio export in 32-bit float from Olive - after you've done your cuts then you can mix it in Ardour (or any other DAW) and finally - mux the audio with video.

Yes, this keeps us within the industry-accepted workflow... Should the need arise to iterate on audio production, you should be able to import a mix into the timeline and mute everything else. A mute/solo facility becomes important for this use case. Grouping audio is also helpful if you are importing only the Foley or ADR comp.

We need to keep the mixer out of our NLE... Audio editing in the NLE is not the right approach for productivity. It is better to have modular workflow with great import and export among specialized tools. This is already the industry standard practice. That means we really need to think carefully about the import, export and proxy facilities we provide.

OpenTimelineIO may not be the right approach... Video people and audio people look at timelines differently. Audio guys are trying to be sample-accurate while video guys are happy matching 24-48 fps. Besides, we don't know where OpenTimelineIO is going anyway. If Olive exports stems from grouped audio tracks, that will be preferable for audio folks. They love stems!!

I suggest we provide basic audio manipulation in the timeline without a dedicated mixer window. It will be much more familiar to our user base and also encourage interoperability with external DAWs without reinventing an inferior wheel inside our software. This will create the most focused experience for pro users... - That's our goal. Right?

sobotka commented 5 years ago

OpenTimelineIO may not be the right approach...

An interchange gives you a small text file or file encoding, from which you can recover the latest, highest quality assets required, including iterations thereof, without any exporting, any rendering, etc.

This isn't rocket science.

frink commented 5 years ago

@sobotka just like you were telling me to wake up and see the filmmaker's workflow before, I'm going to say the same thing here with the audio workflow...

An interchange gives you a small text file or file encoding, from which you can recover the latest, highest quality assets required, including iterations thereof, without any exporting, any rendering, etc.

In theory, an interchange could work. But it's not standard and you're likely to annoy more people than you satisfy. Audio guys are more flexible about their workflow than video editors, but they are fiercely loyal to their tools. In order to sell this new OTIO interchange approach to the audio guys you need to support the favorite DAWs of both sound designers and composers.

How many DAWs support OpenTimelineIO right now? - It's probably premature to go down that road...

Going back to the classic way it's been done... If we're going to try to support the workflow of the industry at present, we need to go with stubs... For Foley and music they either want a stereo file hard-panned with temp music on the left and dialog on the right, or two separate files, and occasionally a third track with background noise and temporary sound etc...

The only guy that MIGHT want all the clips individually is the dialog editor... Some engineers may want individual takes to comp the dialog themselves - but that is rare. Most often, the scene editor is the one choosing the dialog comp. If the audio guy ends up wanting individual files, they most often want one stub per actor and maybe an atmosphere track if it was recorded. Most lines end up as ADR anyway, so the audio guy doesn't care too much about it. It's either ADR or they use the rough comp for dialog almost every time.

The final mixdown of dialog, Foley, music and atmosphere often cannot be done by one system. Often three Pro Tools rigs are synced to the final cut via SMPTE, and 3-7 different people man the mixer while the audio head honcho (the title is different for every film) makes notes of changes that need to be made for the final mix. When they have a rough mix, or more often periodically throughout the process, the director comes in to preview the whole thing...

With film teams getting smaller, the audio guy does more with less. A movie often has several hundred tracks. One guy and one computer just can't wrangle all of that at once... Submixes become very important... "Mixdown early and mixdown often" becomes the de facto motto. This basically means that for smaller films, they are going to either redo everything with ADR or process the dialog comp and take what they can get with a quick shellac...

All that to say: Audio people really just want the stubs 95% of the time...

OpenTimelineIO is probably not the best approach from audio guys' perspective...

sobotka commented 5 years ago

How many DAWs support OpenTimelineIO right now? - It's probably premature to go down that road...

https://www.soundonsound.com/techniques/conforming-re-conforming-pro-tools https://www.pro-tools-expert.com/home-page/2016/7/1/pro-tools-conforming-and-re-conforming-in-audio-post-production-part-1 https://www.pro-tools-expert.com/home-page/2012/5/23/the-question-of-conforming-with-pro-tools.html

https://www.synchroarts.com/products/titan/overview http://www.thecargocult.nz/conformalizer.shtml http://www.virtualkaty.com/ https://www.soundsinsync.com/products/ediload

OpenTimelineIO is probably not the best approach from audio guys' perspective...


https://opentimelineio.readthedocs.io/en/latest/tutorials/adapters.html

frink commented 5 years ago

So what do you think the workflow SHOULD be?

sobotka commented 5 years ago

So what do you think the workflow SHOULD be?

It's not about me.

You didn't click on any of the links, did you?

frink commented 5 years ago

@sobotka - You've got several excellent articles there about conforming / reassembling... It's a huge problem. OTIO should be able to solve this easily once implemented in several DAWs. I was the one who initially suggested we get Ardour to embrace this and I still think we should. However, we have to stay grounded in the reality of how things work now.

How many DAWs support OpenTimelineIO right now?

Pro Tools, Adobe Audition, Logic Pro, Reaper, Cakewalk, Nuendo, Cubase, Bitwig Studio, Reason, Tracktion, Ardour / Mixbus, Ableton Live, Renoise, Sound Forge, Studio One

None supports OTIO currently. It's just premature to call OTIO a usable standard for audio. (Granted, for video coloring and compositing it makes a ton of sense... But not for audio... yet.) Until we have support for OTIO in at least 3 of the major DAWs, it's not really going to be used much by audio engineers. That's the sad reality, even though we actually needed it yesterday...

You keep using that word...

You yourself said that we should consider the film timeline locked BEFORE it goes to audio. It would be awesome if there wasn't so much conforming that needs to be done. The whole reason that conforming exists is because the edit is NOT locked. This is why I believe that eventually OTIO or something like it will be extremely useful to the dialog editor in film.

For music and foley OTIO is not the right tool anyway... Stubs are simpler to transfer even in the case of conforming. I really love the concept of OTIO and I think it solves a lot of common cases that are real gotchas at the moment. But I don't even expect Ardour to consider implementing OTIO until 6.0 is out the door. And besides, if we are truly going to be useful we have to work with more than just Ardour. Pro Tools, Cubase, Logic and Audition are the others I'd expect to see in the music industry which we need to trade files with...

The simplest approach to conforming / reassembling is to go back to the way things were done on film and tape back in the day: stubs and a cue sheet, with SMPTE or frame count as the timecode that we use to keep everything in sync. While OTIO would be a better choice, it only becomes that better choice if/when it is adopted by several DAWs. That's the dichotomy...

Anyway, I think we have exhausted the debate on an audio mixer window. Whether we agree on OTIO being integrated into the audio workflow or not doesn't really change this feature or the needs.

Each clip needs: Mute / Solo / Volume / Pan as well as grouping for quick editing.
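
To show how little code those per-clip controls actually imply, here is one possible reduction of mute/solo/level/pan to a left/right gain pair. Everything here is a hypothetical sketch: the `clip_gains` helper is invented for illustration, and the constant-power pan law is an assumed choice, not necessarily what Olive would use.

```python
import math

def clip_gains(level_db, pan, mute=False, any_solo=False, solo=False):
    """Per-clip mixer state -> (left, right) linear gains.

    pan is in [-1, 1] (left to right); constant-power pan law
    (assumed here) keeps perceived loudness steady across the pan.
    """
    # A muted clip, or a non-soloed clip while any solo is active,
    # contributes nothing to the mix.
    if mute or (any_solo and not solo):
        return 0.0, 0.0
    gain = 10.0 ** (level_db / 20.0)        # dB -> linear amplitude
    angle = (pan + 1.0) * math.pi / 4.0     # map [-1, 1] -> [0, pi/2]
    return gain * math.cos(angle), gain * math.sin(angle)
```

With level and pan expressed this way, automating them is just keyframing two scalar parameters, which fits the timeline-centric design being argued for.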

Editors need a button to help them hear dialog clearly in bad recordings... Maybe libfilteraudio could work... I just came across that one today while reviewing uTox... This needs to be non-destructive and switchable, and perhaps not even rendered on export of audio/video.

One other workflow we should consider is Pomplamoose-style video songs... Jack Conte is the one who originated the idea. The actual editing of these types of videos is among the more nail-biting to get right. Allowing the concept of musical time as a snap-to grid would speed this up in a huge way, because there the audio is set before the video editing begins. That's definitely an odd case...

sobotka commented 5 years ago

sigh

You didn’t click the last link, did you?

Again, you keep using the name OpenTimelineIO, but I’m quite certain you don’t understand what it is nor what it does.

I’ll leave it at that.

frink commented 5 years ago

You didn’t click the last link, did you?

I notice that there is not one DAW in that list of NLEs in the last link...

What am I missing here? Instead of assuming what I did and didn't do it might be more helpful if you explain what you are thinking. At least that way I have a chance to comprehend things from your perspective...

sobotka commented 5 years ago

If you click the last link, you’ll see that it is a reference to what OTIO provides as adapters. That is, while OTIO is indeed an (evolving) interchange standard that has a hope of becoming a contemporary interchange unto itself, it also supports adapters that interact with the OTIO schema.

That is, OTIO supports interchange of every single adapter on that list, making it somewhat of a Rosetta Stone of interchange formats.

Of particular note is the support for the ancient CMX3600 EDL format, which can be considered the great grandmother of all interchange formats. When you hear “EDL”, it typically references the CMX3600 format in some way.
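
For readers who haven't seen one, a CMX3600 event line packs an entire cut into fixed fields (event number, reel, track, transition, then source and record in/out timecodes), which is part of why it survives as the lowest common denominator. A rough sketch of pulling those fields out (real EDLs also carry comments, clip-name notes, and drop-frame variants that this ignores):

```python
import re

# HH:MM:SS:FF (or HH;MM;SS;FF-style drop-frame separators on the frames)
TC = r"\d{2}:\d{2}:\d{2}[:;]\d{2}"
EVENT = re.compile(
    rf"^(\d+)\s+(\S+)\s+(\S+)\s+(\S+)\s+({TC})\s+({TC})\s+({TC})\s+({TC})")

def parse_event(line):
    """Parse one CMX3600-style event line; return None for notes/comments."""
    m = EVENT.match(line)
    if not m:
        return None
    num, reel, track, trans, src_in, src_out, rec_in, rec_out = m.groups()
    return {"event": int(num), "reel": reel, "track": track,
            "transition": trans,
            "source": (src_in, src_out), "record": (rec_in, rec_out)}
```

The simplicity of the format is exactly the point being made: a conforming pipeline only needs these eight fields to reconstruct a cut list.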

OTIO brings support for every (beyond shareware) DAW conforming pipeline on the planet via the adapters. That is, it makes a plausible path possible immediately after it is integrated.

This is a single person developing something. Think about that[1].

Look at the army of issues that are Feature Requests. Which ones make sense? Which ones are absurd? How does one decide? This is one developer versus an army of ”good ideas.”

To this end, reframe the question of features down to “What is the shortest possible line between two design points for a specific audience, that a single developer has a hope in hell of executing?”[2]

While we could indeed spiral down rabbit holes, perhaps to something like the unwieldy OMF / AAF SDK[3], OTIO remains the most immediate method to bring in substantial interchange opportunity in the shortest possible (no pun intended) timeline. That means investment of energy to result is huge. Specifically, leaning on the tried and trusted interchange approach via OTIO would enable short to long form independent work to handle audio at the highest possible quality level utilizing external tools. As a development cost, this seems reasonable. Remember too who develops OTIO and uses it in production. I’d say that’s not a bad qualia baseline if they can make it work.

As just another rando, clueless dimwit, there’s less than 2 pennies. (Given that if every opinion that landed in this tracker was worth 2 pennies, there would be enough cash to buy a cup of coffee... Maybe.)

[1] Remember, the master branch still doesn’t let you even cut a frame of footage yet!

[2] Olive certainly could do far better at making the core design audience more clear, and perhaps it would go a long way to being able to knock down many of the “features” listed in the tracker. Plenty of opinions, but how many folks have actually produced short or medium length works, let alone long form?

[3] I would encourage anyone who believes that AAF / OMF would be anywhere even remotely close to the same time investment as OTIO to have a look at the AAF/OMF wrapper SDK. It’s a monster, and is quite a nightmare to even compile, let alone get it to work. Unsurprisingly, if you search through the many conforming threads, you’ll likely see that the EDL approach is consistently chosen over some of the other options.

frink commented 5 years ago

@sobotka Thank you much!

I think I understand a little better why we are talking apples and oranges...

I agree with all three of your points:

  1. We need to keep a narrow focus to help development happen. Enthusiasm about good ideas can bog us down. Good things perish for lack of vision etc. Whatever our focus is we need to aim ONLY for that goal right now while the project is getting off the ground.

  2. We are making a Professional NLE. We need to engross ourselves in the mindset of industry professionals and humbly limit ourselves to the best accepted approach for their workflow. The more we can define this for our users (many of whom will have never produced even a short film...) the better Olive will meet our goal of building a professional editor.

  3. We should avoid spending time on AAF or OMF... When I speak of "stubs" I'm speaking of straight *.wav files at some sample rate, either stereo, mono or multichannel (5.1, 11.2, ambisonic or something). This is the de facto standard in the audio industry when interchanging files between professionals. OTIO may provide more bang, but it's not implemented for the popular DAWs yet. Adapters can and should be written. But that is NOT where we should spend our energy. We have more important fish to fry. (back to your first point...)

The suggestions above concerning an audio mixer window all revolve around the fact that video editors want to stay in the timeline window and do things with keyframes. This is where Olive really shines. The only audio facilities we need to provide are basic mute, solo, volume, panning, grouping, and temporary audio enhancement when editing poorly captured dialog... We both agree on this. No need to say more.

The ONLY thing that you, @sobotka, still seem to be arguing here is the usefulness of OTIO. Which I agree is extremely useful for video and will likely become useful for audio in the future. Sadly, audio professionals have never used OTIO, so we cannot present it as a currently viable solution for audio timeline interchange YET.

This is why we should revert to the de facto standard of the audio industry: stereo WAV files exported from the audio timeline. I'm suggesting we go one step farther and export multiple files based on groups, rather than just a stereo file with dialog on the left and temp music on the right. Modern DAWs can handle this more easily anyway. This will use code already in Olive, since we have to render a mixdown anyway.

To be clear, I still think OTIO should be added. But I don't see how it can work for any audio workflow in its current state without a massive effort from multiple agencies. (Most file formats for DAWs are proprietary and have not been completely reverse-engineered...) Let's pick the battles we can win right now.

P.S. - Perhaps we should close this thread and leave the others that deal with the different features that we have been discussing. This thread has gotten WAY off topic! @DaniSeeh, what do you think?

sobotka commented 5 years ago

We are making a Professional NLE.

At risk of nitpicking, there are two things that make me cringe here.

The first is the "We". I am not doing anything but participating in the social network known as "Open Source / Libre Software" issue tracker. There is no "we" making anything. I've hammered on a color manager class a bit, to try and compartmentalize the management component so that refreshes etc. happen, but I've shelved it for the time being until Matt gets other things in place. Matt's doing all of this. None of it is me, therefore the "we" here is goofy. There's some design work that I've had opinions on, sure. Ultimately though, the final arbiter is Matt, and therefore, "we" is uh... gross.

Second is that godforsaken word "professional." I loathe that term. Professional means literally "doing something as a profession." If anyone is delusional enough to set that as a goal for Olive, that is to have professional editors using it, give up now. It's a hopeless waste of time. No, "professional" is yet another one of those garbage terms that skulks around pandering to idiots. It's marketing garbage.

So how to avoid "professional" and in fact facilitate designing something that one developer can hammer out? Good question. I don't have any answers. I suspect that 95% of "design" here is dictated by an audience. That is, if you crawl over this horrible tracker laden with "features", you'll see that the features are simultaneously awesome and absolutely crap depending on the relative vantage of the audience. So who is the audience here? I have my internal ideas. Everyone else has theirs. Ultimately all of the opinions suck, and it's relative to Matt. So be it.

We need to engross ourselves in the mindset of industry professionals and humbly limit ourselves to the best accepted approach for their workflow.

Completely, utterly, 110% disagree. This is mimesis and bullshit.

What I do agree with is learning from the people who have had the shovels out and already do / have done the things that say, a small independent filmmaker might want to do. That's a huge surface if you think about what is involved in pushing out a bunch of moving images with audio. There is a metric shit tonne of experience to be gleaned from existing workflows, at the upper end of complexity, that can be harnessed by the lower end. Some of those features can make or break that independent person's massive effort. Does it risk complexity and shifting the design audience? Absolutely.

At the most basic level, if the goal is "Make something that shits out pictures and audio glued together" then guess what, Windows Movie Maker works for that. So does iMovie. They are terrific tools.

(many of whom will never produced even a short film...)

Not that I matter at all, but that feels like one of the worst audiences ever to consider; such an individual has so much to learn about moving picture work that by the time they've passed the bare minimum experience level, what they thought they knew has long since expired. I'd suggest Movie Maker, or iMovie, or KDEnlive, or ShotCut, or OpenShot, or FlowBlade, or PiTiVi, or Avidemux, or Cinelerra, or any of the endless things that are all virtually identical on this front. That audience is well served even by YouTube Creator or whatever it is called.

[...] I'm speaking of straight *.wav files at some sample rate either in stereo [...]

Congratulations, you've just spent how many posts describing rendering from an NLE. I'll flip this around and suggest that you try to find an NLE that doesn't let you render to still frames and WAV. Zero, right?

Sadly, audio professionals have never used OTIO, so we cannot present it as a currently viable solution for audio timeline interchange YET.

You have a serious problem with hitting comment and mashing the keyboard before reading. Apologies to sound harsh, but go back and read each and every word of my last post regarding adapters. Read the documentation linked. You've missed the point so many times it's driving me to distraction. Go read it. Read it again. Read it until you realize exactly what OTIO is and does. Also realize that it's used in production at fucking Pixar of all places. Do you think they have conforming issues or do you think they have secret in-house, ground up written-for-OTIO-DAWs, and run their own OTIO-based operating systems? Or maybe, just maybe, OTIO shims in and solves some particular set of production needs at Pixar? Maybe releasing it under an open source licence hints that it might have something to do with something that someone else might find rather useful?

It's exhausting reading you repeatedly typing about how something isn't something when you have clearly failed to grasp what it is and what it does.

TL;DR Rando Opinion

| Design Feature | Version 1.0? |
| --- | --- |
| Pan / Level | Reasonable |
| JACK | Dumb as Dirt |
| Interchange via EDL | Reasonable |
| OMF / AAF | Unreasonable |
frink commented 5 years ago

From @sobotka ...that godforsaken word "professional." I loathe that term.

From olivevideoeditor.org: Olive is a free non-linear video editor aiming to provide a fully-featured alternative to high-end professional video editing software.

That's our aim. Get over it!

TL;DR Rando Opinion... [table]

So we agree. Good. Moving on...

I'm going to add more comments on OTIO in #310 where this discussion should have happened so that we actually add something useful to the mix. Sorry for allowing the hijack...

For the love of Olive please close this thread!!!

musaire commented 5 years ago
* **Normalization.** A default light **compressor/limiter** that "normalizes" captured audio so that an editor can do their work without futzing around with audio levels. 

Hi everyone, I'm excited to see Olive being developed!!! :)

I have mixed/produced some music in DAWs and edited music videos (not for big-selling artists) and advertisements for campaigns. I also did camera work for those, and have played musical instruments.

My favorite for editing has been Sony Vegas Pro because of its intuitive track and item functionality for both audio and video (fade-in/fade-out buttons etc.). I loved the ability to zoom in to sample scale on audio when I used audio envelopes to manually ride volume - not for music, usually for speech/effect sync. For a DAW I use Reaper, but I would run some VSTs in Vegas too. For color, my favorite is obviously DaVinci Resolve for its selection of curves, masking and keying, tracking, videoscopes, and overall grading/correcting functionality (I haven't used a panel). Fusion in Resolve is fun and opens up lots of possibilities, but I did my simpler 3D work and compositing in Vegas. DaVinci Resolve is too heavy to use on the go on a laptop, at least for me. I hope Olive is going to be easier on the GPU.

As I would use a DAW anyway, I wouldn't worry too much about audio. Basic things only. Currently I like both the old Vegas and the new Resolve for what can be done with audio (we wouldn't need that much at the beginning; it shouldn't be the priority). I haven't tried Olive out yet, but I would expect that initially I could at least edit audio gain in the graph editor window (spline editor or whatever it is) to adjust gain with keyframes and beziers. I like the keyframe functionality in both Vegas and Resolve, but from what I've seen I like Olive's graph editor for keyframes more than Vegas's - it's bigger. I hate small graphical controls where I need precision - the worst is the curves in Vegas, so small that even a 32-inch display won't help. Please Matt, design every single curve editor enormous!! :) I want precision in all the luma, CMYK, and HSL curves, and the sat vs sat, sat vs hue, sat vs lum curves!!! In Vegas, ffs, my hand shakes and 10% of the absolute value is changed...

I wouldn't expect a "normalization" feature to compress or limit my audio signals. I'm sure the developer knows this, but just in case, I had to answer this. :) Normalization should just adjust the gain based on signal peaks or RMS values, not mess with the dynamics.
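To make the distinction concrete, here is a minimal pure-Python sketch of what "normalization as gain only" means (this is illustrative pseudocode for the discussion, not Olive code; function names and target values are made up for the example). One constant gain factor is computed from the peak or RMS level and applied uniformly, so the relative dynamics of the signal are untouched:

```python
import math

def peak_normalize(samples, target_peak=1.0):
    """Scale all samples by ONE constant so the loudest peak hits target_peak.

    Because a single gain factor is applied uniformly, the ratio between any
    two samples is preserved -- dynamics are untouched.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    gain = target_peak / peak
    return [s * gain for s in samples]

def rms_normalize(samples, target_rms=0.1):
    """Scale all samples by one constant so their RMS level matches target_rms."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return list(samples)
    gain = target_rms / rms
    return [s * gain for s in samples]
```

Either way it is just a multiply; nothing level-dependent happens per sample, which is exactly why it cannot "mess up the dynamics" the way a compressor does.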

frink commented 5 years ago

I wouldn't expect a "normalization" feature to compress or limit my audio signals. I'm sure the developer knows this, but just in case, I had to answer this. :) Normalization should just adjust the gain based on signal peaks or RMS values, not mess with the dynamics.

@musaire - I agree that we SHOULD NEVER print dynamics before we send things to the DAW. The normalization thing I was talking about is for poor dialog recording, not music. See #262 for more...

Thanks for sharing your thoughts! :-D

musaire commented 5 years ago

The normalization thing I was talking about is for poor dialog recording not music. See #262 for more...

Oh, looking at that thread, half of the responses don't realize that normalization has nothing to do with compression. It applies to speech as well as music - any sound. It's evening out the gain level between clips or relative to target values. I agree, a compression tool is also needed. There is built-in compression and EQ in Vegas, for example, as a comparison, and compression VSTs can be used as well. These are the most essential for audio besides gain. You can select a normalize "effect" under right-click on the audio track, but it's not essential because it is just gain manipulation. Compression is more important than normalization; the latter can be done manually to some extent. Normalization effects/VSTs make sense when they can be applied to an entire track, not just a clip - I'm not sure at what levels of the hierarchy effects can be applied in Olive atm. In Vegas, we can add effects at the clip level, the track level, and the whole-project level. Not sure about Resolve audio, but video has clip and timeline effects; I think audio FX can go on tracks and clips, and surely on buses.
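For contrast with normalization's single uniform gain, here is a deliberately naive sketch of what a compressor does (again illustrative only - no attack/release smoothing, no envelope follower, and the threshold/ratio defaults are arbitrary). The key difference is that the gain applied varies per sample, so the dynamic range itself changes:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Toy hard-knee compressor: levels above `threshold` are reduced by `ratio`.

    Unlike normalization, the gain is level-dependent -- loud samples are
    attenuated more than quiet ones, which is what alters the dynamics.
    """
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # Amount over the threshold only passes through at 1/ratio.
            reduced = threshold + (level - threshold) / ratio
            out.append(reduced if s >= 0 else -reduced)
        else:
            out.append(s)  # below threshold: untouched
    return out
```

A real compressor smooths the gain changes over time (attack/release) and usually works on a detected envelope rather than raw sample values, but the per-level gain curve above is the core of why the thread keeps insisting these two tools must not be conflated.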

frink commented 5 years ago

VST work has been done, but there is quite a bit more to do. LV2 support is planned on Linux.

My thought is to have a foolproof "fix dialog" button for the uneducated who just need OK-sounding dialog while they get the cut right. In most film situations ADR will be done later; in low/no-budget films this "fix dialog" button may actually be the finished audio. My thought in using communication-grade noise cancelling is to encourage refining audio in the DAW (the right way to make audio sound good...) while still allowing rapid cutting of the scene without great audio, since words are the major cadence of dialog film cuts in most instances...
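As a very rough illustration of the "good enough for cutting, not for delivery" idea, the crudest building block of dialog cleanup is a noise gate - attenuate everything below a threshold so room noise between words drops away. Real communication-grade noise cancelling works spectrally and adaptively; this sketch is nowhere near that, and every name and threshold here is invented for the example:

```python
def noise_gate(samples, open_threshold=0.05, floor_gain=0.1):
    """Crude per-sample noise gate (illustration only).

    Samples at or above `open_threshold` pass through untouched; quieter
    samples (presumed room noise between words) are attenuated by
    `floor_gain`. A usable gate would track an envelope with attack/hold/
    release times instead of switching per sample.
    """
    return [s if abs(s) >= open_threshold else s * floor_gain
            for s in samples]
```

The point of the sketch is only that a one-button "fix dialog" can be a fixed chain of simple, conservative processors - enough to make words intelligible for editing, and deliberately not a substitute for finishing the audio in a DAW.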

It's definitely a unique use case. But common enough to suggest a unique workflow... :-D