OsaAjani opened this issue 1 year ago
Won't have time to write more before the weekend but in a nutshell:

- I'm not sure I like that a function called with_effect can accept a list as well as an effect. I would rather have it called with_effects, and if you only have a single effect you pass a single-element list. I tend to prefer having a single type of input now.
- Regarding your idea to do more in __init__(), I would say avoid having logic inside __init__(). Yes, it is a lot of boilerplate to be using init just to store parameters, but it's clean to think that an effect is just a description of specs until it is applied to a clip.
- Idea which can be for later: with this new API you can probably get rid of clip.afx (which was for audio effects) and have all effects be applied via clip.with_effects([BlackAndWhite(), VolumeX(), WithEcho(), etc.])

As you said, there is a risk of scope creep in the work towards v2.0. What I would suggest is you do as much as you feel. When you have something you're happy with, submit an MR and let's push it. In the spirit of "shipped is better than perfect", we could even "launch" v2.0 without a fully definitive API, and consider v2.1 the first stable version of v2.
I'll shorten this issue's name so it's easier to follow in emails.
Hi @Zulko, if you just have the time for this one question: which of these two solutions would you prefer?

1. Automatically add all effects as methods of VideoClip and AudioClip at runtime.
2. Manually add only a few essential effects as methods of VideoClip and AudioClip.

I would probably advocate for 2 if we only have 3-5 essentials; for more, I would go for 1.
If 2, what would you consider to be the absolutely essential effects in the current list? resize and crop, I guess, anything else?
In both cases, what should be the name of those methods? I have 3 suggestions:

A. No prefix (for example, effect MultiplySpeed would be accessible as clip.multiply_speed).
B. with_ (for example, effect MultiplySpeed would be accessible as clip.with_multiply_speed).
C. fx_ (for example, effect MultiplySpeed would be accessible as clip.with_fx_multiply_speed).

If 2, I would personally advocate for going with B.
If 1, I would prefer C, maybe even with_fx, as I would prefer to keep a clear distinction between automagically added filter methods and real methods.
> I'm not sure I like that a function called with_effect can accept a list as well as an effect. I would rather have it called with_effects, and if you only have a single effect you pass a single-element list. I tend to prefer having a single type of input now.
Yeah, kind of unhappy with the singular nature of the name too, but after using it (I had to write quite a few informal tests to see if my migration of effects worked as expected), I feel like supporting a list as well as a single effect is really beneficial in terms of readability, both when calling one effect and when calling multiple.
Honestly, supporting both is very easy, and because I do type hinting every time I can, it makes it ultra clear for the user and the IDEs.
This is how it's done:
from typing import List, Union

def with_effect(self, effects: Union['Effect', List['Effect']]):
    """Return a copy of the current clip with the effects applied

    >>> new_clip = clip.with_effect(vfx.Resize(0.2, method="bilinear"))

    You can also pass multiple effects as a list

    >>> clip.with_effect([vfx.VolumeX(0.5), vfx.Resize(0.3), vfx.Mirrorx()])
    """
    # Accept either a single effect or a list of effects
    if not isinstance(effects, list):
        effects = [effects]

    new_clip = self.copy()
    for effect in effects:
        # We always copy the effect before using it, see Effect.copy
        # to see why we need to
        effect_copy = effect.copy()
        new_clip = effect_copy.apply(new_clip)
    return new_clip
> Regarding your idea to do more in __init__(), I would say avoid having logic inside __init__(). Yes, it is a lot of boilerplate to be using init just to store parameters, but it's clean to think that an effect is just a description of specs until it is applied to a clip.
Yeah, I agree it seems to be the logic in the Python world (I'm coming from PHP, where the constructor is more frequently used for that). Anyway, I've been mostly using @dataclass for effects, and it makes them a lot simpler and more readable. And if for one reason or another a dataclass is not practical for a particular effect, then we just go and write a good old __init__.
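For example, a minimal sketch of such a dataclass effect (hypothetical: MultiplySpeed is just an example name, and it assumes clips expose a time_transform(time_func) helper):

from dataclasses import dataclass

@dataclass
class MultiplySpeed:
    # The effect is only a description of specs until applied to a clip
    factor: float = 1.0

    def copy(self):
        # Fresh instance, so applying the effect never mutates shared state
        return MultiplySpeed(factor=self.factor)

    def apply(self, clip):
        # Out-of-place: returns a new, faster clip; the original is untouched
        return clip.time_transform(lambda t: self.factor * t)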
> Idea which can be for later: with this new API you can probably get rid of clip.afx (which was for audio effects) and have all effects be applied via clip.with_effects([BlackAndWhite(), VolumeX(), WithEcho(), etc.])
Yep, it's already done ^^. Basically, all audio effects can be applied onto a VideoClip and only modify the audio thanks to the proper decorator, so I also felt like afx was not needed anymore.
> As you said, there is a risk of scope creep in the work towards v2.0. What I would suggest is you do as much as you feel. When you have something you're happy with, submit an MR and let's push it. In the spirit of "shipped is better than perfect", we could even "launch" v2.0 without a fully definitive API, and consider v2.1 the first stable version of v2.
I like that!!!
I will finish adding the effect shortcuts into clip as soon as I have your answer, then I will freeze further changes to the API and call it "definitive" ^^. Then I will update the unit tests (while keeping all ropes and stools out of reach), and finish updating the doc. And finally, I can do the MR...
I would also like to add a way to test all doc examples automatically, as well as update the Docker setup so we can have a shared environment for testing.
I hope I can do all of that in the next 1 to 2 weeks. After that I will have to wrap it up anyway.
Regarding the core effects, I think adding a handful of effects manually is the way.
Some really core effects can have their own short name so they flow well: clip.cropped().rotated().resized()
For other effects, I would say anything that shows that it is out-of-place (e.g. starting with with_) and that flows well in plain English: with_volume_multiplier(), with_speed_multiplier(). Maybe these 5 effects are already enough? I have used fadein, margin and other effects a lot, but wouldn't mind using them as effect objects if need be.
Regarding with_effect, I still think the function should be called with_effects, especially if the internal parameter is called effects. And I would only accept lists. I understand that it's not technically complicated to support both lists and effects, but it will still be [...]
Each of these points is light and not very important, but they add up, and in general I have mostly regretted using mixed-type inputs.
For the rest, good calls :+1:
Kind of missed that, but ideally we wouldn't need effect_copy = effect.copy(); effects would just be a description of specs and an apply recipe to make the effect happen, but they would be immutable, with no internal state, and so no need for copy.
Thanks for taking the time @Zulko :)
> Some really core effects can have their own short name so they flow well: clip.cropped().rotated().resized()
I would have liked clip.with_crop().with_rotate().with_resize() better. It flows a little less well, I admit, but it keeps the API consistent and makes it easier to document (the user just has to look at the methods starting with with_ to get all core modifications, and only has to remember the one rule: if with_, then out-of-place).
This way we only have one path for the user to modify a clip, through the usage of with_* methods.
> Maybe these 5 effects are already enough?

Yep, seems good to me; those are indeed very core functionalities that are enough, when combined with with_subclip, to make entire videos that don't need transitions.

> Regarding with_effect, I still think the function should be called with_effects, especially if the internal parameter is called effects. And I would only accept lists. I understand that it's not technically complicated to support both lists and effects, but it will still be [...]
Okay, I will go with with_effects then. I do agree with a lot of your points; I just liked that not having square brackets for a single effect made it more readable. Also, I do agree that in the long run mixed-type inputs almost always add a maintenance burden.
> Kind of missed that, but ideally we wouldn't need effect_copy = effect.copy(); effects would just be a description of specs and an apply recipe to make the effect happen, but they would be immutable, with no internal state, and so no need for copy.
That would be in an ideal world indeed, but the truth is: working with effects without modifying internal state makes things very hard and unintuitive. Because we set effect params at effect instantiation, but clip properties only become accessible on the call to apply, we frequently have to update effect params from a default value to a real value coming from the clip.
We could avoid doing so by only using local variables and never touching the internals, but it would result in a lot of boilerplate and would probably increase usage of closures and other kinds of "advanced" techniques that make things less readable (and of course writable) for the average user.
Even though we could consider enforcing strict rules internally to prevent side effects when writing internal code, we really can't assume the end users will do the same when writing custom effects, even if we put it in the doc in red glowing letters.
Trust me on that one: this one line of code will save us hundreds of hours of tedious debugging and seemingly inexplicable issues in the long run.
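A tiny self-contained illustration of the failure mode the copy protects against (the Fade effect and Clip stub here are hypothetical, purely for demonstration):

class Clip:
    def __init__(self, duration):
        self.duration = duration

class Fade:
    # Hypothetical effect whose apply() resolves a default from the clip
    # by mutating its own state
    def __init__(self, duration=None):
        self.duration = duration

    def apply(self, clip):
        if self.duration is None:
            self.duration = clip.duration / 2  # side effect!
        return clip  # (actual fading logic omitted)

fade = Fade()
fade.apply(Clip(duration=10))  # resolves duration to 5.0
fade.apply(Clip(duration=2))   # still 5.0: stale state from the first clip
print(fade.duration)           # 5.0, not 1.0

Copying the effect before each application (as with_effect does above) makes the second application start from a clean description again.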
> I would have liked clip.with_crop().with_rotate().with_resize() better.

You are right regarding consistency, but core-core features can be allowed to be a bit irregular. My thinking here is that core examples will be shorter/nicer (which is petty, but counts). Side note: with_resizing or with_resized_frames would be more "plain English" than with_resize.
> working with effects without modifying internal state makes things very hard and unintuitive

I would say follow your heart; this part of the implementation doesn't affect the user experience and so can be revisited later. If the idea is to allow effects to be re-evaluated in the presence of a clip, then I would suggest the following:

def with_effects(effects):
    new_clip = clip.copy()
    for effect in effects:
        # Let the effect resolve any clip-dependent parameters first,
        # returning an updated copy instead of mutating itself
        updated_effect = effect.updated_by_clip(new_clip)
        new_clip = updated_effect.apply(new_clip)
This way:

- Effects that need information from the clip can implement updated_by_clip. This method is a no-op (return self) in the base effect class, so that most effects don't even need to worry about it.
- apply can focus on modifying a clip, not its effect (no side-effects, ahah) unless super-necessary.

Okay, I will go with the short name then.
If we want to go back to with_effects we can indeed do that later. But I really think just copying the effect will be the simplest/most natural way to achieve consistent behavior when reusing an effect.
Do it your way :+1:
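For reference, a fuller sketch of the updated_by_clip pattern discussed above (the Margin effect and its parameters are hypothetical, purely for illustration):

class Effect:
    def updated_by_clip(self, clip):
        # No-op in the base class: most effects never need clip info
        return self

    def apply(self, clip):
        raise NotImplementedError

class Margin(Effect):
    def __init__(self, size=10, width=None, height=None):
        self.size, self.width, self.height = size, width, height

    def updated_by_clip(self, clip):
        # Resolve clip-dependent defaults into a *new* effect instance,
        # leaving the original reusable and free of side effects
        width = self.width if self.width is not None else clip.w + 2 * self.size
        height = self.height if self.height is not None else clip.h + 2 * self.size
        return Margin(self.size, width, height)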
Okay, it's done! I will go back to documentation and probably a lot of unit test fixing ^^. If anyone wants to have a peek at the new system, you can find the current state of the code at https://github.com/OsaAjani/moviepy
I had a look at the effects, looking good :+1: One interesting thing to try to get an idea of the new API would be to update the examples in the example folder (maybe only the ones that are reproducible! Dusting off the 10-year-old examples might be a rabbit hole we don't want to get into now).
Yeah, I thought about the examples too, but I will not have the time to update them to the new API. The new documentation has quite a few more code examples along the road, though. For now I'm writing the introduction tutorial to moviepy.
The goal would be to have something more like the pandas doc: a getting-started section with install, main concepts and a presentation of moviepy, plus a 10-minute tutorial for users who want a rapid understanding of the basics; then a user guide with more in-depth explanations of the different objects, etc., for users who want to dig deeper and understand better; and finally the API reference with the documentation of each function, for full understanding.
Reading along after a few days of development on my side projects testing speed effects of FFMPEG and similar video tools. Impressive work @OsaAjani, and thank you @Zulko, for driving things so strongly forward.
A few comments about readability, and about scope creep:

- A .with_effect() that only accepts one effect and returns a clip is generally a superior super-structure of code. It is slightly verbose chaining together multiple effects, but providing them each on new lines makes it very, very clear to the reader what is happening. More importantly, the inside code and the inheritance models usually stay much simpler. Most importantly, the less "spaghetti glue" code for handling different inputs, the easier this is to maintain across the package when random engineers read the code down the line. I strongly prefer only accepting one effect and CamelCase, as the CamelCase implies the object-oriented encapsulation is atomic and interchangeable across types.
- Why is cropped(), a spatial function, superior to bw(), an equally simple color function? Truth is, just because of convention wandering around other packages. Trust the user to apply the Effect is my opinion, or mandate that every Effect have a "method name" and create those dynamically (which generally is frowned upon).

@OsaAjani Do you need me to read something tomorrow morning? Some code? Documentation? Want me to go through all the Effects and see whether I can organize something for you? I have time.
Unable to catch up on everything posted so far but quickly checking in to say thanks, @OsaAjani, for creating a separate issue! I pinned it alongside the Future of MoviePy issue it branched off of + the Roadmap v2 issue for quick & easy access & so it catches people's attention.
TL;DR: Doing some preview and show of clips, I think I found that CompositeVideoClip masks auto-computed at generation are broken, or that rendering of composite video clips with clips that have masks is broken, and we never saw it because ffmpeg ignores the alpha channel when producing video formats that do not support transparency (basically anything but webm). I need someone to check. Please go see how I think I fixed it for preview and show in https://github.com/OsaAjani/moviepy/commit/aa17db206945ca1364212a893a379d58eaf35b73 for a first entry point to the problem.
> @OsaAjani Do you need me to read something tomorrow morning? Some code? Documentation?
Well, there is something I just stumbled upon and I'm not sure I understand it. I'm not sure how much you can help, though, as it is quite core behavior and I don't know how much time you have had to look into the MoviePy code yet. So maybe @Zulko's opinion would be more adapted on that one; anyway, if you think you can help, the more the merrier :)
So, here is my problem: I think there is, and probably always has been, a bug in MoviePy's handling of composite clip masks, but I'm not sure; maybe I'm just missing something. It's kind of hard to explain, probably because things are still unclear in my head, but I will try to explain as well as possible.
Also, it is 3PM right now and I wanted to post all of that while it was still fresh, but I kind of stopped thinking about an hour ago, so please excuse me if my explanations and ideas are confused and all over the place...
As you know, I've been doing some rework of the preview/show functions to use ffplay and show. While writing some code for the intro tutorial, I encountered some unexpected behavior when using the show and preview functions; more precisely, when previewing a composite video clip with images that have masks, I got a black background where it should have been transparent. Both for preview and show.
Expected result: [image]
Actual result: [image]
What was strange was that it only happened on preview; with write_videofile it was OK.
Looking more in depth at what seemed to be the problem, I think I nailed it down to part of the code that I had copy/pasted from how MoviePy previously computed a frame to be sent to ffmpeg for writing, precisely the part that I removed (I have fixed the problem, and I invite you to go and see the changes in ffplay_previewer.py in my last commit for the full code):
if clip.mask is not None:
    # Scale the 0-1 mask to 0-255 and stack it onto the frame as an alpha channel
    mask = 255 * clip.mask.get_frame(t)
    if mask.dtype != "uint8":
        mask = mask.astype("uint8")
    frame = np.dstack([frame, mask])
This part basically seems to apply the clip's mask. That led me to think that the composite video clip's computed masks are bad. Are we maybe applying the mask twice? Like, for a composite video clip, get_frame would have already applied the mask in the something_blit (don't remember the exact name) function?
In fact, I think the whole generation of transparent video has always had that bug (at least for v2), and we simply never saw it because viewing transparent video is so hard (almost all tools use a black background rather than a mosaic like photo previewers, or simply drop transparency), and because people were generating videos in formats that do not support transparency, making ffmpeg just ignore the buggy alpha channel, or with simple enough videos (without compositing, or without overlapping transparent/non-transparent parts of images), so the bug was never spotted or clearly identified earlier.
What is certain is that if I also remove that part from the video writer, it does not change the result.
So, I would need someone to look at that and tell me if I'm wrong, and transparent video generation with overlapping images (such as an image with transparency under a partly overlapping image with transparency) does work as expected, or if I'm right and this is buggy and needs to be fixed. In the latter case, I would simply propose that we drop transparent video generation support for now.
The bug is probably only visible when you have multiple clips with overlapping mask/non-mask zones, like a text behind an image with a transparent background, as in my example.
If it can help, and because I will forget this if I don't write it somewhere:
I think the composite video clip just combines all the individual clips' masks into one new composite video clip and sets this as the mask. But in fact, the composite video clip should probably never have any mask, at least if we don't intend to support transparency for video.
I'm not sure if combining individual masks into a composite video is bad logic (i.e. this is not how to compute the final mask), or if it is a bad implementation of composite video clip rendering when is_mask == True. But what I know is that if we apply the mask, the result is not the expected one. More precisely, I think it makes "any zone that is transparent in any clip" become transparent for the entire clip, instead of "any zone that is transparent in every clip". Also, I'm not sure how this would play out for clips where only partial transparency applies.
I think the proper logic for combining masks would be a simple addition of all the masks, capped at 1: something like [0, 0.2, 0, 1] + [0, 0.5, 1, 1] = [0, 0.7, 1, 1] seems to be the expected behavior, provided that the mixing of all the clips' colors has been done already. So the idea would really just be to draw clip 1 with its mask on a transparent background, then clip 2 with its mask on top of that, etc. We get the final result, keep the RGB channels on one side, and use the alpha channel as our new mask.
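A tiny numpy illustration of the rule proposed above, with the standard alpha "over" operator alongside for comparison, since the "draw each clip on top of the previous one" description corresponds to it (values taken from the example in this comment):

import numpy as np

# Proposed rule: element-wise addition of the masks, capped at 1
mask_bottom = np.array([0.0, 0.2, 0.0, 1.0])
mask_top = np.array([0.0, 0.5, 1.0, 1.0])
combined = np.minimum(mask_bottom + mask_top, 1.0)
print(combined)  # [0.  0.7 1.  1. ]

# Standard "over" compositing for the alpha channel gives a close but
# not identical result: a_out = a_top + a_bottom * (1 - a_top)
a_over = mask_top + mask_bottom * (1.0 - mask_top)
print(a_over)  # [0.  0.6 1.  1. ]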
Also, a reminder for later: I think we use quite a lot of Pillow to do roughly that in the something_blit function, but we could probably do it a lot faster in pure numpy...
Again, sorry for going in all directions with that one.
@OsaAjani I'll revisit this in the AM when it is not 3AM as it is now. I noticed bugs in CompositeVideoClip months back and created a fix on my side; it had more to do with the code structure, in that CompositeVideoClip and VideoClip do not share the same structure, even though all functions need to operate on either identically. I'll look back into your comment here in the morning and think through what problem you are tackling.
Hey guys, first a little update: I have finished writing the introduction tutorial, the getting-started part seems OK, the user guide is almost done, and the API doc is now fully auto-generated (using autosummary and a little customization). I still have to write the "upgrade from 1.0 to 2.0" page, and probably a few others, like adding some doc about the moviepy.tools module in the user guide.
I'm still not sure what we should do with the gallery and examples, though (for both, the code is old, we probably lack some resources to remake them, and I will not have the time to update them). Any thoughts? Maybe we could remove those for now and add them back when someone has updated them.
I was also wondering if we should keep CompositeVideoClip in its own file under moviepy.composite (as the dir is now empty except for composite video clip), or if we should move it inside the main VideoClip.py. What do you think @Zulko?
Can you post the link to the docs to review/read?
I haven't committed it yet, there are still a few things to do. I know it would be easier for you if I were to publish in a more atomic way, but I'm running short on time and will likely not have time to go back and forth after I release my changes. So I prefer trying to deliver everything as one coherent release to serve as a basis, and let you and others make the necessary adjustments after/before publication.
Nope. Whatever works best for you. So long as it goes through reading and editing before production.
I'm motivated to eventually dust off the gallery of examples, adapt it to the new API. No preference for CompositeVideoClip, but it would make sense to me to keep it in its own file (I prefer smaller file sizes).
So am I— @OsaAjani let me know if you want to do this together. Not motivated by individualist efforts at this time on my side. Personal preference— here to make colleagues. You are doing excellent work.
PS: I have offline resources we should discuss at some point. Working orthogonally to you.
> I'm motivated to eventually dust off the gallery of examples, adapt it to the new API.
> So am I— @OsaAjani let me know if you want to do this together.

That would be nice; we could include those in the getting-started part of the doc. I won't have any time myself, but maybe you two can see about doing it together?
On my part, I haven't been able to do much these last two days, but I think I'm more or less done with the documentation. I need to update my examples to be in sync with the latest API, and I really want to add some kind of testing of the doc examples. Nothing too fancy, just making sure the scripts run without error. This way we can be sure the doc is up to date, and it will also serve as some sort of functional testing with almost zero overhead.
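A minimal sketch of what such a check could look like, assuming the doc examples exist as standalone scripts under a hypothetical docs/examples directory:

import pathlib
import subprocess
import sys

def test_doc_examples():
    # Smoke test only: every example script must run to completion
    for script in sorted(pathlib.Path("docs/examples").glob("*.py")):
        result = subprocess.run([sys.executable, str(script)])
        assert result.returncode == 0, f"{script} exited with an error"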
I also need to run the unit tests and fix everything that doesn't work (god I hate unit testing). Then it will be shipping time 🥳.
After that I will probably not be able to work much on the project for at least one or two months; I'll make sure to poke my head in now and then, but nothing as time-consuming.
While working on updating tests, I think I found a bug in current master where GIFs written with imageio have no duration and an incorrect framerate; that should be fixed with my last commit. So if you do something related to GIFs in the examples, make sure to use the latest commit.
Hey @Zulko, I could use your help to re-introduce tests on IPython in this commit: https://github.com/OsaAjani/moviepy/commit/5cd0cac1b46124536b1133f1d008f62b929881e2
I don't understand IPython enough, and I don't have the time to familiarize myself with it enough to understand what seems to go wrong (not sure, but I think I've seen someone say this unit test is already broken in current master).
Looks good to my naive eyes. The new syntax is very clear. One comment: do not import * for security reasons.
> I'm motivated to eventually dust off the gallery of examples, adapt it to the new API.
> So am I— @OsaAjani let me know if you want to do this together.
> That would be nice; we could include those in the getting-started part of the doc. I won't have any time myself, but maybe you two can see about doing it together?

I will go through the gallery of examples tomorrow in the afternoon, for an hour or two, and write a set of notes. From there I'll pitch it back to you and @Zulko and see what the group wants to do.
Should we create a separate branch off master against which the PR should be made, so the review process (+ any further changes necessary) can happen independently of what goes on in the master branch? I feel like that would create fewer headaches in case anything else gets merged into master during that time.
Makes sense to me, @keikoro. Might want to wait until @OsaAjani finishes this major change; and, anything after that or done by someone else goes on a separate branch you create. Thoughts?
> Should we create a separate branch off master against which the PR should be made

I'm not sure we need it, as I think we are now no more than a few days from making the PR, and I hope the review will be fairly quick.
In the future, though, I would advise that we keep only the master branch representing the current published version, and that we add a second branch named "dev" to make PRs against and prepare the next release. This way a user landing on the GitHub will see the current state of things in sync with PyPI, and dev will give a preview of the future. By accepting PRs on dev frequently when unit tests pass, dev can also be used as some sort of nightly build, letting users try the latest features in advance.
> TL;DR: Doing some preview and show of clips, I think I found that CompositeVideoClip masks auto-computed at generation are broken, or that rendering of composite video clips with clips that have masks is broken, and we never saw it because ffmpeg ignores the alpha channel when producing video formats that do not support transparency (basically anything but webm). I need someone to check. Please go see how I think I fixed it for preview and show in OsaAjani@aa17db2 for a first entry point to the problem.
I had faced the issue 2-3 months back but just ignored it (dropped the idea), as I thought I might be doing something wrong since I am new to programming. But I had composited a complex composite clip (as I am from a graphics background) which consisted of a mask which itself was a composite clip, consisting of images with their own masks.
> In the future, though, I would advise that we keep only the master branch representing the current published version, and that we add a second branch named "dev" to make PRs against and prepare the next release.

Oh yeah, I didn't mean to target this upcoming PR only, or specifically; I very much agree we should have two branches going forward (might have mentioned this in older comments as well). This PR would simply have been the one with which we could introduce this new system.
I also think, if we're already changing how we handle branches anyway, we should revisit an older suggestion of mine to rename the master branch to main, since main branches have become a lot more prevalent since GitHub made it the default and many (much) large(r) projects have also switched over time. Tbh, when I see master branches on established projects now, I wonder if it's due to inactivity, having slept on this change, or stuffiness/super-oldschool-ness/unwillingness to go with the times.
> rename the master branch to main, since main branches have become a lot more prevalent since GitHub made it the default

Seems like a good idea to me.
> TL;DR: Doing some preview and show of clips, I think I found that CompositeVideoClip masks auto-computed at generation are broken, or that rendering of composite video clips with clips that have masks is broken, and we never saw it because ffmpeg ignores the alpha channel when producing video formats that do not support transparency (basically anything but webm). I need someone to check. Please go see how I think I fixed it for preview and show in OsaAjani@aa17db2 for a first entry point to the problem.
Did you find the solution to the alpha channel not being used in transparent webm video? I also see black instead of the background. Looks like the alpha masking calculation is broken.
[image: output with transparent webm video in the middle]
I have absolutely no memory of the state of the issue in the end; it's been quite a long time... You might dig through the commits to see if I posted any comment about fixing the alpha channel. Also make sure the alpha channel is actually present; it's quite hard to find a webm test file that actually has an alpha channel ^^.
Yes, the webm file has alpha, verified using ffmpeg. I gave up on MoviePy and used ffmpeg directly. :(
Maybe this issue should be renamed/closed?
Following up on @keikoro's suggestion in issue #1874, I open this issue to be the place where we can discuss the changes (general as well as specific implementations) we want to see in the v2.0 API and architecture.
If you were part of the technical discussions for the v2.0 API, please come here: @Zulko @mgaitan @tburrows13 @davidbernat