Hi,
Thank you for creating the project. I was thinking of creating something like that after seeing Linux-Fake-Background-Webcam, but then you already created it. This may be a long shot, but I was wondering whether you are open to changing the license to MIT (or compatible)? It could potentially make it more attractive for re-use. I understand if you would rather not.
Thank you
I was considering it myself, and currently there is almost no code in the main files that I cannot re-license. I am still not sure whether I want it or not. I (almost) avoided merging other code and was considering asking people to license code for the main project as MIT/BSDL or some other weak copyleft license to keep that option.
Do you have any specific project for which you would like to use it with a reason why it cannot be GPLed?
If you're only developing plugins, there is no problem keeping your own code under MIT or a similar GPL-compatible license to re-use it in other projects.
Other issues:
- the node module is Apache licensed. This is GPL compatible, but I don't know about the compatibility with MIT/BSDL. The bodypix_functions are a rewrite of some of the code. No code copied, but some techniques re-implemented in a very similar way.
- I wish the TensorFlow model had a clear license, but I cannot find one anywhere. Otherwise I could bundle the decoded model instead of using tfjs-to-tf and download scripts.
Thank you for getting back so quickly.
Do you have any specific project for which you would like to use it with a reason why it cannot be GPLed?
There is no specific project. It's more like a what if... some of the code becomes useful (e.g. contributions) for other purposes or could become a library. Or maybe allow other people to use it in other projects licensed under MIT. The GPL license just makes that more difficult, with more grey areas. I was considering contributing, e.g. to make it faster (I wouldn't have a lot of time to spare, though).
If you're only developing plugins, there is no problem keeping your own code under MIT or a similar GPL-compatible license to re-use it in other projects.
Well, that is one of the grey areas, I would say. If the GPL code depends on MIT code, then that is certainly fine. But if MIT code has a hard dependency on GPL code, then that could be an issue.
- the node module is Apache licensed. This is GPL compatible, but I don't know about the compatibility with MIT/BSDL. The bodypix_functions are a rewrite of some of the code. No code copied, but some techniques re-implemented in a very similar way.
The Apache and MIT licenses are good friends. Wikipedia groups them as permissive software licenses. I usually use MIT, but any license of that group would probably be fine.
- I wish the TensorFlow model had a clear license, but I cannot find one anywhere. Otherwise I could bundle the decoded model instead of using tfjs-to-tf and download scripts.
Probably can find out. But all related code appears to be Apache licensed.
You have a dependency on pyfakewebcam, which is GPL. Maybe it would be good to find an alternative for that or re-implement it.
Python and GPL are kind of a grey area anyway. The GPL talks about distributing (binaries), and on your own PC freedom 0 applies (use it like you want). As long as I distribute it unbundled, the user is combining different sources on their PC and none of the licenses forbids it. Even Java developers tend to use licenses like the CDDL, which are more specific to programming languages that combine code/intermediate code without linking.
In the end this would mean, as you said, that your plugin is treated as GPL as long as you use it with the project, but as MIT when you distribute it standalone. Whether you then consider it to have a GPL dependency, or to just be incomplete and in need of adaptation to work standalone, is a question of viewpoint, but your code without additions from others will always be your code and can have many licenses.
Do you only want to contribute when it isn't GPLed? There are two questions here when you want to contribute larger changes to the core script: first, whether you don't want to contribute to a GPL project (your code can be dual licensed, but for changes in the core that doesn't really make sense), and second, whether I would like a GPL patch there or would prefer it being weak copyleft, so the project can become more open later on without asking too many people to re-license.
Probably can find out. But all related code appears to be Apache licensed.
Yes, but nothing about the model anywhere and no Apache licensed repo that contains the actual model.
Do you only want to contribute when it isn't GPLed?
I'd be more inclined to invest a bit more time in it. But I already appreciate you considering it and having the discussion.
There are two questions here when you want to contribute larger changes to the core script: first, whether you don't want to contribute to a GPL project (your code can be dual licensed, but for changes in the core that doesn't really make sense), and second, whether I would like a GPL patch there or would prefer it being weak copyleft, so the project can become more open later on without asking too many people to re-license.
I guess I am in favour of making code permissively available to anyone. So I thought it is still early enough in the project to make that easy.
Probably can find out. But all related code appears to be Apache licensed.
Yes, but nothing about the model anywhere and no Apache licensed repo that contains the actual model.
This thread might be worth a separate issue?
I found Google Coral's project-bodypix which embeds a version of the model (repo itself is Apache licensed). The repo itself might also be a source of inspiration.
The tfjs-models repo is also Apache licensed. While the model files themselves are not included, it does give the impression that they are licensed under the same or a related license (since the repo is about the models).
It's certainly worth considering. I just don't see any indication that the model is meant to be less permissive.
In any case, I am not sure whether the files need to be re-hosted. The download itself could just be improved (e.g. on-demand).
My thoughts about it were that I work in computer graphics, and when the program became popular I was not sure whether it might be something our clients would want to have. I like to have it available to everyone, so the open code is at least GPLed, but when licensing it to some company they may both want it to be usable for them (which may mean that I want to keep patches from others as MIT/BSDL or similar, as I do not like to start with complicated CLAs), but also possibly not want others to integrate the open code into their competing solution without giving something back. Now that the initial Zoom/video conferencing hype is over, it looks like both keeping it GPL and opening it up may be possible without major consequences, but I am not sure yet. For keeping it open I would probably prefer GPL, as it may be very easy to create a product from it without giving back anything at all.
This thread might be worth a separate issue?
It may be. I tried to find out myself, e.g. here: https://ai.stackexchange.com/questions/20369/how-does-a-software-license-apply-to-pretrained-models
I found Google Coral's project-bodypix which embeds a version of the model (repo itself is Apache licensed). The repo itself might also be a source of inspiration.
This is interesting. With such projects I am never quite sure whether they are actually licensing it or are just careless about documenting the license/copyright themselves (who should be named as the authors?). I would wish for some hint on how to properly credit the authors.
In any case, I am not sure whether the files need to be re-hosted. The download itself could just be improved (e.g. on-demand).
I'd like to rewrite the script from simple body-pix in Python. But I should probably find out first how to use the quantized models, so that any of the available models can be used.
I like to have it available to everyone, so the open code is at least GPLed, but when licensing it to some company they may both want it to be usable for them (which may mean that I want to keep patches from others as MIT/BSDL or similar, as I do not like to start with complicated CLAs),
I am not sure how you will be able to effectively apply a different license to patches when they are intermingled with your code that you are also maintaining.
but also possibly not want others to integrate the open code into their competing solution without giving something back.
I guess that is the general argument for the GPL. But unless you are really selling a product, what would be the loss and how likely is it? There may even be contributions back from people having used it, if there are any bugs.
Back when I started out coding I didn't release any code because I thought someone might use it or I could have. Now I would go back in time and just tell myself to get it out there.
In any case, I don't completely understand your work-related involvement and what it might mean. So you may have other considerations if you are intending to create a product out of it. But it will make contributions more difficult. You might find it easier yourself to have an MIT-licensed core project with contributors and a proprietary product, which may be GPL but which only you or colleagues contribute to.
I am not sure how you will be able to effectively apply a different license to patches when they are intermingled with your code that you are also maintaining.
When a patch is MIT licensed you can apply it to any code, provided you give credit as described in the license.
This would mean you can, for example, apply it both to the GPL code and to other, GPL-free code. One can license code that is solely one's own under different licenses, e.g. once as GPL and once as "for customer X's product only", and then apply MIT patches while giving the right credit. In the public version the patch will only be usable together with GPL code; in the private version there is no GPL code, because it is the same code licensed another way.
This is of course only true as long as no GPL contribution from other people is used in the private code.
Back when I started out coding I didn't release any code because I thought someone might use it or I could have. Now I would go back in time and just tell myself to get it out there.
I plan to keep the project open. The only question is whether I have the option to provide potential customers with an extended version with changes they paid for, which possibly should not be available to their competitors.
In any case, I don't completely understand your work-related involvement and what it might mean. So you may have other considerations if you are intending to create a product out of it. But it will make contributions more difficult. You might find it easier yourself to have an MIT-licensed core project with contributors and a proprietary product, which may be GPL but which only you or colleagues contribute to.
MIT core is the alternative to "CLA core" (in the simplest form meaning to accept core patches only under weak copyleft, without requiring developers to sign complicated contracts).
But this also means that the open version can be the base for a lot of commercial products without any return. Imagine a company like Zoom incorporating your product and the only thing you get out of it is your name on the "About this product" page. I get the idea for libraries, but I do not like it for products.
I have now asked one contributor in the issue linked above whether he would like to license his patch under MIT, so I have some more time to decide between GPL, open core and MIT, but I think I want it to be GPLed. And if you look closely, there are even three GPLed lines in the core which are not under my copyright.
Hello again. Not sure if it was a good idea, but I embarked on a project to re-implement the model as python-tf-bodypix, licensed under MIT. I was reading the original tfjs code while doing so, but I may not have completely unseen your implementation. Please do let me know if you think there is any issue. Perhaps you might even find it useful, and of course I would welcome collaboration. It also has simple code to download the original model. It's designed to be used as a library focusing on the bodypix model, or as a demo CLI application; i.e. I might add background replacement for demo purposes, but advanced config and filters would be out of scope of that project.
It is mostly about the bodypix_functions.py file, isn't it?
I already thought about licensing this file under MIT/BSDL/Apache or some similar license. The part I am unsure about licensing like that is the main project.
The bodypix functions were created after reading the bodypix TypeScript code, so it may be fair to keep them under the same license. I have not decided this part yet, as it is still used correctly under GPL for now, and I would need to make clear which part of the project has which license in order to license them differently, but I think I am open to licensing these functions under weak copyleft.
When your code can be used easily and does not introduce too many dependencies it may be a good idea to work together on having this part as a weak copyleft library.
Is there a reason why you did not choose the Apache license here?
For libraries I usually just use what the upstream (here inspired by bodypix js) uses to make it easier for people to use my contributions if they find them useful. Maybe the Google engineers would like to use it with python as well.
Did you get the quantized models to work in your implementation?
It is mostly about the bodypix_functions.py file, isn't it?
Perhaps in a way. Maybe slightly more than that. I understand that this file is mostly replicating the JS code. The code that mostly copies the JS code I put in the bodypix_js_utils subpackage. So I guess the main purpose of that new project is to also include running it via TensorFlow, helping with getting the model, further abstracting away the difference between mobilenet and resnet, etc. So that with a few lines of code one can make the bodypix model useful (I've included something in the README).
Then it could be used in projects like yours, or other projects. (e.g. I was considering using it in OBS Studio, just not sure what an easy way would be)
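Something along these lines (a simplified sketch; the exact function and constant names are best taken from the README, as the ones here may not match the current API):

import numpy as np
from tf_bodypix.api import download_model, load_model, BodyPixModelPaths  # names assumed; check the README

# download (and cache) one of the published models, then load it
bodypix_model = load_model(download_model(BodyPixModelPaths.MOBILENET_FLOAT_50_STRIDE_16))

# run the model on a single RGB frame and derive a person mask
frame = np.zeros((480, 640, 3), dtype=np.float32)  # placeholder image for the sketch
result = bodypix_model.predict_single(frame)
mask = result.get_mask(threshold=0.75)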
When your code can be used easily and does not introduce too many dependencies it may be a good idea to work together on having this part as a weak copyleft library.
For that reason I made all dependencies "extras", so that one can include just the dependencies needed for the use-case. TensorFlow 2.x (and I think numpy) will be required in any case.
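Roughly along these lines in setup.py (simplified; the extra names here are only illustrative, not necessarily the ones the package actually uses):

from setuptools import setup, find_packages

setup(
    name='tf-bodypix',
    packages=find_packages(),
    # needed in any case
    install_requires=['tensorflow>=2.0.0', 'numpy'],
    extras_require={
        'image': ['pillow'],          # illustrative: image file loading helpers
        'webcam': ['opencv-python'],  # illustrative: webcam/video capture support
    },
)

Users would then install only what they need, e.g. pip install tf-bodypix[webcam].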
Is there a reason why you did not choose the Apache license here? For libraries I usually just use what the upstream (here inspired by bodypix js) uses to make it easier for people to use my contributions if they find them useful. Maybe the Google engineers would like to use it with python as well.
It is just that MIT is currently my default license. The license should be compatible with the Apache license. So far I have seen them as interchangeable. Would be happy to change the license if that becomes an issue.
Did you get the quantized models to work in your implementation?
I haven't tried yet. I am certain it will currently not work due to some dependency on the floats. When I looked at where time is spent using your project with mobilenet, the majority was spent after the model, or on calculating masks that weren't needed for the selected filters. But it is certainly something that would be nice to try out.
Since you also reimplemented the JavaScript code of the bodypix functions and some model handling, I think it's nice to have a package bundling it. Libraries under weak copyleft are useful; I just (currently) do not want to release the main program under a weak copyleft license.
My first implementation started as a fork in https://github.com/allo-/simple_bodypix_python, which does not have a clear license, which was one reason to create a project that does not depend on this code.
For the actual weak copyleft license I have no strong preference when writing libraries. Using the upstream license even when you could use another one is more about making it easier to merge the file into the upstream without requiring them to add another license file for a very similar license.
I guess for python code that cannot be merged with the js code anyway it does not really matter.
For the differences between mobilenet and resnet, this project has to consider the effect on plugins that use the body part masks here. I am not sure in how much detail you want to process this data.
And I think resnet isn't that useful for real-time processing on most machines, and it depends on lighting and other factors whether the biggest model is actually the best model.
Did you see the github issue in the obs-studio project for integrating segmentation? This could be a good starting point for discussing the inclusion of your library.
I haven't tried yet. I am certain it will currently not work due to some dependency on the floats.
This may be some task for the tfjs-to-tf project, but I am not really sure at which point this needs to be handled.
For the actual weak copyleft license I have no strong preference when writing libraries. Using the upstream license even when you could use another one is more about making it easier to merge the file into the upstream without requiring them to add another license file for a very similar license. I guess for python code that cannot be merged with the js code anyway it does not really matter.
They are all valid comments. To be honest, I have been quite ignorant about the differences. You made me read up more about it, but I need to digest it more.
For the differences between mobilenet and resnet, this project has to consider the effect on plugins that use the body part masks here. I am not sure in how much detail you want to process this data.
I may not have implemented everything, e.g. I haven't added anything related to the poses. But the part masks seem to be the same in principle?
And I think resnet isn't that useful for real-time processing on most machines, and it depends on lighting and other factors whether the biggest model is actually the best model.
Have you tried resnet with a GPU? (I don't have one on my local laptop) With the CPU, it is taking around 180ms for the model. There may be room for improvement by being more selective about the tensors, e.g. the part mask may not be relevant for the use-case. For the purpose of the library it's just about offering the option, at least as long as the effort is limited.
Did you see the github issue in the obs-studio project for integrating segmentation? This could be a good starting point for discussing the inclusion of your library.
I haven't actually. Perhaps you could post the link? My keywords only led to virtualcamera issues.
I guess using the Python project would still be somewhat a hack. The "proper" way would probably be via TensorFlow Lite for C/C++. But right now I'd be happy with a hack. Not sure if you have an idea how to get a blurred alpha mask into OBS? (I know this is getting well off topic)
I haven't tried yet. I am certain it will currently not work due to some dependency on the floats. This may be some task for the tfjs-to-tf project, but I am not really sure at which point this needs to be handled.
I only had a brief look. Actually the "quant" TensorFlow JS models seem to load just fine. But they are using floats and have the same performance. I am not familiar with the quant models myself, but whether they provide a speed-up might be hardware dependent? It may only make sense with TensorFlow Lite. The models in project-bodypix are specific to the EdgeTPU.
I may not have implemented everything, e.g. I haven't added anything related to the poses. But the part masks seem to be the same in principle?
No, the tensors are different:
if model_type == "mobilenet":
    segment_logits = results[1]
    part_heatmaps = results[2]
    heatmaps = results[4]
else:
    segment_logits = results[2]
    part_heatmaps = results[5]
    heatmaps = results[6]
And the preprocessing is also different.
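For reference, a sketch of the preprocessing difference as I read it in the tfjs body-pix sources (the exact constants should be double-checked against the upstream code):

import numpy as np

def preprocess(image: np.ndarray, model_type: str) -> np.ndarray:
    # image: float32 RGB array of shape (height, width, 3) with values in 0..255
    if model_type == "mobilenet":
        # mobilenet expects the input scaled to the range [-1, 1]
        return image / 127.5 - 1.0
    # resnet expects per-channel offsets added (constants taken from the tfjs code)
    return image + np.array([-123.15, -115.90, -103.06], dtype=np.float32)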
Have you tried resnet with a GPU? (I don't have one on my local laptop)
I think it worked on my GPU with 2-5 fps, but I would need to test again. It wasn't that usable, and with the parameters I tried the results were not that much better than using mobilenet.
I haven't actually. Perhaps you could post the link? My keywords only led to virtualcamera issues.
Sorry, it was webcamoid; see https://github.com/webcamoid/WebcamoidIssues/issues/26
I guess using the Python project would still be somewhat a hack. The "proper" way would probably be via TensorFlow Lite for C/C++.
In the beginning of the project I thought about C/C++ (before having plugins), but Python modules with C backends are very fast. You can't beat numpy with naive C code easily. I guess the numpy backend uses vectorization and other optimizations that a simple C program does not use.
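As a small illustration of what that vectorization means in practice (a generic sketch, not code from this project): compositing the frame over a background is a single array expression, with the per-pixel loop running in numpy's optimized C backend:

import numpy as np

def composite(frame: np.ndarray, background: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # frame, background: uint8 images of shape (h, w, 3); mask: float values in [0, 1], shape (h, w)
    alpha = mask[..., np.newaxis]  # broadcast the mask over the colour channels
    # one vectorized expression instead of a nested Python (or naive C) loop
    return (alpha * frame + (1.0 - alpha) * background).astype(np.uint8)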
I only had a brief look. Actually the "quant" TensorFlow JS models seem to load just fine. But they are using floats and have the same performance.
I am not sure. The last time I tried, something was wrong. I am not sure what the problem was, but I think the tensors had no usable results. Thinking about it now, it may be that the tensors need other preprocessing. Maybe it provides a speedup only on hardware that works faster with floats than with doubles.
I may not have implemented everything, e.g. I haven't added anything related to the poses. But the part masks seem to be the same in principle?
No, the tensors are different:
if model_type == "mobilenet":
    segment_logits = results[1]
    part_heatmaps = results[2]
    heatmaps = results[4]
else:
    segment_logits = results[2]
    part_heatmaps = results[5]
    heatmaps = results[6]
So far I seem to get by with using the names.
And the preprocessing is also different.
There still seems to be little difference, and that can be hidden away in the library.
This is what I have at the moment: https://github.com/de-code/python-tf-bodypix/blob/7b65e904fca000944e413ee491c2fe375196244b/tf_bodypix/model.py#L81-L109
I haven't rigorously tested it though, especially the part masks. It does seem to show something.
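Regarding "using the names", a generic sketch of the idea (not the actual model.py code; the name fragments are only illustrative): outputs are picked by (partial) tensor name instead of relying on an index order that differs between mobilenet and resnet:

def get_output(outputs: dict, name_part: str):
    # outputs: dict of output name -> tensor, as returned by a loaded SavedModel signature
    # pick an output by (partial) name instead of by index
    for name, tensor in outputs.items():
        if name_part in name:
            return tensor
    raise KeyError(name_part)

# e.g. (illustrative name fragments):
# segment_logits = get_output(outputs, 'segments')
# part_heatmaps = get_output(outputs, 'part_heatmaps')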
Have you tried resnet with a GPU? (I don't have one on my local laptop)
I think it worked on my GPU with 2-5 fps, but I would need to test again. It wasn't that usable, and with the parameters I tried the results were not that much better than using mobilenet.
That is what I seem to be getting with my CPU for resnet without part masks (>= 4.5 fps). I did see some noticeable difference for some sample images. But for myself I haven't seen much of a difference yet either (I am not wearing much decoration, I guess).
I haven't actually. Perhaps you could post the link? My keywords only led to virtualcamera issues.
Sorry, it was webcamoid; see webcamoid/WebcamoidIssues#26
Okay, great. That's a long thread.
I guess using the Python project would still be somewhat a hack. The "proper" way would probably be via TensorFlow Lite for C/C++.
In the beginning of the project I thought about C/C++ (before having plugins), but Python modules with C backends are very fast. You can't beat numpy with naive C code easily. I guess the numpy backend uses vectorization and other optimizations that a simple C program does not use.
Yes, you are right, Python makes it easy to use C code. The reason why I thought using C TensorFlow Lite would be better for OBS was because it is written in C and it doesn't seem to support proper Python plugins (just some limited Python scripts). webcamoid also seems to be in C++. I guess I need to read all of the comments on the issue.
@de-code I licensed the bodypix_functions.py file under MIT. If you would like to use something from it for your project, you can now integrate it.
And see also webcamoid/webcamoid#46 for using a quantized model.
@de-code I licensed the bodypix_functions.py file under MIT. If you would like to use something from it for your project, you can now integrate it.
Great, thank you.
I'll close here for now. I think I'd like to keep the main program GPLed and may add the MIT license to other library parts where it makes sense.
For example, something like the get-model script could be a candidate for a very open license once I extend it. But I think you have a download function in your library? Then it's probably more useful to depend on this.
But I think you have a download function in your library? Then it's probably more useful to depend on this.
Yes, I attempted to make things easy. Remote models or files are downloaded and indefinitely cached based on the URL.
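Just to sketch the idea for the get-model use case (a generic example, not the actual implementation): the cache file name is derived from the URL, so repeated runs skip the download:

import hashlib
import os
import urllib.request

def download_and_cache(url: str, cache_dir: str = os.path.expanduser('~/.cache/bodypix-models')) -> str:
    # cache key derived from the URL; the cache directory here is just an example
    os.makedirs(cache_dir, exist_ok=True)
    cache_path = os.path.join(cache_dir, hashlib.sha1(url.encode('utf-8')).hexdigest())
    if not os.path.exists(cache_path):
        urllib.request.urlretrieve(url, cache_path)  # only hit the network on a cache miss
    return cache_path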
BTW, just thought I'd mention that I started experimenting with an extension project, layered-vision, which is somewhat similar to yours. I just wanted to experiment with a slightly different approach. It is a bit less coupled to the model (e.g. you could also use a chroma key without the model being loaded), but the config is more verbose. Not sure if it will serve an actual purpose in the end. In any case, I don't mean to advertise it.