I assume it scans the image for a human, segments out the parts connected to the (potentially moving) human in the foreground, and treats everything else as background by exclusion. Otherwise it would probably not work for animated backgrounds.
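To make the speculation concrete, here is a minimal sketch of that kind of human-segmentation pipeline using an off-the-shelf torchvision model. This is purely illustrative and not backgroundremover's actual code; the file names and the person-class choice are assumptions.

```python
# Hypothetical sketch of the speculated approach: segment the person,
# treat everything else as background. NOT backgroundremover's actual
# pipeline -- just an off-the-shelf torchvision illustration.
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("input.jpg").convert("RGB")  # hypothetical input file
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]  # (21, H, W) logits

# Class 15 is "person" in the PASCAL VOC label set this model predicts.
mask = (out.argmax(0) == 15).byte().mul(255)

# Use the mask as an alpha channel: person stays, background goes transparent.
rgba = img.copy()
rgba.putalpha(Image.fromarray(mask.numpy(), mode="L"))
rgba.save("output.png")
```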
Another approach would be to use an additional image of the background alone and separate foreground from background via the differences to that reference. But since no such additional source is passed as a parameter in the examples, that cannot be what happens here.
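For illustration, the difference-based idea would look roughly like this OpenCV sketch, where `background.jpg` stands in for the hypothetical clean background plate that the tool does not actually accept:

```python
# Sketch of the difference-to-a-known-background idea. "background.jpg"
# would be an extra input backgroundremover does not take; this only
# illustrates why such a source would be needed for this approach.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")
background = cv2.imread("background.jpg")  # hypothetical clean plate

# Pixels that differ strongly from the known background are foreground.
diff = cv2.absdiff(frame, background)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# Clean up speckle noise in the mask before compositing.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

result = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite("foreground.png", result)
```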
The most robust way would probably be an additional input from a depth sensor/camera. Then there would be no requirement that a human or other specific object be recognized in the foreground. However, that would impose bigger hardware requirements on users of the software. ^^'
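With a depth map, separation would reduce to a simple threshold on distance. A minimal NumPy sketch, assuming a hypothetical depth array pixel-aligned with the color image (no such input exists in backgroundremover):

```python
# Sketch of depth-based separation, assuming a depth map in meters that
# is pixel-aligned with the color image -- e.g. from a RealSense- or
# Kinect-style sensor. Purely hypothetical for this tool.
import numpy as np
from PIL import Image

color = np.array(Image.open("color.png").convert("RGBA"))
depth = np.load("depth.npy")  # hypothetical (H, W) float array, meters

# Everything closer than 1.5 m counts as foreground, regardless of
# whether it is a human -- the appeal of the depth-based approach.
foreground = depth < 1.5
color[..., 3] = np.where(foreground, 255, 0).astype(np.uint8)

Image.fromarray(color).save("cutout.png")
```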
Just check the code. The gist of the algorithm is xuebinqin/U-2-Net.
The first models (u2netp and u2net) are for salient object detection, and the other (u2net_human_seg) is for human segmentation.
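For a sense of how such a model is typically applied, here is a hedged sketch of the usual U-2-Net inference loop, following the conventions of the upstream xuebinqin/U-2-Net repo (320x320 input, the first of the seven side outputs used as the saliency map). backgroundremover's exact pre-/post-processing may differ.

```python
# Hedged sketch of running U-2-Net for salient object detection,
# following the upstream repo's conventions. Assumes the U-2-Net repo
# is on the path (it provides the U2NET class) and a downloaded
# u2net.pth checkpoint.
import torch
import numpy as np
from PIL import Image
from model import U2NET  # model definition from xuebinqin/U-2-Net

net = U2NET(3, 1)
net.load_state_dict(torch.load("u2net.pth", map_location="cpu"))
net.eval()

img = Image.open("input.jpg").convert("RGB")
x = np.array(img.resize((320, 320)), dtype=np.float32) / 255.0
x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]  # ImageNet stats
x = torch.from_numpy(x.transpose(2, 0, 1)).float().unsqueeze(0)

with torch.no_grad():
    d0, *_ = net(x)  # d0 is the fused prediction; the rest are side outputs

# Min-max normalize to [0, 1], resize back, and use as an alpha matte.
pred = d0[0, 0]
pred = (pred - pred.min()) / (pred.max() - pred.min())
alpha = Image.fromarray((pred.numpy() * 255).astype(np.uint8)).resize(img.size)

rgba = img.copy()
rgba.putalpha(alpha)
rgba.save("cutout.png")
```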
The foreground is then estimated with pymatting/pymatting, as pymatting.foreground.
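As an illustration of where pymatting fits, here is a sketch built on its documented API. Only the foreground estimation (pymatting.foreground) is confirmed by the comment above; the alpha-refinement step with estimate_alpha_cf and the trimap loaded from disk are assumptions for the sake of a self-contained example.

```python
# Sketch of the matting/foreground-estimation step with pymatting's
# documented API. A real pipeline would presumably derive the trimap
# from the U-2-Net mask (certain foreground, certain background,
# unknown border band); here it is simply loaded from disk.
from pymatting import (
    estimate_alpha_cf,
    estimate_foreground_ml,
    load_image,
    save_image,
    stack_images,
)

image = load_image("input.png", "RGB")     # floats in [0, 1]
trimap = load_image("trimap.png", "GRAY")  # 0 = bg, 1 = fg, 0.5 = unknown

# Closed-form matting refines the rough trimap into a soft alpha matte.
alpha = estimate_alpha_cf(image, trimap)

# Estimate clean foreground colors so edge pixels lose background tint.
foreground = estimate_foreground_ml(image, alpha)

# Compose an RGBA cutout and write it out.
save_image("cutout.png", stack_images(foreground, alpha))
```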
On a side note, the README was updated 3 days ago (https://github.com/nadermx/backgroundremover/commit/5cfa5bdc5557fb1b56b610a6f380250a114c201c) with a link to a little blog post mentioning U2-Net, which was posted two days before the OP asked the question. So the information was already available through this other means, just not as easily as it is now.
Finally, I will mention a competitor to U2-Net: MODNet. I don't know which one performs better.
Thank you @woctezuma. I will also add that the video part has a write-up with a video made by https://github.com/ecsplendid, who is the developer whose code I modified to get it working here; it all came from this issue in the rembg library.
@woctezuma I'll also put on the todo list the ability to load different models.
How does it work?
Would it be possible to add a paragraph to the main README.md?
I am not looking for a detailed 1:1 source-code explanation, just the main gist of how the correct pixels are determined, and whether the program may also pick the "wrong" pixels (I assume some backgrounds will be harder). If you have some time in the future, a paragraph in this regard would be nice; it does not have to be overly long either. Thanks for reading!