nadermx / backgroundremover

Background Remover lets you Remove Background from images and video using AI with a simple command line interface that is free and open source.
https://www.backgroundremoverai.com
MIT License

How does it work? #1

Closed · rubyFeedback closed this issue 3 years ago

rubyFeedback commented 3 years ago

How does it work?

Would it be possible to add a paragraph to the main README.md?

I am not looking for a detailed 1:1 source code explanation, just the main gist of how the correct pixels are determined and whether the program may also pick the "wrong" pixels (I assume some backgrounds will be harder). If you have some time in the future, a paragraph would be nice in this regard - it does not have to be overly long either. Thanks for reading!

TheJackiMonster commented 3 years ago

I assume it scans the image for a human, segments the image into parts that are connected to the (potentially moving) human in the foreground, and treats the remaining parts as background by exclusion. Otherwise it would probably not work for animated backgrounds.

Another approach would be to use an additional image source showing only the background, and to separate foreground from background via differences against that background source. But since no such additional source is given as a parameter in the examples, that is not what happens here.
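For what it's worth, that second idea is easy to prototype. A minimal sketch with OpenCV, assuming you have a shot of the empty background and a frame with the subject in it, both the same size (the file names and the threshold of 30 are made up for illustration):

```python
import cv2
import numpy as np

# Hypothetical inputs: a shot of the empty scene and a frame with the subject.
background = cv2.imread("background_only.jpg")
frame = cv2.imread("frame_with_subject.jpg")

# Per-pixel absolute difference between the frame and the known background.
diff = cv2.absdiff(frame, background)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

# Pixels that changed "enough" are treated as foreground; 30 is an arbitrary threshold.
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# Clean up speckles, then cut the subject out; everything else becomes transparent.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
rgba[:, :, 3] = mask
cv2.imwrite("cutout.png", rgba)
```

But as noted, this only works when you actually have a clean background shot, which the CLI here never asks for.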

The most consistent way would probably be an additional input from a depth sensor/camera, so there wouldn't be the requirement of a human or a specific object being recognized in the foreground. However, that would impose bigger hardware limitations on users of the software. ^^'
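For completeness, the depth idea boils down to thresholding a depth map. A toy sketch, assuming the depth arrives as a NumPy array aligned with the RGB frame (the function name and the 1.5 m cut-off are just illustrative):

```python
import numpy as np

def foreground_mask_from_depth(depth_m: np.ndarray, cutoff_m: float = 1.5) -> np.ndarray:
    """Mark everything closer than cutoff_m metres as foreground (255), the rest as background (0)."""
    return np.where(depth_m < cutoff_m, 255, 0).astype(np.uint8)
```

No object recognition needed, but it does require the extra hardware.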

woctezuma commented 3 years ago

Just check the code. The gist of the algorithm is xuebinqin/U-2-Net.

https://github.com/nadermx/backgroundremover/blob/5cfa5bdc5557fb1b56b610a6f380250a114c201c/src/backgroundremover/bg.py#L183-L189


  1. a segmentation mask is predicted with a neural network:

The first models (u2netp and u2net) are for salient object detection, and the other (u2net_human_seg) is for human segmentation.

https://github.com/nadermx/backgroundremover/blob/5cfa5bdc5557fb1b56b610a6f380250a114c201c/src/backgroundremover/u2net/detect.py#L20-L30


  2. then alpha-matting cut-out is performed (rough sketch after the links below) using:

https://github.com/nadermx/backgroundremover/blob/5cfa5bdc5557fb1b56b610a6f380250a114c201c/src/backgroundremover/bg.py#L158-L161

https://github.com/nadermx/backgroundremover/blob/5cfa5bdc5557fb1b56b610a6f380250a114c201c/src/backgroundremover/bg.py#L132-L133

https://github.com/nadermx/backgroundremover/blob/5cfa5bdc5557fb1b56b610a6f380250a114c201c/src/backgroundremover/bg.py#L7
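To make the second step concrete: given the mask from step 1 (the U-2-Net output, rescaled to the image size as an 8-bit map), the cut-out is basically trimap construction plus closed-form alpha matting. Here is a rough sketch using the pymatting package; this is my own illustration rather than the project's actual code, and the 240/10 thresholds are just illustrative defaults, not necessarily what the linked lines use:

```python
import numpy as np
from PIL import Image
from pymatting.alpha.estimate_alpha_cf import estimate_alpha_cf
from pymatting.foreground.estimate_foreground_ml import estimate_foreground_ml
from pymatting.util.util import stack_images

def alpha_matting_cutout_sketch(img: Image.Image, mask: Image.Image,
                                fg_threshold: int = 240, bg_threshold: int = 10) -> Image.Image:
    """mask is the 8-bit saliency map from the U-2-Net step, same size as img."""
    img_arr = np.asarray(img.convert("RGB"))
    mask_arr = np.asarray(mask)

    # Build a trimap: pixels the network is very sure about become hard
    # foreground (255) / background (0); everything else is "unknown" (128).
    trimap = np.full(mask_arr.shape, 128, dtype=np.uint8)
    trimap[mask_arr >= fg_threshold] = 255
    trimap[mask_arr <= bg_threshold] = 0

    # Closed-form alpha matting resolves the unknown band into a soft alpha
    # channel, then foreground colours are estimated and stacked into RGBA.
    img_norm = img_arr / 255.0
    alpha = estimate_alpha_cf(img_norm, trimap / 255.0)
    foreground = estimate_foreground_ml(img_norm, alpha)
    rgba = stack_images(foreground, alpha)
    return Image.fromarray(np.clip(rgba * 255, 0, 255).astype(np.uint8))
```

The matting step is what preserves hair and other soft edges that a hard threshold on the network output would chop off.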


On a side-note, the README was updated 3 days ago (https://github.com/nadermx/backgroundremover/commit/5cfa5bdc5557fb1b56b610a6f380250a114c201c) with a link to a little blogpost mentioning U2-Net, posted two days before the OP asked the question. So the information was already available through this other means, just not as easily as it is now.


Finally, I will mention a competitor to U2-Net, which is MODNet. I don't know which performs better.

nadermx commented 3 years ago

Thank you @woctezuma. I will also add that the video part comes with a whimsical video made by https://github.com/ecsplendid, who is the developer whose code I modified to get it working here; it all came from this issue in the rembg library.

nadermx commented 3 years ago

@woctezuma I'll also put it on the todo list to make it possible to import different models.