mikebilly opened this issue 1 year ago
The fastest way with the least effort is: train normally and crop the model afterwards.
I want to do this automatically. I'm thinking of removing the background from the input images first and then giving them to gaussian-splatting.
I think the best way is: 1) do background segmentation on the 2D images and represent the result as RGBA or a mask; 2) modify GS into a 'GS++' that supports masks. The reconstructed result for 'object-reconstruction' should then be the best: clean.
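For step 1, a minimal sketch assuming the rembg library is used for the segmentation (any other segmentation or matting tool works as well; the folder names are placeholders, not from this thread):

import os
from PIL import Image
from rembg import remove  # assumption: pip install rembg

src_dir, dst_dir = "images", "images_rgba"  # hypothetical folder names
os.makedirs(dst_dir, exist_ok=True)
for name in os.listdir(src_dir):
    with Image.open(os.path.join(src_dir, name)) as img:
        rgba = remove(img.convert("RGB"))  # returns an RGBA image whose background alpha is 0
        rgba.save(os.path.join(dst_dir, os.path.splitext(name)[0] + ".png"))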
@yuedajiong I tried background segmentation on the 2D images and created RGBA images, but Gaussian Splatting just ignores the alpha channel. The output .ply still includes the background. I used the following code:
import os
from PIL import Image

# original_path, transparent_path, input_folder and base_filename are set elsewhere in the script
original_image = Image.open(original_path).convert("RGBA")
transparent_image = Image.open(transparent_path).convert("RGBA")
# Resize transparent image to match original image size
transparent_image = transparent_image.resize(original_image.size, Image.LANCZOS)
# Extract alpha channel from transparent image
alpha_channel = transparent_image.split()[-1]
# Combine RGB of original with alpha of transparent image
combined_image = Image.merge("RGBA", (*original_image.split()[:3], alpha_channel))
# Save combined image in 'input' folder
combined_image.save(os.path.join(input_folder, base_filename + '-masked.png'))
Could you please tell me how to make Gaussian-Splatting detect RGBA? And how would I modify GS into GS++ to support masks and do object reconstruction?
For masks: I use the COLMAP mask feature https://colmap.github.io/faq.html#mask-image-regions For HUD/UI-like elements it works fine. If you try to use it for a 'moving' mask you will have a lot of work (hint: you can use different software for this).
Steps: see the sketch below.
Still, in some cases leaving the background in is beneficial for the convert phase, because COLMAP uses parts of the background for tracking too. You will also get more pictures for the train phase (this is important for me ;).
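A sketch of those steps, under the assumption that RGBA images from a segmentation step already exist and that the COLMAP CLI is installed; folder names are placeholders. Per the linked FAQ entry, the mask for an image named 1.jpg must be stored as 1.jpg.png, and black (zero) pixels are excluded from feature extraction:

import os
from PIL import Image

image_dir, mask_dir = "input_rgba", "masks"  # hypothetical folders
os.makedirs(mask_dir, exist_ok=True)

# Convert each RGBA image into a binary mask named <image_name>.png,
# as COLMAP expects: white = use for features, black = ignore.
for name in os.listdir(image_dir):
    with Image.open(os.path.join(image_dir, name)) as img:
        alpha = img.convert("RGBA").split()[-1]
        mask = alpha.point(lambda a: 255 if a > 0 else 0)
        mask.save(os.path.join(mask_dir, name + ".png"))

# The mask folder is then passed to COLMAP's feature extractor, e.g.:
#   colmap feature_extractor --database_path db.db --image_path input_rgba \
#       --ImageReader.mask_path masks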
@jaco001 My ultimate goal is to create a 3D mesh of only the wanted object from 2D input pictures, through the command line. There are two problems that I'm stuck on. The first is separating the background from the object. The second is generating a 3D mesh.
For the first problem, I tried using an online website to remove the background of the images and merging the result into the original image to create an RGBA image. So, as you say, I would create a new folder called mask and save the corresponding mask for each image there, e.g. 1.png in the mask folder corresponds to the original image 1.png in the input folder. And the mask would have the object in white and the background in black, and the resulting .ply would have no background?
For the second problem, I'm still looking for a solution.
Hey, as some have said, the code doesn't support RGBA; it would have to be modified. https://github.com/graphdeco-inria/gaussian-splatting/issues/64#issuecomment-1658597573 BUT you can try masking the area outside the object with pure black. This has to be applied over the original images; the code doesn't support separate mask files.
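For that 'pure black' route, a small sketch, assuming RGBA images with background alpha 0 are already available (folder names are placeholders); it composites each image onto a black canvas and writes the result back as a normal RGB input image:

import os
from PIL import Image

src_dir, dst_dir = "input_rgba", "input"  # hypothetical folders
os.makedirs(dst_dir, exist_ok=True)
for name in os.listdir(src_dir):
    rgba = Image.open(os.path.join(src_dir, name)).convert("RGBA")
    alpha = rgba.split()[-1]
    black = Image.new("RGB", rgba.size, (0, 0, 0))  # pure black canvas
    out = Image.composite(rgba.convert("RGB"), black, alpha)  # object pixels kept, background black
    out.save(os.path.join(dst_dir, os.path.splitext(name)[0] + ".png"))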
This approach is not designed for creating meshes; other photogrammetry techniques are much better suited.
100%. GSs are tempting to use for reconstruction, but this is a dead end. Treat a GS as an 'almost model' + baked ray tracing on it (without the heavy ray tracing).
and the resulting .ply would have no background?
In short: the first .ply (after convert.py) - yes. The second one (point_cloud.ply) will still have some background, because the train algorithm adds its own points and colorizes them.
So we are back to the first answer -> train normally, remove afterwards <- less work.
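A sketch of that 'remove afterwards' step, assuming the plyfile package and a hand-picked axis-aligned bounding box around the object (the box limits and file paths are hypothetical):

import numpy as np
from plyfile import PlyData, PlyElement

ply = PlyData.read("point_cloud.ply")  # the trained Gaussian point cloud
v = ply["vertex"].data
xyz = np.stack([v["x"], v["y"], v["z"]], axis=1)

lo = np.array([-1.0, -1.0, -1.0])  # hypothetical box corners in scene units
hi = np.array([ 1.0,  1.0,  1.0])
keep = np.all((xyz >= lo) & (xyz <= hi), axis=1)

cropped = PlyElement.describe(v[keep], "vertex")
PlyData([cropped]).write("point_cloud_cropped.ply")

All per-Gaussian attributes (opacity, scale, SH coefficients) are stored as vertex properties, so the row filter carries them over unchanged.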
@jaco001 The thing is, I want to do batches: I have many sets of input pictures to create 3D models from, and I want to do this process automatically, without human labour.
Did you succeed with batching the images?
@mikebilly
If we want to really support masks in GS, we need to modify both the Python/PyTorch part and the C++/CUDA part.
I have not implemented it so far. :-(
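For the Python/PyTorch half, one common way to approach it (a sketch, not code from this repo; render, gt and mask are assumed to be tensors prepared per training view) is to mask the photometric loss so that background pixels contribute no gradient:

import torch

def masked_l1_loss(render, gt, mask):
    # render, gt: (3, H, W) tensors; mask: (1, H, W), 1 = object, 0 = background
    diff = torch.abs(render - gt) * mask
    return diff.sum() / (mask.sum() * render.shape[0] + 1e-8)

The C++/CUDA rasterizer side would still need changes if Gaussians should not be created or kept in fully masked-out regions at all.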
I'm trying to separate the background from the foreground by making the background transparent. I tried making the background transparent in the input images before giving them to gaussian-splatting, but it doesn't work. If I give it RGBA images where the background's alpha is 0, gaussian-splatting just ignores the alpha and still produces a model with a visible background. Can anyone help me?