Anttwo / SuGaR

[CVPR 2024] Official PyTorch implementation of SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
https://anttwo.github.io/sugar/

A really good issue! I did it using Windows! #66

Open kitmallet opened 8 months ago

kitmallet commented 8 months ago

It took about a week to figure this out, as I am not as savvy as most of you when it comes to computers. After many attempts at changing many of the Python scripts, I got it working and managed a small 39-image mesh (to save time) that was put through Gaussian Splatting and finally all the way to outputting a workable OBJ I could use in Blender. @Anttwo you did an amazing job! I did have to go through individually: train_coarse_density.py, extract_mesh.py, train_refined.py, and finally extract_refined_mesh_with_texture.py.

This is a small test, and with only 39 photos it does not even compare to what Meshroom can do, which I have specialized in for 4 years.

Here is what came out: [image: mountain]

Thanks again for everyone's help, and I am willing to help anyone else who is having issues on the Windows platform. Regards, Kit

cubantonystark commented 8 months ago

Hi @kitmallet, would you care to share a how-to for Windows, as well as what changes you made to the scripts to get such amazing results? Looking forward to your reply.

kitmallet commented 8 months ago

@cubantonystark Would love to; time to get some sleep now, and I will compile things tomorrow. It is pretty disorganized, and I would like to give you something that is clear and precise.

Anttwo commented 8 months ago

Hello @kitmallet,

Super cool! It's really nice to see that you managed to run the code on Windows with your GPU and achieve the same quality as what we get on Linux. I see that your mesh has ~136,000 vertices, so I suppose you used the low_poly settings (or a custom number of vertices)?

You seem to get results similar to the paper, so I suppose you haven't changed the hyperparameters. But if you did, I'm interested in knowing which ones you changed, as that can be useful for other users who have a similar GPU / memory budget.

Thank you so much, and looking forward to your reply!

Ryan-ZL-Lin commented 8 months ago

Hi @kitmallet, Happy New Year. It's amazing to see the 3DGS model converted to OBJ and displayed in Blender on Windows. I can't wait to see you share your experience; would you mind making a video showing how to set it up on Windows?

DiamondGlassDrill commented 8 months ago

@kitmallet I'm also very much interested in it. PS: Happy New Year!!!

kitmallet commented 8 months ago

@cubantonystark @Ryan-ZL-Lin @DiamondGlassDrill Just back from holidays, and I hope to have something to show in the next day. Happy New Year everyone! Best, Kit

yifanlu0227 commented 8 months ago

Is it an outdoor scene? The extracted mesh is excellent!

kitmallet commented 8 months ago

@yifanlu0227 @Anttwo Yes, it is an outdoor scene of a mountain. It was done with a small data set of under 40 pictures, downsized to about 1152x648. As with any process like photogrammetry, NeRFs, or Gaussian Splatting, it is very important to sharpen the pictures with photo-editing software; I used Topaz Gigapixel and Sharpen. The method of taking the pictures is quite different from photogrammetry, though. I used a small data set to speed up the process, as it took hundreds of passes to test each result. Fortunately I have two computers, although each only has a 12 GB RTX 3060. But I do have 128 GB of RAM, which does help at times.

Please understand that I only started learning Python less than 3 weeks ago, and I am a beginner at this. Since I did this on the Windows platform, I had to change the "/" vs. "\" path convention in several .py modules. I adjusted them in train_coarse_density.py, extract_mesh.py, train_refined.py, extract_refined_mesh_with_texture.py, and coarse_mesh.py. Because of the separator issue I also had to run each module separately. This was a good thing, though, as I could work through many of the issues that come with a smaller GPU, as well as various errors.

This is what my last output name looked like recently, as much of the information is contained in it: `Mesh save path: final_rock\sugarfine_3Dgs12000_densityestim1_sdfnorm1_level05_decim3000000_normalconsistency01_gaussperface1.obj`

I adjusted various parameters in each module, removing processes I did not want. Decimation, for example, I simplified from the three levels 0.1, 0.3, and 0.5 down to just 0.5. I also adjusted many of the iteration counts, like the default to 12,000, which seemed ideal to me, and I increased the output decimation target from 1,000,000 to 3,000,000.

I had to have my business partner, who has an RTX 4090, process extract_refined_mesh_with_texture.py, as my GPU and combined VRAM were not sufficient. I assume that for my needs I will need something on the order of an 80 GB A100 to get the results I would like.

In the process, I also realized that max_faces_per_bin needed to be increased from its default of 50_000. While the final result is still not ideal with a minimal dataset, it shows clearly that a great result can be produced. As a filmmaker, is this enough in terms of resolution? I am thinking of perhaps using the textures from my Meshroom output and baking them onto the Gaussian splat in Blender, as I usually have 10-15 16K PNG files from that.

@Anttwo amazing job again! My question for you: what is the largest texture output possible with SuGaR?

Also, if anyone would like more information, let me know. @cubantonystark I didn't forget about you. There needed to be some more fine-tuning because, as I said, I am a beginner at this.

Best, Kit

alexzhenglong commented 8 months ago

Thank you for your contribution. Could you please provide specific instructions on how to run the pipeline and view the mesh on Windows?

kitmallet commented 8 months ago

@cubantonystark @Ryan-ZL-Lin @yifanlu0227 @Anttwo @DiamondGlassDrill @alexzhenglong Windows steps for SuGaR reconstruction, from photos to Gaussian splat to OBJ.

Changes to .py files

Path separator changes ("/" → "\") in the `sugar_extractors` folder:

`coarse_mesh.py`

- Line 70: `args.mesh_output_dir = os.path.join("./output/coarse_mesh", args.scene_path.split("/")[-1])`
  → `args.mesh_output_dir = os.path.join("output\coarse_mesh", args.scene_path.split("/")[-1])`
- Line 72: `args.mesh_output_dir = os.path.join("./output/coarse_mesh", args.scene_path.split("/")[-2])`
  → `args.mesh_output_dir = os.path.join("output\coarse_mesh", args.scene_path.split("/")[-2])`

`refined_mesh.py`

- Line 37: `args.mesh_output_dir = os.path.join("./output/refined_mesh", args.scene_path.split("/")[-1])`
  → `args.mesh_output_dir = os.path.join("output\refined_mesh", args.scene_path.split("/")[-1])`

In the `sugar_trainers` folder:

`coarse_density.py`

- Line 233: `args.output_dir = os.path.join("./output/coarse", args.scene_path.split("/")[-1])`
  → `args.output_dir = os.path.join("output\coarse", args.scene_path.split("/")[-1])`
- Line 235: `args.output_dir = os.path.join("./output/coarse", args.scene_path.split("/")[-2])`
  → `args.output_dir = os.path.join("output\coarse", args.scene_path.split("/")[-2])`

`coarse_sdf.py`

- Line 229: `args.output_dir = os.path.join("./output/coarse", args.scene_path.split("/")[-1])`
  → `args.output_dir = os.path.join("output\coarse", args.scene_path.split("/")[-1])`
- Line 231: `args.output_dir = os.path.join("./output/coarse", args.scene_path.split("/")[-2])`
  → `args.output_dir = os.path.join("output\coarse", args.scene_path.split("/")[-2])`

`refine.py`

- Line 233: `args.output_dir = os.path.join("./output/coarse", args.scene_path.split("/")[-1])`
  → `args.output_dir = os.path.join("output\coarse", args.scene_path.split("/")[-1])`
- Line 235: `args.output_dir = os.path.join("./output/coarse", args.scene_path.split("/")[-2])`
  → `args.output_dir = os.path.join("output\coarse", args.scene_path.split("/")[-2])`
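As context for these edits, here is a small runnable sketch (the scene path is a made-up example) of how lines like these derive the scene name. Note that `os.path.join` already inserts the separator for the host OS, so the fragile parts are only the hard-coded `"/"` inside string literals and the `split("/")` calls:

```python
import os

# Hypothetical scene path, written Linux-style with a trailing slash
scene_path = "./data/mountain/"

# With a trailing slash, the scene name is the second-to-last split element,
# which is why some lines use [-2] instead of [-1]
scene_name = scene_path.split("/")[-2]

# os.path.join emits "/" on Linux and "\" on Windows automatically
output_dir = os.path.join("output", "coarse_mesh", scene_name)
print(scene_name)   # mountain
print(output_dir)
```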

Six Steps to OBJ Reconstruction

All these steps need to be done one by one, as I could not get everything working on Windows with the single train.py. There are six .py steps to create a final OBJ after the data folder is put into SuGaR:

1. `convert.py`
2. `train.py`
3. `train_coarse_density.py` (or `train_coarse_sdf.py`)
4. `extract_mesh.py`
5. `train_refined.py`
6. `extract_refined_mesh_with_texture.py`
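Put together, the step-by-step run might look roughly like this. This is only a sketch of the ordering: the scene folder is a placeholder, the real flags come from each script's `--help`, and the loop just echoes the plan rather than invoking the scripts.

```shell
# Placeholder scene folder; substitute your own data directory
SCENE=data/mountain

for step in \
  "convert.py -s $SCENE" \
  "train.py -s $SCENE" \
  "train_coarse_density.py (or train_coarse_sdf.py)" \
  "extract_mesh.py" \
  "train_refined.py" \
  "extract_refined_mesh_with_texture.py"
do
  echo "python $step"   # replace echo with the real invocation and flags
done
```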

I worked in the film industry as a stuntman for many years, so I wanted to do something fun for a scan! Enjoy, Kit

[image: zombietest]

OuOu2021 commented 8 months ago

I made it too! Since the author used string concatenation and hard-coded "/" characters to handle paths, rather than cross-platform constructs, slashes and backslashes end up mixed on Windows (like "./output/refined_mesh\playroom"), resulting in exceptions from str.split('/'). Instead of changing all the slashes in the scripts, I replaced str.split('/') with re.split(r"[\\/]", str) from the re module, which takes multiple separators (both "/" and "\") at the same time. It finally worked on Windows and should, in theory, still work on Linux. There might be better ways, but I'm also new to Python, so that is all I could do...

Anyway, @Anttwo amazing work! Thank you for your dedication.
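For reference, a minimal runnable sketch of this fix (the path value is just an illustrative example of the mixed separators described above):

```python
import re

# A mixed-separator path of the kind the scripts produce on Windows
path = "./output/refined_mesh\\playroom"

# str.split('/') cannot break on the backslash, so the last segment stays fused:
print(path.split('/'))           # ['.', 'output', 'refined_mesh\\playroom']

# A regex character class splits on either separator, on Windows and Linux alike:
print(re.split(r"[\\/]", path))  # ['.', 'output', 'refined_mesh', 'playroom']
```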

WN-Wolf commented 8 months ago

@kitmallet Hi, your result is amazing. Could you please share your dataset with us?

kitmallet commented 8 months ago

@WN-Wolf do you mean you want me to share the pictures I used? I can do that. Let me know. I can also tell you how I prepare my photos if you like.

citystrawman commented 7 months ago

`re.split(r"[\\/]", str)`

What does the "re" refer to? Thank you.

OuOu2021 commented 7 months ago

@citystrawman It refers to Python's built-in Regular Expression library. import re before using the function.

citystrawman commented 7 months ago

> @citystrawman It refers to Python's built-in Regular Expression library. import re before using the function.

So by replacing str.split('/') with re.split(r"[\\/]", str), you can now execute train.py on Windows without running the commands one by one as described by @kitmallet?

OuOu2021 commented 7 months ago

@citystrawman Yes. The places changed are the same, just handled by a different approach.

kitmallet commented 7 months ago

@citystrawman @OuOu2021 is right. A much easier method!! As I said, I am very new to this, and I did it in a basic way that I knew would work, but it took more time doing it step by step.

citystrawman commented 7 months ago

@OuOu2021 @kitmallet Thank you so much!

JiatengLiu commented 6 months ago

@kitmallet Hello, I have some questions about running the repo. Let me start with my configuration: Ubuntu 20.04, GeForce RTX 3090. I used the train scene in the tandt_db dataset for training and testing, but I get the very terrible result below. [image] Could you tell me the whole process to get the mesh (for example, the order in which the files are run)? Thanks :>

shadowlunaforever commented 5 months ago

Great job! I'm trying it and hope I can succeed!