MarcWangzhiru / SpeclatentGS

SpecGaussian with latent features: A high-quality modeling of the view-dependent appearance for 3D Gaussian Splatting

about Shiny Blender #2

Open 2693748650 opened 1 month ago

2693748650 commented 1 month ago

Hello, where can I find the Shiny Blender dataset?

MarcWangzhiru commented 1 month ago

Thank you for your interest. You can find Shiny Dataset at https://nex-mpi.github.io/.

2693748650 commented 1 month ago

Thanks for the guidance. Following it, I trained with `python train.py -s data/shiny/cd -m output/shiny/cd --eval` on a 3090 (24 GB). You wrote in the paper that you used a 4090 for training; why am I running out of GPU memory? Is something wrong?

2693748650 commented 1 month ago

[screenshot]

2693748650 commented 1 month ago

I only succeeded when I trained at 1/8 scale with `-r 8`. Is this normal?

MarcWangzhiru commented 1 month ago

Our model is slow to train, and most of the computational overhead comes from the view embedding. So our code moves the view-embedding computation into the Camera class, where it is precomputed before training. You could comment out the view-embedding code in the Camera class and compute it in train.py instead, although training might then take longer. Alternatively, you could train on highlight scenes with fewer images.
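The thread never shows the repository's actual view-embedding code, so here is a minimal numpy sketch, under assumptions, of why precomputing a per-pixel view embedding inside every Camera is memory-hungry: a sinusoidal encoding of view directions expands 3 direction channels into `3 * 2 * num_freqs` embedding channels per pixel, stored once per camera. The function name, frequency count, and shapes below are illustrative, not the repository's API.

```python
import numpy as np

def view_embedding(dirs, num_freqs=4):
    """Sinusoidal encoding of per-pixel view directions.

    dirs: (H, W, 3) array of unit view directions.
    Returns an (H, W, 3 * 2 * num_freqs) embedding.
    """
    freqs = 2.0 ** np.arange(num_freqs)            # (F,)
    scaled = dirs[..., None] * freqs               # (H, W, 3, F)
    emb = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return emb.reshape(*dirs.shape[:-1], -1)

# For a 1600x900 view, the embedding is 8x larger than the raw
# directions (3 channels -> 24); keeping one of these resident on
# the GPU for every camera in the scene is what exhausts 24 GB.
dirs = np.random.randn(900, 1600, 3)
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
emb = view_embedding(dirs)
print(emb.shape)  # (900, 1600, 24)
```

Commenting the embedding out of the Camera class and calling something like this per iteration in train.py trades that memory for recomputation time, which matches the slowdown the author warns about.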

MarcWangzhiru commented 1 month ago

[screenshot]

2693748650 commented 1 month ago

Hello, is there a detailed solution? I'd like to try to reproduce your code but have been unsuccessful, and I don't have a clue how to write the view-embedding code.

MarcWangzhiru commented 1 month ago

[screenshot]

MarcWangzhiru commented 1 month ago

You can simply add these two lines of code to train.py, after the splatting and before the decoding network. The view_embed here is the same as the preprocessing in the Camera class; you need to comment out the view_embed in the Camera class.
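The screenshot with the exact two lines is not legible in this transcript, so the following is only a hedged numpy sketch of the idea (every name and shape is a placeholder, not the repository's identifiers): compute the view embedding inside the training loop and concatenate it with the splatted latent feature map along the channel axis, right before the decoding network.

```python
import numpy as np

# Placeholder shapes: an F_lat-channel latent feature map comes out of
# the rasterizer; the decoder expects latent + view-embedding channels.
H, W, F_lat, F_view = 4, 6, 8, 24
splatted = np.zeros((F_lat, H, W))      # stand-in for the splatting output
view_embed = np.zeros((F_view, H, W))   # computed here instead of in Camera

# The gist of the "two lines": build view_embed on the fly, then
# concatenate along the channel axis before calling the decoder.
decoder_input = np.concatenate([splatted, view_embed], axis=0)
print(decoder_input.shape)  # (32, 4, 6)
```

The later exchange in this thread ("Did you concatenate them together?") suggests the channel-wise concatenation is the step that is easy to get wrong.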

MarcWangzhiru commented 1 month ago

Our experiments are conducted on 1/4-resolution images. For the Shiny Dataset, I remember most scenes are 1008×756, while the cd and lab scenes are 1600×900.

2693748650 commented 1 month ago

Thank you for your guidance

2693748650 commented 1 month ago

I have another question. I ran the experiments following your instructions (all at -r 4 resolution), but the resulting metrics do not match the paper, as shown in the screenshots below. [screenshots] Why is that?

MarcWangzhiru commented 1 month ago

This is a bit strange. Let me check it. Thank you for asking this question.

2693748650 commented 1 month ago

I trained with `python train.py -s data/360_v2/{scene} -m output/360_v2/{scene} --eval -r 4`.

MarcWangzhiru commented 1 month ago

Could you share your train.py code? I want to make sure it is still a view_embed problem. I'll test this code later.

2693748650 commented 1 month ago

[screenshot] I basically didn't change any of your GitHub code, just added the two lines as you suggested; the metrics are actually a little higher without those two lines. [screenshot]

MarcWangzhiru commented 1 month ago

Did you concatenate them together? [screenshot]

2693748650 commented 1 month ago

[screenshot]

MarcWangzhiru commented 1 month ago

That looks fine. Thank you, I will test the code later.

2693748650 commented 1 month ago

Sorry to bother you. Thanks.

MarcWangzhiru commented 1 month ago

I remember the original code can be trained on the food scene with a single 3090; you can try that. And I will test the code later.

MarcWangzhiru commented 1 month ago

This is strange. Here are the results of my tests with the original code; this is the 7000-iteration result for the food scene. Maybe you can test the food scene with the original code to determine whether there is a bug in your modified code. I might be able to help you better if you could provide your code. [screenshot]

MarcWangzhiru commented 1 month ago

I will provide modification instructions later in the README, thank you for your interest in our work.

2693748650 commented 1 month ago

[screenshot] Hello, this is my result after 7000 iterations of `python train.py -s data/shiny/food -m output/shiny/food --eval`. How can I send you the code?

MarcWangzhiru commented 1 month ago

You can just click on the article link and you will see my email address.

MarcWangzhiru commented 1 month ago

Just package your code and send it to my email. Looking forward to receiving it!

MarcWangzhiru commented 1 month ago

And is the quality of the images you render just as bad?

2693748650 commented 1 month ago

Okay, I've sent it, looking forward to your reply

wuweiyexiaomian commented 1 week ago

Hello, your code does not include the submodules. I tried using the submodules from 3D Gaussian Splatting, but they require modifications. So I would like to ask: will you be releasing your modified submodules?

MarcWangzhiru commented 1 week ago

I uploaded its CUDA rasterizer to my other repository: https://github.com/MarcWangzhiru/Feature-Gaussian-Splatting. You may need to download the glm third_party library and the original simple-knn submodule.

XipengY commented 1 week ago

It's an interesting work. Could you share the per-scene PSNR details on Shiny with us, as shown in https://github.com/MarcWangzhiru/SpeclatentGS/issues/2#issuecomment-2376826485? Thanks.

MarcWangzhiru commented 3 days ago

[screenshot] Our results should look like this. We downsample the images in the dataset 4x and run COLMAP again.
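As a hedged sketch of that preprocessing (the paths, the PNG extension, and the choice of ImageMagick are assumptions; the thread only says the images are downsampled 4x and COLMAP is rerun):

```shell
# Downsample every image 4x with ImageMagick (25% of each dimension),
# keeping the originals untouched in images/. Paths are placeholders.
mkdir -p data/shiny/cd/images_4
cp data/shiny/cd/images/*.png data/shiny/cd/images_4/
mogrify -resize 25% data/shiny/cd/images_4/*.png

# Rerun COLMAP's sparse reconstruction on the downsampled copies.
colmap feature_extractor --database_path data/shiny/cd/database.db \
    --image_path data/shiny/cd/images_4
colmap exhaustive_matcher --database_path data/shiny/cd/database.db
mkdir -p data/shiny/cd/sparse
colmap mapper --database_path data/shiny/cd/database.db \
    --image_path data/shiny/cd/images_4 \
    --output_path data/shiny/cd/sparse
```

Rerunning COLMAP on the downsampled images (rather than relying on the trainer's -r flag) means the camera intrinsics in the sparse model match the resolution that is actually trained on.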