Thank you for your interest. You can find the Shiny Dataset at https://nex-mpi.github.io/.
Thanks for the guidance. I trained with python train.py -s data/shiny/cd -m output/shiny/cd --eval on a 3090 (24 GB). You wrote in the paper that you used a 4090 for training; why does it say my GPU memory is not enough? Is something wrong?
I only succeeded when training at 1/8 scale with the input flag -r 8. Is this normal?
Our model is slow to train, and most of the computational overhead comes from the view embedding, so our code puts the view-embedding step in the Camera class and computes it before training. You could comment out the view-embedding code in the Camera class and compute it in train.py instead, although training would take longer. Alternatively, you could train on highlight scenes with a smaller number of images.
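As a rough, self-contained illustration of the trade-off being described, here is what a per-view embedding might compute. The encoding below is a generic sinusoidal frequency encoding of view directions; the repository's actual view_embed may differ.

```python
import math

def view_embed(dirs, num_freqs=4):
    """Frequency-encode a list of (x, y, z) view-direction vectors.

    Returns, per direction, the raw components plus sin/cos features
    at num_freqs octaves: 3 + 3 * 2 * num_freqs values in total.
    """
    embedded = []
    for d in dirs:
        feat = list(d)  # keep the raw direction components
        for k in range(num_freqs):
            freq = 2.0 ** k
            for c in d:
                feat.append(math.sin(freq * c))
                feat.append(math.cos(freq * c))
        embedded.append(feat)
    return embedded
```

Precomputing this once per camera (as the Camera class does) trades GPU memory for speed; computing it inside the training loop saves memory at the cost of longer training.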
Hello, is there a detailed solution? I'd like to try to reproduce your code, but I've been unsuccessful, and I have no idea how to write the view-embedding code.
You can simply add these two lines of code to train.py, after the splatting step and before the decoding network. The view_embed here is the same as the preprocessing in the Camera class; you then need to comment out the view_embed in the Camera class.
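A minimal sketch of where those two lines would sit, assuming the splatted output is a per-pixel feature map and the decoder takes channel-concatenated input. All names here are illustrative, not the repository's actual identifiers.

```python
def concat_view_embedding(splat_feats, view_embed):
    """Concatenate per-pixel view-embedding channels onto splatted features.

    splat_feats: H x W x C1 nested lists (output of the splatting step).
    view_embed:  H x W x C2 nested lists (the same values the Camera class
                 would otherwise have precomputed).
    Returns an H x W x (C1 + C2) feature map for the decoding network.
    """
    return [
        [list(sf) + list(ve) for sf, ve in zip(row_s, row_v)]
        for row_s, row_v in zip(splat_feats, view_embed)
    ]

# Schematic placement inside the training loop:
#   feats = splat(...)                                   # 1. splatting
#   feats = concat_view_embedding(feats, embed_for_cam)  # 2. the added lines
#   image = decoder(feats)                               # 3. decoding network
```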
Our experiments are conducted on 1/4-resolution images. For the Shiny Dataset, I remember most scenes are 1008×756; the cd and lab scenes are 1600×900.
Thank you for your guidance
I have another question for you. I ran the experiments after following your instructions (all at -r 4 resolution), but the resulting metrics do not match the paper, as shown in the figure below. Why is this?
This is a bit strange. Let me check it. Thank you for asking this question.
I trained on python train.py -s data/360_v2/{scene} -m output/360_v2/{scene} --eval -r 4.
Can you share the relevant part of your train.py? I want to make sure whether it is still a view_embed problem. I'll test the code later.
I basically didn't change any of your GitHub code, just added the two lines as you instructed. The metrics are actually a little higher without those two lines.
Did you concatenate them together?
That looks fine. Thank you, I will test the code later.
Sorry to bother you, and thanks.
I remember the original code can be trained on a single 3090 for the food scene; you can try that. And I will test the code later.
This is strange. Here are the results of my tests on the original code; this is the 7000-iteration result for the food scene. Maybe you can test the food scene with the original code to determine whether there is a bug in your modified code. I could help you better if you provided your code.
I will provide modification instructions later in the README, thank you for your interest in our work.
Hello, this is my result after 7000 iterations of python train.py -s data/shiny/food -m output/shiny/food --eval. How can I send you the code?
You can just click on the article link and you will see my email address.
Just package your code and send it to my email. Looking forward to receiving it!
And is the quality of the images you render just as bad?
Okay, I've sent it, looking forward to your reply
Hello, your code does not include the submodules directory. I tried using the submodules from 3D Gaussian Splatting, but they require modifications. So I would like to ask: will you be releasing your modified submodules?
I uploaded its CUDA rasterizer to my other repository: https://github.com/MarcWangzhiru/Feature-Gaussian-Splatting. You may need to download the glm third-party library and the original simple-knn submodule.
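A hedged setup sketch, assuming the rasterizer installs as a pip package like the original 3DGS submodules do. The glm path and the simple-knn URL are taken from the standard 3D Gaussian Splatting setup and may need adjusting; the script is guarded so it only runs when explicitly requested (it needs network access and CUDA).

```shell
if [ "${RUN_SETUP:-0}" = "1" ]; then
  git clone https://github.com/MarcWangzhiru/Feature-Gaussian-Splatting.git
  # glm is a header-only dependency of the CUDA rasterizer
  git clone https://github.com/g-truc/glm.git \
      Feature-Gaussian-Splatting/third_party/glm
  pip install ./Feature-Gaussian-Splatting
  # the original simple-knn submodule from 3D Gaussian Splatting
  pip install git+https://gitlab.inria.fr/bkerbl/simple-knn.git
  SETUP=done
else
  SETUP=skipped   # set RUN_SETUP=1 to actually clone and build
fi
echo "$SETUP"
```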
It's an interesting work. Could you show us the per-scene PSNR details on Shiny, as in https://github.com/MarcWangzhiru/SpeclatentGS/issues/2#issuecomment-2376826485? Thanks.
Our results should be these. We downsample the images in the dataset 4x and run Colmap again.
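A sketch of that preprocessing step, assuming ImageMagick for the downsampling and the standard COLMAP CLI pipeline; the paths and dataset layout are illustrative, and the script skips itself when the dataset or tools are not present.

```shell
SRC=data/shiny/cd/images       # original images (illustrative path)
DST=data/shiny/cd/images_4     # 1/4-resolution copies
DB=data/shiny/cd/database.db
if [ -d "$SRC" ] && command -v mogrify >/dev/null && command -v colmap >/dev/null; then
  mkdir -p "$DST"
  cp "$SRC"/* "$DST"/
  mogrify -resize 25% "$DST"/*   # 25% per side = 1/4 resolution
  colmap feature_extractor --database_path "$DB" --image_path "$DST"
  colmap exhaustive_matcher --database_path "$DB"
  mkdir -p data/shiny/cd/sparse
  colmap mapper --database_path "$DB" --image_path "$DST" \
      --output_path data/shiny/cd/sparse
  PREP=done
else
  PREP=skipped                   # dataset or tools not available
fi
echo "$PREP"
```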
Hello, where can I find the [Shiny Blender] dataset?