alievk / npbg

Neural Point-Based Graphics
MIT License

The result doesn't show any color #15

Closed KimMingyeol closed 3 years ago

KimMingyeol commented 3 years ago

Hi, I rendered the trained data from epoch 20 using viewer.py, but there's an issue. As shown in the images below, the result doesn't have any color. I think something went wrong in the descriptor fitting process. Is there any solution for this? Thanks.

(attached: Figure 1, Figure 2, Figure 3)

seva100 commented 3 years ago

Hi @KimMingyeol, can you please share with us your data (images and reconstruction) and commands you used to train and view the results?

KimMingyeol commented 3 years ago

Hi @seva100, I've attached a zip file on Google Drive containing:

1) original images (`images` folder)
2) undistorted images (the ones used for training)
3) cameras.xml
4) point_cloud.ply
5) scene.yaml
6) trained data (PointTexture, UNet, ...)

Google Drive: https://drive.google.com/file/d/1hesoVmz-yPpMdKnOvPDF0m-x5dLulz5L/view?usp=sharing

Though the trained data says epoch_1, I actually trained for over 30 epochs, and the result still showed only dark blue colors. I couldn't find the original epoch_30 checkpoints; I think I accidentally emptied the log folder, so I'm sending you the epoch_1 data instead (in fact, it's not too far from the epoch_30 results anyway).

I did not use the pretrained weights (net_ckpt: None); other than that, I used the default commands posted on GitHub.

Command used for training:

```
python train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names scene
```

Command used for viewing:

```
python viewer.py --config /home/alex/Codes/npbg_photosets/Train_Jigok2/scene.yaml --checkpoint data/logs/epoch1/checkpoints/PointTexture_stage_0_epoch_1_scene.pth --origin-view
```

Finally, regarding the point cloud: I simply used the script included in your npbg repository, so I doubt the problem is there. A solution as soon as possible would be much appreciated. Thanks.

seva100 commented 3 years ago

@KimMingyeol, I can confirm the issue. I tried checking if everything is correct by the procedure outlined here, and the renderings, which should contain XYZ points colored as RGB, are all completely black. Most likely, it means that something is wrong with the camera poses in cameras.xml. Please try exporting the cameras from Agisoft Metashape again.
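To illustrate the diagnostic described above: the sketch below (not the repo's script; `render_xyz_colors` and its arguments are my own naming, and it assumes world-to-camera extrinsics `R`, `t` plus a 3x3 intrinsics matrix `K`) projects the point cloud with one camera pose and colors each hit pixel by the point's normalized XYZ position. With a correct pose most points land inside the frame; with a wrong pose the rendering comes out black, exactly as observed here.

```python
import numpy as np

def render_xyz_colors(points, K, R, t, h, w):
    """Project a point cloud with one camera pose and splat each point's
    normalized XYZ position as an RGB color. A wrong pose puts the points
    behind the camera or outside the frame, so the image stays black."""
    cam = points @ R.T + t                  # world -> camera, (N, 3)
    in_front = cam[:, 2] > 0
    img = np.zeros((h, w, 3))
    if not in_front.any():
        return img, 0.0                     # everything behind the camera
    cam = cam[in_front]
    uv = cam @ K.T                          # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # color = point position scaled to [0, 1] over the cloud's bounding box
    lo, hi = points.min(0), points.max(0)
    rgb = (points[in_front] - lo) / (hi - lo + 1e-9)
    img[v[ok], u[ok]] = rgb[ok]
    return img, ok.mean()                   # fraction of points in-frame
```

A healthy scene should give a high in-frame fraction for every camera; near-zero fractions across all views point at bad poses or a misaligned cloud.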

In the meantime, I'm trying to reconstruct and train your scene on my side.

seva100 commented 3 years ago

Seems like either the cameras were wrong or the point cloud was not aligned with them. I've made a reconstruction based on your photographs, and NPBG trains pretty well with it:

https://user-images.githubusercontent.com/5861398/104465017-a67b9500-55c4-11eb-893c-caf2d3b21e08.mp4

I've trained it for 30 epochs on a point cloud with 10x fewer points (about 2 million points). To make the results better, you can:

Here is the folder with cameras, reconstruction, configs, and learned NPBG parameters: https://drive.google.com/drive/folders/1LbrbKZDI2yFJUfnKoQ4fVENLTM4XbZcO?usp=sharing
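For reference, the 10x subsampling mentioned above can be as simple as a uniform random draw over the point array. This is just a sketch with my own naming (`subsample`), assuming the points and per-point colors have already been loaded from the .ply into NumPy arrays:

```python
import numpy as np

def subsample(points, colors, factor=10, seed=0):
    """Randomly keep 1/factor of the points (and their matching colors).
    A fixed seed makes the subsampled cloud reproducible across runs."""
    rng = np.random.default_rng(seed)
    n = len(points)
    idx = rng.choice(n, size=n // factor, replace=False)
    return points[idx], colors[idx]
```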

KimMingyeol commented 3 years ago

I plotted the point cloud's projection for each camera pose with pyplot and confirmed that the camera poses were inconsistent with the corresponding images. I'll retrain with your cameras.xml file. Thanks for your help! I'll close this issue.
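For anyone hitting the same problem: the per-camera transforms used in such a projection check can be read out of the Metashape cameras.xml export. A minimal parsing sketch, assuming the usual Metashape layout where each `<camera>` element carries a row-major 4x4 camera-to-world `<transform>` (the function name is mine, not from the repo):

```python
import xml.etree.ElementTree as ET
import numpy as np

def load_metashape_poses(xml_path):
    """Parse per-camera 4x4 transforms from a Metashape cameras.xml.
    Cameras without a solved pose (no <transform> child) are skipped."""
    poses = {}
    root = ET.parse(xml_path).getroot()
    for cam in root.iter('camera'):
        tr = cam.find('transform')
        if tr is None or tr.text is None:
            continue
        # 16 space-separated values, row-major camera-to-world matrix
        M = np.array(tr.text.split(), dtype=float).reshape(4, 4)
        poses[cam.get('label')] = M
    return poses
```

Inverting each matrix gives the world-to-camera pose needed to project the cloud into that view.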