Points2NeRF

Using Own Dataset #4

Open sheshap opened 1 year ago

sheshap commented 1 year ago

Hi

Very Interesting Work.

I have a dataset with points and colors stored as .txt files in the format X,Y,Z,R,G,B.

Could you please suggest how to use your method with it?

Thanks in advance

Ideefixze commented 1 year ago

Hi,

do you want to use it for training? If so, you will also need images and camera pose matrices. To prepare a training dataset, see and use: https://github.com/gmum/points2nerf/blob/main/dataset_generation_scripts/generate.py

An additional description is in the main README.md.

If it's only for inference, you will probably need to load it directly in code and use https://github.com/gmum/points2nerf/blob/main/utils.py#L49

It takes entry["data"], which is a point cloud of 2048 points (X, Y, Z, R, G, B), and returns a code from which the hypernetwork generates a NeRF. Your data would need to be very similar (in scale and shape) to the ShapeNet objects used by the existing pre-trained models (car, plane, or chair).
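For reference, converting one of those X,Y,Z,R,G,B text files into a fixed 2048-point cloud might look like the sketch below. `load_cloud_txt` is a hypothetical helper, not part of this repo, and the normalization to a unit-scale, centered object is an assumption intended to roughly match ShapeNet-like scale:

```python
import numpy as np

def load_cloud_txt(path, n_points=2048, seed=0):
    """Load an X,Y,Z,R,G,B text file and sample a fixed-size point cloud."""
    data = np.loadtxt(path, delimiter=",")  # expected shape (N, 6)
    rng = np.random.default_rng(seed)
    # Sample exactly n_points rows (with replacement only if the cloud is small).
    idx = rng.choice(len(data), size=n_points, replace=len(data) < n_points)
    cloud = data[idx].astype(np.float32)
    # Center and scale XYZ so the object fits a unit cube around the origin.
    xyz = cloud[:, :3]
    xyz -= xyz.mean(axis=0)
    xyz /= np.abs(xyz).max()
    cloud[:, :3] = xyz
    # Scale colors to [0, 1] if they were stored as 0-255 values.
    if cloud[:, 3:].max() > 1.0:
        cloud[:, 3:] /= 255.0
    return cloud
```

Whether this scale actually matches the pre-trained models should be checked against the objects in ds.zip.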

sheshap commented 1 year ago

Hi,

Yes, I would like to use it for training (my dataset has different classes).

I am not sure if I have camera pose matrices, but I do have intrinsic matrices.

Thanks

sheshap commented 1 year ago

My mobile camera's extrinsic matrix is the same for all images.

[image: screenshot of the extrinsic matrix]

Could you please confirm whether the extrinsic matrix needs to be different for each image?

Ideefixze commented 1 year ago

Yes, it should be different if the images are taken from different angles. Otherwise NeRF would have no idea from which position it is looking at the object, so you will need to change your dataset.
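To illustrate what "different extrinsics per view" means, the sketch below generates one camera-to-world pose per image for views orbiting the object. `orbit_pose` is a hypothetical helper, and the convention used (camera -z axis looking at the origin, z-up world) is an assumption that may need adjusting to match the repo's pose format:

```python
import numpy as np

def orbit_pose(theta, radius=2.0, height=0.5):
    """Camera-to-world 4x4 pose looking at the origin from a point on an orbit."""
    eye = np.array([radius * np.cos(theta), radius * np.sin(theta), height])
    forward = eye / np.linalg.norm(eye)            # camera -z points at the origin
    right = np.cross(np.array([0.0, 0.0, 1.0]), forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    pose = np.eye(4)
    pose[:3, :3] = np.stack([right, up, forward], axis=1)  # rotation columns
    pose[:3, 3] = eye                                       # camera position
    return pose

# One distinct extrinsic per image, e.g. 100 views around the object.
poses = [orbit_pose(t) for t in np.linspace(0, 2 * np.pi, 100, endpoint=False)]
```

The key point is that the translation and rotation differ per view; a single shared extrinsic gives NeRF no parallax to learn from.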

shuyueW1991 commented 1 year ago

Hi, I am also interested in the point clouds. I am wondering if there are corresponding car/chair/plane point clouds. Right now the ds.zip package contains only the sampled .npz files, and the 'shapenet' directory is empty.

Ideefixze commented 1 year ago

@shuyueW1991

In the .npz files you should have, for each object: the point cloud, images, and poses. The shapenet directory, which is needed for generating data or calculating metrics, needs to be downloaded from the official ShapeNet source.
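If in doubt about what a given .npz entry contains, you can list its arrays directly. A small sketch; `inspect_entry` is a hypothetical helper, and only the "data" key is confirmed earlier in this thread, so the other key names may differ:

```python
import numpy as np

def inspect_entry(path):
    """Print the arrays stored in one dataset .npz entry and return their names."""
    entry = np.load(path)
    for key in entry.files:           # e.g. "data" for the 2048-point cloud
        print(key, entry[key].shape)
    return entry.files
```

Running this on one object from ds.zip should show the point cloud alongside the image and pose arrays.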