marcopesavento / Super-resolution-3D-Human-Shape-from-a-Single-Low-Resolution-Image


data loading is slow #4

Open Oceanliuruizhi opened 1 year ago

Oceanliuruizhi commented 1 year ago

Hi, thank you for your wonderful contribution.

I recently tried to retrain the model, but the data loader appears to be very slow. I suspect the sampling procedure is the time-consuming part. However, the problem was not resolved after I followed PIFu's instructions to install Pyembree. I am now a bit confused about what could be causing the slow data loading.

I would appreciate any hints you might have about this problem. Thank you.

marcopesavento commented 1 year ago

PIFu's instructions to install Pyembree are meant to speed up the rendering of the dataset, so they are not related to the training of the network.

You are right: the data loader is slow because of the sampling procedure, which samples a new set of points at every epoch.

A trick to speed it up is to sample the points in a pre-processing step (which does not require a GPU) and save them, together with the corresponding ground-truth labels, in an npz file. Then, during training, you can directly load the npz file that contains the samples and the ground-truth labels. Just make sure to perform a different sampling for every epoch, as in the sketch below.
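For illustration only, here is a rough sketch of that pre-processing (not the repository's actual code). It assumes the meshes are watertight OBJ files handled with trimesh, and the file names, `presample` helper, `num_points`, and `sigma` values are all hypothetical:

```python
# Sketch: cache PIFu-style query points and occupancy labels per subject,
# one .npz file per epoch, so the training data loader only reads files.
import numpy as np
import trimesh

def presample(mesh_path, out_path, num_points=5000, sigma=0.05):
    # assumes the file contains a single watertight mesh
    mesh = trimesh.load(mesh_path, force='mesh')
    # sample points on the surface and perturb them with Gaussian noise
    surface_pts, _ = trimesh.sample.sample_surface(mesh, num_points)
    samples = surface_pts + np.random.normal(scale=sigma, size=surface_pts.shape)
    # ground-truth occupancy: True if a query point lies inside the mesh
    # (mesh.contains is much faster when pyembree is installed)
    labels = mesh.contains(samples)
    np.savez_compressed(out_path,
                        samples=samples.astype(np.float32),
                        labels=labels.astype(np.float32))

# hypothetical values: one file per subject per epoch, so every epoch
# trains on a different sampling
num_epochs = 10
subjects = ['subject_0001.obj']
for epoch in range(num_epochs):
    for mesh_path in subjects:
        presample(mesh_path, f'{mesh_path[:-4]}_epoch{epoch:02d}.npz')

# in the training data loader, reading the cached file replaces the sampling:
# data = np.load(f'{mesh_path[:-4]}_epoch{epoch:02d}.npz')
# samples, labels = data['samples'], data['labels']
```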

Hope this is all clear.

Oceanliuruizhi commented 1 year ago

Thank you for your response. I followed your suggestions and trained the model. At least as far as I can tell from the training and validation losses, the model was trained properly.

May I kindly ask how I should evaluate the model in centimeters? I followed PIFuHD's code to calculate the Chamfer distance and point-to-surface (P2S) distance, but the results are quite small and do not look like centimeters at all.

Thank you in advance.
