liruihui / SP-GAN

MIT License

About Evaluation #2

Open AnjieCheng opened 3 years ago

AnjieCheng commented 3 years ago

Hi, thanks for releasing the code.

During training, SP-GAN uses a different data normalization approach (instance-wise normalization of each point set to fit a unit ball) compared to prior works such as PointFlow (zero mean per axis and unit variance globally). How is the data preprocessed during evaluation?

Would you release the evaluation code, or share how you preprocess/post-process the point cloud before evaluation?

Thank you!

liruihui commented 3 years ago

The only preprocessing is applied to the training point sets (normalizing each one to fit a unit ball), which is a common operation in most point-cloud analysis pipelines. No other processing is needed before evaluation.

The evaluation part is the same as PointFlow and latent GAN. https://github.com/stevenygd/PointFlow/tree/master/metrics
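For concreteness, the instance-wise unit-ball normalization discussed above can be sketched as follows (the function name is illustrative, not taken from the SP-GAN codebase):

```python
import numpy as np

def normalize_to_unit_ball(points):
    """Instance-wise normalization: center a single point set (N, 3) at the
    origin and scale it so the farthest point lies on the unit sphere."""
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale
```

Note that this is a per-shape operation, unlike PointFlow's normalization, which uses statistics computed across the dataset.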

AnjieCheng commented 3 years ago

Thank you for responding!

Since the model is trained with normalized data, the generated point clouds are also expected to lie within the same normalized scale. If no other processing is done before evaluation, how can the evaluation metrics (e.g., Chamfer distance) be accurate? I see two possible solutions:

1. the test data is also preprocessed, i.e., normalized in the same way (to a unit ball), or
2. the generated point clouds are de-normalized back to the original scale.

https://github.com/stevenygd/PointFlow/blob/master/test.py#L114 As shown above, PointFlow denormalizes the generated shape back to the original scale before evaluation.

Please correct me if there is any misunderstanding. Thank you!
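The de-normalization step referenced at that line amounts to inverting the dataset normalization using its stored statistics. A minimal sketch, assuming the per-shape mean and scale used at normalization time are available (as they are in PointFlow's data loader):

```python
import numpy as np

def denormalize(sample, mean, std):
    """Map a generated sample from normalized coordinates back to the
    dataset's original scale by inverting (x - mean) / std."""
    return sample * std + mean
```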

CRISZJ commented 3 years ago

Hello, great work! I also have some questions about the metrics. The paper doesn't seem to mention a test set, so after training on the training set, I evaluated the generated results against the training set itself: 1000 generated shapes versus 6000 shapes from the training set. The results are:

COV: 9.08822661552
MMD: 8.834443055093288
JSD: 0.023300660444105503

As you can see, MMD is relatively close, but COV is far off. I would like to know where my calculation might be going wrong.
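For anyone cross-checking their numbers: COV (coverage) is typically defined as the fraction of reference shapes matched as the nearest neighbor of at least one generated shape under Chamfer distance. A brute-force sketch under that definition (the official implementation is in the PointFlow metrics repository linked above):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq. dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def coverage(gen, ref):
    """COV: fraction of reference shapes that are the Chamfer-nearest
    neighbor of at least one generated shape."""
    matched = set()
    for g in gen:
        matched.add(int(np.argmin([chamfer(g, r) for r in ref])))
    return len(matched) / len(ref)
```

A COV around 9% would mean almost all generated shapes collapse onto a few reference shapes, so it is worth verifying that both sets are at the same scale before the nearest-neighbor matching.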

TiankaiHang commented 2 years ago


Hi, have you got the number reported in the paper? @AnjieCheng