samxuxiang / BrepGen

[SIGGRAPH 2024] Official PyTorch Implementation of "BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry".

Evaluation Metrics #24

Open mingilikesmangos opened 1 month ago

mingilikesmangos commented 1 month ago

Hi, I am working on reproducing the quantitative results (Table 1) from the paper and have a couple of questions:

  1. On the validity of the final output

Following the current code flow, it seems that MMD, COV, and JSD are computed on point clouds sampled from the final B-reps. During this process, samples that fail to construct a "valid" B-rep appear to be filtered out by the try/except blocks in sample.py, at steps such as:

- 3-1: Detect shared vertices
- 3-4: Build the B-rep

(The counting pattern I have in mind is sketched below.)
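For concreteness, here is a minimal sketch of the filtering and counting I am assuming (my own guess; `build_brep`, `write_step`, and `generated_latents` are placeholder names I made up, not identifiers from sample.py):

```python
# Hedged sketch of the validity bookkeeping I have in mind -- NOT the repo's
# actual code. build_brep()/write_step() are hypothetical placeholders for
# steps 3-1 through 3-4 in sample.py and the STEP export, respectively.
n_total, n_valid = 0, 0
for i, latent in enumerate(generated_latents):  # hypothetical sample loop
    n_total += 1
    try:
        solid = build_brep(latent)           # may raise on degenerate geometry
        write_step(solid, f"sample_{i:05d}.step")
        n_valid += 1                         # counted only if nothing raised
    except Exception:
        continue                             # failed construction is filtered out
print(f"Valid ratio: {n_valid / n_total:.3f}")
```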

My question is: once a sample passes those two steps and its STEP/STL files are saved successfully, does that guarantee the files contain a "valid" B-rep? It would be great if you could clarify this, or share the specific code you use to evaluate the "Novel", "Unique", and "Valid" metrics.
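In case it is useful for the discussion, one explicit post-hoc check would be OCCT's `BRepCheck_Analyzer` via pythonocc-core (which the B-rep construction already relies on). Whether this is equivalent to the paper's "Valid" metric is exactly what I am unsure about, so treat this as a sketch under that assumption:

```python
# Hedged sketch: re-load a saved STEP file and run OCCT's built-in checker.
# Assumes pythonocc-core is installed; whether BRepCheck_Analyzer matches the
# paper's definition of "Valid" is an open question, not something the repo states.
from OCC.Core.STEPControl import STEPControl_Reader
from OCC.Core.BRepCheck import BRepCheck_Analyzer
from OCC.Core.IFSelect import IFSelect_RetDone

def is_valid_step(path: str) -> bool:
    reader = STEPControl_Reader()
    if reader.ReadFile(path) != IFSelect_RetDone:
        return False                       # file did not even parse
    reader.TransferRoots()                 # translate STEP entities to shapes
    shape = reader.OneShape()              # merge roots into a single shape
    return BRepCheck_Analyzer(shape).IsValid()  # OCCT topology/geometry checks
```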

  2. On the exact metric values

Using 3,000 samples generated from the ABC-pretrained checkpoint, I obtained the following results by running eval.sh:

{'avg-MMD-CD': 0.012816025968641042, 'avg-COV-CD': 0.6104999959468842, 'avg-JSD': 0.010387567029045464}

Does Table 1 in the paper report these values multiplied by 100?
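For reference, the definitions I assumed when reading these numbers are the standard MMD-CD / COV-CD from the point-cloud generation literature; this is my own reconstruction, not the repo's eval.sh code:

```python
# Hedged sketch of MMD-CD and COV-CD as commonly defined in the point-cloud
# generation literature -- my reconstruction, not the repo's eval code.
import numpy as np
from scipy.spatial import cKDTree

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two (N, 3) point clouds."""
    d_ab, _ = cKDTree(b).query(a)   # nearest neighbor in b for each point of a
    d_ba, _ = cKDTree(a).query(b)   # nearest neighbor in a for each point of b
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())

def mmd_cov(gen: list[np.ndarray], ref: list[np.ndarray]) -> tuple[float, float]:
    # Pairwise Chamfer distances: rows = generated samples, cols = references.
    D = np.array([[chamfer(g, r) for r in ref] for g in gen])
    mmd = float(D.min(axis=0).mean())   # avg over refs of distance to nearest gen
    cov = len(set(D.argmin(axis=1).tolist())) / len(ref)  # distinct refs matched
    return mmd, cov
```

Under these definitions, a plain x100 reading would give e.g. 0.0128 -> 1.28 for MMD-CD, but I would appreciate confirmation of the exact scaling used in Table 1.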

Thank you!

uanu2002 commented 3 weeks ago

Did you solve it? I have the same questions.