Closed XLechter closed 2 years ago
N is how many points are computed in parallel. You can change N to fit your CUDA memory.
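To illustrate the idea, here is a minimal sketch of evaluating a large point set in fixed-size chunks so that only N points are processed at once; the function name `eval_in_chunks`, the toy occupancy function, and the chunk size are all illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def eval_in_chunks(points, occ_fn, chunk_size=100_000):
    """Evaluate an occupancy function on many points, chunk_size at a time,
    so memory use is bounded by the chunk size rather than the full set."""
    outputs = []
    for start in range(0, len(points), chunk_size):
        outputs.append(occ_fn(points[start:start + chunk_size]))
    return np.concatenate(outputs)

# Toy example: 250k points, occupancy = inside a sphere of radius 0.4.
pts = np.random.rand(250_000, 3)
occ = eval_in_chunks(
    pts, lambda p: (np.linalg.norm(p - 0.5, axis=1) < 0.4).astype(np.float32)
)
print(occ.shape)  # (250000,)
```

Lowering `chunk_size` trades speed for a smaller peak memory footprint, which is the knob being discussed here.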
When sampling meshes, the number of points is usually very large, so I needed to change N to a smaller value, as @liuzhengzhe indicated.
In each training step, only a small number of points (1024 or 2048) is used for the inside/outside binary classification. This small subset is sampled from a larger set of N points. In data preprocessing we sampled N = 500k points (OccNet uses N = 100k). However, we found that N = 100k is enough to get reasonably good results.
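A minimal sketch of that two-stage scheme, assuming a precomputed pool of labeled points (the toy sphere labels and the variable names are placeholders, not the repository's preprocessing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Preprocessing (done once): sample a large pool of N points with
# inside/outside labels. 100k is the value the thread says suffices.
N = 100_000
pool_pts = rng.random((N, 3))
pool_occ = (np.linalg.norm(pool_pts - 0.5, axis=1) < 0.4).astype(np.float32)

# Each training step: draw a small random subset (e.g. 2048 points)
# from the pool for the binary classification loss.
batch = 2048
idx = rng.choice(N, size=batch, replace=False)
step_pts, step_occ = pool_pts[idx], pool_occ[idx]
print(step_pts.shape, step_occ.shape)  # (2048, 3) (2048,)
```

Drawing a fresh subset every step means the network eventually sees the whole pool without ever paying the memory cost of classifying all N points at once.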
I found that some parts of my code use symbols that differ from the paper, which might be confusing. However, you can always find the correct definitions in the paper.
Thanks a lot! @1zb @liuzhengzhe
Thanks for sharing the code. It is well organized and easy to read. I have some questions about the decoder stage. If I understand correctly, you extract T point features and centers at the encoder stage, then sample N points and interpolate their features to compute the logits at the decoder stage. So I wonder what the value of N is. Is N equal to 100,000, the same as Occupancy Networks? Thanks!