Some of the larger datasets out there (such as ShapeNet and ABC) have edge counts ranging from the hundreds to the hundreds of thousands.
Do you have any advice on selecting a `pool_res` and `ncf` for datasets with this sort of edge count variation? And/or any thoughts on the effect this variation would have on your research?
I responded to a similar question in this issue. Basically, I suggest preprocessing the data first, i.e., simplifying the meshes down to a consistent, manageable edge count before training. I just posted a very basic script to do this in Blender here.
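For reference, here is a minimal sketch of what such a Blender preprocessing step could look like, not the linked script itself: it imports an OBJ, triangulates, applies a collapse Decimate modifier toward a target face count (for a closed triangle mesh the edge count is roughly 1.5x the face count), and re-exports. The function name, paths, target count, and the 2.7x-era `bpy` calls (e.g. `scene.objects.active`, `import_scene.obj`) are assumptions and may need adjusting for newer Blender versions.

```python
# Illustrative sketch only (assumes a Blender 2.7x-era bpy API); not the
# author's posted script. Run headless, e.g.:
#   blender --background --python decimate.py -- in.obj out.obj 1000
import sys
import bpy

def decimate_to_target(in_path, out_path, target_faces=1000):
    # Start from an empty scene so only the imported mesh gets exported.
    bpy.ops.wm.read_factory_settings(use_empty=True)
    bpy.ops.import_scene.obj(filepath=in_path)

    obj = bpy.context.selected_objects[0]
    bpy.context.scene.objects.active = obj  # 2.8+: bpy.context.view_layer.objects.active

    # Triangulate first so the face/edge relationship is predictable
    # (edges ~ 1.5 * faces for a closed triangle mesh).
    tri = obj.modifiers.new(name="Triangulate", type='TRIANGULATE')
    bpy.ops.object.modifier_apply(modifier=tri.name)

    # Collapse-decimate by the ratio needed to reach the target face count.
    current_faces = len(obj.data.polygons)
    dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
    dec.ratio = min(1.0, target_faces / current_faces)
    bpy.ops.object.modifier_apply(modifier=dec.name)

    bpy.ops.export_scene.obj(filepath=out_path, use_selection=True)

if __name__ == "__main__":
    # Blender forwards script arguments after "--".
    argv = sys.argv[sys.argv.index("--") + 1:]
    decimate_to_target(argv[0], argv[1], int(argv[2]))
```

Once every mesh is simplified to roughly the same edge count, choosing `pool_res` and the input edge count becomes a single decision for the whole dataset rather than something driven by the largest meshes.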