ChrisWu1997 / PQ-NET

code for our CVPR 2020 paper "PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes"
MIT License

about improving the resolution of shape #12

Closed fredcool7 closed 3 years ago

fredcool7 commented 3 years ago

Hi ChrisWu, nice work! I have a question after reading your paper: the resolution of a part goes up to 64. What about increasing the resolution to 128 and beyond? Have you tested where the limit on resolution lies? Thanks, looking forward to your reply.

ChrisWu1997 commented 3 years ago

Hi! Thanks for taking interest in our work. In the paper, quantitative metrics are calculated under resolution 64, but the visual results mostly come from resolution 256. And yes, increasing resolution does improve the quality, as shown in Table 1 in the paper. I did try resolution 512 but didn't put those results in the paper, because it runs too slow.

As for the resolution limit you mentioned, do you mean computationally or in terms of quality? Increasing the resolution increases the computational cost cubically, and the quality gain from each further increase becomes smaller.
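The cubic growth is easy to see with a quick back-of-the-envelope calculation (a sketch of the counting argument, not code from this repository):

```python
# One occupancy query per cell of a dense voxel grid, so the number
# of decoder evaluations grows as res**3: doubling the resolution
# multiplies the query count by 8.
def num_query_points(res):
    return res ** 3

for res in (64, 128, 256, 512):
    factor = num_query_points(res) // num_query_points(64)
    print(f"res={res}: {num_query_points(res):,} queries ({factor}x vs. 64)")
```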

fredcool7 commented 3 years ago

OK, thanks. I had been thinking that since each query is just a classification, it might be able to handle high-resolution shapes without taking up a lot of memory.

ChrisWu1997 commented 3 years ago

Not likely, since you would have 8 times more points to query if multiplying the resolution by 2. So both the time and memory cost would increase (but of course there's a trade-off between them).

CRISZJ commented 3 years ago

Hello, I also want to ask: at resolution 512, roughly how much memory would be occupied?

ChrisWu1997 commented 3 years ago

Well, I cannot give a specific number, but I think at resolution 512 it's not possible to process all points in a single forward pass on a typical GPU like a Titan X.

CRISZJ commented 3 years ago

Thanks for your reply. I'd like to know whether this limitation comes from GPU memory or from computing performance?

ChrisWu1997 commented 3 years ago

I think the limitation is more on the algorithmic side. As I explained above, the number of points to query increases cubically with the resolution, since we use the naive policy of sampling every point in the voxel grid.

Of course, if the GPU memory is large enough, it's ok to process all 512^3 points in a single pass; but with limited GPU memory, we have to split all 512^3 points into several batches, process them one by one and finally combine them together.
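A minimal sketch of that chunked evaluation, assuming a hypothetical `decoder` function that maps query coordinates to occupancy values (a stand-in, not PQ-NET's actual network):

```python
import numpy as np

def query_in_batches(decoder, points, batch_size=65536):
    """Evaluate an implicit decoder over many query points in chunks.

    `decoder` maps an (N, 3) array of coordinates to (N,) occupancy
    values. Splitting the full point set into batches bounds the peak
    memory of any single forward pass, at the cost of more passes.
    """
    outputs = []
    for start in range(0, len(points), batch_size):
        chunk = points[start:start + batch_size]
        outputs.append(decoder(chunk))  # one forward pass per chunk
    return np.concatenate(outputs)

# Dense grid of query points (res=32 here to keep the demo small;
# the thread discusses 512, i.e. 512**3 points).
res = 32
lin = np.linspace(-0.5, 0.5, res)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1).reshape(-1, 3)

# Dummy stand-in decoder: occupancy of a sphere of radius 0.4.
sphere = lambda p: (np.linalg.norm(p, axis=1) < 0.4).astype(np.float32)
occ = query_in_batches(sphere, grid, batch_size=4096)
volume = occ.reshape(res, res, res)
```

The batched result is identical to a single full pass; only the peak memory per forward pass changes.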

CRISZJ commented 3 years ago

ok, thanks.