muhanzhang / pytorch_DGCNN

PyTorch implementation of DGCNN
MIT License

some issues about the code in pytorch_embeding.py #24

Closed OceanTangWei closed 3 years ago

OceanTangWei commented 5 years ago

Hello, I'd like to know why, in line 38, the stride is also set to sum(latent_dim). When I run the code, I get the following error: RuntimeError: Calculated padded input size per channel: (1). Kernel size: (5). Kernel size can't be greater than actual input size.
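For reference, a minimal standalone reproduction of this kind of error (with made-up channel counts, not the repository's actual values): PyTorch raises it whenever the sequence reaching a conv1d layer is shorter than that layer's kernel.

```python
import torch
import torch.nn as nn

# A later conv1d layer with kernel width 5 (illustrative channel sizes).
conv2 = nn.Conv1d(16, 32, kernel_size=5)

# Only 1 position left per channel, which is smaller than the kernel width.
too_short = torch.randn(1, 16, 1)

# RuntimeError: Calculated padded input size per channel: (1). Kernel size: (5).
# Kernel size can't be greater than actual input size
conv2(too_short)
```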

muhanzhang commented 5 years ago

Hi, I set the stride to sum(latent_dim) because in this line I flatten the sorted node embeddings into a 1 × (k · sum(latent_dim)) sequence, so with the stride equal to the kernel width, each convolution step reads exactly one node's embedding. If the kernel size is larger than the actual input size in your case, this should not happen in the first conv1d layer (its input always has length k times the kernel width), but it can happen in later conv1d layers once the sequence has been shortened. You can reduce the kernel width of the later conv1d layers, or increase k, to solve the issue.
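A minimal sketch of the shapes involved (illustrative numbers, not the repository's exact code): the first conv1d uses kernel size = stride = sum(latent_dim) on the flattened sequence, producing one output position per node, and the later conv1d with kernel width 5 only works if enough positions survive the pooling, i.e. if k // 2 >= 5.

```python
import torch
import torch.nn as nn

k = 30                      # SortPooling size (number of kept nodes); must satisfy k // 2 >= conv1d_kws[1]
sum_latent_dim = 97         # total size of one node's concatenated embedding
conv1d_channels = [16, 32]  # illustrative channel counts
conv1d_kws = [sum_latent_dim, 5]

# Sorted node embeddings flattened to a length-(k * sum_latent_dim) sequence.
x = torch.randn(1, 1, k * sum_latent_dim)

# kernel size == stride == sum(latent_dim): each step covers exactly one node.
conv1 = nn.Conv1d(1, conv1d_channels[0], conv1d_kws[0], stride=conv1d_kws[0])
pool = nn.MaxPool1d(2, 2)
conv2 = nn.Conv1d(conv1d_channels[0], conv1d_channels[1], conv1d_kws[1], stride=1)

h = conv1(x)   # shape: (1, 16, k)
h = pool(h)    # shape: (1, 16, k // 2)
h = conv2(h)   # needs k // 2 >= 5, otherwise the RuntimeError above is raised
print(h.shape) # (1, 32, k // 2 - conv1d_kws[1] + 1)
```

With k = 2 or 3 the pooled length drops to 1 and the second conv1d fails exactly as in the reported error; increasing k or shrinking conv1d_kws[1] avoids it.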