ranran0523 / SPECNN

Code repo for a paper accepted at ICML 2023
MIT License

interleaved encoded #2

Open czh-rot opened 9 months ago

czh-rot commented 9 months ago

Apologies for bothering you again. I would like to ask whether your code has been fixed and made public, mainly because I have some confusion about the description in your paper. If the code is not ready yet, could you help clarify my confusion directly? It comes from Figure 3 in the paper: for the HE-group Conv, the number of convolutional kernels seems to decrease. For example, compared with the general HE Conv, K13 and K14 disappear. I therefore find it hard to understand how the output of HE-group Conv is computed from this figure.

ranran0523 commented 9 months ago

Yes, the number of convolutional kernels decreases when using group-convolution layers. Group convolution works by cutting the connections from some input channels to some output channels. In a general convolution, kernels K13 and K14 connect input channel 1 to output channels 3 and 4; every output channel sums over all input channels (e.g. out3 = ch1xK13 + ch2xK23 + ch3xK33 + ch4xK43). With group convolution, input channels 1 & 2 form one group and channels 3 & 4 form another, so out3 = ch3xK33 + ch4xK43, and kernels such as K13 and K14 are no longer needed. You can also refer to the attached figure (group_convolution) to see how group convolution works.
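The kernel-count reduction and the out3 formula above can be sketched in plain NumPy. This is a minimal illustration with assumed shapes (4 input channels, 4 output channels, 3x3 kernels, 2 groups); it is not code from the SPECNN repo, and the helper names (`conv2d_valid`, `group_conv`) are hypothetical:

```python
import numpy as np

C_in, C_out, k, groups = 4, 4, 3, 2  # assumed sizes matching the Figure 3 example

# Kernel count: a standard conv needs C_out * C_in kernels (K11 .. K44 = 16);
# a group conv only needs C_out * (C_in / groups) kernels (8), since K13, K14,
# K23, K24 (and the symmetric ones) are cut.
std_kernels = C_out * C_in
grp_kernels = C_out * (C_in // groups)

x = np.random.rand(C_in, 8, 8)                    # input: 4 channels of 8x8
W = np.random.rand(C_out, C_in // groups, k, k)   # per-output-channel group weights

def conv2d_valid(ch, ker):
    """Single-channel 'valid' 2-D correlation (no padding, stride 1)."""
    H, Wd = ch.shape
    out = np.zeros((H - k + 1, Wd - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(ch[i:i+k, j:j+k] * ker)
    return out

def group_conv(x, W, groups):
    cpg = x.shape[0] // groups    # input channels per group
    opg = W.shape[0] // groups    # output channels per group
    outs = []
    for o in range(W.shape[0]):
        g = o // opg              # each output channel only sees its own group
        acc = sum(conv2d_valid(x[g * cpg + c], W[o, c]) for c in range(cpg))
        outs.append(acc)
    return np.stack(outs)

y = group_conv(x, W, groups)
print(std_kernels, grp_kernels)   # 16 8
print(y.shape)                    # (4, 6, 6)
```

Zeroing input channels 1 & 2 leaves output channels 3 & 4 unchanged, which is exactly the "out3 = ch3xK33 + ch4xK43" statement: those outputs have no connection to the first group.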

czh-rot commented 9 months ago

Thanks for the reply!