Hi,
Thanks for releasing the code for your work. I noticed the paper mentions that the conformer blocks are borrowed from CMGAN. In the CMGAN code, the multi-head attention uses a relative positional embedding, but I didn't see that in your code. Was there a concern or specific reason for not including positional embedding in the conformer?
Thanks
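For clarity, this is the kind of mechanism I mean: a Shaw-style relative positional bias added to the attention scores, where each query–key offset `j - i` gets its own learned embedding. This is only a minimal NumPy sketch (single head, no batching, hypothetical names), not CMGAN's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_relative_pe(q, k, v, rel_emb):
    """Single-head attention with a relative positional term.

    q, k, v  : (T, d) query/key/value matrices
    rel_emb  : (2*T - 1, d) one embedding per relative offset in [-(T-1), T-1]
    """
    T, d = q.shape
    content_scores = q @ k.T  # standard content-content term, shape (T, T)
    # Map each (i, j) pair to its offset j - i, shifted to index rel_emb
    offsets = np.arange(T)[None, :] - np.arange(T)[:, None] + (T - 1)
    # Content-position term: q_i dotted with the embedding of offset (j - i)
    pos_scores = np.einsum('id,ijd->ij', q, rel_emb[offsets])
    scores = (content_scores + pos_scores) / np.sqrt(d)
    return softmax(scores) @ v
```

With `rel_emb` set to zeros this reduces to plain scaled dot-product attention, which is essentially what an attention layer without any positional embedding computes.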