Hello! Your wonderful work inspired me a lot, especially the section "Efficient Implementation of Attention with Relative Positional Encoding". I would like to know whether the relative positional encoding you proposed can be applied to a 3D transformer. If so, could you give me some advice?