A PyTorch implementation of "MetaFormer: A Unified Meta Framework for Fine-Grained Recognition". A reference PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes".
I have a question about "linear embedding" and "non-linear embedding". #5
Thanks for all your great work!
I have two questions about the paper.
1. Are Figure 2 on page 4 and Figure 1 on page 10 of the paper referring to the same architecture?
2. The terms "non-linear embedding" and "linear embedding" are both used to describe embedding the meta-information, but if the two figures refer to the same architecture, what is the intention behind the different names? A neural network alternates linear transformations with activation functions that apply non-linear transformations. Is it correct to say that you call it a "non-linear embedding" because it uses a ReLU activation, which performs a non-linear transformation?
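For reference, here is a minimal sketch of how I currently understand the two variants. The module names, the meta-information dimension, and the embedding dimension are my own assumptions for illustration, not taken from this repository:

```python
import torch
import torch.nn as nn


class LinearMetaEmbedding(nn.Module):
    """My reading of 'linear embedding': a single linear projection
    of the meta-information vector (hypothetical sketch)."""

    def __init__(self, meta_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(meta_dim, embed_dim)

    def forward(self, meta: torch.Tensor) -> torch.Tensor:
        return self.proj(meta)


class NonLinearMetaEmbedding(nn.Module):
    """My reading of 'non-linear embedding': an MLP, i.e. a ReLU
    between two linear layers (hypothetical sketch)."""

    def __init__(self, meta_dim: int, embed_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(meta_dim, embed_dim),
            nn.ReLU(inplace=True),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, meta: torch.Tensor) -> torch.Tensor:
        return self.mlp(meta)


# Example: a batch of 8 meta vectors, each 7-dimensional (assumed sizes).
meta = torch.randn(8, 7)
print(LinearMetaEmbedding(7, 384)(meta).shape)     # torch.Size([8, 384])
print(NonLinearMetaEmbedding(7, 384)(meta).shape)  # torch.Size([8, 384])
```

Is this the distinction the paper intends, or does "non-linear embedding" refer to something else?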