AlexanderNutalapati opened 1 year ago

Hello!

I am trying to visualize the architecture of mind-vis down to the layer level for a better understanding, but I am having trouble finding a description of the architecture. The paper says the encoder depth is 24. Does that mean it has 24 layers? If so, where can I look up the input and output size of each layer? Do you have any general recommendations on comprehending the exact architecture of mind-vis?
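To be concrete about what I mean by per-layer sizes: the sketch below is the kind of dump I am hoping to produce. It is not taken from the mind-vis code; the toy `TransformerEncoder` is only a stand-in and the tensor sizes are made up, but the same forward hooks should work on the real encoder once it is instantiated from the repo.

```python
import torch
import torch.nn as nn

def shape_of(x):
    # Tensors get their shape; anything else (tuples, None, ...) just its type name.
    return tuple(x.shape) if isinstance(x, torch.Tensor) else type(x).__name__

def register_shape_hooks(model):
    # Attach a forward hook to every named sub-module so each call prints
    # the module's qualified name, class, and input/output shapes.
    handles = []
    for name, module in model.named_modules():
        if name == "":
            continue  # skip the root module itself
        def hook(mod, inputs, output, name=name):
            ins = [shape_of(t) for t in inputs]
            print(f"{name:35s} {type(mod).__name__:22s} in={ins} out={shape_of(output)}")
        handles.append(module.register_forward_hook(hook))
    return handles

# Stand-in encoder: "depth = 24" is modelled here as 24 stacked transformer blocks,
# which is the usual meaning of encoder depth in ViT/MAE-style models.
block = nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True)
encoder = nn.TransformerEncoder(block, num_layers=24)

handles = register_shape_hooks(encoder)
with torch.no_grad():
    encoder(torch.randn(1, 196, 1024))  # (batch, tokens, embed dim): made-up sizes
for h in handles:
    h.remove()
```

If mind-vis follows the usual ViT/MAE convention, a dump like this should show 24 repeated block entries plus their sub-layers, which is what I am trying to confirm.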
With regards, Alexander.

Okay, it seems I missed the "implementation" part in the paper. Sorry for bothering you. Still, if you could share some general recommendations on understanding mind-vis for a beginner in machine learning, I would appreciate it a lot.