Closed balvisio closed 2 years ago
Hi @balvisio,
The second layer(s) always had the number of classes we wanted to predict as the number of filters. So we did not do any flattening etc. but simply used the second CNN layer's output directly as the class prediction. You can find a code example of what the network looks like in this Colab, under "Network architecture for secondary structure prediction": https://colab.research.google.com/drive/1TUj-ayG3WO52n5N50S7KH9vtt6zRkdmj?usp=sharing#scrollTo=c5XqIyeNStZP
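For readers who don't want to open the Colab, a minimal sketch of the idea in PyTorch (assuming PyTorch; the embedding dimension `embed_dim=1024` is a hypothetical placeholder depending on the upstream language model, and padding/dropout details may differ from the actual notebook): the first convolution uses kernel size 7 with 32 filters, and the second convolution's filter count equals the number of classes, so its output is used per residue with no flattening.

```python
import torch
import torch.nn as nn


class SecStructCNN(nn.Module):
    """Two-layer per-residue CNN sketch.

    conv1: kernel size 7, 32 filters (as described in the paper).
    conv2: kernel size 7, `num_classes` filters, so the output at each
    sequence position is directly the class logits (no flattening).
    """

    def __init__(self, embed_dim: int = 1024, num_classes: int = 3):
        super().__init__()
        # padding=3 keeps the sequence length unchanged for kernel size 7.
        self.conv1 = nn.Conv1d(embed_dim, 32, kernel_size=7, padding=3)
        self.conv2 = nn.Conv1d(32, num_classes, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, embed_dim, seq_len) -> (batch, num_classes, seq_len)
        return self.conv2(torch.relu(self.conv1(x)))


# Usage: one network per task, e.g. num_classes=3 for the 3-state head
# and num_classes=8 for the 8-state head.
model = SecStructCNN(embed_dim=16, num_classes=3)
logits = model(torch.zeros(2, 16, 50))
print(logits.shape)  # one logit vector per residue
```

Because the last dimension is the sequence length, the same network handles proteins of any length, which is why no flatten/linear head is needed.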
Got it. Thanks!
Hi all,
In the paper it is explained that a CNN was chosen for the per-residue prediction. According to the paper, the first layer of the CNN used a kernel of size 7 and 32 filters. This layer was fed into a second convolutional layer with a kernel of size 7 (two different CNNs: one for the 3-state prediction and a different one for the 8-state prediction). I was wondering about the details of the network at this point:
Thanks again!