I am currently looking into transfer learning and reading the following paper: https://pubmed.ncbi.nlm.nih.gov/31138913/
"Kipoi offers a command to return and store the activation of a desired intermediate layer rather than the final, prediction layer. The transferred model can take those activations as input features instead of the original input. Since the intermediate layer can serve as a good feature extractor, this procedure can speed up the training process by multiple orders of magnitude without reducing performance."
Does "activation" here mean the transformed features produced by that particular layer? And is there an easy way to experiment with transfer learning using Kipoi?
The paper also mentioned that you can transfer the parameters to a new model and replace the final layer with a randomly initialized one. How is that done using Kipoi?
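To make my question concrete, here is a toy numpy sketch of my current understanding of what "activation" means, with everything (weights, layer sizes) made up for illustration and no Kipoi-specific API calls:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" 2-layer network; these weights are hypothetical
# stand-ins for a real Kipoi model's parameters.
W1 = rng.normal(size=(8, 16))   # input -> hidden layer
W2 = rng.normal(size=(16, 1))   # hidden -> original prediction head

def hidden_activation(x):
    # The "activation" of the intermediate layer: the input after it
    # has been transformed by everything up to that layer.
    return np.maximum(x @ W1, 0.0)  # ReLU hidden layer

x = rng.normal(size=(4, 8))      # batch of 4 inputs
feats = hidden_activation(x)     # stored features, shape (4, 16)

# Transfer learning as I understand it: keep W1 frozen, replace the
# final layer with a randomly initialized one, and train only that
# new head on the stored features.
W_new = rng.normal(size=(16, 1)) * 0.01
y_new = feats @ W_new            # predictions of the transferred model
print(feats.shape, y_new.shape)  # (4, 16) (4, 1)
```

Is this the right mental model, i.e. Kipoi would store `feats` for me and I would then fit a new head (or any downstream model) on those features?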