Closed: mikeizbicki closed this issue 5 years ago
We've proven that ReLU is continuous and does not increase pairwise distance (i.e., it is 1-Lipschitz). Such functions are morphisms in the category of metric spaces. We'd like to know which other activation functions are morphisms (e.g., convex ones).
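As a quick sanity check of the distance claim, here is a minimal numerical sketch (not from the repo; the function and variable names are my own) verifying that applying ReLU componentwise never increases the Euclidean distance between random pairs of points:

```python
import numpy as np

def relu(x):
    # componentwise max(x, 0)
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 10))
y = rng.normal(size=(1000, 10))

# pairwise distances before and after applying relu
d_before = np.linalg.norm(x - y, axis=1)
d_after = np.linalg.norm(relu(x) - relu(y), axis=1)

# relu is 1-Lipschitz in each coordinate, so distances can only shrink
assert np.all(d_after <= d_before + 1e-12)
```

This follows from |max(a,0) - max(b,0)| <= |a - b| holding in each coordinate, which then carries over to any p-norm.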
Still investigating what this means for holes in the data and how that affects learning.
I've created a folder called `paper` with a latex file in it. The latex file contains a lemma about how the relu function transforms the shape of data. In our meeting, we talked about how the relu function can change the shape of holes in the dataset; the lemma is an easier version of that problem, considering only how the relu function changes the distance between points. I want you to try to prove/disprove the lemma for next week.
Also, some extensions to think about are: