Open tamos opened 5 years ago
Indeed, this analogy between the human brain and computational models has generally seemed to be a source of insight rather than a literal correspondence. The terms ‘neural network’ and ‘perceptron’ appear to have emerged from that approach. But now we’re defining a ‘synapse’ as ‘both connection weights and bias terms’ (pg. 2). Why are we so committed to juxtaposing these biological and mathematical mechanisms and trying to develop a uniform set of terms across both?
This is a great philosophical question. It is possible that we can create ANNs of increasing sophistication that in no way resemble our own brains. However, I’m partial to the idea that there are very few ways in which networks can exhibit general intelligence, with our brains being one of the few examples. Thus, if we’re able to create increasingly sophisticated ANNs that exhibit increasing intelligence, they’ll increasingly resemble our own brains. This review (https://www.cell.com/neuron/fulltext/S0896-6273(17)30509-3) makes this point better than I ever could; I reference it tomorrow. Also, this paper (https://www.pnas.org/content/111/23/8619.short) presents a concrete example of this point: they showed that as ANNs became better at identifying visual objects, the pattern of activity in their hidden units came to better resemble the activity patterns we see in visual cortex.
Can you elaborate on what you feel are the limits of the analogy between neural networks and the human brain? Is it reasonable to expect a neural network to display human characteristics, such as resistance to catastrophic forgetting, or does the added value of ANNs lie in learning at scale in ways a human brain cannot?
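For readers unfamiliar with catastrophic forgetting, here is a minimal toy sketch of the phenomenon (my own illustration, not from the paper under discussion): a single linear classifier trained on task A, then trained sequentially on a conflicting task B, loses its performance on task A because the new gradients overwrite the old weights. The tasks, data, and hyperparameters are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=200):
    # Labels are determined by which side of a linear boundary each point falls on.
    X = rng.normal(size=(n, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, epochs=200):
    # Plain logistic-regression gradient descent.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

# Task A and task B have conflicting decision boundaries.
XA, yA = make_task(np.array([1.0, 1.0]))
XB, yB = make_task(np.array([-1.0, 1.0]))

w = np.zeros(2)
w = train(w, XA, yA)
acc_A_before = accuracy(w, XA, yA)  # high accuracy on task A

w = train(w, XB, yB)                # sequential training on task B only
acc_A_after = accuracy(w, XA, yA)   # task A performance collapses

print(f"task A accuracy before B: {acc_A_before:.2f}, after B: {acc_A_after:.2f}")
```

Humans, by contrast, can learn task B without wholesale erasure of task A, which is one concrete place where the brain/ANN analogy breaks down for standard gradient-trained networks.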