Hi, I recently watched your video on multilayered perceptrons and I had the following question:
Instead of adding new neurons to classify a non-linearly-separable problem (e.g. XOR), can we instead add more inputs?

If we go back to your previous video about perceptrons, your inputs are the coordinates x and y of a point. If we add x·y, the product of the two coordinates, as a third input, wouldn't that be enough to replace the "NAND" and "OR" neurons you use in your example? TensorFlow Playground is a nice tool for visualizing the effect of adding new inputs, layers, nodes, etc. to a classification problem.
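To make the idea concrete, here is a minimal sketch (my own toy example, not from the video, assuming binary inputs and a hard threshold): with x·y as a third input, the expression x + y - 2·x·y equals XOR exactly, so a single linear threshold unit separates the classes with no hidden layer.

```python
# Toy illustration: a single linear threshold unit classifies XOR
# once the product x*y is supplied as an extra input, because XOR
# becomes linearly separable in the lifted (x, y, x*y) feature space.

def perceptron_xor(x: int, y: int) -> int:
    # Hand-picked weights; a perceptron trained on (x, y, x*y) inputs
    # would converge to an equivalent separating hyperplane.
    w = (1.0, 1.0, -2.0)   # weights for x, y, and the product feature
    bias = -0.5
    activation = w[0] * x + w[1] * y + w[2] * (x * y) + bias
    return 1 if activation > 0 else 0

for x in (0, 1):
    for y in (0, 1):
        print(f"{x} XOR {y} = {perceptron_xor(x, y)}")
# Prints 0, 1, 1, 0 -- XOR with no hidden layer.
```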
This brings me to the real questions: what is the right way of building an NN? What's the closest we can get to a universal NN?