Open · ghost opened this issue 6 years ago
This happens in classification too. I might be wrong, but there seems to only be an output bias for positive-only activations (ReLU, sigmoid). The fact that the output bias is hidden was extremely confusing to me: I couldn't understand where the negative area was coming from when all of the nodes had ReLU activations. It makes it impossible to manually calculate the results in order to better understand how the neural network is working. It can also cause confusion when you try to replicate an earlier setup and find that it is impossible: without being able to set the output bias, the network will produce different results. I think it would save a lot of confusion if the output bias were visible and editable.
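To illustrate the manual calculation going wrong: the numbers below are made up, and the plain weighted-sum readout is my simplification of the playground's output node, but it shows how all-positive ReLU activations and weights can still land in the negative (orange) region once the hidden bias is included:

```typescript
// Hypothetical numbers, assuming the output node computes a weighted sum
// of the last hidden layer plus its own (hidden) bias.
const hidden: number[] = [0.0, 0.4, 0.6];  // ReLU activations, all >= 0
const weights: number[] = [0.3, 0.5, 0.2]; // all positive
const outputBias = -0.5;                   // invisible in the UI

// Weighted sum without the bias: 0.32 -> you'd expect blue (positive).
const sumNoBias = hidden.reduce((acc, h, i) => acc + h * weights[i], 0);

// With the hidden bias: 0.32 - 0.5 = -0.18 -> actually renders orange.
const sumWithBias = sumNoBias + outputBias;

console.log(sumNoBias, sumWithBias); // 0.32 -0.18
```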
Please, I spent hours trying to figure out why the top-left corner of the output was classified as orange when 2 of the 3 previous-layer neurons were classifying that same point as blue (with all the same positive weights), until I figured out it was the hidden output bias!!
Same confusion here about the output bias's effect on classification. I spent quite some time trying to figure out strange ReLU-based output before realizing that the output node has its own bias.
I've hacked in a JavaScript component that exposes the output bias configuration in the forked commit here. It isn't formatted nicely at all, but it works for those who would like to build and run locally. The configurable output bias appears as a white box just to the right of the features layer.
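For reference, the core of the change is tiny. This is only a sketch of the idea, not the actual diff; the `network` shape and `updateUI` hook are my approximations of the playground's internals:

```typescript
// Sketch only: assumes the playground stores the network as Node[][] with a
// single output node in the last layer, and that some redraw hook recomputes
// the heatmap after the bias changes. Names are approximate.
interface Node { bias: number; }

declare let network: Node[][];     // the playground's network (assumed shape)
declare function updateUI(): void; // assumed redraw hook

function setOutputBias(value: number): void {
  const outputNode = network[network.length - 1][0];
  outputNode.bias = value;
  updateUI(); // recompute and redraw the heatmap with the new bias
}
```

The white box in the fork just binds an input element to a setter like this.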
If I'm not mistaken, the output node that is used to generate the heatmap also has a bias. It would be nice to be able to view that bias in the visualization, similar to how you can view the bias of the nodes in the hidden layers. Otherwise, it is confusing when a regression network with all-positive weights and biases (no orange in the last hidden layer's output visualizations) using ReLU suddenly produces an area of strongly orange output on the heatmap.