libfann / fann

Official github repository for Fast Artificial Neural Network Library (FANN)
GNU Lesser General Public License v2.1

Feature request: Programmatic evaluation of output neurons / Inter-output-relations #139

Open phschafft opened 1 year ago

phschafft commented 1 year ago

It would be nice if there were a way to evaluate the error of outputs using a user-provided function (e.g. a callback). From my understanding of the code, this would affect the calculation of neuron_diff in fann_compute_MSE(), basically replacing it with a callback.

Such programmatic evaluation is very advanced, comes with performance penalties, and may hint at a not well-thought-through set of outputs. Any implementation should stress those points in the documentation, maybe recommending that classical training and programmatic-evaluation training be mixed if the data allows.

Use cases: Our first example is a simple network with one input x and two outputs a and b. The logic is given as: a = x > 0, and b = x > 1. This logic implies that whenever b is active, a must also be active; otherwise it is a logic error (not just a pattern detection error). So when the result is x = 2 ➞ [a, b] = [false, true] (logic error) we want to train more strongly than, for example, when the result is x = -1 ➞ [a, b] = [true, false] (only a detection error).

Second example: Let's say we want a robot to change direction when an obstacle is detected. The network has two outputs for the speeds of two motors, x and y. Combined they form a direction vector for our movement. In this case we don't care about the actual values of x and y as long as current_pos + [x, y] is outside of the obstacle's bounding box. (We may, however, also check additional factors, such as that we still move forward or that a given speed is maintained.)

If there is consensus on this I could have a look into it myself and prepare some code for discussion.

phschafft commented 1 year ago

Please note that #138 is in itself a valid subset of this ticket. The #138 feature comes with fewer penalties and is easier to use, and therefore has value on its own.