It would be nice if there were a way to evaluate the error of outputs using a user-provided function (e.g. a callback). As far as I understand the code, this would affect the calculation of `neuron_diff` in `fann_compute_MSE()`, basically replacing it with a callback.
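To make the idea concrete, here is a rough sketch of what such a hook could look like. All names below are hypothetical; only `fann_type`, `struct fann`, and `fann_set_callback()` exist in FANN today.

```c
#include "fann.h"

/* Hypothetical: a user-supplied error function that replaces the default
 * neuron_diff = desired - actual computation inside fann_compute_MSE().
 * It receives the full desired and actual output vectors so it can judge
 * each output in context, and writes one error value per output neuron. */
typedef void (*fann_output_error_callback)(struct fann *ann,
                                           const fann_type *desired_output,
                                           const fann_type *actual_output,
                                           fann_type *error_out, /* length = num_output */
                                           void *user_data);

/* Hypothetical setter, analogous to the existing fann_set_callback():
 * void fann_set_output_error_callback(struct fann *ann,
 *                                     fann_output_error_callback cb,
 *                                     void *user_data);
 */
```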
Such programmatic evaluation is quite advanced, comes with performance penalties, and may hint at a not well thought-through set of outputs. Any implementation should stress those points in the documentation, maybe recommending that classical training and programmatic evaluation training be mixed if the data allows.
Use cases:
Our first example is a simple network with one input `x` and two outputs `a` and `b`. The logic is given as: `a = x > 0` and `b = x > 1`. This logic implies that whenever `b` is active, `a` must also be active; otherwise it is a logic error (not just a pattern detection error). So when the result is `x = 2 ➞ [a, b] = [false, true]` (logic error), we want to train `a` more strongly than when, for example, the result is `x = -1 ➞ [a, b] = [true, false]` (only a detection error). A sketch of how this could be scored follows below.
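Using the hypothetical callback type sketched above, the first example could be evaluated roughly like this (the 0.5 threshold and the extra penalty factor are made up for illustration):

```c
/* Sketch for use case 1: penalize the logic error "b active while a inactive"
 * harder than an ordinary detection error. Assumes outputs in [0, 1]. */
static void logic_aware_error(struct fann *ann,
                              const fann_type *desired,
                              const fann_type *actual,
                              fann_type *error_out,
                              void *user_data)
{
    const fann_type a = actual[0], b = actual[1];

    /* Default behaviour: the same difference fann_compute_MSE() would use. */
    error_out[0] = desired[0] - a;
    error_out[1] = desired[1] - b;

    /* Logic error: b is active although a is not -> train a harder. */
    if (b > 0.5 && a < 0.5)
        error_out[0] *= 3.0; /* arbitrary extra weight for the logic violation */

    (void)ann; (void)user_data;
}
```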
Second example:
Let's say we want a robot to change direction when an obstacle is detected. The network has two outputs for the speeds of two motors, `x` and `y`; combined they form a direction vector for our movement. In this case we don't care about the actual values of `x` and `y` as long as `current_pos + [x, y]` is outside the bounding box of the obstacle. (We may, however, also check additional factors, such as that we still move forward or that a given speed is maintained.) A sketch follows below.
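For the second example, the evaluation no longer compares against fixed target values at all. A rough sketch, again using the hypothetical callback type; the bounding-box check and the "steer away from the box centre" error are obviously application-specific placeholders:

```c
/* Sketch for use case 2: the outputs x, y (motor speeds) are acceptable
 * as long as current_pos + [x, y] lies outside the obstacle's bounding box.
 * The obstacle and current position would be passed in via user_data. */
struct robot_context {
    fann_type pos_x, pos_y;                  /* current_pos */
    fann_type box_min_x, box_min_y,          /* obstacle bounding box */
              box_max_x, box_max_y;
};

static void obstacle_error(struct fann *ann,
                           const fann_type *desired,
                           const fann_type *actual,
                           fann_type *error_out,
                           void *user_data)
{
    const struct robot_context *ctx = user_data;
    const fann_type nx = ctx->pos_x + actual[0];
    const fann_type ny = ctx->pos_y + actual[1];

    const int inside = nx >= ctx->box_min_x && nx <= ctx->box_max_x &&
                       ny >= ctx->box_min_y && ny <= ctx->box_max_y;

    if (!inside) {
        /* Any direction that avoids the obstacle is fine: no error. */
        error_out[0] = 0;
        error_out[1] = 0;
    } else {
        /* Crude placeholder: steer the predicted position away from the
         * box centre. (A real robot would add the "keep moving forward"
         * or "maintain speed" checks mentioned above.) */
        const fann_type cx = (ctx->box_min_x + ctx->box_max_x) / 2;
        const fann_type cy = (ctx->box_min_y + ctx->box_max_y) / 2;
        error_out[0] = nx - cx;
        error_out[1] = ny - cy;
    }
    (void)ann; (void)desired;
}
```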
If there is consensus on this, I could look into it myself and prepare some code for discussion.
Please note that #138 is in itself a valid subset of this ticket. The #138 feature comes with fewer penalties and is easier to use, and therefore has some value on its own.