GeorgNation opened this issue 2 years ago
Soon I will attach the 100x100 images: one showing the result the network produced during training, and another showing the result after importing the same saved model.
I have to admit GeorgNation... I don't quite understand the problem you are having. I think your function code may have been stripped from your comment, maybe? Can you be more descriptive about what you mean when you say you are "saving picture (100x100) into network"? What does that mean exactly? And, please forgive my obvious ignorance, but what is your definition of "redundancy"? It's also extremely unclear what you are trying to achieve, as you have not defined what a successful "result" looks like. What are you trying to do? What do the information processing layers above AND below the ANN look like? Additionally, FANN does not have built-in image manipulation capabilities, so all things being equal the issue may be a misconfiguration of FANN, but it could also lie somewhere in the image rendering portion of your code, and you should include that code in your debugging efforts.
I trained the neural network to remember 100x100-pixel images, taking a pixel's X and Y coordinates (scaled by 1/100) as input and returning its RGB values (scaled by 1/255) as output. About redundancy: I originally wanted to train it on a 200x200-pixel image, but later abandoned that idea and left the layer sizes as-is; that's what I was talking about.
The network consists of: 2 input neurons; two hidden layers of 500 neurons each; a hidden layer of 250 neurons; a hidden layer of 125 neurons; and 3 output neurons.
I'm using pre-compiled FANN for PHP 7.4, taken from the PECL site.
I think that the problem is with the export, because if the saved network were an exact copy, the information would remain.
GeorgNation, I have to admit that what you are asking for seems like you are just "fishing" for free code... maybe even to get your project started, because maybe you don't even have a project that isn't working?
I make this statement because some of your statements do not logically fit the circumstances.
For example, it is now crystal clear for all to see that fann_save(resource $ann, string $configuration_file) ( https://www.php.net/manual/en/function.fann-save.php ) is not your issue. You would have known this BEFORE posting your issue and titling it "fann_save isn't work properly", because fann_save() has nothing to do with working with images; it's for saving the neural network, not image processing, which is something FANN does not do.
You said:
"When I using this function for save model into file, it's export leads to data loss."
"...when I use fann_create_from_file, and then fann_run, my picture becomes completely different, and does not coincide with the previous result."
"I think that the problem is with export, because if the network were an exact copy, then the information would remain."
The only way your statements could even remotely make sense is if your model is operating as desired (able to reproduce the image "from memory") before saving (before you use fann_save() to export your model to a file), but then after reloading the saved model it no longer works.
Is that the case? Are you testing BEFORE using fann_save() and getting "good" results, then saving, unloading the model from memory, reloading it using fann_create_from_file(string $configuration_file) ( https://www.php.net/manual/en/function.fann-create-from-file.php ) and getting a different result... is that what you are saying?
I suspect that you are not, because when I asked to look at the actual problematic code that was giving you trouble, you only provided an image rather than your code.
This implies that either:
A. You do not actually have any operable but malfunctioning code to examine, and are therefore seeking someone to provide it for you.
OR...
B. You believe your code is such a "sacred delicious cow" that it cannot be shared lest "your" unpatentable algorithm be seen by a competitor.
Now, because B is extremely amateurish in this context and I don't wish to insult you, AND due to Occam's razor (the law of parsimony), I am left to conclude that A is true.
Assuming however that I am wrong and the case IS closer to B, then prove me wrong by showing your code (I suspect you won't).
Further, if your cow is as sacred and delicious as you believe it to be, then extract the neural network portion from your project into a simple proof/example that can be uploaded to a GitHub repo so that others can actually examine the offending code. Otherwise, it really seems like you are trolling.
Think about it... how difficult is it to use free software like GIMP to slap together a couple of images, show up at a forum and say "...see, it's not working", provide no code to back it up, and then expect someone to solve it for you? It takes almost no intelligence, skill or effort whatsoever!
Now, to address the actual issue... and again not having seen ANY of your alleged-to-exist code so as to be able to even make a legitimate attempt to help you...
Neural networks DO have a "memory" and CAN learn to "remember" output patterns based on input patterns. In fact, you usually end up working against their memory to avoid "over-fitting" the network to its training dataset, so that it will actually "learn" to "generalize" the "problem/thing" you are "teaching/training" it to do/solve.
Networks work best with "visual like" information in parallel because pattern recognition relies on the "flow" of information through the network. Similar patterns of inputs follow similar "paths" through the network and "interact" with the neurons that other similar patterns interact with.
"Training" is the attempt to maximize the ability of the network to "respond" correctly with an output to an input pattern.
Now, although X & Y coords are a highly logical and effective way to organize spatially correlated information into a grid where the pieces of information most related to each other are also "physically" closer to each other, these concepts have no meaning to a neural network.
You may imagine a grid when you are "feeding" X & Y coords+pixel color to the network and envision the whole scene in at least 2 dimensions but that isn't what the neural network is "seeing".
You have an intrinsic understanding of grid-like information: you possess an abstraction for imagining it, and EVEN IF, when you think about a grid of information, you are not imagining ALL the spaces/pixels individually, you can still mentally build a simplified/compressed model of the image/grid in your mind.
In many cases AI developers mimic this process by using methods like "max pooling" to preserve the "image like" qualities of their data while reducing the amount of information necessary to convey, preserve and process the meaning.
However, even techniques as "heavy handed" as max pooling (which literally amounts to throwing pixels away) still maintain the "core information" of the input, because the "input pattern" is unique and self similar (See: https://geekgirljoy.wordpress.com/2020/04/30/ocr-2-the-mnist-database/ ). Scaled images of dogs look like dogs, stick figures look like stick figures even if you shrink them, and 3's… occasionally look like 8's, especially if you shrink them!
Stop imagining the image in 2 dimensions, because your alleged bot sees in 1 dimension, using two 1-pixel eyes which each perceive only a single grayscale "color"/signal, AND without the benefit of the additional inputs from the "grid-like information abstraction/intuition" unit that is in your head.
Under your proposed scheme of giving only the X & Y to the network's two 1-pixel "eyes" as input, it can only perceive a gradient from "all dark" XY(0,0) to "all light" XY(image_width, image_height), with "equally gray values" at the center of the image/grid. The pattern is always "darker on the left" and "darker on the top", which is to say always "lighter on the right" and "lighter on the bottom". The patterns are therefore all very similar no matter which points are selected; what varies is the strength of the pattern rather than the pattern itself, and overcoming that limitation is going to require a deeper network.
As a side note and a rule of thumb, go "wider" BEFORE "deeper" and prefer your network to be as "shallow" as possible.
Now, again... neural networks DO have a kind of "memory", BUT they are not a database and ARE NOT performing an internal lookup for a known value. Input patterns result in output patterns because of the specific calculations that occur due to the "path" the "information/signal" took "through" the network. The different paths occur due to the connections and their weights, and as "information" passes through the network, more or less of the signal (sometimes an anti-signal) propagates through to subsequent layers.
In your case, the network sees the same pattern at various "intensities" of "brightness", which requires a deeper network so as to create a larger internal "connection landscape" in which additional layers of neurons can "notice" the subtle variance in signal intensity, rather than relying on "pattern variance", which is a methodology neural networks operate better with.
I created a test example demonstrating this here: https://github.com/geekgirljoy/RememberMyImage/tree/main/SimpleXY
A method for doing this sort of thing was demonstrated by Paul Andrei Bricman and Radu Tudor Ionescu in 2018, called CocoNet ( COordinates-to-COlor NETwork ). It uses a 6D input vector (rather than just 2 values): each location in the grid gets its X & Y coordinates and inverse X & Y coordinates, combined with a polar coordinate, resulting in a unique address/index/pattern for each pixel/cell/location in the grid. This creates "pattern variance" rather than just "signal variance" of the same pattern.
The Paper for CocoNet can be found/read here:
https://arxiv.org/abs/1805.11357
https://www.arxiv-vanity.com/papers/1805.11357/
It's implemented in Python and the code can be found on GitHub here:
https://github.com/paulbricman/python-fuse-coconet
I created a rough test example of CocoNet using FANN and PHP here: https://github.com/geekgirljoy/RememberMyImage/tree/main/CoCoNet
Although it is not perfect, I believe this method could be improved so that it would "work", but it would require experimentation and time to get the "shape" of the network right, and I have doubts it would be truly "lossless".
Next I created a "One Hot Vector" implementation that does have a unique "address/pattern" for each "pixel" address/location within the grid dataset, and I was able to get it to more or less reproduce the trained-on image from "memory", with minor variations that could be further minimized beyond the point of human visual perception. Again though, I have strong reservations about calling this truly "lossless": you may care that the pixel at XY(0,0) is RGB(5,8,12) instead of RGB(6,9,11), even though the difference would be imperceptible to a human eye. That said, if you throw enough neurons, layers and time at it, you probably could find a configuration that would effectively reproduce the same color channel values despite any rounding variance, because through training you would drive the variance to be smaller than is significant when de-scaling the output.
That code can be found here: https://github.com/geekgirljoy/RememberMyImage/tree/main/OneHot
Now, unless you intend to actually provide your code so that I and others can assist in actually identifying what your problem is, I would assume that this issue should be closed.
@bukka I can say with a high degree of certainty that this issue can be closed.
In fact, this answer seemed offensive to me. Also, I was busy. Here is the actual code. I then did an experiment with retraining, so the same code is executed several times.
Well... I've got to admit GeorgNation that I am surprised to see you publish your code to a repo, and I will be the first to admit I was wrong. I jumped to conclusions and overreacted when I noticed the lack of supporting code with your issue/question, your not providing it after I asked about it, and the circumstantial evidence of your other recent activity, which included opening open-ended issues on other repos saying something like "How does this work?" (paraphrasing from memory :-P ). Though in all honesty, after I looked at that repo I found that it lacked a readme, so realistically what else were you supposed to do but ask?
I mistook all the aforementioned for malice and so with all genuine sincerity, I apologize. It was not my intent to offend you, I believed you were not operating in good faith and I am sorry.
Now, to help you if I can... I downloaded your repo and ran your code with your green gradient input image fon2.png, and it generated the START.bmp image as output from memory that you stated/demonstrated in the photo you provided above. However, for me, after successive training epochs it is performing about as correctly as I would expect. So either you fixed it and your published code is just to throw egg on my face, in which case... touche GeorgNation, well played sir! :-P Or, as you suggested, there may be something weird happening.
Just so we're on the same page, let's go through what I did and what's happening with your code for me.
I cropped the image that you previously published above so that it is 100 x 100 px, named it fon2.png, made a folder named "2" in the same folder as code2.php, and ran it.
fon2.png
code2.php loads 1.txt as $fann (the network) which has some amount of previous training/initialization.
It then loads fon2.png and creates a CSV log of the MSE by epoch for later review but doesn't start writing it until after START.bmp is generated.
Given the respective X & Y coords from [0,0] to [99,99], the index positions are fed to $fann, and its outputs for RGB (it has 4 outputs; I'm assuming the 4th is for alpha, but it is unused in your example code) are then converted to color values using something like $color = min(max(round($answer[$n] * 255), 0), 255); and then the color is allocated and the pixel data plotted.
When all the pixels have been plotted the image is saved in the 2 subfolder as START.bmp to indicate the existing ability of the network to remember the image.
Then the memory for the image plot is freed, a new blank image resource is created in memory, and we begin stepping through its X & Y coords from [0,0] to [99,99]. The same coords are used to obtain the pixel colors from fon2.png to train a single iteration/epoch using fann_train(resource $ann, array $input, array $desired_output).
After that, you used fann_run($fann, [$xNeuro, $yNeuro]) to test the network for those same positions, to see if it can reproduce the color for that location from only the "memory" of the position. The output of the fann_run() operation is then converted to color values, allocated, and written to the plot image.
When all pixel locations have been "visited", the image is output to the 2 subfolder as EpochNumber.bmp, then the nested loop returns to [0,0] and begins additional rounds/epochs of training until $i >= $epochs, which was set to 500.
Now, as I said when I ran your code I DO get this image:
START.bmp
However, for me, successive training epochs are successful at "reproducing" fon2.png (within expected capacity); the network that can do so, though, isn't saved by the system as 2.txt until the end of the 500 training epochs.
The results I get after training Epoch 0 (0.bmp):
The results I get after training Epoch 100 (100.bmp):
The results I get after training Epoch 500 (correction 499.bmp):
I also created a code3.php that differs from your example where I use only 3 layers:
$layers = [2, 200, 3];
$fann = fann_create_standard_array(count($layers), $layers);
and I used the RPROP training algorithm:
fann_set_training_algorithm ($fann, FANN_TRAIN_RPROP);
And instead of fon2.png, I downloaded the photo from https://pixabay.com/photos/cat-kitten-pet-lick-tongue-6723256/ and scaled + cropped it to 100 x 100 px to use as a second example:
kitten.png
$photo = imagecreatefrompng ('kitten.png');
Everything else is the same as your code and again it seems to work as expected:
Epoch 0:
Epoch 1:
The additional epochs are about the same.
This indicates that your code is correct and that there could be an issue that is specific to your test environment and very well could be related to the version of FANN that you are using.
I will upload a pull request later today or tomorrow, depending on how my schedule goes, with the completed/trained networks I generated; can you confirm that they are generating the correct image from memory for you?
If they do, then the issue seems to be occurring either at the point of training or, as you suggested, at the point the network is saved. IF however the networks I post do not work for you, then the issue is at the point of running/using the network.
You mentioned that you are using the PECL version of FANN, does that mean you are on Windows OS? I am currently on Linux and built my version of FANN myself using these instructions: https://geekgirljoy.wordpress.com/2019/04/05/getting-started-with-neural-networks-and-php-in-2019/
I do have a Windows machine available for testing but let's confirm you can run the trained networks okay first.
I was able to get successful results using your code, and was able to reproduce the correct output using a modified version of your code and my own images.
However, I was unsatisfied with these results and wanted to make sure I did my due diligence in looking into this issue, so I tried reloading the saved ANNs, and it quickly became apparent that all the "good/working" training was lost: all subsequent tests with the reloaded ANNs resulted in failure, regardless of any change to the configuration or methodology.
Having little success reloading the network, I returned to the training and testing code, which was seemingly working.
I tried generating a second "stop" image after the training had concluded but BEFORE the ANN had been destroyed from memory. If all things are equal (and they should be at that point), then the image generated just before destroying the ANN should be IDENTICAL to the last image produced while training the ANN, but they are not.
This strongly indicated that the issue is occurring with fann_train() and not fann_save().
So, I modified the SimpleXY RememberMyImage example to functionally be identical to your code GeorgNation, except that instead of using fann_train() it uses fann_train_epoch() which was successful in reproducing the image.
This now confirms that there is an issue with using fann_train().
When I compare the two functions and their different weight update functions it is not immediately clear to me what the issue might be and further investigation into this issue is warranted.
Seemingly the issue is within fann_update_weights which is called by fann_train.
@bukka The issue is that the ANN does not retain the training despite testing/demonstrating correctly that it has been trained, even when no action/change has been made to the network.
https://github.com/geekgirljoy/test_code/blob/master/Failing/Joy1.php
https://github.com/geekgirljoy/test_code/tree/master/Failing/JoyTest
https://github.com/libfann/fann/blob/master/src/fann_train.c Line 90 - 99
FANN_EXTERNAL void FANN_API fann_train(struct fann *ann, fann_type *input,
fann_type *desired_output) {
fann_run(ann, input);
fann_compute_MSE(ann, desired_output);
fann_backpropagate_MSE(ann);
fann_update_weights(ann);
}
https://github.com/libfann/fann/blob/master/src/fann_train_data.c Line 204 - 220
FANN_EXTERNAL float FANN_API fann_train_epoch(struct fann *ann, struct fann_train_data *data) {
if (fann_check_input_output_sizes(ann, data) == -1) return 0;
switch (ann->training_algorithm) {
case FANN_TRAIN_QUICKPROP:
return fann_train_epoch_quickprop(ann, data);
case FANN_TRAIN_RPROP:
return fann_train_epoch_irpropm(ann, data);
case FANN_TRAIN_SARPROP:
return fann_train_epoch_sarprop(ann, data);
case FANN_TRAIN_BATCH:
return fann_train_epoch_batch(ann, data);
case FANN_TRAIN_INCREMENTAL:
return fann_train_epoch_incremental(ann, data);
}
return 0;
}
https://github.com/libfann/fann/blob/master/src/fann_train_data.c Line 119 - 138
float fann_train_epoch_irpropm(struct fann *ann, struct fann_train_data *data) {
unsigned int i;
if (ann->prev_train_slopes == NULL) {
fann_clear_train_arrays(ann);
}
fann_reset_MSE(ann);
for (i = 0; i < data->num_data; i++) {
fann_run(ann, data->input[i]);
fann_compute_MSE(ann, data->output[i]);
fann_backpropagate_MSE(ann);
fann_update_slopes_batch(ann, ann->first_layer + 1, ann->last_layer - 1);
}
fann_update_weights_irpropm(ann, 0, ann->total_connections);
return fann_get_MSE(ann);
}
https://github.com/libfann/fann/blob/master/src/fann_train.c Line 335 - 400
void fann_update_weights(struct fann *ann) {
struct fann_neuron *neuron_it, *last_neuron, *prev_neurons;
fann_type tmp_error, delta_w, *weights;
struct fann_layer *layer_it;
unsigned int i;
unsigned int num_connections;
/* store some variabels local for fast access */
const float learning_rate = ann->learning_rate;
const float learning_momentum = ann->learning_momentum;
struct fann_neuron *first_neuron = ann->first_layer->first_neuron;
struct fann_layer *first_layer = ann->first_layer;
const struct fann_layer *last_layer = ann->last_layer;
fann_type *error_begin = ann->train_errors;
fann_type *deltas_begin, *weights_deltas;
/* if no room allocated for the deltas, allocate it now */
if (ann->prev_weights_deltas == NULL) {
ann->prev_weights_deltas =
(fann_type *)calloc(ann->total_connections_allocated, sizeof(fann_type));
if (ann->prev_weights_deltas == NULL) {
fann_error((struct fann_error *)ann, FANN_E_CANT_ALLOCATE_MEM);
return;
}
}
#ifdef DEBUGTRAIN
printf("\nupdate weights\n");
#endif
deltas_begin = ann->prev_weights_deltas;
prev_neurons = first_neuron;
for (layer_it = (first_layer + 1); layer_it != last_layer; layer_it++) {
#ifdef DEBUGTRAIN
printf("layer[%d]\n", layer_it - first_layer);
#endif
last_neuron = layer_it->last_neuron;
if (ann->connection_rate >= 1) {
if (ann->network_type == FANN_NETTYPE_LAYER) {
prev_neurons = (layer_it - 1)->first_neuron;
}
for (neuron_it = layer_it->first_neuron; neuron_it != last_neuron; neuron_it++) {
tmp_error = error_begin[neuron_it - first_neuron] * learning_rate;
num_connections = neuron_it->last_con - neuron_it->first_con;
weights = ann->weights + neuron_it->first_con;
weights_deltas = deltas_begin + neuron_it->first_con;
for (i = 0; i != num_connections; i++) {
delta_w = tmp_error * prev_neurons[i].value + learning_momentum * weights_deltas[i];
weights[i] += delta_w;
weights_deltas[i] = delta_w;
}
}
} else {
for (neuron_it = layer_it->first_neuron; neuron_it != last_neuron; neuron_it++) {
tmp_error = error_begin[neuron_it - first_neuron] * learning_rate;
num_connections = neuron_it->last_con - neuron_it->first_con;
weights = ann->weights + neuron_it->first_con;
weights_deltas = deltas_begin + neuron_it->first_con;
for (i = 0; i != num_connections; i++) {
delta_w = tmp_error * prev_neurons[i].value + learning_momentum * weights_deltas[i];
weights[i] += delta_w;
weights_deltas[i] = delta_w;
}
}
}
}
}
https://github.com/libfann/fann/blob/master/src/fann_train.c Line 627 - 677
void fann_update_weights_irpropm(struct fann *ann, unsigned int first_weight,
unsigned int past_end) {
fann_type *train_slopes = ann->train_slopes;
fann_type *weights = ann->weights;
fann_type *prev_steps = ann->prev_steps;
fann_type *prev_train_slopes = ann->prev_train_slopes;
fann_type prev_step, slope, prev_slope, next_step, same_sign;
float increase_factor = ann->rprop_increase_factor; /*1.2; */
float decrease_factor = ann->rprop_decrease_factor; /*0.5; */
float delta_min = ann->rprop_delta_min; /*0.0; */
float delta_max = ann->rprop_delta_max; /*50.0; */
unsigned int i = first_weight;
for (; i != past_end; i++) {
prev_step = fann_max(
prev_steps[i],
(fann_type)0.0001); /* prev_step may not be zero because then the training will stop */
slope = train_slopes[i];
prev_slope = prev_train_slopes[i];
same_sign = prev_slope * slope;
if (same_sign >= 0.0)
next_step = fann_min(prev_step * increase_factor, delta_max);
else {
next_step = fann_max(prev_step * decrease_factor, delta_min);
slope = 0;
}
if (slope < 0) {
weights[i] -= next_step;
if (weights[i] < -1500) weights[i] = -1500;
} else {
weights[i] += next_step;
if (weights[i] > 1500) weights[i] = 1500;
}
/*if(i == 2){
* printf("weight=%f, slope=%f, next_step=%f, prev_step=%f\n", weights[i], slope, next_step,
* prev_step);
* } */
/* update global data arrays */
prev_steps[i] = next_step;
prev_train_slopes[i] = slope;
train_slopes[i] = 0.0;
}
}
I was able to get fann_train() to work without losing the weights; however, I am uncertain as to why.
I first tested using FANN to see if the issue was with the lib or an issue with the PHP bindings.
This code worked as desired.
#include <stdio.h>
#include "fann.h"
int main()
{
fann_type *calc_out;
const unsigned int num_input = 2;
const unsigned int num_output = 1;
const unsigned int num_layers = 3;
const unsigned int num_neurons_hidden = 3;
struct fann *ann;
// test data
float input[] = {-1, -1};
float output[] = {-1};
printf("Creating network.\n");
ann = fann_create_standard(num_layers, num_input, num_neurons_hidden, num_output);
printf("Configuring network.\n");
fann_set_activation_steepness_hidden(ann, 1);
fann_set_activation_steepness_output(ann, 1);
fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);
fann_set_train_stop_function(ann, FANN_STOPFUNC_BIT);
fann_set_bit_fail_limit(ann, 0.01f);
fann_set_training_algorithm(ann, FANN_TRAIN_RPROP);
for (int i = 0; i < 10; i++) {
printf("Training network.\n");
fann_train(ann, input, output);
printf("Testing network.\n");
calc_out = fann_run(ann, input);
printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n", input[0], input[1],
calc_out[0], output[0], fann_abs(calc_out[0] - output[0]));
}
printf("Final test after training network.\n");
calc_out = fann_run(ann, input);
printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n", input[0], input[1], calc_out[0],
output[0], fann_abs(calc_out[0] - output[0]));
printf("Saving network.\n");
fann_save(ann, "xor.net");
printf("Destroying network.\n");
fann_destroy(ann);
printf("Reloading network.\n");
ann = fann_create_from_file("xor.net");
printf("Test after reloading network.\n");
calc_out = fann_run(ann, input);
printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n", input[0], input[1],
calc_out[0], output[0], fann_abs(calc_out[0] - output[0]));
printf("Destroying network.\n");
fann_destroy(ann);
return 0;
}
So next I wrote the equivalent in PHP which strangely enough... also worked.
<?php
$layers = [2, 3, 1];
$num_input = 2;
$num_output = 1;
$num_layers = 3;
$num_neurons_hidden = 3;
// test data
$input = [-1, -1];
$output = [-1];
echo "Creating network.\n";
$fann = fann_create_standard($num_layers, $num_input, $num_neurons_hidden, $num_output);
echo "Configuring network.\n";
fann_set_activation_steepness_hidden($fann, 1);
fann_set_activation_steepness_output($fann, 1);
fann_set_activation_function_hidden($fann, FANN_SIGMOID_SYMMETRIC);
fann_set_activation_function_output($fann, FANN_SIGMOID_SYMMETRIC);
fann_set_train_stop_function($fann, FANN_STOPFUNC_BIT);
fann_set_bit_fail_limit($fann, 0.01);
fann_set_training_algorithm($fann, FANN_TRAIN_RPROP);
for ($i = 0; $i < 10; ++$i){
echo "Training network.\n";
fann_train ($fann, $input, $output);
echo "Testing network.\n";
$answer = fann_run ($fann, $input);
printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n", $input[0], $input[1],
$answer[0], $output[0], abs($answer[0] - $output[0]));
}
echo("Final test after training network.\n");
$answer = fann_run ($fann, $input);
printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n", $input[0], $input[1],
$answer[0], $output[0], abs($answer[0] - $output[0]));
echo "Saving network.\n";
fann_save ($fann, "xor.net");
echo "Destroying network.\n";
fann_destroy ($fann);
echo "Reloading network.\n";
$fann = fann_create_from_file("xor.net");
echo "Test after reloading network.\n";
$answer = fann_run ($fann, $input);
printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n", $input[0], $input[1],
$answer[0], $output[0], abs($answer[0] - $output[0]));
echo "Destroying network.\n";
fann_destroy ($fann);
// Results:
/*
Creating network.
Configuring network.
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.674595, should be -1.000000, difference=0.325405
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.749394, should be -1.000000, difference=0.250606
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.789931, should be -1.000000, difference=0.210069
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.816424, should be -1.000000, difference=0.183576
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.835459, should be -1.000000, difference=0.164541
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.849962, should be -1.000000, difference=0.150038
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.861467, should be -1.000000, difference=0.138533
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.870869, should be -1.000000, difference=0.129131
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.878730, should be -1.000000, difference=0.121270
Training network.
Testing network.
XOR test (-1.000000,-1.000000) -> -0.885421, should be -1.000000, difference=0.114579
Final test after training network.
XOR test (-1.000000,-1.000000) -> -0.885421, should be -1.000000, difference=0.114579
Saving network.
Destroying network.
Reloading network.
Test after reloading network.
XOR test (-1.000000,-1.000000) -> -0.885421, should be -1.000000, difference=0.114579
Destroying network.
*/
Further testing and troubleshooting is required to determine the exact cause of this issue.
I have not had time to follow up with this issue; however, my previous belief that this was related to the weight updates in the RProp functions is incorrect, because per the documentation here: https://www.php.net/manual/en/function.fann-train.php
"This training is always incremental training, since only one pattern is presented."
So, future investigation will need to focus on FANN's incremental functions to identify the cause of this issue.
Hello. I'm encountered an error in fann_save function. Layers: 2, 500, 500, 250, 125, 3 When I using this function for save model into file, it's export leads to data loss. I'm using this model for saving picture (100x100) into network (with redundancy), but when I use fann_create_from_file, and then fann_run, my picture becomes completely different, and does not coincide with the previous result.