Hi, I would like to understand whether it is possible to set up NEAT-Python, through the configuration file, to retrain after each prediction on the test/unseen set. For instance, from my understanding, the XOR "evolve-minimal" example can be adjusted so that it trains on part of the data (to a particular fitness level, yielding the best genome) and then predicts on the remaining data that was set aside as a test set. See the code below for what I mean:
from __future__ import print_function
from __future__ import division  # so the accuracy is a float on Python 2 as well

import neat
import visualize

# Training set: 3-input examples and their expected outputs.
xor_inputs = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
xor_outputs = [(1.0,), (1.0,), (1.0,), (0.0,), (0.0,)]

# Test set.
xor_inputs2 = [(1.0, 0.0, 1.0), (1.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
xor_outputs2 = [(1.0,), (0.0,), (0.0,)]

def eval_genomes(genomes, config):
    for genome_id, genome in genomes:
        genome.fitness = 5
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        for xi, xo in zip(xor_inputs, xor_outputs):
            output = net.activate(xi)
            genome.fitness -= (output[0] - xo[0]) ** 2

# Load configuration.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     'config-feedforward')

# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)

# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(True))
stats = neat.StatisticsReporter()
p.add_reporter(stats)

# Run until a solution is found (i.e. the fitness threshold in the config is reached).
winner = p.run(eval_genomes)

# Display the winning genome.
print('\nBest genome:\n{!s}'.format(winner))

# Show the output of the most fit genome against the test data.
print('\nOutput:')
winner_net = neat.nn.FeedForwardNetwork.create(winner, config)
count = 0

# Make predictions on the test set using the best genome.
for xi, xo in zip(xor_inputs2, xor_outputs2):
    prediction = winner_net.activate(xi)
    print("  input {!r}, expected output {!r}, got {!r}".format(
        xi, xo[0], round(prediction[0])))
    # Tally correct classifications for the prediction accuracy.
    if int(xo[0]) == int(round(prediction[0])):
        count = count + 1

accuracy = count / len(xor_outputs2)
print('\nAccuracy: ', accuracy)

node_names = {-1: 'A', -2: 'B', -3: 'C', 0: 'A XOR B'}
visualize.draw_net(config, winner, True, node_names=node_names)
visualize.plot_stats(stats, ylog=False, view=True)
visualize.plot_species(stats, view=True)
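To be clearer about the fitness measure I'm aiming for, here is a self-contained sketch of a fitness based on the number of correct classifications. The `classification_fitness` helper and the `AlwaysOne` stub are hypothetical stand-ins of my own, not NEAT-Python API; in the real code the `net` argument would be the object returned by `neat.nn.FeedForwardNetwork.create(genome, config)`:

```python
def classification_fitness(net, inputs, outputs):
    # Count how many training examples the network classifies correctly,
    # treating round(output) as the predicted class.
    correct = 0
    for xi, xo in zip(inputs, outputs):
        output = net.activate(xi)
        if round(output[0]) == xo[0]:
            correct += 1
    return correct

# Quick check with a stub "network" that always outputs 1.0
class AlwaysOne:
    def activate(self, xi):
        return (1.0,)

xor_inputs = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0),
              (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
xor_outputs = [(1.0,), (1.0,), (1.0,), (0.0,), (0.0,)]
print(classification_fitness(AlwaysOne(), xor_inputs, xor_outputs))  # 3 of 5 correct
```

Inside `eval_genomes` I would then set `genome.fitness = classification_fitness(net, xor_inputs, xor_outputs)` instead of the squared-error loop above.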
However, the issue here is that no retraining takes place after each prediction is made on the test set. I believe the parameters in the config file are static and cannot change once the training process begins, so if your fitness level is based on the number of correct classifications of the training set (which is what I'm trying to implement, very similar to the one used here), this will be a problem. I would therefore like to understand whether a model that retrains can be implemented by adjusting a setting in the config file, or is there more to it than that?
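To make the loop I'm after concrete, here is a stand-alone sketch of the retrain-after-each-prediction structure. The `train()` and `predict()` functions are hypothetical stand-ins (the toy "model" just memorizes examples it has seen); in the real thing, I imagine `train()` would be another call to `p.run(eval_genomes, n)` continuing the same population with the test example folded into the training lists, and `predict()` would be `winner_net.activate(xi)`:

```python
train_x = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
train_y = [1.0, 1.0, 1.0]
test_x = [(1.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
test_y = [1.0, 0.0]

def train(xs, ys):
    # Stand-in for evolving the population to the fitness threshold;
    # this toy "model" simply memorizes the examples it has seen.
    return dict(zip(xs, ys))

def predict(model, x):
    # Stand-in for winner_net.activate(x); unseen inputs default to 0.0.
    return model.get(x, 0.0)

model = train(train_x, train_y)  # initial training run
correct = 0
for xi, yi in zip(test_x, test_y):
    if predict(model, xi) == yi:
        correct += 1
    # Fold the just-predicted example into the training set and retrain
    # before the next prediction -- this is the step I can't find a
    # config-file setting for.
    train_x.append(xi)
    train_y.append(yi)
    model = train(train_x, train_y)

accuracy = correct / len(test_x)
print('accuracy:', accuracy)  # 0.5 with this toy model
```

In other words, is this per-prediction retraining something the config file can express, or does it have to be driven from Python code like this?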