seallard / walker

NEAT
MIT License

Distinguish signals from current and previous update? #38

Closed: seallard closed this issue 3 years ago

seallard commented 3 years ago

Is it necessary to distinguish signals generated in the current update from those generated in the previous one? I think there might be weird feedback loops if this is not accounted for.

seallard commented 3 years ago

"The activation function, bool Network::activate(), gives the specifics. The implementation is of course considerably different than for a simple layered feedforward network. Each node adds up the activation from all incoming nodes from the previous timestep. (The function also handles a special "time delayed" connection, but that is not used by the current version of NEAT in any experiments that we have published.) Another way to understand it is to realize that activation does not travel all the way from the input layer to the output layer in a single timestep. In a single timestep, activation only travels from one neuron to the next. So it takes several timesteps for activation to get from the inputs to the outputs. If you think about it, this is the way it works in a real brain, where it takes time for a signal hitting your eyes to get to the cortex because it travels over several neural connections."
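A minimal sketch of the point in the quote (node names and identity weights are mine, purely illustrative): in a single timestep, activation only crosses one connection, so on a chain input → hidden → output it takes two timesteps before the output becomes active.

```python
# Sketch: a 3-node chain where one timestep moves activation one
# connection forward, as described in the quoted NEAT explanation.
def step(prev):
    # prev = [input, hidden, output] activations from the previous timestep
    inp, hidden, _ = prev
    # Identity weights and no squashing function, to keep the point visible.
    return [inp, inp, hidden]

acts = [1.0, 0.0, 0.0]  # the sensor is set to 1.0 at t=0
acts = step(acts)       # t=1: hidden becomes active, output is still 0
assert acts == [1.0, 1.0, 0.0]
acts = step(acts)       # t=2: activation finally reaches the output
assert acts == [1.0, 1.0, 1.0]
```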

seallard commented 3 years ago

Is one timestep in the network the same as one timestep in the environment?

seallard commented 3 years ago

// Activates the net such that all outputs are active
// Returns true on success;
bool Network::activate() {
    std::vector<NNode*>::iterator curnode;
    std::vector<Link*>::iterator curlink;
    double add_amount;  //For adding to the activesum
    bool onetime; //Make sure we at least activate once
    int abortcount=0;  //Used in case the output is somehow truncated from the network

    //Keep activating until all the outputs have become active
    //(This only happens on the first activation, because after that they are always active)

    onetime=false;

    while(outputsoff()||!onetime) {

        ++abortcount;

        if (abortcount==20) {
            return false;
        }

        // For each node, compute the sum of its incoming activation
        for(curnode=all_nodes.begin();curnode!=all_nodes.end();++curnode) {

            //Ignore SENSORS
            if (((*curnode)->type)!=SENSOR) {
                (*curnode)->activesum=0;
                (*curnode)->active_flag=false;  //This will tell us if it has any active inputs

                // For each incoming connection, add the activity from the connection to the activesum
                for(curlink=((*curnode)->incoming).begin();curlink!=((*curnode)->incoming).end();++curlink) {
                    if ((((*curlink)->in_node)->active_flag)||
                        (((*curlink)->in_node)->type==SENSOR)) (*curnode)->active_flag=true;
                    (*curnode)->activesum+=((*curlink)->weight)*(((*curlink)->in_node)->get_active_out());
                } //End for over incoming links
            } //End if (((*curnode)->type)!=SENSOR)
        } //End for over all nodes

        // Now activate all the non-sensor nodes off their incoming activation
        for(curnode=all_nodes.begin();curnode!=all_nodes.end();++curnode) {

            if (((*curnode)->type)!=SENSOR) {
                //Only activate if some active input came in
                if ((*curnode)->active_flag) {
                    //Now run the net activation through an activation function
                    if ((*curnode)->ftype==SIGMOID)
                        (*curnode)->activation=NEAT::fsigmoid((*curnode)->activesum,4.924273,2.4621365);  //Sigmoidal activation- see comments under fsigmoid

                    //Increment the activation_count
                    //First activation cannot be from nothing!!
                    (*curnode)->activation_count++;
                }
            }
        }
        onetime=true;
    }
    return true;
}
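Note that the loop above is effectively double-buffered within each pass: the first for-loop reads every node's output (from the previous step) into activesum, and only then does the second loop overwrite the activations. A hypothetical two-node sketch (not repo code) contrasting that with an in-place, single-pass update, which is exactly the feedback leak the issue asks about:

```python
# Sketch: two nodes a and b feeding each other with weight 1.0 and a
# linear activation, to isolate the update-order question of issue #38.

def two_pass(a, b):
    # Pass 1: compute all net inputs from the *previous* activations.
    sum_a, sum_b = b, a
    # Pass 2: only now activate every node from those sums.
    return sum_a, sum_b

def in_place(a, b):
    # Single pass: a is overwritten first, so b sees a's *current* value.
    a = b
    b = a  # signal from the current update leaks in here
    return a, b

assert two_pass(1.0, 0.0) == (0.0, 1.0)  # the values swap, as intended
assert in_place(1.0, 0.0) == (0.0, 0.0)  # b picked up a's freshly-updated value
```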
seallard commented 3 years ago

My version (for classification tasks):

        tries = 0
        while tries < max_tries: # Dirty way: cleaner to check if outputs are stable.

            # Iterate over nodes and calculate their net input signals.
            for node in self.nodes[self.num_inputs + 1:]:
                node.calculate_net_input_signal()

            # Activate each node based on their net input signal.
            for node in self.nodes[self.num_inputs + 1:]:
                node.activate()

            tries += 1

        return self.get_outputs()

I'm pretty sure this works as expected, since the net input signals of all nodes are calculated before any node is activated.
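The comment in the loop above suggests that checking whether the outputs are stable would be cleaner than a fixed try count. A hypothetical sketch of that idea (function and parameter names are mine, not from the repo): keep stepping the net until successive output vectors agree within a tolerance, with max_tries kept as a safety net for recurrent nets that never settle.

```python
# Hypothetical sketch of the "check if outputs are stable" alternative.
def activate_until_stable(update, outputs, max_tries=20, eps=1e-6):
    """update() advances the net one timestep; outputs() reads the outputs."""
    prev = outputs()
    for _ in range(max_tries):
        update()
        cur = outputs()
        if all(abs(c - p) < eps for c, p in zip(cur, prev)):
            return cur  # outputs have settled
        prev = cur
    return prev  # recurrent nets may oscillate forever; give up after max_tries

# Usage: a stand-in "network" whose single output converges
# (Newton's iteration for sqrt(2)), so the stability check fires early.
state = {"x": 1.0}

def update():
    state["x"] = 0.5 * (state["x"] + 2.0 / state["x"])

result = activate_until_stable(update, lambda: [state["x"]])
assert abs(result[0] - 2 ** 0.5) < 1e-6
```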