esp8266 / Arduino

ESP8266 core for Arduino

strange compile error using double: collect2.exe: error: ld returned 1 exit status #6756

Closed by dsyleixa 4 years ago

dsyleixa commented 4 years ago

Strange compile error when using double: collect2.exe: error: ld returned 1 exit status. The error does not happen with float instead of double.

Basic Infos

[x] This issue complies with the issue POLICY doc.
[x] I have read the documentation at readthedocs and the issue is not addressed there.
[x] I have tested that the issue is present in current master branch (aka latest git).
[x] I have searched the issue tracker for a similar issue.
[x] If there is a stack dump, I have decoded it.
[x] I have filled out all fields below.

Platform

Hardware: ESP-12E
Core Version: 2.5.2
Development Env: Arduino IDE
Operating System: Windows 7/64 Pro

Settings in IDE

Module: NodeMCU 1.0
Flash Mode: ???
Flash Size: 4MB no SPIFFS
lwip Variant: ???
Reset Method: ???
Flash Frequency: 40MHz
CPU Frequency: 80MHz
Upload Using: SERIAL
Upload Speed: 115200

MCVE Sketch

// Neural Network for Arduino

// Ref.:
// http://robotics.hobbizine.com/arduinoann.html
// www.cs.bham.ac.uk/~jxb/NN/nn.html

// 3-layer Backpropagation net

// modified, extended, and enhanced by dsyleixa
// required MCUs: ESP8266, ESP32
// version 0.1.1

// (C) 2018 by dsyleixa
// (C) of processed 3rd party code: see original sources.
// This example code is in the public domain for private use; 3rd party code is not affected.
// Use for professional or business purposes only with personal written permission
// from the author.

// change log:
// 0.1.1: input matrix=void => output array=void {0,0,0,0,0,0,0,0,0,0}
//        input matrix pattern="0" => output array={1,0,0,0,0,0,0,0,0,0}
// 0.1.0: debug mode

#include <math.h>

// debug mode: uncomment; regular mode: comment out!
//#define  DEBUG

#define  REPORT_N   20    // for terminal log
int32_t  ReportInterval;  // for terminal log
uint32_t timestamp;       // for terminal log

/******************************************************************
   Network Configuration, customizable
 ******************************************************************/

const int MAX_PATTERNS = 40;  //  
const int NUM_INPUTS   = 120;
const int NUM_HIDDEN   = 25;
const int NUM_OUTPUTS  = 10;

double LearningRate = 0.2;     // was 0.3; lower it if the error oscillates
double Momentum     = 0.8;     // raise it if training stalls in a local minimum
double InitialWeightMax = 0.6; // was 0.5

#ifdef DEBUG
#define MAXLOOPS   3000
#define ThrSuccess 0.20
#else
#define MAXLOOPS   2147483647
#define ThrSuccess (1E-6*NUM_INPUTS*NUM_OUTPUTS)  // parenthesized so the macro expands safely
#endif

int   i, j, p, q, r;
int   RandomizedIndex[MAX_PATTERNS];
long  TrainingCycle;
double Rando;
double Error;
double Accum;

/******************************************************************
   Artificial Neuron
 ******************************************************************/

double HiddenActiv[NUM_HIDDEN];                          // Activation factors for neurons
double OutputActiv[NUM_OUTPUTS];
double HiddenWeights[NUM_INPUTS + 1][NUM_HIDDEN];        // Weight factors for neuron inputs
double OutputWeights[NUM_HIDDEN + 1][NUM_OUTPUTS];
double HiddenDelta[NUM_HIDDEN];                          // Delta=Divergence target/actual
double OutputDelta[NUM_OUTPUTS];
double ChangeHiddenWeights[NUM_INPUTS + 1][NUM_HIDDEN];  // change buffer for Weights
double ChangeOutputWeights[NUM_HIDDEN + 1][NUM_OUTPUTS];
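
// Size check (double = 8 bytes on the ESP8266): HiddenWeights and
// ChangeHiddenWeights are each (120+1)*25*8 = 24200 bytes; OutputWeights and
// ChangeOutputWeights are each (25+1)*10*8 = 2080 bytes. These four arrays
// alone put roughly 51KB into .bss.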

/******************************************************************
   Training Test Patterns
 ******************************************************************/

//-----------------------------------------------------------------
//  Input patterns
//-----------------------------------------------------------------

byte Input[MAX_PATTERNS + 1][NUM_INPUTS] ;

//-----------------------------------------------------------------
//  Output patterns
//-----------------------------------------------------------------

byte Target[MAX_PATTERNS + 1][NUM_OUTPUTS] ;

//-----------------------------------------------------------------
//  Test Inputs
//-----------------------------------------------------------------

byte TestInputC01[NUM_INPUTS] ;

byte TestInputC02[NUM_INPUTS] ;

byte TestInputC03[NUM_INPUTS] ;

byte TestInputC06[NUM_INPUTS] ;

byte TestInputT1[NUM_INPUTS];

byte TestInputT9[NUM_INPUTS] ;

/*  ^^^^^^^^^^^^^^^     End NetConfiguration      ^^^^^^^^^^^^^^^^
 *****************************************************************/

/******************************************************************
   Tools
 ******************************************************************/

//-----------------------------------------------------------------
//  tool: millis to cstring
//-----------------------------------------------------------------

char * millis_to_strF(uint32_t ms) {
  uint32_t  Days = 0;
  uint32_t  Hours = 0;
  uint32_t  Mins = 0;
  uint32_t  Secs = 0;

  Secs  = ms / 1000;
  Mins  = Secs / 60;
  Hours = Mins / 60;
  Days  = Hours / 24;
  Secs  = Secs - (Mins * 60);
  Mins  = Mins - (Hours * 60);
  Hours = Hours - (Days * 24);

  static char str[20] = "";  // static: a stack-local buffer would dangle after return
  sprintf(str, "%u:%02u:%02u:%02u", Days, Hours, Mins, Secs);
  return str;
}

//-----------------------------------------------------------------
// tool: NN: calc Activation + Net-Error
//-----------------------------------------------------------------

void computeActErr() {
  //--------------------------------------------------
  //  Compute hidden layer activations
  //--------------------------------------------------

  for ( i = 0 ; i < NUM_HIDDEN ; i++ ) {
    Accum = HiddenWeights[NUM_INPUTS][i] ;
    for ( j = 0 ; j < NUM_INPUTS ; j++ ) {
      Accum += Input[p][j] * HiddenWeights[j][i] ;
    }
    HiddenActiv[i] = 1.0 / (1.0 + exp(-Accum)) ;
  }

  //--------------------------------------------------
  //  Compute output layer activations +  errors
  //--------------------------------------------------

  for ( i = 0 ; i < NUM_OUTPUTS ; i++ ) {
    Accum = OutputWeights[NUM_HIDDEN][i] ;
    for ( j = 0 ; j < NUM_HIDDEN ; j++ ) {
      Accum += HiddenActiv[j] * OutputWeights[j][i] ;
    }
    OutputActiv[i] = 1.0 / (1.0 + exp(-Accum)) ;
  }
}

//-----------------------------------------------------------------
// tool: NN statistics log to terminal monitor
//-----------------------------------------------------------------

void PrintStatistixx() {
  Serial.print ("TrainingCycle: "); Serial.print (TrainingCycle);
  if (TrainingCycle > 0) {
    Serial.print ("  Error=");      Serial.print (Error, 5);
    Serial.print ("  ThrSuccess="); Serial.print (ThrSuccess, 5);
    Serial.print ("  runtime (d:h:m:s)=");
    Serial.print (millis_to_strF(millis() - timestamp));
  }
}

//-----------------------------------------------------------------
// tool: NN Inputs/Outputs to terminal monitor
//-----------------------------------------------------------------

void PrintNetPattern()
{
  char buf[10];

  for ( p = 0 ; p < MAX_PATTERNS ; p++ ) {
    Serial.println();
    //Serial.print ("  Training Pattern: ");

    sprintf(buf, "%3d", p);
    Serial.print (buf);
    Serial.print (" In:");
    for ( i = 0 ; i < NUM_INPUTS ; i++ ) {
      if (!(i % 10)) {
        yield();
        Serial.println();
        Serial.print("    ");
      }
      Serial.print (Input[p][i], DEC);
    }
    Serial.print (" Targ: ");
    for ( i = 0 ; i < NUM_OUTPUTS ; i++ ) {
      Serial.print (Target[p][i], DEC);
      Serial.print ("");
    }

    computeActErr();

    Serial.print (" Out: ");
    for ( i = 0 ; i < NUM_OUTPUTS ; i++ ) {
      Serial.print (OutputActiv[i], 3);
      Serial.print (" ");
    }
    Serial.println();
    Serial.print ("        ROUNDED OUT: ");
    for ( i = 0 ; i < NUM_OUTPUTS ; i++ ) {
      Serial.print ((int)round(OutputActiv[i]));
      //Serial.print (" ");
    }
  }
  Serial.println();
  Serial.println("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^");
  PrintStatistixx();

  Serial.println();
#ifdef DEBUG
  Serial.println ("DEBUG MODE: ACTIVE! ");
#endif

}

/******************************************************************
   HAL (Heuristic Algorithmic Layer): Initialize
 ******************************************************************/

void initializeWeights() {
  //--------------------------------------------------
  //  Initialize HiddenWeights and ChangeHiddenWeights
  //--------------------------------------------------
  for ( i = 0 ; i < NUM_HIDDEN ; i++ ) {
    for ( j = 0 ; j <= NUM_INPUTS ; j++ ) {
      ChangeHiddenWeights[j][i] = 0.0 ;
      Rando = double(random(100)) / 100;
      HiddenWeights[j][i] = 2.0 * ( Rando - 0.5 ) * InitialWeightMax ;
    }
  }

  //--------------------------------------------------
  //  Initialize OutputWeights and ChangeOutputWeights
  //--------------------------------------------------
  for ( i = 0 ; i < NUM_OUTPUTS ; i ++ ) {
    for ( j = 0 ; j <= NUM_HIDDEN ; j++ ) {
      ChangeOutputWeights[j][i] = 0.0 ;
      Rando = double(random(100)) / 100;
      OutputWeights[j][i] = 2.0 * ( Rando - 0.5 ) * InitialWeightMax ;
    }
  }
}

/******************************************************************
   HAL: Backpropagation Learning
 ******************************************************************/

//******************************************************************
int BP_Learning() {
  initializeWeights();

  Serial.println();
  Serial.println("Initial/Untrained Outputs: ");

  PrintNetPattern();

  //--------------------------------------------------
  //  Begin training
  //--------------------------------------------------
  for ( TrainingCycle = 1 ; TrainingCycle < MAXLOOPS ; TrainingCycle++) {

    //--------------------------------------------------
    //  Randomize order of training patterns
    //--------------------------------------------------
RESTART:
    for ( p = 0 ; p < MAX_PATTERNS ; p++) {
      q = random(MAX_PATTERNS);
      r = RandomizedIndex[p] ;
      RandomizedIndex[p] = RandomizedIndex[q] ;
      RandomizedIndex[q] = r ;
    }
    Error = 0.0 ;

    //--------------------------------------------------
    //  Cycle through each training pattern in the randomized order
    //--------------------------------------------------
    for ( q = 0 ; q < MAX_PATTERNS ; q++ ) {
      p = RandomizedIndex[q];

      //--------------------------------------------------
      //  Compute hidden layer activations
      //--------------------------------------------------

      for ( i = 0 ; i < NUM_HIDDEN ; i++ ) {

        Accum = HiddenWeights[NUM_INPUTS][i] ;
        for ( j = 0 ; j < NUM_INPUTS ; j++ ) {
          Accum += Input[p][j] * HiddenWeights[j][i] ;
        }
        HiddenActiv[i] = 1.0 / (1.0 + exp(-Accum)) ;
      }

      //--------------------------------------------------
      //  Compute output layer activations + calculate errors
      //--------------------------------------------------
      for ( i = 0 ; i < NUM_OUTPUTS ; i++ ) {

        Accum = OutputWeights[NUM_HIDDEN][i] ;
        for ( j = 0 ; j < NUM_HIDDEN ; j++ ) {
          Accum += HiddenActiv[j] * OutputWeights[j][i] ;
        }
        OutputActiv[i] = 1.0 / (1.0 + exp(-Accum)) ;
        OutputDelta[i] = (Target[p][i] - OutputActiv[i]) * OutputActiv[i] * (1.0 - OutputActiv[i]) ;
        Error += 0.5 * (Target[p][i] - OutputActiv[i]) * (Target[p][i] - OutputActiv[i]) ;
      }

      //--------------------------------------------------
      //  Backpropagate errors to hidden layer
      //--------------------------------------------------

      for ( i = 0 ; i < NUM_HIDDEN ; i++ ) {
        Accum = 0.0 ;
        yield();
        for ( j = 0 ; j < NUM_OUTPUTS ; j++ ) {
          Accum += OutputWeights[i][j] * OutputDelta[j] ;
        }
        HiddenDelta[i] = Accum * HiddenActiv[i] * (1.0 - HiddenActiv[i]) ;
      }

      //--------------------------------------------------
      //  Update Inner --> Hidden Weights
      //--------------------------------------------------

      for ( i = 0 ; i < NUM_HIDDEN ; i++ ) {
        yield();
        ChangeHiddenWeights[NUM_INPUTS][i] = LearningRate * HiddenDelta[i] + Momentum * ChangeHiddenWeights[NUM_INPUTS][i] ;
        HiddenWeights[NUM_INPUTS][i] += ChangeHiddenWeights[NUM_INPUTS][i] ;
        for ( j = 0 ; j < NUM_INPUTS ; j++ ) {
          ChangeHiddenWeights[j][i] = LearningRate * Input[p][j] * HiddenDelta[i] + Momentum * ChangeHiddenWeights[j][i];
          HiddenWeights[j][i] += ChangeHiddenWeights[j][i] ;
        }
      }

      //--------------------------------------------------
      //  Update Hidden --> Output Weights
      //--------------------------------------------------

      for ( i = 0 ; i < NUM_OUTPUTS ; i ++ ) {

        ChangeOutputWeights[NUM_HIDDEN][i] = LearningRate * OutputDelta[i] + Momentum * ChangeOutputWeights[NUM_HIDDEN][i] ;
        OutputWeights[NUM_HIDDEN][i] += ChangeOutputWeights[NUM_HIDDEN][i] ;
        for ( j = 0 ; j < NUM_HIDDEN ; j++ ) {
          ChangeOutputWeights[j][i] = LearningRate * HiddenActiv[j] * OutputDelta[i] + Momentum * ChangeOutputWeights[j][i] ;
          OutputWeights[j][i] += ChangeOutputWeights[j][i] ;
        }
      }
    }

    //--------------------------------------------------
    //  Every (n) cycles send to terminal for display
    //--------------------------------------------------

    ReportInterval = ReportInterval - 1;
    if (ReportInterval == 0)
    {
      Serial.println();
      Serial.println("vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv");
      PrintStatistixx();

      PrintNetPattern();

      if (TrainingCycle == 1)
      {
        ReportInterval = REPORT_N - 1;
      }
      else
      {
        ReportInterval = REPORT_N;
      }
    }

    //--------------------------------------------------
    // If (error rate < pre-determined threshold) => end
    //--------------------------------------------------

    // if captured in local minimum or oscillating:
    if ( Error > 1.0 && TrainingCycle > 3000 ) {
      initializeWeights();
      goto RESTART;
    }
    if ( Error > 0.6 && TrainingCycle > 20000 ) {
      initializeWeights();
      goto RESTART;
    }

    // success?
    if ( Error < ThrSuccess ) return 0 ;  // 0:  training OK: no err
  }
  return -1;              // -1: loop limit reached: no success
}

/******************************************************************
   HAL (Heuristic Algorithmic Layer): Recognize arbitrary patterns
 ******************************************************************/

void InputPatternRecognition(byte TestInput[NUM_INPUTS] ) {
  char buf[10];

  for (int i = 0; i < NUM_INPUTS; i++) {
    Input[MAX_PATTERNS][i] = TestInput[i];
  }

  Serial.println();
  Serial.print ("  Test Pattern ");

  Serial.print (" In:");
  for ( i = 0 ; i < NUM_INPUTS ; i++ ) {
    if (!(i % 10)) {
      yield();
      Serial.println();
      Serial.print("    ");
    }
    Serial.print (Input[MAX_PATTERNS][i], DEC);
  }

  computeActErr();

  Serial.print (" Out: ");
  for ( i = 0 ; i < NUM_OUTPUTS ; i++ ) {
    Serial.print (OutputActiv[i], 3);
    Serial.print (" ");
  }
  Serial.println();
  Serial.print ("       ROUNDED OUT: ");
  for ( i = 0 ; i < NUM_OUTPUTS ; i++ ) {
    Serial.print ((int)round(OutputActiv[i]));
    //Serial.print (" ");
  }

  Serial.println();
  Serial.println("^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^");

}

volatile static int8_t StateMode, ModeLearn = 1, ModeDetect = 0, ModePause = 0;

/******************************************************************
   setup
 ******************************************************************/

void setup() {
  Serial.begin(115200);
  delay(1000);
  timestamp = millis();

  randomSeed(analogRead(A0));
  ReportInterval = 1;
  for ( p = 0 ; p < MAX_PATTERNS ; p++ ) {
    RandomizedIndex[p] = p ;
  }
}

/******************************************************************
   loop
 ******************************************************************/
void loop() {
  volatile static int8_t result = -1;

  //--------------------------------------------------
  // start Backpropagation Learning (BP)
  //--------------------------------------------------
  result = BP_Learning();

#ifdef DEBUG
  Serial.println ("DEBUG MODE: ACTIVE! ");
#endif

  if (result == 0) {               // 0:  training OK: no err
    // Error < ThrSuccess

    Serial.println ();
    Serial.println("vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv");
    PrintStatistixx();
    PrintNetPattern();
    Serial.println ();
    Serial.println ();
    Serial.println ("Training Set Solved! ");
    Serial.println ("-------------------- ");
    Serial.println ();
    Serial.println ();

  }

  else if (result == -1) {      // -1: loop limit reached: no success

    Serial.println("vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv");
    PrintStatistixx();
    PrintNetPattern();
    Serial.println ();
    Serial.println ();
    Serial.println ("loop limit reached - Training aborted! ");
    Serial.println ("-------------------------------------- ");
    Serial.println ();
    Serial.println ();

  }

  // Recognize test patterns:

  Serial.println ("TestInputC01 \"1\" ");  // Test, debug
  InputPatternRecognition( TestInputC01 );
  Serial.println ();
  Serial.println ();

  Serial.println ("TestInputC06 \"6\" ");  // Test, debug
  InputPatternRecognition( TestInputC06 );
  Serial.println ();
  Serial.println ();

  Serial.println ("TestInputT1, untrained");  // Test T1, debug
  InputPatternRecognition( TestInputT1 );
  Serial.println ();
  Serial.println ();

  Serial.println ("TestInputT9, untrained");  // Test T9, debug
  InputPatternRecognition( TestInputT9 );
  Serial.println ();
  Serial.println ();

#ifdef DEBUG
  Serial.println ("DEBUG MODE: ACTIVE! ");
#endif

  while (true) {        // debug: loop forever
    delay(1000); // training finished
  }

}

Debug Messages


d:/arduino/portable/packages/esp8266/tools/xtensa-lx106-elf-gcc/2.5.0-3-20ed2b9/bin/../lib/gcc/xtensa-lx106-elf/4.8.2/../../../../xtensa-lx106-elf/bin/ld.exe: address 0x3fffd150 of C:\Users\hw\AppData\Local\Temp\arduino_build_160748/esp8266_BPN_011.ino.elf section `.bss' is not within region `dram0_0_seg'

collect2.exe: error: ld returned 1 exit status

exit status 1
Error compiling for board NodeMCU 1.0 (ESP-12E Module).
earlephilhower commented 4 years ago

You are out of RAM: between RODATA constants and your very large static arrays, BSS can't fit within the ~50KB of RAM left over after the core's own use.

Try float instead of double (4 bytes vs. 8 bytes), moving constant arrays to PROGMEM (and using the appropriate pgm_read_xxx macros), etc.
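
A minimal sketch of that suggestion (assuming the TestInputC01 array from the MCVE above; the initializer values here are placeholders):

#include <pgmspace.h>  // PROGMEM and pgm_read_xxx; pulled in by Arduino.h on the ESP8266

// the pattern is stored in flash instead of RAM:
const byte TestInputC01[NUM_INPUTS] PROGMEM = { 0, 1, 1, 0 /* ... */ };

// each access then goes through a pgm_read_xxx accessor:
byte pixel = pgm_read_byte(&TestInputC01[i]);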

dsyleixa commented 4 years ago

float has too few significant digits for convergence (sigmoid + error function). The error msg is confusing and ambiguous: how can I retrieve the free RAM/stack/heap?

earlephilhower commented 4 years ago

That's a serious dynamic-range problem in the algorithm, ouch.

The error message is from GCC, not us, and it's pretty clear. BSS == your uninitialized memory arrays. They don't fit in the RAM segment, so the link fails.

Just build an empty sketch and you'll see the total heap space available printed as part of the Arduino output ("Program uses XXX bytes out of YYYY of RAM" or similar). Assume ~50KB total without any WiFi. And that obviously won't include anything dynamically allocated.

Stack is 4K always.
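
For reference, the suggested empty-sketch check is literally two empty functions; the RAM usage line then appears in the IDE's compile output:

void setup() {}
void loop() {}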

dsyleixa commented 4 years ago

I actually do not see or remember having allocated any dynamic memory at all. OTOH, a function returning the free memory at runtime would be fine too (one does exist for the Arduino Due (Cortex M3 ATSAM3X8E), but it does not work on the ESP8266).

earlephilhower commented 4 years ago

ESP.getFreeHeap() is the runtime command. It's not going to help you here, though: if your static arrays are too big, you can't even link an executable.
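
A minimal runtime check along those lines (a sketch fragment, assuming the core's ESP object and a started Serial port):

void loop() {
  Serial.printf("free heap: %u bytes\n", ESP.getFreeHeap());  // heap still available at runtime
  delay(1000);
}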

devyte commented 4 years ago

Your weight matrices and their change buffers alone come to over 50KB, plus all the other stuff.
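
A quick way to confirm that from the sketch itself (a minimal check; sizeof is evaluated at compile time, so this just restates the arithmetic):

Serial.printf("HiddenWeights: %u bytes\n", sizeof(HiddenWeights));              // (120+1)*25*8 = 24200
Serial.printf("ChangeHiddenWeights: %u bytes\n", sizeof(ChangeHiddenWeights));  // 24200
Serial.printf("OutputWeights: %u bytes\n", sizeof(OutputWeights));              // (25+1)*10*8 = 2080
Serial.printf("ChangeOutputWeights: %u bytes\n", sizeof(ChangeOutputWeights));  // 2080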

dsyleixa commented 4 years ago

Yes, I see, thanks! I'll try migrating to the ESP32. At least I finally know what the strange error msg means, thank you!