JesusEV / nest-simulator

The NEST simulator
GNU General Public License v2.0

`erase_used_eprop_history()` function appears to be removing an excessive number of entries #10

Closed JesusEV closed 4 months ago

JesusEV commented 4 months ago

**Describe the bug**

The `erase_used_eprop_history()` function appears to remove an excessive number of entries when executing online tasks. The issue manifests within the `compute_gradient()` function and becomes particularly evident during the execution of:

```cpp
auto eprop_hist_it = get_eprop_history( t_previous_spike - 1 );
```

After `erase_used_eprop_history()` has been invoked, the condition `t_previous_spike - 1 != eprop_hist_it->t_` unexpectedly holds. This indicates a problem, since the two values are expected to match.
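
For context, here is a minimal, self-contained sketch (not NEST code) of the invariant that appears to be violated. It assumes that `get_eprop_history( t )` behaves like a lower-bound search over the entry times `t_`; `HistEntry` and the free `get_eprop_history()` function below are simplified stand-ins for the NEST classes, and the printed mismatch corresponds to the debug output added in the patch further down.

```cpp
// Standalone illustration (not NEST code) of the invariant that compute_gradient()
// relies on: after erasing used history, the entry at t_previous_spike - 1 must
// still be present. get_eprop_history() is modelled here as a lower-bound search
// over the entry times t_, which is an assumption about the real implementation.
#include <algorithm>
#include <iostream>
#include <vector>

struct HistEntry
{
  long t_; // time step of the archived e-prop history entry
};

// hypothetical stand-in for EpropArchivingNode::get_eprop_history()
std::vector< HistEntry >::iterator
get_eprop_history( std::vector< HistEntry >& hist, const long t )
{
  return std::lower_bound(
    hist.begin(), hist.end(), t, []( const HistEntry& e, const long step ) { return e.t_ < step; } );
}

int
main()
{
  std::vector< HistEntry > eprop_history;
  for ( long t = 0; t < 10; ++t )
  {
    eprop_history.push_back( { t } );
  }

  const long t_previous_spike = 6;

  // If the erase step removes too much (here: everything up to and including
  // t_previous_spike - 1), the subsequent lookup in compute_gradient() ...
  eprop_history.erase( eprop_history.begin(), get_eprop_history( eprop_history, t_previous_spike ) );

  // ... no longer lands on the requested time step, and the gradient would be
  // computed from the wrong history entry.
  const auto eprop_hist_it = get_eprop_history( eprop_history, t_previous_spike - 1 );
  if ( eprop_hist_it->t_ != t_previous_spike - 1 )
  {
    std::cout << "requested " << t_previous_spike - 1 << " but got " << eprop_hist_it->t_ << std::endl;
  }
}
```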

**To Reproduce**

Run

```sh
python online_eprop_supervised_regression_neuromorphic_mnist.py
```

with the following patch applied (the sine-waves task does not reproduce the problem, as `erase_used_eprop_history()` there never erases all values due to lazy neurons):

```diff
diff --git a/models/eprop_iaf.cpp b/models/eprop_iaf.cpp
index 78f57b45c..ec8900666 100644
--- a/models/eprop_iaf.cpp
+++ b/models/eprop_iaf.cpp
@@ -398,6 +398,14 @@ eprop_iaf::compute_gradient( const long t_spike,

   auto eprop_hist_it = get_eprop_history( t_previous_spike - 1 );

+  if( (get_node_id() == 2425 or get_node_id() == 218) and t_previous_spike - 1 != eprop_hist_it->t_)
+  {
+    std::cout << " -> " << t_previous_spike - 1
+                << " " << eprop_hist_it->t_
+                << " " <<  t_previous_spike - 1 - eprop_hist_it->t_
+                << std::endl;
+  }
+
   double pre = 1.0;

   while ( t < std::min( t_spike, t_previous_spike + P_.eprop_isi_trace_cutoff_ ) )
diff --git a/nestkernel/eprop_archiving_node_impl.h b/nestkernel/eprop_archiving_node_impl.h
index 9f7e564f7..6e7da5676 100644
--- a/nestkernel/eprop_archiving_node_impl.h
+++ b/nestkernel/eprop_archiving_node_impl.h
@@ -180,7 +180,15 @@ EpropArchivingNode< HistEntryT >::erase_used_eprop_history( const long eprop_isi

   const auto it_eprop_hist_from_2 = get_eprop_history( 0 );
   const auto it_eprop_hist_to_2 = get_eprop_history( update_history_.begin()->t_ - 1 );
+
+  int old_size = eprop_history_.size();
   eprop_history_.erase( it_eprop_hist_from_2, it_eprop_hist_to_2 ); // erase found entries since no longer used
+  int new_size = eprop_history_.size();
+
+  if(get_node_id() == 2425 and old_size - new_size)
+  {
+    std::cout << " t_curr: " << t_curr << " n_removed: " << old_size - new_size << std::endl;
+  }
 }

 template < typename HistEntryT >
diff --git a/pynest/examples/eprop_plasticity/online_eprop_supervised_regression_neuromorphic_mnist.py b/pynest/examples/eprop_plasticity/online_eprop_supervised_regression_neuromorphic_mnist.py
index 4d386cf70..300901397 100644
--- a/pynest/examples/eprop_plasticity/online_eprop_supervised_regression_neuromorphic_mnist.py
+++ b/pynest/examples/eprop_plasticity/online_eprop_supervised_regression_neuromorphic_mnist.py
@@ -117,8 +117,8 @@ np.random.seed(rng_seed)  # fix numpy random seed
 # Using a batch size larger than one aids the network in generalization, facilitating the solution to this task.
 # The original number of iterations requires distributed computing.

-n_batch = 100  # batch size, 64 in reference [2], 32 in the README to reference [2]
-n_iter = 200
+n_batch = 50  # batch size, 64 in reference [2], 32 in the README to reference [2]
+n_iter = 1

 steps = {}
```
JesusEV commented 4 months ago

Going down the rabbit hole, the problem seems to arise from the `write_update_to_history()` function, which adds the following shifts to the recorded times: `t_current_update + shift` and `t_previous_update + shift`.

When this shift is removed, the problem described above disappears. However, this requires a little more analysis.
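
To make the suspicion concrete, here is a rough, self-contained sketch (again not NEST code, and only a hypothesis) of how such a shift could interact with `erase_used_eprop_history()`: since the erase window extends up to `update_history_.begin()->t_ - 1` (see the hunk above), storing the update times with an added `shift` widens that window by `shift` steps.

```cpp
// Rough, standalone sketch (not NEST code) of the suspected interaction: the
// erase window of erase_used_eprop_history() reaches up to
// update_history_.begin()->t_ - 1 (see the hunk above), so any shift added by
// write_update_to_history() to the stored update times widens that window and
// can swallow history entries that compute_gradient() still needs.
#include <algorithm>
#include <iostream>
#include <vector>

struct HistEntry
{
  long t_; // time step of the archived e-prop history entry
};

int
main()
{
  // e-prop history covering time steps 0..19
  std::vector< HistEntry > eprop_history;
  for ( long t = 0; t < 20; ++t )
  {
    eprop_history.push_back( { t } );
  }

  const long t_previous_update = 10; // oldest update time still referenced
  const long shift = 4;              // hypothetical shift added by write_update_to_history()

  // the oldest entry of update_history_, as stored with the shift applied
  const long oldest_update_t = t_previous_update + shift;

  // erase everything before oldest_update_t - 1, mirroring the range
  // [ get_eprop_history( 0 ), get_eprop_history( update_history_.begin()->t_ - 1 ) )
  const auto it_to = std::lower_bound( eprop_history.begin(),
    eprop_history.end(),
    oldest_update_t - 1,
    []( const HistEntry& e, const long step ) { return e.t_ < step; } );
  eprop_history.erase( eprop_history.begin(), it_to );

  // With shift = 4 the oldest surviving entry is t_ = 13; without the shift it
  // would be t_ = 9, i.e. the shift removes four entries that may still be needed.
  std::cout << "oldest remaining history entry: t_ = " << eprop_history.front().t_ << std::endl;
}
```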