SpiNNakerManchester / sPyNNaker

The SpiNNaker implementation of the PyNN neural networking language
Apache License 2.0

Exception when setting individual spike trains in a population of SpikeSourceArrays #159

Closed astoeckel closed 8 years ago

astoeckel commented 8 years ago

Consider the following code:

import pyNN.spiNNaker as sim

sim.setup()
p = sim.Population(3, sim.SpikeSourceArray, {"spike_times": []})
p.tset("spike_times", [[10.0, 20.0], [10.0, 20.0], [30.0, 40.0]])
sim.run(1000.0)

This code crashes at sim.run with the following output:

2015-10-27 17:42:42 INFO: Read config files: /localhome/nmpidemo/.spynnaker.cfg, spynnaker.cfg, /localhome/nmpidemo/sPyNNaker/sPyNNaker/spynnaker/spynnaker.cfg
2015-10-27 17:42:43 INFO: sPyNNaker (c) 2015 APT Group, University of Manchester
2015-10-27 17:42:43 INFO: Release version 2015.005.01(Arbitrary) - September 2015. Installed in folder /localhome/nmpidemo/sPyNNaker/sPyNNaker
2015-10-27 17:42:43 WARNING: A timestep was entered that has forced pacman103 to automatically slow the simulation down from real time by a factor of 10.0. To remove this automatic behaviour, please enter a timescaleFactor value in your .pacman.cfg
2015-10-27 17:42:43 INFO: Setting time scale factor to 10.0.
2015-10-27 17:42:43 INFO: Setting appID to 30.
2015-10-27 17:42:43 INFO: Setting machine time step to 100.0 micro-seconds.
2015-10-27 17:42:43 INFO: Creating transceiver for 192.168.6.1
2015-10-27 17:42:43 INFO: going to try to boot the machine with scamp
2015-10-27 17:42:49 INFO: failed to boot machine with scamp, trying to power on machine
2015-10-27 17:42:52 INFO: going to try to boot the machine with scamp
2015-10-27 17:42:57 INFO: Detected a machine on ip address 192.168.6.1 which has 2567 cores and 432 links
2015-10-27 17:42:57 INFO: successfully booted the machine with scamp
2015-10-27 17:42:58 INFO: *** Running Mapper *** 
Allocating virtual identifiers
|0                           50%                         100%|
 ============================================================
Partitioning graph vertices
|0                           50%                         100%|
 Traceback (most recent call last):
  File "run.py", line 8, in <module>
    sim.run(1000.0)
  File "/localhome/nmpidemo/sPyNNaker/sPyNNaker/spynnaker/pyNN/__init__.py", line 193, in run
    _spinnaker.run(run_time)
  File "/localhome/nmpidemo/sPyNNaker/sPyNNaker/spynnaker/pyNN/spinnaker.py", line 280, in run
    self.map_model()
  File "/localhome/nmpidemo/sPyNNaker/sPyNNaker/spynnaker/pyNN/spinnaker.py", line 612, in map_model
    self._execute_partitioner(pacman_report_state)
  File "/localhome/nmpidemo/sPyNNaker/sPyNNaker/spynnaker/pyNN/spinnaker.py", line 759, in _execute_partitioner
    self._machine)
  File "/localhome/nmpidemo/sPyNNaker/PACMAN/pacman/operations/partition_algorithms/partition_and_place_partitioner.py", line 92, in partition
    resource_tracker, graph)
  File "/localhome/nmpidemo/sPyNNaker/PACMAN/pacman/operations/partition_algorithms/partition_and_place_partitioner.py", line 147, in _partition_vertex
    graph_to_subgraph_mapper, resource_tracker)
  File "/localhome/nmpidemo/sPyNNaker/PACMAN/pacman/operations/partition_algorithms/partition_and_place_partitioner.py", line 189, in _partition_by_atoms
    max_atoms_per_core, graph)
  File "/localhome/nmpidemo/sPyNNaker/PACMAN/pacman/operations/partition_algorithms/partition_and_place_partitioner.py", line 295, in _scale_down_resources
    graph)
  File "/localhome/nmpidemo/sPyNNaker/PACMAN/pacman/model/partitionable_graph/abstract_partitionable_vertex.py", line 124, in get_resources_used_by_atoms
    sdram_requirement = self.get_sdram_usage_for_atoms(vertex_slice, graph)
  File "/localhome/nmpidemo/sPyNNaker/sPyNNaker/spynnaker/pyNN/models/spike_source/spike_source_array.py", line 373, in get_sdram_usage_for_atoms
    send_buffer = self._get_spike_send_buffer(vertex_slice)
  File "/localhome/nmpidemo/sPyNNaker/sPyNNaker/spynnaker/pyNN/models/spike_source/spike_source_array.py", line 205, in _get_spike_send_buffer
    for timeStamp in sorted(self._spike_times):
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
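For reference, this ValueError is what Python raises when `sorted()` is applied to a 2-D NumPy array: sorting compares rows with `<`, which yields a boolean array per comparison, and the truth value of a multi-element boolean array is ambiguous. A minimal sketch reproducing the same error outside sPyNNaker (assuming `spike_times` has been converted internally to a 2-D array, as the traceback suggests):

```python
import numpy as np

# The same per-neuron spike trains as in the repro script above,
# but stored as a 2-D array rather than a list of lists.
spike_times = np.array([[10.0, 20.0], [10.0, 20.0], [30.0, 40.0]])

try:
    # sorted() compares rows pairwise; row1 < row0 produces a boolean
    # array, and bool() on a multi-element array raises ValueError.
    sorted(spike_times)
except ValueError as e:
    print(e)  # "The truth value of an array with more than one element is ambiguous. ..."
```

Sorting a plain list of lists (as passed to `tset`) works fine, since Python compares lists lexicographically; the failure only appears once the nested spike times have been turned into a NumPy matrix.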
alan-stokes commented 8 years ago

As with the last issue, it seems this has been fixed in the master branch with the last refactor. You're welcome to download the master branches of the entire tool chain to test it out, though. It won't be released until December, under the release name "AnotherFineProductFromTheNonsenseFactory".