gretac closed this pull request 10 years ago
Note: I'm still going to add some tests to this to make sure the resulting computations are correct.
It looks like if you merge master into this branch it will hang on the simplenode example.
/usr/bin/python2 /home/robert/spike/test/nengo_tests/test_simplenode.py /home/robert/spike/test/../src is_spike
(or just do make test)
I also see that greta added an explanation saying this 'cannot be used when origins are used directly in connections'. Is this an example of that case? Can we add something that detects this case and displays a message to the user (if it does not already work that way)?
Also, I noticed that you didn't merge in my change to the test harness that includes rounding for the tests. I guess that means it must get exactly the same values as theirs now?
It's not yet getting exactly the same values. I am using your harness with rounding locally for now, but I will also be trying to get the values more precise, since right now they aren't even accurate to two digits. With the simple node, I am not yet sure what the problem is. I'll merge in master and take a look.
I think it makes sense for me to start working on this now. If I want to start making progress on getting this to pass all the tests where should I start? Does it make sense to just pull the latest version of this branch, and start debugging why the values aren't exactly the same? Is there any unpushed important code?
We just made some progress with the cases where subensembles were failing completely (not just a precision error). We're pushing now, but the differences caused by imprecision still exist.
Isn't this already in the master branch?
We have subensembles in master, yes... but those are subensembles built on the new architecture. This branch is subensembles on the old architecture. Not sure if we need it or not.
Is there an advantage to the 'old' architecture? I don't really understand the difference.
In the old architecture, we build the nodes on the user's machine, then serialize the network and distribute it. In the new architecture, we send commands to the machines so that they build the nodes themselves, and then we connect the machines to form the distributed network.
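To make the contrast concrete, here is a minimal hypothetical sketch of the two strategies. All names (`Node`, `distribute_old`, `distribute_new`, the `"build_node"` command) are illustrative placeholders, not the actual spike/nengo API:

```python
import pickle

class Node:
    """Illustrative stand-in for a simulation node."""
    def __init__(self, name):
        self.name = name

def distribute_old(node_names):
    # Old architecture: build the full network on the user's machine,
    # then serialize it; the serialized bytes are what gets shipped
    # to each worker machine.
    network = [Node(n) for n in node_names]
    return pickle.dumps(network)

def distribute_new(node_names):
    # New architecture: never build the network locally. Instead,
    # send lightweight build commands; each worker constructs its own
    # nodes, and the workers are then connected to form the network.
    return [("build_node", n) for n in node_names]

# The old path ships fully built objects; the new path ships only commands.
old_payload = distribute_old(["a", "b"])
new_commands = distribute_new(["a", "b"])
print(len(pickle.loads(old_payload)))  # 2 nodes arrive pre-built
print(new_commands)                    # [('build_node', 'a'), ('build_node', 'b')]
```

The memory implication follows directly: in the old path the whole network must fit on the user's machine before distribution; in the new path only the command stream does.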
I guess we haven't really stress-tested the new model much yet, but I'd say if it does what we want (in terms of distributing the memory load) it is better for sure. Even if a few things are a bit slower (which we can hopefully speed up after we fix the off-by-one errors), the new model is still better because it can run extremely large models. There is no benefit in a model that is super fast but can't actually instantiate a network large enough to need that speed.
Pull request is outdated and was superseded by an already merged change.
Closes #18 Closes #36