CodeReclaimers opened 7 years ago
With a couple of exceptions, I am about ready to merge my config_work branch into my fork's master branch and thus into a pull request. That will take care of a lot of configuration work that has been waiting around in TODO form (in the docs and in the source code) for a while, including documenting it (and making some corrections to the current documentation version, which has notes in various places about where the config_work branch has fixes for issues). The two exceptions are:
All in all, I would be quite happy to see a release soon - 0.92? I'll see about updating the documentation (including various change records) for that. Currently, I'm describing the repository's master branch as 0.91-github and my fork's config_work branch as 0.91-config_work, incidentally.
-Allen
P.S. Might I brag slightly? I am happy to report that coveralls.io says my fork's mypy_checking branch (which is based off of a branch of config_work, multiparam_funcs) has a test coverage of 95.43%.
What can I say - I am slow. Thanks to @drallensmith for pinpointing answers that motivated me to do more, and to @bennr01 for clear answers about computing. And most importantly, thank you for a great project continuation!
P.S. When I grow up I want to be like you :)
You're not nearly as slow as me @d0pa, I haven't done anything on my open projects in what feels like months now. :)
@drallensmith, thank you for paying attention to test coverage, that's been desperately needed for a while!
@CodeReclaimers distributed.py should be finished soon (thanks to @drallensmith for identifying some issues and trying to fix some of them). Currently, the only problems are the tests, which seem to fail or pass at random. In the last Travis build, only the python2.7 and pypy3 tests failed (the pypy2 and python3.x tests are working).
@bennr01, would you say that it should be characterized as "alpha" or "beta"? This description may vary depending on the python version in use, BTW; 2.7 seems to work the best.
Probably beta. I do not plan to add any specific new features to neat/distributed.py, but some of the tests fail randomly (see above).
> @drallensmith, thank you for paying attention to test coverage, that's been desperately needed for a while!
Quite welcome; the version of config_work I just uploaded (with backports from mypy_checking, but no documentation or distributed.py updates) has a coveralls coverage of 96.605 percent :-}
I'll start on the documentation fixes next. (Sphinx actually has a built-in, non-third-party extension for checking documentation coverage, but ironically it's insufficiently documented to use... My personal goal is to have the documentation at least as comprehensive as that of most Python standard library modules.)
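For reference, the extension in question is presumably sphinx.ext.coverage; a minimal sketch of enabling it, assuming an otherwise standard Sphinx setup:

```python
# conf.py - minimal sketch; sphinx.ext.coverage ships with Sphinx itself
extensions = ["sphinx.ext.coverage"]
```

Running sphinx-build -b coverage <sourcedir> <outdir> then writes a python.txt report of undocumented objects into the output directory.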
The documentation fixes are up, including an HTML version at drallensmith.github.io. I also added a couple of tests I had forgotten about for ctrnn and iznn - coverage is now at 96.761 percent. (Admittedly, part of the coverage work is simply figuring out where it is appropriate to put a pragma: no cover comment; for instance, a test checking that cross-validation statistics gathering doesn't work is undesirable, since it should work at some point in the future. It is also unlikely that a machine without threading will become available to test on...)
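For illustration, a sketch of the sort of exclusion being described - the no-threading fallback below is hypothetical, not neat-python's actual code:

```python
try:
    import threading
except ImportError:  # pragma: no cover
    # Fallback for interpreters built without thread support; excluded
    # from coverage, since no such machine is available to test on.
    import dummy_threading as threading
```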
Now, all that should be left on my end for a release is incorporating @bennr01's distributed.py changes (both the file and the documentation, plus any changes to the appropriate test scripts).
@CodeReclaimers: BTW, don't forget the other PR waiting besides mine - #99 ... I can easily change the documentation to account for it.
Thanks for the reminder, glad to be rid of Indexer. :)
Quite welcome! In my fork's distributed_rewrite branch is now a combination of my config_work branch and @bennr01's work on the distributed.py module and related (plus a bit by me, particularly on documentation and testing). Up on drallensmith.github.io is what I'm hoping is the final draft of the documentation for the 0.92 release; if people could take a look at it (and the code), I'd appreciate it. Once everyone (particularly @bennr01 for the distributed.py code) has had a chance to look at it in html form, and I've made any corrections needed, I'll merge it into my master branch, from where it'll go in as a pull request for this repository's master branch; that seems to be the most convenient time for code comments (IIRC, one can put comments regarding specific lines during a pull request?).
BTW, it should be noted that the main distributed.py testing is currently being skipped on pypy (including pypy3); while these tests usually work in command-line testing (pypy threading problems seem to be the most frequent holdup), something about the Travis environment makes them rather more inconsistent. (I have seen this problem before with something else also involving sockets and multiple processes, although that was happening with the default python.) To run the tests on pypy locally, just comment out the @unittest.skipIf(ON_PYPY, ...) lines.
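The guard presumably looks something like the following sketch (the exact definition of ON_PYPY in the test scripts is an assumption):

```python
import platform
import unittest

# Assumed detection; the actual test scripts may define ON_PYPY differently.
ON_PYPY = platform.python_implementation().lower() == "pypy"

class TestDistributed(unittest.TestCase):
    @unittest.skipIf(ON_PYPY, "distributed tests are inconsistent on PyPy under Travis")
    def test_distributed_evolution(self):
        ...  # actual test body
```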
@drallensmith The documentation looks good (as always). The documentation for neat.distributed does not contain any mistakes, but I added a few comments containing improvement suggestions to the commit. Thanks for your work.
@CodeReclaimers I finished my changes. They are now in @drallensmith's distributed_rewrite branch. Thanks for creating and maintaining neat-python :)
@bennr01: Quite welcome, and thank you for the compliment (also as always)! I've done the documentation updates and changed some internal-use-only names to start with _.
I realized, BTW, that one reason checkpointing has been very slow, particularly with pypy, is that the code was always using pickle, when on Python 2.7 cPickle is much, much faster. The change in speed is particularly noticeable with pypy.
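The usual idiom for this - roughly what the change amounts to - is:

```python
try:
    import cPickle as pickle  # much faster than pure-Python pickle on 2.7
except ImportError:
    import pickle  # Python 3 loads its C accelerator (_pickle) automatically
```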
I mis-stated a bit regarding code comments & the push above - it's more that all the code changes will be grouped together at once in the push, instead of being in lots of little commits...
I have also updated the rest of the documentation (installation.rst and xor_example.rst) and setup.py to 0.92 (the links for the former won't work until the tag is actually in, of course...).
Distributed_rewrite is now merged with my master; pull request #97 now includes all of it. (I'll have to make some notes on the new issues that QuantifiedCode found... however, if you leave out the various problems it has with the test code, plus the issue it has with saw_EOFError (which is, IMHO, best for human comprehension), there are more issues fixed than introduced.)
@CodeReclaimers: One issue (apparently with my understanding of .git) - I had meant to prevent merges from doing anything to .coveralls.yml (is that token supposed to be public, BTW?) by putting it into .gitignore, but the merge is trying to delete the file. Sorry about that... coveralls works even without it, although there may be some increased capabilities that are not available.
@drallensmith regarding the QuantifiedCode issues in #97:
- Avoid using "non-Pythonic" variable names in distributed.py: I think we can remove saw_EOFError completely. If I recall correctly, it is no longer required (I removed it when adding the reconnection mechanic).
- Use @property instead of Java-style setter/getter methods in distributed.py: Unfortunately, this cannot be changed, because @property does not work with managers.
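For anyone curious why: multiprocessing manager proxies forward method calls, not attribute access, so a property on the managed object is unreachable through its proxy. A minimal sketch (the JobCounter class is hypothetical, not distributed.py's actual code):

```python
from multiprocessing.managers import BaseManager

class JobCounter(object):
    # Hypothetical shared object, for illustration only.
    def __init__(self):
        self._count = 0

    def get_count(self):
        # Reachable through a proxy: proxies forward method calls.
        return self._count

    @property
    def count(self):
        # NOT reachable through a proxy: attribute access is not forwarded.
        return self._count

class CounterManager(BaseManager):
    pass

CounterManager.register("JobCounter", JobCounter)

if __name__ == "__main__":
    manager = CounterManager()
    manager.start()
    proxy = manager.JobCounter()
    print(proxy.get_count())  # works: prints 0
    # print(proxy.count)      # would raise AttributeError on the proxy
    manager.shutdown()
```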
@bennr01: Good point. Fixed.
Thanks for the reminder about the Travis token--I seem to recall when I set it up, an example or the instructions made it seem like it should be there, despite the warning about not making it public.
If everybody thinks #97 is good enough for a release, I'll just go ahead and do that today.
Quite welcome; I think that the instructions assume that people are either doing purely public repositories (and thus would have no need for travis-pro, etc) or purely private repositories (in which case it's not a problem). The situation does unfortunately cause a bit of a merge problem, but that should be easy to fix.
Oh. Should I put a caution about distributed.py being (very) beta in its docstrings, or just leave it in the sphinx documentation only?
OK, I fixed the .coveralls.yml merge conflict and put the note in distributed.py's main docstring.
Thanks, I just merged, made a couple of minor updates and pushed the new version to PyPI. I'll post announcements on Twitter and reddit, if anybody knows of another good place to publicize it, please feel free to do so (or let me know and I can do it).
Do any of you have things you'd like to get in before the 1.0 release? Originally I really wanted to get at least a minimal HyperNEAT implementation in there, and it's still promised in the docs. I doubt I'll have time to tackle that any time soon, so if anybody wanted to have a go at it please feel free, maybe I can help test or something.
Well, I've been working on using activation and aggregation functions that have, in addition to the usual input parameter(s), further parameters that are themselves evolved (as FloatAttributes, defaulting to uniform initialization). The simplest case of this is a weighted combination of two or three functions, but it can get more complex than that. The code is up on my fork under the branch multiparam_funcs. For a look at three examples of what I mean in terms of activation functions, see the very-incomplete documentation's activations page. These (and the aggregation functions) can go beyond one parameter to two or more, although that's rather harder to plot in a limited amount of space - for instance, min(abs(x), max(x, a*x, b*(exp(x+c)-exp(c)))) for a combination of ReLU, Leaky ReLU, ELU, and others.
What's the advantage? There are two major ones (in addition to, as with ELU, avoiding manual tuning of some parameters, such as the leakiness of a Leaky ReLU):
The current status of the code is that it at least seems to work (and the multiparam_relu function, max(x, a*x), is capable of very quickly solving xor by evolving a=-1; this sometimes happens within 1 generation...). Given all the manipulations involved, I've been checking using mypy (see mypy_checking), but the code definitely needs more review and testing from people more familiar with the guts of python - and more experienced with writing code that others can work with easily - than myself. The documentation needs updating for this even more so...
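To make the idea concrete, a toy sketch (not the actual multiparam_funcs implementation) of an activation function with one evolved parameter:

```python
# Toy sketch of a multiparameter activation function; in the real code
# `a` would be evolved as a FloatAttribute on the node.
def multiparam_relu(x, a):
    # a = 0 gives plain ReLU, a small positive a gives Leaky ReLU,
    # and a = -1 turns the node into abs(x).
    return max(x, a * x)

print(multiparam_relu(-2.0, 0.0))   # 0.0  (ReLU)
print(multiparam_relu(-2.0, 0.05))  # -0.1 (Leaky ReLU)
print(multiparam_relu(-2.0, -1.0))  # 2.0  (abs)
```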
A couple of expansions onto this that I'm very interested in doing:
I have also commented, in the current documentation, on some other places that may need looking at - search for "TODO" (which I perhaps should have indexed... now done for the multiparam_funcs documentation). For instance, it's actually possible for a "homologous" node to contribute more to genomic distance than a non-homologous node, even if the disjoint coefficient is greater than the weight coefficient. (If it is the same, and likely too high, one gets the behavior that caused iznn to take forever in the test scripts - the population gets divided into lots and lots of species with only 2 genomes each, thanks to min_species_size.) As another instance, it's actually possible for the current stagnation setup to remove species containing genomes that weren't making progress because they were at the maximum possible fitness...
Admittedly, proper configuration will help with the above (although eliminating the first would be rather difficult, if one wants to keep a wide range of weights, biases, and/or response multipliers possible, for instance). Some of the others, such as bias not being part of the distance calculation for iznn, are more matters for "how do we fix this with backward compatibility maximized but forward utility also maximized" than necessarily difficult to code.
Regarding HyperNEAT, would this perhaps be of interest to @bennr01 and @mathiasose for their respective NEAT-tetris experiments? This is also related to the question of saving networks - see #21.
@drallensmith HyperNEAT looks interesting. I just read through a paper and the HyperNEAT user page, and I think I now understand the concept. The problem is integrating it into neat-python. If I understand HyperNEAT correctly, the idea is that the weight (and in more advanced versions also the bias) of a connection is determined by a second NEAT network (the CPPN), which has 4 inputs (x1, y1, x2, y2) and 1 or 2 outputs (the weight and the bias of the connection). But how should we determine whether a NEAT should be a HyperNEAT? (Maybe use a parameter in the config?) And which configuration should the CPPN use? The same as the substrate (= the normal NEAT whose weight and bias values are set by the CPPN), or load another config file? And, most importantly, how should we specify the position of the nodes? We can probably use a new class for the inputs and convert the currently used float inputs into this class, but what about the hidden and output nodes?
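To make the querying step concrete, a hypothetical sketch - not an existing neat-python API; cppn stands in for an evolved network with an activate() method (as neat-python's FeedForwardNetwork has), and the coordinate lists and threshold are illustrative:

```python
def build_substrate_weights(cppn, source_coords, target_coords, threshold=0.2):
    # One CPPN query per candidate connection, fed the coordinates
    # of both endpoints.
    weights = {}
    for (x1, y1) in source_coords:
        for (x2, y2) in target_coords:
            w = cppn.activate((x1, y1, x2, y2))[0]
            # Common HyperNEAT convention: only express connections whose
            # CPPN output magnitude exceeds a threshold.
            if abs(w) > threshold:
                weights[((x1, y1), (x2, y2))] = w
    return weights
```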
(Actually, come to think of it, multiparam funcs could be used for the activation and aggregation functions of HyperNEAT, enabling the CPPN to output those parameters also...)
The question of how to specify the position of the nodes (if not using ES-HyperNEAT, in which they are determined by the CPPN - although one still needs to specify the arrangement of the input and output nodes...) is part of why I was referencing #21, actually; while the issue's title is asking re xml, the question is more generally how to save NNs in a more human-comprehensible (and portable) format than pickle. (And, for that matter, that's of interest for if one wants to specify some of the structure of a regular NEAT to start (to speed things up), and for more clearly specifying default/whatever activation/aggregation functions for particular nodes, as I mentioned above re multiparam_funcs.)
@drallensmith I think the best portable format would be C structs - would it be of interest to replace pickle with C structs? Using C structs would also help https://github.com/CodeReclaimers/neat-python/issues/26
Interesting idea. C structs would certainly be highly portable in some respects, provided that the language has good ways to manipulate them. OTOH, they certainly aren't human-editable or even human-readable, and are at least as problematic as JSON in terms of convenience of isolating to each module any module-specific code for input/output.
In terms of pickle, there are two different meanings for saving a network - saving the current state, including configuration and randomization, and the less-encompassing saving of the network itself (the arrangement of nodes + connections and the parameters/attributes of each of these). The first does not need to be particularly portable; it is mainly of use for checkpointing. (I suspect pickle will continue to be the best option for it.) For the second, a variety of means may be possible, and pickle is not particularly suited for it, being neither portable nor human-readable/comprehensible.
We could also add the ability to dump a network into a dictionary which only contains strings and numbers, and write a function to create a network from this dictionary. This dictionary could then be serialized using json, xml, bson... Maybe something like {"type": "feedforward", "nodes": ...}?
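A minimal sketch of that round trip; the layout and helper names (network_to_dict / network_from_dict) are assumptions, not an existing neat-python API:

```python
import json

def network_to_dict(nodes, connections):
    # Strings and numbers only, so any of json/xml/bson can carry it.
    return {
        "type": "feedforward",
        "nodes": [{"id": nid, "bias": bias, "activation": act}
                  for (nid, bias, act) in nodes],
        "connections": [{"from": src, "to": dst, "weight": w}
                        for (src, dst, w) in connections],
    }

def network_from_dict(d):
    nodes = [(n["id"], n["bias"], n["activation"]) for n in d["nodes"]]
    conns = [(c["from"], c["to"], c["weight"]) for c in d["connections"]]
    return nodes, conns

d = network_to_dict(nodes=[(0, 0.5, "sigmoid")],
                    connections=[(-1, 0, 1.25)])
text = json.dumps(d)  # portable, human-readable
assert network_from_dict(json.loads(text)) == ([(0, 0.5, "sigmoid")],
                                               [(-1, 0, 1.25)])
```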
The dictionary could also contain sub-dictionaries, as long as it wasn't recursive. Making the encoded version of a part of a network available via __repr__ would be helpful.
As a suggestion, storing NNs can be done with the CSV format; it's portable and human-readable, but it will require two files, one for neuron attributes and a second for connections.
NN attributes file (Functional):
node -1, activation_func, ..., aggregation_func
node N, activation_func, ..., aggregation_func

NN connections file (Structural): From Node, To Node(s)
node -1, node 1, node 2, node 3
node -2, node 1, node 2, node 3
node 1, node 4
node 2, node 4
node 3, node 4
This way it will be easy to edit, read, and plot; additionally, it will be easily expandable (more advanced attributes can be stored in additional files).
If such a format is of interest, I'll build a real example for most variations of NEAT, manually :)
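A sketch of how simply the proposed connections file would parse (the filename and layout follow the example above and are hypothetical):

```python
import csv

with open("xor_connections.csv") as f:
    for row in csv.reader(f, skipinitialspace=True):
        from_node, to_nodes = row[0], row[1:]
        print(from_node, "->", ", ".join(to_nodes))
```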
Manually? Ouch! Ideally, the library would be able to read and write multiple formats. Having a common interface design would help with that, and the different alternatives could go through an evolutionary contest of their own :)
Manually at first, just to make sure all variables for each NEAT and neuron variation can be covered in a CSV config that is easy to work with. By a common interface, do you mean adding a data.py that would allow loading and storing NNs in various portable formats?
Got it re manually. The "data.py" idea is a good one; I was actually more thinking of a specified interface for genomes/genes that would allow other code to get info from them and be able to create them without knowing so much about their inner workings - one that ideally could be used by all of nn, iznn, ctrnn, and expansions onto these such as RBF-NEAT. One aspect of this in my current thinking is that repr() should return something for which there is another function such that feeding it the repr results would get you the neural network (or part thereof) back. I've been working on something like this for one aspect of neurons in my fork's multiparam_funcs branch, although what I just pushed to github was some other updates. I've also been looking at this in my fork's controlled_mutation branch, although that's just the very start of what would be needed.
Yes, repr() is by far the best option; the only thing is, it would be nice to have it in a separate file, so that in the future any changes to printing out or loading a manually edited NN would not require manually editing multiple classes in multiple files. The idea is to have repr() return just comma-separated values, with each value described in the documentation by position. data.py would be responsible for converting that line to various formats (data.py would be similar to reporting.py). I need a bit of time to get up to speed with the other branches (RIP my life).
data.py is a misleading name; probably something like import_export.py is better. If the idea makes sense at all.
repr itself, as far as I know, has to live in with the classes it is giving information about. A separate file could hold helper functions, however, both for repr if needed and for translating the repr results to various formats. It could even be done with more than one type of reporting from repr, with the import_export.py (or whatever) file being responsible for translating between different formats. (repr is technically supposed to output something that is usable to recreate the original object.)
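A toy sketch of that convention; NodeGene here is a stand-in, not neat-python's actual gene class, and from_repr is an assumed helper name:

```python
class NodeGene(object):
    def __init__(self, key, bias, activation):
        self.key = key
        self.bias = bias
        self.activation = activation

    def __repr__(self):
        # Emit something that can recreate the original object.
        return "NodeGene({0!r}, {1!r}, {2!r})".format(
            self.key, self.bias, self.activation)

def from_repr(text):
    # eval is acceptable only because we control the repr output; a real
    # import_export.py would parse the comma-separated values instead.
    return eval(text, {"NodeGene": NodeGene})

node = NodeGene(3, -0.5, "sigmoid")
clone = from_repr(repr(node))
assert (clone.key, clone.bias, clone.activation) == (3, -0.5, "sigmoid")
```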
@drallensmith, @bennr01, I played around over the weekend with the interface; I looked for inspiration on the interface structure in this publication and in NeuroML2 XML configurations.
Writing networks manually, I might be getting ahead of the project :) It naturally becomes a unified configuration for the network and a human-readable/editable network dump at the same time. Keeping it in CSV or a dict is the same thing; just a couple of lines of code convert it back and forth. Would it be wrong to say that it would be better to design a unified config structure for NEAT and the other NEAT variations, under the assumption that the project will at some future point be evolving cortex-like structures? (It does not look right to have different definitions of nodes/connections for NEAT and the other NEAT variations.)
Is there anything wrong with keeping 3 separate config files, such as structure_config (definition of network node positions and connections), functions_config (definition of network node and connection attributes), and effective_config (containing the evolved/predefined network)? There seems to be a general problem, at least for me, in splitting the functional and effective config(s), since the user has to define, for example in HyperNEAT, the layers and node positions; moreover, there can be a mix of static and evolved nodes/connections.
P.S. I need to work a bit more on the configuration file example before I show it here, but answers, suggestions, and questions would help me a lot. I think a config example for the Retina Problem is a good start for HyperNEAT, isn't it? :)
First, thanks to everybody that's been answering questions and working on pull requests, especially @drallensmith, @bennr01 and @d0pa!
When do you all want to take what you're working on now, get it to a mostly-usable state, and make a release so people can try it out through pypi? We can always just do it piecemeal--no need for everybody to be done at the same time.