This may be outside the scope of LEAP. I'll direct you to MENNDL/GREGOR, where you can see that we have a way of representing PyTorch layers in a dynamically sized configuration for neural architecture search.
I was thinking more along the lines of evolving the parameters of the module, rather than the architecture. For SNNTorch at least, this is something I want. I would like to look at GREGOR, though.
I like the idea, since neural architecture search is such a common application of EAs.
@markcoletti has a point, though: adding a decoder that creates a `torch` `Module` phenotype (for example) introduces a not-so-small dependency on `torch`, which would be one more dependency for us to maintain as the `torch` API evolves over time.
Maybe that wouldn't be so big of a commitment, but my inclination is to keep it out of the core LEAP features.
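For concreteness, here's a rough sketch of the kind of decoder I mean. This is hypothetical code, not anything in LEAP; it assumes LEAP's `Decoder` base class with a `decode(genome)` method, and `TorchModuleDecoder` and the `module_factory` argument are names I'm making up for illustration:

```python
import torch
from leap_ec.decoder import Decoder


class TorchModuleDecoder(Decoder):
    """Decode a flat real-valued genome into the weights of a fixed torch Module."""

    def __init__(self, module_factory):
        # module_factory builds a fresh module on each decode,
        # e.g. lambda: torch.nn.Linear(4, 2)
        self.module_factory = module_factory

    def decode(self, genome, *args, **kwargs):
        module = self.module_factory()
        # Copy the genome's values into the module's parameters, in order.
        # The genome's length must match the module's total parameter count.
        flat = torch.as_tensor(genome, dtype=torch.float32)
        torch.nn.utils.vector_to_parameters(flat, module.parameters())
        return module
```

Even something this small pulls `torch` into the dependency tree, which is the commitment I'm hesitant about.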
Could make great fodder for a `contrib` package, though! We could interpret this issue in that light and keep it open if you think it's something you plan on tackling in the near future, @lukepmccombs.
I was already thinking of making it a separate package! I'm certainly interested in doing it. If I end up making enough progress to warrant making it available, I'll post a link to the repo here.
I think a separate package would be following the tradition set by MENNDL and GREGOR. (And Gremlin, I guess.) In any case, I'm keen to see what you come up with as that's well within my wheelhouse.
I think it would be interesting to try putting together functions for evolving PyTorch modules directly. You can access the whole of their parameters quite easily and alter them in place as well. I'm not sure how distributed evaluation would handle it, however.
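Something like this is what I have in mind (just a sketch; the Gaussian-noise mutation scheme and the `mutate_module` helper are illustrative, not proposed API):

```python
import torch


def mutate_module(module: torch.nn.Module, std: float = 0.1) -> None:
    """Perturb all of a module's parameters in place with Gaussian noise."""
    with torch.no_grad():
        # Flatten every parameter into one vector, perturb it,
        # then copy the values back into the module's parameters.
        flat = torch.nn.utils.parameters_to_vector(module.parameters())
        flat += torch.randn_like(flat) * std
        torch.nn.utils.vector_to_parameters(flat, module.parameters())


net = torch.nn.Linear(4, 2)
mutate_module(net, std=0.05)
```

The open question is whether modules mutated this way would survive the serialization that distributed evaluation requires.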