opencog / asmoses

MOSES Machine Learning: Meta-Optimizing Semantic Evolutionary Search for the AtomSpace (https://github.com/opencog/atomspace)
https://wiki.opencog.org/w/Meta-Optimizing_Semantic_Evolutionary_Search

Provisional design draft of AS-MOSES #1

Closed ngeiswei closed 6 years ago

ngeiswei commented 6 years ago

This will probably have to be moved to some doc folder, but for now it is better to leave it here in the open.

bgoertzel commented 6 years ago

Nil, this is great, thanks!

One difference between this and what I was thinking: I was thinking of leaving the deme management and feature selection in MOSES as-is, and just replacing what happens inside the deme... But of course, yeah, ultimately all this can be done via reasoning...

I suggest two phases, though:

Phase 1) Replace each deme with an AtomSpace, and leave the deme management and feature selection as-is.

Phase 2) Look at replacing the deme management and feature selection with AtomSpace mechanisms as well.

If we agree on these two phases, then for now we can focus on Phase 1, which will be hard enough. What do you think?
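To make Phase 1 concrete, here is a minimal sketch of what "one AtomSpace per deme" might look like from Scheme, assuming the standard AtomSpace Scheme API (`cog-new-atomspace`, `cog-set-atomspace!`); the toy candidate formula and the `deme-fitness` predicate are purely illustrative, not existing AS-MOSES API:

```scheme
(use-modules (opencog) (opencog exec))

;; One AtomSpace per deme; candidates live inside it as Atomese programs.
(define deme-atomspace (cog-new-atomspace))
(cog-set-atomspace! deme-atomspace)

;; A toy candidate: a boolean formula over three features.
(define candidate
  (Or (And (Predicate "f1") (Not (Predicate "f2")))
      (Predicate "f3")))

;; Attach its fitness so other OpenCog processes can reason about it.
;; The truth value used as a score is illustrative only.
(Evaluation (stv 0.72 0.9)
  (Predicate "deme-fitness")
  (List candidate))
```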

ngeiswei commented 6 years ago

So @bgoertzel, if I understand correctly, what you have in mind for now is to:

  1. replace Combo with Atomese,
  2. store the candidates in AtomSpace(s).

This would provide three immediate advantages:

  1. The fitness function can be user-defined in Atomese (see the sketch after this list).
  2. Results are, by default, directly in Atomese.
  3. Some OpenCog meta-learning can already take place at the deme level.
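As an illustration of advantage 1, here is a minimal sketch of a user-defined fitness written in Scheme and exposed to Atomese through a GroundedPredicateNode; `my-fitness`, its placeholder score, and the toy candidate are all hypothetical:

```scheme
(use-modules (opencog) (opencog exec))

(define (my-fitness candidate)
  ;; Placeholder scoring: a real fitness would run the candidate over the
  ;; training data held in the AtomSpace. Grounded predicates must return
  ;; a TruthValue, so the score is packed into one.
  (stv 0.66 0.9))

(define candidate (And (Predicate "f1") (Not (Predicate "f2"))))

;; The optimizer could then score any candidate like this:
(cog-evaluate!
  (Evaluation
    (GroundedPredicate "scm: my-fitness")
    (List candidate)))
```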

I expect fitness evaluation will be slower, though this could be counteracted by caching sub-tree evaluations (likely the most costly part of the fitness computation).
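A minimal sketch of that caching idea, assuming Guile's built-in hash tables; `eval-subtree` is a stand-in for however a sub-program would actually be evaluated against the training data:

```scheme
(use-modules (opencog))

(define subtree-cache (make-hash-table))

(define (eval-subtree subtree)
  ;; Placeholder: a real implementation would evaluate the sub-program
  ;; over the training table; here it just returns a dummy truth value.
  (stv 0.5 0.5))

(define (cached-eval subtree)
  ;; Key on the printed form of the atom, so structurally identical
  ;; sub-trees share a single cache entry.
  (let ((key (format #f "~a" subtree)))
    (or (hash-ref subtree-cache key)
        (let ((result (eval-subtree subtree)))
          (hash-set! subtree-cache key result)
          result))))
```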

I don't mind taking that route, but I somewhat fear it's going to be a lot of work and won't yield a tremendous return compared to what we already have. I mean, Combo can already be ported to Atomese; however, the fitness cannot easily be user-defined and AtomSpace-based. On the bright side, it seems this approach would take less time before it can work effectively.

Actually, it's probably good to first come up with some use cases that justify the port and help guide it. Something like learning predictive patterns over data living in some AtomSpace. My toy inference control learning experiment https://github.com/opencog/opencog/tree/master/examples/pln/inference-control-learning could be one of these use cases. These use cases could then be turned into unit tests, if they're moderately small.
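For instance, a unit-test-sized use case might look roughly like this: a few rows of training data already living in the AtomSpace, plus the Atomese program AS-MOSES would be expected to learn from them. All predicate and concept names here are illustrative:

```scheme
(use-modules (opencog))

;; Observed features for three "rows" of a toy data set.
(Evaluation (Predicate "f1") (Concept "row-1"))
(Evaluation (Predicate "f2") (Concept "row-2"))
(Evaluation (Predicate "f1") (Concept "row-3"))
(Evaluation (Predicate "f2") (Concept "row-3"))

;; The target to predict.
(Evaluation (Predicate "target") (Concept "row-1"))
(Evaluation (Predicate "target") (Concept "row-3"))

;; A program AS-MOSES might be expected to learn from this toy data,
;; expressed directly in Atomese: "the target holds wherever f1 holds".
(Lambda (Variable "$row")
  (Present (Evaluation (Predicate "f1") (Variable "$row"))))
```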

OK, so on a practical level, that makes the port very different from what I had in mind. I more or less thought we would start from scratch, whereas this requires modifying the existing code. I need to study the amount of work it would take, its feasibility, etc. So, assuming I understand your proposal correctly, I'll go over the code and write a complementary report on that type of port.

I'm thinking maybe we can come up with some middle ground, something that would gently nudge AS-MOSES towards a more OpenCog-ish self-improving process, while still retaining its current efficiency along the way. Maybe that would mean wrapping its main components in Scheme/Atomese, linking hard stochastic decisions to Atoms reflecting the knowledge required to take those decisions, etc.
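For example, one way such a stochastic decision could be linked to Atoms (a guess at the representation, not an existing convention) is to record the selection probabilities as truth values on EvaluationLinks, so other OpenCog processes can inspect and revise the knowledge behind the decision:

```scheme
(use-modules (opencog))

;; The truth-value strength stands in for the probability of picking
;; each exemplar; an outside process (e.g. PLN) could revise these.
(Evaluation (stv 0.7 0.8)
  (Predicate "select-exemplar-probability")
  (Concept "exemplar-A"))

(Evaluation (stv 0.3 0.8)
  (Predicate "select-exemplar-probability")
  (Concept "exemplar-B"))
```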