wazaahhh / bayesLearn


improved intro and abstract #27

Closed jac2130 closed 6 years ago

jac2130 commented 7 years ago

Hi Thomas, I'm still working on the text, but here are some changes that hopefully improve the abstract and introduction. There are some parts that I find confusing. Markov processes are clearly random processes with memory, and I think that any random process with a finite number of possible states, as well as some processes with a slowly growing countable state space, can be modeled as a Markov process. What I find confusing is this: (1) you say that a truly random process is one without memory (implying that Markov processes are not truly random), but what I think you mean is that true Lévy flights have no memory? (2) You then say that only truly random processes are optimal for search, meaning that processes with memory are not optimal. But, if I understand correctly, you also say that memory is helpful for gradually approaching a solution through a combination of (a) synthesis of previously explored solutions and (b) completely novel solutions. If memory is helpful, though, then not having it cannot be optimal, since an optimal process cannot be further improved.

I think we need to reorganize and clarify this a bit, but I didn't want to rip things out and rearrange them too much before I know exactly what you mean by "optimal" and what it is that is not optimal. If I have read you correctly, the story is this: memory can be useful, so some memory for synthesis is part of optimality. What is truly a hindrance is not memory itself but the excessive return to exact points that were already visited and are already known not to be the optimal solution. In fact, memory could be helpful precisely for avoiding previously explored solutions (memory is usually defined as any time dependence, which includes avoidance of previously explored points in the space, i.e., when repetition occurs less often than it would under pure randomness). In other words, visiting places inside the convex hull is fine, as long as they are not exact points that have already been found?

Returning to exact previous solutions is nothing but a waste of time in abstract spaces, but it is perhaps evolutionarily optimal, because in nature new things could traditionally be found in previously explored places. So the difference, in a sense, between the geographic resource space and the abstract space is that the former is not a fixed space with one optimal solution (things can regrow), while in our case, once a solution has been tried, it will never be more optimal than it was when first tried. This is interesting because the prediction would then hinge less on complexity than on whether the superiority or inferiority of a previously explored solution can be known once and for all.
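To make the point about revisits concrete, here is a minimal toy sketch (my own illustration, not code from the paper or this repo): a searcher looks for the single optimum in a fixed discrete space, either memorylessly or with tabu-style memory that skips exact points already tried. All names and parameters (`search`, `n_points`, `avoid_revisits`) are hypothetical.

```python
import random

def search(n_points=1000, avoid_revisits=False, seed=0):
    """Count how many evaluations a uniform random searcher needs
    to hit the single optimum in a fixed discrete space.

    With avoid_revisits=True the searcher remembers every point tried
    (tabu-style memory) and never re-evaluates it; otherwise it is
    memoryless and may waste evaluations on known-suboptimal points.
    """
    rng = random.Random(seed)
    optimum = rng.randrange(n_points)
    visited = set()
    evaluations = 0
    while True:
        x = rng.randrange(n_points)
        if avoid_revisits:
            if x in visited:
                continue  # memory: skip exact points already known to be suboptimal
            visited.add(x)
        evaluations += 1
        if x == optimum:
            return evaluations

# Average over repeated trials: memory over exact past points should
# roughly halve the expected number of evaluations (n vs. (n+1)/2).
trials = 100
memoryless = sum(search(avoid_revisits=False, seed=s) for s in range(trials)) / trials
with_memory = sum(search(avoid_revisits=True, seed=s) for s in range(trials)) / trials
print(memoryless, with_memory)
```

This only captures the "fixed space, solution quality never changes" case from the paragraph above; in a regrowing geographic resource space, revisits would not be pure waste, so the comparison would come out differently.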

jac2130 commented 7 years ago

Hi Thomas, here is the newest version.

wazaahhh commented 7 years ago

Hi Johannes,

Do you mind sending me the PDF so I can read it on the plane? Thanks!

Cheers,

Thomas
