Closed xurxodiz closed 12 years ago
The current direction is a two-pass generation scheme.
First, we create ground, hills and gaps. Second, we place boxes and enemies.
The passes are sequential and based on rolling dice; the probabilities are what the algorithm takes care of.
The tension wave will be just another parameter to take into account. Other factors are base hard-coded chances, user data and previous occurrences (Markov chains?).
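To make the dice-rolling idea concrete, here is a minimal sketch of the second pass: for each ground cell we roll to place an enemy or a box, combining a hard-coded base chance with the tension wave and a player-skill value from user data. All names, weights and formulas are illustrative placeholders, not the project's actual values.

```python
import random

BASE_CHANCE = {"box": 0.15, "enemy": 0.10}  # illustrative hard-coded base chances

def placement_chance(item, tension, player_skill):
    """Combine the base chance with the tension wave and user data.

    tension: 0.0-1.0 value from the tension wave at this position.
    player_skill: 0.0-1.0 estimate derived from gathered user data.
    """
    chance = BASE_CHANCE[item]
    if item == "enemy":
        # more enemies where tension is high, scaled by player skill
        chance *= 1.0 + tension * player_skill
    else:
        # boxes (rewards) relax the pace: favour low-tension spots
        chance *= 1.0 + (1.0 - tension)
    return min(chance, 1.0)

def second_pass(cells, tension_wave, player_skill, rng=random):
    """Roll dice over the first-pass terrain to place enemies and boxes."""
    placed = []
    for i, cell in enumerate(cells):
        if cell != "ground":
            placed.append(None)  # nothing placed over gaps or hills
        elif rng.random() < placement_chance("enemy", tension_wave[i], player_skill):
            placed.append("enemy")
        elif rng.random() < placement_chance("box", tension_wave[i], player_skill):
            placed.append("box")
        else:
            placed.append(None)
    return placed
```

The point of the sketch is only that every factor (base chance, tension, user data) folds into a single probability before the die is rolled.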
This issue remains open as we further tune and refine the design.
The Markov chain approach is now under serious consideration.
Now, the process would be to create a finite state automaton in which each state is a chunk of the stage. Transitions between states are defined by probabilities (the Markov model). Each state produces its chunk upon being entered.
The chunks may now include enemies and blocks (which would mean a single pass), for a more holistic view of the stage.
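A minimal sketch of that automaton: states are chunk types, a transition table holds the Markov probabilities, and entering a state emits its chunk (terrain plus enemies and blocks, in one pass). The chunk names, terrain strings and probabilities below are made-up placeholders.

```python
import random

# Transition probabilities between chunk-states (rows sum to 1.0).
TRANSITIONS = {
    "flat": {"flat": 0.4, "gap": 0.3, "hill": 0.3},
    "gap":  {"flat": 0.7, "hill": 0.3},
    "hill": {"flat": 0.5, "gap": 0.2, "hill": 0.3},
}

# Each state produces a chunk on entry: terrain plus enemies/blocks in one pass.
CHUNKS = {
    "flat": {"terrain": "____",  "enemies": 1, "blocks": 2},
    "gap":  {"terrain": "__  _", "enemies": 0, "blocks": 0},
    "hill": {"terrain": "_/\\_", "enemies": 1, "blocks": 1},
}

def next_state(state, rng):
    """Sample the next state from the current state's transition row."""
    roll, acc = rng.random(), 0.0
    for target, p in TRANSITIONS[state].items():
        acc += p
        if roll < acc:
            return target
    return target  # guard against floating-point shortfall

def generate_stage(length, start="flat", seed=None):
    """Walk the automaton, emitting one chunk per state entered."""
    rng = random.Random(seed)
    state, stage = start, []
    for _ in range(length):
        stage.append(CHUNKS[state])
        state = next_state(state, rng)
    return stage
```

Because the chunk already bundles terrain with enemies and blocks, the whole stage comes out of a single walk over the chain.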
The probabilities could be generated by either a) a neural network or b) a classifier. In the second case, each category would have its probabilities hardcoded.
The problem with the first option is that we need data for training. For the second it's not strictly necessary, although data could still be useful for refining the categories by clustering it and checking the results.
A definite direction has been chosen. We'll try to gather data from playing users (see #21 and #22), cluster that (see #23 and #24), and assign hardcoded transitions for an automaton to produce a Markov chain (see #16 and #17).
For the moment, the most appealing idea is to create a general top-level flow whose parts get further defined by parameters derived from the user. The variation and randomness come from the different ways that this top-level flow can be realized (e.g., "high stress" could come from many consecutive jumps, many enemies, or lots of coins).
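The top-level flow idea can be sketched like this: an abstract plan of tags (e.g. "calm", "high_stress") is expanded into concrete content by picking one of several interchangeable realizations per tag, so the same plan yields varied stages. The tags and realizations below are illustrative.

```python
import random

# Each abstract tag maps to several interchangeable concrete realizations.
REALIZATIONS = {
    "high_stress": [
        lambda: ["jump"] * 4,                 # many consecutive jumps
        lambda: ["enemy", "enemy", "enemy"],  # many enemies
        lambda: ["coin"] * 6,                 # lots of coins to chase
    ],
    "calm": [
        lambda: ["flat"] * 3,
        lambda: ["flat", "coin", "flat"],
    ],
}

def realize(flow, seed=None):
    """Expand an abstract flow into concrete parts, choosing one
    realization per tag at random."""
    rng = random.Random(seed)
    stage = []
    for tag in flow:
        stage.extend(rng.choice(REALIZATIONS[tag])())
    return stage
```

Two runs over the same flow `["calm", "high_stress"]` can produce quite different stages, which is exactly where the variation would come from.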
After issues like #6 or #2 have been worked on a bit, we can better assess whether this is the way to go or we need to rethink it.