Deadsg / DQNAgent


grammar to markov model #6

Open jmikedupont2 opened 9 months ago

jmikedupont2 commented 9 months ago

Hello, this is Bing. I can help you with your question. šŸ˜Š

A state machine is a deterministic model that describes the behavior of a system in terms of states and transitions between them. A Markov model is a probabilistic model that describes the evolution of a system in terms of states and transition probabilities between them. Both models can be represented by a state transition diagram, which is a graph where the nodes are states and the edges are transitions.
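As a concrete illustration of the difference, here is a minimal sketch of the two representations in Python. The states, input symbols, and probabilities are made up for the example:

```python
import random

# Hypothetical three-state example; states, inputs, and numbers are illustrative only.

# Deterministic finite state machine: (state, input symbol) -> exactly one next state.
fsm_transitions = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("running", "stop"): "idle",
    ("paused", "start"): "running",
}

# Markov chain over the same states: state -> probability distribution over next states.
markov_transitions = {
    "idle":    {"running": 1.0},
    "running": {"paused": 0.6, "idle": 0.4},
    "paused":  {"running": 1.0},
}

def fsm_step(state, symbol):
    """Deterministic: the next state is fully determined by (state, symbol)."""
    return fsm_transitions[(state, symbol)]

def markov_step(state, rng=random):
    """Probabilistic: the next state is sampled from the outgoing distribution."""
    nexts, probs = zip(*markov_transitions[state].items())
    return rng.choices(nexts, weights=probs, k=1)[0]
```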

There is no direct way to convert a state machine produced by Bison or similar tools into a Markov model, because the state machine does not carry any information about transition probabilities. However, one can estimate the transition probabilities from data or domain knowledge and then use them to construct a Markov model. For example, one can run the state machine on representative inputs, count how often each transition is taken, and normalize the counts into a transition probability matrix such as:

$$ \begin{bmatrix} 0 & 0.5 & 0.5 \\ 0.25 & 0 & 0.5 \\ 0.33 & 0.33 & 0 \end{bmatrix} $$
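A minimal sketch of this kind of estimation from an observed state sequence (the sequence below is invented purely for illustration):

```python
import numpy as np

def estimate_transition_matrix(sequence, states):
    """Count observed transitions and normalize each row into probabilities."""
    index = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for prev, curr in zip(sequence, sequence[1:]):
        counts[index[prev], index[curr]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observed outgoing transitions stay all-zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

states = ["A", "B", "C"]
observed = ["A", "B", "C", "A", "C", "B", "A", "B"]  # invented example data
P = estimate_transition_matrix(observed, states)
print(P)  # each observed row sums to 1
```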

Alternatively, one can use domain knowledge to assign the transition probabilities based on some assumptions or rules. For example, if the system is a weather model, one can use historical data or meteorological models to estimate the probability of transitioning from one weather state to another, such as sunny, cloudy, or rainy.
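For instance, a hand-specified weather transition matrix could look like the following; the probabilities are purely illustrative and not taken from any meteorological data:

```python
# Hypothetical, hand-assigned transition probabilities for a toy weather model.
weather_transitions = {
    "sunny":  {"sunny": 0.7, "cloudy": 0.2, "rainy": 0.1},
    "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
    "rainy":  {"sunny": 0.2, "cloudy": 0.4, "rainy": 0.4},
}

# Sanity check: each row of a transition matrix must sum to 1.
assert all(abs(sum(p.values()) - 1.0) < 1e-9 for p in weather_transitions.values())
```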

There are also some tools and resources that can help with building Markov models from state machines or state transition diagrams. For example, TreeAge Pro is a software package that can convert state transition diagrams to Markov modelsĀ¹. Stanford University has a lecture note that explains the fundamentals of hidden Markov modelsĀ². Baeldung has an article that compares Markov chains and finite state machinesĀ³. Springer has a book chapter that provides an overview and practical guide to building Markov state modelsā“.

I hope this helps you understand the difference and the connection between state machines and Markov models. If you have any other questions, feel free to ask me. šŸ˜Š

Source: Conversation with Bing, 1/7/2024
(1) Easily convert State Transition Diagrams to Markov Models. https://www.youtube.com/watch?v=DDXFzzad8dw
(2) Hidden Markov Models Fundamentals - Stanford University. https://cs229.stanford.edu/section/cs229-hmm.pdf
(3) Is a Markov Chain the Same as a Finite State Machine? https://www.baeldung.com/cs/markov-chain-vs-finite-state-machine
(4) An Overview and Practical Guide to Building Markov State Models - Springer. https://link.springer.com/chapter/10.1007/978-94-007-7606-7_2
(5) Markov Models. https://www.youtube.com/watch?v=EqUfuT3CC8s
(6) Hidden Markov Model Clearly Explained! Part - 5. https://www.youtube.com/watch?v=RWkHJnFj5rY
(7) Converting a Markov model to a State Transition Diagram. https://www.treeage.com/help/Content/75-State-Transition-Diagrams/4-Converting-Markov-to-STD.htm

jmikedupont2 commented 9 months ago

So my idea is that we would start with a grammar and translate it into a Markov model that assigns a higher probability to the allowable paths and a low probability to the disallowed paths. Then we would fine-tune it on a dataset, where it would adjust the weights and model the probability of the particular next tokens.
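A rough sketch of that idea, assuming the grammar is given as a set of allowed token-to-token transitions; the smoothing constant and the count-blending update are placeholders for illustration, not a worked-out training scheme:

```python
import numpy as np

def grammar_prior(tokens, allowed_pairs, high=1.0, low=1e-3):
    """Build a prior transition matrix: allowed transitions get a high weight,
    disallowed ones a small epsilon, then each row is normalized."""
    index = {t: i for i, t in enumerate(tokens)}
    weights = np.full((len(tokens), len(tokens)), low)
    for prev, nxt in allowed_pairs:
        weights[index[prev], index[nxt]] = high
    return weights / weights.sum(axis=1, keepdims=True)

def fine_tune(prior, sequences, tokens, pseudo_count=10.0):
    """Blend the grammar prior with observed transition counts (a simple
    Dirichlet-style update; a real setup might train a neural LM instead)."""
    index = {t: i for i, t in enumerate(tokens)}
    counts = prior * pseudo_count
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[index[prev], index[nxt]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Invented toy grammar and data, for illustration only.
tokens = ["DET", "NOUN", "VERB"]
allowed = [("DET", "NOUN"), ("NOUN", "VERB"), ("VERB", "DET")]
prior = grammar_prior(tokens, allowed)
model = fine_tune(prior, [["DET", "NOUN", "VERB", "DET", "NOUN"]], tokens)
```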

p3nGu1nZz commented 9 months ago

Interesting idea of encoding the behavior of a state machine into Markov chains, which could be replayed. The way you do this in ML-Agents is to create a new model and use discrete decision branches. Each state in an FSM is represented by an int, so really you're just recording the data into a linked list of tuples, where you store the previous state, the current state, and the time delta between transitions, which should be normalized using a sigmoid pow2.
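If I'm reading that right, the recording part could be sketched roughly like this; integer state ids are assumed, and the time-delta normalization below is just one interpretation of "sigmoid pow2":

```python
import math
import time

class TransitionRecorder:
    """Records (prev_state, curr_state, normalized_time_delta) tuples for an
    FSM whose states are represented as ints."""

    def __init__(self):
        self.transitions = []   # list of (prev, curr, normalized dt) tuples
        self._last_state = None
        self._last_time = None

    @staticmethod
    def _normalize(dt):
        # One reading of "sigmoid pow2": squash the squared delta into (0, 1).
        return 1.0 / (1.0 + math.exp(-(dt ** 2)))

    def record(self, state, now=None):
        now = time.monotonic() if now is None else now
        if self._last_state is not None:
            dt = now - self._last_time
            self.transitions.append((self._last_state, state, self._normalize(dt)))
        self._last_state, self._last_time = state, now

# Usage: call record(state_id) every time the FSM changes state.
rec = TransitionRecorder()
for s in [0, 1, 2, 1, 0]:
    rec.record(s)
```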