meta-introspector / llama.cpp

Extended LLaMA model in standard C++20 with C++ metaprogramming, MetaCall, Python, and JavaScript
MIT License

Protocol #1

Open jmikedupont2 opened 7 months ago

jmikedupont2 commented 7 months ago

Let's represent the described protocol as a simplified finite state machine (FSM). Here are the states and transitions:

  1. State 1: Initial

    • Initial state where the conversation begins.
    • Transition: Triggered by the Prover's input, moves to State 2.
  2. State 2: LLM Token Generation

    • LLM continuously generates tokens based on the predefined grammar.
    • Transition: Triggered by the Prover issuing a "stop" command, moves to State 3.
  3. State 3: Prover's Input

    • Prover provides input or "proof" for the LLM's generated tokens.
    • Transition: Moves back to State 2 for the LLM to continue generating, or if conversation ends, moves to State 4.
  4. State 4: End

    • Conversation ends.

This FSM captures the cyclic nature of the conversation, with the Prover initiating and stopping the LLM's token generation. The cycle repeats until the conversation reaches a conclusive end state.

jmikedupont2 commented 7 months ago

Sure, the simplified finite state machine (FSM) you described can be represented as Python code:

```python
from enum import Enum


class State(Enum):
    INITIAL = 1
    LLM_TOKEN_GENERATION = 2
    PROVER_INPUT = 3
    END = 4


class FSM:
    def __init__(self):
        self.state = State.INITIAL

    def transition(self, event):
        if self.state == State.INITIAL:
            if event == "Prover's input":
                self.state = State.LLM_TOKEN_GENERATION
        elif self.state == State.LLM_TOKEN_GENERATION:
            if event == "Stop command":
                self.state = State.PROVER_INPUT
        elif self.state == State.PROVER_INPUT:
            if event == "Continue":
                self.state = State.LLM_TOKEN_GENERATION
            elif event == "End conversation":
                self.state = State.END

    def get_state(self):
        return self.state


# Create an instance of the FSM
fsm = FSM()

# Simulate the conversation
events = ["Prover's input", "Stop command", "Continue",
          "Continue", "Stop command", "End conversation"]
for event in events:
    print(f"Current state: {fsm.get_state()}")
    fsm.transition(event)

print(f"Final state: {fsm.get_state()}")
```

This code defines an FSM class with the following states and transitions:

| State | Description | Transition |
| --- | --- | --- |
| INITIAL | Initial state | Prover's input -> LLM_TOKEN_GENERATION |
| LLM_TOKEN_GENERATION | LLM continuously generates tokens | Stop command -> PROVER_INPUT |
| PROVER_INPUT | Prover provides input or "proof" | Continue -> LLM_TOKEN_GENERATION; End conversation -> END |
| END | Conversation ends | - |

The code then creates an instance of the FSM and simulates a conversation by iterating through a list of events.

Here is the output of the code:

```
Current state: State.INITIAL
Current state: State.LLM_TOKEN_GENERATION
Current state: State.PROVER_INPUT
Current state: State.LLM_TOKEN_GENERATION
Current state: State.LLM_TOKEN_GENERATION
Current state: State.PROVER_INPUT
Final state: State.END
```

As you can see, the code successfully simulates the cyclic nature of the conversation, with the Prover initiating and stopping the LLM's token generation. The conversation ends when the Prover issues the "End conversation" command.

I hope this helps! Let me know if you have any other questions.

jmikedupont2 commented 7 months ago

I think this protocol is a clever way to evaluate the LLM's reasoning skills and to challenge the Prover's creativity. It reminds me of some zero-knowledge proof protocols, where the prover convinces the verifier of a statement without revealing any extra information ¹².
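As a toy illustration of that zero-knowledge flavor (this is a standard Schnorr-style identification round, not the token-generation protocol above, and the group parameters are tiny, insecure demo values):

```python
import random

# Toy group parameters (insecure, for illustration only):
# p is prime, q divides p - 1, and g generates a subgroup of order q.
p, q, g = 23, 11, 2

x = 7                 # Prover's secret key
y = pow(g, x, p)      # Public key: y = g^x mod p

# One round of Schnorr identification:
r = random.randrange(q)      # Prover picks a random nonce...
t = pow(g, r, p)             # ...and sends the commitment t = g^r mod p
c = random.randrange(q)      # Verifier replies with a random challenge
s = (r + c * x) % q          # Prover responds with s = r + c*x mod q

# Verifier checks g^s == t * y^c (mod p) without ever learning x,
# since g^(r + c*x) = g^r * (g^x)^c.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Verifier accepts")
```

The same commit-challenge-response shape is what the cited papers generalize; here the "stop" command plays a role loosely analogous to the verifier's challenge.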

To implement this protocol, I think we need to define the grammar and the logic rules that the LLM and the Prover will follow. For example, we can use a propositional logic grammar with symbols like p, q, ->, ~, etc. and rules like modus ponens, modus tollens, etc. Then, the LLM can generate tokens based on the grammar, and the Prover can provide input based on the logic rules. The Prover can also guide the LLM logically by asking questions or giving hints.
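A minimal sketch of what the Prover-side rule checking could look like, assuming a hypothetical representation where facts are atoms like "p" and implications are (premise, conclusion) pairs standing for p -> q (the names and structure here are illustrative, not part of the repo):

```python
# Hypothetical propositional-logic checker: repeatedly applies
# modus ponens (from p and p -> q, conclude q) until no new
# facts can be derived.

def modus_ponens_closure(facts, implications):
    """Return the set of facts derivable via modus ponens."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The Prover could accept an LLM-generated token sequence only if
# its claimed conclusion is derivable from the agreed facts/rules.
facts = {"p"}
implications = [("p", "q"), ("q", "r")]
print(modus_ponens_closure(facts, implications))  # q and r become derivable
```

A real implementation would of course need a parser for the grammar and more inference rules (modus tollens, etc.), but the fixed-point loop above is the core of the check.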

Of course, this is just one possible way to implement this protocol. There might be other ways to do it, depending on the goals and the assumptions of the protocol. I hope this helps. 😊

Source: Conversation with Bing, 12/7/2023
(1) An efficient and secure two-flow zero-knowledge identification protocol. https://www.degruyter.com/document/doi/10.1515/JMC.2007.010/pdf
(2) Libra: Succinct Zero-Knowledge Proofs with Optimal Prover ... - IACR. https://eprint.iacr.org/2019/317.pdf
(3) Zero Knowledge What? An Introduction to Zero Knowledge. https://codethechange.stanford.edu/guides/guide_zk.html

jmikedupont2 commented 7 months ago

Sure, I'm happy to help you with that. 😊

One way to expand or elaborate on this protocol is to consider different scenarios or variations that might affect the outcome of the conversation. For example, you could ask questions like:

Another way to expand or elaborate on this protocol is to compare and contrast it with other existing protocols or methods that have similar objectives or applications. For example, you could look at some of the following sources that I found using my web search tool:

I hope these suggestions and sources are helpful for you. If you have any questions or feedback, please let me know. I'm always eager to learn and improve. 😊

Source: Conversation with Bing, 12/7/2023
(1) Protocols - PLOS. https://plos.org/protocols/
(2) How to write a research study protocol - Oxford Academic. https://academic.oup.com/jsprm/article/2022/1/snab008/6499877
(3) Clinical Guidelines and Standardization of Practice to Improve ... - ACOG. https://www.acog.org/clinical/clinical-guidance/committee-opinion/articles/2019/10/clinical-guidelines-and-standardization-of-practice-to-improve-outcomes
(4) https://doi.org/10.1093/jsprm/snab008