tsterbak / promptmage

simplifies the process of creating and managing LLM workflows.
https://promptmage.io
MIT License
67 stars 4 forks

Why are we overwriting the prompt? #9

Closed LuciAkirami closed 2 weeks ago

LuciAkirami commented 2 weeks ago

I have a question. Let's say I have a prompt in the database:

| id | name | system | user | version | template_vars | active |
| --- | --- | --- | --- | --- | --- | --- |
| f64f474d-1299-415c-b30f-fabc66e8a012 | qa_bot | You are an helpful AI assistant | Answer the following user question:{query} | 1 | query | 0 |

This is also visible in the UI: p1

Now, let's say I change the user prompt by clicking on the Edit Prompt button:

| id | name | system | user | version | template_vars | active |
| --- | --- | --- | --- | --- | --- | --- |
| f64f474d-1299-415c-b30f-fabc66e8a012 | qa_bot | You are an helpful AI assistant | Answer the following human question:{query} | 2 | query | 0 |

In the UI: p2 (Note that in the above image, on the right side, the version has changed from 1 to 2. Initially this was not happening: even though I edited the prompt, it still showed version 1. So I added code to bump the version, for which I have raised a PR.)

You can also notice that the "Number of Versions" is 1. This is because for the name "qa_bot" we have only one prompt, namely the updated one, which overwrote the previous version.

This is all the database contains. So when we update a prompt, we overwrite the existing prompt with the new one. What is the use of versioning then? What if I want to use the older version? I cannot, because it has been overwritten by the new version. Is it intentionally designed this way?

If not, then there is an issue in the code. We could retain both the old and the new versions by saving the updated prompt with a new ID. Then both versions would exist in the database and in the UI, like below:

| id | name | system | user | version | template_vars | active |
| --- | --- | --- | --- | --- | --- | --- |
| b52aa814-2df5-4fbc-aa97-1330358ad49f | qa_bot | You are a helpful AI assistant | Answer the following user question: {query} | 1 | query | 0 |
| 331806d2-c3fa-4643-8d4f-bb11256fc0b8 | qa_bot | You are a helpful AI assistant | Answer the following human question: {query} | 2 | query | 0 |

Here we can see both the new and the old versions. Also take a look at the UI:

p3

In the UI, we can see both the old and new versions, and the "Number of Versions" has increased by 1. For this, I had to make the following changes to update_prompt() in sqlite_backend.py.

storage/sqlite_backend.py - Line 116

Existing Code

    def update_prompt(self, prompt: Prompt):
        """Update an existing prompt in the database by id.

        Args:
            prompt (Prompt): The prompt to update.
        """
        session = self.Session()
        try:
            existing_prompt = session.execute(
                select(PromptModel).where(PromptModel.id == prompt.id)
            ).scalar_one_or_none()

            if not existing_prompt:
                raise PromptNotFoundException(
                    f"Prompt with name {prompt.name} not found."
                )

            existing_prompt.system = prompt.system
            existing_prompt.user = prompt.user
            existing_prompt.active = prompt.active

            session.commit()
        except SQLAlchemyError as e:
            session.rollback()
            logger.error(f"Error updating prompt: {e}")
        finally:
            session.close()

Updated Code

    def update_prompt(self, prompt: Prompt):
        """Update an existing prompt in the database by id.

        Args:
            prompt (Prompt): The prompt to update.
        """
        session = self.Session()
        try:
            existing_prompt = session.execute(
                select(PromptModel).where(PromptModel.id == prompt.id)
            ).scalar_one_or_none()

            if not existing_prompt:
                raise PromptNotFoundException(
                    f"Prompt with name {prompt.name} not found."
                )

            # If changes were made to the user prompt or system prompt
            if (existing_prompt.system != prompt.system or
                    existing_prompt.user != prompt.user):

                # Copy the existing prompt into a new row with a bumped version
                # (requires `import uuid` at the top of sqlite_backend.py)
                new_prompt = PromptModel(
                    id=str(uuid.uuid4()),  # new unique ID for the new version
                    name=existing_prompt.name,
                    system=prompt.system,
                    user=prompt.user,
                    version=existing_prompt.version + 1,
                    template_vars=existing_prompt.template_vars,
                    active=prompt.active
                )

                # Add the new version to the session
                session.add(new_prompt)
            # If no changes were made to the user prompt or system prompt
            else:
                existing_prompt.active = prompt.active

            session.commit()
        except SQLAlchemyError as e:
            session.rollback()
            logger.error(f"Error updating prompt: {e}")
        finally:
            session.close()
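
The core idea of the patch — keep the old row and insert a copy with a bumped version whenever the prompt text changes — can be sketched independently of SQLAlchemy. The dict-based store and helper functions below are hypothetical, purely to illustrate the copy-on-write behavior:

```python
import uuid

# Hypothetical in-memory prompt store, keyed by prompt id.
store = {}

def save_prompt(name, system, user, version=1):
    pid = str(uuid.uuid4())
    store[pid] = {"name": name, "system": system, "user": user, "version": version}
    return pid

def update_prompt(pid, system, user):
    """Copy-on-write update: if the text changed, insert a new row
    with a fresh id and a bumped version instead of overwriting."""
    old = store[pid]
    if old["system"] == system and old["user"] == user:
        return pid  # nothing changed, keep the existing row
    new_pid = str(uuid.uuid4())
    store[new_pid] = {"name": old["name"], "system": system,
                      "user": user, "version": old["version"] + 1}
    return new_pid

p1 = save_prompt("qa_bot", "You are a helpful AI assistant",
                 "Answer the following user question: {query}")
p2 = update_prompt(p1, "You are a helpful AI assistant",
                   "Answer the following human question: {query}")

versions = sorted(p["version"] for p in store.values() if p["name"] == "qa_bot")
print(versions)  # → [1, 2]: both versions are retained
```

After the update, the original row is still present under its old id, so reverting to version 1 is just a lookup rather than being impossible.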

LuciAkirami commented 2 weeks ago

Actually, the above code has a small bug (which you can notice in the UI): both versions have an Edit Prompt option. So if we click Edit Prompt on the version 1 prompt, edit it, and click Save Prompt, the UI will show two prompts with the same version number, version 2.

To solve this, we can remove the Edit Prompt option for older versions so that only the latest version has it. For this, we need to make the following changes in components/prompts_page.py.

components/prompts_page.py - Line 24

Existing Code

    def show_side_panel(prompts: List[Prompt]):
        # get prompt with highest version
        side_panel.clear()
        with side_panel:
            ui.button(">>", on_click=hide_side_panel).style(
                "margin: 20px; margin-bottom: 0px; margin-top: 100px;"
            )
            for prompt in sorted(prompts, key=lambda p: p.version, reverse=True):
                bg_color = ""
                # highlight active prompt in green
                if prompt.active:
                    bg_color = "bg-green-200 dark:bg-green-800"
                with ui.card().style(
                    "padding: 20px; margin-right: 20px; margin-top: 10px; margin-bottom: 10px; margin-left: 20px;"
                ).classes(bg_color):
                    # display run data
                    with ui.grid(columns=2).classes("gap-0"):
                        label_with_icon("Prompt ID:", icon="o_info")
                        ui.label(f"{prompt.id}")

                        label_with_icon("Name:", icon="o_badge")
                        ui.label(f"{prompt.name}")

                        label_with_icon("Version:", icon="o_tag")
                        ui.label(f"{prompt.version}")

                        label_with_icon("Active:", icon="o_check")
                        ui.label(f"{prompt.active}")

                        label_with_icon("System prompt:", icon="o_code")
                        ui.label(f"{prompt.system}")

                        label_with_icon("User prompt:", icon="o_psychology")
                        ui.label(f"{prompt.user}")

                    with ui.row():
                        ui.button(
                            "Activate Prompt",
                            on_click=lambda prompt_id=prompt.id: activate_prompt(
                                prompt_id
                            ),
                        )
                        ui.button(
                            "Edit Prompt",
                            on_click=lambda prompt_id=prompt.id: edit_prompt(prompt_id),
                        )
                        ui.button(
                            "Delete Prompt",
                            on_click=lambda prompt_id=prompt.id: delete_prompt(
                                prompt_id
                            ),
                        )

Updated Code

    def show_side_panel(prompts: List[Prompt]):
        # get prompt with highest version
        side_panel.clear()
        with side_panel:
            ui.button(">>", on_click=hide_side_panel).style(
                "margin: 20px; margin-bottom: 0px; margin-top: 100px;"
            )
            # Code Change Start - wrap sorted() in enumerate() to get the index of the latest prompt
            for idx, prompt in enumerate(sorted(prompts, key=lambda p: p.version, reverse=True)):
            # Code Change End
                bg_color = ""
                # highlight active prompt in green
                if prompt.active:
                    bg_color = "bg-green-200 dark:bg-green-800"
                with ui.card().style(
                    "padding: 20px; margin-right: 20px; margin-top: 10px; margin-bottom: 10px; margin-left: 20px;"
                ).classes(bg_color):
                    # display run data
                    with ui.grid(columns=2).classes("gap-0"):
                        label_with_icon("Prompt ID:", icon="o_info")
                        ui.label(f"{prompt.id}")

                        label_with_icon("Name:", icon="o_badge")
                        ui.label(f"{prompt.name}")

                        label_with_icon("Version:", icon="o_tag")
                        ui.label(f"{prompt.version}")

                        label_with_icon("Active:", icon="o_check")
                        ui.label(f"{prompt.active}")

                        label_with_icon("System prompt:", icon="o_code")
                        ui.label(f"{prompt.system}")

                        label_with_icon("User prompt:", icon="o_psychology")
                        ui.label(f"{prompt.user}")

                    with ui.row():
                        ui.button(
                            "Activate Prompt",
                            on_click=lambda prompt_id=prompt.id: activate_prompt(
                                prompt_id
                            ),
                        )
                        # Code Change Start - only show Edit at index 0 (we loop in descending version order)
                        if idx == 0:
                            ui.button(
                                "Edit Prompt",
                                on_click=lambda prompt_id=prompt.id: edit_prompt(prompt_id),
                            )
                        # Code Change End
                        ui.button(
                            "Delete Prompt",
                            on_click=lambda prompt_id=prompt.id: delete_prompt(
                                prompt_id
                            ),
                        )
        side_panel.style("transform:translateX(0%);")
        side_panel.update()

p4 We can see that only the latest version, i.e. version 3, has the Edit option, and version 1 does not (there is no version 2 here because I deleted it to verify everything works; it also had no Edit option).

So this update, along with the one suggested above, can be used to store prompts with different versions.
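
The UI change boils down to one pattern: sort the prompts by version in descending order, enumerate them, and treat index 0 as the latest. A minimal standalone illustration (the `Prompt` tuples here are made up for the example):

```python
from collections import namedtuple

Prompt = namedtuple("Prompt", ["id", "version"])
prompts = [Prompt("a", 1), Prompt("c", 3), Prompt("b", 2)]

editable = []
for idx, prompt in enumerate(sorted(prompts, key=lambda p: p.version, reverse=True)):
    # only the first item (highest version) would get an Edit button
    if idx == 0:
        editable.append(prompt.id)

print(editable)  # → ['c']: only the latest version is editable
```

Because the check uses the loop index rather than a hard-coded version number, it keeps working after versions are deleted, as in the version-2 deletion mentioned above.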

tsterbak commented 2 weeks ago

This should now be fixed: every time you save, a new version is created. The fix is on the dev branch. Let me know if it works for you :)

LuciAkirami commented 2 weeks ago

I'm unsure how to git clone and run it. I'm installing the package via pip, making my changes inside the installed promptmage library, and testing that way.

The thing is, when I do a git clone and run `python promptmage/cli.py run my_flow.py`, it gives me a module-not-found error. Am I doing something wrong here? Or is there another way to run promptmage after cloning it?

So for now I have to install it as a package to test.

Yah, just checked it, the fix is working

LuciAkirami commented 2 weeks ago

Do we need the Edit Prompt button for older Prompts?

tsterbak commented 2 weeks ago

> Yah, just checked it, the fix is working

Great! Nice that you got it working. I need to update developer documentation to make it easier :)

> Do we need the Edit Prompt button for older Prompts?

yeah, I think so. Maybe you want to continue from an older prompt.

LuciAkirami commented 2 weeks ago

Yah. For running it, I cloned the repo, created a new file inside it with the following content, and ran it. It's working.

Initially I tried to write the flow in a separate my_flows.py file and call `python promptmage/cli.py run my_flows.py`, but it didn't work and threw a lot of errors (it might be that I'm doing it the wrong way).

With the code below, I'm able to run it:

from typing import List

# from pathlib import Path  # only needed for the commented-out .promptmage directory setup below
from dotenv import load_dotenv
from loguru import logger
import uvicorn
from promptmage import __version__, title

from promptmage import PromptMage, Prompt, MageResult
from promptmage.api import PromptMageAPI
from promptmage.frontend.frontend import PromptMageFrontend
from promptmage.storage import (
    PromptStore,
    SQLitePromptBackend,
    DataStore,
    SQLiteDataBackend,
)

import ollama

load_dotenv()

# storing prompts
backend = SQLitePromptBackend(db_path="myprompts.db")
my_prompt_store = PromptStore(backend=backend)

qa_prompt = Prompt(
    name="qa_bot",
    system="You are an helpful AI assistant",
    user="Summarize the following:{query}",
    template_vars=["query"],
)

my_prompt_store.store_prompt(qa_prompt)

# creating our own backend store to store the runs
data_backend = SQLiteDataBackend(db_path="myprompts.db")
my_data_store = DataStore(backend=data_backend)

# Create a new PromptMage instance
mage = PromptMage(
    name="simple-question-answer",
    available_models=["llama3.1", "phi", "mistral:instruct"],
    prompt_store=my_prompt_store,
    data_store=my_data_store,
)

@mage.step(
    name="question_answer_bot",
    prompt_name="qa_bot",
    initial=True,
)
def question_answer_bot(
    question: str, prompt: Prompt, model: str = "llama3.1"
) -> MageResult:  # returns a single MageResult, matching the return statement below
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": prompt.system},
            {"role": "user", "content": prompt.user.format(query=question)},
        ],
    )
    output = response["message"]["content"]
    return MageResult(result=output)

logger.info(f"\nWelcome to\n{title}")
# logger.info(f"Running PromptMage version {__version__} from {file_path}")

logger.info(f"Running PromptMage version {__version__}")

# create the .promptmage directory to store all the data
# dirPath = Path(".promptmage")
# dirPath.mkdir(mode=0o777, parents=False, exist_ok=True)

# get the available flows from the source file
# available_flows = get_flows(file_path)
available_flows = [mage]

if not available_flows:
    raise ValueError("No PromptMage instance found in the module.")

# create the FastAPI app
app = PromptMageAPI(flows=available_flows).get_app()

# create the frontend app
frontend = PromptMageFrontend(flows=available_flows)
frontend.init_from_api(app)

uvicorn.run(app, host="localhost", port=5050, log_level="info")

tsterbak commented 2 weeks ago

Haha nice, that should work for most testing use cases.

Normally you would install the library in development mode from the source files (this can be done with pip in a virtual env; I do it with poetry). I will make a write-up soon :)
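
For reference, an editable (development-mode) install is the usual way to run a cloned Python project against local edits. A typical sequence might look like the following; this is a sketch assuming a standard packaging setup in the repo, and the `promptmage run` entry point is assumed from the CLI module mentioned earlier in the thread:

```shell
# clone the repo and enter it
git clone https://github.com/tsterbak/promptmage.git
cd promptmage

# create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# install the package in editable mode: local edits to the
# source take effect without reinstalling
pip install -e .

# or, if you prefer poetry:
# poetry install

# the installed CLI now picks up your local changes
promptmage run my_flow.py
```

This avoids editing files inside the pip-installed site-packages copy, which gets wiped on the next upgrade.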

LuciAkirami commented 2 weeks ago

Initially, I had installed the library with pip install promptmage, then went to the installed library path, edited the files there, and observed how the changes behaved.

tsterbak commented 2 weeks ago

Fixed with the latest release 0.1.2