xivind / sandbox

Sandbox for learning about large language models

Improve prompt #32

Open xivind opened 2 days ago

xivind commented 2 days ago

See the prompt engineering guide for examples: https://platform.openai.com/docs/guides/prompt-engineering

Consider splitting the prompt by role: system and user

xivind commented 2 days ago

Yes, you can provide inputs for both the user and system roles, but this is available only in the Chat Completions API (which uses models like gpt-4 or gpt-3.5-turbo), not in the older Completions API (which uses models like text-davinci-003).

Here's how the roles work in the Chat Completions API:

Using Both system and user Roles

In your case, you can define both a system message and a user query within the messages parameter. Here's how to modify your generate_response function:

from typing import AsyncGenerator

# `client` (an AsyncOpenAI instance) and `settings` are assumed to be defined elsewhere in the module
async def generate_response(query: str, context: str) -> AsyncGenerator[str, None]:
    """
    Generate a streaming response using the OpenAI Chat Completions API
    """
    messages = [
        {"role": "system", "content": "You are an expert on digitalization in Norway's healthcare and welfare sector. Provide guidance based on laws, regulations, and best practices."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"}
    ]

    response = await client.chat.completions.create(
        model=settings.model_name,  # E.g., "gpt-4"
        messages=messages,
        temperature=settings.temperature,
        max_tokens=settings.max_tokens,
        stream=True
    )

    async for chunk in response:
        # In openai>=1.0 the delta is an object, so use attribute access rather than .get()
        content = chunk.choices[0].delta.content
        if content:
            yield content
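For illustration, a caller would consume the async generator above with `async for`, accumulating chunks into the full response. This sketch uses a hypothetical stub in place of the live API call so the consumption pattern is visible on its own:

```python
import asyncio
from typing import AsyncGenerator

# Stub standing in for generate_response, so the consumption pattern
# can be shown without a live API call
async def fake_stream() -> AsyncGenerator[str, None]:
    for part in ["Digitalization ", "improves ", "healthcare."]:
        yield part

async def collect(stream: AsyncGenerator[str, None]) -> str:
    """Accumulate streamed chunks into the full response text."""
    chunks = []
    async for chunk in stream:
        chunks.append(chunk)
    return "".join(chunks)

print(asyncio.run(collect(fake_stream())))
# → Digitalization improves healthcare.
```

In a web backend you would typically forward each chunk to the client as it arrives instead of collecting them, but the iteration pattern is the same.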

Key Differences Between APIs

  1. Chat Completions API (with gpt-4 or gpt-3.5-turbo):

    • Supports roles: system, user, and assistant.
    • Better for maintaining structured conversations and enforcing specific behaviors.
    • Requires messages in the messages parameter.
  2. Completions API (with text-davinci-003, etc.):

    • Has no role structure; the entire input is a single prompt string.
    • To emulate system-like behavior, prepend instructions manually to the prompt.

    Example:

    prompt = f"""You are an expert on digitalization in Norway's healthcare and welfare sector.
    Provide guidance based on laws, regulations, and best practices.
    
    Context: {context}
    
    Question: {query}
    """

    This isn't as robust as the system role in the Chat Completions API, because the model cannot distinguish the instructions from the user's query.

When to Use assistant Role?

The assistant role is optional and used when continuing a multi-turn conversation. For example:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is digitalization?"},
    {"role": "assistant", "content": "Digitalization is the process of using digital technologies to transform business processes."},
    {"role": "user", "content": "How does it apply in healthcare?"}
]

This lets the model "remember" its previous response when formulating the next answer.
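In code, this history can be maintained by appending each completed exchange before sending the next request. A minimal sketch (the `append_turn` helper is illustrative, not from the repo):

```python
def append_turn(messages: list[dict], user_input: str, assistant_reply: str) -> list[dict]:
    """Append one completed user/assistant exchange to the conversation history."""
    messages.append({"role": "user", "content": user_input})
    messages.append({"role": "assistant", "content": assistant_reply})
    return messages

history = [{"role": "system", "content": "You are a helpful assistant."}]
append_turn(history, "What is digitalization?",
            "Digitalization is the process of using digital technologies "
            "to transform business processes.")

# The next request simply adds the new question on top of the accumulated history
history.append({"role": "user", "content": "How does it apply in healthcare?"})
print(len(history))  # → 4
```

The full `history` list is then passed as the `messages` parameter; the model sees the whole conversation on every call, since the API itself is stateless.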

Summary