yousefmrashad / bookipedia

AI Inference Server for our graduation project (Bookipedia App)
GNU General Public License v3.0

Encountering 422 Unprocessable Entity Error When Passing List Parameter to FastAPI Endpoint in `api/api.py` #8

Closed yousefmrashad closed 4 months ago

yousefmrashad commented 4 months ago

I am encountering an issue with FastAPI: I receive a 422 Unprocessable Entity error when passing a list parameter (`book_ids`) to an endpoint function. The parameter is typed as `list[str]` in the function signature and has no default value of None, yet the error persists whenever I provide a list of values.

Steps to Reproduce:

  1. Define a FastAPI endpoint function with a list parameter (e.g., book_ids: list[str]).
  2. Ensure that the parameter does not accept a default value of None.
  3. Make a request to the endpoint with a list of values for the parameter.
  4. Observe the 422 Unprocessable Entity error in the response.

Expected Behavior: The FastAPI endpoint should be able to parse and handle list parameters without encountering a 422 error, as long as the parameter is correctly typed.

Endpoint

@app.get("/stream_response_and_sources")
async def stream_response_and_sources(user_prompt: str,
                                      chat_summary: str,
                                      chat: str,
                                      book_ids: list[str],
                                      enable_web_retrieval: bool = True):
    # Initialize RAG pipeline
    async def stream_generator():
        # Yield the answer data stream
        async for chunk in rag_pipeline.generate_answer(user_prompt, chat_summary, chat, book_ids, enable_web_retrieval):
            yield chunk.encode('utf-8')
        # Yield the sources metadata as the final part of the stream
        yield b'\n\nSources: '
        yield json.dumps(rag_pipeline.metadata).encode('utf-8') + b'\n'
    return StreamingResponse(stream_generator(), media_type="text/plain")
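The generator streams the answer chunks first and appends the sources metadata at the end. A self-contained sketch of that streaming pattern, with a hypothetical stub standing in for the real RAG pipeline:

```python
import asyncio
import json

# Hypothetical stub standing in for the real rag_pipeline (for illustration only)
class FakePipeline:
    metadata = {"sources": ["book 1, p. 12"]}

    async def generate_answer(self, *args):
        for chunk in ["Positional encodings ", "use sine/cosine functions."]:
            yield chunk

async def stream_generator(pipeline):
    # Yield the answer stream first ...
    async for chunk in pipeline.generate_answer():
        yield chunk.encode("utf-8")
    # ... then append the sources metadata as the final part
    yield b"\n\nSources: "
    yield json.dumps(pipeline.metadata).encode("utf-8") + b"\n"

async def main():
    # Collect the whole stream to show the byte layout a client would receive
    return b"".join([part async for part in stream_generator(FakePipeline())])

print(asyncio.run(main()).decode("utf-8"))
```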

Test

import requests

url = "http://localhost:8000/stream_response_and_sources"

# Parameters
params = {
    "user_prompt": "what is their mathematical basis?",
    "chat_summary": "user discussed positional encodings in transformers",
    "chat": """
    User: Hi! I've been reading about transformers and positional encodings. Can you explain how they work?
    Assistant: Sure! Positional encodings help transformers understand the sequence order of tokens by encoding positional information. They're added directly to the input embeddings using mathematical functions like sine and cosine.
    User: Got it. How does the model learn to interpret these positional encodings?
    Assistant: Through training, the model learns to associate positional encoding patterns with token positions. This helps it understand sequential relationships between tokens better.
    """,
    "book_ids": ['1'],
    "enable_web_retrieval": False
}

response = requests.get(url, params=params, stream=True)
print(response.status_code)  # 422 Unprocessable Entity
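For reference, `requests` (like `urllib.parse.urlencode` with `doseq=True`) expands a list value into repeated query keys on the wire, which is exactly the format that `Query()`-annotated list parameters parse. A minimal stdlib sketch of that encoding:

```python
from urllib.parse import urlencode

params = {"book_ids": ["1", "2"], "enable_web_retrieval": False}
# doseq=True expands the list into repeated keys (book_ids=1&book_ids=2),
# matching how requests serializes list values in `params`
query = urlencode(params, doseq=True)
print(query)  # book_ids=1&book_ids=2&enable_web_retrieval=False
```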
yousefmrashad commented 4 months ago

Solution: declare the list explicitly as a query parameter with `Query()`. FastAPI interprets a bare `list[str]` parameter as a request body field, so a GET request carrying the values in the query string fails validation with a 422.

from typing import Annotated
from fastapi import Query

@app.get("/stream_response_and_sources")
async def stream_response_and_sources(user_prompt: str,
                                      chat_summary: str,
                                      chat: str,
                                      book_ids: Annotated[list[str] | None, Query()],
                                      enable_web_retrieval: bool = True):
    # Initialize RAG pipeline

Source: https://fastapi.tiangolo.com/tutorial/query-params-str-validations/#__tabbed_15_1