Generate performance reviews and self-reviews in minutes using Large Language Models (LLMs).
https://github.com/user-attachments/assets/81e62eda-b12c-4697-9469-d904fd8ee4ed
Clone the repository, then create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
```

Install the dependencies and launch the Streamlit app:

```bash
pip install -r requirements.txt
streamlit run app.py
```
This will start the web interface, where you can generate performance reviews and self-reviews.
Run the FastAPI server:

```bash
uvicorn backend.app_fastapi:app --host 0.0.0.0 --port 8000
```
API endpoints:

- `/generate_review`: Generate a performance review
- `/generate_self_review`: Generate a self-review

Build the Docker Image:
Navigate to the root directory of the project and run:
```bash
docker build -t performance-review-api .
```
Run the Docker Container:

```bash
docker run -p 8000:8000 performance-review-api
```
This command maps port 8000 on your local machine to port 8000 in the Docker container, making the FastAPI application accessible at http://localhost:8000.
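The host-side port can be changed without rebuilding the image. As a small illustration of `docker run -p HOST:CONTAINER` (using the same image name as above):

```shell
# Publish the container's port 8000 on host port 9000 instead;
# the API is then reachable at http://localhost:9000
docker run -p 9000:8000 performance-review-api
```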
Verify the Application is Running:
Open a web browser and navigate to http://localhost:8000. You should see the welcome message defined in the root endpoint.
You can also use `curl` to test the root endpoint:

```bash
curl http://localhost:8000/
```
You should see a response like:
```json
{"message": "Welcome to the Performance Review API"}
```
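With the server running, the review endpoints can also be called programmatically. The sketch below is only a stdlib-based guess at usage: the endpoint path comes from this README, but the payload field names (`your_role`, `questions`) are illustrative assumptions, not the API's documented schema.

```python
import json
from urllib.request import Request, urlopen


def post_json(url: str, payload: dict) -> dict:
    """POST a JSON payload to the API and return the decoded JSON response."""
    req = Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Field names here are hypothetical -- check the Pydantic models for the real schema.
    review = post_json(
        "http://localhost:8000/generate_review",
        {"your_role": "Software Engineer", "questions": ["What went well this quarter?"]},
    )
    print(review)
```

FastAPI also serves interactive docs at `/docs`, which show the actual request schema for each endpoint.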
- `ReviewRequest`: Pydantic model for performance review requests
- `generate_review()`: Main function to generate performance reviews
- `SelfReviewRequest`: Pydantic model for self-review requests
- `generate_self_review()`: Main function to generate self-reviews

Supported LLM providers:
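As a rough sketch of what the request models described above might look like (the field names here are assumptions for illustration, not the project's actual schema):

```python
from pydantic import BaseModel


class ReviewRequest(BaseModel):
    # Hypothetical fields -- the real model may differ
    accomplishments: str   # notes about the employee's work
    llm_type: str          # which LLM provider to use
    user_api_key: str      # caller-supplied API key


class SelfReviewRequest(BaseModel):
    # Hypothetical fields -- the real model may differ
    text_dump: str         # raw notes about your own work
    questions: list[str]   # prompts the self-review should answer
    llm_type: str
    user_api_key: str
```

FastAPI validates incoming JSON against these models automatically and returns a 422 error for payloads that don't match.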
Make sure to provide your own API key for the selected LLM provider when using the application.