google-gemini / example-chat-app

Get up and running with the Gemini API using Node.js and Python
https://ai.google.dev/gemini-api/docs
Apache License 2.0

Gemini API chat app

Intro

This example app lets the user chat with the Gemini API and use it as a personal AI assistant. The app supports text-only chat in two modes: non-streaming and streaming.

In non-streaming mode, a response is returned after the model completes the entire text generation process.

Streaming mode uses the Gemini API's streaming capability to return partial results as they are generated, so the user sees the first part of the response sooner.
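The difference between the two modes can be sketched independently of the Gemini API. In this illustration the model call is stubbed out (fake_model_tokens is a stand-in, not part of any real SDK): the non-streaming function returns only after all tokens exist, while the streaming function yields each chunk as it becomes available.

```python
def fake_model_tokens():
    """Stand-in for the model's token-by-token output (not a real API)."""
    return ["Hello", ", ", "world", "!"]

def generate_non_streaming():
    # Non-streaming: wait until generation finishes, then return everything.
    return "".join(fake_model_tokens())

def generate_streaming():
    # Streaming: yield each chunk as soon as it is available, so the
    # client can render partial output before generation completes.
    for token in fake_model_tokens():
        yield token

full = generate_non_streaming()
streamed = "".join(generate_streaming())
assert full == streamed == "Hello, world!"
```

Both modes ultimately produce the same text; streaming only changes when the pieces become visible to the caller.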

Frontend

The client for this app is written using React and served using Vite.

Backend

There are three implementations of the backend server to choose from:

  - Node.js (server-js/)
  - Python (server-python/)
  - Go (server-go/)

You only need to install and run one of the backends. If you want to try more than one, keep in mind that they all default to running on the same port (9000).

Installation

Follow the installation instructions for one of the backend servers (Node.js, Python, or Go).

(Option 1) Node.js installation

Before running the installation steps, make sure that Node.js v18+ and npm are installed in your development environment.

  1. Navigate to the app directory, server-js (i.e. where package.json is located).
  2. Run npm install.

(Option 2) Python installation

Before running the installation steps, make sure that Python 3.9+ is installed in your development environment. Then navigate to the app directory, server-python, and complete the installation.

Create a virtual environment

Linux/macOS
python -m venv venv
source venv/bin/activate
Windows
python -m venv venv
.\venv\Scripts\activate

Install the required Python packages

Linux/macOS/Windows
pip install -r requirements.txt

(Option 3) Go installation

Check if Go 1.20+ is installed on your system.

go version

If Go 1.20+ is not installed, follow the instructions for your operating system from the Go installation guide. The backend dependencies will be installed when you run the app.

Run the app

To launch the app:

  1. Run the React client
  2. Run the backend server of your choice

Run the React client

  1. Navigate to the app directory, client-react/.
  2. Install the project dependencies:

    npm install
  3. Run the application with the following command:

    npm run start

The client will start on localhost:3000.

Run a backend server

To run the backend, you need to get an API key and then follow the configure-and-run instructions for one of the backend servers (Node.js, Python, or Go).

Get an API Key

Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.


(Option 1) Configure and run the Node.js backend

Configure the Node.js app:

  1. Navigate to the app directory, server-js/.
  2. Copy the .env.example file to .env.
    cp .env.example .env
  3. Specify the Gemini API key for the variable GOOGLE_API_KEY in the .env file.
    GOOGLE_API_KEY=<your_api_key>

Run the Node.js app:

node --env-file=.env app.js

The --env-file=.env flag tells Node.js to load environment variables from the .env file.

By default, the app will run on port 9000.

To specify a custom port, set the PORT variable in your .env file, e.g. PORT=xxxx.

Note: If you use a custom port, you must also update the host URL specified in client-react/src/App.js.

(Option 2) Configure and run the Python backend

Configure the Python app:

  1. Navigate to the app directory, server-python/.
  2. Make sure that you've activated the virtual environment as shown in the installation steps.
  3. Copy the .env.example file to .env.

    cp .env.example .env
  4. Specify the Gemini API key for the variable GOOGLE_API_KEY in the .env file.

    GOOGLE_API_KEY=<your_api_key>

Run the Python app:

python app.py

The server will start on localhost:9000.

(Option 3) Configure and run the Go backend

  1. Navigate to the app directory, server-go (i.e. where main.go is located).
  2. Run the application with the following command, replacing <your_api_key> with your API key.
    GOOGLE_API_KEY=<your_api_key> go run .

By default, the server will start on localhost:9000. You can override the port by setting the PORT environment variable in the command above.

Usage

To start using the app, visit http://localhost:3000

API documentation

The following tables describe the endpoints available in the example app:

POST chat/

The non-streaming route. Use it to send the chat message and the history of the conversation to the Gemini model. The complete response generated by the model for the posted message is returned in the API response.

Parameters

| Name    | Type     | Data type | Description                                                |
|---------|----------|-----------|------------------------------------------------------------|
| chat    | required | string    | Latest chat message from the user                          |
| history | optional | array     | Current chat history between the user and the Gemini model |

Response

| HTTP code | Content-Type     | Response         |
|-----------|------------------|------------------|
| 200       | application/json | {"text": string} |
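Putting the chat/ table together, the sketch below builds a request body with the documented field names and shows how it could be posted from Python. The helper names build_chat_body and post_chat are illustrative, not part of the app, and the shape of each history entry is left to the caller since the table only specifies an array. Posting requires a backend running on port 9000, so only the body construction is exercised here.

```python
import json
import urllib.request

def build_chat_body(message, history=None):
    # Field names match the parameter table above.
    body = {"chat": message}
    if history:
        body["history"] = history
    return body

def post_chat(body, url="http://localhost:9000/chat/"):
    # Requires one of the backends to be running on port 9000.
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming: the full reply arrives as {"text": ...}.
        return json.loads(resp.read())["text"]

body = build_chat_body("What is the Gemini API?")
print(json.dumps(body))  # → {"chat": "What is the Gemini API?"}
```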
POST stream/

The streaming route. Use it to send the chat message and the history of the conversation to the Gemini model. The response generated by the model is streamed back so partial results can be handled as they arrive.

Parameters

| Name    | Type     | Data type | Description                                                |
|---------|----------|-----------|------------------------------------------------------------|
| chat    | required | string    | Latest chat message from the user                          |
| history | optional | array     | Current chat history between the user and the Gemini model |

Response

| HTTP code | Content-Type     | Response |
|-----------|------------------|----------|
| 200       | application/json | string   |
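Unlike chat/, the stream/ response arrives in pieces. A minimal consumer loop looks like the following sketch; the chunks are simulated so it runs without a live server, and with a real response you would iterate over the HTTP body instead.

```python
def render(text):
    # Stand-in for updating the UI with the text accumulated so far.
    print(text)

def consume_stream(chunks):
    # Accumulate chunks as they arrive and re-render after each one,
    # so partial output is visible before generation finishes.
    text = ""
    for chunk in chunks:
        text += chunk
        render(text)
    return text

# Simulated chunks standing in for the streamed HTTP body.
result = consume_stream(["The Gemini API ", "supports ", "streaming."])
assert result == "The Gemini API supports streaming."
```

This is the pattern the React client follows when it renders a streamed reply incrementally instead of waiting for the complete text.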