This example app lets the user chat with the Gemini API and use it as a personal AI assistant. The app supports text-only chat in two modes: non-streaming and streaming.
In non-streaming mode, a response is returned after the model completes the entire text generation process.
Streaming mode uses the Gemini API's streaming capability to achieve faster interactions.
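The difference between the two modes can be illustrated with a plain Python sketch (this is an illustration, not the example app's code): non-streaming returns the full text at once, while streaming yields partial results as they are produced.

```python
# Illustration only -- not the example app's code. Models the two modes
# with plain Python: non-streaming joins everything before returning,
# streaming yields each piece as soon as it exists.
def generate_full(tokens):
    # Non-streaming: the caller blocks until the whole reply is assembled.
    return "".join(tokens)

def generate_streamed(tokens):
    # Streaming: the caller can render each partial result immediately.
    for token in tokens:
        yield token

tokens = ["Hello", ", ", "world"]
full = generate_full(tokens)            # the complete reply in one piece
chunks = list(generate_streamed(tokens))  # the same reply as partial chunks
```

Streaming does not change what the model generates; it only lets the client start displaying text before generation finishes, which is why interactions feel faster.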
The client for this app is written using React and served using Vite.
There are three implementations of the backend server to choose from:

- Node.js
- Python
- Go
You only need to install and run one of the backends. If you want to try more than one, keep in mind that they all default to running on the same port.
Follow the installation instructions for one of the backend servers (Node.js, Python, or Go).
Before running the installation steps, make sure that Node.js v18+ and npm are installed in your development environment.
1. Navigate to the app directory, `server-js` (i.e. where `package.json` is located).
2. Run `npm install`.

Before running the installation steps, make sure that Python 3.9+ is installed in your development environment. Then navigate to the app directory, `server-python`, and complete the installation.
Create and activate a virtual environment, then install the dependencies.

macOS/Linux:

```
python -m venv venv
source venv/bin/activate
```

Windows:

```
python -m venv venv
.\venv\Scripts\activate
```

Install the dependencies:

```
pip install -r requirements.txt
```
Check whether Go 1.20+ is installed on your system:

```
go version
```

If Go 1.20+ is not installed, follow the instructions for your operating system from the Go installation guide. The backend dependencies are installed when you run the app.
To launch the app:

1. Navigate to the app directory, `client-react/`.
2. Run the application with the following command:

   ```
   npm run start
   ```

The client will start on `localhost:3000`.
To run the backend, you need to get an API key and then follow the configure-and-run instructions for one of the backend servers (Node.js, Python, or Go).
Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.
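As a hedged sketch (in Python, not part of the example app's code), a backend can fail fast when the key is missing rather than waiting for a rejected API call; the helper name and error message here are illustrative assumptions.

```python
import os

def require_api_key() -> str:
    # All three backends read the key from the environment; this helper
    # is a hypothetical sketch of checking it up front.
    key = os.environ.get("GOOGLE_API_KEY")
    if not key:
        # Failing early gives a clearer error than a rejected API call.
        raise RuntimeError("GOOGLE_API_KEY is not set")
    return key
```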
Configure the Node.js app:

1. Navigate to the app directory, `server-js/`.
2. Copy the `.env.example` file to `.env`:

   ```
   cp .env.example .env
   ```

3. Specify the Gemini API key for the variable `GOOGLE_API_KEY` in the `.env` file:

   ```
   GOOGLE_API_KEY=<your_api_key>
   ```
Run the Node.js app:

```
node --env-file=.env app.js
```

The `--env-file=.env` flag tells Node.js where the `.env` file is located.

By default, the app will run on port 9000. To specify a custom port, edit the `PORT` key in your `.env` file, `PORT=xxxx`.

Note: In case of a custom port, you must update the host URL specified in `client-react/src/App.js`.
Configure the Python app:

1. Navigate to the app directory, `server-python/`.
2. Copy the `.env.example` file to `.env`:

   ```
   cp .env.example .env
   ```

3. Specify the Gemini API key for the variable `GOOGLE_API_KEY` in the `.env` file:

   ```
   GOOGLE_API_KEY=<your_api_key>
   ```
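Each line in the `.env` file is a plain `KEY=value` pair. A minimal sketch of how such a line maps to an environment variable (the server itself likely relies on a dotenv library for this; the function below is purely illustrative):

```python
import os

def parse_env_line(line: str) -> tuple[str, str]:
    # Split "KEY=value" on the first "=" only, so values may contain "=".
    key, _, value = line.strip().partition("=")
    return key, value

# Illustrative key value; in the real .env file this is your actual key.
key, value = parse_env_line("GOOGLE_API_KEY=abc123")
os.environ[key] = value
```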
Run the Python app:

```
python app.py
```

The server will start on `localhost:9000`.
Configure and run the Go app:

1. Navigate to the app directory, `server-go` (i.e. where `main.go` is located).
2. Run the app, replacing `<your_api_key>` with your API key:

   ```
   GOOGLE_API_KEY=<your_api_key> go run .
   ```

The server will start on `localhost:9000`.
By default, the server starts on port 9000. You can override the default port the server listens on by setting the environment variable `PORT` in the command above.
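The port-selection behavior described above can be sketched as follows (an illustration, not the server's actual code):

```python
import os

def resolve_port(default: int = 9000) -> int:
    # Use PORT from the environment when set, else fall back to 9000.
    return int(os.environ.get("PORT", default))
```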
To start using the app, visit `http://localhost:3000`.
The following table shows the endpoints available in the example app:
| Endpoint | Details |
|---|---|
| `POST chat/` | This is the non-streaming POST method route. Use this to send the chat message and the history of the conversation to the Gemini model. The complete response generated by the model to the posted message will be returned in the API's response. |
| `POST stream/` | This is the streaming POST method route. Use this to send the chat message and the history of the conversation to the Gemini model. The response generated by the model will be streamed to handle partial results. |
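As an illustration, a request body for either route might be built like this. The field names `chat` and `history` are assumptions here, not taken from this README; check the backend source for the exact schema it expects.

```python
import json

def build_chat_request(message: str, history: list) -> str:
    # Hypothetical payload shape -- the exact field names the backends
    # expect are not shown in this README, so treat these as placeholders.
    return json.dumps({"chat": message, "history": history})

payload = build_chat_request("Hello!", [])
```

A client would POST this body with a `Content-Type: application/json` header to `http://localhost:9000/chat/` or `http://localhost:9000/stream/`, reading the streamed route's response incrementally.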