Gemini Pro AI is a Streamlit-based application that uses the Gemini Pro and Gemini Pro Vision models from the Google Generative AI API to provide conversational responses to text and image inputs.
Users can chat with the models using both text prompts and uploaded images; Streamlit provides the user interface, and the Google Generative AI API generates the responses.
To run the application, make sure you have the required dependencies installed. You can install them using the following command:
pip install streamlit pillow python-dotenv google-generativeai
Additionally, you need to set up your Gemini API key. You can either set it as an environment variable named GENAI_API_KEY
or enter it when prompted.
export GENAI_API_KEY=your_api_key
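For reference, here is a minimal sketch of how the key might be loaded inside the app, assuming python-dotenv is used to read a local .env file; the widget label and fallback behavior are illustrative, not the app's exact code.

```python
import os

import google.generativeai as genai
import streamlit as st
from dotenv import load_dotenv

# Load GENAI_API_KEY from a local .env file, if one exists.
load_dotenv()

api_key = os.getenv("GENAI_API_KEY")
if not api_key:
    # Fall back to asking for the key in the sidebar (hypothetical prompt).
    api_key = st.sidebar.text_input("Gemini API key", type="password")

if api_key:
    genai.configure(api_key=api_key)
else:
    st.warning("Please provide a Gemini API key to continue.")
    st.stop()
```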
Run the application using the following command:
streamlit run your_app_file.py
Once the application is running, you can interact with Gemini Pro AI:
Gemini Pro AI will process your input and generate responses based on the configured models. Conversations will be displayed in the chat container.
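The repository's exact UI code is not reproduced here, but a text-chat loop along these lines is typical for this kind of Streamlit + Gemini setup; the model name, session-state key, and prompt text below are assumptions for illustration.

```python
import google.generativeai as genai
import streamlit as st

text_model = genai.GenerativeModel("gemini-pro")

# Keep one chat session per browser session so history survives reruns.
if "chat" not in st.session_state:
    st.session_state.chat = text_model.start_chat(history=[])

# Replay the conversation so far in the chat container.
for message in st.session_state.chat.history:
    role = "user" if message.role == "user" else "assistant"
    with st.chat_message(role):
        st.markdown(message.parts[0].text)

# Accept new input and send it to Gemini Pro.
if prompt := st.chat_input("Ask Gemini Pro..."):
    with st.chat_message("user"):
        st.markdown(prompt)
    response = st.session_state.chat.send_message(prompt)
    with st.chat_message("assistant"):
        st.markdown(response.text)
```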
Gemini Pro AI is configured with the following settings:
Text Chat Model: Gemini Pro (gemini-pro)
Image Chat Model: Gemini Pro Vision (gemini-pro-vision)
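For image-based chat, the vision model accepts a text prompt together with a PIL image. A minimal sketch is shown below; the uploader label and prompt text are placeholders, not the app's actual wording.

```python
import google.generativeai as genai
import streamlit as st
from PIL import Image

vision_model = genai.GenerativeModel("gemini-pro-vision")

uploaded = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
question = st.text_input("Ask something about the image")

if uploaded and question:
    image = Image.open(uploaded)
    st.image(image)
    # gemini-pro-vision accepts a list mixing text and PIL images.
    response = vision_model.generate_content([question, image])
    st.markdown(response.text)
```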
Once the app is running (via streamlit run as shown above), visit http://localhost:8501 in your web browser to interact with Gemini Pro AI.
Contributions are welcome! Feel free to open issues or pull requests to enhance the functionality or fix any issues.
This project is licensed under the MIT License - see the LICENSE file for details.