ChattyUI - your private AI chat for running LLMs in the browser
https://chattyui.com
MIT License

# Chatty

![Website](https://img.shields.io/website?url=https%3A%2F%2Fchattyui.com%2F) ![GitHub Repo stars](https://img.shields.io/github/stars/addyosmani/chatty) ![GitHub forks](https://img.shields.io/github/forks/addyosmani/chatty) ![GitHub watchers](https://img.shields.io/github/watchers/addyosmani/chatty)

Chatty is your private AI that leverages WebGPU to run large language models (LLMs) natively and privately in your browser, bringing you the most feature-rich in-browser AI experience.

## Features ✨

## Preview

https://github.com/addyosmani/chatty/assets/114422072/a994cc5c-a99d-4fd2-9eab-c2d4267fcfd3

## Why?

This project aims to be the closest attempt at bringing the familiarity and functionality of popular AI interfaces such as ChatGPT and Gemini into an in-browser experience.

## Browser support

WebGPU is enabled and supported by default in both Chrome and Edge. It is also possible to enable it in Firefox and Firefox Nightly. Check the browser compatibility table for more information.
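If you want to check support from script, WebGPU can be feature-detected via `navigator.gpu`. This is a minimal sketch, not code from Chatty's own codebase (the `nav` parameter is only there to make the function easy to test outside a browser):

```javascript
// Feature-detect WebGPU. `navigator.gpu` is only defined in browsers
// where WebGPU is available (e.g. Chrome and Edge by default).
async function hasWebGPU(nav = globalThis.navigator) {
  if (!nav || !("gpu" in nav)) return false;
  // requestAdapter() resolves to null when no suitable GPU is found.
  const adapter = await nav.gpu.requestAdapter();
  return adapter !== null;
}
```

A page could call `hasWebGPU()` on load and show a "browser not supported" notice when it resolves to `false`.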

## How to Install

If you just want to try out the app, it's live at https://chattyui.com.

This is a Next.js application and requires Node.js (18+) and npm to run locally.

### Install from source

If you want to set up and run the project locally, follow the steps below:

1. Clone the repository to a directory on your PC:

   ```bash
   git clone https://github.com/addyosmani/chatty
   ```

2. Open the folder:

   ```bash
   cd chatty
   ```

3. Install dependencies:

   ```bash
   npm install
   ```

4. Start the development server:

   ```bash
   npm run dev
   ```

5. Go to http://localhost:3000 and start chatting!

## Docker

> [!NOTE]
> The Dockerfile has not yet been optimized for a production environment. If you wish to do so yourself, check out the Next.js example.

```bash
docker build -t chattyui .
docker run -d -p 3000:3000 chattyui
```

Or use Docker Compose:

```bash
docker compose up
```

If you've made changes and want to rebuild, simply run `docker compose up --build`.

## Roadmap

## Contributing

Contributions are more than welcome! However, please make sure to read the contributing guidelines first :)

## Hardware requirements

> [!NOTE]
> To run the models efficiently, you'll need a GPU with enough memory. 7B models require a GPU with about 6GB of memory, while 3B models require around 3GB.

Smaller models may not process file embeddings as efficiently as larger ones.

## Acknowledgements & credits

Chatty is built using the WebLLM project and utilizes Hugging Face, open-source LLMs, and LangChain. We want to acknowledge their great work and thank the open source community.

## Authors

Chatty is created and maintained by Addy Osmani & Jakob Hoeg Mørk.