Llama Assistant
Local AI Assistant That Respects Your Privacy!
Website: llama-assistant.nrl.ai
An AI assistant that helps you with your daily tasks, powered by Llama 3.2. It can recognize your voice, process natural language, and perform various actions based on your commands: summarizing text, rephrasing sentences, answering questions, writing emails, and more.
This assistant can run offline on your local machine, and it respects your privacy by not sending any data to external servers.
https://github.com/user-attachments/assets/af2c544b-6d46-4c44-87d8-9a051ba213db
Supported Models
TODO
- [x] Support multimodal model: moondream2.
- [x] Add wake word detection: "Hey Llama!".
- [x] Custom models: Add support for custom models.
- [x] Support 5 other text models.
- [x] Support 5 other multimodal models.
- [x] Streaming support for response.
- [x] Add offline STT support: WhisperCPP.
- [ ] Knowledge database: Langchain or LlamaIndex?
- [ ] Plugin system for extensibility.
- [ ] News and weather updates.
- [ ] Email integration with Gmail and Outlook.
- [ ] Note-taking and task management.
- [ ] Music player and podcast integration.
- [ ] Workflow with multiple agents.
- [ ] Multi-language support: English, Spanish, French, German, etc.
- [ ] Package for Windows, Linux, and macOS.
- [ ] Automated tests and CI/CD pipeline.
Features
- Voice recognition for hands-free interaction.
- Natural language processing with Llama 3.2.
- Image analysis capabilities (TODO).
- Global hotkey for quick access (Cmd+Shift+Space on macOS).
- Customizable UI with adjustable transparency.
Note: This project is a work in progress, and new features are being added regularly.
Technologies Used
Installation
Recommended Python Version: 3.10.
Install PortAudio:

Install [PortAudio](http://www.portaudio.com/). This is required by the [PyAudio](https://people.csail.mit.edu/hubert/pyaudio/) library to stream audio from your computer's microphone. PyAudio depends on PortAudio for cross-platform compatibility and is installed differently depending on the platform.

- On macOS, you can use [Homebrew](http://brew.sh):

  ```bash
  brew install portaudio
  ```

  **Note**: if you encounter an error when running `pip install` that indicates it can't find `portaudio.h`, try running `pip install` with the following flags:

  ```bash
  pip install --global-option='build_ext' \
      --global-option='-I/usr/local/include' \
      --global-option='-L/usr/local/lib' \
      pyaudio
  ```

- On Debian / Ubuntu Linux:

  ```bash
  apt-get install portaudio19-dev python3-all-dev
  ```

- On Windows, you may not need to install PortAudio explicitly (it will get installed with PyAudio).

For more details, see the [PyAudio installation](https://people.csail.mit.edu/hubert/pyaudio/#downloads) page.
On Windows: Installing the MinGW-w64 toolchain
- Download and install it by following the instructions [here](https://code.visualstudio.com/docs/cpp/config-mingw).
- Direct download link: [MinGW-w64](https://github.com/msys2/msys2-installer/releases/download/2024-01-13/msys2-x86_64-20240113.exe).
Install from PyPI:

```bash
pip install pyaudio
pip install git+https://github.com/stlukey/whispercpp.py
pip install llama-assistant
```
Or install from source:
1. Clone the repository:
```bash
git clone https://github.com/vietanhdev/llama-assistant.git
cd llama-assistant
```
2. Install the required dependencies and the package:
```bash
pip install pyaudio
pip install git+https://github.com/stlukey/whispercpp.py
pip install -r requirements.txt
pip install .
```
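
Whichever install path you choose, you can optionally run a short Python check to confirm that PyAudio was built against PortAudio correctly and can see an input device. This is a minimal sketch using only PyAudio's standard device-query calls; the devices it lists will depend on your machine:

```python
# Sanity check: list the audio input devices PyAudio can see.
# If this raises an error or prints no input devices, revisit the
# PortAudio installation steps above.
import pyaudio

pa = pyaudio.PyAudio()
try:
    for i in range(pa.get_device_count()):
        info = pa.get_device_info_by_index(i)
        if info.get("maxInputChannels", 0) > 0:
            print(f"Input device {i}: {info['name']}")
finally:
    pa.terminate()
```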
Speed Hack for Apple Silicon (M1, M2, M3) users:
- Install Xcode:
```bash
# Check the path of your Xcode install:
xcode-select -p
# If Xcode is installed, this prints something like:
# /Applications/Xcode-beta.app/Contents/Developer
# If Xcode is missing, install it (this can take a while):
xcode-select --install
```
- Build `llama-cpp-python` with METAL support:
```bash
pip uninstall llama-cpp-python -y
CMAKE_ARGS="-DGGML_METAL=on" pip install -U llama-cpp-python --no-cache-dir
# You should now have llama-cpp-python v0.1.62 or higher installed
# llama-cpp-python 0.1.68
```
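
To confirm which `llama-cpp-python` build ended up in your environment, you can check the installed version from Python. This is a minimal sketch using only the standard library; it reports the package version, not whether Metal offload is actually active at runtime:

```python
# Report the installed llama-cpp-python version via package metadata.
from importlib.metadata import PackageNotFoundError, version

try:
    print("llama-cpp-python", version("llama-cpp-python"))
except PackageNotFoundError:
    print("llama-cpp-python is not installed in this environment")
```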
Usage
Run the assistant using the following command:
```bash
llama-assistant

# Or run it as a Python module:
python -m llama_assistant.main
```
Use the global hotkey (default: Cmd+Shift+Space) to quickly access the assistant from anywhere on your system.
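
You can also start the assistant from your own Python code by invoking the same module that `python -m llama_assistant.main` runs. Note that the `main()` entry-point name below is an assumption based on that invocation; check `llama_assistant/main.py` in your installed version if it differs:

```python
# Launch the assistant programmatically. The `main()` name is assumed
# from the `python -m llama_assistant.main` invocation above and may
# differ between versions.
from llama_assistant import main

if __name__ == "__main__":
    main.main()
```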
Configuration
The assistant's settings can be customized by editing the `settings.json` file located in your home directory: `~/llama_assistant/settings.json`.
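
To inspect your current settings before editing, you can pretty-print the file from Python. This is a minimal sketch; the keys inside `settings.json` depend on your installed version, so none are assumed here:

```python
# Pretty-print the assistant's settings file, if it already exists.
# The path comes from the Configuration section above.
import json
from pathlib import Path

settings_path = Path.home() / "llama_assistant" / "settings.json"
if settings_path.exists():
    print(json.dumps(json.loads(settings_path.read_text()), indent=2))
else:
    print(f"No settings file found at {settings_path}")
```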
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the GPLv3 License - see the LICENSE file for details.
Acknowledgements
Star History
Contact