Open-source assistant-style large language models that run locally on CPU
:green_book: Technical Report 3: GPT4All Snoozy and Groovy
:green_book: Technical Report 2: GPT4All-J
:green_book: Technical Report 1: GPT4All
:snake: Official Python Bindings
:computer: Official Typescript Bindings
:speech_balloon: Official Chat Interface
:speech_balloon: Official Web Chat Interface
🦜️🔗 Official Langchain Backend
GPT4All is made possible by our compute partner Paperspace.
Run on an M1 Mac (not sped up!)
GPT4All welcomes contributions, involvement, and discussion from the open source community! Please see CONTRIBUTING.md and follow the issue, bug report, and PR markdown templates.
Check the project Discord, ask the project owners, or look through existing issues/PRs to avoid duplicate work.
Please make sure to tag all of the above with relevant project identifiers or your contribution could potentially get lost.
Example tags: backend, bindings, python-bindings, documentation, etc.
Run any GPT4All model natively on your home desktop with the auto-updating desktop chat client. See the website for an exhaustive list of models.
Direct Installer Links:
If you have older hardware that only supports AVX and not AVX2, you can use these.
Find the most up-to-date information on the GPT4All Website
```bash
pip install gpt4all
```

```python
import gpt4all

# Load the GPT4All-J Groovy model (downloaded on first use)
gptj = gpt4all.GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Request a chat completion for a single user message
messages = [{"role": "user", "content": "Name 3 colors"}]
gptj.chat_completion(messages)
```
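To work with the reply programmatically, you can pull the text out of the returned value. A minimal sketch, assuming `chat_completion` returns an OpenAI-style dict with a `choices` list (check the Python bindings documentation for the exact schema):

```python
# Sketch: assumes an OpenAI-style response dict; the exact schema is
# defined by the gpt4all Python bindings, not guaranteed here.
response = gptj.chat_completion(messages)
print(response["choices"][0]["message"]["content"])
```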
Please see GPT4All-J Technical Report for details.
We have released updated versions of our GPT4All-J
model and training data.
- `v1.0`: The original model trained on the v1.0 dataset
- `v1.1-breezy`: Trained on a filtered dataset where we removed all instances of "AI language model"
- `v1.2-jazzy`: Trained on a filtered dataset where we also removed instances like "I'm sorry, I can't answer..." and "AI language model"

The model and data versions can be specified by passing a `revision` argument. For example, to load the `v1.2-jazzy` model and dataset, run:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM

# The dataset and the model weights live in separate Hugging Face repos
dataset = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision="v1.2-jazzy")
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
```
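With the model loaded, a quick generation serves as a sanity check. A minimal sketch, assuming a tokenizer is published alongside the nomic-ai/gpt4all-j weights (the prompt and `max_new_tokens` are illustrative choices, not from the original):

```python
from transformers import AutoTokenizer

# Assumption: the model repo bundles its tokenizer files
tokenizer = AutoTokenizer.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")

inputs = tokenizer("Name 3 colors:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```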
Fine-tuning is launched with Accelerate and DeepSpeed, using the configs shipped in the repository:

```bash
accelerate launch --dynamo_backend=inductor --num_processes=8 --num_machines=1 --machine_rank=0 --deepspeed_multinode_launcher standard --mixed_precision=bf16 --use_deepspeed --deepspeed_config_file=configs/deepspeed/ds_config_gptj.json train.py --config configs/train/finetune_gptj.yaml
```
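This runs single-node training across 8 processes with bf16 mixed precision and the inductor dynamo backend; the DeepSpeed and training hyperparameters come from ds_config_gptj.json and finetune_gptj.yaml under configs/.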
If you utilize this repository, models, or data in a downstream project, please consider citing it with:
```bibtex
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```