GPT4All_Medium

Repo of the code from the Medium article: https://artificialcorner.com/gpt4all-is-the-local-chatgpt-for-your-documents-and-it-is-free-df1016bc335


Update and bug fixes - 2023.06.05

The ggml-model-q4_0.bin model has changed a lot over the past weeks, and you may get an error even when just loading it.
You can use different embeddings to create the vector index (in that case you don't need ggml-model-q4_0.bin at all). Here is how to proceed:
remove the call to that model and replace the embeddings with the Hugging Face ones, keeping in mind the required imports shown below.

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import GPT4All
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# assign the path for the GPT4All model (the Alpaca model for the embeddings is no longer needed)
gpt4all_path = './models/gpt4all-converted.bin'
## REMOVED ## llama_path = './models/ggml-model-q4_0.bin'
# Callback manager for handling the calls to the model
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# create the embedding object with the Hugging Face sentence-transformers model
## REMOVED ## embeddings = LlamaCppEmbeddings(model_path=llama_path)
embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')
# create the GPT4All llm object
llm = GPT4All(model=gpt4all_path, callback_manager=callback_manager, verbose=True)

Then create the vector index with these embeddings, as in the sketch below.
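A minimal sketch of that step, assuming LangChain's FAISS vector store (the faiss-cpu package must be installed); the source file path, chunk sizes, and index folder name are illustrative placeholders, not from the article:

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# read the source document (./docs/doc.txt is a placeholder path)
with open('./docs/doc.txt', encoding='utf-8') as f:
    text = f.read()
# split it into overlapping chunks so each one fits the embedding model
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
texts = splitter.split_text(text)
# embed the chunks and build the FAISS index with the embeddings object created above
index = FAISS.from_texts(texts, embeddings)
# persist the index to disk for later reuse
index.save_local('my_faiss_index')

You can reload the saved index later with FAISS.load_local('my_faiss_index', embeddings).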

Update and bug fixes - 2023.05.30

Buğra Çakır reported an issue running the code with Python 3.11.3 (main, May 24 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] on Linux.
He solved it by installing a different llama-cpp-python version:

pip install llama-cpp-python==0.1.48
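You can confirm which version actually ended up installed with:

pip show llama-cpp-python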

Update and bug fixes - 2023.05.25

This issue usually affects only Windows users. When installing llama-cpp-python (required by
LangChain for the Llama embeddings), the CMake C compiler is not installed by default on Windows, so you cannot build the package from source.
On Mac (with the Xcode tools) and on Linux, the C compiler is usually already available on the OS.
To avoid the issue you MUST use a precompiled wheel.
Go to https://github.com/abetlen/llama-cpp-python/releases
and look for the compiled wheel matching your architecture and Python version - you MUST take wheel version 0.1.49,
because higher versions are not compatible.

In my case I have Windows 10, 64 bit, Python 3.10,
so my file is llama_cpp_python-0.1.49-cp310-cp310-win_amd64.whl
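Once downloaded, install the wheel directly with pip; the filename below is the one from the Windows 10 / Python 3.10 example above:

pip install llama_cpp_python-0.1.49-cp310-cp310-win_amd64.whl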

Troubleshooting Section

Update and bug fixes - 2023.05.23

pip install pygpt4all==1.0.1 
pip install pygptj==1.0.10 
pip install pyllamacpp==1.0.6
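An optional sanity check (not from the original article) that the pinned packages import correctly:

python -c "import pygpt4all, pygptj, pyllamacpp"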