IntelligenzaArtificiale / Free-Auto-GPT

Free Auto GPT with NO paid APIs is a repository that offers a simple version of Auto GPT, an autonomous AI agent capable of performing tasks independently. Unlike other versions, our implementation does not rely on any paid OpenAI API, making it accessible to anyone.
MIT License

RuntimeError: The model is currently loading, please re-run the query. #135

Closed: thisisloze closed this issue 1 year ago

thisisloze commented 1 year ago

Have you already searched for your ISSUE among the resolved ones?

What version of Python do you have?

What version of operating system do you have?

What type of installation did you perform?

Desktop (please complete the following information):

Describe the bug

```
Traceback (most recent call last):
  File "/root/Free-AUTOGPT-with-NO-API/AUTOGPT.py", line 314, in <module>
    agent.run([input("Enter the objective of the AI system: (Be realistic!) ")])
  File "/usr/local/lib/python3.10/dist-packages/langchain/experimental/autonomous_agents/autogpt/agent.py", line 91, in run
    assistant_reply = self.chain.run(
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 259, in run
    return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 145, in __call__
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 139, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 78, in generate
    prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 106, in prep_prompts
    prompt = self.prompt.format_prompt(**selected_inputs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/prompts/chat.py", line 147, in format_prompt
    messages = self.format_messages(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/experimental/autonomous_agents/autogpt/prompt.py", line 51, in format_messages
    relevant_docs = memory.get_relevant_documents(str(previous_messages[-10:]))
  File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/base.py", line 392, in get_relevant_documents
    docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/faiss.py", line 256, in similarity_search
    docs_and_scores = self.similarity_search_with_score(query, k)
  File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/faiss.py", line 225, in similarity_search_with_score
    embedding = self.embedding_function(query)
  File "/root/Free-AUTOGPT-with-NO-API/Embedding/HuggingFaceEmbedding.py", line 39, in newEmbeddingFunction
    embeddings = newEmbeddings(texts)
  File "/usr/local/lib/python3.10/dist-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/usr/local/lib/python3.10/dist-packages/retry/api.py", line 73, in retry_decorator
    return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter,
  File "/usr/local/lib/python3.10/dist-packages/retry/api.py", line 33, in __retry_internal
    return f()
  File "/root/Free-AUTOGPT-with-NO-API/Embedding/HuggingFaceEmbedding.py", line 34, in newEmbeddings
    raise RuntimeError(
RuntimeError: The model is currently loading, please re-run the query.
```
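For context: the Hugging Face Inference API answers with HTTP 503 (and an `estimated_time` field in the JSON body) while a model is still being loaded onto their servers, and the repo's `HuggingFaceEmbedding.py` converts that into the `RuntimeError` above so the `retry` decorator re-runs the call. Below is a minimal standalone sketch of the same wait-and-retry pattern; `embed_with_retry` and `max_wait` are illustrative names, not part of the repository:

```python
import time
import requests

API_URL = "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2"
HEADERS = {"Authorization": "Bearer <your-hf-token>"}  # replace with your own token

def embed_with_retry(texts, max_wait=120):
    """Call the feature-extraction endpoint, waiting while the model loads."""
    deadline = time.time() + max_wait
    while True:
        resp = requests.post(
            API_URL,
            headers=HEADERS,
            # wait_for_model asks the API to hold the request until the model is ready,
            # so the 503 branch below is mostly a fallback
            json={"inputs": texts, "options": {"wait_for_model": True}},
        )
        if resp.status_code == 200:
            return resp.json()  # one embedding vector per input string
        # 503 with "estimated_time" means the model is still loading; sleep and retry
        if resp.status_code == 503 and time.time() < deadline:
            wait = resp.json().get("estimated_time", 10)
            time.sleep(min(wait, max(deadline - time.time(), 1)))
            continue
        resp.raise_for_status()
```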



jacobweiss2305 commented 1 year ago

+1, I was using the default model in the repo: obrizum/all-MiniLM-L6-v2

DaveMBush commented 1 year ago

That model currently seems to be broken.

I changed it to the one that is commented out (the line above it) and got past this error.

I.e., in HuggingFaceEmbedding.py:

model_id = "sentence-transformers/all-MiniLM-L6-v2"
# model_id = "obrizum/all-MiniLM-L6-v2"
api_url = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{model_id}"
headers = {"Authorization": f"Bearer {hf_token}"}
dhnam commented 1 year ago

Check whether you have included "HUGGINGFACE_TOKEN". It looks like this is necessary no matter which model you use.
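A quick sanity check along these lines may help; this assumes the token lives in a `.env` file read with python-dotenv (adjust if you supply it some other way):

```python
import os
from dotenv import load_dotenv

load_dotenv()  # pick up variables from a local .env file, if present
hf_token = os.getenv("HUGGINGFACE_TOKEN")
if not hf_token:
    raise SystemExit(
        "HUGGINGFACE_TOKEN is not set. Create a free token at "
        "https://huggingface.co/settings/tokens and add it to your .env file."
    )
print("HUGGINGFACE_TOKEN found.")
```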