Open xyzaf1 opened 1 year ago
It's the same error as issue #136. I have a workaround for it.
@Dex94 what is the workaround? Where can I find it? Please tell me it's not in #136 and I'm just too blind to find it...
btw: this is just my second account, for my phone, just don't ask why
There are too many errors in the code; it's not worth fixing only this one. I know this error appears when you use Bing Chat, but it's not the only one: there is also a cookie-path error whose fix I'm not sure is correct, and an error loading HuggingChat... too many errors, which makes this repo unusable at the moment.
@Dex94 @m4conSpeed @xyzaf1 here is the fix (got this via email and don't see it mentioned even where I originally reported the issue)
Use the command `python -m pip install EdgeGPT==0.4.1`.
You are right, there are lots of things that are currently not working. Some of those are because dependencies shifted since this version was released. I personally hate the Python environment for this reason. But that's what the AI community is primarily using right now. 🤷
@DaveMBush thanks for the fix, I'll try it later. Did you get it fully working, or did you "only" fix that issue? Is it really an alternative to AutoGPT (things like saving responses directly to files, or reading and editing files)?
@DaveMBush you can also use EdgeGPT==0.9.2 or the latest version,
but you have to search for `from EdgeGPT` in bingapi and replace it with `from EdgeGPT.EdgeGPT` (see below).
Then, on line 47, if you delete `cookie_path=self.cookiepath` the cookie-path error disappears, but an error loading the model persists... too many errors, and it takes time to repair them all.
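If it helps, the rename above can be scripted instead of done by hand. A minimal sketch: `fix_edgegpt_import` and the demo file name are mine, not the repo's; in practice you would point it at the bingapi file mentioned above.

```python
from pathlib import Path

# Sketch of the import rename described above: EdgeGPT >= 0.9 moved the
# module, so "from EdgeGPT import ..." becomes "from EdgeGPT.EdgeGPT import ...".
def fix_edgegpt_import(path: Path) -> None:
    text = path.read_text()
    path.write_text(text.replace("from EdgeGPT import", "from EdgeGPT.EdgeGPT import"))

# Demo on a scratch file so nothing in the repo is touched.
demo = Path("bingapi_demo.py")
demo.write_text("from EdgeGPT import Chatbot, ConversationStyle\n")
fix_edgegpt_import(demo)
print(demo.read_text())  # prints: from EdgeGPT.EdgeGPT import Chatbot, ConversationStyle
```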
Is there a better project that is more well maintained and up to date?
For AutoGPT with no API, there is no other repo for now. Another error: the README says to use `python`, but that gives a syntax error, so to run AutoGPT from the terminal you need to use `python3`.
What if we use an older release, would that work? This version has many errors.
There is this: https://github.com/aorumbayev/autogpt4all
It's not the same thing: this repo uses a token for ChatGPT and other common AIs; that one uses different models and local resources, so it's not an alternative to the API. In its current state, EdgeGPT with the recent update doesn't need the correction, so `EdgeGPT.EdgeGPT` is no longer valid; it was a temporary fix. Some people may hit a langchain error. To fix it, install 0.0.189: `pip install langchain==0.0.189`. I don't know what breaks from version 0.0.190 onward; it needs further research. Maybe we can discuss all the errors in the Discussions section to list them and look for fixes.
There are a lot of errors in AutoGPT.py alone.
I understand that the whole thing runs locally, but my point is that it does the main "AutoGPT" thing for free (locally).
Neverinstall is a free Linux machine in your browser; we could try running this on that. I'm on Windows at the moment.
@DaveMBush @iamashwin99 @Dex94 @m4conSpeed @Fitsbit I truly apologize. Unfortunately, I've had a lot of problems in this period. I have now updated the repository, trying to fix the errors you pointed out to us.
We apologize if the code is not perfect and has errors, redundancies, etc.
But I'm only 20 years old, I run the largest Italian site on AI https://www.intelligenzaartificialeitalia.net/ , I work as a consultant for large companies, I maintain three large open-source projects, and now with this project I'm also trying to break down the economic barriers imposed by big tech companies like Google, OpenAI, and Bing. (On top of all this I also attend university and am about to graduate.)
If you use the paid APIs you will naturally have fewer problems. But with this repository you can use the same LLMs that OpenAI, Bard, and Bing use, to test your projects before putting them into production. All for free!
Let me know if the problems persist :)
A hug and thanks to all for the interest🤗🤗
Is this online or offline? I'm guessing it's offline.
Yay, a new error... I had to install a bunch more modules just to get another error. I asked Copilot and saw no file. Any ideas?
Traceback (most recent call last):
  File "c:\Users\Traceback\Documents\automations\Free-Auto-GPT\AUTOGPT.py", line 301, in <module>
I tried clearing my cache but Windows gave an error: "is unavailable. If the location is on this PC, make sure the device or drive is connected or the disc is inserted, and then try again. If the location is on a network, make sure you're connected to the network or the Internet, and then try again. If the location still can't be found, it might have been moved or deleted."
The core logic has changed; try reinstalling everything from scratch.
Thank you for being very active again; your project has a noble cause. Is this version of AutoGPT connected to the internet?
I keep getting the same error even after clearing my cache. Any ideas? I installed the modules etc. This time I feel it's a problem with my machine, but it's giving an error for a nonexistent file.
Then you need to understand a distinction. Autonomous agents are based on two great pillars:
1. LLM models. Thanks to the LLM, the agent can choose which tool to use, produce outputs, and run autonomously.
2. Tools, which are skills (or functions) given to the agent to perform tasks, such as writing files or searching the internet.
To answer your question: internet access can come through the LLM itself (for example Bard, Bing, and GPT-4 with browsing all have internet access), while HuggingChat and GPT-3 do not.
So to give GPT-3 or HuggingChat internet access we use tools like DuckDuckGo, which isn't great, but it's free.
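The two pillars above can be sketched in a few lines of plain Python. Everything here (`fake_llm`, the tool functions) is a hypothetical stand-in, not code from this repo; the real project wires a real LLM and real tools into the same loop.

```python
# Pillar 2: tools are plain functions given to the agent.
def write_file(arg: str) -> str:
    return f"wrote {arg}"

def web_search(arg: str) -> str:
    return f"search results for {arg}"

TOOLS = {"write_file": write_file, "web_search": web_search}

# Pillar 1: the LLM chooses which tool to use. A real model reasons about
# the objective; this stub just keyword-matches for demonstration.
def fake_llm(objective: str) -> str:
    return "web_search" if "search" in objective else "write_file"

def run_agent(objective: str) -> str:
    tool_name = fake_llm(objective)     # LLM picks the tool
    return TOOLS[tool_name](objective)  # tool performs the task

print(run_agent("search the web for LLM news"))
```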
@Fitsbit we can't figure out what your error is. Please give us as much detail as possible; just telling us you cleared the cache doesn't let us pinpoint it.
I've been working with LLMs for a while; I apologize if my answer was "vague", it's currently 12 AM for me. I noticed DuckDuckGo in the autogpt.py file, but it seems to be running via cookies.
@Fitsbit what do you mean when you say DuckDuckGo works via cookies?
Here's my full error:

Traceback (most recent call last):
  File "c:\Users\Traceback\Documents\automations\Free-Auto-GPT\AUTOGPT.py", line 301, in <module>
    agent.run([input("Enter the objective of the AI system: (Be realistic!) ")])
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\experimental\autonomous_agents\autogpt\agent.py", line 91, in run
    assistant_reply = self.chain.run(
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 261, in run
    return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 147, in __call__
    raise e
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 141, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py", line 74, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py", line 83, in generate
    prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py", line 111, in prep_prompts
    prompt = self.prompt.format_prompt(**selected_inputs)
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\prompts\chat.py", line 152, in format_prompt
    messages = self.format_messages(**kwargs)
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\experimental\autonomous_agents\autogpt\prompt.py", line 46, in format_messages
    used_tokens = self.token_counter(base_prompt.content) + self.token_counter(
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\base_language.py", line 90, in get_num_tokens
    return len(self.get_token_ids(text))
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\base_language.py", line 86, in get_token_ids
    return _get_token_ids_default_method(text)
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\base_language.py", line 25, in _get_token_ids_default_method
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1760, in from_pretrained
    resolved_vocab_files[file_id] = cached_file(
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\utils\_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\file_download.py", line 1275, in hf_hub_download
    with FileLock(lock_path):
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_api.py", line 255, in __enter__
    self.acquire()
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_api.py", line 213, in acquire
    self._acquire()
  File "C:\Users\Traceback\AppData\Local\Programs\Python\Python310\lib\site-packages\filelock\_windows.py", line 27, in _acquire
    fd = os.open(self.lock_file, flags, self._context.mode)
OSError: [Errno 22] Invalid argument: 'C:\Users\Traceback/.cache\huggingface\hub\models--gpt2\blobs\W/"1f1d9aaca301414e7f6c9396df506798ff4eb9a6.lock'
That is the cache, and there's a lock file. I don't see the lock file when I open File Explorer. This isn't an issue with the code, it's an issue on my end, but I can't figure out how to fix it.
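One thing that might be worth trying (an assumption on my part, not a confirmed fix): the OSError in the traceback comes from an invalid lock-file path under the Hugging Face cache, and `huggingface_hub`/`transformers` let you relocate that cache with environment variables, as long as they are set before those libraries are imported.

```python
import os
import tempfile

# Hypothetical workaround: point the Hugging Face cache at a clean, writable
# directory *before* transformers / huggingface_hub are imported, so the
# gpt2 tokenizer download and its .lock file land in a valid path.
cache_dir = os.path.join(tempfile.gettempdir(), "hf-cache")
os.makedirs(cache_dir, exist_ok=True)
os.environ["HF_HOME"] = cache_dir
os.environ["TRANSFORMERS_CACHE"] = cache_dir  # honored by older transformers versions

# ...import transformers / run AUTOGPT.py only after this point...
print(os.environ["HF_HOME"])
```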
which LLM model did you select?
I've been trying all of them; each time I select one, I get the same error.
In this one, ChatGPT.
thank you for being here and helping me
You can try running it on Colab and see if it gives you the same error, or try running BabyAGI and let us know.
Colab gives me an import error, but I don't have Colab Pro so I can't access the terminal. I'll try BabyAGI now.
Or try installing Python 3.11 and running the following command: `python3.11 -m pip install -r requirements.txt`
You don't need Colab Pro; share the Colab error with a screenshot.
BabyAGI is also giving an error, but it's related to a login issue. It's very late for me and I need to go to bed. Will you be online at the same time tomorrow?
Take it easy, don't worry. Today we are still working on the project and trying to understand the error. Don't hesitate to write to us. Have a good night, friend ❤🤗
Thank you for being supportive, I support this project and its goal. I am a big fan of the AutoGPT project.
Now the minimum Python version is 3.9... I read something different somewhere... but with the introduction of revChatGPT, things have changed.
There is the reverse-proxy API key thing.
Running it in Colab gives the original EdgeGPT error.
BREAKING NEWS: I got HF to work, but everything else is still producing errors. Is this normal?
Update: it's not doing anything related to my prompt. Well, kind of: it understands, but then does its own thing.
This doesn't work anymore because Microsoft pulled EdgeGPT.
You might as well accept that you are going to have to use OpenAI. Hugging Face models in LangChain don't even work properly.
> This doesn't work anymore because Microsoft pulled EdgeGPT.
Yes.
Replace `from EdgeGPT import Chatbot, ConversationStyle` with `from EdgeGPT.EdgeGPT import Chatbot, ConversationStyle`. The error should be fixed now.
I'm getting this error using Python 3.11.4. requirements.txt is, of course, installed.