GiusTex / EdgeGPT

Extension for Text Generation Webui based on EdgeGPT, a reverse engineered API of Microsoft's Bing Chat AI

Sorry, you need to login first to access this service. #12

Closed Simplegram closed 1 year ago

Simplegram commented 1 year ago

I have a login issue with EdgeGPT. After doing a clean install and updating the EdgeGPT extension to 0.6.8, the issue temporarily disappeared, then started appearing again. I have looked into #5, but I don't really know where to log in next; I have already logged in to https://bing.com, Microsoft Edge, and Windows 11. Below is my oobabooga log. I removed a few OOM errors; let me know if those could also be a factor. I changed the activation word to Bing in the last two interactions.

Starting arguments: --auto-devices --chat --xformers --sdp-attention --extensions EdgeGPT long_term_memory openai

Models tested:

Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: einops in e:\projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages (0.6.1)
WARNING:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
bin E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
INFO:Loading settings from settings.json...
INFO:Loading the extension "EdgeGPT"...

Thanks for using the EdgeGPT extension! If you encounter any bug or you have some nice idea to add, write it on the issue page here: https://github.com/GiusTex/EdgeGPT/issues
INFO:Loading the extension "long_term_memory"...

-----------------------------------------
IMPORTANT LONG TERM MEMORY NOTES TO USER:
-----------------------------------------
Please remember that LTM-stored memories will only be visible to the bot during your NEXT session. This prevents the loaded memory from being flooded with messages from the current conversation which would defeat the original purpose of this module. This can be overridden by pressing 'Force reload memories'
----------
LTM CONFIG
----------
change these values in ltm_config.json
{'ltm_context': {'injection_location': 'BEFORE_NORMAL_CONTEXT',
                 'memory_context_template': "{name2}'s memory log:\n"
                                            '{all_memories}\n'
                                            'During conversations between '
                                            '{name1} and {name2}, {name2} will '
                                            'try to remember the memory '
                                            'described above and naturally '
                                            'integrate it with the '
                                            'conversation.',
                 'memory_template': '{time_difference}, {memory_name} said:\n'
                                    '"{memory_message}"'},
 'ltm_reads': {'max_cosine_distance': 0.6,
               'memory_length_cutoff_in_chars': 1000,
               'num_memories_to_fetch': 2},
 'ltm_writes': {'min_message_length': 100}}
----------
-----------------------------------------
INFO:Loading the extension "openai"...
INFO:Loading the extension "gallery"...
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.

Loaded embedding model: all-mpnet-base-v2, max sequence length: 384
Starting OpenAI compatible api:
OPENAI_API_BASE=http://127.0.0.1:5001/v1
INFO:Loading TheBloke_guanaco-33B-GPTQ...
INFO:Found the following quantized model: models\TheBloke_guanaco-33B-GPTQ\Guanaco-33B-GPTQ-4bit.act-order.safetensors
INFO:Replaced attention with xformers_attention
INFO:Loaded the model in 76.28 seconds.

Output generated in 7.84 seconds (2.68 tokens/s, 21 tokens, context 1715, seed 928831495)
Output generated in 13.31 seconds (7.89 tokens/s, 105 tokens, context 1830, seed 1511671731)
`OOM error`
Output generated in 27.42 seconds (9.96 tokens/s, 273 tokens, context 1934, seed 742048406)
Output generated in 13.05 seconds (5.44 tokens/s, 71 tokens, context 2045, seed 880538633)
`OOM error`
Output generated in 3.79 seconds (4.48 tokens/s, 17 tokens, context 1993, seed 1744823770)
`OOM error`
Output generated in 7.28 seconds (4.81 tokens/s, 35 tokens, context 2019, seed 777958075)
Output generated in 6.27 seconds (4.46 tokens/s, 28 tokens, context 2022, seed 1894157071)
Traceback (most recent call last):
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\routes.py", line 414, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\blocks.py", line 1067, in call_function
    prediction = await utils.async_iteration(iterator)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\utils.py", line 339, in async_iteration
    return await iterator.__anext__()
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\utils.py", line 332, in __anext__
    return await anyio.to_thread.run_sync(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\gradio\utils.py", line 315, in run_sync_iterator_async
    return next(iterator)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\chat.py", line 327, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, shared.history, state, regenerate, _continue, loading_message=True)):
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\chat.py", line 321, in generate_chat_reply
    for history in chatbot_wrapper(text, history, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\chat.py", line 230, in chatbot_wrapper
    prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\extensions.py", line 193, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\modules\extensions.py", line 80, in _apply_custom_generate_chat_prompt
    return extension.custom_generate_chat_prompt(text, state, **kwargs)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\extensions\EdgeGPT\script.py", line 171, in custom_generate_chat_prompt
    asyncio.run(EdgeGPT())
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "E:\Projects\cpy\llms\textgen-webui-gpu\text-generation-webui\extensions\EdgeGPT\script.py", line 157, in EdgeGPT
    bot = await Chatbot.create()
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\EdgeGPT.py", line 634, in create
    await _Conversation.create(self.proxy, cookies=cookies),
  File "E:\Projects\cpy\llms\textgen-webui-gpu\installer_files\env\lib\site-packages\EdgeGPT.py", line 411, in create
    raise NotAllowedToAccess(self.struct["result"]["message"])
EdgeGPT.NotAllowedToAccess: Sorry, you need to login first to access this service.


GiusTex commented 1 year ago

How did you make the new clean installation? I did it this way (I'm pasting the instructions from my readme.md): a) Make a new clean install. I don't know exactly which old files need deleting, so I removed most of them: go to TextGenerationWebui\installer_files\env\Lib\site-packages and delete EdgeGPT-your.version.number.dist-info, then scroll down and delete EdgeGPT.py. Then, just in case, go to TextGenerationWebui\text-generation-webui\extensions\EdgeGPT and delete the __pycache__ folder.

b) Install EdgeGPT again: open cmd_windows.bat and type `pip install EdgeGPT`, or `pip install EdgeGPT==0.6.1`, or `pip install EdgeGPT==your.desired.version`.

If you want, you can check the installed version with `conda list EdgeGPT`.

Simplegram commented 1 year ago

Yes, I followed those instructions for the clean install.

GiusTex commented 1 year ago

It's strange. Anyway, if you want to get rid of this, you can always use the cookies; I'm changing the script to let it use them. You also need the old EdgeGPT version (details at the bottom); I could add an option to continue using cookies if you like it. Try creating the cookies.json in the extension folder again, maybe the script finds it even without specifying the path.

  1. Install Cookie Editor for Microsoft Edge.

  2. Go to bing.com, log in to your Microsoft account, and copy the cookies into a file. If you can't find the extension on Microsoft Edge, follow these steps:

    1- Click the puzzle icon;

    2- Click the cookie icon;

    3- Click the fifth option at the top to copy them.

    Now that you have copied them, go inside text-generation-webui\extensions\EdgeGPT and paste the cookie settings into cookies.txt, then rename it to cookies.json and press Enter.
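As a sanity check before restarting the webui, the exported file can be parsed to confirm it is valid JSON and contains Bing's `_U` authentication cookie. This is a hypothetical helper, not part of the extension; it assumes the standard Cookie Editor export format (a JSON list of objects with a "name" field), and the `_U` requirement is what EdgeGPT users generally report, not something confirmed in this thread:

```python
import json

def check_cookies(path):
    """Return True if the Cookie Editor export at `path` parses as JSON
    and contains a cookie named "_U" (assumed to be the one Bing needs)."""
    with open(path, encoding="utf-8") as f:
        cookies = json.load(f)
    names = {c.get("name") for c in cookies}
    return "_U" in names
```

If this returns False, re-export the cookies while logged in to bing.com.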

Edit:

Adding the cookies path to the script doesn't work on its own; we probably also need an older version of the main EdgeGPT script (the one by Archeon). Tell me whether you want to downgrade the EdgeGPT function to use the cookies, or fix the login issue (which shouldn't be required). To downgrade the main EdgeGPT script, start cmd_windows.bat (or the .bat for your platform) and type `pip install EdgeGPT==[your.desired.version]`. This should overwrite the newer script; if you still get the error, try deleting EdgeGPT-your.version.number.dist-info and EdgeGPT.py.
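To confirm which EdgeGPT version actually ended up installed after a downgrade, the environment's own metadata can be queried with `importlib.metadata` (Python 3.8+). A minimal sketch, independent of the extension:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string for `package`, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# e.g. installed_version("EdgeGPT") should match the version pip just installed
```

Running this from cmd_windows.bat's Python avoids guessing which environment `pip` wrote to.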

vchauhan1 commented 1 year ago

I am getting the same error as well. I am running textgen on a remote, CLI-based server. Do I still need to pass the Bing login/cookies to the plugin somewhere?

GiusTex commented 1 year ago

Edit 1:

--listen shouldn't be a problem, it works for me. How are you using the remote CLI server? Is it difficult to set up?

Edit 2:

Without cookies it should also work on a server; I tested it on Colab and it works (wow). Maybe you need to delete the old files? Go to TextGenerationWebui\installer_files\env\Lib\site-packages and delete EdgeGPT-your.version.number.dist-info, then scroll down and delete EdgeGPT.py. Then, just in case, go to TextGenerationWebui\text-generation-webui\extensions\EdgeGPT and delete the __pycache__ folder. Some of these files require the cookies, so we need to delete and update them.
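The manual cleanup described above can be done more safely by listing the leftover files first, before deleting anything. This is an illustrative sketch (the site-packages path varies per install; nothing here is the extension's own code):

```python
from pathlib import Path

def find_stale_edgegpt_files(site_packages):
    """Return leftover EdgeGPT artifacts under site-packages: any
    EdgeGPT-*.dist-info metadata directories plus the EdgeGPT.py module."""
    root = Path(site_packages)
    stale = list(root.glob("EdgeGPT-*.dist-info"))
    module = root / "EdgeGPT.py"
    if module.exists():
        stale.append(module)
    return stale
```

Reviewing the returned paths before removing them avoids deleting unrelated packages by accident.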

GiusTex commented 1 year ago

If you want you can use the colab version: Open In Colab

Simplegram commented 1 year ago

Okay. After doing a clean install of the EdgeGPT extension, installing EdgeGPT 0.6.10, and placing cookies.json inside the extension folder, I had not run into any issues. But as I'm writing this, the login error showed up again, after 12 straight questions without error.


Using the latest EdgeGPT 0.7.1, though, renders the EdgeGPT extension sort of unresponsive: no reply and no error logs after pressing the generate button, just nothing even after minutes. Only the "Is searching..." message showed up.

GiusTex commented 1 year ago

Can you print the Bing output in the console? And maybe the character output too; I don't remember if there is something similar. Maybe the replies were generated but not shown, so printing them may be a first step.

Simplegram commented 1 year ago

I'm sorry, I didn't save the logs from before.

Changing script.py at line 158 to something like the code below has worked so far, through 17 questions. No errors yet. I will reply with the logs as soon as I encounter a further error.

import json  # needed at the top of script.py

cookies = json.loads(open("path/to/cookies/json", encoding="utf-8").read())
bot = await Chatbot.create(cookies=cookies)

I need to specify the exact path of the cookies.json for it to work. And don't forget to put `import json` at the top of the file, in case anyone wants to try.
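Since the absolute path has to be spelled out by hand, one alternative (a sketch, not the extension's actual code; the helper name is made up) is to resolve cookies.json relative to the extension's own script.py, so the path keeps working if the install is moved:

```python
import json
from pathlib import Path

def load_cookies(script_file, name="cookies.json"):
    """Load a cookies file from the same folder as the calling script."""
    path = Path(script_file).resolve().parent / name
    return json.loads(path.read_text(encoding="utf-8"))

# Inside script.py this might be used as:
#   cookies = load_cookies(__file__)
#   bot = await Chatbot.create(cookies=cookies)
```

`__file__` always points at the extension's own folder, so no user-specific path needs to be hard-coded.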

GiusTex commented 1 year ago

It's similar to the old version of the script, where the cookies path was specified. I thought of adding it again too, but I got an error; I assumed I had to downgrade EdgeGPT, but apparently you didn't have to, interesting. I could add a switch with your code, to let users choose whether to use cookies or not; this could help other users with the same issue. Tell me if it continues to work; maybe in the meantime I'll create the switch.

GiusTex commented 1 year ago

I added the option to turn on cookies; tell me if it works for you. If it doesn't, you can open the issue again.