xtekky / gpt4free

The official gpt4free repository | various collection of powerful language models
https://g4f.ai
GNU General Public License v3.0
59.87k stars 13.2k forks

ChatGPT provider fails to read .har file #1896

Closed plia7 closed 1 month ago

plia7 commented 4 months ago

Hello.

It seems like it doesn't work anymore. I updated g4f to the latest version and added a new HAR file, but I'm still getting a message that the HAR file is missing.

me:~# Successfully installed g4f-0.3.0.6

me: chatgpt "test"

GPT: Traceback (most recent call last):
  File "/root/./mnt/docs/chatgpt.py", line 63, in <module>
    for chunk in stream:
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 216, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 45, in await_callback
    return await callback()
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 383, in create_async_generator
    cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 184, in get_default_model
    raise MissingAuthError('Add a "api_key" or a .har file' if cls._api_key is None else "Invalid api key")
g4f.errors.MissingAuthError: Add a "api_key" or a .har file

Does it still work for you?

Thanks.

hlohaus commented 4 months ago

Is the .har file in /mnt/docs?

plia7 commented 4 months ago

Is the .har file in /mnt/docs?

Yes, it's in the same place I always put it, and it worked before. I'm also obtaining the HAR file exactly the way I used to, so I'm not sure why it doesn't work anymore.

plia7 commented 4 months ago

Is the .har file in /mnt/docs?

Are you able to reproduce the issue on your end? Is there an alternative way to supply the login credentials without the HAR file, such as an email and password or a session key?

Thanks.

hlohaus commented 4 months ago

.har files are needed for plus accounts. Others work with NoDriver too. To install NoDriver, use: pip install nodriver.

The .har file must be in the current directory and must end with .har.

plia7 commented 4 months ago

.har files are needed for plus accounts. Others work with NoDriver too. To install NoDriver, use: pip install nodriver.

The .har file must be in the current directory and must end with .har.

I'm using ChatGPT 3.5 (not ChatGPT 4, which requires a Plus account). So do I still need the .har file? If not, what do I need to supply instead?

Yes, it's placed in the same place as before, and it stopped working:

me:/mnt/docs# ls
chat.openai.com.har   
chatgpt.py            
me:/mnt/docs# python3 ./chatgpt.py

You: chatgpt are you here 
New g4f version: 0.3.0.7 (current: 0.3.0.6) | pip install -U g4f

GPT: Traceback (most recent call last):
  File "/root/mnt/docs/./chatgpt.py", line 63, in <module>
    for chunk in stream:
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 216, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 45, in await_callback
    return await callback()
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 383, in create_async_generator
    cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 184, in get_default_model
    raise MissingAuthError('Add a "api_key" or a .har file' if cls._api_key is None else "Invalid api key")
g4f.errors.MissingAuthError: Add a "api_key" or a .har file

Does it work for you?

Thanks.

plia7 commented 4 months ago


@hlohaus Any updates on this?

Thanks.

hlohaus commented 4 months ago

Are your .har files created correctly? The .har file solution works for me and other users.

plia7 commented 4 months ago

Are your .har files created correctly? The .har file solution works for me and other users.

To create the HAR file, I follow these instructions:

Generating a .HAR File
To utilize the OpenaiChat provider, a .har file is required from https://chat.openai.com/. Follow the steps below to create a valid .har file:

Navigate to https://chat.openai.com/ using your preferred web browser and log in with your credentials.

Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac).

With the Developer Tools open, switch to the "Network" tab.

Reload the website to capture the loading process within the Network tab.

Initiate an action in the chat that can be captured in the .har file.

Right-click any of the network activities listed and select "Save all as HAR with content" to export the .har file.

It generates a file that I save as "chatgpt.har" and copy to the /mnt/docs folder where chatgpt.py is located.

It worked before but stopped, and I'm not doing anything differently from what I did before.

Is there a tool or script to fetch the .har from the Chrome browser automatically, to make sure I'm not doing anything wrong when exporting it manually following these instructions?
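In the meantime, here is a quick way to sanity-check an exported .har by parsing it with Python's standard library and confirming it actually captured traffic for the ChatGPT host (a rough check I put together; the exact host name to look for may differ):

```python
import json

def har_hosts(path: str) -> dict:
    """Count HAR entries per request host, to verify the export captured real traffic."""
    with open(path, encoding="utf-8") as f:
        har = json.load(f)
    hosts = {}
    # Per the HAR spec, captured requests live under log.entries[].request.url
    for entry in har.get("log", {}).get("entries", []):
        url = entry.get("request", {}).get("url", "")
        if "://" in url:
            host = url.split("/")[2]
            hosts[host] = hosts.get(host, 0) + 1
    return hosts

# Usage: har_hosts("chat.openai.com.har") should show plenty of entries
# for the ChatGPT host; an empty result means the export went wrong.
```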

I also just updated the g4f package but I still get the same message:

Successfully installed g4f-0.3.0.8
me:/mnt/docs# ls
chatgpt.har           
chatgpt.py            
me:/mnt/docs# chatgpt

You: how are you?

GPT: Traceback (most recent call last):
  File "/root/mnt/docs/chatgpt.py", line 63, in <module>
    for chunk in stream:
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 216, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 45, in await_callback
    return await callback()
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 409, in create_async_generator
    cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 200, in get_default_model
    raise MissingAuthError('Add a "api_key" or a .har file' if cls._api_key is None else "Invalid api key")
g4f.errors.MissingAuthError: Add a "api_key" or a .har file

Is there an alternative way to specify the auth for the ChatGPT 3.5 provider (free account) without using the .HAR file? Do you have Discord to discuss this? Does the full name of the file matter, or is anything ending in .har OK? Is there any way to inspect the .har or debug this case to see why it can't find or read the file?

Thanks.

plia7 commented 4 months ago

@hlohaus I just tried a second Windows PC with Python, and I'm encountering the same issue. So I see this on both a Linux x86 machine and a Windows x64 machine, both with Python installed.

I exported the .har multiple times, sending several messages to ChatGPT before saving it, but I still keep getting the same error message.

It can't be that I'm the only one reporting this issue; this doesn't make any sense. Please fix this bug. Let me know what troubleshooting or debug information you need; I can also share my screen on Discord to show you the problem.

Thanks.

iG8R commented 4 months ago

@plia7 What directory did you save the har file in?

PS. I had the same problem, but version 0.3.0.7 solved it. https://github.com/xtekky/gpt4free/issues/1795#issuecomment-2081705237

plia7 commented 4 months ago

@plia7 What directory did you save the har file in?

PS. I had the same problem, but version 0.3.0.7 solved it. #1795 (comment)

I put it both in the root folder where chatgpt.py is located and inside /mnt/docs. Again, I'm not doing anything differently from before, when it worked; it just stopped working for me. I also upgraded to the latest version, 0.3.0.8, in both environments, but it still doesn't work.

Do you have some basic code snippet that I can try that works for you to make sure my chatgpt.py script is not causing the issue?

Do you have a Discord channel or user I can show you the problem?

@hlohaus @iG8R Is there any guaranteed folder where I can put the .har file, like the Python site-packages directory? Or is there a way to force it to read the file from a specific location?

@iG8R Your issue doesn't sound the same as mine, since yours doesn't say "har file is missing", so I think you had a different problem.

Thanks.

iG8R commented 4 months ago

@plia7 Yes, my issue was a little different. In your case, when it says "har file is missing", IMHO it means it can't find the har file in the specific directory where g4f expects it.

I do not have Linux, but I suggest you try the following on Windows, which works fine for me:

  1. Clone the g4f repository, e.g. to c:\g4f\
  2. Create a virtual environment: enter c:\g4f\ and run python -m venv venv from the command line
  3. Activate the virtual environment: while still inside c:\g4f\, run .\venv\Scripts\activate
  4. Still inside c:\g4f\, run pip install . (don't forget the period at the end)
  5. If needed, also install: pip install uvicorn and pip install fastapi
  6. First, run g4f api --debug without any har file in the c:\g4f\har_and_cookies\ directory
  7. Then copy your har file to the c:\g4f\har_and_cookies\ directory and run g4f api --debug again. You will then see messages about how g4f handles your har file.

[screenshot]

iG8R commented 4 months ago

Do you have some basic code snippet that I can try that works for you to make sure my chatgpt.py script is not causing the issue?

I don't use any code snippets, just API calls from other applications, e.g.

  1. https://github.com/pot-app/pot-desktop Request Path http://127.0.0.1:1337/v1/chat/completions?provider=OpenaiChat Api Key - whitespace

  2. https://github.com/immersive-translate/immersive-translate/

hlohaus commented 4 months ago

Hey, @plia7! How do you run your script? Do you just type python chatgpt.py or do you use an absolute or relative path like python .../my_dir/chatgpt.py? Try changing to your directory first with cd .../my_dir/ and see if that works.

plia7 commented 4 months ago

Hey, @plia7! How do you run your script? Do you just type python chatgpt.py or do you use an absolute or relative path like python .../my_dir/chatgpt.py? Try changing to your directory first with cd .../my_dir/ and see if that works.

Yes, I change to the directory where chatgpt.py is located and then run it with "python3 chatgpt.py", but that's failing. Maybe you could add an option to specify the har file as a parameter, to make sure the package doesn't get confused like that?

Thanks.

plia7 commented 4 months ago

Hey, @plia7! How do you run your script? Do you just type python chatgpt.py or do you use an absolute or relative path like python .../my_dir/chatgpt.py? Try changing to your directory first with cd .../my_dir/ and see if that works.

My friend was able to get this to work in his newly set-up environment after getting the same error about the missing har file even though he had placed it. He updated to 0.3.1.0 and then it started to work. I tried to do the same, but I still get the error. I made sure to remove any har file copies I had so it doesn't pick up the wrong one.

@hlohaus Could you please explain how it determines which HAR file to use? Is there a place I can put the file to ensure it takes priority over any other har file in the running path? I think this is not an environment issue but a code issue that should be fixed, because it's clearly failing to pick up the har file in my case.

Thanks.

hlohaus commented 4 months ago

Hey, why can't you just put the .har file in the current directory? It reads all the .har files, but then only uses the last one it finds that matches alphabetically.
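To illustrate the selection rule (a sketch of the stated behavior, not g4f's actual implementation):

```python
from pathlib import Path
from typing import Optional

def pick_har(directory: str) -> Optional[str]:
    """Sketch of 'last alphabetical .har wins': scan a directory, sort, take the last."""
    hars = sorted(p.name for p in Path(directory).glob("*.har"))
    return hars[-1] if hars else None

# Consequence: a stale export whose name sorts later alphabetically can
# shadow a fresh one, so keep only a single .har file in the directory.
```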

plia7 commented 4 months ago

Hey, why can't you just put the .har file in the current directory? It reads all the .har files, but then only uses the last one it finds that matches alphabetically.

I already placed it in the current directory. It doesn't work.

plia7 commented 4 months ago

@iG8R Thanks for the detailed reply. Could you please explain how you make a REST POST call to this endpoint:

http://127.0.0.1:1337/v1/chat/completions?provider=OpenaiChat

What body do you use?

When I try to make a REST POST call to http://127.0.0.1:1337/v1/chat/completions?provider=OpenaiChat without any body, this is the response I get back:

INFO: 127.0.0.1:64627 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 422 Unprocessable Entity

And this is the response body I get:

{
    "detail": [
        {
            "loc": [
                "body"
            ],
            "message": "Input should be a valid dictionary or object to extract fields from",
            "type": "model_attributes_type"
        }
    ]
}

Thanks.

iG8R commented 4 months ago

@plia7 I don't use any code snippets; all the bodies are "provided" by the apps I use (see the attached screenshot). Try to execute them and check the console logs: is the har file being processed as it should or not?

[screenshot]

plia7 commented 4 months ago

@plia7

I don't use any code snippets, all the bodies are "provided" by the apps I use (see the attached screenshot).

Try to execute them and check from the console logs - is the har file being processed as it should or not?

[screenshot]

Do I have to install both of these programs?

iG8R commented 4 months ago

Do I have to install both of these programs?

Any will do.
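You can also hit the endpoint directly without installing either app. The 422 you got earlier just means the request had no JSON body; a minimal OpenAI-style payload (field names assumed from the OpenAI-compatible schema the g4f API mirrors) looks like:

```python
import json
import urllib.request

# Minimal OpenAI-compatible chat body; an empty body is what triggers the 422.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "how are you?"}],
}

req = urllib.request.Request(
    "http://127.0.0.1:1337/v1/chat/completions?provider=OpenaiChat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:   # uncomment with the g4f API running
#     print(resp.read().decode())
```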

plia7 commented 3 months ago

Do I have to install both of these programs?

Any will do.

Ok, so when I try to add the call in the pot program as in your screenshot, this is the message I get:

C:\Users\me\Documents>C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\Scripts\g4f.exe api --debug
Starting server... [g4f v-0.3.1.0] (debug)
INFO:     Will watch for changes in these directories: ['C:\\Users\\me\\Documents']
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     Started reloader process [27076] using WatchFiles
INFO:     Started server process [5984]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
Using OpenaiChat provider and gpt-3.5-turbo model
INFO:     127.0.0.1:62560 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
  + Exception Group Traceback (most recent call last):
  |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_utils.py", line 87, in collapse_excgroups
  |     yield
  |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 190, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\anyio\_backends\_asyncio.py", line 678, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\uvicorn\protocols\http\httptools_impl.py", line 411, in run_asgi
    |     result = await app(  # type: ignore[func-returns-value]
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
    |     return await self.app(scope, receive, send)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fastapi\applications.py", line 1054, in __call__
    |     await super().__call__(scope, receive, send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\applications.py", line 123, in __call__
    |     await self.middleware_stack(scope, receive, send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\errors.py", line 186, in __call__
    |     raise exc
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\errors.py", line 164, in __call__
    |     await self.app(scope, receive, _send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 189, in __call__
    |     with collapse_excgroups():
    |   File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.752.0_x64__qbz5n2kfra8p0\Lib\contextlib.py", line 158, in __exit__
    |     self.gen.throw(value)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_utils.py", line 93, in collapse_excgroups
    |     raise exc
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 191, in __call__
    |     response = await self.dispatch_func(request, call_next)
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\api\__init__.py", line 84, in authorization
    |     return await call_next(request)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 165, in call_next
    |     raise app_exc
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 151, in coro
    |     await self.app(scope, receive_or_disconnect, send_no_error)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    |     raise exc
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    |     await app(scope, receive, sender)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 756, in __call__
    |     await self.middleware_stack(scope, receive, send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 776, in app
    |     await route.handle(scope, receive, send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 297, in handle
    |     await self.app(scope, receive, send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 77, in app
    |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    |     raise exc
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    |     await app(scope, receive, sender)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 72, in app
    |     response = await func(request)
    |                ^^^^^^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fastapi\routing.py", line 278, in app
    |     raw_response = await run_endpoint_function(
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
    |     return await dependant.call(**values)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\api\__init__.py", line 161, in chat_completions
    |     return JSONResponse((await response).to_json())
    |                          ^^^^^^^^^^^^^^
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\client\async_client.py", line 63, in iter_append_model_and_provider
    |     async for chunk in response:
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\client\async_client.py", line 37, in iter_response
    |     async for chunk in response:
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 419, in create_async_generator
    |     await raise_for_status(response)
    |   File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\requests\raise_for_status.py", line 28, in raise_for_status_async
    |     raise ResponseStatusError(f"Response {response.status}: {message}")
    | g4f.errors.ResponseStatusError: Response 401: {"detail":"Unauthorized"}
    +------------------------------------

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\uvicorn\protocols\http\httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 189, in __call__
    with collapse_excgroups():
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.752.0_x64__qbz5n2kfra8p0\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_utils.py", line 93, in collapse_excgroups
    raise exc
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 191, in __call__
    response = await self.dispatch_func(request, call_next)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\api\__init__.py", line 84, in authorization
    return await call_next(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 165, in call_next
    raise app_exc
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\base.py", line 151, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\starlette\routing.py", line 72, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fastapi\routing.py", line 278, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\api\__init__.py", line 161, in chat_completions
    return JSONResponse((await response).to_json())
                         ^^^^^^^^^^^^^^
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\client\async_client.py", line 63, in iter_append_model_and_provider
    async for chunk in response:
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\client\async_client.py", line 37, in iter_response
    async for chunk in response:
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 419, in create_async_generator
    await raise_for_status(response)
  File "C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\requests\raise_for_status.py", line 28, in raise_for_status_async
    raise ResponseStatusError(f"Response {response.status}: {message}")
g4f.errors.ResponseStatusError: Response 401: {"detail":"Unauthorized"}

Any idea what this means? I tried placing the .har file inside:

C:\Users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\Scripts

But the error indicates something was unauthorized? I used a single space for the api key like you said (and also tried without it; I get the same error). That seems to be in line with the error I get in OpenaiChat.py:

if response.status == 401:
    raise MissingAuthError('Add a "api_key" or a .har file' if cls._api_key is None else "Invalid api key")

Thanks.

plia7 commented 3 months ago

@hlohaus @iG8R Could it be a problem with my ChatGPT account? Could I have been shadow banned or something, so that the .HAR that gets generated doesn't work anymore? Is that possible? Does the error I get indicate it can't find the .HAR file, or could it be that it does find it but the content inside can't be used? Would there be a different error in that case?

The code seems to point to this error:

if response.status == 401:
    raise MissingAuthError('Add a "api_key" or a .har file' if cls._api_key is None else "Invalid api key")

So I get status 401 unauthorized response.
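The two messages in that raise line actually disambiguate this question; a minimal sketch of the same branching (the function name and signature here are illustrative, not g4f internals):

```python
# Mirrors the ternary in the g4f snippet above: a 401 with no stored
# api_key means no credentials were loaded at all (no usable .har file,
# no key), while a 401 with a stored api_key means credentials were
# found but rejected by the server.
def auth_error_message(api_key, status):
    if status != 401:
        return None
    return 'Add a "api_key" or a .har file' if api_key is None else "Invalid api key"
```

So seeing `Add a "api_key" or a .har file` specifically means no credentials were loaded at all, which points at file discovery rather than at the account.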

Moreover, I noticed that if I log in to my ChatGPT account at https://platform.openai.com/playground/chat, enter my question in the “User” text box (just as you would on the regular chat.openai.com site), and press the submit button on the bottom left, I get the message:

"You've reached your usage limit. See your [usage dashboard](https://platform.openai.com/account/usage) and [billing settings](https://platform.openai.com/account/billing) for more details. If you have further questions, please contact us through our help center at help.openai.com."

But other people told me they are able to use this too, and provided a screenshot:

https://imgur.com/a/P4YeuA4

So I wonder if this is related and part of the root cause of the .har file problem I experience (401 unauthorized response). What do you think?

Thanks.

plia7 commented 3 months ago

@hlohaus @iG8R Ok in one environment, I was able to get this to work with a new .HAR I just downloaded, so it must be something with my other two environments. It doesn't seem like a problem with my ChatGPT account.

But why would two environments break like that out of the blue? Everything was working fine prior to that; it doesn't make any sense.

@hlohaus It doesn't make sense that I would need to re-create the environment and re-install Python and g4f just to get this to work. Why would the same .HAR file work in one environment but fail in another?

Could you please fix it so it's environment-agnostic and never fails? Is there a way to force it to use the .HAR file somehow? Can I modify some code to point it at the file directly?

Could you please tell me what to change so I can try it and see if it fixes the problem? I can modify:

C:\users\me\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\Provider\needs_auth\OpenaiChat.py

Just tell me what to change to force it to read my .HAR file that's located in:

C:\Users\me\Documents\chat.openai.com.har

Thanks.

hlohaus commented 3 months ago

Please enable the logging or debug mode. This will allow you to view the .har files that are loaded in the log.

plia7 commented 3 months ago

Please enable the logging or debug mode. This will allow you to view the .har files that are loaded in the log.

How do I do that? I run my script like this:

python3 chatgpt.py

So where do I enable logging for it?

Thanks.

iG8R commented 3 months ago

@plia7

So where do I enable logging for it?

Read once more my post https://github.com/xtekky/gpt4free/issues/1896#issuecomment-2094503579 and repeat it exactly as it was described.

plia7 commented 3 months ago

@plia7

So where do I enable logging for it?

Read once more my post https://github.com/xtekky/gpt4free/issues/1896#issuecomment-2094503579 and repeat it exactly as it was described.

@iG8R Sorry, but I don't want to recreate my environment from scratch; I have other things there that I want to keep.

iG8R commented 3 months ago

@plia7 All these steps do not change your environment in any way, which is why I wrote them there. Once everything is done, all you have to do is delete the g4f directory. You will also find the answer to the question "So where do I enable logging?" there.

plia7 commented 3 months ago

@plia7

All these steps do not change your environment in any way, which is why I wrote them there. Once everything is done, all you have to do is delete the g4f directory.

Also there you will find the answer to the question "So where do I enable logging?".

I'm running it in a Linux x86 emulator, where uninstalling/installing g4f is very slow.

plia7 commented 3 months ago

@hlohaus @iG8R I enabled logging, but it doesn't show which .har file it uses.

I enabled it by adding the logging as follows to my script:

g4f.debug.logging = True # enable logging

Is there anything else I can add to my script to shed some light on why this issue only happens in this environment?

Please don't tell me to reinstall the environment or set up a new virtual environment; I have other stuff in it I want to keep, and reinstalling is too slow. There must be another way to get this to work.

Thanks.


g4f.Provider.Ails supports: (
    model: str,
    messages: Messages,
    stream: bool,
    proxy: str = None,
)

You: test
Using OpenaiChat provider

GPT: Traceback (most recent call last):
  File "/root/mnt/docs/chatgpt.py", line 67, in <module>
    for chunk in stream:
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 216, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 45, in await_callback
    return await callback()
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 411, in create_async_generator
    cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 200, in get_default_model
    raise MissingAuthError('Add a "api_key" or a .har file' if cls._api_key is None else "Invalid api key")
g4f.errors.MissingAuthError: Add a "api_key" or a .har file
hlohaus commented 3 months ago

Yo, I'll create a function that sets the cookies dir for you. This way, it won't search for files in the wrong directory. Sound good?

plia7 commented 3 months ago

Yo, I'll create a function that sets the cookies dir for you. This way, it won't search for files in the wrong directory. Sound good?

Perfect, I appreciate it. I'm sure it could benefit other users when the environment goes "corrupt" like this.

Please let me know if I can do anything to help test the function, or the new package version when it comes out, to see if it fixes the issue for me.

Thanks.

hlohaus commented 3 months ago

Hey, I just dropped the new version! Check out the readme for an example of how to use the set cookies dir function.

plia7 commented 3 months ago

Hey, I just dropped the new version! Check out the readme for an example of how to use the set cookies dir function.

I just tried it, and this is the message that I get now:

me:~# chatgpt
Read .har file: /root/mnt/docs/har_and_cookies/chat.openai.com.har

You: hello world
Using OpenaiChat provider

GPT: Traceback (most recent call last):
  File "/root/mnt/docs/chatgpt.py", line 72, in <module>
    for chunk in stream:
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 216, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/usr/lib/python3.9/site-packages/g4f/providers/base_provider.py", line 45, in await_callback
    return await callback()
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 378, in create_async_generator
    cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 197, in get_default_model
    cls._update_request_args(session)
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 706, in _update_request_args
    cls._cookies[c.key if hasattr(c, "key") else c.name] = c.value
TypeError: 'NoneType' object does not support item assignment

I only placed the .har file inside the folder. (I never needed any other files like cookies, and this same .har file works fine in another environment, so I'm not sure why it's complaining about cookies.)

This is the beginning of my script file:

from g4f.client import Client
from g4f.Provider import OpenaiChat
import os.path
from g4f.cookies import set_cookies_dir, read_cookie_files
import g4f.debug

g4f.debug.logging = True # enable logging
cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies")
set_cookies_dir(cookies_dir)
read_cookie_files(cookies_dir)
.
.
.

I don't think you need cookie files for the ChatGPT provider?

Thanks.
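As an aside, path resolution like the os.path.dirname line in the script above is one way to keep the cookies dir environment-independent; a small sketch of that idea (pure path math, no g4f required, and the "har_and_cookies" name simply follows the snippet above):

```python
import os

# Resolve the har/cookies directory relative to the script itself rather
# than the current working directory, so the lookup does not change when
# the script is launched from a different location.
def cookies_dir_for(script_path, subdir="har_and_cookies"):
    return os.path.join(os.path.dirname(os.path.abspath(script_path)), subdir)
```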

hlohaus commented 3 months ago

Alright. It now finds the .har file. But I don't think it's valid. Do you have an error message? Did you track a chat action? Did you reload the website after opening the browser console?
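One way to check a capture's validity locally: a .har file is plain JSON, so you can look for a backend request that carries an Authorization header. This is a rough sketch, not g4f's actual loader; the "backend-api" URL substring and the header name are assumptions about what a ChatGPT capture typically contains:

```python
import json

def har_has_auth(path):
    # A .har capture is JSON: log.entries[] each hold a request with a
    # url and headers. Look for any backend-api request that carries an
    # Authorization header (i.e. an access token was captured).
    with open(path, encoding="utf-8") as f:
        har = json.load(f)
    for entry in har.get("log", {}).get("entries", []):
        request = entry.get("request", {})
        if "backend-api" in request.get("url", ""):
            for header in request.get("headers", []):
                if header.get("name", "").lower() == "authorization":
                    return True
    return False
```

If this returns False for a capture, the chat action was likely not tracked (or the page wasn't reloaded) when the .har was saved.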

plia7 commented 3 months ago

Alright. It now finds the .har file. But I don't think it's valid. Do you have an error message? Did you track a chat action? Did you reload the website after opening the browser console?

Yes, I did reload the website and track a chat action when capturing the .har file. The same file works in my other environment, so I believe the problem is not with the .har file I'm using but with the current environment.

The error, visible in the stack trace I posted in the previous comment, is:

    cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 197, in get_default_model
    cls._update_request_args(session)
  File "/usr/lib/python3.9/site-packages/g4f/Provider/needs_auth/OpenaiChat.py", line 706, in _update_request_args
    cls._cookies[c.key if hasattr(c, "key") else c.name] = c.value
TypeError: 'NoneType' object does not support item assignment

So looks like it fails here:

cls._cookies[c.key if hasattr(c, "key") else c.name] = c.value

TypeError: 'NoneType' object does not support item assignment

So the cookies dictionary itself (cls._cookies) is None, and assigning into it fails?

But why does it even reach this code if I didn't provide any cookies and they're not applicable to the ChatGPT provider? Please correct me if I'm wrong.
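To make the failure mode concrete: the TypeError comes from assigning into a dict that is actually None. A tiny reproduction, with an obvious defensive fix (class and function names here are illustrative, not the real g4f code):

```python
class Provider:
    # In the failing path, _cookies was never initialized because no
    # cookies were loaded, so it is still None rather than a dict.
    _cookies = None

def update_request_args(provider, cookies):
    # Without this guard, provider._cookies[name] = value raises:
    # TypeError: 'NoneType' object does not support item assignment
    if provider._cookies is None:
        provider._cookies = {}  # defensive fix: start from an empty dict
    for name, value in cookies.items():
        provider._cookies[name] = value
    return provider._cookies
```

Which would mean the code path runs regardless; it only crashes because nothing earlier populated the cookie dict.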

Thanks.

hlohaus commented 3 months ago

The error occurs only when no .har file is found. Consider using OpenaiAccount instead, as it does not check for the free service.

plia7 commented 3 months ago

The error occurs only when no .har file is found. Consider using OpenaiAccount instead, as it does not check for the free service.

@hlohaus Sorry I'm a little bit confused:

Thanks.

hlohaus commented 3 months ago

The cookies are also included in the .har file. Your last error message indicates that the cookies are not available and cannot be updated.

plia7 commented 3 months ago

The cookies are also included in the .har file. Your last error message indicates that the cookies are not available and cannot be updated.

OK, but as I mentioned, the same .har file works fine in another environment, so it's not a problem with the .har file or the cookies in it. It's still a code problem; it shouldn't fail to load a .har/cookies file that works fine elsewhere.

Thanks.

plia7 commented 3 months ago

The cookies are also included in the .har file. Your last error message indicates that the cookies are not available and cannot be updated.

OK, but as I mentioned, the same .har file works fine in another environment, so it's not a problem with the .har file or the cookies in it. It's still a code problem; it shouldn't fail to load a .har/cookies file that works fine elsewhere.

Thanks.

Do you agree @hlohaus? I don't understand why you keep telling me to get a new environment, a virtual environment, a new ChatGPT account, a free account, a paid account. I know this .har works in another environment with the same g4f package, yet for whatever reason it doesn't work in this one; that suggests it's not a setup issue. This supposed "bad environment" is exposing a bug in the g4f package and/or the way it's coded.

Why don't I have this issue with other packages that I use?

Could you please fix it?

Thanks.

@iG8R

66696e656c696665 commented 3 months ago

@plia7 Yes, my problem was slightly different. In your case, when it says "har file is missing", IMHO it means g4f cannot find the har file in the specific directory where it should be.

I don't have Linux, but I suggest you try the following on Windows, which works fine for me:

  1. Clone the g4f repository, e.g. to c:\g4f\
  2. Create a virtual environment: enter c:\g4f\ and run python -m venv venv from the command line
  3. Activate the virtual environment: still inside c:\g4f\, run .\venv\Scripts\activate from the command line
  4. Still inside c:\g4f\, run pip install . from the command line (don't forget the dot at the end)
  5. If needed, also install: pip install uvicorn and pip install fastapi
  6. First run g4f api --debug without any har files in the c:\g4f\har_and_cookies\ directory.
  7. Then copy the har file into the c:\g4f\har_and_cookies\ directory and run g4f api --debug again. After that you will be able to see log messages about how g4f processes your har file.

Ty, bro, this works

github-actions[bot] commented 2 months ago

Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.

iG8R commented 2 months ago

up

plia7 commented 2 months ago

up

@iG8R Do you have this issue too now?

After my first two environments got screwed up, my third environment, which was working, just got screwed up as well. I updated it by running pip install -U g4f and now I get this (0.3.2.1):

C:\Users\myUser\Documents>python chatgpt.py
You: who is dd
Traceback (most recent call last):
  File "C:\Users\myUser\Documents\chatgpt.py", line 50, in <module>
    gpt_response=handle_chat(user_input)
                 ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\myUser\Documents\chatgpt.py", line 9, in handle_chat
    response = g4f.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\myUser\AppData\Local\Programs\Python\Python312\Lib\site-packages\g4f\__init__.py", line 68, in create
    return result if stream else ''.join([str(chunk) for chunk in result])
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\myUser\AppData\Local\Programs\Python\Python312\Lib\site-packages\g4f\providers\base_provider.py", line 223, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\myUser\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 685, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Users\myUser\AppData\Local\Programs\Python\Python312\Lib\site-packages\g4f\providers\base_provider.py", line 52, in await_callback
    return await callback()
           ^^^^^^^^^^^^^^^^
  File "C:\Users\myUser\AppData\Local\Programs\Python\Python312\Lib\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 379, in create_async_generator
    cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\myUser\AppData\Local\Programs\Python\Python312\Lib\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 199, in get_default_model
    raise MissingAuthError('Add a "api_key" or a .har file' if cls._api_key is None else "Invalid api key")
g4f.errors.MissingAuthError: Add a "api_key" or a .har file

I fetched a new har file.

@hlohaus I don't think environments should break like that: they were working fine, and then an update breaks them for no reason (nothing has changed in these environments).

Thanks.

iG8R commented 2 months ago

@plia7 Yes, the same issue.

plia7 commented 2 months ago

@plia7 Yes, the same issue.

I think this time it's related to some recent issues:

https://github.com/xtekky/gpt4free/pull/2054

https://github.com/xtekky/gpt4free/issues/2081

https://github.com/xtekky/gpt4free/issues/2092

Lorodn4x commented 2 months ago

@plia7 Yes, my problem was slightly different. In your case, when it says "har file is missing", IMHO it means g4f cannot find the har file in the specific directory where it should be.

I don't have Linux, but I suggest you try the following on Windows, which works fine for me:

  1. Clone the g4f repository, e.g. to c:\g4f\
  2. Create a virtual environment: enter c:\g4f\ and run python -m venv venv from the command line
  3. Activate the virtual environment: still inside c:\g4f\, run .\venv\Scripts\activate from the command line
  4. Still inside c:\g4f\, run pip install . from the command line (don't forget the dot at the end)
  5. If needed, also install: pip install uvicorn and pip install fastapi
  6. First run g4f api --debug without any har files in the c:\g4f\har_and_cookies\ directory.
  7. Then copy your har file into the c:\g4f\har_and_cookies\ directory and run g4f api --debug again. After that you will be able to see log messages about how g4f processes your har file.

Windows PowerShell (C) Корпорация Майкрософт (Microsoft Corporation). Все права защищены.

Установите последнюю версию PowerShell для новых функций и улучшения! https://aka.ms/PSWindows

PS C:\Users\admin\Desktop\g4f> python -m venv venv PS C:\Users\admin\Desktop\g4f> venv\Scripts\activate (venv) PS C:\Users\admin\Desktop\g4f> pip install . ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.

[notice] A new release of pip is available: 24.0 -> 24.1.1 [notice] To update, run: python.exe -m pip install --upgrade pip (venv) PS C:\Users\admin\Desktop\g4f> dir

Каталог: C:\Users\admin\Desktop\g4f

Mode LastWriteTime Length Name


d----- 07.07.2024 16:54 gpt4free d----- 07.07.2024 16:56 venv

(venv) PS C:\Users\admin\Desktop\g4f> deactivate PS C:\Users\admin\Desktop\g4f> cd gpt4free PS C:\Users\admin\Desktop\g4f\gpt4free> python -m venv venv PS C:\Users\admin\Desktop\g4f\gpt4free> venv\Scripts\activate (venv) PS C:\Users\admin\Desktop\g4f\gpt4free> pip install . Processing c:\users\admin\desktop\g4f\gpt4free Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting requests (from g4f==0.0.0) Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB) Collecting aiohttp (from g4f==0.0.0) Using cached aiohttp-3.9.5-cp312-cp312-win_amd64.whl.metadata (7.7 kB) Collecting brotli (from g4f==0.0.0) Using cached Brotli-1.1.0-cp312-cp312-win_amd64.whl.metadata (5.6 kB) Collecting pycryptodome (from g4f==0.0.0) Using cached pycryptodome-3.20.0-cp35-abi3-win_amd64.whl.metadata (3.4 kB) Collecting aiosignal>=1.1.2 (from aiohttp->g4f==0.0.0) Using cached aiosignal-1.3.1-py3-none-any.whl.metadata (4.0 kB) Collecting attrs>=17.3.0 (from aiohttp->g4f==0.0.0) Using cached attrs-23.2.0-py3-none-any.whl.metadata (9.5 kB) Collecting frozenlist>=1.1.1 (from aiohttp->g4f==0.0.0) Using cached frozenlist-1.4.1-cp312-cp312-win_amd64.whl.metadata (12 kB) Collecting multidict<7.0,>=4.5 (from aiohttp->g4f==0.0.0) Using cached multidict-6.0.5-cp312-cp312-win_amd64.whl.metadata (4.3 kB) Collecting yarl<2.0,>=1.0 (from aiohttp->g4f==0.0.0) Using cached yarl-1.9.4-cp312-cp312-win_amd64.whl.metadata (32 kB) Collecting charset-normalizer<4,>=2 (from requests->g4f==0.0.0) Using cached charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl.metadata (34 kB) Collecting idna<4,>=2.5 (from requests->g4f==0.0.0) Using cached idna-3.7-py3-none-any.whl.metadata (9.9 kB) Collecting urllib3<3,>=1.21.1 (from requests->g4f==0.0.0) Using cached urllib3-2.2.2-py3-none-any.whl.metadata (6.4 kB) Collecting certifi>=2017.4.17 (from requests->g4f==0.0.0) Using cached certifi-2024.7.4-py3-none-any.whl.metadata (2.2 kB) Using 
cached aiohttp-3.9.5-cp312-cp312-win_amd64.whl (369 kB) Using cached Brotli-1.1.0-cp312-cp312-win_amd64.whl (357 kB) Using cached pycryptodome-3.20.0-cp35-abi3-win_amd64.whl (1.8 MB) Using cached requests-2.32.3-py3-none-any.whl (64 kB) Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB) Using cached attrs-23.2.0-py3-none-any.whl (60 kB) Using cached certifi-2024.7.4-py3-none-any.whl (162 kB) Using cached charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl (100 kB) Using cached frozenlist-1.4.1-cp312-cp312-win_amd64.whl (50 kB) Using cached idna-3.7-py3-none-any.whl (66 kB) Using cached multidict-6.0.5-cp312-cp312-win_amd64.whl (27 kB) Using cached urllib3-2.2.2-py3-none-any.whl (121 kB) Using cached yarl-1.9.4-cp312-cp312-win_amd64.whl (76 kB) Building wheels for collected packages: g4f Building wheel for g4f (pyproject.toml) ... done Created wheel for g4f: filename=g4f-0.0.0-py3-none-any.whl size=606039 sha256=a762ad8d1e2fa5fd666de5ba42bc47ee23ff766163809cc25b089a75a812134f Stored in directory: C:\Users\admin\AppData\Local\Temp\pip-ephem-wheel-cache-20iwgkrz\wheels\19\b2\fa\6fbbc945ff5718b48fd17c42485d727a3c27fb5098ab370aa9 Successfully built g4f Installing collected packages: brotli, urllib3, pycryptodome, multidict, idna, frozenlist, charset-normalizer, certifi, attrs, yarl, requests, aiosignal, aiohttp, g4f Successfully installed aiohttp-3.9.5 aiosignal-1.3.1 attrs-23.2.0 brotli-1.1.0 certifi-2024.7.4 charset-normalizer-3.3.2 frozenlist-1.4.1 g4f-0.0.0 idna-3.7 multidict-6.0.5 pycryptodome-3.20.0 requests-2.32.3 urllib3-2.2.2 yarl-1.9.4

[notice] A new release of pip is available: 24.0 -> 24.1.1 [notice] To update, run: python.exe -m pip install --upgrade pip (venv) PS C:\Users\admin\Desktop\g4f\gpt4free> pip install uvicorn fastapi Collecting uvicorn Using cached uvicorn-0.30.1-py3-none-any.whl.metadata (6.3 kB) Collecting fastapi Using cached fastapi-0.111.0-py3-none-any.whl.metadata (25 kB) Collecting click>=7.0 (from uvicorn) Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB) Collecting h11>=0.8 (from uvicorn) Using cached h11-0.14.0-py3-none-any.whl.metadata (8.2 kB) Collecting starlette<0.38.0,>=0.37.2 (from fastapi) Using cached starlette-0.37.2-py3-none-any.whl.metadata (5.9 kB) Collecting pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 (from fastapi) Using cached pydantic-2.8.2-py3-none-any.whl.metadata (125 kB) Collecting typing-extensions>=4.8.0 (from fastapi) Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB) Collecting fastapi-cli>=0.0.2 (from fastapi) Using cached fastapi_cli-0.0.4-py3-none-any.whl.metadata (7.0 kB) Collecting httpx>=0.23.0 (from fastapi) Using cached httpx-0.27.0-py3-none-any.whl.metadata (7.2 kB) Collecting jinja2>=2.11.2 (from fastapi) Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB) Collecting python-multipart>=0.0.7 (from fastapi) Using cached python_multipart-0.0.9-py3-none-any.whl.metadata (2.5 kB) Collecting ujson!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0,>=4.0.1 (from fastapi) Using cached ujson-5.10.0-cp312-cp312-win_amd64.whl.metadata (9.5 kB) Collecting orjson>=3.2.1 (from fastapi) Using cached orjson-3.10.6-cp312-none-win_amd64.whl.metadata (51 kB) Collecting email_validator>=2.0.0 (from fastapi) Using cached email_validator-2.2.0-py3-none-any.whl.metadata (25 kB) Collecting colorama (from click>=7.0->uvicorn) Using cached colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB) Collecting dnspython>=2.0.0 (from email_validator>=2.0.0->fastapi) Using cached dnspython-2.6.1-py3-none-any.whl.metadata 
(5.8 kB) Requirement already satisfied: idna>=2.0.0 in c:\users\admin\desktop\g4f\gpt4free\venv\lib\site-packages (from email_validator>=2.0.0->fastapi) (3.7) Collecting typer>=0.12.3 (from fastapi-cli>=0.0.2->fastapi) Using cached typer-0.12.3-py3-none-any.whl.metadata (15 kB) Collecting anyio (from httpx>=0.23.0->fastapi) Using cached anyio-4.4.0-py3-none-any.whl.metadata (4.6 kB) Requirement already satisfied: certifi in c:\users\admin\desktop\g4f\gpt4free\venv\lib\site-packages (from httpx>=0.23.0->fastapi) (2024.7.4) Collecting httpcore==1.* (from httpx>=0.23.0->fastapi) Using cached httpcore-1.0.5-py3-none-any.whl.metadata (20 kB) Collecting sniffio (from httpx>=0.23.0->fastapi) Using cached sniffio-1.3.1-py3-none-any.whl.metadata (3.9 kB) Collecting MarkupSafe>=2.0 (from jinja2>=2.11.2->fastapi) Using cached MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl.metadata (3.1 kB) Collecting annotated-types>=0.4.0 (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi) Using cached annotated_types-0.7.0-py3-none-any.whl.metadata (15 kB) Collecting pydantic-core==2.20.1 (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi) Using cached pydantic_core-2.20.1-cp312-none-win_amd64.whl.metadata (6.7 kB) Collecting httptools>=0.5.0 (from uvicorn[standard]>=0.12.0->fastapi) Using cached httptools-0.6.1-cp312-cp312-win_amd64.whl.metadata (3.7 kB) Collecting python-dotenv>=0.13 (from uvicorn[standard]>=0.12.0->fastapi) Using cached python_dotenv-1.0.1-py3-none-any.whl.metadata (23 kB) Collecting pyyaml>=5.1 (from uvicorn[standard]>=0.12.0->fastapi) Using cached PyYAML-6.0.1-cp312-cp312-win_amd64.whl.metadata (2.1 kB) Collecting watchfiles>=0.13 (from uvicorn[standard]>=0.12.0->fastapi) Using cached watchfiles-0.22.0-cp312-none-win_amd64.whl.metadata (5.0 kB) Collecting websockets>=10.4 (from uvicorn[standard]>=0.12.0->fastapi) Using cached websockets-12.0-cp312-cp312-win_amd64.whl.metadata (6.8 kB) Collecting shellingham>=1.3.0 (from 
typer>=0.12.3->fastapi-cli>=0.0.2->fastapi) Using cached shellingham-1.5.4-py2.py3-none-any.whl.metadata (3.5 kB) Collecting rich>=10.11.0 (from typer>=0.12.3->fastapi-cli>=0.0.2->fastapi) Using cached rich-13.7.1-py3-none-any.whl.metadata (18 kB) Collecting markdown-it-py>=2.2.0 (from rich>=10.11.0->typer>=0.12.3->fastapi-cli>=0.0.2->fastapi) Using cached markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB) Collecting pygments<3.0.0,>=2.13.0 (from rich>=10.11.0->typer>=0.12.3->fastapi-cli>=0.0.2->fastapi) Using cached pygments-2.18.0-py3-none-any.whl.metadata (2.5 kB) Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich>=10.11.0->typer>=0.12.3->fastapi-cli>=0.0.2->fastapi) Using cached mdurl-0.1.2-py3-none-any.whl.metadata (1.6 kB) Using cached uvicorn-0.30.1-py3-none-any.whl (62 kB) Using cached fastapi-0.111.0-py3-none-any.whl (91 kB) Using cached click-8.1.7-py3-none-any.whl (97 kB) Using cached email_validator-2.2.0-py3-none-any.whl (33 kB) Using cached fastapi_cli-0.0.4-py3-none-any.whl (9.5 kB) Using cached h11-0.14.0-py3-none-any.whl (58 kB) Using cached httpx-0.27.0-py3-none-any.whl (75 kB) Using cached httpcore-1.0.5-py3-none-any.whl (77 kB) Using cached jinja2-3.1.4-py3-none-any.whl (133 kB) Using cached orjson-3.10.6-cp312-none-win_amd64.whl (136 kB) Using cached pydantic-2.8.2-py3-none-any.whl (423 kB) Using cached pydantic_core-2.20.1-cp312-none-win_amd64.whl (1.9 MB) Using cached python_multipart-0.0.9-py3-none-any.whl (22 kB) Using cached starlette-0.37.2-py3-none-any.whl (71 kB) Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB) Using cached ujson-5.10.0-cp312-cp312-win_amd64.whl (42 kB) Using cached annotated_types-0.7.0-py3-none-any.whl (13 kB) Using cached anyio-4.4.0-py3-none-any.whl (86 kB) Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB) Using cached dnspython-2.6.1-py3-none-any.whl (307 kB) Using cached httptools-0.6.1-cp312-cp312-win_amd64.whl (55 kB) Using cached MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl (17 kB) 
Using cached python_dotenv-1.0.1-py3-none-any.whl (19 kB) Using cached PyYAML-6.0.1-cp312-cp312-win_amd64.whl (138 kB) Using cached sniffio-1.3.1-py3-none-any.whl (10 kB) Using cached typer-0.12.3-py3-none-any.whl (47 kB) Using cached watchfiles-0.22.0-cp312-none-win_amd64.whl (280 kB) Using cached websockets-12.0-cp312-cp312-win_amd64.whl (124 kB) Using cached rich-13.7.1-py3-none-any.whl (240 kB) Using cached shellingham-1.5.4-py2.py3-none-any.whl (9.8 kB) Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB) Using cached pygments-2.18.0-py3-none-any.whl (1.2 MB) Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB) Installing collected packages: websockets, ujson, typing-extensions, sniffio, shellingham, pyyaml, python-multipart, python-dotenv, pygments, orjson, mdurl, MarkupSafe, httptools, h11, dnspython, colorama, annotated-types, pydantic-core, markdown-it-py, jinja2, httpcore, email_validator, click, anyio, watchfiles, uvicorn, starlette, rich, pydantic, httpx, typer, fastapi-cli, fastapi Successfully installed MarkupSafe-2.1.5 annotated-types-0.7.0 anyio-4.4.0 click-8.1.7 colorama-0.4.6 dnspython-2.6.1 email_validator-2.2.0 fastapi-0.111.0 fastapi-cli-0.0.4 h11-0.14.0 httpcore-1.0.5 httptools-0.6.1 httpx-0.27.0 jinja2-3.1.4 markdown-it-py-3.0.0 mdurl-0.1.2 orjson-3.10.6 pydantic-2.8.2 pydantic-core-2.20.1 pygments-2.18.0 python-dotenv-1.0.1 python-multipart-0.0.9 pyyaml-6.0.1 rich-13.7.1 shellingham-1.5.4 sniffio-1.3.1 starlette-0.37.2 typer-0.12.3 typing-extensions-4.12.2 ujson-5.10.0 uvicorn-0.30.1 watchfiles-0.22.0 websockets-12.0

[notice] A new release of pip is available: 24.0 -> 24.1.1 [notice] To update, run: python.exe -m pip install --upgrade pip (venv) PS C:\Users\admin\Desktop\g4f\gpt4free> g4f api --debug Traceback (most recent call last): File "", line 198, in _run_module_as_main File "", line 88, in _run_code File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Scripts\g4f.exe__main.py", line 4, in File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\init.py", line 6, in from .models import Model File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\models.py", line 5, in from .Provider import IterListProvider, ProviderType File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\Provider\init__.py", line 36, in from .HuggingChat import HuggingChat File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\Provider\HuggingChat.py", line 5, in from curl_cffi import requests as cf_reqs ModuleNotFoundError: No module named 'curl_cffi' (venv) PS C:\Users\admin\Desktop\g4f\gpt4free> pip install curl_cffi Collecting curl_cffi Using cached curl_cffi-0.7.0-cp38-abi3-win_amd64.whl.metadata (13 kB) Collecting cffi>=1.12.0 (from curl_cffi) Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl.metadata (1.5 kB) Requirement already satisfied: certifi>=2024.2.2 in c:\users\admin\desktop\g4f\gpt4free\venv\lib\site-packages (from curl_cffi) (2024.7.4) Collecting pycparser (from cffi>=1.12.0->curl_cffi) Using cached pycparser-2.22-py3-none-any.whl.metadata (943 bytes) Using cached curl_cffi-0.7.0-cp38-abi3-win_amd64.whl (4.0 MB) Using cached cffi-1.16.0-cp312-cp312-win_amd64.whl (181 kB) Using cached pycparser-2.22-py3-none-any.whl (117 kB) Installing collected packages: pycparser, cffi, curl_cffi Successfully installed cffi-1.16.0 curl_cffi-0.7.0 pycparser-2.22

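The `ModuleNotFoundError` above comes from an optional dependency (`curl_cffi`) missing at import time, and the same thing happens again later with `flask`. A minimal stdlib sketch (not part of g4f) that checks importability up front with `importlib.util.find_spec`, using module names taken from this log:

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# Module names taken from the log above; curl_cffi only appears after the
# extra `pip install curl_cffi` step, and flask needs `pip install -U g4f[gui]`.
required = ["curl_cffi", "flask", "fastapi", "uvicorn"]
print(missing_modules(required))
```

Running this inside the venv before `g4f api` would have flagged both missing modules in one pass instead of one traceback at a time.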
[notice] A new release of pip is available: 24.0 -> 24.1.1
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS C:\Users\admin\Desktop\g4f\gpt4free> g4f api --debug
Starting server... [g4f v-0.0.0] (debug)
INFO:     Will watch for changes in these directories: ['C:\Users\admin\Desktop\g4f\gpt4free']
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     Started reloader process [7992] using WatchFiles
INFO:     Started server process [13536]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process [13536]
INFO:     Stopping reloader process [7992]
(venv) PS C:\Users\admin\Desktop\g4f\gpt4free> g4f api --debug
Starting server... [g4f v-0.0.0] (debug)
INFO:     Will watch for changes in these directories: ['C:\Users\admin\Desktop\g4f\gpt4free']
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     Started reloader process [10828] using WatchFiles
Read .har file: ./har_and_cookies\chatgpt.com.har
INFO:     Started server process [13680]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process [13680]
INFO:     Stopping reloader process [10828]
(venv) PS C:\Users\admin\Desktop\g4f\gpt4free> g4f gui --debug
usage: g4f [-h] {api,gui} ...
g4f: error: unrecognized arguments: --debug
(venv) PS C:\Users\admin\Desktop\g4f\gpt4free> g4f gui
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Scripts\g4f.exe\__main__.py", line 7, in <module>
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\cli.py", line 30, in main
    run_gui_args(args)
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\gui\run.py", line 14, in run_gui_args
    run_gui(host, port, debug)
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\gui\__init__.py", line 13, in run_gui
    raise MissingRequirementsError(f'Install "gui" requirements | pip install -U g4f[gui]\n{import_error}')
g4f.errors.MissingRequirementsError: Install "gui" requirements | pip install -U g4f[gui]
No module named 'flask'
(venv) PS C:\Users\admin\Desktop\g4f\gpt4free> g4f api --debug
Starting server... [g4f v-0.0.0] (debug)
INFO:     Will watch for changes in these directories: ['C:\Users\admin\Desktop\g4f\gpt4free']
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     Started reloader process [12160] using WatchFiles
Read .har file: ./har_and_cookies\chatgpt.com.har
INFO:     Started server process [7160]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     147.45.76.244:0 - "GET /v1/models HTTP/1.1" 200 OK
New g4f version: 0.3.2.1 (current: 0.0.0) | pip install -U g4f
Using RetryProvider provider and gpt-3.5-turbo model
Using Koala provider
INFO:     147.45.76.244:0 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Koala: RateLimitError: Response 402: Rate limit reached
Using Aichatos provider
Using FreeGpt provider
Using ChatgptNext provider
C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\asyncio\events.py:88: UserWarning: Curlm alread closed! quitting from process_data
  self._context.run(self._callback, *self._args)
ChatgptNext: ClientResponseError: 429, message='Too Many Requests', url=URL('https://chat.fstha.com/api/openai/v1/chat/completions')
Using You provider
Using OpenaiChat provider
OpenaiChat: MissingAuthError: No arkose token found in .har file
Using Cnote provider
Cnote: ClientResponseError: 404, message='Not Found', url=URL('https://p1api.xjai.pro/freeapi/chat-process')
Using Feedough provider
Feedough: ClientResponseError: 403, message='Forbidden', url=URL('https://www.feedough.com/wp-admin/admin-ajax.php')
ERROR:root:RetryProvider failed:
Koala: RateLimitError: Response 402: Rate limit reached
ChatgptNext: ClientResponseError: 429, message='Too Many Requests', url=URL('https://chat.fstha.com/api/openai/v1/chat/completions')
OpenaiChat: MissingAuthError: No arkose token found in .har file
Cnote: ClientResponseError: 404, message='Not Found', url=URL('https://p1api.xjai.pro/freeapi/chat-process')
Feedough: ClientResponseError: 403, message='Forbidden', url=URL('https://www.feedough.com/wp-admin/admin-ajax.php')
Traceback (most recent call last):
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 171, in streaming
    async for chunk in response:
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\client\async_client.py", line 71, in iter_append_model_and_provider
    async for chunk in response:
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\client\async_client.py", line 42, in iter_response
    async for chunk in response:
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\providers\retry_provider.py", line 143, in create_async_generator
    raise_exceptions(exceptions)
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\providers\retry_provider.py", line 324, in raise_exceptions
    raise RetryProviderError("RetryProvider failed:\n" + "\n".join([
g4f.errors.RetryProviderError: RetryProvider failed:
Koala: RateLimitError: Response 402: Rate limit reached
ChatgptNext: ClientResponseError: 429, message='Too Many Requests', url=URL('https://chat.fstha.com/api/openai/v1/chat/completions')
OpenaiChat: MissingAuthError: No arkose token found in .har file
Cnote: ClientResponseError: 404, message='Not Found', url=URL('https://p1api.xjai.pro/freeapi/chat-process')
Feedough: ClientResponseError: 403, message='Forbidden', url=URL('https://www.feedough.com/wp-admin/admin-ajax.php')
Using RetryProvider provider and gpt-3.5-turbo model
Using Cnote provider
Cnote: ClientResponseError: 404, message='Not Found', url=URL('https://p1api.xjai.pro/freeapi/chat-process')
Using OpenaiChat provider
Arkose: False
Proofofwork: gAAAAABwQ8Lk...
OpenaiChat: ResponseStatusError: Response 502:

Bad gateway

The web server reported a bad gateway error.

  • Ray ID: 89f863fd9a9a82a7
  • Your IP address: 147.45.76.244
  • Error reference number: 502
  • Cloudflare Location: Stockholm

Using You provider
Using FreeGpt provider
Using ChatgptNext provider
ChatgptNext: ClientResponseError: 429, message='Too Many Requests', url=URL('https://chat.fstha.com/api/openai/v1/chat/completions')
Using Aichatos provider
Using Koala provider
Koala: RateLimitError: Response 402: Rate limit reached
Using Feedough provider
Feedough: ClientResponseError: 403, message='Forbidden', url=URL('https://www.feedough.com/wp-admin/admin-ajax.php')
ERROR:root:RetryProvider failed:
Cnote: ClientResponseError: 404, message='Not Found', url=URL('https://p1api.xjai.pro/freeapi/chat-process')
OpenaiChat: ResponseStatusError: Response 502:

Bad gateway

The web server reported a bad gateway error.

  • Ray ID: 89f863fd9a9a82a7
  • Your IP address: 147.45.76.244
  • Error reference number: 502
  • Cloudflare Location: Stockholm

ChatgptNext: ClientResponseError: 429, message='Too Many Requests', url=URL('https://chat.fstha.com/api/openai/v1/chat/completions')
Koala: RateLimitError: Response 402: Rate limit reached
Feedough: ClientResponseError: 403, message='Forbidden', url=URL('https://www.feedough.com/wp-admin/admin-ajax.php')
Traceback (most recent call last):
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 167, in chat_completions
    return JSONResponse((await response).to_json())
           ^^^^^^^^^^^^^^
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\client\async_client.py", line 71, in iter_append_model_and_provider
    async for chunk in response:
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\client\async_client.py", line 42, in iter_response
    async for chunk in response:
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\providers\retry_provider.py", line 143, in create_async_generator
    raise_exceptions(exceptions)
  File "C:\Users\admin\Desktop\g4f\gpt4free\venv\Lib\site-packages\g4f\providers\retry_provider.py", line 324, in raise_exceptions
    raise RetryProviderError("RetryProvider failed:\n" + "\n".join([
g4f.errors.RetryProviderError: RetryProvider failed:
Cnote: ClientResponseError: 404, message='Not Found', url=URL('https://p1api.xjai.pro/freeapi/chat-process')
OpenaiChat: ResponseStatusError: Response 502:

Bad gateway

The web server reported a bad gateway error.

  • Ray ID: 89f863fd9a9a82a7
  • Your IP address: 147.45.76.244
  • Error reference number: 502
  • Cloudflare Location: Stockholm

ChatgptNext: ClientResponseError: 429, message='Too Many Requests', url=URL('https://chat.fstha.com/api/openai/v1/chat/completions')
Koala: RateLimitError: Response 402: Rate limit reached
Feedough: ClientResponseError: 403, message='Forbidden', url=URL('https://www.feedough.com/wp-admin/admin-ajax.php')
INFO:     147.45.76.244:0 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error