xtekky / gpt4free

The official gpt4free repository | various collection of powerful language models
https://g4f.ai
GNU General Public License v3.0
62.23k stars 13.38k forks

OpenaiChat 3.5 did not work for me #1757

Closed rocryptogroup closed 7 months ago

rocryptogroup commented 8 months ago

Having problems with OpenaiChat gpt-3.5-turbo api

I visit chat.openai.com with Chrome -> Inspect Element -> Network -> save the HAR file to the project directory.


Python script called test1.py:

```python
from g4f.client import Client
from g4f.Provider import OpenaiChat

import g4f.debug
g4f.debug.logging = True

client = Client(
    provider=OpenaiChat,
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}]
)

print(response.choices[0].message.content)
```


Environment

Windows 10
Anaconda Python 3.9.13 (this is my application requirement)
Latest version, 0.2.7.0
Same error with the legacy API

hlohaus commented 8 months ago

Try creating a new .har file. Yours can't be read.

mitka1337 commented 8 months ago

Did you fix it? Same problem here.

mitka1337 commented 8 months ago

I don't know much about it, so I'll wait :)

albusmaxgrangerthu commented 8 months ago

Same RuntimeError: No .har file found. I use the Docker GUI:

```shell
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v /g4f/chat.openai.com.har:/app/hardir hlohaus789/g4f:latest
```

hlohaus commented 8 months ago

The Docker command should contain only the directory name, without the .har file name. I will add an example and a default mount directory for Docker.

I searched for our issue with loading .har files. Apparently json.loads should be used instead of json.load; then there should be no charmap issue.

hlohaus commented 8 months ago

Test this. Replace:

```python
harFile = json.load(file)
```

with:

```python
harFile = json.loads(file.read())
```
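The change above can be sketched end-to-end. This is an illustrative helper, not the library's actual code; the function name `read_har` and the explicit UTF-8 encoding are assumptions added for clarity:

```python
import json

def read_har(path: str) -> dict:
    # Read the raw text first, then parse it with json.loads.
    # Opening with an explicit UTF-8 encoding sidesteps the Windows
    # "charmap" codec that would otherwise apply when the file handle
    # is opened with the platform default encoding.
    with open(path, encoding="utf-8") as file:
        return json.loads(file.read())
```

Exported HAR files are JSON and frequently contain non-ASCII bytes (headers, cookies, page titles), which is why the default Windows codec can choke on them.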

iG8R commented 8 months ago

As I understand it, a HAR file is needed for Pro users, but the HAR-file changes in version 0.2.7.0 also affect Free users; as a Free user, I get the following error:

Using OpenaiChat provider and gpt-3.5-turbo model
ERROR:root:No .har file found
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 91, in chat_completions
    response = self.client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 163, in create
    return response if stream else next(response)
                                   ^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 84, in iter_append_model_and_provider
    for chunk in response:
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 49, in iter_response
    for idx, chunk in enumerate(response):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\providers\base_provider.py", line 206, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\nest_asyncio.py", line 99, in run_until_complete
    return f.result()
           ^^^^^^^^^^
  File "C:\Python312\Lib\asyncio\futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Python312\Lib\asyncio\tasks.py", line 304, in __step_run_and_handle_result
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\providers\base_provider.py", line 202, in await_callback
    return await callback()
           ^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 369, in create_async_generator
    arkose_token, api_key, cookies = await getArkoseAndAccessToken(proxy)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\openai\har_file.py", line 122, in getArkoseAndAccessToken
    chatArk, accessToken = readHAR()
                           ^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\openai\har_file.py", line 38, in readHAR
    raise RuntimeError("No .har file found")
RuntimeError: No .har file found
INFO:     127.0.0.1:63208 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 500 Internal Server Error
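The "No .har file found" error above is raised when the directory scan turns up no .har files at all. Conceptually (an illustrative sketch, not g4f's actual har_file.py code), the check amounts to:

```python
import os

def find_har_files(dirpath: str) -> list[str]:
    # Collect every .har file in the configured directory;
    # an empty result reproduces the RuntimeError seen above.
    found = [
        os.path.join(dirpath, name)
        for name in os.listdir(dirpath)
        if name.endswith(".har")
    ]
    if not found:
        raise RuntimeError("No .har file found")
    return found
```

So the first thing to verify is that the file actually ends in `.har` and sits in the directory the library scans.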

and when I gave g4f the HAR file, I still get:

Using OpenaiChat provider and gpt-3.5-turbo model
ERROR:root:pop from empty list
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 91, in chat_completions
    response = self.client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 163, in create
    return response if stream else next(response)
                                   ^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 84, in iter_append_model_and_provider
    for chunk in response:
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 49, in iter_response
    for idx, chunk in enumerate(response):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\providers\base_provider.py", line 206, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\nest_asyncio.py", line 99, in run_until_complete
    return f.result()
           ^^^^^^^^^^
  File "C:\Python312\Lib\asyncio\futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Python312\Lib\asyncio\tasks.py", line 304, in __step_run_and_handle_result
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\providers\base_provider.py", line 202, in await_callback
    return await callback()
           ^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 369, in create_async_generator
    arkose_token, api_key, cookies = await getArkoseAndAccessToken(proxy)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\openai\har_file.py", line 122, in getArkoseAndAccessToken
    chatArk, accessToken = readHAR()
                           ^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\openai\har_file.py", line 55, in readHAR
    return chatArks.pop(), accessToken
           ^^^^^^^^^^^^^^
IndexError: pop from empty list
INFO:     127.0.0.1:64447 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 500 Internal Server Error
hlohaus commented 8 months ago

I added Free user support today.

You can pass the access token with a .har file or as api_key.

The arkose token will only be read if it exists in the .har file.

iG8R commented 8 months ago

You can pass the access token with a .har file or as api_key.

How can I do this? Should I create a .har file in the usual way, or create an empty file and insert the access token into it?

rocryptogroup commented 8 months ago

Works perfectly for me with the latest update, thank you.

hlohaus commented 8 months ago

You can pass the access token with a .har file or as api_key.

How can I do this? Should I create a .har file in the usual way, or create an empty file and insert the access token into it?

You have to create the .har file in the usual way. It reads the session request from OpenAI.

iG8R commented 8 months ago

@hlohaus Still getting the following error. What am I doing wrong? Here is how I got the HAR file:

  1. Opened https://chat.openai.com/ and invoked DevTools (F12)
  2. Went to the Network tab and clicked on "Reload"
  3. Wrote an arbitrary message in the chat
  4. Clicked anywhere in the "File" column and selected "Save all as HAR" - got a file about 5 MB in size
  5. Placed this HAR file into the hardir directory.
Using OpenaiChat provider and gpt-3.5-turbo model
ERROR:root:Response 401: {"detail":"Could not parse your authentication token. Please try signing in again."}
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 91, in chat_completions
    response = self.client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 163, in create
    return response if stream else next(response)
                                   ^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 84, in iter_append_model_and_provider
    for chunk in response:
  File "c:\gpt4free\venv\Lib\site-packages\g4f\client.py", line 49, in iter_response
    for idx, chunk in enumerate(response):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\providers\base_provider.py", line 206, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\nest_asyncio.py", line 99, in run_until_complete
    return f.result()
           ^^^^^^^^^^
  File "C:\Python312\Lib\asyncio\futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Python312\Lib\asyncio\tasks.py", line 304, in __step_run_and_handle_result
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\providers\base_provider.py", line 202, in await_callback
    return await callback()
           ^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 371, in create_async_generator
    await raise_for_status(response)
  File "c:\gpt4free\venv\Lib\site-packages\g4f\requests\raise_for_status.py", line 23, in raise_for_status_async
    raise ResponseStatusError(f"Response {response.status}: {message}")
g4f.errors.ResponseStatusError: Response 401: {"detail":"Could not parse your authentication token. Please try signing in again."}
INFO:     127.0.0.1:53184 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 500 Internal Server Error
txmazing commented 8 months ago

@hlohaus Still getting the following error. What am I doing wrong?

Log out and log in again. Before logging in, open the Network tab and enable "Preserve log" and "Disable cache".
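After capturing with "Preserve log" enabled, you can sanity-check that the saved HAR actually contains the session request the provider looks for. This is an illustrative sketch, not g4f's parser; it only assumes the standard HAR layout (`log.entries[].request.url`), and the "auth/session" substring is an assumption based on the endpoint ChatGPT uses during login:

```python
import json

def har_has_session_request(path: str) -> bool:
    # A HAR file is JSON with captured requests under log.entries.
    # Look for an entry whose URL mentions the session endpoint;
    # if none is present, the capture likely missed the login flow
    # (hence the "Preserve log" advice).
    with open(path, encoding="utf-8") as f:
        har = json.loads(f.read())
    return any(
        "auth/session" in entry.get("request", {}).get("url", "")
        for entry in har.get("log", {}).get("entries", [])
    )
```

If this returns False, re-capture the HAR starting from the logged-out state so the login requests end up in the log.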

iG8R commented 8 months ago

@txmazing Thank you so much! It worked.

github-actions[bot] commented 7 months ago

Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.

github-actions[bot] commented 7 months ago

Closing due to inactivity.