atisharma / chasm_engine

CHAracter State Management - a generative text adventure (engine)
https://chasm.run
GNU Affero General Public License v3.0

Fix openai module version to 0.28 #3

Closed kadogo closed 1 year ago

kadogo commented 1 year ago

Hello, and thanks for your code. I haven't got it fully working yet, but I'm opening this PR with the change I needed to make it run.

Based on https://github.com/openai/openai-python/discussions/742#discussioncomment-7544918, it looks like the latest version of the openai module requires some code migration, but it's still possible to pin the previous version for now.
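For reference, the change in call style is roughly the following; pinning to 0.28 avoids having to migrate the engine code for now (this snippet is just an illustration, not code from chasm_engine):

# openai 0.28: module-level configuration and classes
import openai
openai.api_key = "sk-..."
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)

# openai >= 1.0: a client object and renamed methods, so existing callers need migrating
from openai import OpenAI
client = OpenAI(api_key="sk-...")
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)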

Feel free to reject or amend the PR if needed.

Cheers

atisharma commented 1 year ago

Thank you.

atisharma commented 1 year ago

"I haven't got it fully working yet, but I'm opening this PR with the change I needed to make it run."

If you like, let me know what the trouble is and I can try to facilitate.

kadogo commented 1 year ago

Hello @atisharma

Thank you. For now I haven't really worked out what the issue is. I changed the range to try to get a smaller generation, because I only have a CPU and it's a little slow ^^

https://github.com/atisharma/chasm_engine/blob/main/chasm_engine/engine.hy#L153 (I had read that on Reddit)

When it looks finished, I launch the client; it does the "spawning" thing and ends with a request timeout. I haven't debugged enough to know whether the trouble is on the server or the client side, and sadly I don't have an OpenAI key to test against directly. I was planning to dig further when I have a bit more time, but if you have ideas, don't hesitate to ping me.

Have a good day

atisharma commented 1 year ago

You could try increasing the value at https://github.com/atisharma/chasm/blob/9ae7a33349d0b87affbecf8098ef0779f425a870/chasm/client.hy#L12 if your generation is slow. If spawning fails it doesn't really matter; the room generation will just be redone when it's needed. If you carry on anyway, what happens?

For world generation, OpenAI is quite good, and won't cost more than a few dollars, so it's not bad for testing.

kadogo commented 1 year ago

Hello @atisharma

Increasing the timeout makes it work, but now I get this error just after entering my first text following the description. I can see that my koboldcpp instance is busy, but the AuthenticationError makes me think the issue is elsewhere.

Engine error: RetryError(<Future at 0x7f6f05f01120 state=finished raised AuthenticationError>)

We can continue here, but maybe you'd prefer that I open a proper issue so this is easier to follow?

Have a good day.

Edit: I tried creating another character, and it worked once: I could move to another room, but when I typed something else it failed the same way, so I think it's probably not only related to the character. I will try to enable debug logging later to see if I can find something more useful.

Edit 2: I dug further before sleeping and noticed this in the logs:

2023-11-14 04:24:04,791 : ERROR : log/error : Engine error
Traceback (most recent call last):
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
  File "/home/user/chasm_engine/chasm_engine/chat.hy", line 134, in respond
    response (await
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/openai/api_requestor.py", line 382, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/openai/api_requestor.py", line 728, in _interpret_async_response
    self._interpret_response_line(
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(
openai.error.AuthenticationError: Incorrect API key provided: n/a. You can find your API key at https://platform.openai.com/account/api-keys.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/chasm_engine/chasm_engine/engine.hy", line 128, in parse
    line (assistant (await (narrate (append user-msg messages) player))))
  File "/home/user/chasm_engine/chasm_engine/engine.hy", line 1, in narrate
    "
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/user/chasm_engine/env/lib/python3.10/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7fd7e2ed05b0 state=finished raised AuthenticationError>]

It makes me think there is perhaps an API key check inside the openai module? In case it was an issue with koboldcpp, I also tried the following curl request, and it works without any issue.

curl http://localhost:5001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Who won the world series in 2020?"
      },
      {
        "role": "assistant",
        "content": "The Los Angeles Dodgers won the World Series in 2020."
      },
      {
        "role": "user",
        "content": "Where was it played?"
      }
    ]
  }'
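For comparison, the same request through the pinned openai 0.28 module, pointed at the local koboldcpp endpoint, would look roughly like this (the base URL and dummy key are my assumptions for illustration, not what chasm_engine actually sets):

import openai

# Send 0.28-style requests to the local koboldcpp server instead of api.openai.com.
openai.api_base = "http://localhost:5001/v1"   # assumed local endpoint
openai.api_key = "sk-not-needed-locally"       # any non-empty placeholder string

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
)
print(response["choices"][0]["message"]["content"])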
atisharma commented 1 year ago

Yes, please open a separate issue. If you could also copy in the relevant sections of the server config you're using, that would help. I'll look at upgrading the openai module too (the newer version doesn't need tenacity).
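For what it's worth, the 1.x client handles retries itself, so something along these lines could replace the tenacity-wrapped calls. This is only a sketch with assumed values (base URL, key, retry and timeout settings) and a hypothetical helper name, not the actual chasm_engine code:

from openai import AsyncOpenAI

# openai >= 1.0 retries failed requests on its own, so tenacity is no longer needed.
client = AsyncOpenAI(
    base_url="http://localhost:5001/v1",   # assumed local provider
    api_key="sk-not-needed-locally",       # placeholder key
    max_retries=3,
    timeout=120,
)

async def ask_model(messages):
    # Hypothetical helper, not the engine's real function.
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content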

atisharma commented 1 year ago

Just to add -- are you trying to use the KoboldAI API? It's not compatible with OpenAI's API. Compatible choices are listed in the README.

kadogo commented 1 year ago

Hello @atisharma

koboldcpp has some OpenAI API compatibility (https://github.com/LostRuins/koboldcpp/pull/466). It works for the first generation, so I'm not sure that's the problem.

I gave text-generation-webui a try but got a segmentation fault (core dumped), even with the latest version, which normally has OpenAI compatibility enabled by default.

Edit:

Hello again @atisharma

This will be my last message on the PR, but I thought it was better to answer here. I upgraded to your new version and spotted my mistake... I hadn't completely commented out the openai provider, so the engine would sometimes cycle onto it and fail... Now it's working properly, even with koboldcpp. I'll open some issues for possible ideas so we can discuss them properly. Thanks again, have a good day.