vegu-ai / talemate

Roleplay with AI with a focus on strong narration, consistent world and game state tracking.
GNU Affero General Public License v3.0

Installation issues / clarifications [post here if you run into troubles] #17

Open vegu-ai opened 10 months ago

vegu-ai commented 10 months ago

General catch-all ticket for installation issues in this early stage of development.

Nroobz commented 9 months ago

Hiya mate. I'm having trouble getting the API to be applied to all the agents.

vegu-ai commented 9 months ago

@Nroobz hey - is it just assigning it to some agents and not all? What happens if you click the agent button under the client? (screenshot)

Nroobz commented 9 months ago

@final-wombat when I click the agent button, i.e. 'creator', a modal window opens with a drop-down labelled 'client', but there's no data available for the dropdown.

error message below:

2023-11-19 17:29:38 [info  ] frontend connected
2023-11-19 17:29:38 [debug ] frontend message action_type=request_app_config
2023-11-19 17:29:38 [info  ] request_app_config
2023-11-19 17:30:14 [debug ] frontend message action_type=configure_clients
2023-11-19 17:30:14 [info  ] Configuring clients clients=[{'name': 'TextGenWebUI', 'type': 'openai', 'apiUrl': 'http://localhost:5000', 'model_name': '', 'max_token_length': 4096, 'model': 'gpt-4-1106-preview'}]
2023-11-19 17:30:14 [error ] Error connecting to client client_name=TextGenWebUI e=TypeError("ClientBase.__init__() missing 1 required positional argument: 'api_url'")
server.py           :242  2023-11-19 17:30:14,387 connection handler failed
Traceback (most recent call last):
  File "D:\Git\TALEMATE\talemate-main\talemate_env\lib\site-packages\websockets\legacy\server.py", line 240, in handler
    await self.ws_handler(self)
  File "D:\Git\TALEMATE\talemate-main\talemate_env\lib\site-packages\websockets\legacy\server.py", line 1186, in _ws_handler
    return await cast(
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\api.py", line 115, in websocket_endpoint
    handler.configure_clients(data.get("clients"))
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\websocket_server.py", line 210, in configure_clients
    self.connect_llm_clients()
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\websocket_server.py", line 83, in connect_llm_clients
    self.connect_agents()
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\websocket_server.py", line 99, in connect_agents
    client = list(self.llm_clients.values())[0]["client"]
KeyError: 'client'
server.py           :268  2023-11-19 17:30:14,388 connection closed
server.py           :646  2023-11-19 17:30:14,735 connection open
2023-11-19 17:30:14 [info  ] frontend connected
2023-11-19 17:30:14 [debug ] frontend message action_type=request_app_config
2023-11-19 17:30:14 [info  ] request_app_config
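The `KeyError: 'client'` at the end of this log fits a common pattern: when a client fails to connect, its registry entry exists but never gets a `"client"` key, so indexing the first entry blindly raises. A hypothetical sketch (not Talemate's actual code) of the failure mode and a guarded lookup:

```python
# Registry entry after a failed connect: present, but no "client" key.
llm_clients = {"TextGenWebUI": {"name": "TextGenWebUI"}}

def first_connected_client(registry):
    """Return the first successfully connected client, or None."""
    for entry in registry.values():
        client = entry.get("client")  # .get() avoids the KeyError
        if client is not None:
            return client
    return None

# With no connected clients, we get None instead of a crash.
assert first_connected_client(llm_clients) is None
```

The underlying fix, of course, is the connection failure itself; the guard only turns the crash into a recoverable "no client available" state.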

vegu-ai commented 9 months ago
2023-11-19 17:30:14 [info ] Configuring clients clients=[{'name': 'TextGenWebUI', 'type': 'openai', 'apiUrl': 'http://localhost:5000', 'model_name': '', 'max_token_length': 4096, 'model': 'gpt-4-1106-preview'}]

@Nroobz can you open the client config and change the type from openai to textgenwebui - looks like the types got mixed up and it's failing to save the client because of it.

Edit: or vice versa, i guess, if you're trying to run the openai client. Either way it seems the client config ended up in a bugged state.

Once the client is set up correctly it should actually auto-assign itself to all agents if it's the only client. There is also a button in the client row that will assign it to all agents on click:

(screenshot)

Edit: Not sure how it ended up with type openai when all the other parameters are from the textgenwebui client config - still trying to reproduce.
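The pasted config mixes fields from two client types (`'type': 'openai'` alongside a textgenwebui-style `'apiUrl'`). A validation pass could reject such a config before it is saved; the field names per type below are illustrative assumptions, not Talemate's actual schema:

```python
# Hypothetical per-type required fields (assumed, for illustration only).
REQUIRED_FIELDS = {
    "openai": {"model"},
    "textgenwebui": {"apiUrl", "max_token_length"},
}

def validate_client(cfg: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks sane."""
    ctype = cfg.get("type")
    if ctype not in REQUIRED_FIELDS:
        return [f"unknown client type: {ctype!r}"]
    missing = REQUIRED_FIELDS[ctype] - cfg.keys()
    return [f"missing field: {f}" for f in sorted(missing)]

# The bugged config from the log above: openai type, textgenwebui fields.
bugged = {"name": "TextGenWebUI", "type": "openai", "apiUrl": "http://localhost:5000"}
print(validate_client(bugged))  # flags the missing 'model' field
```

Rejecting the save with a visible error would have surfaced the mix-up before it reached the backend.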

vegu-ai commented 9 months ago

Editing clients in general seems sorta buggy atm, especially if you change the type on an existing client. Created a bug ticket and will work on that.

I can't even remove the edited client now.

I had to shut down the backend and frontend and edit the config.yaml file to fix it. Let me know if you want to try that / need help with it.

vegu-ai commented 9 months ago

Found all sorts of issues looking into this, version 0.13.2 should fix it - let me know if not.

There was also a problem with openai clients specifically that matched what you were seeing, so hopefully the update will work for you.

Nroobz commented 9 months ago

ok i'll get back to you asap with feedback

Antrisa commented 8 months ago

Problem starting backend:

  File "\Desktop\Ai chat thing\talemate-0.16.0\talemate-0.16.0\src\talemate\server\run.py", line 6, in <module>
    import structlog
ModuleNotFoundError: No module named 'structlog'

vegu-ai-tools commented 8 months ago

@Antrisa Odd, try to run install.bat again and see if that reports any errors. Since it can't find structlog i am expecting that the install script failed somewhere.

Antrisa commented 8 months ago

> @Antrisa Odd, try to run install.bat again and see if that reports any errors. Since it can't find structlog i am expecting that the install script failed somewhere.

Note: This error originates from the build backend, and is likely not a problem with poetry but with safetensors (0.4.1) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "safetensors (==0.4.1)"'.

• Installing scipy (1.11.4): Installing...
• Installing starlette (0.27.0)
• Installing threadpoolctl (3.2.0)
• Installing tokenizers (0.15.0): Failed

ChefBuildError

The installer also says it's maybe because pip is 23.2, not 23.3, but the zip I have to update pip doesn't work?

What exactly are all the steps, in detail, that a user should do before running install.bat?

vegu-ai-tools commented 8 months ago

@Antrisa ideally you'd just have to install python and nodejs, and install.bat does the rest. That's how i am testing on my end anyhow, but it could be i am missing some setup step somewhere; i might try spinning up a vm later to test a completely-from-scratch setup.

You can activate the talemate venv and manually upgrade pip and setuptools and see if that helps.

open a command window then run

talemate_env\Scripts\activate
python -m pip install pip setuptools -U
python -m poetry install

if it still fails it'd also be helpful for you to run the following

talemate_env\Scripts\activate
python -V
python -m pip freeze

vegu-ai-tools commented 8 months ago

@Antrisa able to reproduce, looks like it is indeed missing some setup steps, will update once i know more

vegu-ai-tools commented 8 months ago

Ok so i found two issues, i am not sure the second one applies to your case, but

vegu-ai-tools commented 8 months ago

0.16.1 released with some fixes to the install script for windows installations.

GabrielxScorpio commented 6 months ago

Hello, I am pretty new to this so this may be a dumb question. I installed a fresh Python 3.10.11, Node.js 21.6.1, and Talemate v0.18.0. When I start the bat file it gives me this error:

Error [ERR_REQUIRE_ESM]: require() of ES Module C:\Program Files\nodejs\node_modules\npm\node_modules\strip-ansi\index.js from C:\Program Files\nodejs\node_modules\npm\node_modules\wide-align\node_modules\string-width\index.js not supported.
Instead change the require of C:\Program Files\nodejs\node_modules\npm\node_modules\strip-ansi\index.js in C:\Program Files\nodejs\node_modules\npm\node_modules\wide-align\node_modules\string-width\index.js to a dynamic import() which is available in all CommonJS modules.
    at Object.<anonymous> (C:\Program Files\nodejs\node_modules\npm\node_modules\wide-align\node_modules\string-width\index.js:2:17) {
  code: 'ERR_REQUIRE_ESM'
}

I tried restarting and reinstalling everything but I'm not sure how to fix it.

vegu-ai-tools commented 6 months ago

Hi @GabrielxScorpio

Personally i had nodejs v19 installed, upgraded to v21 - since i had not tested with it yet - and it seems to work correctly (although it does give me a warning about some package not supporting it).

Can you do the following please:

GabrielxScorpio commented 6 months ago

Thank you @vegu-ai-tools

I downgraded, and it worked! I have gotten everything up to the point of loading the Infinity Quest and LM Studio. My issue now is getting a response from the AI. I tried multiple models on LM Studio, but after everything loads and I enter any kind of input, it comes back with "Unhandled Error: unsupported operand type(s) for -: 'NoneType' and 'int'"

(screenshot: Problem LM Studio Talemate 1)

vegu-ai-tools commented 6 months ago

@GabrielxScorpio Thanks for confirming that the downgrade fixed the issue, i've updated the readme.

I've seen that error before, but i haven't managed to track it down yet.

Although right now with LMStudio 0.2.12 loaded - using the same model - i can't reproduce it.

In the LMStudio view there should be a server log ticking by, what does it show when you try to generate dialogue?

I'd expect it to look something like this:

(screenshot)

If it shows it generating, how fast is it?

Edit: also mind copy pasting the error message that appears in the backend process window for talemate, that'd help a lot tracking this down, whatever it is :)

vegu-ai-tools commented 6 months ago

@GabrielxScorpio oh i think i see the issue

(screenshot)

It seems like the context size for the client isn't set to anything - can you try setting it either via the slider or via the client settings dialogue?

In the end it should look like this, with a number next to ctx:

(screenshot)

Not sure how it ended up being unset; i created a bug ticket for that.
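The "unsupported operand type(s) for -: 'NoneType' and 'int'" error is consistent with an unset context size: somewhere a token budget is computed by subtracting from `max_token_length`, and `None - <int>` raises. A minimal illustration of the assumed mechanism (not Talemate's actual code), with a defensive default:

```python
# What the backend effectively did with an unset context size:
try:
    None - 512
except TypeError as e:
    print(e)  # unsupported operand type(s) for -: 'NoneType' and 'int'

def prompt_budget(max_token_length, reserved=512):
    """Tokens left for the prompt; falls back to a default instead of crashing."""
    if max_token_length is None:
        max_token_length = 4096  # assumed sane default
    return max_token_length - reserved

assert prompt_budget(None) == 3584
assert prompt_budget(8192) == 7680
```

Setting the context size in the UI (or via config.yaml) removes the `None` at the source; the guard just makes the failure mode less opaque.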

GabrielxScorpio commented 6 months ago

@vegu-ai-tools On Talemate, I tried moving the slider and changing the value from the settings, but it just forced it back to nothing, no number.

This is my Lm Studio page.

(screenshot: Full LM Error 1)

This is a closer look at the log. Every couple of seconds, it repeats that.

(screenshot: LM Studio log error 1)

This is what is showing up on the Talemate backend window every time the error appears.

(screenshot: Talemate backend error message)

vegu-ai-tools commented 6 months ago

@GabrielxScorpio thanks for the followup, the issue is definitely the unset context length - i can reproduce and am working on a fix for a 0.18.1 release, will update here once it's ready.

As a workaround, this seems to fix it until then (sort of, it will still revert back to whatever you set here if you try changing it):

clients:
  LMStudio:
    api_url: http://localhost:1234
    name: LMStudio
    type: lmstudio
    max_token_length: 4096

vegu-ai-tools commented 6 months ago

@GabrielxScorpio fix released in https://github.com/vegu-ai/talemate/releases/tag/0.18.1

Hope this fixes the issue for you as well. Apologies for the rough start; personally i don't run against LM Studio, so testing against it has been somewhat neglected.

GabrielxScorpio commented 6 months ago

@vegu-ai-tools The update fixed the error! Thank you.

I found another bug, maybe? I had switched to using GPT-4 Turbo before your fix was released and it was not letting me continue with dialogue. But when I progress the story it answers back normally.

(screenshot: GPT4turbo error)

vegu-ai-tools commented 6 months ago

Woah, i'll have to look at that - that's gpt-4 censoring itself. Given that there is nothing in the chat history that is even close to risky, it must be tripping over the system message talemate sends. I test gpt-4 turbo a lot, so that's new if so. Thanks for pointing it out.

thephilluk commented 6 months ago

Hi there, unsure if this is an installation error or (more likely) a user error. I am trying to use Talemate with Chub AI, but when entering the data the model isn't fetched and the following error is printed in the server console (API key changed):

The same settings work in SillyTavern, so I know they are good and there's a model accessible.

2024-02-06 23:45:09 [info     ] Configuring clients            clients=[{'name': 'OpenAI Compatible API', 'type': 'openai_compat', 'api_url': 'https://mercury.chub.ai/mythomax/v1', 'model_name': 'mythomax', 'max_token_length': 4096, 'data': {}, 'model': 'mythomax', 'api_key': 'CHK-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'}]
server.py           :242  2024-02-06 23:45:09,609 connection handler failed
Traceback (most recent call last):
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 240, in handler
    await self.ws_handler(self)
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 1186, in _ws_handler
    return await cast(
           ^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\server\api.py", line 119, in websocket_endpoint
    handler.configure_clients(data.get("clients"))
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 233, in configure_clients
    self.connect_llm_clients()
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 97, in connect_llm_clients
    client = self.llm_clients[client_name]["client"] = instance.get_client(
                                                       ^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\instance.py", line 55, in get_client
    client = cls(name=name, *create_args, **create_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 33, in __init__
    super().__init__(**kwargs)
  File "C:\AI\talemate-0.19.0\src\talemate\client\base.py", line 95, in __init__
    self.set_client(max_token_length=self.max_token_length)
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 41, in set_client
    self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key=self.api_key)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\openai\_client.py", line 296, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
server.py           :268  2024-02-06 23:45:09,612 connection closed
server.py           :646  2024-02-06 23:45:09,960 connection open
2024-02-06 23:45:09 [info     ] frontend connected

vegu-ai-tools commented 6 months ago

@thephilluk the client auto-appends the /v1 path to the url - try removing that (so https://mercury.chub.ai/mythomax)

That's the only thing that jumps out at me.

That said, the OpenAI compat client is fairly fresh and has received very limited testing; personally i've only tested it against the llamacpp openai wrapper, so it's probably still finicky. Let me know if fixing the url doesn't do anything.

Did some tests and found additional issues: tracked #76

thephilluk commented 6 months ago

@vegu-ai-tools tried that, got the following (changed) output:

2024-02-07 18:36:57 [info     ] Configuring clients            clients=[{'name': 'OpenAI Compatible API', 'type': 'openai_compat', 'api_url': 'https://mercury.chub.ai/mythomax', 'model_name': 'mythomax', 'max_token_length': 4096, 'data': {'template_file': 'Mythomax.jinja2', 'has_prompt_template': True, 'prompt_template_example': '### Instruction:\nsysmsg\n\n### Input:\nprompt\n\n### Response:\n{LLM coercion}'}, 'model': 'mythomax', 'api_key': 'CHK-XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'}]
server.py           :242  2024-02-07 18:36:57,095 connection handler failed
Traceback (most recent call last):
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 240, in handler
    await self.ws_handler(self)
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 1186, in _ws_handler
    return await cast(
           ^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\server\api.py", line 119, in websocket_endpoint
    handler.configure_clients(data.get("clients"))
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 233, in configure_clients
    self.connect_llm_clients()
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 97, in connect_llm_clients
    client = self.llm_clients[client_name]["client"] = instance.get_client(
                                                       ^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\instance.py", line 55, in get_client
    client = cls(name=name, *create_args, **create_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 33, in __init__
    super().__init__(**kwargs)
  File "C:\AI\talemate-0.19.0\src\talemate\client\base.py", line 95, in __init__
    self.set_client(max_token_length=self.max_token_length)
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 41, in set_client
    self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key=self.api_key)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\openai\_client.py", line 296, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
server.py           :268  2024-02-07 18:36:57,096 connection closed
server.py           :646  2024-02-07 18:36:57,424 connection open
2024-02-07 18:36:57 [info     ] frontend connected
base_events.py      :1771 2024-02-07 18:36:57,443 Task exception was never retrieved
future: <Task finished name='Task-33' coro=<websocket_endpoint.<locals>.send_messages() done, defined at C:\AI\talemate-0.19.0\src\talemate\server\api.py:29> exception=ConnectionClosedError(None, Close(code=1011, reason=''), None)>
Traceback (most recent call last):
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1302, in close_connection
    await self.transfer_data_task
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 339, in __wakeup
    future.result()
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 198, in result
    raise exc
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 269, in __step
    result = coro.throw(exc)
             ^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 959, in transfer_data
    message = await self.read_message()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1029, in read_message
    frame = await self.read_data_frame(max_size=self.max_size)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1104, in read_data_frame
    frame = await self.read_frame(max_size)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1161, in read_frame
    frame = await Frame.read(
            ^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\framing.py", line 68, in read
    data = await reader(2)
           ^^^^^^^^^^^^^^^
  File "C:\Program Files\Python311\Lib\asyncio\streams.py", line 733, in readexactly
    await self._wait_for_data('readexactly')
  File "C:\Program Files\Python311\Lib\asyncio\streams.py", line 526, in _wait_for_data
    await self._waiter
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 339, in __wakeup
    future.result()
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 198, in result
    raise exc
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 267, in __step
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\server\api.py", line 37, in send_messages
    await websocket.send(json.dumps(message))
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 635, in send
    await self.ensure_open()
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 935, in ensure_open
    raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1011 (unexpected error); no close frame received

vegu-ai-tools commented 6 months ago

@thephilluk i went ahead and checked against their api and eventually got it working - however i think there is definitely something broken / fragile with the client, which i will look into as part of #76. After saving and retrying a couple of times it ended up working (see my client config below). That said, i never got to see the error you are seeing, so unsure what that's all about still.

Bad news - from the little testing i did with it just now, i am not sure their mythomax instance is up to the more demanding parts of talemate (specifically world state stuff, which requires a lot of JSON accuracy). Seems like the experience will be suboptimal, alas.

(screenshot)

thephilluk commented 6 months ago

@vegu-ai-tools Got it, thank you! What service do you recommend for people with weaker PCs?

vegu-ai-tools commented 6 months ago

@thephilluk good question :)

Personally i run local / rented gpu / official openai

I have not had the time to test with any other remote apis, so i can't make any direct recommendations, but anything hosting a 7B mistral or upwards should be capable of handling talemate - keeping in mind that until #76 is fixed, the setup issues we just ran into with chub.ai may exist on those apis as well.

Here is a list of models i currently test with (ranging from 7B to 50B), all of which are good:

Kunoichi-7B
sparsetral-16x7B
Fimbulvetr-10.7B
dolphin-2.7-mixtral-8x7b
Mixtral-8x7B-instruct

Talemate does come with direct runpod support if you want to try the gpu rental route, but there is some setup involved, instructions here if you're interested: https://github.com/vegu-ai/talemate/blob/main/docs/runpod.md

maxcurrent420 commented 4 months ago

Getting an error on the install script in Linux. It fails to install triton.

source install.sh
Command 'python' not found, did you mean:
  command 'python3' from deb python3
  command 'python' from deb python-is-python3
bash: talemate_env/bin/activate: No such file or directory

[installs other dependencies]

Cannot install triton.

I tried removing the lock file, deleting the poetry cache, and re-installing poetry, all to no avail so far. I tried just running the backend anyway and got the structlog error someone posted above.

Also, thanks for your hard work!

vegu-ai-tools commented 4 months ago

@maxcurrent420 looks like there are multiple things going on here

Command 'python' not found, did you mean:
command 'python3' from deb python3
command 'python' from deb python-is-python3
bash: talemate_env/bin/activate: No such file or directory

first issue appears to be it's never actually creating the virtual env.

can you try editing the file, changing python to python3, and then running it again?

i think in your case poetry just rolls a venv on its own anyways, so i'm unsure this would do anything for your error, but it's still worth trying to see what happens once the first error is resolved, since it actually does a fresh poetry install into that venv.

I am not able to reproduce - this does scream poetry cache issue to me, but sounds like you already went down that path, so not sure.

one thing you could try is switching to the prep-0.23.0 branch and try running it through docker.

Edit: did a bunch of edits for clarity

maxcurrent420 commented 4 months ago

> @maxcurrent420 looks like there are multiple things going on here […]

I did change it to python3, but what I ended up doing was installing structlog manually, followed by each other dependency after trying and getting a dependency error. Wasn't too bad (assuming it works ok now - backend is running).

Pixelycia commented 3 months ago

Hello. Having trouble getting the frontend to run.

Followed the guide: cloned, created a config file, ran docker compose up; the frontend does not run, failing with the error "sh: 1: vue-cli-service: not found".

Running on a Mac M1, but technically that should not be an issue.

vegu-ai-tools commented 3 months ago

Hi @Pixelycia - thanks for the report. Some users have reported issues with the current docker setup; i believe this is a fault on our side and i will look at it shortly. Issue is tracked at https://github.com/vegu-ai/talemate/issues/114.

vegu-ai-tools commented 3 months ago

@Pixelycia please check out the latest release (0.25.4) and see if that fixes the issue for you - you will need to run docker compose build to rebuild.