a16z-infra / ai-town

An MIT-licensed, deployable starter kit for building and customizing your own version of AI town: a virtual town where AI characters live, chat, and socialize.
https://convex.dev/ai-town
MIT License

It seems no one has noticed that on Windows there is a problem with step 8 of your instructions: socat listens on port 11434 and Ollama also listens on port 11434, which causes a process conflict. #253

Open jiajiahard opened 1 month ago

jiajiahard commented 1 month ago

When I skipped step 8, my Convex backend was completely unable to access Ollama and kept telling me:
9/25/2024, 9:26:31 PM [CONVEX A(aiTown/agentOperations:agentGenerateMessage)] Uncaught Error: Request to http://localhost:11434/api/embeddings forbidden

9/25/2024, 9:26:31 PM [CONVEX A(aiTown/agentOperations:agentGenerateMessage)] [LOG] 'Texts to be sent for embedding: ' [ 'Bob is talking to Stella' ]
9/25/2024, 9:26:31 PM [CONVEX A(aiTown/agentOperations:agentGenerateMessage)] [LOG] 'Sending data for embedding: {"model":"mxbai-embed-large","prompt":"Bob is talking to Stella"}'
9/25/2024, 9:26:31 PM [CONVEX A(aiTown/agentOperations:agentGenerateMessage)] Uncaught Error: Request to http://localhost:11434/api/embeddings forbidden
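
For what it's worth, the same request can be reproduced by hand against Ollama's /api/embeddings endpoint (same model and prompt as in the log above) to check whether Ollama is reachable at all outside of Convex; this assumes the mxbai-embed-large model has already been pulled:

# manual test of the endpoint Convex is failing to reach
curl http://localhost:11434/api/embeddings -d '{"model":"mxbai-embed-large","prompt":"Bob is talking to Stella"}'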

jiajiahard commented 1 month ago

I cannot even see any record on the Ollama side of the requests Convex has made.

Extassey commented 1 month ago

Yes, I have been having this problem as well on multiple machines... I basically brute-force it every time. Will post in a minute when I get it working again.

jiajiahard commented 1 month ago

Yes, I have been having this problem as well on multiple machines... I basically brute-force it every time. Will post in a minute when I get it working again.

So, what should I do?

Extassey commented 1 month ago

Honestly, for now I am using the no-clerk branch; it makes my machine less hot anyhow. Maybe one of the devs of the main fork will chime in eventually when they see this; I can't figure out how I got past this last time. I am also having trouble with the precompiled local Convex backend: npm run dev keeps prompting me to set up a remote Convex project, which was not the case before. Something must have updated somewhere and broken it.

Extassey commented 1 month ago

Okay, so I found out that if you are using WSL you have to make sure Ollama is installed in WSL and not on Windows itself. I'm not sure if there is a port problem with having it on Windows, but I wouldn't have it there just to be safe.

Secondly, install Ollama with curl -fsSL https://ollama.com/install.sh | sh; if you use snap, it doesn't seem to work for me.
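
So, inside WSL, the install plus a quick sanity check is roughly this (ollama --version is just there to confirm the binary landed on the PATH):

# run inside WSL, not on Windows
curl -fsSL https://ollama.com/install.sh | sh
ollama --version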

Thirdly, once you install Ollama you are good to go; don't worry about doing an ollama serve. Once it's running you can feel free to do a /bye and it should still be running. Then open cmd and type netstat -ano | findstr :11434; you should get nothing in response. If you do get something in response, do a kill <PID> and it should exit it.
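
On the Windows side, that check is roughly the following in cmd; if kill isn't recognized there (it's a PowerShell alias), taskkill should do the same job:

:: run in Windows cmd, not WSL
netstat -ano | findstr :11434
:: if a PID shows up bound to 11434, stop that process
taskkill /PID <PID> /F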

Now, back in WSL (remember, every one of these steps I've mentioned is something you do in WSL, not cmd, other than the netstat -ano | findstr :11434 step and the kill step), do a curl http://localhost:11434/ and you should see Ollama is running in response.
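
In other words, the WSL-side check is just:

# inside WSL
curl http://localhost:11434/
# expected response: Ollama is running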

I am going to check the other socat config and Convex port assignment steps as well to make sure those are still working.

[EDIT]:

So, with socat: first set your host IP like this: HOST_IP=<YOUR HOST IP INSIDE HERE>. To find out what it is, run hostname -I | awk '{print $1}'. Then run this command to tell socat to listen: socat TCP-LISTEN:11434,fork TCP:$HOST_IP:11434 & (you might get an error saying it is already listening, but ignore that). Then check to make sure it is listening with ps aux | grep socat.
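
Putting those steps together, the whole sequence is roughly the following (using command substitution instead of pasting the IP in by hand):

# inside WSL
HOST_IP=$(hostname -I | awk '{print $1}')          # find and set the host IP
socat TCP-LISTEN:11434,fork TCP:$HOST_IP:11434 &   # tell socat to listen on 11434
ps aux | grep socat                                # confirm socat is listening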

jiajiahard commented 1 month ago

I don't understand your idea. If I run ollama serve in WSL together with socat TCP-LISTEN:11434,fork TCP:$HOST_IP:11434 & and then check with ps aux | grep socat, I get a process conflict, because they both use port 11434. Maybe your ollama serve started on a different port?
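
One way to see which process actually holds the port inside WSL (ollama or socat) is something like this; ss is assumed to be available, and lsof works too:

# inside WSL: show what is bound to port 11434
ss -ltnp | grep 11434
# or, alternatively
sudo lsof -i :11434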