If you would like to use a different AI provider, you first have to recognize that the two APIs have different request and response objects. The good news is that you can reuse a lot of my code to implement this, but you will need to change the JSON objects in GPT/Json and update the code in GPTAPI, GPTActions.java, and GenerateCommands.java to work with the different output from the other AI provider. There are some big challenges you may face, though:
GPTActions.java: from my experience, smaller/free AI providers don't have an alternative to this. Taking a quick look at the docs, it seems they have bindings similar to OpenAI's, but the function calling feature doesn't seem to be available. You can try prompting it to return JSON and see if you can discern any consistent patterns. It seems fine-tuning is also not available, so you can't train it to output consistently. When you do find a somewhat consistent pattern, try creating a JSON object to represent that output and replace my GptResponse.java with it.
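If it helps, here is a rough sketch of what that replacement class could look like, assuming you manage to prompt the model into a fixed JSON shape and that Gson is available; the class and field names here are made up for illustration and are not part of the plugin:

import com.google.gson.Gson;

// Hypothetical stand-in for GptResponse.java, matching a prompted JSON shape like
// {"message": "...", "command": "..."} — adjust the fields to whatever pattern you
// actually get the model to produce consistently.
public class LocalModelResponse {
    public String message;  // text to show the player in chat
    public String command;  // a Minecraft command for the plugin to run, or null

    public static LocalModelResponse fromJson(String json) {
        return new Gson().fromJson(json, LocalModelResponse.class);
    }
}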
Also, are you able to see the API responses in the logs?
Yes, stuff like this shows up in the server console:
[12:48:50 INFO]: recieved response from OpenAI: { "id": "chatcmpl-kv21unqua5gvbiz1fdir7", "object": "chat.completion", "created": 1725295726, "model": "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "IceYetiWins, you have been on my island for some time now. I see that you've decided to pick up a single grain of sand. A meager start indeed.\n\nYour next task will be to collect five pieces of obsidian from the nearby volcanic rocks scattered along the coast. Bring them back to me, and perhaps your reward will be worth your effort." }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 245, "completion_tokens": 75, "total_tokens": 320 }, "system_fingerprint": "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf" }
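Since that response follows OpenAI's chat completion shape, it could be deserialized with something along these lines (just a sketch assuming Gson; the class names are illustrative, not the plugin's existing ones):

import com.google.gson.Gson;
import java.util.List;

// Mirrors the fields visible in the logged response above; anything not declared
// here is simply ignored by Gson.
class ChatCompletionResponse {
    List<Choice> choices;

    static class Choice {
        Message message;
        String finish_reason;
    }

    static class Message {
        String role;
        String content; // the assistant text, e.g. the quest description in the log
    }
}

// Usage: String text = new Gson().fromJson(json, ChatCompletionResponse.class)
//                                .choices.get(0).message.content;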
Also, the scoreboard doesn't show up, even though it seems like that part wouldn't need the AI at all.
The scoreboard actually does use the AI; it won't show up until GPT sets an objective.
It seems you got it to respond successfully. I think you would have to get it to produce formatted output through prompting, though.
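For example, something along these lines could be sent as the system message; the wording and the JSON fields are only an illustration of the idea, not something the plugin already contains:

// Hypothetical system prompt nudging the model toward machine-readable output.
private static final String JSON_SYSTEM_PROMPT =
        "You are a Minecraft quest giver. Reply ONLY with a JSON object of the form "
      + "{\"message\": \"<text for the player>\", \"command\": \"<a single Minecraft command or null>\"}. "
      + "Do not write anything outside the JSON object.";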
Ah ok. Yeah, it does seem like it's working right, just not doing the function calling because it doesn't natively support it. I'll try to get it working somehow.
I am trying to use LM Studio with Llama instead of an OpenAI key. I replaced the OpenAI URL with this in the GptAPI class:
private String CHATGPTURL = "http://localhost:1234/v1/chat/completions";
based on LM Studio's info. The server console logs the things I'm doing, and clearly the AI knows what's happening, but nothing actually happens in-game, not even messages. Any thoughts?
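One way to narrow this down is to check whether the round trip to the local endpoint completes at all, outside the plugin. A minimal standalone check, assuming the default LM Studio port and a plain HttpURLConnection (adjust to whatever client GPTAPI actually uses), could look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Standalone sanity check for the local LM Studio endpoint, independent of the plugin.
public class LocalEndpointCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:1234/v1/chat/completions");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Model name copied from the console log above; LM Studio usually also
        // answers without it when a model is already loaded.
        String body = "{\"model\":\"lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf\","
                + "\"messages\":[{\"role\":\"user\",\"content\":\"say hi\"}]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("HTTP status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

If this prints a normal chat completion but nothing reaches the game, the problem is more likely in how the plugin parses the response (e.g. it expects the function calling fields that the local model never sends) than in the connection itself.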