It appears local LLMs are now at a state where they're easy enough for the average user to run, and I just got an M3 Pro MacBook Pro for testing.
The servers packaged with apps like LM Studio follow OpenAI's API spec, which means they should work with simpleaichat out of the box, aside from a few warnings about unsupported config parameters.
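As a rough sketch of what that compatibility means: a local server exposes the same `/v1/chat/completions` endpoint and JSON schema as OpenAI, so a client just needs to point at a different base URL. The example below builds an OpenAI-spec request payload; the `localhost:1234` URL is LM Studio's default and the `local-model` name is a placeholder, so adjust both to your setup. (The actual POST is left commented out since it requires a running server.)

```python
import json

# LM Studio's default local endpoint (assumed; change to match your setup).
API_URL = "http://localhost:1234/v1/chat/completions"


def build_payload(prompt: str, model: str = "local-model",
                  temperature: float = 0.7) -> dict:
    """Construct a chat request following OpenAI's API spec.

    Local servers typically accept this shape as-is; parameters they
    don't support are usually ignored, often with a logged warning.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


payload = build_payload("Write a haiku about MacBooks.")
print(json.dumps(payload, indent=2))

# With a local server running, the request itself would be e.g.:
#   import requests
#   resp = requests.post(API_URL, json=payload)
#   print(resp.json()["choices"][0]["message"]["content"])
```

Because the schema matches, a client library written against OpenAI's API (such as simpleaichat) can reuse the same request/response handling and only swap the endpoint URL.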