Open mskyttner opened 11 months ago
It would be interesting to try out the recently released lida library with LLMs running locally via Llama.cpp.
Could llmx support such "offline"/embedded or standalone, more resource-constrained scenarios, where LLMs run on CPUs only?
If so, could you provide an outline of the steps required?