Open KandBM opened 3 months ago
Like Ollama, NVIDIA's ChatRTX runs LLMs locally. Could LLMs installed with ChatRTX be used in the same way as models served by Ollama?
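For context, "used in the same way as Ollama" presumably means talking to Ollama's local REST API, which by default listens on `http://localhost:11434` and serves generation through `POST /api/generate`. A minimal sketch of that request shape (the helper name `build_generate_request` and the model name `llama3` are illustrative; whether ChatRTX exposes a comparable local endpoint is exactly the open question here):

```python
import json
from urllib import request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build (but do not send) a request against Ollama's /api/generate."""
    payload = json.dumps({
        "model": model,    # must already be pulled locally, e.g. via `ollama pull llama3`
        "prompt": prompt,
        "stream": False,   # ask for a single JSON response instead of a stream
    }).encode("utf-8")
    return request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Why is the sky blue?")
print(req.full_url)
# Actually sending it (requires a running Ollama server) would be:
#   response = request.urlopen(req)
#   print(json.loads(response.read())["response"])
```

Anything that can produce this kind of local HTTP endpoint can be swapped in wherever Ollama is used; without such an endpoint (or an adapter providing one), ChatRTX-installed models would not be drop-in replacements.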