Command should be able to demo the LLM by sending it a message and receiving a response. This requires setting up the LLM API and its implementation within Silvercord.
Tasks:
[x] Explore options: Claude, Meta's Llama, Google's Gemma, and Groq
[x] Choose one, and integrate their API into Silvercord
[x] Build an internal project API that accepts a prompt and context and returns a generated response
[x] Write unit tests for relevant prompts and context that may be used in production
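The internal API task above can be sketched roughly as follows. This is a minimal illustration, not Silvercord's actual code: the names `ChatService`, `LLMClient`, and `generate_response` are assumptions, and the stub client stands in for whichever provider (Claude, Llama, Gemma, or Groq) was integrated.

```python
from dataclasses import dataclass, field
from typing import Protocol


class LLMClient(Protocol):
    """Minimal interface a provider adapter would satisfy (hypothetical)."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class ChatService:
    """Internal API sketch: accepts a prompt plus prior context, returns a reply."""
    client: LLMClient
    context: list[str] = field(default_factory=list)

    def generate_response(self, prompt: str) -> str:
        # Fold prior context into the request so the model sees the conversation.
        full_prompt = "\n".join(self.context + [prompt])
        reply = self.client.complete(full_prompt)
        # Record both turns so later calls carry the history forward.
        self.context.extend([prompt, reply])
        return reply


class EchoClient:
    """Stub provider used in unit tests so no network call is needed."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt.splitlines()[-1]}"


service = ChatService(client=EchoClient())
print(service.generate_response("hello"))  # echo: hello
```

Unit tests along the lines of the last task can then swap in a stub client like `EchoClient` to exercise prompt/context handling without hitting the real API.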