Open robknapen opened 2 months ago
Note that using an LLM requires access to one (or more) GPUs, depending on the size of the selected model.
To avoid installation complexity (figuring out access to GPU resources), for the first prototype we might use our WEnR OpenAI subscription and later charge the costs to the project.
Various LLMs exist, differing e.g. in size (number of parameters), openness (open or closed weights), and deployment (locally installed or hosted by a company). It would be good to know the project's initial preferences.