AISE-TUDelft / coco

AI Code Completions for Jetbrains and VSCode

Plug-&-Play system for generating responses from the LLM #5

Closed RebelOfDeath closed 1 week ago

RebelOfDeath commented 2 weeks ago

The implementation requires that a text be passed to the LLM and that the LLM's cleaned output be returned in the intended format. This system should be implemented with extensibility in mind, so that a model can later be substituted or exchanged with minimal effort.

Ar4l commented 1 week ago

Done, see the chain in server.completion.__init__. Models can be easily added to a dict, requiring only that each model-specific chain handles the prefix and suffix fields properly for model-specific tokenisation.
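
The registry described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual code in server.completion.__init__; the names (`MODEL_CHAINS`, `format_starcoder`, `format_codellama`, `build_prompt`) are invented for the example, and the real chains would also invoke the model and clean its raw output.

```python
from typing import Callable, Dict

# Each "chain" turns a (prefix, suffix) pair into a model-specific
# fill-in-the-middle prompt. The actual model call and output
# cleaning are omitted here.

def format_starcoder(prefix: str, suffix: str) -> str:
    # StarCoder-style FIM sentinel tokens
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

def format_codellama(prefix: str, suffix: str) -> str:
    # CodeLlama-style infilling format
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Swapping or adding a model is a single dict entry, provided its
# chain handles the prefix/suffix tokenisation itself.
MODEL_CHAINS: Dict[str, Callable[[str, str], str]] = {
    "starcoder": format_starcoder,
    "codellama": format_codellama,
}

def build_prompt(model: str, prefix: str, suffix: str) -> str:
    return MODEL_CHAINS[model](prefix, suffix)
```

With this shape, the rest of the server only ever calls `build_prompt` (or the full chain) by model name, which is what makes the substitution plug-and-play.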