Closed minipasila closed 1 year ago
This would be a lot of effort; I'm all for more options, however. Text-Generation-WebUI handles almost everything differently than KoboldAI does. It pretty much throws out all modularity in favor of raw performance. The biggest problem is that it doesn't even have an API for you to use, so actually sending and fetching generation tasks would be extremely difficult.
Hmm, I thought the readme mentioned something about having an API, though. I guess I'll just have to wait for KoboldAI to support 8-bit precision.
Not sure if this is of any help: https://github.com/oobabooga/text-generation-webui/blob/main/api-example.py
@minipasila Oobabooga added support for this on their end a few days ago. If you start it with --api, you can connect to it from TavernAI by treating it as KoboldAI.
That's cool. Apparently it's implemented as an extension, so you enable it like one: "--extensions api". I guess this can be closed then.
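For anyone finding this later, here is a minimal sketch of talking to text-generation-webui once it's launched with the api extension enabled. This assumes the extension exposes a KoboldAI-compatible endpoint at http://127.0.0.1:5000/api/v1/generate (the default port and request shape may differ on your setup, so check your local instance); the function and parameter names here are just illustrative.

```python
# Hypothetical client for text-generation-webui's KoboldAI-compatible API.
# The endpoint path, port, and response shape are assumptions based on
# KoboldAI's API conventions -- verify against your running instance.
import json
import urllib.request


def build_payload(prompt, max_length=80):
    """Build a KoboldAI-style generation request body."""
    return {"prompt": prompt, "max_length": max_length}


def generate(prompt, base_url="http://127.0.0.1:5000"):
    """Send a generation request and return the generated text."""
    req = urllib.request.Request(
        base_url + "/api/v1/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

Since TavernAI treats the server as KoboldAI, pointing it at the same base URL should work without any client-side changes.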
It would be nice if there were a way to use text-generation-webui with this, considering that it appears to be the only backend that supports 8-bit precision for running models, which more or less halves the VRAM required. (The feature doesn't currently support Windows, but there is a workaround that fixes it.)