ShmuelRonen opened this issue 10 months ago
Hi ShmuelRonen, I created the oobaprompt node before they changed their name to text-generation-webui and also changed their API. I have yet to get around to making a new node for their new API, but I did make a node for LM Studio's API called LMStudioPrompt.
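In the meantime, the newer text-generation-webui API is OpenAI-compatible, so a node for it mostly comes down to a standard chat-completions request. Here is a minimal sketch of what that call looks like from Python; the host, port, and prompt text are assumptions based on a default local install started with `--api`, not the actual node code:

```python
import json
import urllib.request

# text-generation-webui's newer API is OpenAI-compatible; this endpoint
# and port assume a default local install started with --api.
URL = "http://127.0.0.1:5000/v1/chat/completions"

payload = {
    "messages": [
        {"role": "user", "content": "Describe a cinematic sunset in one sentence."}
    ],
    "max_tokens": 200,
    "temperature": 0.7,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

# Standard OpenAI-style response shape: the generated text is in choices[0].
print(result["choices"][0]["message"]["content"])
```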
Thanks bash-j for the quick response. I have oobabooga installed with many gigabytes of LLMs. I would prefer to avoid installing LM Studio, mainly because all of the LLMs would need to be downloaded again. I would appreciate it if you could update oobaprompt for the new API instead.
+1
I start the oobabooga API with:
python server.py --model TheBloke_Starling-LM-7B-alpha-GPTQ --loader ExLlamav2_HF --listen --api
Everything looks like it's working, but this is all I got. This is the ComfyUI log:
Prompt executed in 2.31 seconds
got prompt
Prompt executed in 2.04 seconds
got prompt
Prompt executed in 2.05 seconds
got prompt
Prompt executed in 2.06 seconds
got prompt
Prompt executed in 2.05 seconds
got prompt
Prompt executed in 2.04 seconds
got prompt
got prompt
Prompt executed in 2.04 seconds
Prompt executed in 2.08 seconds
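The roughly 2-second turnaround with no generated text would be consistent with the node posting to the retired API routes and getting nothing usable back. As a quick sanity check that the new OpenAI-compatible API is actually reachable, you can probe the /v1/models endpoint; this is a minimal sketch, and the host and port assume the default local `--api` setup:

```python
import json
import urllib.request

# Probe text-generation-webui's OpenAI-compatible API.
# Host/port are assumptions based on the default --api settings.
URL = "http://127.0.0.1:5000/v1/models"

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        models = json.loads(resp.read())
    # An OpenAI-style model listing confirms the new API is live.
    print(json.dumps(models, indent=2))
except Exception as exc:
    print(f"New API not reachable at {URL}: {exc}")
```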