Urammar closed this issue 1 year ago
The installation works very well on Linux systems. I saw you were trying to export a variable on Windows; try using set instead.
Launching on Windows should be the same as on Linux: try oobabot <args>
or python -m oobabot <args>
Thanks, that worked! Okay, so last step, we have everything ready to go.
Now I get this:
C:\MyShit\AI\Flan-t5-XXL\OobaBot>python -m oobabot --ai-name YourBotsName --persona "You are a cat named YourBotsName"
2023-06-22 03:57:51,874 INFO  Oobabooga is at ws://localhost:5005
2023-06-22 03:57:55,939 ERROR Could not connect to Oobabooga server: [ws://localhost:5005]
2023-06-22 03:57:55,940 ERROR Please check the URL and try again.
2023-06-22 03:57:55,940 ERROR Reason: Cannot connect to host localhost:5005 ssl:default [The remote computer refused the network connection]
The API option is checked under the boolean command-line flags in the UI, and I hit "Apply and restart the interface."
The final output on the command prompt says it's running, but doesn't mention the API; I'm not sure if it's supposed to.
Really appreciate the help, AND quick response time btw!
If your oobabooga server is not on the same machine, you may need to pass the --listen and --api options (just to be sure) when launching the oobabooga server, then pass the server IP as an argument to oobabot. Even if you checked and applied the API option in the webui, if it isn't printing "Starting API at http://127.0.0.1:5000" then the API may not be running.
You can confirm whether the API ports are listening with netstat -ano | findstr /i ":5005 :5000"
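If you'd rather script that check, here is a minimal Python sketch that simply tries to open a TCP connection to each port. It assumes the default oobabooga ports (5000 for the blocking API, 5005 for the streaming API); adjust them if you changed your config.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed defaults: 5000 (blocking API) and 5005 (streaming API)
for port in (5000, 5005):
    state = "listening" if port_open("localhost", port) else "not reachable"
    print(f"localhost:{port} -> {state}")
```

If both ports show "not reachable", the API extension almost certainly isn't running, regardless of what the webui checkbox says.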
Also, there is a built-in oobabot extension that provides all of this functionality inside the oobabooga webui.
Right you are, looks like that fixed it. Also, kill me, there's totally a UI extension!!!!
Yup that fixed it, thanks a ton for your help! I'll let you know how it goes!
It's very slow compared to ordinary generation; I'll open a 2nd issue for that.
It's mostly because oobabot can't cache the ongoing context in VRAM and keep it there, so the context is pulled and parsed each and every time the bot activates, adding extra time to the generation. Normal operation in the webui keeps up to 2048 tokens of context (or more, as models develop) in VRAM, so new generations don't have to reread the context.
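As a back-of-the-envelope illustration of that overhead (all numbers below are invented for illustration, not measured from oobabot or the webui; only the structure of the cost matters):

```python
# Illustrative sketch only -- invented timings, not oobabot internals.
CONTEXT_TOKENS = 2048        # context window re-sent on every bot activation
MS_PER_PROMPT_TOKEN = 0.5    # assumed cost to (re)process one context token
MS_PER_NEW_TOKEN = 30.0      # assumed cost to generate one new token

def generation_ms(new_tokens: int, context_cached: bool) -> float:
    """Rough time for one reply, with or without the context kept in VRAM."""
    prompt_cost = 0.0 if context_cached else CONTEXT_TOKENS * MS_PER_PROMPT_TOKEN
    return prompt_cost + new_tokens * MS_PER_NEW_TOKEN

print(generation_ms(100, context_cached=True))   # -> 3000.0 (webui: context cached)
print(generation_ms(100, context_cached=False))  # -> 4024.0 (oobabot: context reread)
```

The fixed per-activation cost of reprocessing the full context is what makes each oobabot reply slower than the same generation run directly in the webui.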
Glad you got it working!
Hi, I cannot for the life of me work out how to actually use this.
I've followed all the steps and now have the bot in the server and the pip install done.
However... how do I launch the actual application? How do I give the bot its token? How do I point it at my local IP? I think the install guide is missing a few steps...