Waifu_AI_Vtuber is a Python-based AI virtual YouTuber (VTuber) chatbot. It reads live YouTube chat messages, generates responses with the OpenAI GPT-3.5 model, and speaks those responses using text-to-speech powered by the VoiceVox engine.
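At a high level the program loops: read a chat message, ask GPT-3.5 for a reply, write the reply to the subtitle file that OBS displays, and speak it through VoiceVox. The sketch below only illustrates that flow under some assumptions (pytchat for chat reading, the pre-1.0 openai client, and requests for the VoiceVox HTTP API); the helper names are made up for this example, and the real code lives in the files described further down.

```python
# Illustrative sketch of the chat -> GPT-3.5 -> VoiceVox loop. The library
# choices (pytchat, the pre-1.0 openai client, requests) and helper names are
# assumptions for this example, not the project's actual code.
import openai    # OpenAI client (0.x-style API assumed here)
import pytchat   # one common way to read YouTube live chat
import requests  # the VoiceVox engine exposes a local HTTP API

openai.api_key = "YOUR_OPENAI_API_KEY"   # normally loaded from the keys folder
VOICEVOX_URL = "http://127.0.0.1:50021"  # default VoiceVox engine address

def generate_reply(message: str) -> str:
    """Ask GPT-3.5 for a short, in-character reply to a chat message."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a cheerful VTuber."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

def synthesize(text: str, speaker: int = 1) -> bytes:
    """Turn text into WAV bytes with the local VoiceVox engine."""
    query = requests.post(f"{VOICEVOX_URL}/audio_query",
                          params={"text": text, "speaker": speaker}).json()
    audio = requests.post(f"{VOICEVOX_URL}/synthesis",
                          params={"speaker": speaker}, json=query)
    return audio.content

chat = pytchat.create(video_id="YOUR_VIDEO_ID")
while chat.is_alive():
    for item in chat.get().sync_items():
        reply = generate_reply(item.message)
        with open("text_log/subtitle.txt", "w", encoding="utf-8") as f:
            f.write(reply)               # OBS shows this file as the subtitle
        wav = synthesize(reply)          # play `wav` on the VoiceMeeter device
```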
Download OBS, VTube Studio, EarTrumpet, and VoiceMeeter Banana (after installing VoiceMeeter Banana you'll also need to restart your PC), then open VoiceVox.
For VoiceMeeter Banana, we first need to change the audio output and input devices.
Open the Control Panel by pressing the Windows key and typing Control Panel. In the upper right corner, click View by and select Large icons.
Click Sound, scroll down until you see VoiceMeeter Input, click it, and then click Set Default.
Click the Recording tab at the top, scroll down until you see VoiceMeeter Aux Output, click it, and then click Set Default.
When VoiceMeeter Banana is opened for the first time, click each A1 button to deselect it on all five panels, and do the same with B1.
In the upper right corner, click A1 and select your speaker output (a WDM device is recommended).
Now click A1 on all VIRTUAL INPUTS; for VOICEMEETER AUX, you'll also need to click B1.
For VTube Studio:
Open the settings by double-clicking on the screen, then click the gear icon on the left side.
Scroll down until you see Microphone Settings, check Use microphone, and select VoiceMeeter Output (VB-Audio VoiceMeeter VAIO) by clicking on Microphone.
Go to the model settings at the top left corner (a person icon with a gear). Scroll down until you see Mouth Open, click on Input, and select or type VoiceVolumePlusMouthOpen.
Optional: in the Microphone Settings, I recommend setting Volume gain to 20 and leaving everything else at 0.
In OBS we'll add a subtitle source to display the text, and VTube Studio will provide the Live2D model.
To add the subtitle, press + in the Sources panel, select Text (GDI+), and name it Subtitle.
After adding the text source, a properties window will appear. Check Read from file and then click Browse.
Navigate to subtitle.txt, which is located inside the text_log folder, and select it.
Customize the subtitle source to your preference. My recommendation: reduce the text size, set Alignment and Vertical alignment to center, right-click the source, go to Transform, and select Center Horizontally. Also check Outline, set the outline size to 10-14, and change the outline color to black by clicking Select color.
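On the program side, the Read from file source re-reads subtitle.txt whenever the file changes, so the bot only needs to overwrite it with each new reply. Below is a minimal sketch of that idea; the function names, and clearing the subtitle after the speech finishes, are assumptions rather than a description of create_subtitle.py.

```python
# Sketch of a subtitle writer for the OBS "Read from file" text source.
# Overwriting the file is enough; OBS picks up the change automatically.
from pathlib import Path

SUBTITLE_FILE = Path("text_log/subtitle.txt")

def show_subtitle(text: str) -> None:
    """Replace the on-screen subtitle with the given text."""
    SUBTITLE_FILE.parent.mkdir(exist_ok=True)
    SUBTITLE_FILE.write_text(text, encoding="utf-8")

def clear_subtitle() -> None:
    """Blank the subtitle, e.g. once the TTS audio has finished playing."""
    SUBTITLE_FILE.write_text("", encoding="utf-8")
```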
To add VTube Studio, press + in the Sources panel, select Window Capture, and name it Live2D.
After adding the video source, a properties window will appear. Click Window and select [VTube Studio.exe]: VTube Studio; for Capture Method, choose Windows 10 (1903 and up), and then click OK.
Right-click on the preview screen, choose Windowed Projector (Preview), and resize it as desired.
When running the code, open EarTrumpet and scroll down to the bottom until you see VoiceMeeter Input (VB-Audio VoiceMeeter VAIO). Right-click Python 3.11.xx, click the change icon, and select VoiceMeeter Aux Input (VB-Audio VoiceMeeter AUX VAIO).
Change your playback/output device by clicking the speaker icon on the taskbar (or go to Windows Settings -> System -> Sound -> Choose your output device). Select VoiceMeeter Aux Input (VB-Audio VoiceMeeter AUX VAIO) first and then select VoiceMeeter Input (VB-Audio VoiceMeeter VAIO); we need to do this so that Python recognizes these playback devices.
In VTube Studio, open the settings, navigate to Microphone Settings, and click Reload.
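The device switching above is what lets Python enumerate the VoiceMeeter playback devices. As a purely hypothetical illustration of targeting one of them by name (the project's actual playback code may differ; the sounddevice and soundfile packages are assumptions here):

```python
# Hypothetical playback helper that sends audio to a VoiceMeeter device by name.
# Assumes the sounddevice and soundfile packages; the project may do this differently.
import io
import sounddevice as sd
import soundfile as sf

def find_output_device(name_fragment: str) -> int:
    """Return the index of the first output device whose name contains name_fragment."""
    for index, device in enumerate(sd.query_devices()):
        if name_fragment in device["name"] and device["max_output_channels"] > 0:
            return index
    raise RuntimeError(f"no output device matching {name_fragment!r}")

def play_wav(wav_bytes: bytes, device_name: str = "VoiceMeeter Input") -> None:
    """Decode WAV bytes (e.g. VoiceVox output) and play them on the chosen device."""
    data, samplerate = sf.read(io.BytesIO(wav_bytes))
    sd.play(data, samplerate, device=find_output_device(device_name))
    sd.wait()  # block until playback has finished
```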
Download the project as a zip file from GitHub, or clone this repository by typing the following in a terminal or command prompt (if you choose to download the zip file, you'll also need to extract it):
git clone https://github.com/ZeroMirai/VirtuAI-Helper.git
Open a terminal or command prompt and change the directory to the project folder by typing cd followed by the folder's location, for example cd C:\Git_hub\VirtuAI Helper.
Install all the necessary libraries by typing:
pip install -r requirements.txt
Configure the necessary API keys and other settings in config.txt.
The project is organized as follows:
functions: Contains the modular components of the project.
keys: Folder that contains the API keys.
api_key_chat: File that stores the configuration for ChatGPT's API keys.
chatbot.py: Code that interacts with the OpenAI GPT-3.5 model to create responses.
create_subtitle.py: Code that generates the subtitle file used in OBS.
tts.py: Code for text-to-speech using VoiceVox.
main.py: Main code for the AI virtual YouTuber chatbot.
youtube_chat.py: Code that reads and processes YouTube live chat messages.
Before running the program, ensure you have changed all the configurations and pasted each value right after the : in the file. The file must have the following format.
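Presumably that means a single name: value line per entry, matching the "paste right after the :" instruction; the snippet below is only a hypothetical example (the key name and layout are guesses, not copied from the repository), together with one way such a line could be read back in Python.

```python
# Hypothetical contents of keys/api_key_chat (the real key name may differ):
#   api_key_chat: sk-xxxxxxxxxxxxxxxxxxxx
#
# One way to read everything after the first ':' from such a file:
with open("keys/api_key_chat", encoding="utf-8") as f:
    api_key = f.read().split(":", 1)[1].strip()
```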
Add your API keys for api_key_chat.py to generate chat responses and for youtube_chat.py to capture a YouTube live chat.
Configure main.py with the character's name.
In youtube_chat.py, set video_id to the video ID from your YouTube live URL; for example, for https://www.youtube.com/watch?v=CSdEsXa the video ID is CSdEsXa (a small helper for pulling the ID out of a URL is sketched after the prompt example below).
Write your character's role and story in RoleAndStory.txt, but don't change the rule part (if you'd rather not write your own prompt, you can also ask ChatGPT to make one for you by asking):
Generate me a prompt that I can use with my AI assistant "that's a ..(personality, gender, traits).., ..(other personality, gender, traits).."
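If you'd rather not copy the video ID by hand, the Python standard library can pull it out of the URL; this tiny helper is just an illustration and not part of the project:

```python
# Extract the video id (the "v" query parameter) from a YouTube live URL.
from urllib.parse import urlparse, parse_qs

url = "https://www.youtube.com/watch?v=CSdEsXa"
video_id = parse_qs(urlparse(url).query)["v"][0]
print(video_id)  # CSdEsXa
```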
To start the chatbot, run run_main_script.bat and run_read_chat_script.bat.
This project was made just for fun, but if you're interested in contributing, here is how you can help make it better for everyone:
If you find a bug or have an idea for a new feature, feel free to open an issue on GitHub. For bug reports, please give as much detail as possible; for feature suggestions, please include clear steps or a description.
If you have code suggestions or improvements, open a pull request against main. I'm primarily looking for code improvements and bug fixes. Once your changes are approved, they will be merged into the main project.
If you find this project useful, I would be really grateful if you could share this small project with others and give it a star on GitHub.
This project is licensed under the MIT License.
Special thanks to Neuro-sama, who inspired me to start learning how to code and create my own waifu.