A backend scheduler that tracks VTuber live streams (and archives) on YouTube, BiliBili, Twitch, and Twitcasting.
Written in TypeScript, using Mongoose.
The BiliBili implementation is a bit hindered by rate limiting; we're currently working around the limitation :smile:
Version 0.3.0
Please run the database script (`npm run database`) and run the Migrations.

2021-02-05: run the following if you already set up this program before this update:
- `npm install`
- `npm install -g ts-node`
Requirements:
- A MongoDB server up and running, either at localhost or on MongoDB Atlas.
- YouTube Data API key(s). There's a limit of 10k requests per day, so you might want to ask Google for more quota or get additional API keys that will be rotated (a rough sketch of the rotation idea follows this list).
- A Twitch API key: register a new application on your Developer Console. That will create a Client ID and Client Secret for you to use.
- A Twitter Developer API token (Bearer Token): apply for developer access at https://developer.twitter.com/en, then make sure your application has access to the v2 API.
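Key rotation itself is handled by the scheduler using the `rotation_rate` value in the config; the snippet below is only a minimal sketch of the round-robin idea behind it (the class and names are illustrative, not the project's actual implementation):

```typescript
// Illustrative round-robin rotation over several YouTube API keys.
// This is NOT the scheduler's real implementation, just the general idea.
class APIKeyRotator {
    private index = 0;

    constructor(private readonly keys: string[]) {
        if (keys.length === 0) {
            throw new Error("Provide at least one API key");
        }
    }

    /** Return the current key and advance to the next one. */
    next(): string {
        const key = this.keys[this.index];
        this.index = (this.index + 1) % this.keys.length;
        return key;
    }
}

const rotator = new APIKeyRotator(["API_KEY_1", "API_KEY_2"]);
console.log(rotator.next()); // API_KEY_1
console.log(rotator.next()); // API_KEY_2
```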
Configure the scheduler in `src/config.json`: rename `config.json.example` to `config.json` and adjust it as needed.
```json
{
    "mongodb": {
        "uri": "mongodb://127.0.0.1:27017",
        "dbname": "vtapi"
    },
    "youtube": {
        "api_keys": [],
        "rotation_rate": 60
    },
    "twitch": {
        "client_id": null,
        "client_secret": null
    },
    "twitter": {
        "token": null
    },
    "workers": {
        "youtube": true,
        "bilibili": false,
        "twitch": false,
        "twitcasting": false,
        "mildom": false,
        "twitter": false
    },
    "intervals": {
        "bilibili": {
            "channels": "*/60 */2 * * *",
            "upcoming": "*/4 * * * *",
            "live": "*/2 * * * *"
        },
        "youtube": {
            "channels": "*/60 */2 * * *",
            "feeds": "*/2 * * * *",
            "live": "*/1 * * * *",
            "missing_check": "*/5 * * * *"
        },
        "twitcasting": {
            "channels": "*/60 */2 * * *",
            "live": "*/1 * * * *"
        },
        "twitch": {
            "channels": "*/60 */2 * * *",
            "feeds": "*/15 * * * *",
            "live": "*/1 * * * *"
        },
        "mildom": {
            "channels": "*/60 */2 * * *",
            "live": "*/1 * * * *"
        },
        "twitter": {
            "channels": "*/60 */2 * * *",
            "live": "*/1 * * * *",
            "feeds": "*/3 * * * *"
        }
    },
    "filters": {
        "exclude": {
            "channel_ids": [],
            "groups": []
        },
        "include": {
            "channel_ids": [],
            "groups": []
        }
    }
}
```
Explanation:
- The Twitch and Twitter credentials can be left as `null` if you don't use those workers.
- `intervals`: the run schedule of each worker task, written in crontab style.
- `filters.exclude`: `channel_ids` and `groups` to exclude from being processed.
- `filters.include`: `channel_ids` and `groups` to be processed; these take priority over `filters.exclude`.
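The interval strings are standard five-field cron expressions: for example, `*/2 * * * *` fires every two minutes, and `*/60 */2 * * *` fires at minute 0 of every second hour. A minimal illustration of how such an expression drives a periodic task (using node-cron purely for demonstration; it is not necessarily what the scheduler uses internally):

```typescript
import * as cron from "node-cron";

// Demonstration only: run a task every two minutes,
// the same schedule as the default bilibili "live" interval.
cron.schedule("*/2 * * * *", () => {
    console.log("Checking live streams...");
});
```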
Make sure you've already configured `config.json` and that the MongoDB server is up and running.

The next thing you need to do is decide what you do and don't want in the database. You can do that by adding the suffix `.mute` to a file in `database/dataset`. For example, if you don't want to scrape Hololive data, rename `hololive.json` to `hololive.json.mute`.
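The `.mute` suffix simply causes a dataset file to be skipped. A minimal sketch of that idea (not the project's actual loader; the directory path is assumed):

```typescript
import { readdirSync } from "fs";
import { join } from "path";

// List the dataset files that would be scraped.
// Files renamed to *.json.mute no longer end in ".json", so they are skipped.
const datasetDir = join(__dirname, "database", "dataset");
const activeDatasets = readdirSync(datasetDir).filter((file) => file.endsWith(".json"));

console.log("Datasets that will be scraped:", activeDatasets);
```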
Run `npm run database`; this will run the database creation handler. Then run `2`, which will start the initial scraping process. If you have changed something, you can run that again to update the Channel Models. If you just removed something, run `3` then `2` to reset it.
It's recommended to split the workers into separate servers or processes to avoid rate limiting. To do that, rename `skip_run.json.example` to `skip_run.json` and add anything you don't need on that server. Do the same on every other server.
It's recommended to create `id` and `group` indexes for every collection. They are not created by default, so you need to open the Mongo Shell or use MongoDB Compass to create them. For `viewersdatas`, an `id` index is enough.
Recommended Indexes:
- Identifier: `{ id: 1 }`
- Group: `{ group: 1 }`
- Identifier and Group: `{ id: 1, group: 1 }`
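If you prefer to script this instead of clicking through the Mongo Shell or Compass, a minimal sketch using the Node.js MongoDB driver could look like the following. It reuses the URI and dbname from `config.json`; the collection names other than `viewersdatas` are placeholders, so adjust them to the collections that actually exist in your database:

```typescript
import { MongoClient } from "mongodb";

async function createIndexes(): Promise<void> {
    // Same URI and database name as in src/config.json.
    const client = await MongoClient.connect("mongodb://127.0.0.1:27017");
    const db = client.db("vtapi");

    // Placeholder names: replace with the collections in your database.
    const collections = ["videosdatas", "channelsdatas"];
    for (const name of collections) {
        const coll = db.collection(name);
        await coll.createIndex({ id: 1 });           // Identifier
        await coll.createIndex({ group: 1 });        // Group
        await coll.createIndex({ id: 1, group: 1 }); // Identifier and Group
    }

    // For viewersdatas, an `id` index is enough.
    await db.collection("viewersdatas").createIndex({ id: 1 });

    await client.close();
}

createIndexes().catch(console.error);
```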