pomkos opened 3 years ago
Should we use the chat data we've downloaded from twitch, instead of the twitch api? @chand1012 @RusseII
I think if we have the data we should use it, that way we're not eating into our Twitch API rate limit.
BTW Twitch has a nice embeddable chat widget, but it is only useful for LIVE streams, not at all for VODs. https://dev.twitch.tv/docs/embed/chat
In the super short term, it might make sense for us to recommend a plugin for Twitch Studio / OBS that adds the Twitch chat as an overlay.
I have a clear idea of how we can do this on the frontend.
@chand1012 any ideas on how this could be handled on the backend, to add chat to the compilations we create?
I wasn't even thinking of rendering the chat in the video. Since this is entirely for feedback purposes, I think it would make more sense to display the chat next to the video somewhere within the frontend GUI. Here is what I was thinking:
> It might make sense for us to recommend a plugin for Twitch Studio / OBS that adds the Twitch chat as an overlay.
This is what I use.
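One way to sketch the "chat next to the video" idea on the frontend: given the full chat array and the player's current playback offset, pick the messages to render. This is just an illustrative sketch; `visibleMessages` and `windowSeconds` are made-up names, though the `content_offset_seconds` field matches the downloaded Twitch chat format used elsewhere in this thread.

```javascript
// Hypothetical helper: returns the chat messages that should be visible
// when the player is at `currentTime` seconds, showing the trailing
// `windowSeconds` of chat, similar to how live chat scrolls.
const visibleMessages = (messages, currentTime, windowSeconds = 30) =>
  messages.filter(
    (m) =>
      m.content_offset_seconds <= currentTime &&
      m.content_offset_seconds >= currentTime - windowSeconds
  );
```

The frontend would call this on every player time update (or throttled) and re-render the chat column with the result.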
good point about maybe us recommending a plugin for twitch chat overlays, that might be a better short/medium term approach
on displaying the chat in the GUI: I think the interesting part of the discussion is what mechanism we use to get the chat data to the Editor FE.
a few brainstorming options:
> good point about maybe us recommending a plugin for twitch chat overlays, that might be a better short/medium term approach
+++ If we are able to do this in short/mid term that'd be ideal.
If we are going to do this frontend-only, we should just copy exactly how Twitch handles it and use their API to get the data.
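For reference, the request Twitch's own VOD player makes for replay chat goes to the undocumented (and deprecated) v5 video-comments endpoint, so the URL shape below is an assumption to verify before building on it, not a documented contract:

```javascript
// Sketch of the URL Twitch's VOD player appears to use for replay chat.
// v5 is undocumented/deprecated, so treat this shape as an assumption.
const buildCommentsUrl = (videoId, offsetSeconds) =>
  `https://api.twitch.tv/v5/videos/${videoId}/comments?content_offset_seconds=${offsetSeconds}`;

// A real request would also need headers, e.g. a Client-ID:
// fetch(buildCommentsUrl('1234567890', 120), {
//   headers: { 'Client-ID': CLIENT_ID, Accept: 'application/vnd.twitchtv.v5+json' },
// });
```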
Here is my initial concept for a chat-getting function. It filters the data by `content_offset_seconds` and only gets the data between the `startTime` and `endTime` query parameters. This will be hosted on Vercel to keep frontend code within the frontend repository.
You can also find the file here.
```javascript
// api/chat/[videoId].js
// import AWS S3 v3 SDK
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

// region set to us-east-1
const REGION = 'us-east-1';

// get env variables
const { BUCKET_NAME, S3_AWS_ACCESS_KEY_ID, S3_AWS_SECRET_ACCESS_KEY } = process.env;

// from the AWS docs: converts an S3 stream to a string
// https://tinyurl.com/njha6ha2
const streamToString = (stream) =>
  new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('error', reject);
    stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
  });

const get = async (req, res) => {
  // get the videoId, startTime, and endTime from the request
  const { videoId, startTime, endTime } = req.query;
  // construct the S3 client
  // (the v3 SDK expects credentials in a nested `credentials` object)
  const s3Client = new S3Client({
    region: REGION,
    credentials: {
      accessKeyId: S3_AWS_ACCESS_KEY_ID,
      secretAccessKey: S3_AWS_SECRET_ACCESS_KEY,
    },
  });
  // construct the command parameters
  const params = {
    Bucket: BUCKET_NAME,
    Key: `${videoId}`,
  };
  // get the data from the S3 bucket
  const dataStream = await s3Client.send(new GetObjectCommand(params));
  // get string from the stream
  const dataString = await streamToString(dataStream.Body);
  // convert data string to JSON
  const data = JSON.parse(dataString);
  // filter the data so that only messages
  // between start and end time are returned
  const filteredData = data.filter(
    (d) =>
      d.content_offset_seconds >= parseFloat(startTime) &&
      d.content_offset_seconds <= parseFloat(endTime)
  );
  // return the filtered data
  return res.status(200).json(filteredData);
};

export default get;
```
The above example is more or less @gatesyp's first example in the above comment, but filters the data on the server side, as getting the data is going to be much faster in Vercel's datacenter. I'm open to the idea of putting chat data into MongoDB, but I think only relevant chats should be put into Mongo, such as the chats that are within the start and end timestamps of the videos we found.
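Calling that route from the Editor FE would then look something like this (the helper name `buildChatRoute` is illustrative; the path and query parameters follow the Vercel file-based route in the snippet above):

```javascript
// Build the URL for the /api/chat/[videoId] route shown above;
// startTime/endTime are second offsets into the VOD.
const buildChatRoute = (videoId, startTime, endTime) =>
  `/api/chat/${videoId}?startTime=${startTime}&endTime=${endTime}`;

// In the browser it would be consumed roughly like:
// const chat = await (await fetch(buildChatRoute('1234567890', 10, 60))).json();
```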
that code snippet looks like it'd work out OK
on reading Russell's comment though, the 'hit Twitch API' approach might make a TON of sense. for some reason i forgot we can even do that =] bonus points for using the user's tokens to make the API calls. this seems like it'd be the best solution, maybe alongside some user-side caching + background requests so the user doesn't have to wait for API calls to complete when they are in the Editor
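The user-side caching + background requests idea could be sketched like this (all names hypothetical; `fetchChat` stands in for whatever call actually hits the Twitch API):

```javascript
// Minimal in-memory cache with background prefetch (illustrative sketch).
// `fetchChat(videoId)` is a placeholder for the real Twitch API call and
// should return a Promise resolving to the chat data.
const makeChatCache = (fetchChat) => {
  const cache = new Map();
  const load = (videoId) => {
    // store the Promise itself, so concurrent callers share one request
    if (!cache.has(videoId)) cache.set(videoId, fetchChat(videoId));
    return cache.get(videoId);
  };
  return {
    get: load,
    // kick off requests for upcoming clips without blocking the Editor UI
    prefetch: (videoIds) => videoIds.forEach(load),
  };
};
```

The Editor would `prefetch` the chat for the next few clips in the Review Queue, so by the time the user reaches a clip, `get` usually resolves instantly from the cached promise.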
other consideration we have to make is -- what should the front end design look like? the obvious answer would be to put the twitch chat to the right side of the currently playing video, but we currently have the Review Queue on the right side in a vertical list component.
this is a half-baked idea so far, but I'm wondering what the UX for the Editor would look like
Useful so we can have some feedback on what is being talked about in chat during clips.
See #43