Since Shabad OS already captures a timestamp each time a line is activated, tying those timestamps to the live stream it is being used with would give us timestamps for each line as it is read or sung. This would feed very nicely into NLP training in the future.

In Shabad OS > Settings > Overlay, add an input box that lets the user link their public live stream to their Shabad OS instance. On Shabad OS close/crash, pull that data into a server DB so that, once the time lag "offset" is figured out, snippets for each line can potentially be extracted programmatically.
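To make the "offset" idea concrete, here's a rough sketch of how the server side might map line-activation timestamps onto stream time once the lag is known. All names (`LineEvent`, `snippet_windows`, the field names) are hypothetical, not part of the Shabad OS API:

```python
from dataclasses import dataclass

@dataclass
class LineEvent:
    line_id: str
    activated_at: float  # epoch seconds, as captured by Shabad OS

def snippet_windows(events, stream_start, offset=0.0, tail=5.0):
    """Map line activations onto in-stream time.

    stream_start -- epoch time the live stream began
    offset       -- measured lag between Shabad OS and the stream
    Each snippet runs from one line's activation to the next;
    the last line gets a fixed `tail` duration. This is a sketch
    under assumed data shapes, not the actual implementation.
    """
    events = sorted(events, key=lambda e: e.activated_at)
    windows = []
    for i, e in enumerate(events):
        start = e.activated_at - stream_start + offset
        if i + 1 < len(events):
            end = events[i + 1].activated_at - stream_start + offset
        else:
            end = start + tail
        windows.append((e.line_id, start, end))
    return windows

# e.g. two lines activated at epoch 100s and 110s, stream started at 90s,
# measured lag of 2s:
windows = snippet_windows(
    [LineEvent("line-a", 100.0), LineEvent("line-b", 110.0)],
    stream_start=90.0,
    offset=2.0,
)
# -> [("line-a", 12.0, 22.0), ("line-b", 22.0, 27.0)]
```

The resulting (start, end) windows could then be handed to a clipping tool to cut audio snippets per line from the stream recording.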