How can we use LLMs to navigate large amounts of video data for qualitative analysis of the events that data captures?
User Story:
Bob the educator / tinkering designer / practitioner researcher runs a workshop with his class of 18 kids. Each pair of kids has a GoPro running, pointed at their projects / hands as they are built out.
Afterwards the videos are transcribed, with each transcript's timeline matching the timestamps in its video, and fed into a RAG pipeline. Bob can now ask questions like:
Tell me about the first occurrence of the Snow White story in the data. Who started talking about it, and how did it progress?
[so and so] discovered that their project could move faster with a plumber's strap attached to the wheel. How did this idea spread amongst the rest of the group? Give me links to several video clips that show these moments.
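The retrieval side of this could be sketched as follows. This is a minimal stand-in, not a real pipeline: transcript segments keep their video filename and timestamps, search is a toy word-overlap score (where a real system would use embeddings and an LLM to synthesize the answer), and clip links use the `#t=` media-fragment convention to jump back into the video. All names here (Segment, search, clip_link, the sample filenames) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    video: str    # source video file, e.g. "pair01_gopro.mp4" (hypothetical)
    start: float  # segment start, seconds into the video
    end: float    # segment end, seconds
    text: str     # transcribed speech for this time span

def search(segments: list[Segment], query: str, k: int = 3) -> list[Segment]:
    """Rank segments by word overlap with the query.

    Stand-in for vector similarity over embedded transcript chunks.
    """
    q = set(query.lower().split())
    scored = sorted(
        segments,
        key=lambda s: len(q & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def clip_link(seg: Segment) -> str:
    """Deep link into the video at the matching moment (media-fragment style)."""
    return f"{seg.video}#t={int(seg.start)},{int(seg.end)}"

# Example: two timestamped transcript segments from different camera feeds.
segments = [
    Segment("pair01_gopro.mp4", 12.0, 18.5, "and then Snow White comes in"),
    Segment("pair02_gopro.mp4", 40.0, 47.0, "the plumber's strap made the wheel spin faster"),
]
top = search(segments, "snow white story", k=1)[0]
print(clip_link(top))  # pair01_gopro.mp4#t=12,18
```

An LLM layer on top would take the top-k segments as context, answer Bob's question in prose, and cite the clip links so he can watch the original moments.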
Later, the interface may take inspiration from my friend Glen's project, as a way of presenting transcript data in sync with video content.
http://glench.com/EyesOnThePrize/