skillenza-com / MishMash-India-2020

MishMash is India's largest online diversity hackathon. Its focus is to give you, regardless of your background, gender, sexual orientation, ethnicity, age, skill set, or viewpoint, an opportunity to showcase your talent. The hackathon is live from 6:00 PM, 23rd March to 11:55 PM, 1st April 2020.

Manifestors - AudienceAI - EdTech #143




ℹ️ Project information

  1. Theme: EdTech

  2. Project Name: AudienceAI

  3. Short Project Description: AudienceAI is a tool for workshop speakers, teachers, and professors to analyze the emotions of students during their teaching sessions.

  4. Team Name: Manifestors

  5. Team Members: [Sparsh Srivastava](https://github.com/reallyinvincible), [Amogh Lele](https://github.com/ATechnoHazard), [Sriharsha Bommana](https://github.com/harshablast), [Riya Kanabar](https://github.com/Riya1608)

  6. Demo Link: https://drive.google.com/file/d/1XzUi9ljP8SRpXtcmL58VayjQgdnplCSk/view?usp=sharing

  7. Repository Link(s): https://github.com/reallyinvincible/AudienceAI

  8. Presentation Link: https://drive.google.com/file/d/1ao_MIFpaT6VUSOjpIYGWe7tWMPtFvCXG/view?usp=sharing

  9. Deep Tech - Problem Statement - 3: NA

  10. Deep Tech - Problem Statement - 2: NA

  11. Azure Services Used: Azure NC6 Promo VM

🔥 Your Pitch

AudienceAI is an app we developed to give workshop speakers, teachers, conference speakers, and other presenters a comprehensive understanding of their audience's emotions across an entire session. The application uses recent deep learning techniques to do this. The user points a camera at the audience for the duration of the session and then uploads the recording to our app. The app analyzes the video, detecting every face and extracting each face's emotions. It then presents a detailed visualization of these emotions against time, so the general reaction to the entire session can be analyzed.
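To make that emotion-against-time view concrete, here is a minimal matplotlib sketch; it assumes the backend returns one score per emotion per frame, and the function name, array shape, and emotion labels are illustrative rather than taken from the actual app.

```python
# Minimal sketch of the emotion-vs-time visualization (illustrative only:
# the function name, array shape, and emotion labels are assumptions,
# not the app's actual interface).
import numpy as np
import matplotlib.pyplot as plt

EMOTIONS = ["angry", "happy", "sad", "surprise", "neutral"]  # example labels

def plot_emotion_timeline(scores: np.ndarray, fps: float) -> None:
    """Plot per-emotion scores over the session.

    scores: shape (num_frames, num_emotions), one aggregated score
    per emotion per frame, as described in the pitch.
    """
    minutes = np.arange(scores.shape[0]) / fps / 60.0  # time axis in minutes
    for i, label in enumerate(EMOTIONS):
        plt.plot(minutes, scores[:, i], label=label)
    plt.xlabel("Session time (minutes)")
    plt.ylabel("Emotion score")
    plt.legend()
    plt.show()
```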

When the user uploads a video, our Go-based backend handles it and passes it to the Python microservice running the deep learning models. This backend is hosted on an Azure NC6 Promo VM. The video is passed through a RetinaFace model, currently the state-of-the-art face detector, which lets us extract every face in a hall regardless of the size of the venue. Each face is then passed into a mini-Xception neural network, which produces its emotion scores. The emotion scores are aggregated for each frame of the video before being passed back to the user's app, where the results can be visualized.
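A minimal sketch of that per-frame loop follows, assuming hypothetical callables `detect_faces` (standing in for the RetinaFace detector, returning face bounding boxes) and `predict_emotions` (standing in for the mini-Xception classifier, returning a vector of emotion scores); the real microservice's interfaces may differ.

```python
# Minimal sketch of the per-frame analysis loop. `detect_faces` and
# `predict_emotions` are hypothetical callables standing in for the
# RetinaFace and mini-Xception models described above.
import cv2
import numpy as np

def analyze_video(path, detect_faces, predict_emotions, num_emotions=5):
    """Return one aggregated emotion-score vector per frame."""
    cap = cv2.VideoCapture(path)
    per_frame_scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect all faces in the frame as (x1, y1, x2, y2) boxes.
        boxes = detect_faces(frame)
        if not boxes:
            per_frame_scores.append(np.zeros(num_emotions))
            continue
        # Classify each face crop, then average over the audience
        # (one way to aggregate the scores per frame).
        scores = [predict_emotions(frame[y1:y2, x1:x2])
                  for (x1, y1, x2, y2) in boxes]
        per_frame_scores.append(np.mean(scores, axis=0))
    cap.release()
    return np.array(per_frame_scores)
```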

This is useful in the EdTech space because it gives educators an effective way to understand how their sessions went. With this feedback mechanism in place, they can see from the general emotion scores which parts of a session were not interactive enough, which parts the students did not understand, and which parts were boring. They can then improve their sessions by reworking those parts, raising the overall quality of the sessions.

From a business perspective, this solution costs around 20 rupees per hour of video processed, so a viable business can be built around it if we sell it as a service, charging the user per minute of video uploaded and processed.
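For illustration, with hypothetical pricing: at roughly ₹20 of compute per hour of video, the processing cost comes to about ₹0.33 per minute, so charging, say, ₹1 per minute of uploaded video would leave around ₹0.67 per minute of gross margin, before accounting for VM idle time, storage, and bandwidth.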

🔦 Any other specific thing you want to highlight?

(Optional)
