✅ Checklist
Before you post the issue:
[x] You have followed the issue title format.
[x] You have provided all the information correctly.
[x] You have read and agree with the terms in the Code of Conduct.
ℹ️ Project information
Your Theme : Entertainment
Project Name: Kalpana
Short Project Description: A musical dear diary
Team Name: Gumsum Devs
Team Members: Aditi Jain @microaditi Dewansh Rawat @dewanshrawat15
Demo Link: Presentation Video | Prototype application
Presentation Link: https://docs.google.com/presentation/d/1hVOtFuped7tYSJcxTnnCdYNKIoMb3dJ_nrA_ZRiqLMI/edit?usp=sharing
Repository Link: https://github.com/team-axios/kalpana-info
🔥 Your Pitch
Our idea was to build something with AI at the intersection of music and art: generate music from the text a user enters while keeping a journal, accompanied by a slideshow of images themed to the detected emotions. Due to time constraints we could not work on the image portion, so as an MVP we focused on the music generation project.
We built an online journal application. Users can write down their thoughts just as they would in a normal diary. Once they save an entry, our microservice generates musical tunes based on the sentiment of the user's text.
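To make the idea concrete, here is a minimal, self-contained sketch of the text-to-music mapping. The word lists, scale choices, and function names are illustrative assumptions, not our production code: the service may use a proper sentiment model, but the core idea is the same, positive entries map to a major scale and negative ones to a minor scale.

```python
# Hypothetical sketch: score a journal entry with tiny positive/negative
# word lists, then pick a scale mood from the score. The word lists and
# deterministic note walk are assumptions for illustration only.

POSITIVE = {"happy", "great", "love", "joy", "excited", "good"}
NEGATIVE = {"sad", "tired", "angry", "lonely", "bad", "worried"}

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of a major scale
MINOR = [0, 2, 3, 5, 7, 8, 10]   # natural minor scale

def sentiment_score(text: str) -> int:
    """Positive minus negative word hits; >= 0 counts as an upbeat entry."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def melody_for(text: str, root: int = 60, length: int = 8) -> list[int]:
    """Map journal text to a list of MIDI note numbers (60 = middle C)."""
    scale = MAJOR if sentiment_score(text) >= 0 else MINOR
    # Walk the chosen scale deterministically to produce a short melody.
    return [root + scale[i % len(scale)] for i in range(length)]

print(melody_for("Today was a great and happy day"))
# → [60, 62, 64, 65, 67, 69, 71, 60]
```

The resulting note list can then be written out as a MIDI file (for example with a library such as `mido`) for playback in the app.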
🔦 Any other specific thing you want to highlight?
Since this is just a prototype of our MVP, the end-to-end flow is not yet integrated, so the user might need to close the application after writing and saving a journal entry. The audio may also take some time to start after pressing play: we got stuck figuring out MIDI-to-MP3 conversion, so we went with MIDI-to-WAV conversion instead.
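For anyone reproducing the audio step: one stdlib-only alternative we considered is rendering tones straight to WAV, skipping MIDI conversion entirely. The note numbers, duration, and output path below are illustrative assumptions, not the values used in the app.

```python
# Illustrative sketch: synthesize a short tone sequence directly to a WAV
# file using only Python's standard library (no MIDI tooling required).
import math
import struct
import wave

SAMPLE_RATE = 44100

def midi_to_freq(note: int) -> float:
    """Convert a MIDI note number to its frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def render_wav(notes: list[int], path: str, note_seconds: float = 0.3) -> None:
    frames = bytearray()
    for note in notes:
        freq = midi_to_freq(note)
        for i in range(int(SAMPLE_RATE * note_seconds)):
            # 16-bit mono sine wave at moderate volume.
            sample = int(12000 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)        # mono
        wf.setsampwidth(2)        # 16-bit samples
        wf.setframerate(SAMPLE_RATE)
        wf.writeframes(bytes(frames))

render_wav([60, 64, 67], "demo.wav")  # C major triad, one note at a time
```

In a fuller pipeline the MIDI file would more typically be rendered with a synthesizer such as TiMidity++ (`timidity input.mid -Ow -o output.wav`) and then transcoded to MP3 with `ffmpeg -i output.wav output.mp3`.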