
Simplifying audio content through transcription and summarization using Meta's Llama 2 and OpenAI's Whisper.

BriefComm

Keep the comm. brief.

Approach

BriefComm summarizes audio content in a few steps:

1. Accept raw text, audio, or video files as input.
2. Transcribe the audio with OpenAI's Whisper model, converting speech to text.
3. Optionally translate the transcribed text into other languages.
4. Feed the text to Meta's Llama 2, fine-tuned for summarization tasks, to generate a concise, coherent summary.
5. Display the summary on a webpage, so users can quickly access the key insights extracted from the original audio.

This streamlines the process of summarizing audio content, letting users pull out valuable insights for a range of applications and use cases.
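The sketch below illustrates this flow in Python, assuming the `openai-whisper` and `transformers` packages are installed. The checkpoint names, prompt format, and file paths are illustrative assumptions, not necessarily what BriefComm itself uses.

```python
import whisper                      # openai-whisper
from transformers import pipeline   # Hugging Face transformers


def transcribe(audio_path: str) -> str:
    """Transcribe an audio/video file to text with Whisper."""
    model = whisper.load_model("base")  # assumed size; larger models are more accurate
    # task="translate" would instead translate non-English speech into English
    result = model.transcribe(audio_path)
    return result["text"]


def summarize(text: str) -> str:
    """Summarize the transcript with a Llama 2 chat model (assumed checkpoint;
    the meta-llama repos are gated and require accepting Meta's license)."""
    generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
    prompt = f"Summarize the following transcript concisely:\n\n{text}\n\nSummary:"
    output = generator(prompt, max_new_tokens=200, do_sample=False)
    # generated_text echoes the prompt by default, so strip it off
    return output[0]["generated_text"][len(prompt):].strip()


if __name__ == "__main__":
    transcript = transcribe("meeting.mp3")  # hypothetical input file
    print(summarize(transcript))
```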

How to use BriefComm?

Visit the [Hugging Face Space]() or run it locally:

```bash
git clone https://github.com/iiakshat/BriefComm.git
cd BriefComm
pip install -r requirements.txt
```