MeetingMind is an AI-powered meeting assistant that helps you capture, analyze, and act on your meeting insights effortlessly. It is built with LangFlow, Next.js, and a fast Groq-based transcription service that analyzes your meetings and generates insights.
Check out this demo video to see MeetingMind in action:
https://github.com/user-attachments/assets/50a9de7a-b24f-4167-9526-4e112b1d24f8
⚠️ Important: Groq Whisper, which is used for transcription and analysis, currently supports files up to 25 MB only. The pipeline includes a compression step to reduce files to a manageable size, but if your audio file is still larger than 25 MB after that step, you will need to compress it further before uploading. This limitation may affect longer meetings or high-quality audio recordings.
To compress your audio files further, you can use an audio compression tool of your choice. Ensure the compressed audio maintains sufficient quality for accurate transcription while staying under the 25 MB limit.
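Because the 25 MB ceiling is a hard limit, the upload UI can reject oversized files up front instead of failing server-side. A minimal sketch (the helper name and its placement are assumptions, not part of the project's code):

```typescript
// Groq Whisper's upload ceiling, per the note above.
const MAX_UPLOAD_BYTES = 25 * 1024 * 1024;

// Hypothetical guard: returns true when a file is small enough to upload.
function isUploadable(fileSizeBytes: number): boolean {
  return fileSizeBytes <= MAX_UPLOAD_BYTES;
}
```

In the browser you would feed this `File.size` from the file input before starting the upload.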
Clone the repository:
git clone https://github.com/yourusername/meetingmind.git
cd meetingmind
Install dependencies:
npm install
# or
yarn install
Set up LangFlow: import the flow file `utils/langflow_flow/Meeting Mind.json` into your LangFlow server.
Create a `.env.local` file in the root directory and add the LangFlow URL:
LANGFLOW_FLOW_URL="http://127.0.0.1:7860/api/v1/run/5781a690-e689-4b26-b636-45da76a91915"
Replace the URL with your actual LangFlow server URL if different.
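Server code can then read this value from the environment at request time. A defensive lookup might look like the following sketch (the helper is hypothetical; the project may simply read `process.env.LANGFLOW_FLOW_URL` directly):

```typescript
// Hypothetical helper: fail fast if the LangFlow endpoint is not configured.
// Pass process.env (or any plain object) as the environment.
function getLangflowUrl(env: Record<string, string | undefined>): string {
  const url = env.LANGFLOW_FLOW_URL;
  if (!url) {
    throw new Error("LANGFLOW_FLOW_URL is not set; add it to .env.local");
  }
  return url;
}
```

Failing fast here gives a clear error message instead of an opaque fetch failure later in the transcription route.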
In the file `app/api/transcribe/route.ts`, locate the `payload` object and update the Groq component name to match your LangFlow component name. For example:
const payload = {
  output_type: 'text',
  input_type: 'text',
  tweaks: {
    'YourGroqComponentName': {
      audio_file: filePath
    },
  }
}
Replace 'YourGroqComponentName' with the actual name of your Groq component in LangFlow.
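Since the component name is the only part that varies per flow, it can be factored into a small helper that makes the substitution explicit. This is a sketch, not the project's actual code; `buildPayload` is a hypothetical name:

```typescript
// Hypothetical helper mirroring the payload shape shown above.
// componentName must match the Groq component's name in your LangFlow flow.
function buildPayload(componentName: string, filePath: string) {
  return {
    output_type: 'text',
    input_type: 'text',
    tweaks: {
      [componentName]: {
        audio_file: filePath,
      },
    },
  };
}
```

Using a computed key keeps the flow-specific name in one place, so changing the flow only requires changing one argument.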
Set up the database:
This project uses Prisma as an ORM. By default, it's configured to use SQLite as the database.
a. To use the local SQLite database:
Ensure your `.env` file contains:
DATABASE_URL="file:./dev.db"
npx prisma generate
npx prisma migrate dev --name init
b. To use a different database (e.g., PostgreSQL with Neon):
Update your `.env` file with the appropriate connection string:
DATABASE_URL="postgresql://username:password@host:port/database?schema=public"
Update the `provider` in `prisma/schema.prisma`:
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
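A quick sanity check before running migrations is to verify that the connection string at least parses as a PostgreSQL URL. The validator below is an assumption for illustration, not part of the project:

```typescript
// Hypothetical check: does DATABASE_URL look like a PostgreSQL connection string?
function isPostgresUrl(databaseUrl: string): boolean {
  try {
    return new URL(databaseUrl).protocol === "postgresql:";
  } catch {
    return false;
  }
}
```

This catches copy-paste mistakes (e.g. accidentally keeping the SQLite `file:` URL) before Prisma reports a less obvious error.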
Run the development server:
npm run dev
# or
yarn dev
Open http://localhost:3000 with your browser to see the result.
Project structure:

- `app/`: Contains the main application code
  - `components/`: Reusable React components
  - `api/`: API routes for server-side functionality
  - `dashboard/`: Dashboard page component
  - `page.tsx`: Home page component
- `public/`: Static assets
- `prisma/`: Database schema and migrations
- `utils/`: Utility functions and configurations
- `lib/`: Shared libraries and modules
- `.env.local` file
- `tailwind.config.ts`
- `tsconfig.json`

API routes:

- `/api/meetings`: Handles CRUD operations for meetings
- `/api/transcribe`: Handles audio file transcription and analysis

These screenshots provide a visual representation of the application's main interfaces. The landing page showcases the initial user experience, while the dashboard displays the core functionality where users can upload audio files and view the AI-processed meeting information.
Contributions are welcome! Please feel free to submit a Pull Request.
Please read our contributing guidelines before submitting a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.
If you encounter any problems or have questions, please open an issue on the GitHub repository.