AI toolkit for creating audiovisual experiences.
The goal is simple: give developers the tools to produce them.
https://github.com/user-attachments/assets/a0231e08-ec6e-4dea-b7de-44b4627ec185
https://github.com/user-attachments/assets/91469491-03fe-4548-951c-8e52f729d28a
A few months ago, I started working on TurboReel, an automation tool for generating short videos 100x faster. It was built with MoviePy and OpenAI. While MoviePy is great for basic tasks, I found it limiting for more complex ones. Plus, I relied too heavily on OpenAI, which made it tricky to keep improving the project.
We ended up using Revideo for the video processing tasks.
That made me realize that AI tools should be decoupled from the video engine (MoviePy, Revideo, Remotion, etc.) and the AI service (GPT, ElevenLabs, Dalle, Runway, Sora, etc.) you choose, so you can easily switch to the best option available.
Also, there is no hub for audiovisual generation knowledge. So this is my attempt to create that hub.
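A minimal sketch of that decoupling idea in Python. The class and function names here are illustrative, not Mediachain's actual API; the point is that the pipeline depends only on an interface, so swapping Pollinations for Dalle or Leonardo means swapping one class:

```python
from abc import ABC, abstractmethod


class ImageGenerator(ABC):
    """Provider-agnostic interface for image generation."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return a URL or path to the generated image."""


class PollinationsGenerator(ImageGenerator):
    """Hypothetical adapter; a real one would call the Pollinations API."""

    def generate(self, prompt: str) -> str:
        return f"https://image.pollinations.ai/prompt/{prompt}"


def make_thumbnail(generator: ImageGenerator, topic: str) -> str:
    # The pipeline only sees the interface, never a specific provider.
    return generator.generate(f"thumbnail for {topic}")
```

Switching providers then touches a single line: `make_thumbnail(LeonardoGenerator(), ...)` instead of `make_thumbnail(PollinationsGenerator(), ...)`.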
- Image generation: Pollinations, Dalle, Leonardo.
- Script generation: OpenAI.
- Video generation: Not yet.
- Audio generation: OpenAI, ElevenLabs.
- Video editing: MoviePy, Revideo.
Special shoutout to Pollinations for their free image generation API.
Mediachain is designed to be the LangChain for audiovisual creation, a centralized toolkit and knowledge hub for the field.
Image and video generation is just the start.
Emerging features like video embeddings (models that can understand the context of a video) are next, along with more powerful video generation models.
Our mission is to push boundaries and make audiovisual generation accessible for everyone at a fraction of the cost of current solutions.
Here’s what’s planned for Mediachain:
- [ ] Add the Revideo engine to the examples folder.
- [ ] Introduce new features like image animation, image editing, voice cloning, and AI avatars.
- [ ] Support more video generation services and models.
- [ ] Create useful templates using Mediachain.
- [ ] Publish the package on PyPI.
- [ ] Write detailed documentation.
- [ ] Develop a beginner-friendly guide to audiovisual generation.
The project is organized into the following folders:
- `core`: Core functionality of MediaChain. See the core README for more information.
- `examples`: Examples showing how to use MediaChain with tools like MoviePy, Revideo, and Remotion. See the examples README for more information.
To test MediaChain, start with the Reddit Stories example. This template creates a video from Reddit posts.
Make sure you've got Python: grab Python 3.10.x from python.org.

Install FFmpeg and ImageMagick:

macOS:

```shell
brew install ffmpeg imagemagick
```

Ubuntu/Debian:

```shell
sudo apt-get install ffmpeg imagemagick
```
Get the required Python packages:

```shell
pip install -r requirements.txt
```
Add your OpenAI API key to the `.env` file as `OPENAI_API_KEY`.
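For reference, the `.env` file is just a key-value text file; the value below is a placeholder, not a real key:

```shell
# .env
OPENAI_API_KEY=sk-your-key-here
```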
Edit the example variables in `examples/moviepy_engine/reddit_stories/main_moviepy.py`:

- Set the `prompt` variable to whatever you want.
- Set the `video_url` variable to the video you want to use as background.

Run the example:

```shell
python3 examples/moviepy_engine/reddit_stories/main_moviepy.py
```
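As an illustration, the two variables in `main_moviepy.py` might look like this (the values below are placeholders, not defaults shipped with the repo):

```python
# In examples/moviepy_engine/reddit_stories/main_moviepy.py
prompt = "Find me a dramatic Reddit story about a wedding disaster"  # what the video is about
video_url = "https://example.com/background_gameplay.mp4"  # background footage
```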
Feel free to contribute, ask questions, or share your ideas!
Discord: https://discord.gg/bby6DYsCPu
Made with ❤️ by @TacosyHorchata