Given some prompt, the AI will end up creating a video "suitable" for YouTube/Rumble
- [x] Research Agent
- [x] Script Writer Agent
- [x] Voiceover Artist Agent
- [x] Storyboard Artist Agent
- [x] Music Composer Agent
- [x] Sound Engineer Agent
- [x] Producer Agent
- [ ] Director Agent
- [/] Distributor Agent
Create a `.env` file and put `OPENAI_API_KEY` and `GPT4_TOKEN` variables in it; see the example below.
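A minimal `.env` might look like this (the values shown are placeholders, not real keys):

```
OPENAI_API_KEY=your-openai-api-key
GPT4_TOKEN=your-gpt4-token
```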
Make sure you have Python and virtualenv installed.
`make setup`

`make proxy` to run the chatgpt-proxy
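Putting the setup steps together, a first run might look roughly like this (the comments are assumptions about what each target does):

```sh
make setup   # presumably sets up the virtualenv and installs dependencies
make proxy   # runs the chatgpt-proxy used by the agents
```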
Then you can run any of the following:
`make notebook` will launch Jupyter.

`make ui` runs the webui to generate a video.

`make video` runs a single input from the command line; pass arguments as make args (environment variables), e.g. `ARGS=--prompt "prompt" --actors zane --director mvp_director --production-config default_config --program matrix --output some/output` (see the sketch below).
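A full single-video invocation might look like the sketch below; the exact quoting of `ARGS` and the prompt/output values are assumptions:

```sh
make video ARGS='--prompt "A short history of the printing press" --actors zane --director mvp_director --production-config default_config --program matrix --output output/printing_press'
```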
`make auto` will look for `inputs.txt` with `|`-separated fields (`prompt | output_dir | actor`) and can be used to run multiple back-to-back video creations; an example is sketched below.
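A hypothetical `inputs.txt` might look like this (the prompts and output directories are placeholders; `zane` is the only actor mentioned above):

```
Why is the sky blue? | output/sky_blue | zane
A short history of the printing press | output/printing_press | zane
```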
`cast` includes the actors/directors/researchers etc., and also includes the configs.

`programs` includes the, well, programs aka shows.

The config currently includes the output video dimensions (also the img2img dimensions), as well as the Stable Diffusion initial dimensions.
A Director is a collection of a researcher and the other artists involved in the production. Each artist can have multiple YAMLs that can be chosen as part of a director.
Specify a list of actors to be part of the show. Currently only shows with one actor are supported; this is still in heavy development for v0.69.
Has a name and a description; not used anywhere currently.
Each of the agents should have a notebook associated with it showing how it's created.
This agent will research the topic obviously.
This agent will write the script based on the research done by Research Agent.
This agent will rewrite the script lines in their own words according to their character bio and create the audio voice lines.
This agent will rewrite the scene descriptions as text-to-image prompts and create X images per scene.
This agent will rewrite the scene descriptions into prompts for text-to-music models.
This agent assembles the audio components into a single wav file.
This agent assembles the audio and visual components into the final video file.
This agent will provide the title, description, and tags for the video.
This agent will create video clips to be used instead of just images.
AICP can now be run in the cloud via Runpod.io.

`make bin/runpodctl`

`runpodctl` configuration: `./bin/runpodctl config --apiKey=YOUR_API_KEY`
/output
Set `RUNPOD_TEMPLATE=your-template-id`, then run `make runpod-create`; see the sketch below.
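Putting the Runpod steps together, a session might look roughly like this (passing `RUNPOD_TEMPLATE` on the command line as an environment variable is an assumption based on the variable shown above):

```sh
make bin/runpodctl                            # produces the ./bin/runpodctl binary
./bin/runpodctl config --apiKey=YOUR_API_KEY  # configure your Runpod API key
RUNPOD_TEMPLATE=your-template-id make runpod-create   # create the pod
```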
The Commentator will go through comments that we have not yet replied to and generate a reply; if confirmed by the human, the Commentator will post the reply.
You will need `client_secrets.json`, and a model placed in the `models/` directory with the name `llama2_7b_chat_uncensored.bin`.
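The expected layout is roughly as follows (placing `client_secrets.json` at the repository root is an assumption; the `models/` path and filename come from above):

```
client_secrets.json
models/
└── llama2_7b_chat_uncensored.bin
```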
Then run `make commentator`.
(TODO: developer guide)
This project uses the `black` code formatter. You can check your local environment by running `make check-format`, and you can autoformat your code with `make reformat`. See `pyproject.toml` for configuration.
Note: This code format is enforced for pull requests.
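For example, a typical pre-PR check might be:

```sh
make check-format   # verify the code is formatted with black
make reformat       # autoformat it if the check fails
```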