Automated README file generator, powered by large language model APIs
Objective
Readme-ai is a developer tool that auto-generates README.md files using a combination of data extraction and generative AI. Simply provide a repository URL or a local path to your codebase, and a well-structured, detailed README file will be generated for you.
Motivation
Readme-ai streamlines documentation creation and maintenance, enhancing developer productivity. The project aims to enable developers of all skill levels, across all domains, to better understand, use, and contribute to open-source software.
> [!IMPORTANT]
> Readme-ai is currently under development with an opinionated configuration and setup. It is vital to review all generated text from the LLM API to ensure it accurately represents your project.
Standard CLI Usage:
Offline Mode Demonstration:
> [!TIP]
> Offline mode is useful for generating a boilerplate README at no cost. View the offline README.md example here!
Readme-ai uses a balanced approach to building README files, combining data extraction and generative AI to create comprehensive and informative documentation.
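As a rough illustration of this two-stage approach, consider the sketch below. It is hypothetical: the function names and structure are invented for illustration and are not readme-ai's actual internals.

```python
# Hypothetical sketch of an extract-then-generate flow; names and structure
# are invented for illustration, not taken from readme-ai's source.
from pathlib import Path


def extract_metadata(repo_path: str) -> dict:
    """Stage 1: collect deterministic facts from the codebase."""
    files = [p for p in Path(repo_path).rglob("*") if p.is_file()]
    return {
        "file_count": len(files),
        "languages": sorted({p.suffix.lstrip(".") for p in files if p.suffix}),
    }


def generate_readme(metadata: dict, llm=None) -> str:
    """Stage 2: turn the extracted facts into prose.

    With no LLM callable supplied, fall back to offline boilerplate text.
    """
    summary = (f"{metadata['file_count']} files; "
               f"languages: {', '.join(metadata['languages']) or 'unknown'}")
    if llm is None:  # offline mode: no API calls, template text only
        return f"# Project\n\nAuto-generated overview: {summary}\n"
    return llm(f"Write a README overview for a repository with {summary}.")
```

The point of the split is that the extraction stage is cheap and deterministic, while only the prose-writing stage needs (optional) LLM access.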
Over a dozen CLI options are available to customize the README generation process:
A few examples of the CLI options in action:
- Default output (no options provided to the CLI)
- `--alignment left --badge-style flat-square --image cloud`
- `--alignment left --badge-style flat --image gradient`
- `--badge-style flat --image custom`
- `--badge-style skills-light --image grey`
- `--badge-style flat-square`
- `--badge-style flat --image black`
See the Configuration section for a complete list of CLI options.
- Overview
- Features Table
- Repository Structure
- File Summaries
- Getting Started: Install, Usage, and Test guides are supported for many languages.
- Contributing Guide
- Additional Sections: Project Roadmap, Contributing Guidelines, License, and Acknowledgements are included by default.
- README Template for ML & Data
| Output File | Input Repository | Input Contents |
|---|---|---|
| readme-python.md | readme-ai | Python |
| readme-google-gemini.md | readme-ai | Python |
| readme-typescript.md | chatgpt-app-react-ts | TypeScript, React |
| readme-postgres.md | postgres-proxy-server | Postgres, Duckdb |
| readme-kotlin.md | file.io-android-client | Kotlin, Android |
| readme-streamlit.md | readme-ai-streamlit | Python, Streamlit |
| readme-rust-c.md | rust-c-app | C, Rust |
| readme-go.md | go-docker-app | Go |
| readme-java.md | java-minimal-todo | Java |
| readme-fastapi-redis.md | async-ml-inference | FastAPI, Redis |
| readme-mlops.md | mlops-course | Python, Jupyter |
| readme-local.md | Local Directory | Flink, Python |
System Requirements:

- Package manager or container runtime: pip, pipx, or docker
- LLM API service: OpenAI, Ollama, Google Gemini, or Offline Mode
Repository URL or Local Path:
Make sure to have a repository URL or local directory path ready for the CLI.
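One way to pre-check that input before handing it to the CLI is sketched below. This is purely illustrative: the `classify_repository` helper is invented here, not part of readme-ai.

```python
# Illustrative helper (not part of readme-ai): distinguish a remote
# repository URL from a local directory path before invoking the CLI.
from pathlib import Path
from urllib.parse import urlparse


def classify_repository(target: str) -> str:
    """Return "remote" for an http(s) URL, "local" for an existing directory."""
    parsed = urlparse(target)
    if parsed.scheme in ("http", "https") and parsed.netloc:
        return "remote"
    if Path(target).is_dir():
        return "local"
    raise ValueError(f"Not a valid repository URL or directory: {target!r}")


print(classify_repository("https://github.com/eli64s/readme-ai"))  # remote
```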
Choosing an LLM Service: options include OpenAI, Ollama, Google Gemini, and Offline Mode; setup for each is covered in the Environment Variables section below.
pip

```sh
pip install readmeai
```
> [!TIP]
> Use pipx to install and run Python command-line applications without causing dependency conflicts with other packages!
docker

```sh
docker pull zeroxeli/readme-ai:latest
```
conda

```sh
conda install -c conda-forge readmeai
```
source
Environment Variables
OpenAI
Set your OpenAI API key as an environment variable.
```sh
# Using Linux or macOS
export OPENAI_API_KEY=<your_api_key>

# Using Windows
set OPENAI_API_KEY=<your_api_key>
```
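A generic sanity check, not part of readme-ai, can confirm the key is actually visible in the environment the CLI will inherit:

```python
# Generic pattern (not readme-ai code): fail fast with a clear message if a
# required environment variable is missing or empty.
import os


def require_env(name: str) -> str:
    """Return the variable's value, or exit with an actionable message."""
    value = os.environ.get(name)
    if not value:
        raise SystemExit(f"{name} is not set; export it before running readmeai.")
    return value
```

In practice you would call `require_env("OPENAI_API_KEY")` before launching the CLI.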
Ollama
Set Ollama local host as an environment variable.
```sh
export OLLAMA_HOST=127.0.0.1
ollama pull mistral:latest  # llama2, etc.
ollama serve                # run if not using the Ollama desktop app
```
For more details, check out the Ollama repository.
Google Gemini
Set your Google API key as an environment variable.

```sh
export GOOGLE_API_KEY=<your_api_key>
```
Run the CLI
pip
```sh
# Using OpenAI API
readmeai --repository https://github.com/eli64s/readme-ai --api openai

# Using Ollama local model
readmeai --repository https://github.com/eli64s/readme-ai --api ollama --model mistral
```
docker
```sh
docker run -it \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -v "$(pwd)":/app zeroxeli/readme-ai:latest \
  -r https://github.com/eli64s/readme-ai
```
streamlit
Try it directly in your browser on Streamlit; no installation required! For more details, check out the readme-ai-streamlit repository.
source
pytest
```sh
make pytest
```
nox
```sh
nox -f noxfile.py
```
> [!TIP]
> Use nox to test the application against multiple Python environments and dependency versions!
Customize the README file using the CLI options below.
| Option | Type | Description | Default Value |
|---|---|---|---|
| `--alignment`, `-a` | String | Align the text in the README.md file's header. | `center` |
| `--api` | String | LLM API service to use for text generation. | `offline` |
| `--badge-color` | String | Badge color name or hex code. | `0080ff` |
| `--badge-style` | String | Badge icon style type. | see below |
| `--base-url` | String | Base URL for the repository. | `v1/chat/completions` |
| `--context-window` | Integer | Maximum context window of the LLM API. | `3999` |
| `--emojis`, `-e` | Boolean | Adds emojis to the README.md file's header sections. | `False` |
| `--image`, `-i` | String | Project logo image displayed in the README file header. | `blue` |
| 🚧 `--language` | String | Language for generating the README.md file. | `en` |
| `--model`, `-m` | String | LLM API to use for text generation. | `gpt-3.5-turbo` |
| `--output`, `-o` | String | Output file name for the README file. | `readme-ai.md` |
| `--rate-limit` | Integer | Maximum number of API requests per minute. | `5` |
| `--repository`, `-r` | String | Repository URL or local directory path. | `None` |
| `--temperature`, `-t` | Float | Sets the creativity level for content generation. | `0.9` |
| 🚧 `--template` | String | README template style. | `default` |
| `--top-p` | Float | Sets the probability of the top-p sampling method. | `0.9` |
| `--tree-depth` | Integer | Maximum depth of the directory tree structure. | `2` |
| `--help` | | Displays help information about the command and its options. | |
🚧 feature under development
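To show how these flags compose, the helper below is a hypothetical convenience (`build_command` is invented, not part of readme-ai); the option names and defaults are taken from the table above.

```python
# Hypothetical helper (not part of readme-ai): assemble a readmeai command
# line from the documented CLI options, using the table's default values.
def build_command(repository: str, api: str = "offline",
                  model: str = "gpt-3.5-turbo", badge_style: str = "default",
                  image: str = "blue", alignment: str = "center",
                  output: str = "readme-ai.md") -> list[str]:
    return [
        "readmeai",
        "--repository", repository,
        "--api", api,
        "--model", model,
        "--badge-style", badge_style,
        "--image", image,
        "--alignment", alignment,
        "--output", output,
    ]


cmd = build_command("https://github.com/eli64s/readme-ai",
                    api="openai", badge_style="flat-square", image="cloud")
print(" ".join(cmd))
```

A list of arguments like this can be passed straight to `subprocess.run(cmd)` without shell quoting concerns.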
The `--badge-style` option lets you select the style of the default badge set.
Available styles: `default`, `flat`, `flat-square`, `for-the-badge`, `plastic`, `skills`, `skills-light`, `social`.
When the `--badge-style` option is provided, readme-ai does two things:

1. Formats the default badge set to match the selected style.
2. Generates an additional badge set displaying the software and tools used in the project.
```sh
readmeai --badge-style flat-square --repository https://github.com/eli64s/readme-ai
```
{... project logo ...}
{... project name ...}
{...project slogan...}
Developed with the software and tools below.
{... end of header ...}
Select a project logo using the `--image` option.
Default images: `blue`, `gradient`, `black`, `cloud`, `purple`, `grey`.
For custom images, see the following options:

- `--image custom`: invokes a prompt to upload a local image file path or URL.
- `--image llm`: generates a project logo using an LLM API (OpenAI only).

Roadmap

- `--api`: integrate a singular interface for all LLM APIs (OpenAI, Ollama, Gemini, etc.).
- `--audit`: review existing README files and suggest improvements.
- `--template`: select a README template style (e.g. ai, data, web).
- `--language`: generate README files in any language (e.g. zh-CN, ES, FR, JA, KO, RU).

To grow the project, we need your help! See the links below to get started.