
Mify LLM Editor

LLM code generation paired with templates. Create a backend service and update its code via an LLM. This project is very much alpha.

About the project

Currently, in most cases when I use LLMs for coding, they can only help with the implementation of individual functions and components. There are multiple tools and startups working on adding a full code repository to the model's context so that it can make suggestions based on it, but complex multi-file changes are still clunky for models.

I wanted to see if it would be simpler and more reliable for the model to work on a repository with a known and more rigid structure, and this is where Mify comes in. The Mify CLI can generate backend service templates with a predefined directory structure based on an OpenAPI schema. Assuming that all services in the repository are generated by Mify, we can pass the project structure in the prompt and help the model with the code generation.
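
For reference, the underlying Mify workflow looks roughly like this (a sketch based on Mify's getting-started flow; check mify --help for the exact commands in your version):

$ mify init my-project
$ cd my-project
$ mify add service echo-service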

Features

- Create Mify backend services from the chat
- Ask the LLM to update a service's OpenAPI schema and handler code
- Edit code in other projects, e.g. Tailwind-based pages

Demo

https://github.com/user-attachments/assets/fa96c7e1-cc33-4e1a-bb10-68c87d9f6173

In this demo, I'm asking the LLM to create an Echo service.

And the service is working fine after these updates!

Installation

Prerequisites

You will need Python 3 with virtualenv, Node.js with npm, the Mify CLI, and an Anthropic API key.

Python setup:

$ virtualenv venv
$ . ./venv/bin/activate
$ pip install -r llm-worker/requirements.txt

Prepare the environment for the worker:

$ export DATABASE_URL="sqlite:///$PWD/storage.db"
$ export ANTHROPIC_API_KEY="sk-ant-api<Your key>"

Prepare the database:

$ cd llm-worker && alembic upgrade head && cd -
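
If the migration succeeded, the SQLite file should now exist at the path set in DATABASE_URL above (assuming the Alembic config reads that variable):

$ ls -l storage.db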

Run llm-worker (Python can't import a module path containing a hyphen, so run uvicorn from inside the directory):

$ cd llm-worker && uvicorn main:app --port 3001
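
If the worker is a FastAPI app (the uvicorn setup suggests it is), you can sanity-check that it's up by requesting the auto-generated docs page:

$ curl -s http://localhost:3001/docs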

Run the webapp:

$ cd webapp
$ npm install
$ npm run dev

Then open the local URL that the dev server prints in your browser.

Usage

Working with Mify services

After opening the chat in your browser, try asking it to create a service. The LLM should ask for the project location and service name; once you provide them, it will run mify to create the service. It can take a bit of back and forth with the LLM, but most of the time, if you just tell it to continue, it will finish running the commands. Then ask it to update the OpenAPI schema for the service; it should be able to locate the schema and the corresponding handler and update the code there.

To run the generated service, go to /py-services and run:

$ . ./venv/bin/activate
$ python -m <service-name>

And hopefully, it will work. If not, you can try asking the LLM to fix the issue.
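
As a quick smoke test, you can curl the running service (the endpoint, port, and payload here are hypothetical; check the service's generated OpenAPI schema for the real ones):

$ curl -X POST "http://localhost:<port>/echo" -H 'Content-Type: application/json' -d '{"message": "hello"}'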

Editing code

Claude AI is really good at editing Tailwind-based pages, so you can also use the chat to work on any other project. The web app for this agent was generated with it!

Contributing

We welcome contributions to this project! Check out the GitHub issues to get started.

License

This project is licensed under the MIT License.

Contact

For questions and suggestions, you can ping me via email or Twitter. If you run into any issues, just post them on the GitHub repository.