Human-centric & Coherent Whole Program Synthesis, aka your own personal junior developer

Build the thing that builds the thing! A smol dev for every dev in every situation.
This is a "junior developer" agent (aka smol dev).
Instead of making and maintaining specific, rigid, one-shot starters like create-react-app or create-nextjs-app, this basically is (or helps you make) create-anything-app, where you develop your scaffolding prompt in a tight loop with your smol dev.
After the successful initial v0 launch, smol developer was rewritten to be even smol-ler, and importable from a library!
```sh
# install
git clone https://github.com/smol-ai/developer.git
cd developer
poetry install # install dependencies. pip install poetry if you need it

# run
python main.py "a HTML/JS/CSS Tic Tac Toe Game" # defaults to gpt-4-0613
# python main.py "a HTML/JS/CSS Tic Tac Toe Game" --model=gpt-3.5-turbo-0613

# other CLI flags
python main.py --prompt prompt.md # for longer prompts, move them into a markdown file
python main.py --prompt prompt.md --debug True # for debugging
```
In this way you can use your clone of this repo itself to prototype/develop your app.
This is the new thing in smol developer v1! Add smol developer to your own projects!

```sh
pip install smol_dev
```
Here you can basically treat the contents of main.py as our "documentation" for how to use these functions and prompts in your own app:
```python
from smol_dev.prompts import plan, specify_file_paths, generate_code_sync

prompt = "a HTML/JS/CSS Tic Tac Toe Game"

shared_deps = plan(prompt)  # returns a long string representing the coding plan
# do something with the shared_deps plan if you wish, for example ask for user confirmation/edits and iterate in a loop

file_paths = specify_file_paths(prompt, shared_deps)  # returns an array of strings representing the filenames it needs to write based on your prompt and shared_deps. Relies on OpenAI's new Function Calling API to guarantee JSON.
# do something with the filepaths if you wish, for example display a plan

# loop through file_paths array and generate code for each file
for file_path in file_paths:
    code = generate_code_sync(prompt, shared_deps, file_path)  # generates the source code of each file
    # do something with the source code of the file, e.g. write to disk or display in UI
    # there is also an async `generate_code()` version of this
```
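`generate_code_sync` returns each file's source as a string and leaves persistence to you. A minimal sketch of the write-to-disk step (the `write_generated_file` helper below is hypothetical, not part of `smol_dev`):

```python
from pathlib import Path

def write_generated_file(base_dir: str, file_path: str, code: str) -> Path:
    """Write one generated file under base_dir, creating parent dirs as needed."""
    target = Path(base_dir) / file_path
    target.parent.mkdir(parents=True, exist_ok=True)  # handles nested paths like "src/app.js"
    target.write_text(code)
    return target

# e.g. inside the loop above:
# write_generated_file("generated", file_path, code)
```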
To start the server, run:

```sh
poetry run api
```

or

```sh
python smol_dev/api.py
```

and then you can call the API using the following commands.
To create a task, run:

```sh
curl --request POST \
  --url http://localhost:8000/agent/tasks \
  --header 'Content-Type: application/json' \
  --data '{
    "input": "Write simple script in Python. It should write '\''Hello world!'\'' to hi.txt"
  }'
```
You will get a response like this:

```json
{"input":"Write simple script in Python. It should write 'Hello world!' to hi.txt","task_id":"d2c4e543-ae08-4a97-9ac5-5f9a4459cb19","artifacts":[]}
```
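The `task_id` field in that response drives every follow-up call. A quick sketch of pulling it out in Python (the response string here is just the example above pasted in; in practice it would come from the HTTP response body):

```python
import json

# the JSON body returned by POST /agent/tasks (example from above)
response_text = (
    '{"input":"Write simple script in Python. It should write '
    "'Hello world!' to hi.txt\","
    '"task_id":"d2c4e543-ae08-4a97-9ac5-5f9a4459cb19","artifacts":[]}'
)

task = json.loads(response_text)
task_id = task["task_id"]
steps_url = f"http://localhost:8000/agent/tasks/{task_id}/steps"
```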
Then to execute one step of the task, copy the task_id you got from the previous request and run:

```sh
curl --request POST \
  --url http://localhost:8000/agent/tasks/<task-id>/steps
```
or you can use the Python client library:
```python
from agent_protocol_client import AgentApi, ApiClient, TaskRequestBody

...

prompt = "Write simple script in Python. It should write 'Hello world!' to hi.txt"

async with ApiClient() as api_client:
    # Create an instance of the API class
    api_instance = AgentApi(api_client)
    task_request_body = TaskRequestBody(input=prompt)
    task = await api_instance.create_agent_task(
        task_request_body=task_request_body
    )
    task_id = task.task_id
    response = await api_instance.execute_agent_task_step(task_id=task_id)

...
```
- 6 minute video demo (sorry for the sped-up audio, we were optimizing for Twitter; bad call)
- smol-plugin - prompt to ChatGPT plugin (tweet, fork)
- Lessons from Creating a VSCode Extension with GPT-4 (also on HN)
- 7 min video: Smol AI Developer - Build ENTIRE Codebases With A Single Prompt produces a full working OpenAI CLI Python app from a prompt
- 12 min video: SMOL AI - Develop Large Scale Apps with AGI in one click scaffolds a surprisingly complex React/Node/MongoDB full stack app in 40 minutes and $9
I'm actively seeking more examples, please PR yours!
Sorry for the lack of examples, I know that is frustrating, but I wasn't ready for so many of you lol.
Please send in alternative implementations, and deploy strategies on alternative stacks!
Please subscribe to https://latent.space/ for a fuller writeup and insights and reflections
- `variable_names` (or entire code-fenced code samples)
- `curl` input and output
- `cat`-ing the whole codebase with your error message and getting specific fix suggestions - particularly delightful!
- `shared_dependencies.md`, and then insisting on using that in generating each file. This basically means GPT is able to talk to itself...
- `shared_dependencies.md` is sometimes not comprehensive in understanding what are hard dependencies between files. So we just solved it by specifying a specific name in the prompt. Felt dirty at first, but it works, and really it's just clear unambiguous communication at the end of the day.
- See `prompt.md` for SOTA smol-dev prompting.
We were working on a Chrome Extension, which requires images to be generated, so we added some use-case-specific code in there to skip destroying/regenerating them, which we haven't decided how to generalize.
We don't have access to GPT-4-32k, but if we did, we'd explore dumping entire API/SDK documentation into context.
The feedback loop is very slow right now (`time` says about 2-4 mins to generate a program with GPT-4, even with parallelization due to Modal, occasionally spiking higher), but it's a safe bet that it will go down over time (see also "future directions" below).
Things to try / we'd accept open issue discussions and PRs:

- `popup.html.md` and `content_script.js.md` and so on
- `prompt.md` for existing codebases - write a script to read in a codebase and write a descriptive, bullet-pointed prompt that generates it
- smol pm, but it's not very good yet - would love some focused polish/effort until we have a quine smol developer that can generate itself lmao
- `modal run anthropic.py --prompt prompt.md --outputdir=anthropic` to try it
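The "prompt.md for existing codebases" idea above could start as small as this sketch (the function name and output format are made up for illustration, not an existing script in this repo):

```python
from pathlib import Path

def codebase_to_prompt(root: str) -> str:
    """Walk a codebase and emit a bullet-pointed prompt describing each file."""
    lines = ["Recreate the following codebase:"]
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            rel = path.relative_to(root)
            first = path.read_text(errors="ignore").splitlines()[:1]
            summary = first[0][:60] if first else "(empty)"
            lines.append(f"- {rel}: starts with `{summary}`")
    return "\n".join(lines)
```

A real version would want smarter per-file summaries (probably from the model itself), but even a file listing like this gives smol dev a usable regeneration target.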