PicoCreator / smol-dev-js

Smol personal AI, for smol incremental tasks in a JS project
MIT License

Smol-Dev-JS

Your own personal AI dev, whose development you can direct in a collaborative back-and-forth experience. Think of it as pair programming, via the command line.

Written entirely in JS (no Python), it is able to make both "smol-er" changes (you can ask it to change a few lines, or a file or two) and "big-ger" changes (i.e. generate a full project from specs for you); the choice is yours.

This switches your role from a developer to a "senior developer", where you instruct your junior developers on what to do (and hope they get it right).

For best results: treat the AI like a junior developer who joined the project on day 0 and is still learning the ropes, with you as the senior developer teaching it. Make small, incremental changes, and you will get better results as you prompt your way through the loop.

Not no code, not low code, but some third thing.

Allowing you to focus on sword fighting the big picture, while the AI does the button mashing coding for you.

Additionally, because the changes are small, incremental, and run in a tight loop, you are in full control. You do not need to worry about it going out of control like some autonomous agents: you can review each commit, and revert it or make changes yourself if needed.

Quoting the original smol-dev: it does not end the world or overpromise AGI. Instead of making and maintaining specific, rigid, one-shot starters like create-react-app or create-nextjs-app, this is basically create-anything-app, where you develop your scaffolding prompt in a tight loop with your smol dev.

Commands & Setup

Update to node 18

sudo npm install -g n
sudo n 18

Install via NPM

npm install -g smol-dev-js

smol-dev-js setup

Reminder: Do check on your OpenAI dashboard that you have GPT-4 access.

Either start a new JS project, or go to an existing Node.js project, run the setup command, and follow the process:

cd my-js-project
smol-dev-js setup


This will ask for your API keys, and set up the .smol-dev-js folder which the tool uses internally.

It is highly recommended to use Anthropic Claude if you have an API key, as it is currently much faster and more reliable than OpenAI for this use case. For OpenAI, this uses GPT-4 (8k) for the heavy lifting, while downgrading to GPT-3.5 for some smol-er tasks.

smol-dev-js prompt

Run the following command to start the smol-dev-js process in your new or existing JS project:

cd my-js-project
smol-dev-js prompt


Once everything is generated, review the code and begin a loop where you are engineering with prompts, rather than prompt engineering.

Found an error? Paste it in and let the AI suggest a fix for you, or tell it what to do to fix it.

Loop until happiness is attained, or until you find the AI being unhelpful and take back control.

smol-dev-js spec2code

Got all your project specification files ready? Run spec2code, and let the smol-dev AI generate the code for you.

The general format of the spec folder should be

You will need the spec folder to be configured

smol-dev-js code2spec

Too lazy to write specs for an existing codebase from scratch? Let the smol-dev AI generate a draft for you.

You will need the spec folder to be configured

Want to customize the settings further?

After generating the config, you can look into the .smol-dev-js/config folder for various settings, including

Example Usage

Spec prompt to code

This is from the original smol-dev, with a single README.md as the spec.

Notes

Innovation and insights

This list is derived from the original smol-dev project.

Caveat

Unless you are among the lucky few who have gotten access to Anthropic's AI, GPT-4 can be very, very slow, making the feedback loop run into several minutes (this will improve over time as AI capacity scales up worldwide).

Also, for larger projects and scripts, due to the way things are currently set up, it is possible to hit the 8k context limit and have scripts get cut off.
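As a rough mental model of that limit (this is an illustrative heuristic, not the project's actual accounting; real counts depend on the tokenizer), you can estimate tokens as characters divided by 4 and check whether a file plus prompt leaves room for the model's reply inside an 8k context:

```javascript
// Rough heuristic, NOT smol-dev-js's actual logic: ~4 characters per token.
function roughTokenCount(text) {
  return Math.ceil(text.length / 4);
}

// Does fileText + promptText fit in the context window, leaving some
// budget for the generated reply? All parameter defaults are illustrative.
function fitsInContext(fileText, promptText, contextLimit = 8192, replyBudget = 2048) {
  const used = roughTokenCount(fileText) + roughTokenCount(promptText);
  return used + replyBudget <= contextLimit;
}

// A ~10 KB file fits comfortably; a ~40 KB file will get cut off.
console.log(fitsInContext('x'.repeat(10000), 'refactor this file'));
console.log(fitsInContext('x'.repeat(40000), 'refactor this file'));
```

This is why splitting large files into smaller modules tends to make the tool behave better.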

Want this to work with a local model?

Want to have this working locally? Without an internet connection?

Reach out to me, and help me make it happen! (GPUs, funding, data, etc.)

PS: if you email me the files, it is taken that you have waived copyright for them - picocreator+ai-data (at) gmail.com

Future directions?

Things to do

Things that are done

Architecture / process flow

The bulk of the main run logic is within src/ai/seq/generateFilesFromPrompts.js, which is called in a larger loop from src/cli/command/prompt.js. The following is the sequence of events:

For spec2code, it follows the same process as above, with the prompt "regenerate all the src files from the provided spec" and without the main loop.
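As a sketch of one iteration of that loop (all function names here are illustrative, not the project's actual API; a mock stands in for the real model call): ask the model which files the prompt touches, regenerate each file, write it to disk, then return to the user for the next prompt.

```javascript
// Hypothetical sketch of the prompt loop; planFiles / generateFile /
// runPromptIteration are illustrative names, not smol-dev-js's real API.

// Ask the model which files the change touches
// (the real tool also consults the spec and config).
async function planFiles(prompt, aiCall) {
  return aiCall(`List files to change for: ${prompt}`);
}

// Ask the model to regenerate one file for the given prompt.
async function generateFile(file, prompt, aiCall) {
  return { file, content: await aiCall(`Rewrite ${file} for: ${prompt}`) };
}

// One iteration of the outer loop: plan, regenerate, write.
async function runPromptIteration(prompt, aiCall, writeFile) {
  const files = await planFiles(prompt, aiCall);
  for (const file of files) {
    const { content } = await generateFile(file, prompt, aiCall);
    writeFile(file, content); // the real tool writes into the project dir
  }
  return files;
}
```

The important property is that each iteration is small and ends with files you can review, which is what keeps the loop tight and controllable.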

Optimization notes

Controversial optimization: the AI model forcefully converts everything to tab indentation. I dun care about your opinion on this, as it's an engineering decision: it is literally a huge 20%+ savings in tokens, and the models may not be able to work without it.
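To see why, here is a small sketch (a hypothetical helper, not the project's actual conversion code) that collapses 4-space indentation into single tabs and compares the character counts; fewer characters generally means fewer tokens sent to and from the model:

```javascript
// Illustrative only: collapse each level of 4-space indentation into one tab.
function spacesToTabs(code, spacesPerLevel = 4) {
  return code
    .split('\n')
    .map((line) => {
      const match = line.match(/^( +)/);
      if (!match) return line;
      const levels = Math.floor(match[1].length / spacesPerLevel);
      const rest = line.slice(levels * spacesPerLevel);
      return '\t'.repeat(levels) + rest;
    })
    .join('\n');
}

const sample = [
  'function greet(name) {',
  '    if (name) {',
  '        return `hi ${name}`;',
  '    }',
  '}',
].join('\n');

const converted = spacesToTabs(sample);
// The converted source is noticeably shorter; every nested line sheds
// 3 characters per indentation level.
console.log(sample.length, converted.length);
```

On deeply nested code the savings compound, which is where the claimed 20%+ comes from.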

Reasonable optimization: this is currently targeted and optimized only for JS. The reduced scope is intentional, so that we can optimize its responsiveness and usage without over-inflating the project.

While nothing stops it from working with other languages, it was designed with JS in mind, and will likely not work as well with other languages.

Backstory

The V1 prototype was the English Compiler, made in Feb 2023.

While it technically worked, it faced multiple context-size related issues.

Fast forward 3 months, and the context size of models has jumped from 4k to 8k for public users, and 32k to 100k for private users.

Subsequently, the smol-ai/dev project has shown that with only an 8k context size and GPT-4, we have slowly reached the stage where the output is starting to "just work".

This project is, in turn, a full rewrite of the original English Compiler project, along with a reimagining of the approach based on lessons learnt from the original and from smol-ai/dev.