,______ .______ .______ ,___
: __ \ \____ |: \ : __|
| \____|/ ____|| _,_ || : |
| : \ \ . || : || |
| |___\ \__:__||___| || |
|___| : |___||___|
Run a language model locally, without internet, to entertain you or to help answer questions about radare2 or reverse engineering in general. Note that the models used by r2ai are pulled from external sources, which may behave differently or return unreliable information. That's why there's an ongoing effort to improve the post-finetuning using memgpt-like techniques, which can't get better without your help!
R2AI is structured into four independent components:
Running make
will set up a Python virtual environment in the current directory, install all the necessary dependencies, and drop you into a shell to run r2ai.
The installation is now split into separate targets:
make install
will place a symlink in $BINDIR/r2ai
make install-plugin
will install the native r2 plugin into your home
make install-decai
will install the decai r2js decompiler plugin
make install-server
will install the r2ai server
When installed via r2pm you can execute it like this:
r2pm -r r2ai
Additionally, you can get the r2ai
command inside r2, running as an rlang plugin, by installing the bindings:
r2pm -i rlang-python
r2pm -i r2ai-plugin
After this you should get the r2ai
command inside the radare2 shell. If it's not there, set the R2_DEBUG=1
environment variable to see why the plugin was not loaded.
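For example, a quick way to check why the plugin fails to load (a sketch of a session; assumes radare2 and the rlang-python bindings are installed, and /bin/ls is just an arbitrary target binary):

```shell
# Run radare2 with plugin-load diagnostics enabled; if the r2ai plugin
# fails to load, the reason is printed among the debug messages
R2_DEBUG=1 r2 -qc 'r2ai -h' /bin/ls
```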
On Windows you can follow the same instructions; just make sure you have a working Python environment and create the venv to use:
git clone https://github.com/radareorg/r2ai
cd r2ai
set PATH=C:\Users\YOURUSERNAME\Local\Programs\Python\Python39\;%PATH%
python3 -m pip install .
python3 main.py
There are 4 different ways to run r2ai:
- Standalone and interactive: r2pm -r r2ai or python -m r2ai.cli
- Batch mode: r2ai -c '-r act as a calculator' -c '3+3=?'
- As an r2 Python plugin: r2 -i r2ai/plugin.py /bin/ls
- As a native rlang plugin (after r2pm -ci rlang-python r2ai-plugin): r2 -c 'r2ai -h'. From inside r2 you can also reach it through r2pipe with #!pipe python -m r2ai.cli, or define an alias for it: '$r2ai=#!pipe python -m r2ai.cli
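Once the alias is defined you can call it like any other r2 command. A hypothetical session (assumes r2ai is installed and r2pipe Python support is available):

```shell
# Inside an r2 session: register the alias once, then reuse it
r2 /bin/ls
[0x00000000]> '$r2ai=#!pipe python -m r2ai.cli
[0x00000000]> $r2ai -h
```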
When using OpenAI, Claude, or any of the Functionary local models, you can use the auto mode, which permits the language model to execute r2 commands and analyze their output in a loop until the task is resolved. Here's a sample session:
$ r2pm -i r2ai-plugin
(env)$ r2 /bin/ls
[0x00000000]> r2ai -m openai:gpt-4
[0x00000000]> r2ai ' list the imports for this program
[0x00000000]> r2ai ' draw me a donut
[0x00000000]> r2ai ' decompile current function and explain it
You can interact with r2ai from standalone Python, from r2pipe via r2 (keeping a global state), or using the JavaScript interpreter embedded inside radare2.
Just run make, or alternatively python3 -m r2ai.cli.
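As a minimal sketch of the r2pipe route mentioned above (assumes the repository is checked out so that r2ai/plugin.py exists, and /bin/ls is an arbitrary target; the -q and -c flags are standard radare2 options for quiet mode and one-shot commands):

```shell
# Load the r2ai plugin, run one command non-interactively, and quit
r2 -q -i r2ai/plugin.py -c 'r2ai -h' /bin/ls
```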
The r2ai shell supports ~, | and >
and other r2shell features.