# brainwhat

A fast optimizing Brainfuck interpreter written in pure Rust.

**brainwhat** is a fast optimizing Brainfuck interpreter written in pure Rust:

- **fast** – Rust is a systems programming language, so it compiles to very fast native binaries.
- **optimizing** – Brainfuck code is first parsed into a list of instructions, which are then optimized before being passed to the interpreter.
- **interpreter** – it doesn't compile Brainfuck to another language; instead, it executes programs directly.
Brainfuck is an esoteric programming language created by Urban Müller. It is not intended for practical use, but rather to challenge programmers. Despite its extreme minimalism, it is Turing-complete, so given enough memory and time it can solve any computational problem.

Brainfuck operates on an array of memory cells, each initially set to zero, and a data pointer that initially points to the first cell. The language has eight commands (all other characters are ignored, so they can be used as comments):
| Command | C equivalent | Description |
|---|---|---|
| Program start | `int ptr = 0; char arr[30000]; memset(arr, 0, sizeof(arr));` | |
| `>` | `ptr++;` | Move the pointer to the right |
| `<` | `ptr--;` | Move the pointer to the left |
| `+` | `arr[ptr]++;` | Increment the value in the current cell |
| `-` | `arr[ptr]--;` | Decrement the value in the current cell |
| `.` | `putchar(arr[ptr]);` | Print the value in the current cell as an ASCII character |
| `,` | `arr[ptr] = getchar();` | Read a character and put its ASCII value in the current cell |
| `[` | `while (arr[ptr]) {` | Jump past the matching `]` if the current cell is zero |
| `]` | `}` | Jump back to the matching `[` if the current cell is not zero |
For more information, check out Wikipedia, Esolangs.org and BrainFuck Programming Tutorial.
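To make the table concrete, here is a minimal, deliberately naive Brainfuck interpreter in Rust that just mirrors the C equivalents row by row. This is an illustrative sketch only, not brainwhat's implementation (brainwhat parses and optimizes the program first):

```rust
use std::io::{Read, Write};

// A minimal, direct interpreter: every command maps 1:1 to the C equivalent
// in the table above. No parsing pass, no optimizations.
fn run_naive(program: &str) {
    let code: Vec<char> = program.chars().collect();
    let mut memory = vec![0u8; 30_000]; // all cells start at zero
    let mut ptr = 0usize;               // data pointer
    let mut pc = 0usize;                // index of the current command

    while pc < code.len() {
        match code[pc] {
            '>' => ptr += 1,
            '<' => ptr -= 1,
            '+' => memory[ptr] = memory[ptr].wrapping_add(1),
            '-' => memory[ptr] = memory[ptr].wrapping_sub(1),
            '.' => std::io::stdout().write_all(&[memory[ptr]]).unwrap(),
            ',' => {
                let mut byte = [0u8];
                let n = std::io::stdin().read(&mut byte).unwrap();
                // On end of input, store zero (one common convention).
                memory[ptr] = if n == 1 { byte[0] } else { 0 };
            }
            '[' if memory[ptr] == 0 => {
                // Scan forward to the matching `]`.
                let mut depth = 1;
                while depth > 0 {
                    pc += 1;
                    match code[pc] {
                        '[' => depth += 1,
                        ']' => depth -= 1,
                        _ => {}
                    }
                }
            }
            ']' if memory[ptr] != 0 => {
                // Scan backward to the matching `[`.
                let mut depth = 1;
                while depth > 0 {
                    pc -= 1;
                    match code[pc] {
                        ']' => depth += 1,
                        '[' => depth -= 1,
                        _ => {}
                    }
                }
            }
            _ => {} // every other character is a comment
        }
        pc += 1;
    }
}

fn main() {
    // Prints "A": 8 * 8 + 1 = 65, the ASCII code for 'A'.
    run_naive("++++++++[>++++++++<-]>+.");
}
```

Note how `[` and `]` have to scan for their matching bracket on every jump; the loop-linking optimization described below removes exactly that cost.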
To install this project, run:

```sh
cargo install --git https://github.com/dmitmel/brainwhat
```

You can then use brainwhat as a command-line tool:

```sh
brainwhat path/to/program.b
brainwhat < path/to/program.b
```
Or as a library (e.g. for generating Brainfuck programs using AI):

```rust
extern crate brainwhat;

// this program prints "hi!"
let code = ">+++++[-<+++>>++++++>++<<]<[->+++++++<]>-.+.>+++.>.";
let code_chars = code.chars().collect::<Vec<char>>();
// 4 memory cells is enough for this program
let memory_size = 4;

let parsed_program = brainwhat::parse(&code_chars)?;
let optimized_program = brainwhat::optimize(&parsed_program)?;

let mut interpreter = brainwhat::Interpreter::new(memory_size);
interpreter.run(&optimized_program)?;
```
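Note that the `?` operator only compiles inside a function that returns a `Result`, so in a real program you would wrap this snippet in something like `fn main() -> Result<(), Box<dyn std::error::Error>>` (assuming the crate's error types implement `std::error::Error`) or handle the errors explicitly.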
When there is no more input to read, the `,` (read) instruction sets the current cell to zero.

Benchmark setup:

- Brainfuck program: DBFI (a Brainfuck self-interpreter) running DBFI, which in turn runs a simple program that prints `hello123`
- Baseline implementations: bff and bff4 (with linear optimizations turned off)
- Hardware: 13-inch Early 2015 MacBook Pro with an Intel Core i7 @ 3.1 GHz
```
Benchmark #1: bin/bff bench_prog < bench_input
  Time (mean ± σ):     298.3 ms ±   4.0 ms    [User: 293.3 ms, System: 2.6 ms]
  Range (min … max):   292.0 ms … 303.8 ms

Benchmark #2: bin/bff4 < bench_bff4_input
  Time (mean ± σ):     247.9 ms ±   4.5 ms    [User: 242.7 ms, System: 2.8 ms]
  Range (min … max):   244.3 ms … 259.3 ms

Benchmark #3: bin/brainwhat bench_prog < bench_input
  Time (mean ± σ):     372.4 ms ±   7.1 ms    [User: 367.1 ms, System: 2.9 ms]
  Range (min … max):   364.3 ms … 383.2 ms

Summary
  'bin/bff4 < bench_bff4_input' ran
    1.20x faster than 'bin/bff bench_prog < bench_input'
    1.50x faster than 'bin/brainwhat bench_prog < bench_input'
```
These are the currently implemented optimizations (I'm planning to add more).

Some instructions (`+`, `-`, `>`, `<`) are stackable, meaning that consecutive occurrences can be merged into a single instruction:

```
+++++ >>> ------- <<<<<< -> [Add(5), Move(3), Add(-7), Move(-6)]
12345 123 1234567 123456 <-- instruction counts
```
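Purely as an illustration of the idea (this is not brainwhat's actual code; the `Instruction` enum and `stack_commands` function below are hypothetical stand-ins for the crate's real types), run-length merging of stackable commands could look roughly like this:

```rust
/// Hypothetical instruction type, used only for this sketch.
#[derive(Debug)]
enum Instruction {
    Add(i32),  // merged run of `+` / `-`
    Move(i32), // merged run of `>` / `<`
}

/// Merge consecutive stackable commands into single instructions.
fn stack_commands(code: &str) -> Vec<Instruction> {
    let mut program = Vec::new();
    for c in code.chars() {
        let new_instr = match c {
            '+' => Instruction::Add(1),
            '-' => Instruction::Add(-1),
            '>' => Instruction::Move(1),
            '<' => Instruction::Move(-1),
            _ => continue, // everything else is ignored in this sketch
        };
        // Fold the command into the previous instruction if it has the same kind.
        let merged = match (program.last_mut(), &new_instr) {
            (Some(Instruction::Add(n)), Instruction::Add(d)) => { *n += *d; true }
            (Some(Instruction::Move(n)), Instruction::Move(d)) => { *n += *d; true }
            _ => false,
        };
        if !merged {
            program.push(new_instr);
        }
    }
    program
}

fn main() {
    // Matches the example above: [Add(5), Move(3), Add(-7), Move(-6)]
    println!("{:?}", stack_commands("+++++ >>> ------- <<<<<<"));
}
```

A real optimizer would typically also drop instructions that cancel out completely, such as the `Add(0)` produced by `+-`.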
All loops are linked while the program is being parsed, so the interpreter doesn't have to search for matching brackets at runtime (this optimization makes the interpreter really fast!):

```
                     __________________________________________
                    /                                          \
                    |        <-------- loops -------->         |
+[ ,. ] -> [Add(1), JumpIfZero(4), Read, Print, JumpIfNonZero(1)]
01 23 4             <-- instruction addresses
```
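Again just as a sketch (not the crate's parser; the instruction names simply mirror the diagram above), loops can be linked with a stack of addresses while the instruction list is being built:

```rust
/// Hypothetical instruction set for this sketch only.
#[derive(Debug)]
enum Instruction {
    Add(i32),
    Read,
    Print,
    JumpIfZero(usize),    // holds the address of the matching `]`
    JumpIfNonZero(usize), // holds the address of the matching `[`
}

/// Link every `[` with its matching `]` at parse time, so the interpreter can
/// jump directly instead of scanning for brackets at runtime.
fn link_loops(code: &str) -> Result<Vec<Instruction>, String> {
    let mut program = Vec::new();
    let mut open_brackets = Vec::new(); // addresses of not-yet-matched `[`s

    for c in code.chars() {
        match c {
            '+' => program.push(Instruction::Add(1)),
            ',' => program.push(Instruction::Read),
            '.' => program.push(Instruction::Print),
            '[' => {
                // Placeholder target; patched once the matching `]` is found.
                open_brackets.push(program.len());
                program.push(Instruction::JumpIfZero(0));
            }
            ']' => {
                let open = open_brackets.pop().ok_or("unmatched `]`")?;
                let close = program.len();
                program[open] = Instruction::JumpIfZero(close);
                program.push(Instruction::JumpIfNonZero(open));
            }
            _ => {} // `-`, `<`, `>` and comments omitted to keep the sketch short
        }
    }

    if open_brackets.is_empty() {
        Ok(program)
    } else {
        Err("unmatched `[`".to_string())
    }
}

fn main() {
    // Matches the diagram above:
    // [Add(1), JumpIfZero(4), Read, Print, JumpIfNonZero(1)]
    println!("{:?}", link_loops("+[ ,. ]").unwrap());
}
```

With the jump targets resolved at parse time, `[` and `]` turn into single indexed jumps at runtime instead of scans for the matching bracket.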
PRs accepted.
I would really appreciate your help with TODOs!