This is an experimental tree-based writing interface for GPT-3. The code is actively being developed and thus unstable and poorly documented.
Features:
- Read mode
- Tree view
- Navigation
- Generation
- File I/O

ooo what features! wow so cool
Read this for a conceptual explanation of the block multiverse interface and a demo video.
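The key operation in the block multiverse view, renormalization, can be illustrated with a toy sketch (illustrative only, not the interface's actual code): zooming into a block rescales the probabilities of its continuations so that the selected block fills the whole frame.

```python
def renormalize(children_probs, block_prob):
    """Rescale continuation probabilities after zooming into a block.

    children_probs: probabilities of continuations measured in the root frame.
    block_prob: probability of the selected block in the root frame.
    """
    return [p / block_prob for p in children_probs]

# In the root frame, a block with probability 0.25 splits into continuations
# with probabilities 0.15 and 0.10; after zooming in, those continuations
# fill the whole frame:
print(renormalize([0.15, 0.10], 0.25))  # → [0.6, 0.4]
```

Propagating again from a renormalized frame repeats this step, so each zoom level is itself a complete probability distribution.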
- Wavefunction: button on the bottom bar. This opens the block multiverse interface in the right sidebar (drag to resize).
- Propagate: plot the block multiverse.
- Propagate again: plot the future block multiverse starting from a renormalized frame.
- Reset zoom: reset the zoom level to its initial position.
- Clear: clear the block multiverse plot. Do this before generating a new block multiverse.

Alt hotkeys correspond to Command on Mac.
- Open: o, Control-o
- Import JSON as subtree: Control-Shift-O
- Save: s, Control-s
- Change chapter: Control-y
- Preferences: Control-p
- Generation settings: Control-Shift-P
- Visualization settings: Control-u
- Multimedia dialog: u
- Tree info: Control-i
- Node metadata: Control-Shift-N
- Run code: Control-Shift-B
- Toggle edit / save edits: e, Control-e
- Toggle story textbox editable: Control-Shift-e
- Toggle visualize: j, Control-j
- Toggle bottom pane: Tab
- Toggle side pane: Alt-p
- Toggle show children: Alt-c
- Hoist: Alt-h
- Unhoist: Alt-Shift-h
- Click to go to node: Control-Shift-click
- Next: period, Return, Control-period
- Prev: comma, Control-comma
- Go to child: Right, Control-Right
- Go to next sibling: Down, Control-Down
- Go to parent: Left, Control-Left
- Go to previous sibling: Up, Control-Up
- Return to root: r, Control-r
- Walk: w, Control-w
- Go to checkpoint: t
- Save checkpoint: Control-t
- Go to next bookmark: d, Control-d
- Go to previous bookmark: a, Control-a
- Search ancestry: Control-f
- Search tree: Control-Shift-f
- Click to split node: Control-Alt-click
- Go to node by id: Control-Shift-g
- Toggle bookmark: b, Control-b
- Toggle archive node: !
- Generate: g, Control-g
- Inline generate: Alt-i
- Add memory: Control-m
- View current AI memory: Control-Shift-m
- View node memory: Alt-m
- Delete: Backspace, Control-Backspace
- Merge with parent: Shift-Left
- Merge with children: Shift-Right
- Move node up: Shift-Up
- Move node down: Shift-Down
- Change parent: Shift-P
- New root child: Control-Shift-h
- New child: h, Control-h, Alt-Right
- New parent: Alt-Left
- New sibling: Alt-Down
- Toggle edit / save edits: Control-e
- Save edits as new sibling: Alt-e
- Click to edit history: Control-click
- Click to select token: Alt-click
- Next counterfactual token: Alt-period
- Previous counterfactual token: Alt-comma
- Apply counterfactual changes: Alt-Return
- Enter text: Control-bar
- Escape textbox: Escape
- Prepend newline: n, Control-n
- Prepend space: Control-Space
- Collapse all except subtree: Control-colon
- Collapse node: Control-question
- Collapse subtree: Control-minus
- Expand children: Control-quotedbl
- Expand subtree: Control-plus
- Center view: l, Control-l
- Reset zoom: Control-0
Make sure you have tkinter installed:

```
sudo apt-get install python3-tk
```

Set up your Python environment (should be >= 3.9.13):

```
python3 -m venv env
source env/bin/activate
```

Install requirements:

```
pip install -r requirements.txt
```
[Optional] Set environment variables for OPENAI_API_KEY, GOOSEAI_API_KEY, and AI21_API_KEY (you can also use the settings options):

```
export OPENAI_API_KEY={your api key}
```
On Mac, using conda:

```
conda create -n pyloom python=3.10
conda activate pyloom
pip install -r requirements-mac.txt
```

Then run:

```
python main.py
```
(Only tested on Linux.)

Run the make targets:

```
make build
make run
```
llama.cpp lets you run models locally and is especially useful for running models on Mac. llama-cpp-python (https://github.com/abetlen/llama-cpp-python) provides easy installation and a convenient API.
Create and activate a conda environment:

```
conda create -n llama-cpp-local python=3.10
conda activate llama-cpp-local
```

Install llama-cpp-python with the appropriate build flags, as per these instructions. For instance, to infer on MPS:

```
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
```

Install the server and huggingface-hub:

```
pip install 'llama-cpp-python[server]'
pip install huggingface-hub
```

Start the server:

```
python3 -m llama_cpp.server --hf_model_repo_id NousResearch/Meta-Llama-3-8B-GGUF --model 'Meta-Llama-3-8B-Q4_5_M.gguf' --port 8009
```
In one terminal, `conda activate llama-cpp-local` and start your llama-cpp-python server. In another terminal, activate the pyloom environment and run main.py.
Add a model entry like the following:

```
{
    'model': 'Meta-Llama-3-8B-Q4_5_M',
    'type': 'llama-cpp',
    'api_base': 'http://localhost:8009/v1',
},
```
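Once the server is up, you can sanity-check its OpenAI-compatible completions endpoint directly before pointing the app at it. This sketch assumes the port and model name from the commands above and uses only the standard library:

```python
import json
from urllib import request, error

# Endpoint assumed from the server command above (port 8009).
API_BASE = "http://localhost:8009/v1"

# Request body for the OpenAI-compatible /completions endpoint.
payload = {
    "model": "Meta-Llama-3-8B-Q4_5_M",  # must match the loaded model name
    "prompt": "Once upon a time",
    "max_tokens": 8,
    "temperature": 0.8,
    "n": 3,  # multiple completions, one per candidate branch
}

def complete(body, api_base=API_BASE):
    """POST the request body; return parsed JSON, or None if unreachable."""
    req = request.Request(
        api_base + "/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read())
    except error.URLError:
        return None

if __name__ == "__main__":
    result = complete(payload)
    if result is None:
        print("server not reachable")
    else:
        print([c["text"] for c in result["choices"]])
```

If the server is reachable, each element of `choices` carries one sampled continuation; `None` means the server is not running on the assumed port.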