A beautiful C++ libcurl / ChatGPT interface
There are numerous ChatGPT command line programs currently available. Many of them are written in Python. I wanted something a bit quicker and a bit easier to install, so I wrote this program in C++.
Ensure that you have a valid OpenAI API key and that it is set as the following environment variable:

```
OPENAI_API_KEY="<your-api-key>"
```
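GPTifier's own startup code is not shown here, but for illustration, here is a minimal sketch of how a C++ program can read this variable at runtime via `std::getenv` (the `get_api_key` helper is hypothetical, not part of GPTifier):

```cpp
#include <cstdlib>
#include <iostream>
#include <stdexcept>
#include <string>

// Illustrative only: fetch the API key from the environment
std::string get_api_key()
{
    const char *key = std::getenv("OPENAI_API_KEY");

    if (key == nullptr)
        throw std::runtime_error("The OPENAI_API_KEY environment variable is not set");

    return std::string(key);
}

int main()
{
    // Print only the key's length to avoid leaking the secret
    std::cout << "Key length: " << get_api_key().size() << '\n';
    return 0;
}
```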
Interested in a specific release? To download `1.0.0`, for example:

```
wget https://github.com/dsw7/GPTifier/archive/refs/tags/v1.0.0.tar.gz
```

Which will yield `v1.0.0.tar.gz`. Then run:

```
tar -xvf v1.0.0.tar.gz
```

Which will generate `GPTifier-1.0.0`. Change directories into `GPTifier-1.0.0` and proceed with the next steps.
To set the project up, simply run the `make` target:

```
make compile
```

The binary will be installed into whatever install directory is resolved by CMake's `install()`.
### {fmt}

This project uses {fmt} for string formatting. The build will abort if {fmt} cannot be found anywhere. See {fmt}'s Get Started documentation for instructions on installing {fmt}.
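For a sense of what {fmt} provides, here is a minimal usage example, independent of GPTifier's own code:

```cpp
#include <string>

#include <fmt/core.h>

int main()
{
    // {fmt} uses Python-style replacement fields
    fmt::print("{} + {} equals {}\n", 3, 5, 3 + 5);

    const std::string message = fmt::format("Model: {}", "gpt-4");
    fmt::print("{}\n", message);

    return 0;
}
```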
This project makes reference to a "home directory" (`~/.gptifier`, specifically) that must be set up prior to running the program. To set up `~/.gptifier`, run:

```
./setup
```

This script will dump a configuration file under `~/.gptifier`. Open the file:

```
~/.gptifier/gptifier.toml
```

And apply the relevant configurations. Next, drop into the program:

```
gpt run
```

The program should start an interactive session if the configuration file was properly set up.
The compilation process will generate many build artifacts. Clean up the build artifacts by running:

```
make clean
```
### The `run` command

This command works with OpenAI's chat completion models, such as GPT-4 Turbo and GPT-4. Simply run `gpt run`! This will begin an interactive session. Type in a prompt:

```
$ gpt run
------------------------------------------------------------------------------------------
Input: What is 3 + 5?
```
And hit Enter. The program will dispatch a request and return:
```
...
Results: 3 + 5 equals 8.
------------------------------------------------------------------------------------------
Export:
> Write reply to file? [y/n]:
```
In the above example, the user is prompted to export the completion to a file. Entering `y` will print:
```
...
> Writing reply to file /home/<your-username>/.gptifier/completions.gpt
------------------------------------------------------------------------------------------
```
Subsequent requests will append to this file. In some cases, prompting interactively may be undesirable, such as when running automated unit tests. To disable the y/n prompt, run `gpt run` with the `-u` or `--no-interactive-export` flag.
A chat completion can be run against an available model by specifying the model name using the `-m` or `--model` option. For example, to create a chat completion via the command line using the GPT-4 model, run:

```
gpt run --model gpt-4 --prompt "What is 3 + 5?"
```
> [!TIP]
> A full list of models can be found by running the `models` command.

> [!NOTE]
> See Input selection for more information regarding how to pass a prompt into this command.
### The `short` command

The `short` command is almost identical to the `run` command, but this command returns a chat completion under the following conditions:
An example follows:

```
gpt short --prompt "What is 2 + 2?"
```

Which will print out:

```
2 + 2 equals 4.
```
> [!TIP]
> Use this command if running GPTifier via something like vim's `system()` function.
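The same non-interactive behavior makes `gpt short` easy to call from other programs, too. Below is a sketch of capturing its output from C++ via POSIX `popen()`; the `run_gpt_short` helper is hypothetical, and its naive quoting only suits prompts without embedded quotes:

```cpp
#include <array>
#include <cstdio>
#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical helper: capture the stdout of `gpt short`
std::string run_gpt_short(const std::string &prompt)
{
    // Naive quoting: suitable only for simple prompts
    const std::string command = "gpt short --prompt \"" + prompt + "\"";

    // popen() is POSIX; pclose() runs automatically when the pointer goes out of scope
    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(command.c_str(), "r"), pclose);
    if (!pipe)
        throw std::runtime_error("popen() failed");

    std::array<char, 256> buffer{};
    std::string output;

    while (fgets(buffer.data(), buffer.size(), pipe.get()) != nullptr)
        output += buffer.data();

    return output;
}

int main()
{
    std::cout << run_gpt_short("What is 2 + 2?");
    return 0;
}
```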
### The `embed` command

This command converts some input text into a vector representation of the text. To use the command, run:

```
gpt embed
------------------------------------------------------------------------------------------
Input: Convert me to a vector!
```
And hit Enter. The program will dispatch a request and return:
```
------------------------------------------------------------------------------------------
Request: {
  "input": "Convert me to a vector!",
  "model": "text-embedding-ada-002"
}
...
```
The results will be exported to a JSON file: `~/.gptifier/embeddings.gpt`. In a nutshell, the `embeddings.gpt` file will contain a vector:

$$ \begin{bmatrix}a_1 & a_2 & \dots & a_{1536}\end{bmatrix}, $$

Where 1536 is the dimension of the output vector corresponding to the model `text-embedding-ada-002`. The cosine similarity of a set of such vectors can be used to evaluate the similarity between texts.
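As an illustration, here is a minimal sketch of computing the cosine similarity between two such vectors, assuming they have already been parsed out of `embeddings.gpt` into `std::vector<double>`:

```cpp
#include <cmath>
#include <iostream>
#include <numeric>
#include <vector>

// Cosine similarity: dot(a, b) / (|a| * |b|)
double cosine_similarity(const std::vector<double> &a, const std::vector<double> &b)
{
    const double dot = std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
    const double norm_a = std::sqrt(std::inner_product(a.begin(), a.end(), a.begin(), 0.0));
    const double norm_b = std::sqrt(std::inner_product(b.begin(), b.end(), b.begin(), 0.0));

    return dot / (norm_a * norm_b);
}

int main()
{
    // Toy 3-dimensional vectors; real text-embedding-ada-002 embeddings have 1536 components
    const std::vector<double> a = {0.10, 0.20, 0.30};
    const std::vector<double> b = {0.10, 0.25, 0.35};

    std::cout << "Cosine similarity: " << cosine_similarity(a, b) << '\n';
    return 0;
}
```

Identical vectors yield a similarity of 1, while orthogonal vectors yield 0.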
> [!NOTE]
> See Input selection for more information regarding how to pass embedding text into this command.
### The `models` command

This command returns a list of currently available models. Simply run:

```
gpt models
```

Which will return:
```
------------------------------------------------------------------------------------------
Model ID             Owner                 Creation time
------------------------------------------------------------------------------------------
dall-e-3             system                2023-10-31 20:46:29
whisper-1            openai-internal       2023-02-27 21:13:04
davinci-002          system                2023-08-21 16:11:41
...                  ...                   ...
```
### Input selection

For certain commands, a hierarchy exists for choosing where input text comes from. The hierarchy roughly follows:

1. Check for raw input passed via a command line option: `gpt run -p "What is 3 + 5?"` or `gpt embed -i "A foo that bars"`
2. Check for an input file specified via the command line: `gpt [run | embed] -r <filename>`
3. Check for a default input file in the current working directory: if a file named `Inputfile` exists in the current directory, read from this file. An `Inputfile` is analogous to a `Makefile` or perhaps a `Dockerfile`
4. Read from stdin
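For illustration only, here is a minimal C++ sketch of this resolution order; the `resolve_input` function and its signature are hypothetical and do not reflect GPTifier's actual internals:

```cpp
#include <filesystem>
#include <fstream>
#include <iostream>
#include <optional>
#include <sstream>
#include <string>

// Illustrative only: resolve input text following the hierarchy above
std::string resolve_input(const std::optional<std::string> &raw_text,
                          const std::optional<std::string> &input_file)
{
    // 1. Raw input passed directly via a command line option
    if (raw_text)
        return *raw_text;

    // Helper to slurp a whole file into a string
    const auto read_file = [](const std::string &filename) {
        std::ifstream file(filename);
        std::stringstream buffer;
        buffer << file.rdbuf();
        return buffer.str();
    };

    // 2. An input file specified via the command line
    if (input_file)
        return read_file(*input_file);

    // 3. A default "Inputfile" in the current working directory
    if (std::filesystem::exists("Inputfile"))
        return read_file("Inputfile");

    // 4. Fall back to reading from stdin
    std::stringstream buffer;
    buffer << std::cin.rdbuf();
    return buffer.str();
}

int main()
{
    // No raw text and no input file: falls through to Inputfile or stdin
    std::cout << resolve_input(std::nullopt, std::nullopt) << '\n';
    return 0;
}
```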
In the Exporting a result section, it was stated that results can be voluntarily exported to `~/.gptifier/completions.gpt`. One may be interested in integrating this into a vim workflow.

This can be achieved as follows. First, add the following function to `~/.vimrc`:
```vim
function OpenGPTifierResults()
    let l:results_file = expand('~') . '/.gptifier/completions.gpt'

    if filereadable(l:results_file)
        execute 'vs' . l:results_file
    else
        echoerr l:results_file . ' does not exist'
    endif
endfunction
```
Then add a command to `~/.vimrc`:

```vim
" Open GPTifier results file
command G :call OpenGPTifierResults()
```
The command `G` will open `~/.gptifier/completions.gpt` in a separate vertical split, thus allowing for cherry-picking saved OpenAI completions into a source file, for example.
GPTifier's access to OpenAI resources can be managed by setting up a GPTifier project under OpenAI's user platform. Some possibilities include setting usage and model limits. To integrate GPTifier with an OpenAI project, open GPTifier's configuration file:
```
vim +/project-id ~/.gptifier/gptifier.toml
```

And set `project-id` to the project ID associated with the newly created GPTifier project. The ID can be obtained from the General settings page (authentication is required).
To run unit tests:

```
make test
```
This target will compile the current branch, then run pytest unit tests against the branch. The target will also run Valgrind tests in an attempt to detect memory management bugs.
Code in this project is formatted using ClangFormat, following the Microsoft formatting style. To format the code, run:

```
make format
```
Code in this project is linted using cppcheck. To run the linter:

```
make lint
```
All bash code in this project is subjected to shellcheck static analysis. Run:

```
make sc
```
See shellcheck for more information.