Witek902 / Caissa

Strong chess engine
MIT License

Caissa Chess Engine



(image generated with DALL·E 2)

Overview

A strong UCI command-line chess engine, written from scratch in C++ and in development since early 2021. Optimized for regular chess, FRC (Fischer Random Chess), and DFRC (Double Fischer Random Chess).

Playing strength

Caissa is listed on many chess engine ranking lists:

History / Originality

The engine has been written from the ground up. Early versions used a simple PeSTO evaluation, which was replaced by the Stockfish NNUE for a short time. Since version 0.7, Caissa has used its own efficiently updated neural network, trained on Caissa self-play games with a custom trainer. In that sense the first of Caissa's own networks descends from Stockfish's network, but it was much weaker because of the small data set used back then (a few million positions). Currently (as of version 1.18), over 12 billion newly generated positions are used. Old self-play games are also successively purged, so that each newer network is trained only on the most recent games generated by the most recent engine, and so on.

The runtime neural network evaluation code is located in PackedNeuralNetwork.cpp and was inspired by the nnue.md document. The neural network trainer is written completely from scratch and is located in NetworkTrainer.cpp, NeuralNetwork.cpp, and the other NeuralNetwork* files. The trainer is purely CPU-based and is heavily optimized to take advantage of many threads and AVX instructions, as well as to exploit the sparse nature of the nets.

The games are generated with the SelfPlay.cpp utility, which plays games at a fixed number of nodes/depth and saves them in a custom binary game format to save space. The opening books used are either Stefan Pohl's UHO books or DFRC openings with a few random moves played at the beginning.

Supported UCI options

Provided EXE versions

Features

General

Search Algorithm

Evaluation

Neural net trainer

Misc

Modules

The project comprises the following modules:

Compilation

Linux - makefile

To compile for Linux, just call make in the src directory:

cd src
make -j

NOTE: This will compile the default AVX2/BMI2 version.

Linux - CMake

To compile for Linux using CMake:

mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Final ..
make -j

NOTE: Currently, the project compiles with AVX2/BMI2 support by default.

There are three configurations supported:

Windows - Visual Studio

To compile for Windows, use GenerateVisualStudioSolution.bat to generate a Visual Studio solution. The only tested Visual Studio version is 2022. Using CMake directly in Visual Studio has not been tested.

NOTE: After compilation, make sure to copy the appropriate neural net file from the data/neuralNets directory to the location where the executable is generated (build/bin on Linux or build\bin\x64\<Configuration> on Windows).