I see. I asked Google Bard about the innovations in chess engines and there were a lot, so I expected this engine to be a lot better. I also asked Bard whether a C compiler built with modern techniques can produce faster code than assembly written in the 80s, and it said yes, a modern C compiler can indeed be faster.
I guess for chess engines it's because you are just a hobbyist who didn't study chess engine tech. As for modern C compilers beating 80s assembly, that might not be true; chat AIs aren't always accurate.
Google Bard, when asked about chess engine innovations from 1975 onward:
Pre-1975:
- Minimax search: Early chess programs like Kaissa relied on this basic algorithm to explore possible moves and choose the one that minimizes their opponent's potential gain.
- Evaluation functions: Hand-crafted rules (e.g., material count, piece placement) assigned positional values to guide minimax search.
- Static evaluation: Positions were analyzed without considering future moves, leading to shallow play.

1975-1985:
- Alpha-beta pruning: This search optimization significantly reduced the number of moves to consider, paving the way for deeper analysis.
- Dynamic evaluation: Positional values became context-dependent, accounting for factors like pawn structure and king safety.
- Opening books: Pre-computed databases of strong opening lines improved early play consistency.

1985-1995:
- Multi-ply search: Engines began analyzing further ahead, anticipating opponent's responses and planning multi-layered strategies.
- Quiescence search: Focused evaluation in tactical situations ensured accurate calculation of immediate threats and captures.
- Null-move pruning: Another search optimization identified positions where forcing opponent's next move led to no immediate advantage.

1995-2005:
- Positional understanding: Engines started appreciating subtle positional concepts like outposts, weak squares, and pawn breaks.
- Endgame databases: Pre-calculated solutions for specific endgame positions saved vast amounts of calculation time.
- Heuristics: Custom evaluation criteria for specific piece configurations and pawn structures further refined positional judgments.

2005-2015:
- Hash tables: Stored previously analyzed positions to avoid redundant calculations, significantly increasing search efficiency.
- Bitboard representation: Efficient data structure for manipulating piece positions and board features, enabling faster move generation and bitwise operations.
- Monte Carlo Tree Search (MCTS): This probabilistic search algorithm explored promising and less obvious variations, leading to more creative and resilient play.

2015-present:
- Machine learning (ML): Neural networks trained on vast datasets of human games further refined evaluation functions and improved strategic understanding.
- Self-learning: AlphaZero demonstrated the potential of engines learning from scratch, surpassing previous engines trained on human data.
- Hybrid approaches: Combining traditional search algorithms with MCTS and ML led to even stronger engines like Hydra.

This timeline is not exhaustive, but it highlights some key algorithms and concepts that have shaped the evolution of chess engines. Remember, these advancements often go hand-in-hand and build upon each other, leading to the incredibly powerful chess engines we have today.
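As an aside, the alpha-beta pruning mentioned above is compact enough to show concretely. Below is a minimal, self-contained C sketch that runs on a toy game tree with hard-coded leaf scores; it only illustrates the pruning idea and is not code from this engine.

```c
#include <stdio.h>

/* Complete binary tree of depth 3: 8 leaf scores, seen from the
 * maximizing player's point of view. */
static const int leaves[8] = { 3, 5, 6, 9, 1, 2, 0, -1 };

/* Fail-hard alpha-beta: stop searching a node as soon as it is proven
 * no better than an option the other side already has (alpha >= beta). */
static int alphabeta(int node, int depth, int alpha, int beta, int maximizing)
{
    if (depth == 0)
        return leaves[node];

    for (int child = 0; child < 2; child++) {
        int score = alphabeta(node * 2 + child, depth - 1, alpha, beta, !maximizing);
        if (maximizing && score > alpha)
            alpha = score;
        if (!maximizing && score < beta)
            beta = score;
        if (alpha >= beta)
            break;                  /* prune the remaining children */
    }
    return maximizing ? alpha : beta;
}

int main(void)
{
    /* Root is a maximizing node; the minimax value of this tree is 5. */
    printf("best score = %d\n", alphabeta(0, 3, -1000, 1000, 1));
    return 0;
}
```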
The same question put to ChatGPT:
1950s-1960s: Early Concepts
- Minimax Algorithm: Developed by Claude Shannon, this algorithm forms the basis for many game-playing programs. It involves exploring the game tree to find the best move assuming optimal play from both sides.

1970s-1980s: Hardware Limitations
- Alpha-Beta Pruning: An optimization of the Minimax algorithm, reducing the number of nodes evaluated in the search tree.
- Bitboards: Representing chess positions efficiently using bitwise operations to speed up move generation and evaluation.

1990s: Rise of Commercial Engines
- Transposition Tables: Storing previously calculated positions to avoid redundant computations.
- Quiescence Search: A refinement of the search algorithm, extending the search to positions with fewer tactical variations.

Late 1990s: Deep Blue and Endgame Tablebases
- Endgame Tablebases: Precomputed databases for certain endgame positions, providing perfect play in those situations.
- Parallel Search: Utilizing multiple processors to explore different branches of the search tree simultaneously.

2000s: Monte Carlo Tree Search (MCTS)
- MCTS: A probabilistic search algorithm that balances exploration and exploitation, gaining popularity in games like Go and later in chess.
- Hardware Improvements: Increasing processing power allows for deeper search and more sophisticated evaluations.

2010s: Neural Networks and Deep Learning
- Deep Learning: Neural networks applied to chess engines for position evaluation and move selection.
- AlphaZero: Developed by DeepMind, AlphaZero uses self-play and deep learning to master chess without human knowledge of openings or endgames.

Recent Years: Neural Network Enhancements
- Neural Network Architectures: Continued advancements in neural network architectures for more accurate and efficient position evaluation.
- Hardware Acceleration: Utilizing specialized hardware, such as GPUs and TPUs, for faster neural network computations.

Ongoing Developments: Reinforcement Learning
- Reinforcement Learning: Further exploration of reinforcement learning techniques to improve chess engines' decision-making processes.
- Hybrid Approaches: Combining traditional algorithms with neural network approaches for better overall performance.
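The bitboard idea that both timelines mention simply means packing the 64 squares into one 64-bit integer, so whole-board operations become single bitwise instructions. Here is a minimal, self-contained C sketch (illustrative only, not from this engine; the square numbering and file masks are my own assumptions):

```c
#include <stdio.h>
#include <stdint.h>

typedef uint64_t Bitboard;   /* one bit per square, 0 = a1 ... 63 = h8 */

/* Knight attacks from a square, computed with shifts plus file masks
 * that stop moves from wrapping around the board edge. */
static Bitboard knight_attacks(int sq)
{
    Bitboard b      = 1ULL << sq;
    Bitboard not_a  = 0xfefefefefefefefeULL;   /* all squares except the a-file    */
    Bitboard not_ab = 0xfcfcfcfcfcfcfcfcULL;   /* all squares except a- and b-files */
    Bitboard not_h  = 0x7f7f7f7f7f7f7f7fULL;   /* all squares except the h-file    */
    Bitboard not_gh = 0x3f3f3f3f3f3f3f3fULL;   /* all squares except g- and h-files */

    return ((b << 17) & not_a)  | ((b << 15) & not_h)  |
           ((b << 10) & not_ab) | ((b << 6)  & not_gh) |
           ((b >> 17) & not_h)  | ((b >> 15) & not_a)  |
           ((b >> 10) & not_gh) | ((b >> 6)  & not_ab);
}

int main(void)
{
    /* Knight on d4 (square 27) attacks eight squares. */
    Bitboard a = knight_attacks(27);

    int count = 0;
    for (Bitboard t = a; t; t &= t - 1)   /* clear the lowest set bit each pass */
        count++;

    printf("attack mask 0x%016llx has %d set bits\n",
           (unsigned long long)a, count);
    return 0;
}
```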
It's not in the same league as those engines. This engine has no opening book and uses a very simple minimax algorithm to decide what moves to make. On higher skill levels, it takes the first-round moves and runs them through a second round (make the move and run the algorithm again on the resulting position), and then further rounds.

Obviously the quality of the "issues" the algorithm considers matters a lot, and I outline some of that in the readme (is a piece being attacked, is it defending another piece, its own value, etc.). But a very important factor is knowing which moves should be considered at all, and that's where this engine is especially weak.

Lastly, the engines of the day would have been written in assembly language and can do a lot more work in the same time than this engine can, since this one is written in compiled C code. To rewrite this engine, I would have to learn about chess engines properly and not just make up stuff as I go along, which is what I did here.
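For what it's worth, the kind of per-move weighing described above could be sketched roughly like the snippet below. The field names, weights, and structure are hypothetical placeholders, not the engine's actual code; on a higher skill level the engine would make each candidate move and repeat the same kind of scoring on the replies.

```c
#include <stdio.h>

/* Rough, made-up weighing of the factors mentioned above for a single
 * candidate move. Names and weights are hypothetical, not taken from
 * this engine's evaluation. */
struct move_facts {
    int piece_value;     /* value of the moving piece (e.g. pawn 100, queen 900)  */
    int attacked_after;  /* 1 if the piece would stand attacked on its new square */
    int defends_value;   /* value of a friendly piece it would defend, else 0     */
    int captures_value;  /* value of an enemy piece it captures, else 0           */
};

static int score_move(const struct move_facts *f)
{
    int score = f->captures_value;      /* material gained right away         */
    score += f->defends_value / 4;      /* small bonus for protecting a piece */
    if (f->attacked_after)
        score -= f->piece_value;        /* risk of losing the moving piece    */
    return score;
}

int main(void)
{
    /* A knight (~320) captures a pawn (100) but lands on an attacked square. */
    struct move_facts f = { 320, 1, 0, 100 };
    printf("move score: %d\n", score_move(&f));   /* prints -220 */
    return 0;
}
```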