Determining the value of chess pieces by regression analysis

    Hello, Habr!

    This article is a small programming sketch on the subject of machine learning. I got the idea while taking the course known to many here - the Machine Learning course taught by Andrew Ng on Coursera. After getting acquainted with the methods described in the lectures, I wanted to apply them to some real task. I did not have to search long for a topic: optimizing my own chess engine simply suggested itself.

    Introduction: about chess programs



    We will not go into detail about the architecture of chess programs here - that could be the topic of a separate publication, or even a series of them. Let us consider only the most basic principles. The main components of almost any non-human ("non-protein") chess player are the search and the evaluation of the position.

    The search is an enumeration of variations, that is, a traversal of the game tree. The evaluation function maps a set of positional features onto a numerical scale and serves as the objective function for finding the best move. It is applied at the leaves of the tree, and its values are gradually "backed up" to the original position (the root) by the alpha-beta procedure or one of its variations.
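    To illustrate the principle, here is a minimal sketch of alpha-beta in negamax form over an explicit toy tree (a real engine, of course, generates moves on the fly instead of storing the tree, but the pruning logic is the same):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Toy game-tree node: leaves carry a static evaluation from the
    // point of view of the side to move; inner nodes carry children.
    struct Node {
        int eval;
        std::vector<Node> children; // empty => leaf
    };

    // Negamax form of alpha-beta: returns the value of the position
    // for the side to move, pruning branches that cannot matter.
    int alphaBeta(const Node& n, int alpha, int beta) {
        if (n.children.empty())
            return n.eval;
        for (const Node& child : n.children) {
            int score = -alphaBeta(child, -beta, -alpha);
            if (score >= beta)
                return beta;              // cutoff: refutation found
            alpha = std::max(alpha, score);
        }
        return alpha;
    }

    int main() {
        // The root side has two moves, each answered by two replies.
        Node root{0, {
            Node{0, {Node{3, {}}, Node{5, {}}}},
            Node{0, {Node{2, {}}, Node{9, {}}}},
        }};
        // The leaf "9" is never visited: after seeing "2" the second
        // root move is already refuted.
        std::cout << alphaBeta(root, -1000, 1000) << "\n"; // prints 3
    }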

    Strictly speaking, the true score can take only three values: win, loss, or draw - 1, 0, or ½. By Zermelo's theorem, it is uniquely determined for any given position. In practice, due to the combinatorial explosion, no computer is able to search all the way to the leaves of the full game tree (exhaustive analysis in endgame tablebases is a separate case; 32-piece tables will not appear in the foreseeable future, and most likely never). Therefore, programs work in the so-called Shannon model: they use a truncated game tree and an approximate evaluation based on various heuristics.

    Search and evaluation do not exist independently of each other; they must be well balanced. Modern brute-force algorithms have long ceased to be a "dumb" enumeration of variations - they include numerous special rules, some of them tied to the evaluation of the position.

    The first such search improvements appeared at the dawn of chess programming, in the 1960s. One can mention, for example, the technique of forced variations (FV) - extending individual search branches until the position "calms down" (until the checks and mutual captures are over). Extensions significantly increase the tactical alertness of the computer and also make the search tree very heterogeneous: individual branches may be several times longer than neighboring, less promising ones. Other search improvements, on the contrary, prune or shorten the search - and there the criterion for discarding bad variations may, among other things, be that same static evaluation.

    Parametrizing and improving the search with machine learning methods is a separate interesting topic, but for now we leave it aside and consider only the evaluation function.

    How a computer evaluates a position


    The static evaluation is a linear combination of various position features taken with certain weighting coefficients. What are these features? First of all, the number of pieces and pawns on both sides. The next important feature is the placement of those pieces: centralization, occupation of open files and diagonals by the long-range pieces. Experience shows that taking into account only these two factors - the sum of the material and the relative value of the squares (fixed as a table for each piece type) - already provides, given a high-quality search, a playing strength of up to 2000-2200 Elo points. This is the level of a good first-category player or a candidate master.
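    As a rough sketch of such an evaluation (the numbers and the bare 64-square array are purely illustrative; a real engine is far more elaborate):

    #include <algorithm>
    #include <iostream>

    // Piece codes; positive = White, negative = Black.
    enum Piece { EMPTY = 0, WP = 1, WN, WB, WR, WQ, WK };

    // Material values in hundredths of a pawn, indexed by piece code.
    // The king gets 0: its material value cancels out anyway.
    const int VAL[7] = {0, 100, 320, 330, 500, 900, 0};

    // A tiny stand-in for a piece-square table: the bonus grows
    // toward the center of the board (0 at the edge, 30 in the middle).
    int centerBonus(int sq) {
        int f = sq % 8, r = sq / 8;
        return 5 * (std::min(f, 7 - f) + std::min(r, 7 - r));
    }

    // Static evaluation: material plus centralization of the knights.
    // board[0..63] in any fixed layout; score > 0 means White is better.
    int evaluate(const int board[64]) {
        int score = 0;
        for (int sq = 0; sq < 64; ++sq) {
            int p = board[sq];
            if (p == 0) continue;
            int sign = p > 0 ? 1 : -1;
            score += sign * VAL[sign * p];
            if (sign * p == WN)
                score += sign * centerBonus(sq);
        }
        return score;
    }

    int main() {
        int board[64] = {0};
        board[27] = WN;   // White knight in the center
        board[56] = -WN;  // Black knight in the corner
        std::cout << evaluate(board) << "\n"; // prints 30
    }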

    Further refinement of the evaluation involves ever more subtle features of a chess position: the presence and advancement of passed pawns, the proximity of pieces to the enemy king, the king's pawn cover, and so on. The legendary Kaissa, the first world champion among programs (1974), had an evaluation function with several dozen features. All of them are described in detail in the book "The Machine Plays Chess", a bibliographic reference to which is given at the end of the article.


    One of the most "sophisticated" evaluation functions belonged to Deep Blue, the machine that became famous for its matches with Kasparov in 1996-97. (A detailed history of these matches can be found in a recent series of articles on Geektimes.)

    It is widely believed that the power of Deep Blue rested solely on the enormous speed of searching variations. 200 million positions per second, a complete search (without cutoffs) 12 plies deep - chess programs on modern hardware are only now approaching such parameters. However, speed was not the whole story. In the volume of "chess knowledge" in its evaluation function, this machine was also far superior to everything else. Deep Blue's evaluation was implemented in hardware and included up to 8,000 different features. Strong grandmasters were involved in tuning its coefficients (the collaboration with Joel Benjamin is reliably documented, and David Bronstein played test games against different versions of the machine).

    Lacking resources like those of Deep Blue's creators, we will narrow the task. Of all the position features used to compute the evaluation, we take the most significant one: the balance of material on the board.

    The value of the pieces: simple models


    If you open any chess book for beginners, right after the chapter explaining the moves there is usually a table of the comparative value of the pieces, something like this:

    Piece     Value
    Pawn      1
    Knight    3
    Bishop    3
    Rook      5
    Queen     9
    King
    The king is sometimes assigned a finite value that is deliberately greater than the sum of all the material on the board - for example, 200 units. In this study we will leave His Majesty alone and not consider kings at all. Why? The answer is simple: they are always present on the board, so their material values cancel out and do not affect the overall balance of forces.

    The values quoted above should be treated only as rough guidelines. In reality, pieces "rise" and "fall" in price depending on the situation on the board and on the stage of the game. As a first-order correction, combinations of two or three pieces - one's own and the opponent's - are usually considered.

    This is how the third world champion, Jose Raul Capablanca, evaluated various combinations of material in his classic textbook of the game:

    From the point of view of general theory, the bishop and the knight should be considered equal in value, although in my opinion the bishop is in most cases the stronger piece. Meanwhile, it is considered firmly established that two bishops are almost always stronger than two knights.

    The bishop is stronger than the knight in play against pawns, and together with pawns it is also stronger than the knight in a fight against a rook. A bishop and a rook are also stronger than a knight and a rook, but a queen and a knight may be stronger than a queen and a bishop. A bishop is often worth more than three pawns, which can rarely be said of a knight; the knight may even be weaker than three pawns.

    A rook equals a knight and two pawns in strength, or a bishop and two pawns; but, as noted above, the bishop fights against a rook better than the knight does. Two rooks are slightly stronger than a queen; they are slightly weaker than two knights and a bishop, and still weaker than two bishops and a knight. The strength of the knights decreases as pieces are exchanged off; the strength of the rook, on the contrary, increases.

    Finally, as a rule, three minor pieces are stronger than the queen.


    It turns out that most of these rules can be satisfied while remaining within the linear model, simply by shifting the piece values slightly away from their "schoolbook" numbers. For example, one article gives the following boundary conditions:

    B > N > 3P
    B + N = R + 1.5P
    Q + P = 2R

    And values that satisfy them:

    P = 100
    N = 320
    B = 330
    R = 500
    Q = 900
    K = 20000


    The variable names correspond to the English notation of the pieces: P - pawn, N - knight, B - bishop, R - rook, Q - queen, K - king. Here and below, the values are given in hundredths of a pawn.
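    It is easy to verify the boundary conditions against these numbers directly:

    B = 330 > N = 320 > 3P = 300
    B + N = 330 + 320 = 650 = 500 + 150 = R + 1.5P
    Q + P = 900 + 100 = 1000 = 2 · 500 = 2R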

    In fact, the given set of values is not the only solution. Moreover, even violating some of the "Capablanca inequalities" will not lead to a sharp drop in the program's playing strength; it will only affect its stylistic profile.

    As an experiment, I held a small tournament between four versions of my GreKo engine with different piece weights and three other programs - each version played three matches of 200 games at an ultra-fast time control (1 second + 0.1 seconds per move). The results are shown in the table:

    Version      Pawn  Knight  Bishop  Rook  Queen  vs. Fruit 2.1  vs. Crafty 23.4  vs. Delfi 5.4  Rating
    GreKo 12.5   100   400     400     600   1200   61.0           76.0             71.0           2567
    GreKo A      100   300     300     500   900    55.0           69.0             73.0           2552
    GreKo B      100   320     330     500   900    57.0           71.0             64.0           2548
    GreKo C      100   325     325     550   1100   72.5           74.5             69.0           2575
    We see that such variations in the piece weights lead to fluctuations in playing strength within 20-30 Elo points. Moreover, one of the test versions even scored better than the main version of the program. However, it would be premature to conclude from so few games that the play has become stronger: the confidence interval of the rating estimate is itself comparable to several tens of Elo points.

    The "classical" values ​​of chess material were obtained intuitively by means of understanding by chess players of their practical experience. Attempts have also been made to bring some mathematical base under these values ​​- for example, on the basis of the mobility of the figures, the number of fields that they can control. We will try to approach the issue experimentally - based on the analysis of a large number of chess games. To calculate the cost of the pieces, we do not need an approximate assessment of the positions from these parties - only their results, as the most objective measure of success in chess.

    Material advantage and the logistic curve


    For the statistical analysis, a PGN file was taken containing almost 3,000 blitz games between 32 different chess engines rated between 1800 and 3000 Elo. Using a specially written utility, a list of the material ratios that arose on the board was compiled for each game. A material ratio was not recorded immediately after a capture or a pawn promotion: first the recaptures had to finish and a few "quiet" moves had to be made. In this way, short-lived "jumps of material" lasting one or two plies during exchanges were filtered out.
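    The exact filtering rule of the utility is not reproduced here; the sketch below shows one possible implementation, assuming a per-ply list of material vectors with capture/promotion flags has already been extracted from the game:

    #include <array>
    #include <iostream>
    #include <vector>

    using Material = std::array<int, 5>; // (dP, dN, dB, dR, dQ), White minus Black

    struct Ply {
        Material mat;   // material balance after this half-move
        bool tactical;  // capture or promotion on this half-move
    };

    // Record a material vector only after it has survived `quiet`
    // consecutive non-tactical half-moves, filtering out short-lived
    // "jumps" during exchanges.
    std::vector<Material> filterTactics(const std::vector<Ply>& game, int quiet = 2) {
        std::vector<Material> out;
        int calm = 0;
        for (const Ply& p : game) {
            calm = p.tactical ? 0 : calm + 1;
            if (calm == quiet && (out.empty() || out.back() != p.mat))
                out.push_back(p.mat);
        }
        return out;
    }

    int main() {
        std::vector<Ply> game = {
            {{0, 0, 0, 0, 0}, false},
            {{0, 0, 0, 0, 0}, false},  // quiet: equality is recorded here
            {{0, 1, 0, 0, 0}, true},   // NxN - a transient jump
            {{0, 0, 0, 0, 0}, true},   // ...recapture restores the balance
            {{0, 0, 0, 0, 0}, false},
            {{0, 0, 0, 0, 0}, false},  // settled, but unchanged: not duplicated
            {{1, 0, 0, 0, 0}, true},   // White wins a pawn
            {{1, 0, 0, 0, 0}, false},
            {{1, 0, 0, 0, 0}, false},  // the extra pawn has "settled": recorded
        };
        for (const Material& m : filterTactics(game)) {
            for (int v : m) std::cout << v << ' ';
            std::cout << '\n'; // prints "0 0 0 0 0" and "1 0 0 0 0"
        }
    }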

    Then, using the familiar 1-3-3-5-9 scale, the material balance of each position was calculated, and for every value of this balance (from -24 to +24) the number of points scored by White was accumulated. The resulting statistics are presented in the following graph:



    On the x-axis is the material balance ΔM of the position from White's point of view, in pawns. It is computed as the difference between the total value of all White pieces and pawns and the same quantity for Black. On the y-axis is the sample expectation of the game result (0 - Black wins, ½ - draw, 1 - White wins). We see that the experimental data are described very well by a logistic curve:

    p(\Delta M) = \frac{1}{1 + e^{-\alpha \Delta M}}

    A simple visual fit allows us to determine the parameter of the curve: α = 0.7; its dimension is inverse pawns.
    For comparison, the graph also shows two more logistic curves with other values of the parameter α.

    What does this mean in practice? Take a randomly chosen position in which White is two pawns up (ΔM = 2). With probability close to 80% we can say that the game will end in a win for White. Similarly, if White is a bishop or a knight down (ΔM = -3), their chances of avoiding defeat are only about 12%. Positions with material equality (ΔM = 0), as one would expect, most often end in a draw.
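    Indeed, substituting into the curve with α = 0.7:

    p(+2) = 1 / (1 + e^{-1.4}) ≈ 0.80
    p(-3) = 1 / (1 + e^{2.1}) ≈ 0.11
    p(0) = 1 / (1 + e^{0}) = ½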

    Formulation of the problem


    Now we are ready to formulate the problem of optimizing the evaluation function in terms of logistic regression.
    Let us be given a set of vectors of the following form:

    x_j = (\Delta_P, \Delta_N, \Delta_B, \Delta_R, \Delta_Q)_j

    where Δ_i, i = P…Q, is the difference between the number of White and Black pieces of type i (from pawn to queen; we do not consider the king). These vectors are the material ratios encountered in the games (several vectors usually correspond to a single game).

    Let there also be given a vector y whose components y_j take the values 0, 1, and 2. These values correspond to the outcomes of the games: 0 - a Black win, 1 - a draw, 2 - a White win.

    It is required to find the vector θ of piece values:

    \theta = (\theta_P, \theta_N, \theta_B, \theta_R, \theta_Q)

    minimizing the cost function for the logistic regression:

    J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)} \log(h_\theta(x^{(i)})) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)}))\right],

    where

    h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}

    is the logistic function applied to a vector argument.

    To prevent overfitting and instability in the solution found, a regularization term can be added to the cost function, preventing the coefficients of the vector θ from taking excessively large values:

    J_{reg}(\theta) = J(\theta) + \frac{\lambda}{2m}\sum_{j=1}^{5}\theta_j^2

    The regularization coefficient is chosen to be small; in this case the value λ = 10⁻⁶ was used.

    To solve the minimization problem, we apply the simplest gradient descent method with a constant step:

    \theta_{n+1} \leftarrow \theta_n - \alpha \nabla J_{reg}(\theta_n)

    where the components of the gradient of J_reg have the form:

    (\nabla J_{reg})_0 = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x_0^{(i)}

    (\nabla J_{reg})_j = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x_j^{(i)} + \frac{\lambda}{m}\theta_j

    Since we are looking for a symmetric solution that yields an outcome probability of ½ at material equality, we always keep the zero (bias) coefficient of the vector θ equal to zero, so only the second of these expressions for the gradient is actually needed.
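    For illustration, here is a minimal self-contained sketch of this procedure (it is not the pgnlearn source; in particular, it assumes the {0, 1, 2} outcomes have been rescaled to {0, ½, 1} to match the range of the logistic function, and the bias coefficient is kept at zero, as discussed above):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    const int N = 5; // P, N, B, R, Q

    struct Sample {
        double x[N]; // components of the material balance
        double y;    // outcome: 0 (Black wins), 0.5 (draw), 1 (White wins)
    };

    // Logistic function of the linear combination theta^T x.
    double h(const double* theta, const double* x) {
        double z = 0;
        for (int j = 0; j < N; ++j) z += theta[j] * x[j];
        return 1.0 / (1.0 + std::exp(-z));
    }

    // One step of batch gradient descent with L2 regularization.
    void step(double* theta, const std::vector<Sample>& data,
              double alpha, double lambda) {
        double grad[N] = {0};
        const double m = data.size();
        for (const Sample& s : data) {
            double err = h(theta, s.x) - s.y;
            for (int j = 0; j < N; ++j) grad[j] += err * s.x[j] / m;
        }
        for (int j = 0; j < N; ++j) {
            grad[j] += lambda / m * theta[j]; // regularization term
            theta[j] -= alpha * grad[j];
        }
    }

    int main() {
        // A tiny made-up dataset: (dP dN dB dR dQ) -> outcome.
        std::vector<Sample> data = {
            {{ 1,  0, 0, 0, 0}, 1.0},
            {{ 2,  0, 0, 0, 0}, 1.0},
            {{ 0, -1, 0, 0, 0}, 0.0},
            {{ 0,  0, 0, 0, 0}, 0.5},
            {{-1,  0, 0, 0, 0}, 0.0},
        };
        double theta[N] = {0};
        for (int iter = 0; iter < 10000; ++iter)
            step(theta, data, 0.1, 1e-6);
        for (int j = 0; j < N; ++j) // normalize: pawn = 100
            std::printf("%.1f ", 100.0 * theta[j] / theta[0]);
        std::printf("\n");
    }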

    We will not derive these formulas here. To anyone interested in their justification, I warmly recommend the machine learning course on Coursera already mentioned above.

    Program and results


    Since the first part of the task - parsing PGN files and extracting the feature set for each position - was already essentially implemented in the chess engine's code, the rest was written in C++ as well. The source code of the program and the test games in PGN files are available on github. The program can be built and run under Windows (MSVC) or Linux (gcc).

    Provision is also made for later use of specialized tools such as Octave, MATLAB, R, etc.: along the way, the program generates an intermediate text file with the feature sets and game outcomes, which is easy to import into those environments.

    The file contains a textual representation of the set of vectors x_j - a matrix of dimension m × (n + 1), whose first 5 columns hold the components of the material balance (from pawn to queen) and whose 6th column holds the result of the game.

    Consider a simple example. Below is the PGN record of one of the test games.

    [Event "OpenRating 31"]
    [Site "BEAR-HOME"]
    [Date "2013.05.09"]
    [Round "1"]
    [White "Simplex 0.9.7"]
    [Black "IvanHoe 999946f"]
    [Result "0-1"]
    [TimeControl "60+1"]
    [PlyCount "96"]
    1. d4 d5 2. c4 e6 3. e3 c6 4. Nf3 Nd7 5. Nbd2 Nh6 6. e4 Bb4 7. a3 Ba5 8.
    cxd5 exd5 9. exd5 cxd5 10. Qe2+ Kf8 11. Qb5 Nf6 12. Bd3 Qe7+ 13. Kd1 Bb6
    14. Re1 Bd7 15. Qb3 Be6 16. Re2 Qc7 17. Qb4+ Kg8 18. Nb3 Bf5 19. Bb1 Bxb1
    20. Rxb1 Nf5 21. Bd2 a5 22. Qa4 h6 23. Rc1 Qb8 24. Bxa5 Qf4 25. Qb4 Bxa5
    26. Nxa5 Kh7 27. Nxb7 Rab8 28. a4 Ne4 29. h3 Rhc8 30. Ra1 Rc7 31. Qa3 Rcxb7
    32. g3 Qc7 33. Rc1 Qa5 34. Rxe4 dxe4 35. Rc5 Qa6 36. Nd2 Nxd4 37. Rc4 Nb3
    38. Nxb3 Qxc4 39. Nd2 Rd8 40. Qc3 Qf1+ 41. Kc2 Qe2 42. f4 e3 43. b4 Rc7 44.
    Kb3 Qd1+ 45. Ka2 Rxc3 46. Nb1 Qxa4+ 47. Na3 Rc2+ 48. Ka1 Rd1# 0-1

    The corresponding fragment of the intermediate file has the form:

     0  0  0  0  0  0
     1  0  0  0  0  0
     2  0  0  0  0  0
     2 -1  0  0  0  0
     2  0  0 -1  0  0
     1  0  0 -1  0  0
     1  1  0 -2  0  0

    In the 6th column there is a 0 everywhere - the result of the game, a win for Black. The remaining columns show the balance of pieces on the board. The first line is complete material equality: all components are 0. The second line is an extra pawn for White - the position after the 24th move. Note that the preceding exchanges are not reflected in any way: they passed too quickly. After the 27th move White is already two pawns up - that is line 3. And so on. Before Black's final attack, White has a pawn and a knight for two rooks:

    3r4 / 2r2ppk / 7p / 8 / PP3P2 / 1KQ1p1PP / 3Nq3 / 8

    Like the opening moves, the final moves of the game did not affect the contents of the file: they were eliminated by the "tactics filter", since they consisted of a series of captures, checks, and evasions.

    Similar records are produced for all the analyzed games - on average 5-10 lines per game. After the PGN database has been parsed, this file is fed to the input of the second part of the program, which actually solves the minimization problem.
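    Reading this intermediate file back is straightforward; a possible sketch, assuming the whitespace-separated six-column format shown above (the file name is taken from the program output below):

    #include <fstream>
    #include <iostream>
    #include <vector>

    struct Sample {
        double x[5]; // material balance: P, N, B, R, Q
        double y;    // game result from the 6th column
    };

    // Read the m x 6 matrix produced by the first stage of the program.
    std::vector<Sample> loadMatrix(const char* path) {
        std::vector<Sample> data;
        std::ifstream in(path);
        Sample s;
        while (in >> s.x[0] >> s.x[1] >> s.x[2] >> s.x[3] >> s.x[4] >> s.y)
            data.push_back(s);
        return data;
    }

    int main() {
        std::cout << loadMatrix("OpenRating.mat").size() << " samples loaded\n";
    }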

    As a starting point for the gradient descent one could, for example, take a vector with the textbook piece weights. But it is more interesting not to give the algorithm any hints and to start from zero. It turns out that our cost function is quite "good": the trajectory quickly, within a few thousand steps, reaches the global minimum. How the piece values change along the way is shown in the following graph (at every step the weights were normalized to a pawn value of 100):



    Cost function convergence graph

    Program output:
    C:\CHESS>pgnlearn.exe OpenRating.pgn
    Reading file: OpenRating.pgn
    Games: 2997
    Created file: OpenRating.mat
    Loading dataset...
    [ 20196 x 5 ]
    Solving (gradient method)...
    Iter 0: [ 0 0 0 0 0 ] -> 0.693147
    Iter 1000: [ 0.703733 1.89849 2.31532 3.16993 6.9148 ] -> 0.470379
    Iter 2000: [ 0.735853 2.08733 2.51039 3.47418 7.7387 ] -> 0.469398
    Iter 3000: [ 0.74429 2.13676 2.56152 3.55386 7.95879 ] -> 0.46933
    Iter 4000: [ 0.746738 2.15108 2.57635 3.57697 8.02296 ] -> 0.469324
    Iter 5000: [ 0.747467 2.15535 2.58077 3.58385 8.0421 ] -> 0.469324
    Iter 6000: [ 0.747685 2.15663 2.58209 3.58591 8.04785 ] -> 0.469324
    Iter 7000: [ 0.747751 2.15702 2.58249 3.58653 8.04958 ] -> 0.469324
    Iter 8000: [ 0.747771 2.15713 2.58261 3.58672 8.0501 ] -> 0.469324
    Iter 9000: [ 0.747777 2.15717 2.58265 3.58678 8.05026 ] -> 0.469324
    Iter 10000: [ 0.747779 2.15718 2.58266 3.58679 8.0503 ] -> 0.469324
    PIECE VALUES:
    Pawn:   100
    Knight: 288.478
    Bishop: 345.377
    Rook:   479.66
    Queen:  1076.56
    Press ENTER to finish
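    The final "PIECE VALUES" block is simply the last iteration rescaled so that the pawn equals 100; for the knight, for example, 100 · 2.15718 / 0.747779 ≈ 288.48, which is exactly the value printed above.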

    After normalization and rounding we obtain the following set of values:

    Piece     Value
    Pawn      100
    Knight    288
    Bishop    345
    Rook      480
    Queen     1077
    King
    Let us check whether the Capablanca rules hold:

    Rule                Numerical values          Holds?
    B > N               345 > 288                 yes
    B > 3P              345 > 300                 yes
    N > 3P              288 < 300                 no
    B + N = R + 1.5P    345 + 288 ≈ 480 + 150     yes (error < 0.5%)
    Q + P = 2R          1077 + 100 > 960          no
    The result is quite encouraging. Knowing nothing about the actual events on the board, considering only the outcomes of the games and the material removed from the board, our algorithm managed to derive piece values fairly close to the traditional ones.

    Can the values obtained be used to strengthen the program's play? Alas, at this stage the answer is no. Test blitz matches show that GreKo's playing strength did not change with the new parameters, and in some cases even decreased. Why did this happen? One obvious reason is the close relationship, already mentioned, between search and position evaluation. The engine's search incorporates a whole series of heuristics for pruning unpromising branches, and the criteria for those cutoffs (the threshold values) are closely tied to the static evaluation. By changing the piece values we sharply shift the scale of these quantities - the shape of the search tree changes, and the constants of all the heuristics need rebalancing. That is a rather laborious task.

    An experiment with human games


    Let us try to expand the experiment by considering the games not only of computers but also of humans. As training data we take the games of two outstanding modern grandmasters - world champion Magnus Carlsen and ex-champion Viswanathan Anand - as well as those of Adolf Anderssen, a representative of 19th-century romantic chess.

    Anand and Carlsen
    Anand and Carlsen vie for the world crown

    The table below shows the results of solving the regression problem for the games of these chess players.
    Piece     Anand   Carlsen   Anderssen
    Pawn      100     100       100
    Knight    216     213       286
    Bishop    230     243       289
    Rook      355     352       531
    Queen     762     786       1013
    King
    It is easy to see that the "human" piece values turned out to be quite different from those the textbooks teach beginners. In the case of Carlsen and Anand, the compressed scale is striking: the queen is worth a little more than 7.5 pawns, and the whole range for the other pieces has shrunk accordingly. The bishop is still slightly dearer than the knight, but neither reaches the traditional three pawns. Two rooks are weaker than a queen, and so on.

    It must be said that a similar picture is observed not only for Vishy and Magnus but for most of the grandmasters whose games were examined. And no dependence on style could be detected: the values are shifted from the classical ones in the same direction both for positional masters like Mikhail Botvinnik and Anatoly Karpov and for attacking players like Mikhail Tal and Judit Polgar...

    One of the few exceptions was Adolf Anderssen - the best European player of the mid-19th century and the author of the famous "Evergreen Game". For him, the piece values turned out to be very close to those used by computer programs. All sorts of fantastic hypotheses suggest themselves, such as secret cheating by the German maestro through a portal in time... (A joke, of course. Adolf Anderssen was an extremely decent man and would never have allowed himself such a thing.)

    Adolf Anderssen
    Adolf Anderssen (1818-1879),
    the man-computer


    Why does this compression of the range of piece values occur? Of course, we must not forget the extreme limitations of our model: taking additional positional factors into account could introduce significant corrections. But perhaps the point is that humans are weaker at converting a material advantage - relative to modern chess programs, of course. Simply put, it is hard for a human to play the queen accurately: it has too many possibilities. A textbook anecdote comes to mind about Lasker (in other versions Capablanca/Alekhine/Tal), who allegedly played a game at odds, incognito, against a chance fellow passenger on a train. The punchline was: "The queen only gets in the way!"

    Conclusion


    We have examined one aspect of the evaluation function of chess programs - the value of the material. We saw that this part of the static evaluation in the Shannon model has an entirely "physical" meaning: it is smoothly related, through the logistic function, to the probability of the game's outcome. We then looked at several common sets of piece weights and estimated the order of their influence on a program's playing strength.

    Using the apparatus of regression on the games of various chess players, both human and computer, we determined the optimal piece values under the assumption of a purely material evaluation function. We discovered a curious effect - material is valued lower by humans than by machines - and "suspected" one of the chess classics "of cheating". We tried to apply the values found in a real engine and... did not achieve much success.

    Where to go from here? For a more accurate evaluation of the position, new chess knowledge can be added to the model - that is, the dimension of the vectors x and θ can be increased. Even remaining within purely material criteria (ignoring the squares occupied by the pieces on the board), one can add a whole series of relevant features: the bishop pair, the queen-and-knight pair, the rook-and-bishop pair, opposite-colored bishops, the last pawn in the endgame... Chess players know well how the value of the pieces can depend on their combination or on the stage of the game. In chess programs the corresponding weights (bonuses or penalties) can reach tenths of a pawn or more. A possible form for one such feature is sketched below.
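    For instance (a hypothetical sketch, not something implemented in pgnlearn), a bishop-pair feature could be appended to the material vector like this; its weight would then be learned together with the piece values:

    #include <iostream>
    #include <vector>

    // Piece counts for one side: pawns, knights, bishops, rooks, queens.
    struct Counts { int p, n, b, r, q; };

    // Extended feature vector: the five material differences plus a
    // "bishop pair" component (+1 if only White has two bishops,
    // -1 if only Black does, 0 otherwise).
    std::vector<int> features(const Counts& white, const Counts& black) {
        std::vector<int> x = {white.p - black.p, white.n - black.n,
                              white.b - black.b, white.r - black.r,
                              white.q - black.q};
        x.push_back((white.b >= 2) - (black.b >= 2));
        return x;
    }

    int main() {
        // White: 7 pawns and both bishops; Black: 8 pawns, bishop and knight.
        Counts white{7, 1, 2, 2, 1}, black{8, 2, 1, 2, 1};
        for (int v : features(white, black)) std::cout << v << ' ';
        std::cout << '\n'; // prints: -1 -1 1 0 0 1
    }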

    One possible approach (along with increasing the sample size) is to train on games played by a previous version of the same program. In that case there is hope for greater consistency between the various features of the evaluation. One could also use as the objective not the success of predicting the game's outcome (which may come several dozen moves after the position in question), but the correlation of the static evaluation with the dynamic one - that is, with the result of an alpha-beta search to a certain depth.

    However, as noted above, the results obtained may be unsuitable for directly strengthening the program. It often happens like this: after training on a set of tests, the program gets better at solving the tests (in our case, at predicting game results) but does not play any better! Intensive testing through practical play has now become the mainstream in chess programming: new versions of the top engines are checked on tens and hundreds of thousands of games at ultra-short time controls...

    In any case, I plan to conduct a number of further experiments in the statistical analysis of chess games. If the topic interests the Habr audience, the article may be continued once any non-trivial results are obtained.

    In the course of this research, not a single chess piece was harmed.

    Bibliography


    Adelson-Velsky, G.M.; Arlazarov, V.L.; Bitman, A.R. et al. The Machine Plays Chess. Moscow: Nauka, 1983.
    A book by the authors of the Soviet program Kaissa, describing in detail both the general algorithmic foundations of chess programs and the specific implementation details of Kaissa's evaluation function and search.

    Kornilov, E. Programming Chess and Other Logic Games. St. Petersburg: BHV-Petersburg, 2005.
    A more modern and "practice-oriented" book, containing a large number of code examples.

    Feng-hsiung Hsu. Behind Deep Blue. Princeton University Press, 2002.
    A book by one of the creators of the Deep Blue chess machine, telling in detail the story of its creation and its internal structure. The appendix contains all the games played by Deep Blue in official competitions.

    References


    Chessprogramming Wiki - an extensive collection of materials on all theoretical and practical aspects of chess programming.

    Machine Learning in Games - a site dedicated to machine learning in games. It contains a large number of scientific articles on research in chess, checkers, Go, Reversi, backgammon, etc.

    Kaissa - a page dedicated to Kaissa, with a detailed presentation of the coefficients of its evaluation function.

    Stockfish - the strongest open-source engine today.

    A comparison of Rybka 1.0 beta and Fruit 2.1
    A detailed comparison of the internal structure of two popular chess programs.

    GreKo - the author's chess program.
    It served as one of the sources of test computer games; its move generator and PGN parser also formed the basis of the utility for analyzing the experimental data.

    pgnlearn - the code of the utility and sample game files on github.
