AlphaGo vs. Ke Jie: Professional Go Players' Assessments

    In March 2016, one of the strongest Go players in the world lost to a computer system for the first time in a game played without a handicap. Until then, a machine win with a four-stone handicap had been considered the best result, and play on even terms was thought to be years away - perhaps sometime in the next decade. Then AlphaGo, a system from the British company DeepMind, appeared on the scene and beat Lee Sedol, one of the most celebrated players of recent years, 4-1.

    A year ago the South Korean 9-dan professional lost to the system from Google's subsidiary, and in the eyes of many, Go moved into the category of games in which machines play stronger than the best humans. After that, AlphaGo barely made the news. Then, in April of this year, DeepMind made an announcement: AlphaGo would play Ke Jie, the player on the top line of the world ratings. Ke Jie himself had declared his intention to play against the AI back last summer, but the exact dates of the match were announced only this year. DeepMind also promised that the program would additionally play against five masters at once.

    The games took place as scheduled, and their outcome showed conclusively that AlphaGo plays far above human level. The fourth game of the Lee Sedol - AlphaGo match will probably remain the last human victory over this AI: at the end of the summit, the developers announced that the system was retiring from Go.

    We discussed the strength of this version of the program, and the future relationship between humans and computer systems, with two professional players.

    In the photo: five Go masters are almost ready to admit defeat - their opponent, the AlphaGo system, has begun to play lazily, as if anticipating victory.

    Why does an American company need an Asian board game?




    Google proudly shows off the achievements of its British division DeepMind. The search giant bought the company in 2014 for about half a billion US dollars. DeepMind Technologies Limited focuses on artificial intelligence: its specialists use machine learning to create algorithms that, for example, learn to play video games at a superhuman level. The work is not limited to Atari 2600 arcade classics - DeepMind is also trying to apply its systems to the analysis of medical data.

    In any case, virtually all of Google's business is built on smart systems in one way or another. In 2016, 88% of Alphabet's revenue came from advertising: Google AdWords, YouTube video ads and other tools. Contextual advertising depends on correctly guessing a user's interests, so the Internet giant collects and analyzes enormous amounts of user data. There are many projects with elements of AI: the voice assistant in Android and the Google Home speaker, the camera algorithms of the latest Pixel smartphone, neural machine translation in Google Translate, the machine vision of Waymo's self-driving cars. Even the smartphone keyboard can claim to be smart.

    AlphaGo can be seen as a showcase of the strength of the search giant's AI efforts, so it is no surprise that the matches are high-profile events. Last time the games were held at the Four Seasons Hotel in Seoul. This time Google, together with the Chinese Weiqi Association and a local sports bureau, organized the Future of Go Summit in Wuzhen, Zhejiang Province - a notable collaboration with the Chinese authorities, given that Google services are blocked in mainland China. The stakes went up as well: Lee Sedol stood to win a million dollars; Ke Jie was promised one and a half.


    Noah Sheldon, Wired

    Training


    Twenty years ago Kasparov lost to the Deep Blue chess computer for the first time. A decade later, the superiority of computer algorithms over humans in chess was no longer in question. Why had computers not conquered the Asian game sooner?

    On the surface, Go looks simpler: two players place stones of two colors, each trying to fence off more territory on the board than the opponent. But on a standard 19 × 19 board the number of possible stone configurations is roughly a googol (10^100) times larger than the number of piece positions in chess. That alone rules out brute-force search.
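    A rough back-of-the-envelope version of that comparison, using commonly cited estimates rather than figures from the article (the chess number in particular is only an order-of-magnitude bound):

```latex
\[
3^{361} \approx 1.7\times 10^{172}
\quad\text{(all black/white/empty assignments on a } 19\times 19 \text{ board)}
\]
\[
N_{\text{Go}} \approx 2\times 10^{170}\ \text{legal positions},\qquad
N_{\text{chess}} \lesssim 10^{47}\ \text{positions}
\]
\[
\frac{N_{\text{Go}}}{N_{\text{chess}}} \gtrsim 10^{123} \;>\; 10^{100}\ (\text{a googol})
\]
```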

    A game of Go also lasts more moves than a game of chess, and soon after the opening almost all 361 points on the board have to be taken into account. The opening moves - the fuseki - quickly lead to something original. In chess, pieces gradually leave the board; in Go, stones keep being added to it. Many of the algorithms typical of chess engines simply do not apply. Before AlphaGo, the best systems combined expert databases of good moves with tree search and Monte Carlo evaluation of positions, but even they could beat only amateurs, not professionals.
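    The Monte Carlo idea behind those pre-AlphaGo engines can be sketched in a few lines: estimate a move by playing it and then finishing the game with random moves many times, and keep the move with the best win rate. The GoState interface below (copy, legal_moves, play, is_over, winner, to_move) is a hypothetical stand-in, and real engines such as CrazyStone layered full tree search and pattern-guided playouts on top of this, so treat it as a minimal illustration only.

```python
import random

def rollout_win_rate(state, move, n_playouts=100):
    """Estimate a move's strength by pure Monte Carlo: play the move,
    then finish the game with uniformly random moves, many times."""
    player = state.to_move
    wins = 0
    for _ in range(n_playouts):
        sim = state.copy()
        sim.play(move)
        while not sim.is_over():
            sim.play(random.choice(sim.legal_moves()))
        if sim.winner() == player:
            wins += 1
    return wins / n_playouts

def choose_move(state, n_playouts=100):
    # Pick the legal move with the highest estimated win rate.
    return max(state.legal_moves(),
               key=lambda m: rollout_win_rate(state, m, n_playouts))
```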

    DeepMind improved on that approach: to Monte Carlo tree search it added policy and value neural networks, trained first on 160 thousand games from the KGS server and then on games the system played against itself. The result was first compared with other computer programs, and then with a human: three-time European champion Fan Hui played one of the early versions of AlphaGo and lost all five games.
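    The published AlphaGo paper describes how those two networks plug into the tree search: the policy network supplies a prior probability for each move, the value network (mixed with rollouts) evaluates leaf positions, and the search picks children by a PUCT-style rule that trades off the running value estimate against the prior. A minimal sketch of that selection rule, with illustrative names of my own choosing and the training and leaf-evaluation machinery omitted, might look like this:

```python
import math

class Edge:
    """Statistics for one move out of a position: the policy network's
    prior P(s, a), the visit count N(s, a) and the accumulated value."""
    def __init__(self, prior):
        self.prior = prior
        self.visits = 0
        self.value_sum = 0.0

    def q(self):
        # Mean of the leaf evaluations (value network / rollouts) seen so far.
        return self.value_sum / self.visits if self.visits else 0.0

def select_move(edges, c_puct=1.0):
    """PUCT selection: exploit moves with a high running value Q while the
    policy prior steers exploration toward moves the network likes."""
    total_visits = sum(e.visits for e in edges.values())
    def score(item):
        move, edge = item
        u = c_puct * edge.prior * math.sqrt(total_visits) / (1 + edge.visits)
        return edge.q() + u
    return max(edges.items(), key=score)[0]
```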

    That was only the beginning of testing the AI against humans. The level of Go in Europe is lower than in Asia, the game's homeland, so only after defeating the famous Korean Lee Sedol could AlphaGo be recognized as a system that plays better than a human.

    In some ways the Future of Go Summit echoed the events of 20 years ago. Just as IBM pitted a specialized chess computer against a human, Google demonstrated not only the strength of its software but also its specialized machine-learning chip, the Tensor Processing Unit (TPU).

    Convolutional neural networks are usually run on graphics accelerators: video cards connected over PCIe and originally intended for rendering game graphics. The architecture of a GPU - hundreds or thousands of cores on a single chip - suits highly parallel workloads, where general-purpose CPUs perform worse. Both AMD and Nvidia produce specialized professional "number crunchers", and Intel is catching up with its Xeon Phi line of coprocessors.


    One version of the TPU.

    Google claims to have created an application-specific integrated circuit (ASIC) on which far more powerful neural-network architectures can be built. On machine-learning workloads the TPU can outperform conventional GPUs and CPUs by up to 70 times, and by up to 196 times in computations per watt.

    Last March an entire computing cluster played against Lee Sedol. Before and during the match, the media repeated that it consisted of 1,920 CPU cores and 280 GPUs. There was no way to verify what was actually behind it - AlphaGo ran in the Google cloud, with a dedicated fiber line installed at the Four Seasons Hotel. Only in May 2016 did Google admit that it had been using TPUs in its data centers for more than a year. Behind the scenes, the AlphaGo Lee version ran on 50 TPU boards.


    The servers that beat the Korean 9-dan master.

    Against Ke Jie, Google fielded a single machine with just one TPU module. Did that make things easier for Ke Jie?

    Game one



    Reuters / Stringer, Quartz

    Recording of the broadcast in English
    Recording of the broadcast with commentary in Russian (Ilya Shikshin and Ksenia Lifanova)
    Game moves

    Starting around December 29, 2016, an unexpectedly strong player appeared on the Korean server Tygem and the Chinese server Fox. He called himself Magister (or Magist), then changed his nickname to Master and kept crushing opponents. The 9-dan professional Gu Li offered a reward of 100 thousand yuan (about $14.4 thousand) to anyone who could beat Master. But the unknown player knew no defeat, winning 60 games in a row against high-level professionals. Both Chinese and Korean developers had been trying to replicate and surpass AlphaGo's success - could this be one of their attempts?

    On January 4, 2017, DeepMind head Demis Hassabis confirmed that Magister/Master was a test version of AlphaGo being tried out in unofficial online matches, and thanked everyone who had taken part in the test. Notably, even then Ke Jie lost three games to AlphaGo.

    The AlphaGo version that played over the New Year period ran on a single machine with only one TPU board. On May 23, roughly the same configuration was fielded against Ke Jie. It "thought" 50 moves ahead and managed to consider 100 thousand moves per second.

    On May 23, in the first game, Ke Jie chose the black stones, which in Go means moving first. He opened with an unusual strategy clearly borrowed from AlphaGo's January matches. Commentators noted that Ke Jie had also been adopting Master's strategies in ordinary games against people - those 60 matches had a broad impact on the Go world. AlphaGo responded confidently and played faster than expected. Three and a half hours in, commentators began saying that Ke Jie had little chance of recovering. An hour later he conceded. AlphaGo won by a margin of just half a point.

    AlphaGo kept playing against itself and learning. According to DeepMind, the version that played Lee Sedol was three stones stronger than the version that played Fan Hui (that is, the weaker version would need a three-stone handicap to play on even terms), and the AlphaGo Master version is in turn three stones stronger than AlphaGo Lee.

    After the first game, Ke Jie said that he would never again play the machine one on one, and that the future belongs to AI. In a message titled "The Last Battle", the 19-year-old champion admitted that the soulless computer system might be smarter than him, but said he would not give up.

    Did Ke Jie try to play in some special way against the computer? According to seven-time European champion Alexander Dinerstein (3-dan professional, 7-dan EGF), those attempts did not succeed:

    “Like Lee Sedol in the first game of his match, Ke tried to play unconventionally in the opening, to force the computer to think for itself. But the difference between Go and chess is that in chess openings are worked out dozens of moves deep, while in Go the very first moves can already produce a completely unique position never before seen in practice. In chess we call these 'irregular openings' and try to punish the opponent for unconventional play. In Go it is not the player with the best memory who wins, but the one who understands the game better.”

    Ilya Shikshin (1-dan professional, 7-dan EGF) believes that in the first game Ke Jie expected a “human” style from his opponent:

    “In the first game Ke Jie tried to play as he would against a human. It quickly became clear that Ke was behind and had no chance of catching up. In the second and third games Ke tried to complicate the position on the board as much as possible in order to confuse the program. But provoking AlphaGo into a mistake did not work.”

    Game two



    Recording of the broadcast in English
    Recording of the broadcast with commentary in Russian (Ilya Shikshin and Timur Sankin)
    Game moves

    Ke Jie sits on the first line of the ratings - on Go Ratings, for example. That position naturally makes him the benchmark for the strength of computer Go systems: to show that a machine plays stronger than humanity, it is enough to beat the strongest human. That is what AlphaGo needed to do over three games to go down in history for good. Before the Future of Go Summit, Ke Jie had already played a number of unofficial games against both AlphaGo itself and another, less well-known algorithm.

    In March of this year, the FineArt program took first place at the 10th computer Go championship held by Japan's University of Electro-Communications (the Computer Go UEC Cup). It won all 11 of its games against other programs, beating such well-known engines as the French CrazyStone, Facebook's Darkforest and the Japanese Deep Zen Go. FineArt then played the 7-dan professional Ichiriki Ryo on even terms, without handicap stones, and won.

    Notably, AlphaGo skips such computer-versus-computer competitions. DeepMind apparently prefers lavish events with games against the most eminent human players, leaving evaluation against other programs to internal testing.

    FineArt is developed by the AI lab of the Chinese internet giant Tencent, known for its web portals, the QQ and WeChat messengers, the TenPay payment system and other products. Tencent began developing FineArt in March 2016, just before the landmark AlphaGo - Lee Sedol match. FineArt works much like AlphaGo: the same policy and value neural networks, trained on large datasets and on games against itself, that DeepMind described in the scientific paper accompanying the first publications about AlphaGo. Early versions of Tencent's system played on the Chinese Fox server under various pseudonyms before the name FineArt (绝艺, a phrase from an ancient Chinese poem) was settled on.

    Ke Jie first played against FineArt on foxwq.com last November; back then he both won and lost games. The updated FineArt version from February 2017 left him no chance: Ke Jie lost 10 times in a row.

    On May 25, the second game between Ke Jie and AlphaGo took place. This time the computer played black and moved first, which, according to DeepMind, gives the human opponent a slight advantage because of peculiarities of the system. Ke Jie is known for his strength with white. Even so, he spent much of the game under visible strain, twisting locks of his hair between thumb and forefinger. At the press conference afterwards he compared AlphaGo to a god of Go.


    The human made strong decisions in the first half of the game. Hassabis wrote on his microblog that, by AlphaGo's own assessment, Ke Jie was playing perfectly, and at the press conference after the game Ke Jie said he had seen a chance to win. But in the second half he began to lose ground. By the third hour of the game he had used roughly twice as much thinking time as AlphaGo. By the fourth hour, AlphaGo had simplified the complex positions the human had created early in the game, which commentators took as a bad sign for Ke Jie. Fifteen minutes later, he conceded.

    With two defeats for Ke Jie, AlphaGo had already won the three-game series; the third game could only show just how much stronger the machine is than the man. Interestingly, media coverage in the host country amounted to de facto censorship. Two days before the first game the state television crews disappeared, and during the games no broadcasts were available in China. The only Chinese-language video stream ran on YouTube, which is blocked in the country. When local media did mention the games against the AI, they avoided the word “Google”.

    Game three


    The Future of Go Summit had the artificial intelligence play more than just Ke Jie. On May 26, AlphaGo helped two masters play against each other in the morning, and faced a group of humans in the afternoon. With the first, Google apparently wanted to hint at how much machine learning can help people; with the second, at how AI can be smarter than humanity as a whole.


    Gu Li during the pair Go game, Google

    Recording of the video broadcast in English
    Pair Go game moves

    In pair Go, stones are placed on the board in turn by four players instead of two, and consultation between teammates is forbidden. Here each team consisted of a human and a computer. First Gu Li (9-dan professional) moved for black, then Lian Xiao (8-dan professional) for white; after that their partners - identical versions of AlphaGo - each made a move, and the cycle repeated. Black won this game.


    The five players who took on AlphaGo. From left to right: Shi Yue, Mi Yuting, Tang Weixing, Chen Yaoye, Zhou Ruiyang, Google

    Team game moves

    In the team game, five 9-dan professional masters placed the black stones, while their opponent - a single copy of AlphaGo - played white. The humans were allowed to confer and discuss their moves, and the team clearly enjoyed itself, though it kept to a balanced style of play. AlphaGo won here as well.



    Recording the broadcast in English
    Recording of the broadcast with commentary in Russian (Ilya Shikshin)
    Third game moves

    The last game against Ke Jie took place on May 27. AlphaGo again played the black stones. And again the board saw some remarkable combinations of stones - AlphaGo's unusual seventh move, for example. Despite all of Ke Jie's efforts, after three and a half hours he had to concede the final game.

    The future of AlphaGo


    Neither a single human nor a whole team of five masters could beat the new version of the AI. Clearly this is not the same program DeepMind showed in March 2016. Alexander Dinerstein describes the changes to the system this way:

    “It has become even less predictable. The reason is that the previous version learned from human games, while the current one learned from games of AlphaGo against AlphaGo. It played itself millions of times, drawing conclusions and learning from its mistakes. And while chess programs rely on databases of human games (Kramnik, for example, insisted that the computer not use them in his match), AlphaGo could not care less about everything we have ever invented in Go. It has no need of our opening theory. From humans it took only the rules of Go. Everything else is its own experience.”

    Answering the same question, Ilya Shikshin notes that what has changed most is the program's efficiency:

    “With the version of AlphaGo that played Lee Sedol, it still seemed that a flawless game could win; against the current version of AlphaGo, a human cannot win. AlphaGo's playing style has not fundamentally changed - it squeezes the maximum out of every position. It just does so even more efficiently now.”

    Could there be someone else in the world - not necessarily anyone specific - capable of beating AlphaGo? According to Dinerstein, the advantage is firmly on the machine's side:

    “AlphaGo is not what we saw in the match with Lee Sedol. Its creators say it now plays three stones stronger. By chess standards that is roughly a knight handicap. And while it was still possible to fight the previous version, now the question is closed. There is no one in the world capable of beating AlphaGo on even terms.”

    Ilya Shikshin believes a chance may have existed, but the AI improves its level too quickly:

    “Perhaps some human could beat the current version of AlphaGo, the one that played Ke Jie. But you have to understand that this is far from the program's ceiling. Last year's AlphaGo, which played Lee Sedol, was significantly weaker than the current one. One can only guess at what strength the programs will play in a year.”

    According to Dinerstein, Ke Jie himself made no significant mistakes:

    “Ke tried to find weaknesses in the program. He did not play the way he usually plays against people - he sharpened the position, he fought. He did everything he could, but he had no chance in any of the three games. An average game lasts about 250 moves; here, by move 50, AlphaGo had an overwhelming advantage, and from then on its task was simply to keep it. The program is very good at that - it chooses moves that increase its probability of winning, whereas humans often behave like the raja from the Golden Antelope cartoon: they want to win not by one point but by ten. That greed often ends in disaster.”

    At the same time, the AlphaGo Master AI that Ke Jie faced plays without errors, Dinerstein says:

    “In the match with Lee Sedol, experts still found mistakes in the program's play. Here even the best masters in the world say they see none. I think the question of AlphaGo is finally closed. Lee Sedol was the last person on earth to beat this program. Ke's play could perhaps still be improved; the computer's play, from our point of view, was close to ideal.”

    A similar opinion was expressed by Ilya Shikshin:

    “I do not believe Ke Jie made serious mistakes. To say there were mistakes is to believe Ke Jie had a chance - and there was no chance. AlphaGo plays a level above.”

    And now, most likely the strongest computer Go system is leaving: it will not be sold as a commercial product and will no longer try to prove its superiority. In the post “AlphaGo's Next Move”, DeepMind explained that the project's development team is being disbanded and its specialists will move on to other problems that machine learning can tackle: diagnosing diseases and finding treatments, improving energy efficiency, inventing new materials. AlphaGo will not play any more matches.

    Two official matches with huge media attention - and then an abrupt end to further development. One could accuse Google of using Go merely to show off its AI achievements. But DeepMind is visibly trying to help the Go community: the division promises to publish a scientific paper explaining all the changes made to AlphaGo, and hints that with this data other developers will be able to build or improve their own systems. Google, it seems, is simply not interested in a game-playing AI as a commercial product. DeepMind is also preparing to release a position-analysis tool that will let players see how AlphaGo thinks.

    Beyond the promises, the company left a gift: 50 games of AlphaGo playing against itself. As Ilya Shikshin puts it, “50 textbooks could be written from these 50 games.” The level, he says, is so high that long analysis is needed to make sense of them. Alexander Dinerstein points out the problem with copying the AI's style: you can open like AlphaGo, but what do you do next?

    The future of Go


    Beating Lee Sedol took an entire computing cluster; beating Ke Jie, a single machine with one TPU module. We are approaching the point where a mobile chip with tiny power consumption will surpass the best humans at Go.

    In chess this no longer surprises anyone, and such cases are not isolated. The current world chess champion Magnus Carlsen, for example, sometimes promotes the mobile app Play Magnus on his YouTube channel. The app is built on Stockfish, one of the strongest chess engines, deliberately throttled to emulate Magnus's level of play at different ages.
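    Play Magnus's internals are not public, so the snippet below is not its actual method - only an illustration of the general idea of a strong engine deliberately weakened. Stockfish exposes a standard “Skill Level” UCI option (0-20) that makes it pick weaker moves on purpose, and the python-chess library can drive it; the engine path here is an assumption for your system.

```python
import chess
import chess.engine

# Assumes a local Stockfish binary; adjust the path for your system.
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

# "Skill Level" is a standard Stockfish option (0-20); lower values make
# the engine deliberately choose weaker moves, roughly emulating a weaker player.
engine.configure({"Skill Level": 5})

# Let the weakened engine play a quick game against itself.
board = chess.Board()
while not board.is_game_over():
    result = engine.play(board, chess.engine.Limit(time=0.1))
    board.push(result.move)

print(board.result())
engine.quit()
```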

    Unsurprisingly, the use of electronic devices at serious chess tournaments is tightly regulated. Alexander Dinerstein says something similar may await Go:

    “Yes, we will probably soon have a cheating problem too, but for now you can keep a phone in your pocket without fear of being forfeited at a tournament for it. People play on the Internet, and online tournaments with prizes are held. The reason, apparently, is that the strongest programs are not yet available for download.”

    Ilya Shikshin thinks restrictions may well be introduced:

    “So far I have not heard of any rules governing the use of gadgets at Go competitions. But they will probably appear soon.”

    A single scientific publication about AlphaGo in the journal Nature in 2016 was enough for a program at a 2017 computer Go competition to reach the point of beating professional players on even terms. Developers of existing engines are copying DeepMind's approach, and the programs' playing strength keeps growing. Alexander Dinerstein:

    “Over the past year the programs have shot up in strength. A couple of years ago we beat them easily even giving them a handicap of 3-5 stones. Now it is no longer clear who should be giving that head start to whom.”

    If DeepMind really does share the details of how AlphaGo Lee was upgraded into AlphaGo Master, other computer systems will most likely also surpass humans. What if, in the future, games have to be played with an allowance for the weaker, flesh-and-blood side? Would such games be interesting to watch? Alexander Dinerstein:

    “Yes, soon we will not be able to play the programs on even terms, but in Go a handicap game is no less interesting. I think we will yet see many matches in which the handicap changes from game to game. Top masters have been asked what handicap they would want to play against a Go god if their life were at stake. The answers varied, but most agree that with four stones a human would never lose. We will see whether that is true. Incidentally, Lee Sedol claimed in a recent interview that he would easily beat AlphaGo with a two-stone handicap. I very much doubt it.”

    Ilya Shikshin:

    “At first it will certainly be interesting. Go Seigen, one of the strongest players of the 20th century, said he would need a five-stone handicap to beat a Go god (an opponent who plays perfectly). I think many Go players would be curious to find out how far they are from that Go god.”

    In any case, the players agree that Go has grown in popularity. Alexander Dinerstein:

    “Thanks to these matches, many new people have learned about Go. Many chess players are coming in, and people involved in programming. In the comments under media articles, readers have stopped asking what kind of game it is. People do not just comment on the news and sympathize with the Go masters - they even propose their own ways of fighting the computers. Someone suggested, for example, switching to Chapaev [a Russian game of flicking checkers off the board].”

    Ilya Shikshin:

    “The appearance of AlphaGo was logical; sooner or later it had to happen. There are, of course, both upsides and downsides. On the plus side, a great many people have learned about Go, and human playing levels may rise thanks to the programs. On the minus side, many people see this as a kind of loss, and there really will soon be restrictions on the use of gadgets during competitions.”

    “And do not forget that Go is not only a sport. Go is part of Eastern culture. Go is an art, a path of self-knowledge, self-expression and communication. The competitive side is only one part of the world of Go.”

    The future of AI


    AlphaGo is no prototype Skynet, nor even an impressive form of strong AI. It is just a program - a human operator still places the stones on the board for it. AlphaGo is a narrowly specialized system that can only play Go. It cannot rank search results, play old Atari games, recognize a face, detect breast cancer on a mammogram or hold a conversation.

    However, AlphaGo is built on general principles that can be applied to a wide range of tasks: reinforcement learning, Monte Carlo search, a value function. Familiar components are assembled in a new and not entirely standard way. AlphaGo itself is not going off to work in some other structure of the Alphabet holding; rather, the project was a demonstration of the quantity and quality of human talent now available for solving problems outside games.
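    In outline, that combination of reinforcement learning and a value function reduces to a self-play loop: the current policy plays itself, and the finished games become training targets for both the policy and the value function. The sketch below uses hypothetical stand-ins (new_game, policy, value_fn and their update methods) rather than DeepMind's actual training pipeline.

```python
def self_play_training(policy, value_fn, n_games=1000):
    """Minimal self-play reinforcement-learning loop in the spirit of
    AlphaGo's training: play games against yourself, then reinforce the
    winner's moves and regress the value function toward the outcomes."""
    for _ in range(n_games):
        game = new_game()               # hypothetical fresh Go game
        trajectory = []
        while not game.is_over():
            move = policy.sample(game.state())
            trajectory.append((game.state(), move))
            game.play(move)
        outcome = game.winner()         # e.g. +1 if black wins, -1 otherwise
        for state, move in trajectory:
            policy.update(state, move, outcome)   # policy-gradient-style step
            value_fn.update(state, outcome)       # fit value toward the result
    return policy, value_fn
```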



    The invention of the automobile consigned neither foot races nor horse racing to oblivion - and motor racing was born, a sport with fans of its own. Perhaps in the future teams representing the tech giants will field bots to compete at solving some problem (not necessarily a single one) under strict constraints. Much as engine characteristics are tightly limited in Formula 1, restrictions could be imposed here: say, no more than 10 billion transistors per computer, no more than 500 watts of power consumption, and so on.

    For now, though, this is just a fantasy. In this particular case AlphaGo has never been pitted against other programs - only against humans or against itself (the pair Go in Wuzhen). Its closest rival is FineArt, built on the same principles by the Chinese company Tencent. Is DeepMind afraid of a Chinese copy? According to Alexander Dinerstein, FineArt's style is nevertheless different from AlphaGo's:

    “If anyone outplays AlphaGo, it will be its computer counterparts. We have not seen AlphaGo's games against other programs; perhaps FineArt can already fight it on even terms. What surprises me is the difference in playing styles. The technologies would seem to be the same - neural networks plus Monte Carlo - yet the play of Deep Zen Go or FineArt resembles that of top professionals, while AlphaGo is something else entirely. No human has ever played like that. As Shi Yue, one of the strongest Chinese players, put it: 'This is a game from the future.' It cannot be beaten. Its moves are not even easy to understand. There are moves that people simply do not consider possible. If we assume that this is what the best strategy in Go really looks like, then we can safely throw out all the books published over the 4,000-year history of the game.”

    Ilya Shikshin states plainly that FineArt is inferior to AlphaGo:

    “FineArt, like the other strongest programs today, was created on the basis of the AlphaGo paper published in Nature in January 2016. I suppose DeepMind is simply not interested in competing against programs that exist thanks to its own work. AlphaGo plays significantly stronger than FineArt and the other programs. FineArt wins roughly 7 out of 8 games against the strongest professionals; AlphaGo wins them all.”

    The FineArt story and AlphaGo's games in China largely illustrate how the landscape of artificial intelligence is changing.

    In recent days a notable wave of publications (1, 2, 3) has swept the American press arguing that China is striving to overtake the United States in AI. The wave apparently began with an article in the New York Times. Its author conceded that China is still a step behind, but wondered how fast the gap is closing. China is currently spending billions of state dollars to finance machine-learning startups and to attract scientists from other countries. The article cites the case of a German robotics expert who went not to a prestigious university in Europe or the US, but to China: he was lured by a grant six times larger, enough to open a laboratory and hire an assistant and other staff.

    Private Chinese companies are not far behind. The local search giant Baidu, for example, together with the state has opened a laboratory headed by researchers who previously worked on Chinese combat robots. Baidu itself is working on self-driving vehicles and on speech and image recognition, and some of this work surpasses its Western counterparts. In October 2016, Microsoft announced that its system recognizes human speech as well as people do; Andrew Ng, then still working at Baidu, noted that China had passed that milestone back in 2015.


    Just as Tencent's FineArt may lag behind Google's AlphaGo, China's AI efforts may lag behind those of the United States. But the gap is narrowing. The exhibition matches of DeepMind's program against Asian Go masters have changed the tone of state funding discussions: their scope has become broader and more concrete, Chinese professors told the NYT.

    The potential of artificial intelligence systems has been noticed, and even the attempts to catch up are unsettling. Perhaps this is merely a turbulent summer before the next AI winter. Or perhaps it is a new stage in the race for global technological supremacy.
