Starcraft AI Competition History

Original author: David Churchill

Introduction


Since the first Starcraft AI Competition was held in 2010, artificial intelligence for real-time strategy (RTS) games has become an increasingly popular topic. Competitors submit Starcraft AI bots that fight each other in the full game of Starcraft: Broodwar. These competitions, inspired by earlier ones such as the Open RTS (ORTS) competition, showcase the current state of artificial intelligence in real-time strategy games. Starcraft AI bots are controlled through the Brood War Application Programming Interface (BWAPI), developed in 2009 as a way to interact with and control Starcraft: Broodwar from C++. As BWAPI grew in functionality and popularity, the first AI bots (agents) for Starcraft began to appear, and the opportunity arose to organize a real Starcraft AI competition. This article discusses each major Starcraft AI competition in detail, as well as the development of UAlbertaBot, our entry in these competitions. It should be noted that I have been the organizer of the AIIDE competition since 2011 and naturally have more information about it. Each competition is reviewed in chronological order, with full results and links to download the bot source code and the replay files of the AIIDE and CIG competitions.

If you wish, you can read the 2015 AIIDE Starcraft AI Competition Report here.

"Why not StarCraft 2?"


I am constantly asked this question when I mention that we organize Starcraft: BroodWar AI competitions. These competitions use BWAPI as the only programming interface to BroodWar. BWAPI was created by reverse engineering BroodWar; it reads from and writes to the game's memory space to extract data and send commands to the game. Since any program doing this would normally be considered a hack or cheat engine, Blizzard has said it will not allow us to do anything similar for StarCraft 2. In fact, most versions of the StarCraft 2 end user license agreement (EULA) explicitly prohibit any modification of the program. We are glad that Blizzard has allowed us to run tournaments using BWAPI, and it even helped by providing prizes for the AIIDE tournament, but because of this policy we cannot do the same for StarCraft 2.

There are other RTS game engines that can be used in competitions. One such engine, ORTS, is a free software RTS engine that hosted competitions until BWAPI was released and the first AIIDE Starcraft AI Competition was held in 2010. Another, microRTS, is a Java engine that plays a simplified grid-based RTS; it was developed specifically for testing AI techniques.

AI Techniques for RTS


I highly recommend the excellent surveys of the current state of AI techniques for StarCraft and the descriptions of bot architectures in the following articles:


Acknowledgments


RTS AI research and competitions require a huge amount of work from many people, so I want to thank those who helped organize the current and past competitions, developed bots, and generally helped advance AI for RTS. First, I thank the RTS AI research group of the University of Alberta (UofA), of which I am a member. The university has been involved in RTS AI research since Michael Buro's motivating article in 2003. UofA hosted the ORTS AI Competition from 2006 to 2009 and has hosted the AIIDE Starcraft AI Competition annually since 2011. I want to personally thank the former and current members of the RTS AI research group for their many years of help in publicizing, organizing, and running these competitions, and for continuing to conduct world-class research in this area. In the photo below, from left to right: Nicolas Barriga, David Churchill, Marius Stanescu, and Michael Buro. Not pictured are former group members Tim Furtak, Sterling Orsten, Graham Erickson, Doug Schneider, Jason Lorenz, and Abdallah Saffidine.



I also want to thank those who organized and hosted current and past Starcraft AI Competitions. Thanks to Ben Weber for running the first AIIDE Starcraft AI Competition, which sparked international interest in RTS AI research. Michal Certicky puts a lot of effort into hosting and supporting the Student Starcraft AI Tournament every year, as well as a permanent tournament ladder and video streaming of bot games, and he has greatly helped popularize the field of RTS AI. The organizers of the CIG Starcraft AI Competition have been Johan Hagelback, Mike Preuss, Ben Weber, Tobias Mahlmann, Kyung-Joong Kim, Ho-Chul Cho, In-Seok Oh, and Man-Je Kim. Thanks also to Krasimir Krastev (krasi0) for maintaining the original Starcraft AI Bot Ladder, and many thanks to Santi Ontanon for developing the microRTS AI system. I especially thank Adam Heinermann for continuing to develop BWAPI. Without it, none of this research would have been possible.

Starcraft AI Competition


AIIDE 2010



[Translator's note: the tables in the original article are built with JavaScript, so transferring them to the translation would be too time-consuming. If you need the working links in the tables, they are available in the original.]

The AIIDE Starcraft AI Competition was first held in 2010 by Ben Weber at the Expressive Intelligence Studio of the University of California, Santa Cruz, as part of the AIIDE conference (Artificial Intelligence and Interactive Digital Entertainment). 26 participants competed in four different game modes, ranging from simple battles to the full game of Starcraft. Since this was the first year of the competition and no infrastructure existed yet, every game of the tournament was launched manually on two laptops, and the results had to be recorded by hand. In addition, no persistent storage was available that would let bots learn about opponents between matches.

There were four tournament categories in the 2010 competition. The first tournament was a unit micromanagement battle on flat terrain, consisting of four separate games with different unit combinations. Of the six entrants, FreSCBot was the winner, with Sherbrooke finishing second. The second tournament was also a unit micromanagement battle, but on rough terrain. There were two entrants in this category, and FreSCBot won again, ahead of Sherbrooke.

The third tournament was a tech-limited StarCraft game on a single well-known map with the fog of war disabled. Players could only choose the Protoss race, and the most advanced units were prohibited. Eight bots participated in this double-elimination tournament. In the final, MimicBot took first place, defeating Botnik. Since this was a StarCraft variant with full information, MimicBot chose the strategy of "mirror the opponent's build order, gaining an economic advantage where possible", and it worked quite well.

The fourth tournament was considered the main competition: bots played the complete game of StarCraft: Brood War with the fog of war enabled. The tournament used a double-elimination format with random pairings, with matches decided as the best of five games. Competitors could play any of the three races, and the only in-game actions prohibited were those considered "cheating" in the StarCraft community. Since computer programs written with BWAPI have no restrictions on the number of commands sent to the Starcraft engine, some actions not intended by the developers were possible, for example sliding buildings or walking ground units through walls. Such actions were considered cheats and were banned in the tournament. A pool of five well-studied professional maps was announced to competitors in advance, and one of them was selected at random for each game. The fourth tournament was won by Overmind, a Zerg bot created by a large team from the University of California, Berkeley. Overmind defeated krasi0, Krasimir Krastev's Terran bot, in the final.


Overmind made heavy use of the Mutalisk, the Zerg's powerful and agile flying unit, which it controlled very successfully using potential fields. Overmind's overall strategy was to defend initially with Zerglings and Sunken Colonies (static defensive structures) to protect its early development and gather resources for its first Mutalisks. Once built, the Mutalisks flew to the enemy base and patrolled and attacked its perimeter. If the first direct attack did not win the game, the bot slowly patrolled and destroyed any unprotected units, gradually wearing the enemy down before finishing it off with a final attack. The runner-up, krasi0, used a deeply defensive Terran strategy, building Bunkers, Siege Tanks, and Missile Turrets for defense. After building up a sufficient force, it sent an army of mechanized units to the enemy base. This bot performed quite well and lost only to Overmind in the competition. In 2011, Ars Technica wrote an excellent article about Overmind. A man-vs-machine match was also held between a professional player and the best AI. You can see it here:
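To illustrate the potential-field technique mentioned above, here is a minimal C++ sketch: the attack target pulls the unit in, each threat pushes it away with a force that falls off with distance, and the unit moves along the summed vector. The constants and names are illustrative assumptions, not Overmind's actual code.

```cpp
#include <cmath>
#include <vector>

// Minimal sketch of potential-field unit control: the target exerts
// attraction, each threat exerts short-range repulsion, and the unit
// moves along the summed force vector.
struct Vec2 { double x = 0, y = 0; };

Vec2 potentialFieldStep(const Vec2& self, const Vec2& target,
                        const std::vector<Vec2>& threats) {
    Vec2 force;
    // attraction toward the target (unit-length pull)
    double dx = target.x - self.x, dy = target.y - self.y;
    double d = std::sqrt(dx * dx + dy * dy) + 1e-9;
    force.x += dx / d;
    force.y += dy / d;
    // repulsion from each threat, falling off with squared distance
    for (const Vec2& t : threats) {
        double tx = self.x - t.x, ty = self.y - t.y;
        double td = std::sqrt(tx * tx + ty * ty) + 1e-9;
        double push = 200.0 / (td * td);   // arbitrary repulsion gain
        force.x += push * tx / td;
        force.y += push * ty / td;
    }
    return force;   // move the unit in this direction this frame
}
```

Tuning the attraction and repulsion gains is what makes hit-and-run behavior like Overmind's Mutalisk harassment emerge: units drift toward the target while automatically sliding away from anything that can shoot back.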


The first version of UAlbertaBot was created in the summer of 2010 and submitted to AIIDE 2010 in September. A group of six students, led by me and Michael Buro at the University of Alberta, built the initial version of UAlbertaBot using BWAPI and the Brood War Standard Add-on Library (BWSAL), which provided features such as simple build queuing, building placement, and worker management. The 2010 version of UAlbertaBot played the Zerg race and used a single basic strategy built around the Mutalisk flying unit. Although the bot's combat and micromanagement functions were implemented well, serious logic errors in the early game and in build order planning led to poor results in the 2010 competition; it was knocked out in the third round of the bracket by the Terran bot krasi0. This first implementation of UAlbertaBot suffered from technical problems, so after the competition we decided to completely rewrite the bot for the next one.

CIG 2010


Following the success of the AIIDE 2010 competition, an attempt was made to hold a Starcraft AI competition at the Computational Intelligence in Games (CIG) conference. CIG 2010, hosted by Johan Hagelback, Mike Preuss, and Ben Weber, used a single game mode similar to the tech-limited third tournament of AIIDE 2010, but with the bots playing the Terran race instead of Protoss. Unfortunately, the first year of the CIG competition suffered from serious technical problems: the decision to use specially created Starcraft maps instead of traditional ones caused critical errors in the Brood War Terrain Analysis (BWTA) library used by many bots. Because of these frequent crashes, it was decided that a winner could not be determined. UAlbertaBot did not participate in this competition.

AIIDE 2011




In 2011 the AIIDE competition moved to the University of Alberta, where it has been held ever since, organized and run annually by me and Michael Buro. Because of the small number of entrants in the first three tournament categories of the 2010 competition, we decided that the 2011 AIIDE competition would consist only of the full game of Starcraft (with the same rules as the fourth tournament of 2010), dropping the smaller micromanagement tournaments. The tournament rules were also changed: all participants had to submit the source code of their bots and allow it to be published after the competition ended. There were several reasons for this. First, it lowers the barrier to entry for future competitions: programming a Starcraft AI bot takes a great deal of time, and future entrants can download and modify the source code of old bots to save time. Second, it simplifies cheat protection: thousands of games are now played in the tournament, so we cannot watch each one to catch cheating tactics, but they are much easier to detect by examining the source code (excessive code obfuscation is prohibited). The last reason is to help advance the state of the art in Starcraft AI: bot developers can borrow strategies and techniques from past bots by studying their source code. Ideally, every bot in a new competition should then be at least as strong as last year's. 13 bots participated in the 2011 competition.

The 2010 tournaments had been run by Ben Weber on two laptops; Starcraft had to be launched and games created manually. The manual workload was considerable even though only about 60 games were played in the single double-elimination tournament, and the format drew criticism: the outcome of a tournament in that style depends heavily on the luck of the pairings. For the 2011 competition we therefore decided to eliminate all randomness and use a round-robin format. This format requires far more games, so in the summer of 2011 Jason Lorenz (an undergraduate summer student of Michael Buro) and I wrote software to automatically schedule and run Starcraft round-robin tournaments on any number of machines connected to a local network. The software used a client-server architecture: a single server created the game schedule and stored the results, while multiple clients ran the Starcraft game along with software for monitoring the game and reporting the result when it finished. Bot files, game replays, and current results were available to all clients in a Windows shared folder on the local network. The initial version of the software allowed 2340 games to be played in the same time it took to play 60 games in the 2010 competition; each bot played every opponent thirty times. The competition used 10 maps selected from professional tournaments and balanced for all races, which could be downloaded from the competition website in advance. The AIIDE competition was modeled on human competitions, in which the map pool and the opponents are known in advance.
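As an illustration of the scheduling side, here is a minimal C++ sketch of how such a server might enumerate a round-robin schedule to hand out to idle client machines. The types and names are hypothetical, not the actual tournament software; with 13 bots, 10 maps, and 3 rounds per map it yields exactly the 2340 games mentioned above (78 pairs x 30 games each).

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical illustration: enumerate every (botA, botB, map) game of a
// round-robin Starcraft tournament, as a server would before dispatching
// games to client machines over the network.
struct Game { int id; std::string botA, botB, map; };

std::vector<Game> buildSchedule(const std::vector<std::string>& bots,
                                const std::vector<std::string>& maps,
                                int roundsPerMap) {
    std::vector<Game> schedule;
    int id = 0;
    for (int round = 0; round < roundsPerMap; ++round)
        for (const auto& map : maps)
            for (size_t i = 0; i < bots.size(); ++i)
                for (size_t j = i + 1; j < bots.size(); ++j)
                    schedule.push_back({id++, bots[i], bots[j], map});
    return schedule;
}

int main() {
    // 13 bots, 10 maps, 3 rounds per map -> 78 * 10 * 3 = 2340 games,
    // matching the figure quoted for the 2011 competition.
    std::vector<std::string> bots(13), maps(10);
    for (int i = 0; i < 13; ++i) bots[i] = "Bot" + std::to_string(i);
    for (int i = 0; i < 10; ++i) maps[i] = "Map" + std::to_string(i);
    std::printf("total games: %zu\n", buildSchedule(bots, maps, 3).size());
}
```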


At the end of the five-day competition, first place went to Skynet, second to UAlbertaBot, and third to Aiur. Skynet, a Protoss bot written by UK software developer Andrew Smith, used several solid Protoss strategies: a Zealot rush, a mid-game Dragoon/Zealot army, and a late-game army of Zealots, Dragoons, and Reavers. Its reliable economy management and good early-game defense let it hold out against the much more aggressive Protoss bots, UAlbertaBot and Aiur. UAlbertaBot played Protoss that year and is described in detail below. Aiur was written by Florian Richoux, a graduate student at the University of Nantes. It also played Protoss and used several different strategies, such as a Zealot rush and a Zealot/Dragoon army.

Overmind, the winner of the AIIDE 2010 competition, did not enter in 2011: it had shown itself to be highly vulnerable to early-game aggression and was easily beaten by rushes from all three races. The Overmind team also said it did not want to disclose the bot's source code, so it could not have entered the 2011 competition anyway. Instead, a team of Berkeley undergraduates entered a Terran bot called Undermind, which finished seventh.

UAlbertaBot was completely rewritten in 2011 by me and Sterling Orsten (a student of Michael Buro), and switched from Zerg to the Protoss race. The most important reason for the switch was that we found Protoss strategies much easier to implement from a technical standpoint, and much more stable in testing. Zerg strategies depend heavily on intelligent building placement, a problem we had not studied well at the time: since the Zerg are quite weak defensively in the early game, their buildings must be positioned to form a "maze" to the base, slowing the enemy's advance toward the workers. The Protoss race has no such problem, and its early-game defense is good enough thanks to its powerful Zealots and Dragoons. Also new was the first version of our build order search system, which was designed around the simpler structure of the Protoss tech tree and so did not work for Zerg or Terran. Given the goal of producing certain numbers of unit types, it could automatically plan optimal build orders in real time, and it produced much better results than the priority-based build order system from BWSAL used in the 2010 version of UAlbertaBot. This build order planning system was the first version of the comprehensive search-based approach now used by bots throughout the Starcraft AI competitions. The new version of UAlbertaBot implemented a very aggressive early Zealot rush that stunned opponents and won many games within just a few minutes. If the initial Zealot rush did not kill the enemy, the bot transitioned to ranged units, Dragoons, for its mid- and late-game strategy. UAlbertaBot performed quite well and finished second in the competition, with a win/loss ratio comparable only to Skynet's and Undermind's. Skynet managed to stop UAlbertaBot's early rush with impressive Dragoon control at the start of the game, which let a single Dragoon kill several Zealots. Undermind's strategy of building several Terran Bunkers as its initial defense also held off UAlbertaBot's aggressive rush.
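To give a flavor of what search-based build order planning means, here is a toy C++ sketch: depth-first search over sequences of build actions, returning the earliest time a goal can be reached, with the current best time used for pruning. The costs, mining rate, and two-action model are invented for illustration; a real planner also models supply, prerequisites, and concurrent production.

```cpp
#include <algorithm>
#include <cstdio>

// Toy sketch of search-based build order planning: DFS over build actions,
// keeping the plan that reaches the goal soonest. All numbers are
// placeholders, not UAlbertaBot's real model.
struct State {
    double minerals = 50;  // starting minerals
    int    workers  = 4;   // each mines ~0.7 minerals/sec (placeholder)
    int    zealots  = 0;
    double time     = 0;   // seconds elapsed

    // fast-forward until 'cost' is affordable, then pay it
    void buyWhenAffordable(double cost) {
        double rate = workers * 0.7;
        if (minerals < cost) {
            time    += (cost - minerals) / rate;
            minerals = cost;
        }
        minerals -= cost;
    }
};

// returns the earliest completion time found for 'goalZealots'
double search(State s, int goalZealots, double best) {
    if (s.zealots >= goalZealots) return std::min(best, s.time);
    if (s.time >= best) return best;      // prune: cannot improve

    State z = s;                          // action 1: train a Zealot (100)
    z.buyWhenAffordable(100);
    z.zealots += 1;
    best = search(z, goalZealots, best);

    State w = s;                          // action 2: train a worker (50),
    w.buyWhenAffordable(50);              // which speeds up later purchases
    w.workers += 1;
    best = search(w, goalZealots, best);
    return best;
}

int main() {
    std::printf("earliest 4 zealots: %.1fs\n", search(State{}, 4, 1e9));
}
```

The key idea is the same as in the real system: the planner explores "build army now" versus "build economy first" trade-offs exhaustively and returns the sequence with the earliest completion time, rather than relying on hand-tuned priorities.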

CIG 2011


The 2011 CIG competition was hosted by Tobias Mahlmann of the IT University of Copenhagen and Mike Preuss of TU Dortmund. Taking the previous year's lessons into account, this time the CIG competition was held on a standard "human" pool of five maps; unlike at AIIDE, however, the maps were not known to the participants in advance. The competition used the same rules as the AIIDE 2011 tournament: the full game of Starcraft with the fog of war enabled and cheating prohibited. The two most important rule differences were that CIG did not require bot source code to be disclosed, and that the map pool was not known in advance. Since both AIIDE and CIG took place in August (because of the conference schedules) and the same bots often entered both, the CIG organizers decided to vary the tournament rules slightly by hiding the map pool, hoping for more interesting results. Ten bots entered the competition, and since the CIG organizers had no automation software, games were started manually on only a few machines. Because of this, instead of a single round robin as at AIIDE, the competition was split into two groups of five bots, with ten round-robin games per pair. After this group stage, the best two bots from each group advanced to a final group, where ten games were again played per pair. Although UAlbertaBot beat Skynet in the first group, Skynet took first place in the final; UAlbertaBot was second by a small margin, Xelnaga third, and BroodwarBotQ fourth.

Since only a couple of weeks separated the two competitions, no major changes were made to either Skynet or UAlbertaBot for CIG. The only changes to UAlbertaBot were the removal of the hand-coded map information and building placements used at AIIDE, which were replaced by algorithmic solutions.



SSCAIT 2011 (detailed results)


The first Student Starcraft AI Tournament (SSCAIT) was held in the winter of 2011, organized by Michal Certicky of Comenius University in Bratislava. The tournament was conceived as part of the Introduction to AI course Michal taught at the university, with each student on the course writing a bot for the competition. Since the course was large, 50 people took part, all of them students. Specialized software written by Michal was used to automatically schedule and run the tournament games. The tournament format divided the 50 participants into ten groups of five for the group stage, with 16 finalists advancing to a double-elimination stage. The winner of the final was Roman Danielis, a student at Comenius University.

AIIDE 2012




The AIIDE 2012 competition was again held at the University of Alberta, with one major change: persistent file storage allowed bots to learn over the course of the entire competition. The tournament software was improved so that each bot had access to a read folder and a write folder inside the shared folder visible to all client machines. During each game, bots could read from the read folder and write to the write folder, and at the end of each round-robin cycle (when every bot had played one game on one map), the contents of the write folder were copied to the read folder, giving bots access to all information recorded in previous rounds. This copying scheme guaranteed that no bot gained an information advantage within a round because of the schedule. Ten bots entered the 2012 competition, and by the end of the fifth day 8,279 games had been played, 184 for each pair of bots. The final results were almost identical to 2011: Skynet took first place, Aiur second, and UAlbertaBot third. Aiur was strengthened by a new "cheese" strategy, an early Photon Cannon rush that other bots were not prepared for.
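A minimal sketch of what the read/write folder mechanism described above looks like from a bot's point of view, assuming a simple one-line-per-game file format; the paths and format here are hypothetical.

```cpp
#include <fstream>
#include <string>

// Illustrative sketch: a bot appends one line per finished game to its
// write folder, and at the start of the next game reads whatever the
// tournament software copied into its read folder.
struct Record { int wins = 0, losses = 0; };

Record loadHistory(const std::string& readDir, const std::string& opponent) {
    Record r;
    std::ifstream in(readDir + "/" + opponent + ".txt");
    int result;                       // 1 = we won, 0 = we lost
    while (in >> result) (result ? r.wins : r.losses)++;
    return r;
}

void saveResult(const std::string& writeDir, const std::string& opponent,
                bool won) {
    // append so results from earlier rounds survive the folder copying
    std::ofstream out(writeDir + "/" + opponent + ".txt", std::ios::app);
    out << (won ? 1 : 0) << "\n";
}
```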


The 2012 man-vs-machine matches can be seen here:


One major update was made to UAlbertaBot for this competition: the addition of the SparCraft combat simulation package. The 2011 version of UAlbertaBot simply waited until a threshold number of Zealots had been built and then sent them at the enemy base without ever retreating. The 2012 version added a battle simulation module that could estimate the outcome of a fight before committing to it: if the simulation predicted a win, the bot kept attacking; if it predicted a loss, the bot retreated to its base. This new tactic proved its worth in practice, but Aiur's early-game defense had improved significantly over the previous year, and UAlbertaBot slipped to third place. UAlbertaBot also implemented three distinct strategies for the 2012 competition: the Zealot rush from the 2011 version, a Dragoon rush, and a Dark Templar rush. The bot used the persistent file I/O to store match data against specific opponents and chose a strategy against each bot using the UCB-1 formula. This strategy learning worked quite well: a win rate of 60% at the start of the tournament rose to 68.6% by the end. One of the main reasons Aiur finished ahead of UAlbertaBot was that the Dragoon and Dark Templar rushes were poorly implemented, so the strategy selection algorithm eventually converged on always using the Zealot rush, and the games spent exploring the other strategies were largely wasted. Had the Zealot rush been chosen in every game, UAlbertaBot would have taken second place.
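For reference, UCB-1 picks the strategy i maximizing w_i/n_i + C * sqrt(ln N / n_i), where w_i and n_i are the wins and games played with strategy i against this opponent, and N is the total number of games against them. A short C++ sketch of such a selector follows; the exploration constant and bookkeeping are illustrative.

```cpp
#include <cmath>
#include <vector>

// Sketch of UCB-1 strategy selection: balance exploiting the strategy with
// the best win rate against exploring under-tried strategies.
int chooseStrategy(const std::vector<int>& wins,
                   const std::vector<int>& games,
                   double C = std::sqrt(2.0)) {
    int total = 0;
    for (int g : games) total += g;
    int best = 0;
    double bestScore = -1;
    for (size_t i = 0; i < games.size(); ++i) {
        if (games[i] == 0) return (int)i;   // try every strategy once first
        double score = (double)wins[i] / games[i]
                     + C * std::sqrt(std::log((double)total) / games[i]);
        if (score > bestScore) { bestScore = score; best = (int)i; }
    }
    return best;
}
```

The exploration term shrinks as a strategy accumulates games, which is why UAlbertaBot's selector eventually settled on the Zealot rush once the other two strategies kept losing.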

CIG 2012




The 2012 CIG competition used the AIIDE tournament management software, so far more games could be played. There were ten entrants, many of which also competed at AIIDE. A pool of six maps was used, unknown to participants in advance and not used in the previous tournament. In total 4,050 games were played; each bot played every other bot 90 times. As at AIIDE, bots could read and write persistent files for learning; however, because the network folder structure differed between the AIIDE and CIG setups, this did not work exactly as intended. It is also worth noting that there were three times as many bot crashes as at AIIDE, making it clear that the tournament setup had technical problems. Skynet won again, UAlbertaBot took second place, Aiur third, and Adjutant fourth. UAlbertaBot did not use the data files or learning in this competition, since that technique had proved suboptimal at AIIDE 2012; it played the Zealot rush strategy, which carried it to second place.

SSCAIT 2012 (detailed results)


A few months later, in December, the second SSCAIT was held, advertised much more widely to participants outside Comenius University. The competition again included many bots from Michal Certicky's AI course, and the total number of entrants reached 52 bots. The tournament format was a single round robin in which every bot played one game against each of the others, or 51 games per bot. After the round robin, the finals were split into two divisions: a student division and a mixed division. To qualify for the student division, a bot had to be written by a single student. Points were awarded as follows: 3 points for a win, 1 point for a draw. In the student division final, first place went to Matej Istenik (with the bot Dementor) from the University of Zilina in Slovakia, second to Marcin Bartnicki from Gdansk University of Technology, and third to UAlbertaBot. The mixed division was open to all entrants: the eight best bots competed in a single-elimination bracket, and in the final IceBot defeated Marcin Bartnicki's bot to take first place. The CIG 2012 version of UAlbertaBot was used in this competition, since SSCAIT also used a map pool unknown in advance.

CIG 2013




In 2013 the CIG competition was held a few weeks before AIIDE. The tournament management software intended for the AIIDE competition was still being rewritten, so CIG used the 2012 version. This meant the learning file system did not work, for the same reasons as the previous year. Due to additional technical difficulties in setting up the tournament, only 1,000 games were played, about a quarter of the previous year's total. The top three were the same as the year before: Skynet first, UAlbertaBot second, and Aiur third, while Xelnaga moved up from sixth place to fourth. UAlbertaBot was undergoing major changes (see below) that were not finished in time for CIG 2013, so the AIIDE 2012 version of the bot competed.

AIIDE 2013 (detailed competition report)




For the AIIDE 2013 competition, the tournament management software was almost completely rewritten to be more robust across different network configurations. Previous versions relied on Windows shared folders for file storage; the new version exchanged data via Java sockets instead. All bot files, game replays, results, and read/write folders were packed up and sent over Java sockets. This meant the tournament could now be run on any network configuration supporting TCP (for the Java sockets) and UDP (for the Starcraft network game). A guide/demo of this tournament management software can be found here:
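The real software is written in Java; as a language-neutral illustration of the idea, here is a C++ sketch that packs a file into a length-prefixed message of the kind one would send over a TCP socket instead of dropping it into a shared folder. The framing format is an assumption for illustration only.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Illustrative sketch: serialize a file (bot binary, replay, results) into
// a self-describing length-prefixed message that the receiving side of a
// TCP connection can parse back. Framing format is hypothetical.
static void appendU32(std::vector<uint8_t>& buf, uint32_t v) {
    for (int i = 3; i >= 0; --i) buf.push_back((v >> (8 * i)) & 0xFF);
}

std::vector<uint8_t> packFile(const std::string& name) {
    std::ifstream in(name, std::ios::binary);
    std::vector<uint8_t> data((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());
    std::vector<uint8_t> msg;
    appendU32(msg, (uint32_t)name.size());   // header: file name length
    msg.insert(msg.end(), name.begin(), name.end());
    appendU32(msg, (uint32_t)data.size());   // header: payload length
    msg.insert(msg.end(), data.begin(), data.end());
    return msg;                              // ready to hand to send()
}
```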


Only 8 bots entered the 2013 competition, which remains the lowest turnout of any of these tournaments, although the quality of the bots was quite high. A total of 5,597 games were played, allowing 200 games between each pair of bots, twenty on each of the ten maps, which were unchanged from the previous year. The usual favorites did well again, but UAlbertaBot took first place, dethroning Skynet, which dropped to second. Third place went to Aiur, and fourth to Ximp, a new bot written by Tomas Vajda, a student from Michal Certicky's AI course at Comenius University. Ximp played the Protoss race, and its strategy was to expand its economy very early behind the protection of Photon Cannons. It played a deep defensive game while building an army of Carriers, very powerful late-game flying units. Once enough Carriers had been built, they swept across the map destroying everything in their path. Unfortunately, a bug in Ximp's code caused it to crash 100% of the time on the Fortress map, handing its opponents easy wins; without that failure it would easily have taken third place. Aiur's strategy learning was also impressive: its initial win rate of 50% gradually climbed to 58.51% by the end of the competition, lifting it from fourth place to third.

UAlbertaBot was changed for this competition, the most important update being the new version of SparCraft. UAlbertaBot had previously used a simpler version of SparCraft that gave much less accurate results: the new module correctly simulated all damage and armor types, which the previous version had ignored. This provided much more accurate simulations and significantly improved the bot's early-game behavior. The bot dropped the Dragoon and Dark Templar strategies and kept only the Zealot rush, still its strongest strategy. The only exception was against Skynet: the Dark Templar rush was kept for those games, since the previous competition had shown Skynet to be rather weak against that strategy. A bug was also found in the 2012 version of UAlbertaBot that made all of the bot's workers chase enemy units, starving it of resources whenever it happened; Skynet and Aiur had won games because of it. This serious bug and a few minor ones were fixed, and UAlbertaBot's win percentage reached its peak, almost 10% ahead of Skynet's.
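As a caricature of how a combat simulator gates attack/retreat decisions, here is a C++ sketch that reduces each army to summed hit points and damage per second and compares kill times. SparCraft itself is far more detailed (weapon cooldowns, damage and armor types, positioning); this only shows where the decision plugs in.

```cpp
#include <vector>

// Very simplified combat outcome estimate: each army is reduced to total
// DPS and total hit points, and the side that destroys the other first
// is predicted to win. Illustrative only; not SparCraft.
struct Unit { double dps, hp; };

bool weWinFight(const std::vector<Unit>& ours,
                const std::vector<Unit>& theirs) {
    double ourDps = 0, ourHp = 0, theirDps = 0, theirHp = 0;
    for (const Unit& u : ours)   { ourDps   += u.dps; ourHp   += u.hp; }
    for (const Unit& u : theirs) { theirDps += u.dps; theirHp += u.hp; }
    if (theirDps == 0) return true;    // nothing can shoot back
    if (ourDps == 0)   return false;
    // time each side needs to destroy the other; shorter time wins
    return theirHp / ourDps < ourHp / theirDps;
}

// In the squad logic (attack/retreatToBase are placeholders):
//   if (weWinFight(mySquad, enemiesNearby)) attack();
//   else retreatToBase();
```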

SSCAIT 2013 (detailed results)


SSCAIT again featured many bots from the 2013 AI course and more than 50 entrants. The 2013 tournament played almost twice as many games as in 2012: two full round robins between all bots on randomly selected maps. The results were again split into student and mixed divisions with the same rules as the previous year. In the student division, Ximp took first place, second went to WOPR, written by Soren Klett from Bielefeld University, and UAlbertaBot again finished third. The technical problems and early-game bugs that had plagued Ximp at the AIIDE competition had been fixed, and it performed very well in the round-robin stage. In the mixed division, the top eight bots from the round robin played a single-elimination bracket. UAlbertaBot was knocked out by IceBot in the quarter-finals, and in the final krasi0 defeated IceBot, which took second place. The AIIDE version of UAlbertaBot was used in this competition without any changes.

CIG 2014




The 2014 CIG competition was hosted by Kyung-Joong Kim, Ho-Chul Cho, and In-Seok Oh of Sejong University, with 13 bots taking part. CIG 2014 used 20 maps unknown to the participants, at the time the largest map pool ever used in a Starcraft AI competition. The tournament ran an updated version of the AIIDE tournament management software, meaning persistent file reading and writing worked fully at a CIG competition for the first time. A total of 4,680 games were played; each bot played every other bot three times on each of the twenty maps. Major changes had been made to some of the strong bots of previous years, and this showed in the results.

IceBot won the competition, with second place going to Ximp, third to LetaBot, and fourth to Aiur. IceBot had been competing since 2012 but had never placed above sixth; its strategies were completely redesigned, more people worked on it, and the result was a much more stable and reliable system. Thanks to a very strong early-game defense it could hold off the many bots with aggressive early behavior. Ximp continued its Carrier strategy with small changes and bug fixes. LetaBot was a new Terran bot written by Martin Rooijackers from Maastricht University; its source code was based on the 2012 version of UAlbertaBot, adapted to play Terran. UAlbertaBot, which had not been updated for this year's competition, finished fifth. After UAlbertaBot's AIIDE 2013 victory, several bots had implemented strategies aimed specifically at beating it, and combined with the purpose-built defenses against early aggression of IceBot, Ximp, and LetaBot, this drove it down to fifth place.

AIIDE 2014




The small number of participants in 2013 prompted wider publicity to attract more interest in the 2014 competition. In addition, if a team that competed in 2013 did not submit a new bot in 2014, its 2013 version automatically re-entered the tournament, making it possible to measure how much the new versions had improved. In total 18 bots were entered, a record that has yet to be broken. The 2013 versions of UAlbertaBot, Aiur, and Skynet competed again because their authors had no time to make changes. Since only a couple of weeks separated the CIG and AIIDE competitions, many bots were identical, and this showed in the results: the top four finished exactly as at CIG. IceBot was first, Ximp second, LetaBot third, and Aiur fourth.

The 2014 man-vs-machine matches can be seen here:


SSCAIT 2014 (detailed results)


In 2014 the SSCAIT infrastructure was updated to use the shared tournament management software, which enabled the same read/write file structure for learning and allowed more games to be played in less time. Since all three competitions now used this open-source software, which participants could download and test against, submitting bots and running tournaments became simpler, and participants could be confident their bots would work in all three competitions. The tournament format and rules were the same as in 2013: each of the 42 participants played every other bot, for a total of 861 games. The results were again split into student and mixed divisions. In the student division, LetaBot took first place, WOPR second, and UAlbertaBot third. In the mixed division, the eight best bots played a single-elimination bracket; in the final, LetaBot defeated Ximp to win the competition, and UAlbertaBot beat IceBot in the bronze match to take third place.

The UAlbertaBot version was the same as at AIIDE 2013, with slight improvements to unit positioning and building placement. After the rather poor CIG and AIIDE results, it was a surprise that this old version of UAlbertaBot took third place in both divisions.

CIG 2015




In 2015 CIG, again organized by the team from Sejong University, made significant changes to the rules. The most important change was that disclosing the source code of participating bots became optional, which surprised many, given that the competition is held as part of a scientific conference. The second change allowed one author to enter several bots. This was controversial, since two bots by the same author could meet in a game where one might deliberately lose to the other, and one bot could pass information about previous matches to another; fortunately, no such problems were observed. The 2015 competition was run in half the usual time because of last-minute technical difficulties, so only 2,730 games were played between the 14 entrants.

The results were quite surprising: all three top places went to new entrants, and all of them played Zerg. The winner, ZZZKBot, was written by software developer Chris Coxe. It implemented a single strategy: a four-Zergling rush, the fastest attack possible in Starcraft. Despite the relatively simple strategy and implementation, few bots were prepared for such an early attack and lost with lightning speed. In second place was tscmoo-Z, a Zerg bot written by Vegard Mella, an independent Norwegian programmer. Tscmoo mainly used mid- and late-game strategies, implemented nearly a dozen different build orders and strategies, and learned over time to pick the right one for each opponent. Third place went to Overkill, another Zerg bot, written by Sijia Xu, a data engineer from China. Overkill had several strategies in its arsenal but relied mainly on Mutalisks, resembling Overmind from the AIIDE 2010 competition. UAlbertaBot was undergoing major changes during CIG and the new version was not ready, so the AIIDE 2013 version competed, taking tenth place.

AIIDE 2015 (detailed competition report)




Since 2011 the AIIDE competition had been run in a University of Alberta student computer lab of twenty Windows XP machines. Because students actively used the lab, games could only be run between the end of one school year and the start of the next (usually ending in late August). In 2015, thanks to a great deal of help from Nicolas Barriga (another of Michael Buro's RTS AI students), the competition moved to virtual machines, so many more games could be played. In total we had four Linux servers, each running three Windows XP virtual machines, for 12 virtual machines in the tournament. The AIIDE schedule also changed (the competition moved to November), which meant it could run for two full weeks, twice as long as the year before. Another advantage of virtual machines is that the competition can be monitored and managed through remote desktop software: using KRDC over an ssh tunnel, all 12 machines could be controlled at once, and the tournament could be stopped or restarted directly from home.

25 bots registered for the AIIDE competition, three of which could not take part for technical reasons. The 22 participants represented 12 countries, making this the largest and most international of these competitions so far. 2015 also had the most even race distribution yet: Zerg had been poorly represented the year before, but this year five new Zerg bots entered, many of which had already competed at CIG. The AIIDE race distribution is shown below.



There was also a first: a bot (UAlbertaBot) playing the Random race. By choosing Random, the bot is assigned one of the three Starcraft races at random when the game starts. This makes the bot much harder to program, since it needs strategies for all three races, but it also confers an advantage: the opponent does not know UAlbertaBot's race until scouting it in the game. Changes were also made to the tournament management software, fixing a bug in persistent file storage that occasionally caused files to be overwritten. Because of the 2015 move to virtual machines the competition could run for any length of time, so this time it lasted 14 days. A total of 20,788 games were played, nine games between each pair of the 22 bots on each of the ten maps; double the previous record.



The finalists were very close: the difference in win percentage between first and second place, and between third and fourth, was less than 1%. The three new Zerg bots performed extremely well. tscmoo took first place with an 88.52% win rate, ZZZKBot second with 87.83%, and Overkill third with 80.69%. UAlbertaBot came fourth with 80.2% while playing the Random race, a real achievement, since playing several races is much harder to implement. Another UAlbertaBot achievement: it scored more than 50% against every single bot in the competition, yet still did not achieve the highest overall win percentage. The reason is that UAlbertaBot won only about two thirds of its games against some weak bots, because one of its three races could not beat them. You can study the detailed results for each bot's performance. The overall strategies of tscmoo, ZZZKBot, and Overkill were unchanged since CIG 2015, though minor bugs were fixed in each bot. We also held a man-vs-machine match between an experienced Starcraft player and the best competition bots. The games can be seen here.


Almost all of UAlbertaBot's modules were rewritten in the few months before AIIDE 2015, and the bot learned to play not just Protoss but all the races. This required a much more robust and general approach to micromanagement and build order planning, which had previously been tailored to Protoss. The build order search system was updated to search for build orders for any of the three races, and bugs that had previously caused UAlbertaBot to crash during competitions were fixed. The new version of this software was named BOSS (Build Order Search System) and published on github as part of the UAlbertaBot project. Another big change to UAlbertaBot was the introduction of configuration files for bot options. The files are written in JSON, and the bot parses them at the start of each game. The configuration files contain many options for strategic and tactical decisions, for example the choice of strategy for each race, unit micromanagement options, and debugging options. They also contain UAlbertaBot's opening build orders, which can be edited quickly and conveniently. Thanks to this configuration file, all options and build orders can be edited without recompiling the bot, which speeds up development and simplifies modification and reuse by other programmers.
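The snippet below is a shortened, illustrative example of the kind of JSON structure described; the field names and values are assumptions for illustration, not a copy of the published file.

```json
{
    "Strategy": {
        "Protoss": "Protoss_ZealotRush",
        "Terran":  "Terran_MarineRush",
        "Zerg":    "Zerg_ZerglingRush",
        "Strategies": {
            "Protoss_ZealotRush": {
                "OpeningBuildOrder": ["Probe", "Probe", "Pylon", "Probe",
                                      "Gateway", "Probe", "Zealot",
                                      "Pylon", "Zealot"]
            }
        }
    },
    "Micro": { "KiteWithRangedUnits": true, "WorkersDefendRush": true },
    "Debug": { "DrawGameInfo": true }
}
```

The design benefit is exactly the one described above: strategies and build orders become data rather than code, so testing a new opening is an edit-and-restart cycle instead of a recompile.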

In the days before the competition, test games were played against many of the bots that had entered AIIDE 2014 and CIG 2015. These matches were analyzed by hand, and build orders and strategies were created to counter some of the bots, such as Skynet, LetaBot, Ximp, and Aiur. For example, against Ximp the bot used a strong anti-air strategy, since Ximp was known to always build Carriers. If the bot drew Terran against Aiur, it built large numbers of Vultures, which blunt Aiur's Zealot rush. This was risky, since any of these opponents could have changed strategy before the AIIDE competition, but most did not, and in the long run the gamble paid off. Against an unknown opponent the bot defaulted to one of three rush strategies depending on its own race: a Zergling rush as Zerg, a Zealot rush as Protoss, and a Marine rush as Terran. Thanks to this, the competition was very successful for UAlbertaBot, with a high win percentage against every bot.
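A sketch of that selection logic, with hypothetical strategy names and lookup: use a hand-tuned counter when the opponent is recognized, otherwise fall back to a rush chosen by our own race.

```cpp
#include <map>
#include <string>

// Illustrative opponent-aware strategy selection. Strategy names and the
// counter table are invented; the real bot reads these from its JSON
// configuration and its per-opponent history files.
std::string pickStrategy(const std::string& opponentName,
                         const std::string& myRace) {
    static const std::map<std::string, std::string> counters = {
        { "Ximp", "AntiCarrierAirDefense" },  // Ximp always goes Carriers
        { "Aiur", "Terran_VultureHarass" },   // Vultures blunt Zealot rushes
    };
    auto it = counters.find(opponentName);
    if (it != counters.end()) return it->second;

    // unknown opponent: default rush for whichever race we were assigned
    if (myRace == "Zerg")    return "Zerg_ZerglingRush";
    if (myRace == "Protoss") return "Protoss_ZealotRush";
    return "Terran_MarineRush";
}
```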

SSCAIT 2015 (detailed results)



From the translator: I could not find the detailed results of AIIDE 2016, but this report gives a general idea.

The next AIIDE competition will be held in September 2017; registration closes on August 1.
