Ant algorithms


    Recently, an article devoted to the behavior of a bee swarm was published on this blog. This article covers another swarm intelligence approach, the ant algorithm. It consists of an introduction that briefly describes the borrowed natural mechanism, a description of Marco Dorigo's original algorithm, an overview of other ant algorithms, and a conclusion that outlines the application areas of ant algorithms and promising directions for their research.


    An ant can hardly be called smart. A single ant is incapable of making even the slightest decision. It is built very primitively: all of its actions come down to elementary reactions to the environment and to its fellow ants. An ant cannot analyze, draw conclusions, or search for solutions.

    These facts, however, are hard to square with the success of ants as a species. They have existed on the planet for more than 100 million years, build enormous dwellings, supply them with everything necessary, and even wage real wars. Against the backdrop of the utter helplessness of individual ants, their collective accomplishments seem unthinkable.

    Ants achieve such success thanks to their sociality. They live only in collectives, colonies. All the ants of a colony form what is known as swarm intelligence. The individuals that make up the colony need not be smart: they need only interact according to certain, extremely simple, rules, and then the colony as a whole becomes highly effective.

    There are no dominant individuals in the colony, no bosses and subordinates, no leaders who hand out instructions and coordinate actions. The colony is completely self-organizing. Each ant has information only about its local situation; no ant has a picture of the situation as a whole, only what it has learned itself or from its relatives, explicitly or implicitly. The mechanism for finding the shortest path from the anthill to a food source is based on these implicit interactions of ants, known as stigmergy.

    Each time an ant travels from the anthill to food and back, it leaves behind a trail of pheromones. Other ants, sensing such traces on the ground, instinctively rush toward them. Since these ants in turn leave pheromone trails of their own, the more ants travel a particular path, the more attractive it becomes for their relatives. Moreover, the shorter the path to the food source, the less time ants need to traverse it, and therefore the faster the traces left on it become noticeable.

    In 1992, in his dissertation, Marco Dorigo proposed borrowing the described natural mechanism for solving optimization problems [1]. Imitating the behavior of an ant colony in nature, ant algorithms use multi-agent systems whose agents operate by extremely simple rules. They are remarkably effective on complex combinatorial problems, such as the traveling salesman problem, the first to be solved with this class of algorithms.


    The classic ant algorithm for solving the traveling salesman problem

    As mentioned above, the ant algorithm models a multi-agent system. Its agents will be referred to as ants from here on. Like real ants, they are quite simple: they require a small amount of memory to do their job, and at each step they perform only simple computations.

    Each ant stores in memory the list of nodes it has already visited. This list is called the tabu list, or simply the ant's memory. When choosing a node for the next step, the ant "remembers" the nodes it has already passed and does not consider them as candidates for the transition. At each step the tabu list is extended with a new node, and before a new iteration of the algorithm, that is, before the ant walks the path anew, it is emptied.

    Besides the tabu list, when choosing a node to move to, the ant is guided by the "attractiveness" of the edges it can traverse. That attractiveness depends, first, on the distance between the nodes (that is, on the edge weight) and, second, on the pheromone traces left on the edge by ants that passed over it earlier. Naturally, unlike the edge weights, which are constant, the pheromone traces are updated at each iteration of the algorithm: just as in nature, the traces evaporate over time, while passing ants, on the contrary, reinforce them.

    Suppose the ant is at node $i$, and node $j$ is one of the nodes available for the transition: $j \in J_i$, where $J_i$ is the set of nodes not yet visited by the ant. Denote the weight of the edge connecting nodes $i$ and $j$ as $d_{ij}$, and the pheromone intensity on it as $\tau_{ij}$. Then the probability of the ant moving from $i$ to $j$ is:

    $$P_{ij} = \frac{\tau_{ij}^{\alpha} \, (1/d_{ij})^{\beta}}{\sum_{l \in J_i} \tau_{il}^{\alpha} \, (1/d_{il})^{\beta}},$$

    where $\alpha$ and $\beta$ are adjustable parameters that set the relative importance of the two components (pheromone level and edge weight) when choosing a route. Clearly, with $\alpha = 0$ the algorithm turns into a classic greedy algorithm, and with $\beta = 0$ it quickly converges to some suboptimal solution. Selecting the right balance of parameters is a research subject in its own right and is generally done empirically.

    Once an ant has completed its route, it leaves on every traversed edge a pheromone trace inversely proportional to the length of the path:

    $$\Delta\tau_{ij} = \frac{Q}{L},$$

    where $L$ is the length of the path and $Q$ is an adjustable parameter. In addition, pheromone traces evaporate; that is, the pheromone intensity on all edges decreases at each iteration of the algorithm. Thus, at the end of each iteration the intensity values must be updated:

    $$\tau_{ij} \leftarrow (1 - \rho) \, \tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},$$

    where $\rho \in (0, 1]$ is the evaporation rate and the sum runs over all ants $k$ that traversed edge $(i, j)$ during the iteration.
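    The classic ant system described above fits in a few dozen lines of code. The following Python is a minimal illustrative sketch (function and parameter names are my own, not from the original papers); it assumes a symmetric distance matrix with positive off-diagonal entries:

```python
import math
import random

def ant_system(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
               rho=0.5, q=100.0, seed=0):
    """Classic Ant System for the TSP on a symmetric distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone intensity on each edge
    best_tour, best_len = None, math.inf

    def tour_length(tour):
        return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}  # 'visited' is the tabu list
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in visited]
                # transition weights: tau^alpha * (1/d)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                j = rng.choices(choices, weights=weights)[0]
                tour.append(j)
                visited.add(j)
            tours.append(tour)

        # evaporation on all edges, then each ant deposits Q / L
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_len, best_tour = length, tour[:]
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length

    return best_tour, best_len
```

    On small instances a handful of ants and iterations suffices; as the text notes, the values of alpha, beta, rho and Q would be tuned empirically for a real problem.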


    Overview of the modifications of the classical algorithm

    The results of the first experiments applying the ant algorithm to the traveling salesman problem were promising, but far from the best compared with existing methods. However, the simplicity of the classical ant algorithm (called the "ant system") left room for improvement, and it was algorithmic improvements that became the subject of further research by Marco Dorigo and other specialists in combinatorial optimization. Mostly these improvements involve making greater use of the search history and exploring the neighborhoods of already found successful solutions more thoroughly. The most noteworthy modifications are described below.

    Elitist ant system

    One such improvement is the introduction of so-called "elite ants" into the algorithm. Experience shows that when traversing edges that belong to short paths, ants are more likely to find even shorter ones. An effective strategy, therefore, is to artificially raise the pheromone level on the most successful routes. To do this, at each iteration of the algorithm each of the elite ants traverses the shortest path found so far.

    Experiments show that, up to a certain point, increasing the number of elite ants is quite effective, significantly reducing the number of iterations the algorithm needs. If there are too many elite ants, however, the algorithm quickly finds a suboptimal solution and gets stuck in it [3]. Like the other tunable parameters, the optimal number of elite ants should be determined empirically.
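    The elitist deposit reduces to a few lines: after the regular update, each of the $e$ elite ants adds $Q / L^{\*}$ to every edge of the best-so-far tour. A sketch (names are illustrative):

```python
def elitist_deposit(tau, best_tour, best_len, n_elite=5, q=100.0):
    """Extra pheromone left by n_elite elite ants on the best-so-far tour."""
    n = len(best_tour)
    for k in range(n):
        i, j = best_tour[k], best_tour[(k + 1) % n]
        tau[i][j] += n_elite * q / best_len
        tau[j][i] += n_elite * q / best_len
    return tau
```

    This would be called once per iteration, after evaporation and the ordinary deposits.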


    Ant-Q

    In 1995, Luca M. Gambardella and Marco Dorigo published a paper presenting an ant algorithm named by analogy with the machine learning method Q-learning [4]. The algorithm rests on the idea that the ant system can be interpreted as a reinforcement learning system. Ant-Q strengthens this analogy by borrowing many ideas from Q-learning.

    The algorithm maintains a Q-table that maps each edge to a value measuring the "usefulness" of a transition along that edge. This table changes as the algorithm runs, that is, as the system learns. The utility of a transition along an edge is computed from the utilities of transitions along subsequent edges, by looking ahead at the possible next states. After each iteration, the utilities are updated based on the lengths of the paths that included the corresponding edges.
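    The look-ahead update mirrors the Q-learning rule. A sketch of a single update step (the exact schedule of delayed reinforcement in the paper is more involved; names and default values here are illustrative):

```python
def ant_q_update(aq, i, j, delta, gamma=0.3, lr=0.1):
    """One Ant-Q value update for edge (i, j), in the style of Q-learning:
    AQ(i, j) <- (1 - lr) * AQ(i, j) + lr * (delta + gamma * max_z AQ(j, z)),
    where delta is the reinforcement and gamma discounts future utility."""
    best_next = max(aq[j])  # utility of the best transition out of node j
    aq[i][j] = (1.0 - lr) * aq[i][j] + lr * (delta + gamma * best_next)
    return aq
```

    During tour construction delta is zero (pure look-ahead); after an iteration, delta is derived from the lengths of the tours that used the edge.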

    Ant colony system

    In 1997, the same researchers published a paper devoted to yet another ant algorithm of their design, the ant colony system [5]. To improve efficiency over the classical algorithm, they introduced three main changes.

    First, the pheromone level on the edges is updated not only at the end of an iteration, but also at every transition of an ant from node to node. Second, at the end of an iteration the pheromone level is increased only on the shortest path found. Third, the algorithm uses a modified transition rule: with a certain probability the ant deterministically chooses the best edge by length and pheromone level; otherwise it makes the choice in the same way as in the classical algorithm.

    Max-min Ant System

    That same year, Thomas Stützle and Holger Hoos proposed an ant algorithm in which the pheromone concentration is increased only on the best paths traveled by the ants [6]. This strong focus on local optima is offset by bounds on the maximum and minimum pheromone concentration on the edges, which very effectively protect the algorithm from premature convergence to suboptimal solutions.

    At the initialization stage, the pheromone concentration on all edges is set to the maximum. After each iteration of the algorithm only one ant leaves a trail: either the most successful ant of the iteration or, as in the algorithm with elitism, an elite one. This achieves, on the one hand, a more thorough exploration of the search area and, on the other, faster convergence.
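    The distinctive step of max-min ant system is the bound enforcement after every pheromone update. A sketch (the paper also gives formulas for deriving the bounds themselves; here they are simply taken as parameters):

```python
def clamp_pheromones(tau, tau_min, tau_max):
    """MMAS bound enforcement: after each update, keep every edge's
    pheromone intensity inside [tau_min, tau_max]."""
    for row in tau:
        for j in range(len(row)):
            row[j] = min(tau_max, max(tau_min, row[j]))
    return tau
```

    Because no edge can fall below tau_min, every edge retains a nonzero chance of being chosen, which is what prevents premature convergence.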


    Rank-based ant system

    Bernd Bullnheimer, Richard F. Hartl and Christine Strauß developed a modification of the classical ant algorithm in which, at the end of each iteration, the ants are ranked by the lengths of the paths they traveled [7]. The amount of pheromone an ant leaves on the edges is then assigned in proportion to its rank. In addition, to explore the neighborhoods of already found successful solutions more thoroughly, the algorithm uses elite ants.
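    The rank-weighted deposit can be sketched as follows: the $r$-th best ant of the iteration deposits $(w - r) \cdot Q / L_r$ on its edges, where $w$ is a tunable weight. (The elite deposit on the best-so-far tour, which the full algorithm also performs, is omitted here; names are illustrative.)

```python
def rank_based_deposit(tau, tours_with_lens, w=6, q=100.0):
    """Rank-based pheromone deposit: only the w - 1 best ants of the
    iteration deposit, and the r-th best deposits (w - r) * Q / L_r."""
    ranked = sorted(tours_with_lens, key=lambda t: t[1])[:w - 1]
    for r, (tour, length) in enumerate(ranked, start=1):
        n = len(tour)
        for k in range(n):
            i, j = tour[k], tour[(k + 1) % n]
            tau[i][j] += (w - r) * q / length
            tau[j][i] += (w - r) * q / length
    return tau
```

    Weighting by rank concentrates reinforcement on the better tours without giving any single tour a monopoly on the pheromone.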



    Conclusion

    The traveling salesman problem is perhaps the most studied combinatorial optimization problem. It is no surprise that it was the first chosen for solution with an approach that borrows from the behavior of the ant colony. The approach was later applied to many other combinatorial problems, including assignment problems, graph coloring, routing problems, and tasks from the areas of data mining and pattern recognition, among others.

    The effectiveness of ant algorithms is comparable to that of general metaheuristic methods and, in some cases, to that of problem-specific methods. Ant algorithms show the best results on problems with large search spaces. They are also well suited to combination with local search procedures, quickly finding good starting points for them.

    Among the most promising directions for further research is the analysis of how to choose the algorithms' tunable parameters. In recent years, various methods of adapting the parameters on the fly have been proposed [8]. Since the behavior of ant algorithms depends strongly on the choice of parameters, it is precisely this problem that currently receives the most attention from researchers.



    [1] M. Dorigo, "Ottimizzazione, apprendimento automatico, ed algoritmi basati su metafora naturale (Optimization, Learning, and Natural Algorithms)", doctoral thesis, Politecnico di Milano, 1992.

    [2] A. Colorni, M. Dorigo, V. Maniezzo, "Distributed Optimization by Ant Colonies" // Proceedings of the First European Conference on Artificial Life, Paris, France, Elsevier Publishing, pp. 134-142, 1991.

    [3] M. Dorigo, V. Maniezzo, A. Colorni, "The Ant System: Optimization by a colony of cooperating agents" // IEEE Transactions on Systems, Man, and Cybernetics-Part B, 26, 1, pp. 29-41, 1996.

    [4] L. M. Gambardella, M. Dorigo, "Ant-Q: A Reinforcement Learning Approach to the Traveling Salesman Problem" // Twelfth International Conference on Machine Learning, Morgan Kaufmann, pp. 252-260, 1995.

    [5] M. Dorigo, L. M. Gambardella, "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem" // IEEE Transactions on Evolutionary Computation, Vol. 1, 1, pp. 53-66, 1997.

    [6] T. Stützle, H. Hoos, "MAX-MIN Ant System and local search for the traveling salesman problem" // IEEE International Conference on Evolutionary Computation, pp. 309-314, 1997.

    [7] B. Bullnheimer, R. F. Hartl, C. Strauß, "A new rank based version of the Ant System. A computational study" // Adaptive Information Systems and Modeling in Economics and Management Science, 1, 1997.

    [8] T. Stützle, M. López-Ibáñez, P. Pellegrini, M. Maur, M. de Oca, M. Birattari, M. Dorigo, "Parameter Adaptation in Ant Colony Optimization" // Technical Report, IRIDIA, Université Libre de Bruxelles, 2010.
