
# Ant Colony Optimization

edited by Christian Blum, IRIDIA

## About Ant Colony Optimization

Ant Colony Optimization (ACO) is a metaheuristic approach proposed by Dorigo et al. in [Dorigo 1992], [Dorigo et al. 1996], and [Dorigo et al. 1999]. The inspiring source of ACO is the foraging behavior of real ants, which enables them to find shortest paths between food sources and their nest. While walking from food sources to the nest and vice versa, ants deposit a substance called pheromone on the ground. When deciding which direction to take, they choose with higher probability paths marked by stronger pheromone concentrations. This basic behavior is the basis for a cooperative interaction that leads to the emergence of shortest paths.

ACO algorithms are based on a parametrized probabilistic model -- the pheromone model -- that is used to model the chemical pheromone trails. Artificial ants incrementally construct solutions by adding suitably defined solution components to a partial solution under consideration. To do so, artificial ants perform randomized walks on a completely connected graph G=(C,L), whose vertices are the solution components C and whose edges are the connections L. This graph is commonly called the construction graph. When a constrained combinatorial optimization problem is considered, the problem constraints are built into the ants' constructive procedure in such a way that in every step of the construction process only feasible solution components can be added to the current partial solution. In most applications, ants are implemented to build feasible solutions, but sometimes it is unavoidable to also let them construct infeasible solutions. Each solution component ci can have an associated pheromone trail parameter Τi; the value of such a parameter is usually denoted by τi, and the set of all pheromone trail parameters is denoted by Τ. These pheromone values are used by the ants to make probabilistic decisions on how to move on the construction graph.

The framework of the ACO metaheuristic is shown below. It consists of three parts gathered in the ScheduleActivities construct. The ScheduleActivities construct does not specify how these three activities are scheduled and synchronized. This is up to the algorithm designer.

Framework for Ant Colony Optimization:

```
WHILE termination conditions not met DO
  ScheduleActivities
    AntBasedSolutionConstruction()
    PheromoneUpdate()
    DaemonActions()  {optional}
  END ScheduleActivities
ENDWHILE
```
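Under the assumption of a simple sequential schedule, the framework might be sketched in Python as follows. The callbacks `construct`, `update_pheromones`, and `daemon_actions` are hypothetical placeholders for problem-specific logic, and minimization of a `cost` attribute is assumed for illustration:

```python
def run_aco(construct, update_pheromones, daemon_actions, n_iterations=100):
    """Minimal ACO skeleton (a sketch, not a reference implementation).

    The three activities of ScheduleActivities are executed sequentially
    here, which is only one of the schedules the metaheuristic allows.
    """
    best = None
    for _ in range(n_iterations):            # termination condition: iteration budget
        solutions = construct()              # AntBasedSolutionConstruction()
        update_pheromones(solutions)         # PheromoneUpdate()
        daemon_actions(solutions)            # DaemonActions() {optional}
        candidate = min(solutions, key=lambda s: s.cost)
        if best is None or candidate.cost < best.cost:
            best = candidate
    return best
```

Because the metaheuristic leaves scheduling and synchronization open, a concrete algorithm may interleave these activities differently, e.g. updating pheromones online while ants are still constructing.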

AntBasedSolutionConstruction(): An ant constructively builds a solution to the problem by moving through nodes of the construction graph G. Ants move by applying a stochastic local decision policy that makes use of the pheromone values and the heuristic values on components and/or connections of the construction graph. While moving, the ant keeps in memory the partial solution it has built in terms of the path it has walked on the construction graph.
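One common instance of such a decision policy is the random-proportional rule of Ant System [Dorigo et al. 1996], which picks the next component i with probability proportional to τi^α · ηi^β, where ηi is a heuristic value. A sketch, with illustrative names and parameter defaults:

```python
import random

def choose_next(candidates, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Random-proportional rule: P(i) is proportional to
    tau[i]**alpha * eta[i]**beta over the feasible candidates."""
    weights = [tau[i] ** alpha * eta[i] ** beta for i in candidates]
    total = sum(weights)
    r = rng.random() * total          # roulette-wheel selection
    acc = 0.0
    for i, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return i
    return candidates[-1]             # numerical safeguard
```

The exponents alpha and beta weight the relative influence of pheromone versus heuristic information; only feasible components should appear in `candidates`, which is how problem constraints enter the construction step.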

PheromoneUpdate(): When adding a component ci to the current partial solution, an ant can update the values of the pheromone trails that were used for this construction step. This kind of pheromone update is called online step-by-step pheromone update. Once an ant has built a solution, it can (by using its memory) retrace the same path backward and update the pheromone trails of the used components and/or connections according to the quality of the solution it has built. This is called online delayed pheromone update. Another important concept in Ant Colony Optimization is pheromone evaporation. Pheromone evaporation is the process by means of which the pheromone trail intensity on the components decreases over time. From a practical point of view, pheromone evaporation is needed to avoid too-rapid convergence of the algorithm toward a sub-optimal region. It implements a useful form of forgetting, favoring the exploration of new areas in the search space.
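Evaporation and an online delayed update might be sketched as follows, assuming a dictionary of pheromone values and solutions scored by a cost to be minimized. The evaporation rate rho and the 1/cost deposit rule are common choices (e.g. in Ant System) but are illustrative here:

```python
def evaporate(tau, rho=0.1):
    """Pheromone evaporation: every trail decays by a factor (1 - rho)."""
    for i in tau:
        tau[i] *= (1.0 - rho)

def delayed_update(tau, solution_components, cost):
    """Online delayed update: an ant retraces its solution and deposits
    pheromone on each used component, proportional to solution quality."""
    for i in solution_components:
        tau[i] += 1.0 / cost
```

Repeated evaporation drives unused trails toward zero, while deposits reinforce components of good solutions; the balance between the two controls how quickly the search concentrates.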

DaemonActions(): Daemon actions can be used to implement centralized actions which cannot be performed by single ants. Examples are the use of a local search procedure applied to the solutions built by the ants, or the collection of global information that can be used to decide whether it is useful or not to deposit additional pheromone to bias the search process from a non-local perspective. As a practical example, the daemon can observe the path found by each ant in the colony and choose to deposit extra pheromone on the components used by the ant that built the best solution. Pheromone updates performed by the daemon are called offline pheromone updates.
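The best-solution example above might be sketched as follows; the function name, the `extra` deposit, and the solution objects (with `cost` and `components` attributes) are hypothetical, loosely echoing the elitist strategy of Ant System:

```python
def daemon_offline_update(tau, solutions, n_elite=1, extra=1.0):
    """Offline pheromone update: the daemon inspects all solutions built
    in this iteration and deposits extra pheromone on the components of
    the best one(s), biasing the search from a non-local perspective."""
    ranked = sorted(solutions, key=lambda s: s.cost)
    for sol in ranked[:n_elite]:
        for i in sol.components:
            tau[i] += extra / sol.cost
```

Because the daemon sees the whole colony, it can implement policies no single ant could, such as rewarding only the iteration-best or best-so-far solution.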

## References

M. Dorigo, 1992. Optimization, Learning and Natural Algorithms (in Italian). Ph.D. thesis, DEI, Politecnico di Milano, Italy. 140 pp.

M. Dorigo and G. Di Caro, 1999. The Ant Colony Optimization meta-heuristic. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization, pages 11-32. McGraw-Hill.

M. Dorigo, G. Di Caro and L. M. Gambardella, 1999. Ant algorithms for discrete optimization. Artificial Life, 5, 2, pages 137-172.

M. Dorigo and L. M. Gambardella, 1997. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1, 1, pages 53-66.

M. Dorigo, V. Maniezzo and A. Colorni, 1996. Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics - Part B, 26, 1, pages 29-41.

M. Dorigo and T. Stützle, 2002. The ant colony optimization metaheuristic: Algorithms, applications and advances. In F. Glover and G. Kochenberger editors, Handbook of Metaheuristics, volume 57 of International Series in Operations Research & Management Science, pages 251-285. Kluwer Academic Publishers, Norwell, MA.

M. Dorigo and T. Stützle, 2004. Ant Colony Optimization. MIT Press, Cambridge, MA.

T. Stützle and H. H. Hoos, 2000. MAX-MIN Ant System. Future Generation Computer Systems, 16, 8, pages 889-914.