
Iterated Local Search

edited by Thomas Stützle and Luis Paquete, INTELLEKTIK



About Iterated Local Search

The essence of the Iterated Local Search (ILS) metaheuristic can be given in a nutshell: one iteratively builds a sequence of solutions generated by the embedded heuristic, leading to far better solutions than if one were to use repeated random trials of that heuristic. This simple idea has a long history, and its rediscovery by many authors has led to many different names for ILS, such as iterated descent, large-step Markov chains, iterated Lin-Kernighan, chained local optimization, or combinations of these. For us, there are two main points that make an algorithm an ILS:
  • there must be a single chain that is being followed (this then excludes population-based algorithms);
  • the search for better solutions occurs in a reduced space defined by the output of a black-box heuristic.
In practice, local search has been the most frequently used embedded heuristic, but in fact any optimizer can be used, be it deterministic or not.

Idea

ILS explores the space of local minima (denoted S*) with respect to some given embedded heuristic, called LocalSearch in the following. ILS achieves this heuristically as follows. Given the current s*, we first apply a change or perturbation that leads to an intermediate state s' (which belongs to S). Then LocalSearch is applied to s' and we reach a solution s*' in S*. If s*' passes an acceptance test, it becomes the next element of the walk in S*; otherwise, one returns to s*. The resulting walk is a case of a stochastic search in S*, but one where neighborhoods are never explicitly introduced. This ILS procedure should lead to good biased sampling as long as the perturbations are neither too small nor too large. If they are too small, one will often fall back to s* and few new solutions of S* will be explored. If, on the contrary, the perturbations are too large, s' will be essentially random, there will be no bias in the sampling, and we recover a random-restart type algorithm.

Pictorial View

The overall ILS procedure is pictorially illustrated in the following figure. To be complete, let us note that generally the ILS walk will not be reversible; in particular, one may sometimes be able to step from s(1)* to s(2)* but not from s(2)* to s(1)*. However, this "unfortunate" aspect of the procedure does not prevent ILS from being very effective in practice.

Since deterministic perturbations may lead to short cycles (for instance of length 2), one should randomize the perturbations or have them be adaptive so as to avoid this kind of cycling. If the perturbations depend on any of the previous s*, one has a walk in S* with memory.
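As an illustrative sketch of such a randomized perturbation (assuming, purely for the example, that solutions are permutations and that exchanging two positions is the elementary move), the strength of the perturbation can be drawn at random on each call, so that two consecutive applications are unlikely to undo each other exactly:

```python
import random

def randomized_perturbation(solution, min_strength=1, max_strength=5):
    # Draw the perturbation strength at random each call; a fixed,
    # deterministic perturbation could produce length-2 cycles in S*.
    s = list(solution)  # work on a copy, leaving the input intact
    strength = random.randint(min_strength, max_strength)
    for _ in range(strength):
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]  # random exchange move
    return s
```

The function name and the exchange move are our own illustrative choices; any move type appropriate to the problem at hand can be substituted.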

Procedural View

Written in a procedural way, ILS algorithms as metaheuristics have the following high level architecture:
 

Basic Iterated Local Search


s0 = GenerateInitialSolution
s* = LocalSearch(s0)
REPEAT
  s' = Perturbation(s*, history)
  s*' = LocalSearch(s')
  s* = AcceptanceCriterion(s*, s*', history)
UNTIL termination condition met
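The high-level architecture above can be turned into a short runnable sketch. The toy objective, the ±1 descent neighborhood, the jump perturbation, and the better-only acceptance rule below are all illustrative choices, not part of ILS itself:

```python
import random

def f(x):
    # toy multimodal objective over the integers: a quadratic bowl
    # plus a rugged term that creates many local minima
    return (x - 42) ** 2 + 25 * (x % 7)

def local_search(x):
    # greedy descent over the {x-1, x, x+1} neighborhood
    # until a local minimum is reached
    while True:
        best = min((x - 1, x, x + 1), key=f)
        if best == x:
            return x
        x = best

def perturbation(x, strength=8):
    # random jump, large enough to escape the current basin of attraction
    return x + random.randint(-strength, strength)

def ils(iterations=200, seed=0):
    random.seed(seed)
    s_star = local_search(random.randint(0, 100))
    for _ in range(iterations):
        s_prime = perturbation(s_star)
        s_prime_star = local_search(s_prime)
        if f(s_prime_star) <= f(s_star):  # acceptance: keep the better solution
            s_star = s_prime_star
    return s_star, f(s_star)
```

Here the walk is memoryless: neither the perturbation nor the acceptance criterion consults the history argument of the pseudocode.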

In practice, much of the potential complexity of ILS is hidden in the history dependence. If there happens to be no such dependence, the walk has no memory: the perturbation and acceptance criterion do not depend on any of the solutions visited previously during the walk, and one accepts or rejects s*' with a fixed rule. This leads to random-walk dynamics on S* that are "Markovian": the probability of making a particular step from s(1)* to s(2)* depends only on s(1)* and s(2)*.

Staying within Markovian walks, the most basic acceptance criteria use only the difference in the costs of s* and s*'; this type of dynamics for the walk is then very similar in spirit to what occurs in simulated annealing. A limiting case is to accept only improving moves, as happens in simulated annealing at zero temperature; the algorithm then performs a (stochastic) descent in S*. If we add to such a method a CPU-time criterion to stop the search for improvements, the resulting algorithm pretty much has two nested local searches; to be precise, it has a local search operating on S embedded in a stochastic search operating on S*. More generally, one can extend this type of algorithm to more levels of nesting, having a different stochastic search algorithm for S*, S**, and so on. Each level would be characterized by its own type of perturbation and stopping rule; to our knowledge, such a construction has never been attempted.
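As a sketch, the two cost-difference acceptance rules discussed above, the zero-temperature (better-only) rule and a simulated-annealing-style rule, might look as follows; the function names and the temperature parameter are our own illustrative choices:

```python
import math
import random

def accept_if_better(f_curr, f_new):
    # zero-temperature limit: only improving (or equal-cost) moves pass
    return f_new <= f_curr

def accept_metropolis(f_curr, f_new, temperature):
    # simulated-annealing-style rule: a worsening move passes with
    # probability exp(-(f_new - f_curr) / temperature)
    if f_new <= f_curr:
        return True
    return random.random() < math.exp(-(f_new - f_curr) / temperature)
```

With a high temperature the second rule accepts almost everything (approaching random restart dynamics); as the temperature goes to zero it reduces to the first rule.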

Power of ILS

The potential power of ILS lies in its biased sampling of the set of local optima. The efficiency of this sampling depends both on the kinds of perturbations and on the acceptance criteria. Interestingly, even with the most naive implementations of these parts, ILS is much better than random restart, and still better results can be obtained if the ILS modules are optimized. First, the acceptance criterion can be adjusted empirically, as in simulated annealing, without knowing anything about the problem being optimized. This kind of optimization will be familiar to any user of metaheuristics, though the questions of memory may become quite complex. Second, the Perturbation routine can incorporate as much problem-specific information as the developer is willing to put into it. In practice, a rule of thumb can be used as a guide: "a good perturbation transforms one excellent solution into an excellent starting point for a local search".
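A classic instance of this rule of thumb is the double-bridge move used for the TSP in the large-step Markov chain and iterated Lin-Kernighan work listed in the Readings: it cuts the tour at three points and reconnects the four segments in a different order, a change that 2-opt-based local search cannot easily undo, yet one that preserves most of an excellent tour. A minimal sketch (the function name is ours; tours are represented as lists of city indices):

```python
import random

def double_bridge(tour):
    # Cut the tour at three random interior points and reconnect the
    # four segments as A-C-B-D instead of A-B-C-D. The result stays
    # close to the original tour but lies outside the reach of simple
    # 2-opt moves, making it a good "large step" perturbation.
    n = len(tour)  # assumes n >= 4
    i, j, k = sorted(random.sample(range(1, n), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]
```

The move keeps every city exactly once and leaves the starting city in place, so the perturbed tour is a valid starting point for the embedded local search.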

Together, these different aspects show that ILS algorithms can have a wide range of complexity, but complexity may be added progressively and in a modular way. (Recall in particular that all of the fine-tuning that resides in the embedded LocalSearch can be ignored if one wants, and it does not appear in the metaheuristic per se.) This makes ILS an appealing metaheuristic for both academic and industrial applications. The cherry on the cake is speed: one can perform k local searches embedded within an ILS much faster than if the k local searches are run within random restart.


Readings

H. R. Lourenço, O. Martin, and T. Stützle. Iterated Local Search. In F. Glover and G. Kochenberger, editors, Handbook of Metaheuristics, pages 321--353. Kluwer Academic Publishers, Norwell, MA, 2002.

E. B. Baum. Towards practical "neural" computation for combinatorial optimization problems. In J. Denker, editor, Neural Networks for Computing, pages 53--64. AIP Conference Proceedings, 1986.

J. Baxter. Local optima avoidance in depot location. Journal of the Operational Research Society, 32:815--819, 1981.

R. K. Congram, C. N. Potts, and S. L. Van de Velde. An iterated dynasearch algorithm for the single-machine total weighted tardiness scheduling problem. INFORMS Journal on Computing, 14(1):52--67, 2002.

D. S. Johnson and L. A. McGeoch. The travelling salesman problem: A case study in local optimization. In E.H.L. Aarts and J.K. Lenstra, editors, Local Search in Combinatorial Optimization, pages 215--310. John Wiley & Sons, Chichester, England, 1997.

S. Kreipl. A large step random walk for minimizing total weighted tardiness in a job shop. Journal of Scheduling, 3(3):125--138, 2000.

O. Martin and S. W. Otto. Combining simulated annealing with local search heuristics. Annals of Operations Research, 63:57--75, 1996.

O. Martin, S. W. Otto, and E. W. Felten. Large-step Markov chains for the traveling salesman problem. Complex Systems, 5(3):299--326, 1991.

O. Martin, S. W. Otto, and E. W. Felten. Large-step Markov chains for the TSP incorporating local search heuristics. Operations Research Letters, 11:219--224, 1992.

T. Stützle. Iterated Local Search for the Quadratic Assignment Problem. Technical Report AIDA-99-03, Darmstadt University of Technology, Computer Science Department, Intellectics Group. (submitted)

T. Stützle. Local Search Algorithms for Combinatorial Problems --- Analysis, Improvements, and New Applications. PhD thesis, Darmstadt University of Technology, Department of Computer Science, 1998.


Network Coordinator: Marco Dorigo    e-mail: mdorigo@ulb.ac.be    Web site maintained by Christian Blum and Max Manfrin