This project was funded by FCT/MCES, the Portuguese Science Foundation, under grant no. POSI/SRI/42065/2001. The proposal was submitted in March 2001; the project started in May 2002 and finished in December 2005.

- Fernando Lobo
- Carlos Fonseca
- Cláudio Lima
- Sylvia Lins
- Samuel Viana
- Ricardo Bentes

Genetic algorithms (GAs) (Goldberg 1989) are search procedures based on principles of natural selection and genetics. They are often used as search algorithms to solve optimization problems where little is known about the objective function.

GAs have been applied successfully in a number of applications, but it has long been recognized that they could exhibit much better performance if they were able to autonomously learn a problem's structure during the course of the search itself. Most research efforts along these lines have focused on evolving a problem's chromosomal representation along with its solution (Harik, 1997), but those methods have been only partially successful.

During the last couple of years, however, GAs have been recast in terms of probabilistic optimization (Harik, 1999; Pelikan et al., 1999), and this new perspective has proven very effective in autonomously learning a problem's structure. These new algorithms, often called probabilistic model-building genetic algorithms (PMBGAs) (Pelikan, Goldberg, and Lobo, 1999), have been shown to outperform traditional GAs by several orders of magnitude.

The main principle of PMBGAs is the recognition that a search algorithm performs induction. That is, it has to guess where to move next based on its past experiences. In the context of genetic algorithms, the population represents a collection of points that summarize the past experiences of the algorithm. Based on this set of points, the GA has to guess where to go next. Traditional GAs typically do this guessing by choosing another population through the actions of selection, crossover, and mutation. PMBGAs, in contrast, replace the variation operators with the construction of a probabilistic model of promising solutions, followed by sampling from that model (or distribution). In other words, the probability distribution can be seen as a template for generating populations that are similar to (but different from) the best individuals in the current population.
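The model-build-and-sample loop described above can be sketched with the simplest instance of the idea, a univariate model in which each bit is treated independently (the UMDA scheme; problem, population size, and selection share below are illustrative choices, not parameters from this project):

```python
import random

def onemax(x):
    """Toy objective: count the ones in a bit string."""
    return sum(x)

def umda(n_bits=20, pop_size=100, n_select=50, generations=50, seed=1):
    """Minimal univariate PMBGA sketch: model, sample, select, re-estimate."""
    rng = random.Random(seed)
    # Start from the uniform model: each bit is 1 with probability 0.5.
    p = [0.5] * n_bits
    for _ in range(generations):
        # Sample a new population from the current probabilistic model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        # Keep the most promising solutions (truncation selection).
        pop.sort(key=onemax, reverse=True)
        best = pop[:n_select]
        # Re-estimate the model from the selected solutions: the per-bit
        # marginal frequency plays the role of crossover and mutation.
        p = [sum(x[i] for x in best) / n_select for i in range(n_bits)]
    return p

print(umda())
```

On this separable toy problem the independence assumption is harmless and the marginals converge toward 1.0; the algorithms studied in this project (ECGA, BOA) differ precisely in using richer models that capture dependencies between variables.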

This project will investigate additional questions that remain open in PMBGAs. Specifically, it will (1) extend PMBGAs to multi-objective optimization, (2) investigate the role of crossover and mutation in GAs and reframe both in terms of probabilistic optimization, (3) investigate PMBGAs for permutation structures, and (4) investigate efficient parallelization schemes for PMBGAs.

The research work proposed herein will advance the state of the art in genetic algorithms. Moreover, it is likely to produce new, efficient algorithms applicable to a wide range of problems.

- Pelikan M., Goldberg D. E., Lobo F. G. (1999). A Survey of Optimization by Building and Using Probabilistic Models. IlliGAL Tech Report No. 99018, Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign, Urbana.
- Goldberg D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.
- Harik G. (1999). Linkage Learning via Probabilistic Modeling in the ECGA. IlliGAL Tech Report No. 99010, Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL.
- Harik G. (1997). Learning linkage to efficiently solve problems of bounded difficulty using genetic algorithms. PhD dissertation, IlliGAL Tech Report No. 97005, Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL.
- Pelikan M., Goldberg D. E., and Cantú-Paz E. (1999). BOA: The Bayesian Optimization Algorithm. In Banzhaf et al. (Eds.) GECCO-99: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 525-532. Morgan Kaufmann.

Last updated by Fernando Lobo, 14/Mar/2006.