Efficiency enhancement techniques for probabilistic model building genetic algorithms

Funded by FCT/MCTES, the Portuguese Science Foundation. Grant no. PTDC/EIA/67776/2006. Proposal submitted in Jul/2006. Project started in Jan/2008 and finished in Jul/2011.

Project summary

Genetic algorithms (GAs) are search procedures based on the principles of natural selection and genetics. They are often used to solve optimization problems where little is known about the objective function.

GAs have been applied successfully to a wide range of problems, but it has long been recognized that they could perform much better if they were able to autonomously learn a problem's structure during the course of the search itself. Over the last 7-8 years, GAs have been recast in terms of probabilistic optimization, and this new perspective has proven very effective at autonomously learning a problem's structure. These new algorithms, often called "Estimation of Distribution Algorithms" (EDAs) or "Probabilistic Model-Building Genetic Algorithms" (PMBGAs), have been shown to outperform traditional GAs by several orders of magnitude.

The main principle of EDAs is the recognition that a search algorithm performs induction: it has to guess where to move next based on its past experience. In the context of genetic algorithms, the population is a collection of points that summarizes the algorithm's past experience, and based on this set of points the GA has to guess where to go next. Traditional GAs typically do this guessing by producing a new population through the actions of selection, crossover, and mutation. EDAs, in contrast, replace the variation operators with the construction of a probabilistic model of promising solutions and the subsequent sampling of new solutions from that model.
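As a concrete illustration of this model-build-and-sample loop, the following minimal sketch implements a univariate EDA (in the spirit of UMDA) on a onemax toy problem. The function names, parameter values, and the toy objective are illustrative choices only and are not part of the project.

    # Minimal sketch of the model-build-and-sample loop of a univariate EDA
    # (UMDA-style), shown on a onemax toy problem. Names and parameter
    # values are illustrative only.

    import random

    def onemax(x):
        # Toy objective: number of ones in the bit string.
        return sum(x)

    def umda(n_bits=50, pop_size=200, n_generations=30, truncation=0.5):
        population = [[random.randint(0, 1) for _ in range(n_bits)]
                      for _ in range(pop_size)]
        for _ in range(n_generations):
            # Selection: keep the best fraction of the population.
            population.sort(key=onemax, reverse=True)
            selected = population[:int(truncation * pop_size)]
            # Model building: estimate a marginal probability for each bit.
            probs = [sum(ind[i] for ind in selected) / len(selected)
                     for i in range(n_bits)]
            # Sampling: the new population replaces crossover and mutation.
            population = [[1 if random.random() < p else 0 for p in probs]
                          for _ in range(pop_size)]
        return max(population, key=onemax)

    best = umda()
    print(onemax(best))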

Several researchers have noted that sampling solutions from the learned probabilistic model is equivalent to an idealized crossover operator that preserves important substructures, the so-called "building blocks" of a problem's solution. More recently, it has been shown that the probabilistic model can also be used to enhance the performance of EDAs in other aspects of the algorithm. Examples include fitness evaluation relaxation (Sastry, Pelikan, & Goldberg, 2004; Pelikan & Sastry, 2004; Sastry, Lima, & Goldberg, 2006), the induction of global neighborhoods for mutation operators and local search (Sastry & Goldberg, 2004; Lima et al., 2006a), hybridization and adaptive time utilization (Lima et al., 2005; Lima et al., 2006a; Lima et al., 2006b), substructural niching (Sastry et al., 2005), offline (Yu & Goldberg, 2004) and online (Yu, Sastry, & Goldberg, 2005) population size adaptation, or simply assisting the user in better interpreting and understanding the non-linearities of the problem.
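The building-block-preserving nature of model sampling can be illustrated with a marginal product model (as used, for example, in the extended compact GA): each linkage group is sampled as a unit, so the substructures observed in the selected parents are never disrupted. The groups and frequencies in the sketch below are made up purely for illustration.

    # Sampling from a marginal product model. Each linkage group is drawn
    # jointly from the empirical distribution of subsolutions seen in the
    # selected population, so building blocks are inherited intact.
    # The groups and probabilities here are hand-made for illustration.

    import random

    model = {
        (0, 1, 2): {(1, 1, 1): 0.7, (0, 0, 0): 0.3},   # one building block
        (3, 4, 5): {(1, 1, 1): 0.6, (1, 0, 1): 0.4},   # another building block
    }

    def sample(model, n_bits=6):
        child = [0] * n_bits
        for positions, dist in model.items():
            subsolutions, weights = zip(*dist.items())
            chosen = random.choices(subsolutions, weights=weights)[0]
            for pos, bit in zip(positions, chosen):
                child[pos] = bit
        return child

    print(sample(model))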

Along these lines, we intend to continue the research on some of these topics and to explore new ones. Specifically, the project will address the interpretation of the probabilistic models of EDAs to define substructural neighborhoods for use in mutation operators and local search. Additionally, we plan to use a surrogate fitness model, also derived from the probabilistic model, to evaluate the alternatives while performing local search in the subsolution space, thereby reducing the evaluation cost.
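A rough sketch of what such a substructural local search could look like is given below, under the assumption that the model has already identified the linkage groups and that a per-subsolution fitness estimate is available from the selected population. The linkage groups and the surrogate table are hypothetical.

    # Substructural local search guided by a surrogate fitness model.
    # The linkage groups and the estimated fitness contribution of each
    # subsolution are hypothetical stand-ins for quantities that would be
    # extracted from the probabilistic model and the selected population.

    surrogate = {
        (0, 1, 2): {(1, 1, 1): 3.0, (0, 0, 0): 2.4, (1, 0, 1): 1.1},
        (3, 4, 5): {(1, 1, 1): 3.0, (0, 1, 0): 0.9},
    }

    def substructural_local_search(individual, surrogate):
        # Greedily replace each subsolution with the best alternative
        # according to the surrogate, so no calls to the true fitness
        # function are needed during the local search.
        improved = list(individual)
        for positions, estimates in surrogate.items():
            best_sub = max(estimates, key=estimates.get)
            for pos, bit in zip(positions, best_sub):
                improved[pos] = bit
        return improved

    print(substructural_local_search([0, 0, 0, 0, 1, 0], surrogate))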

We will also investigate hybridization schemes for EDAs to obtain the best features of idealized recombination and mutation operators. Once substructural mutation operators and local search methods are available, it is necessary to investigate how to integrate them with the traditional sampling step (the equivalent of recombination) in EDAs.
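One simple way such a hybrid could be organized is sketched below: each offspring is produced either by sampling the model (the EDA analogue of recombination) or by a substructural hill-climbing step on a selected parent. The mixing probability, the univariate model, and the single-group hill climber are simplifying assumptions and not the project's actual scheme.

    # A hypothetical hybrid variation step mixing model sampling with
    # substructural hill climbing. All names and values are illustrative.

    import random

    def sample_from_model(probs):
        # Recombination-like step: draw a new solution from the model.
        return [1 if random.random() < p else 0 for p in probs]

    def substructural_hill_climb(parent, group=(0, 1, 2)):
        # Mutation-like step: flip the bits of one substructure and keep
        # the change only if it improves the parent (a crude stand-in for
        # a substructural neighborhood search).
        child = list(parent)
        for pos in group:
            child[pos] = 1 - child[pos]
        return child if sum(child) > sum(parent) else list(parent)

    def make_offspring(probs, parent, p_sample=0.9):
        if random.random() < p_sample:
            return sample_from_model(probs)
        return substructural_hill_climb(parent)

    probs = [0.8, 0.7, 0.9, 0.5, 0.6, 0.4]
    parent = [0, 0, 0, 1, 1, 0]
    print(make_offspring(probs, parent))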

The usefulness of sporadic and incremental model building will also be studied: instead of learning the structure and the parameters of the model at each generation, the model-learning phase can be relaxed in several ways so that the cost of model building (which grows with problem size) can be reduced.
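As an example of the sporadic variant, the following sketch rebuilds the model structure only every k generations, while the model parameters (here, subsolution frequencies of a marginal product model) are re-estimated from the selected population at every generation. The stand-in structure learner simply partitions the string into fixed-size groups; in a real EDA this is the expensive step that sporadic model building tries to amortize.

    # Sporadic model building: the structural search runs every k
    # generations, parameter estimation runs every generation.
    # The "structure learner" below is a trivial placeholder.

    import random
    from collections import Counter

    def onemax(x):
        return sum(x)

    def learn_structure(n_bits, group_size=3):
        # Placeholder for an expensive structural search over partitions.
        return [tuple(range(i, i + group_size))
                for i in range(0, n_bits, group_size)]

    def estimate_parameters(structure, selected):
        # Cheap step: frequency of each subsolution within each group.
        return {g: Counter(tuple(ind[i] for i in g) for ind in selected)
                for g in structure}

    def sample_individual(structure, params, n_bits):
        child = [0] * n_bits
        for g in structure:
            subs, weights = zip(*params[g].items())
            chosen = random.choices(subs, weights=weights)[0]
            for pos, bit in zip(g, chosen):
                child[pos] = bit
        return child

    def sporadic_eda(n_bits=12, pop_size=100, n_generations=20, k=5):
        population = [[random.randint(0, 1) for _ in range(n_bits)]
                      for _ in range(pop_size)]
        structure = None
        for generation in range(n_generations):
            population.sort(key=onemax, reverse=True)
            selected = population[:pop_size // 2]
            if generation % k == 0:
                # Sporadic: structure is relearned only every k generations.
                structure = learn_structure(n_bits)
            params = estimate_parameters(structure, selected)
            population = [sample_individual(structure, params, n_bits)
                          for _ in range(pop_size)]
        return max(population, key=onemax)

    print(onemax(sporadic_eda()))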

Another important topic of research will be the offline interpretation of the probabilistic models of EDAs to develop robust operators for specific classes of problems. While EDAs learn the underlying problem structure online, at each generation, that information can also be used offline to develop fixed, structure-based recombination and mutation operators for instances of problems that share the same kind of substructures.
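For example, linkage groups mined offline from the models of previous runs could be turned into a fixed, structure-based crossover that exchanges whole substructures between parents, with no further model building at run time. The sketch below assumes such groups are already known for a 9-bit problem class; the groups themselves are hypothetical.

    # A fixed, structure-based crossover built from linkage groups that
    # were (hypothetically) mined offline from learned model structures.

    import random

    linkage_groups = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]

    def structure_based_crossover(parent_a, parent_b, groups=linkage_groups):
        # Each building block is inherited as a unit from one of the parents.
        child = list(parent_a)
        for group in groups:
            donor = parent_a if random.random() < 0.5 else parent_b
            for pos in group:
                child[pos] = donor[pos]
        return child

    a = [1, 1, 1, 0, 0, 0, 1, 0, 1]
    b = [0, 0, 0, 1, 1, 1, 0, 1, 0]
    print(structure_based_crossover(a, b))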

Finally, we will apply the above techniques to artificial and real-world optimization problems, performing comparative studies with standard versions of EDAs and other state-of-the-art search procedures.

The research work proposed herein will advance the state of the art in evolutionary computation and is likely to produce robust and efficient algorithms that can be applied to a wide range of problems.

Project publications



Last updated by Fernando Lobo, 10/Aug/2011.