Available papers, in chronological order
Genetic Programming and Evolvable Machines, 1(4), pp 339-361, 2000
A New Genetic Algorithm Working on State Domain Order Statistics
M. Schoenauer et al. Eds., Proceedings of Parallel Problem Solving from Nature VI, pp 777-789, LNCS 1917, Springer Verlag, 2000
M. Schoenauer et al. Eds., Proceedings of Parallel Problem Solving from Nature VI, pp 211-220, LNCS 1917, Springer Verlag, 2000
M. Schoenauer et al. Eds., Proceedings of Parallel Problem Solving from Nature VI, pp 529-538, LNCS 1917, Springer Verlag, 2000
Proceedings of Congress on Evolutionary Computation, pp 896-901, IEEE Press, 2000
ESAIM: Proceedings, 2000
I. Parmee, Ed., Evolutionary Design and Manufacture, Springer Verlag, 2000
Evolutionary Computation
Proceedings of Congress on Evolutionary Computation, pp 1110-1116, IEEE Press, 1999
Proceedings of Congress on Evolutionary Computation, pp 1910-1916, IEEE Press, 1999
Evonet Summer School 1999
Z. Ras and A. Skowron, editors, Proceedings of ISMIS'99,
Foundations of Intelligent Systems, 11th International Symposium,
LNAI 1609, Springer Verlag, pp 639-647, 1999
Proceedings of PPSN'98, Springer Verlag, LNCS 1498, pp 57-66, 1998.
Fundamenta Informaticae
Proceedings of ICEC'98, IEEE International Conference on
Evolutionary Computation
Presented at the Seventh Annual Conference on Evolutionary Programming - EP'98
Control and Cybernetics Special Issue on Evolutionary Computation
26(3) pp 307-338, 1997
Artificial Evolution'97, pp 287-299, LNCS 1363, Springer
Verlag, 1997
Genetic Algorithms and Evolution Strategies in Engineering
and Computer Sciences pp 327-340, John Wiley 1997
Proceedings of IJCAI97, pp 888-896, 1997.
Proceedings of the Seventh International Conference on Genetic
Algorithms, Morgan Kaufmann, pp 291-298, 1997.
Proceedings of the Seventh International Conference on Genetic
Algorithms, Morgan Kaufmann, pp 268-275, 1997.
Proceedings of the Seventh International Conference on Genetic
Algorithms, Morgan Kaufmann, pp 322-329, 1997.
in D. Dasgupta and Z. Michalewicz Ed., Evolutionary Algorithms in
Engineering Applications. pp 479-496, Springer Verlag 1997.
The latter question is addressed by recording the last worst trials of
the hill-climbers within an individual termed {\em repoussoir}. The
hill-climbers further explore the neighborhood of their current point
so as to get away from the repoussoir.
As to the former question, no definite answer is proposed. Nevertheless, we
experimentally show that hill-climbers behave quite differently depending on
whether one sets a mutation rate per bit, or sets the exact number of
bits to mutate per individual.
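The two mutation settings compared in the abstract above can be sketched as follows (an illustrative Python sketch; the function names are ours, not the paper's):

```python
import random

def mutate_per_bit(bits, rate):
    """Flip each bit independently with probability `rate`."""
    return [b ^ (random.random() < rate) for b in bits]

def mutate_exact(bits, k):
    """Flip exactly `k` distinct, randomly chosen bits."""
    child = list(bits)
    for i in random.sample(range(len(child)), k):
        child[i] ^= 1
    return child

random.seed(0)
parent = [0] * 100
# With a per-bit rate the number of flipped bits is binomial and varies
# from call to call; with the exact setting it is constant, which is
# what makes the two hill-climbers behave differently.
assert sum(mutate_exact(parent, 3)) == 3
```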
Proceedings of 1997 IEEE International Conference on
Evolutionary Computation, pp 319-324, Indianapolis, April 1997.
Revue Française de Mécanique 4 pp 237-246, 1997
Proceedings of the Sixth Annual Conference
on Evolutionary Programming,
LNCS 1213, pp 247-261, Indianapolis, April 1997.
Revue européenne des éléments finis. Vol 5(5-6), pp 619-648, 1996.
In this paper we (1) discuss difficulties connected with solving the
general nonlinear programming problem, (2) survey several approaches
which have emerged in the evolutionary computation community, and (3)
provide a set of eleven interesting test cases, which may serve as a
handy reference for future methods.
Evolutionary Computation Vol 4(1), pp 1-32, 1996.
Control and Cybernetics, Vol.25 No.5, pp 1059-1088, 1996.
Proceedings of Parallel Problem Solving from Nature, LNCS 1141, Berlin, Sept. 1996.
Proceedings of Parallel Problem Solving from Nature, LNCS 1141, Berlin, Sept. 1996.
- The set of all hypotheses that are
consistent with the data and cover
at least one training example is given an implicit
characterization of polynomial complexity.
The only bias governing this induction
phase is that of the language of hypotheses.
- Classification of further examples is done via interpreting
this implicit theory; the interpretation mechanism allows one to
relax the consistency requirement and tune the specificity of the theory
at no extra induction cost.
Experimental validations demonstrate very good results
on both nominal and numerical datasets.
Proceedings of International Conference on Machine Learning, Bari,
July 1996
This paper investigates the use of machine learning in order to
extract manageable information from this history. More precisely,
induction from examples of past trials and errors provides rules
discriminating errors from successful trials. Such rules make it possible
to estimate a priori the desirability of future trials; this
knowledge can support powerful control strategies.
Several strategies of ML-based control are applied to a genetic
algorithm, and tested on the Royal Road, a GA-deceptive, and a
combinatorial optimization problem. Comparing mutation control with
crossover control yields unexpected results.
Proceedings of International Conference on Machine Learning, Bari,
July 1996
Computers & Industrial Engineering Journal, vol 30(2), April 1996
in P. J. Angeline and K. E. Kinnear, Jr, Editors,
Advances in Genetic Programming II
MIT Press, Cambridge, MA, 1996.
The choice of a representation, i.e. the definition
of the search space, is of vital importance in all Evolutionary
Optimization processes. In the context of Topological
Optimum Design (TOD) in Structural Mechanics, this paper investigates
possible representations for evolutionary shape design.
The goal is the identification of a shape in R^2
having optimal mechanical properties.
Evolutionary Computation has been demonstrated to be a valuable tool
for TOD problems. However, all past results are based
on the straightforward bitstring representation,
whose complexity increases with that of the underlying mechanical
model. To overcome this difficulty, different representations
for shapes are introduced and compared on the benchmark problem
of TOD using various evolutionary schemes.
The results are discussed with respect to the
different degree of epistasis of the representations.
Presented at the Fifth Annual Conference
on Evolutionary Programming, San Diego, March 1996
This paper presents an Evolutionary approach to problems
of shape optimization and identification: the aim is to find
a partition of a given design domain
of the 2D-plane or the 3D-space into
two subsets (e.g. material and void for the Optimal Design problems).
The central issue of this paper is the representation
of such a partition, suitable for its evolutionary optimization.
Three different representations are introduced and discussed, and
comparative experimental results are presented.
In J. Périaux and G. Winter, editors,
Genetic Algorithms in Engineering and Computer Sciences,
John Wiley, 1995
The relative positions of the antennas of a direction finder
are critical regarding the efficiency and the robustness
of the indications of the apparatus.
The state of the art in direction finder
optimization is limited, with classical methods,
to direction finders with 3 antennas;
more complex cases are handled by naive trial-and-error methods.
The use of Genetic Algorithms brings a tremendous improvement
to that domain, allowing the optimization of direction finders
with up to 10 antennas in a reasonable computing time.
Moreover, as GAs provide multiple optimal solutions
on multi-modal problems, a thorough numerical investigation
of all possible optima provides new insight into the
underlying optimization problem, finally leading
to the derivation of a formula for
optimal direction finders.
Presented at GALESIA 95, First IEE/IEEE International Conference
on Genetic Algorithms in Engineering Systems:
Innovations and Applications, Sheffield, September 1995
In this paper, structural topology optimization is addressed
through Genetic Algorithms. A set of designs is evolved
following the Darwinian survival-of-the-fittest principle.
The standard crossover and mutation operators are
tailored to the needs of 2D topology optimization. The
genetic algorithm based on these operators
is tested on plane stress problems of cantilever plates: the
goal is to optimize the weight of the structure
under displacement constraints.
The main advantage of this approach is that it can both find out
alternative optimal solutions, as experimentally
demonstrated on a problem with multiple solutions, and
handle different kinds of mechanical models: some results in
elasticity with large displacements are presented.
In that case, the nonlinear
geometrical effects of the model lead to non-viable solutions,
unless some constraints are imposed on the
stress field.
Presented at 21st ASME Design Automation Conference,
Boston MA, ASME Press, 1995
The problem of minimizing by genetic algorithms the weight of a
composite laminate subjected to various failure
constraints is considered.
Constraints are accounted for through penalty functions.
The amount of penalty for each constraint violation is typically
controlled by a penalty parameter that has
a crucial influence on the
performance of the genetic algorithm. An optimal value of each
penalty parameter exists. It is larger than the smallest value of
the penalty for which the global optimum is feasible.
A generational elitist genetic algorithm is found to be less
efficient for laminate optimization than a genetic algorithm with a more
conservative selection procedure (a ``superelitist'' algorithm). A
segregated genetic algorithm is proposed
that uses a double penalty strategy and is superelitist. The segregated
genetic algorithm performs as well as the superelitist genetic algorithm
for optimal amounts of penalty. In addition, the segregated
genetic algorithm is less sensitive to the choice of penalty
parameters.
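The penalty approach described above amounts to adding a weighted violation term to the objective. A minimal sketch (the function and the toy numbers are illustrative, not taken from the paper):

```python
def penalized_fitness(objective, violations, p):
    """Value to minimize: the objective plus p times the total constraint
    violation. p is the penalty parameter whose choice the paper studies;
    only positive entries of `violations` count as actual violations."""
    return objective + p * sum(max(0.0, v) for v in violations)

# Toy laminate-like example: minimize weight subject to g_i(x) <= 0.
weight = 12.0
violations = [0.4, -0.1]   # first constraint violated, second satisfied
assert penalized_fitness(weight, violations, 10.0) == 16.0
```

Too small a p makes infeasible designs look attractive; too large a p flattens the feasible region's fitness differences, which is why the paper's segregated scheme with a double penalty is appealing.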
ICGA95, the Sixth International Conference
on Genetic Algorithms, Pittsburgh, USA, Morgan Kaufmann,
July 1995
Analytic chromatography is a physical process whose aim is the separation
of the components of
a chemical mixture, based on their different affinities for some
porous medium through which they are percolated.
This paper presents an application of evolutionary recurrent neural nets
optimization to the identification of the internal law of chromatography.
New mutation operators
involving the parameters of a single neuron are introduced.
Furthermore, the strategy for
using the different kinds of mutations takes into account
the past history of the neural net at hand.
The first results for one- and two-component
mixtures demonstrate the basic feasibility of the
recurrent neural net approach. A strategy to improve the robustness
of the results is presented.
Presented at EP95, the fourth annual conference
on Evolutionary Programming, San Diego, USA, MIT Press, March 1995
The control problem of soft-landing a toy lunar module simulation
is investigated in the context of neural nets. While
traditional supervised back-propagation training is inappropriate
for lack of training exemplars, genetic algorithms allow
a controller to be evolved without difficulty: Evolution is a form of
unsupervised learning.
A novelty introduced in this paper is the presentation
of additional renormalized inputs to the net; experiments indicate that the presence of such inputs
allows precision of control to be attained faster, when learning time is measured by the number of
generations for which the GA must run to attain a certain mean performance.
Presented at PPSN94, the third Parallel Problem Solving from Nature conference,
Jerusalem, Israel, Springer Verlag, October 1994
Crossover may achieve the fast
combination of high-performing building blocks; but as a
counterpart, if a crossover point arises in a newly
discovered schema, this schema may just disappear.
We propose to use inductive learning to
control such destructive effects of crossover.
The idea is to periodically gather some examples
of crossovers, labelled as "good" or "bad" according
to their effects given the current population.
From these examples, inductive learning then builds
rules discriminating good from bad crossovers.
This ruleset makes it possible to control
further crossovers: only good crossovers, according to the
rule set, are performed.
Experiments on the Royal Road problem are discussed.
Presented at PPSN94, the third Parallel Problem Solving from Nature conference,
Jerusalem, Israel, Springer Verlag, October 1994
This paper deals with technical issues relevant to artificial neural net (ANN)
training by genetic algorithms. Neural nets have applications ranging
from perception to control;
in the context of control, achieving great precision is more critical
than in pattern recognition
or classification tasks. In previous work, the authors have found that
when employing genetic search to train a net, both precision and training speed
can be greatly enhanced by an input renormalisation technique. In this paper we investigate
the automatic tuning of such renormalisation coefficients, as well as the tuning
of the slopes of the transfer functions of the individual neurons in the net.
Waiting time analysis is presented
as an alternative to the classical "mean performance" interpretation of GA experiments. It is
felt that it provides a more realistic evaluation of the real-world usefulness of a GA.
Presented at EA94, the first Evolution Artificielle conference,
Toulouse, France, Cepadues, September 1994
An induction-based method for retrieving similar cases and/or
easily adaptable cases is presented as a three-step process:
first, a rule set is learned from a data set;
second, a reformulation of the problem domain is derived
from this ruleset; third, a surface similarity
with respect to the reformulated problem
appears to be a structural similarity
with respect to the initial representation
of the domain.
This method achieves some integration between machine
learning and case-based reasoning: it uses both compiled knowledge
(through the similarity measure and the ruleset it
is derived from)
and instantiated knowledge (through the cases).
Presented at CBR94
We present a bottom-up generalization which builds
the maximally general terms
covering a positive example and rejecting negative examples in first-order
logic (FOL), i.e. in terms of Version Spaces, the set G.
This algorithm is based on rewriting negative
examples as constraints
upon the generalization of the positive example at hand.
The constraint space is partially ordered, inducing
a partial order on negative examples; the near-misses
as defined
by Winston can then be formalized in FOL as
negative examples minimal with respect to this partial order.
As expected, only near-misses are necessary to build the set G.
Moreover, constraints can be used
directly to classify further examples.
Presented at ICML94, the International Conference on
Machine Learning, New Brunswick, USA, Morgan Kaufmann, July 1994
The precise docking of a truck at a loading dock has been proposed
by (Nguyen & Widrow 90) as a benchmark
problem for non-linear control by neural-nets.
The main difficulty is that back-propagation is not a priori suitable
as a learning paradigm, because no set of training vectors is available:
It is non-trivial to find solution trajectories that dock the truck from
anywhere in the loading yard.
In this paper we show how a genetic algorithm can evolve the weights
of a feedforward 3-layer neural
net that solves the control problem for a given starting state, achieving a short
trajectory from starting point to goal. The fitness of a net in the population
is a function of both the closest approach to the goal and the distance travelled.
The influence of input data renormalisation on trajectory
precision is also discussed.
Presented at ICEC94, the first International Conference on Evolutionary
Computation, Orlando, USA, IEEE Press, June 1994
Our concern is building the set G of maximally general terms
covering positive examples and rejecting negative
examples in propositional logic.
Negative examples are represented as
constraints on the search space.
This representation allows for defining a partial order on
the negative examples and on attributes too.
It is shown that only minimal negative
examples and minimal attributes are to be considered when
building the set G.
These results hold in case of a non-convergent data set.
Constraints can be directly used for a polynomial characterization
of G. They also allow for detecting
erroneous examples in a data set.
Presented at ECML94, the European Conference on Machine
Learning, Catania, Italy, April 1994
By means of membership functions, measurable information
(size expressed in inches,
age expressed in years) is converted into linguistic qualifiers
(tall, young). It is emphasized that the meaning of linguistic
qualifiers is context-dependent and may depend on several attributes.
Therefore, the difference between
membership functions and fuzzy rules vanishes.
A method inspired by inductive learning is presented: it
performs the extraction of membership functions and/or fuzzy rules
from examples.
in Uncertainty Modelling, Theory and Applications,
B. Ayyub and M.M. Gupta Eds,
series Machine Intelligence and Pattern Recognition,
North Holland, 1994
We present a general method of handling constraints
in genetic optimization, based on the Behavioural Memory
paradigm. Instead of requiring the problem-dependent
design of either repair operators (projection on the feasible region)
or penalty function (weighted sum of the constraints violations
and the objective function), we sample the feasible region
by evolving from an initially random population, successively applying
a series of different
fitness functions which embody constraint satisfaction. The final step
is the optimization of the objective function restricted to
the feasible region.
The success of the whole process is highly dependent on the
genetic diversity maintained during the first steps, ensuring
a uniform sampling of the feasible region.
This method succeeded on some truss structure optimization problems,
where the other genetic techniques for handling the constraints failed
to give good results. Moreover, in some domains, such as the automatic generation
of software test data, no other technique can easily be applied,
as some constraints are not even computable until others
are satisfied.
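A toy sketch of the Behavioural Memory staging described above (the evolution loop, the problem, and all constants are our illustrative choices, not the paper's):

```python
import random

def evolve(pop, fitness, steps=500):
    """Tiny mutation-selection loop over a population of real vectors."""
    for _ in range(steps):
        i = random.randrange(len(pop))
        child = [x + random.gauss(0, 0.1) for x in pop[i]]
        if fitness(child) >= fitness(pop[i]):
            pop[i] = child
    return pop

# Toy problem: maximize x + y subject to x <= 1 and y <= 1.
constraints = [lambda v: 1.0 - v[0], lambda v: 1.0 - v[1]]  # >= 0 when satisfied
feasible = lambda v: all(g(v) >= 0 for g in constraints)

random.seed(1)
pop = [[random.uniform(0, 3), random.uniform(0, 3)] for _ in range(20)]

# Constraint-satisfaction stages: each stage's fitness rewards one more
# constraint; it tops out at 0 once the constraint holds.
for g in constraints:
    pop = evolve(pop, lambda v, g=g: min(0.0, g(v)))

# Final stage: the objective, restricted to the sampled feasible region.
pop = evolve(pop, lambda v: v[0] + v[1] if feasible(v) else float("-inf"))
best = max(pop, key=lambda v: v[0] + v[1] if feasible(v) else float("-inf"))
```

As the abstract stresses, the scheme only works if the stages preserve enough diversity to sample the feasible region uniformly; this sketch omits any explicit diversity maintenance.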
Presented at ICGA93, the fourth International Conference on Genetic
Algorithms, Urbana Champaign, USA, Morgan Kaufmann, July 1993

On functions with a given fitness--distance relation
Leila Kallel, Bart Naudts and Marc Schoenauer
Recent work stresses the limitations of fitness distance correlation
(FDC) as an indicator of landscape difficulty for genetic algorithms
(GAs). Realizing that the fitness distance correlation (FDC) value
cannot be reliably related to landscape difficulty, we investigate
whether an interpretation of the whole correlation plot can yield reliable
information about the behavior of the GA. Our approach is as
follows. We present a generic method for constructing fitness
functions which share the same fitness versus distance-to-optimum
relation (FD relation). Special attention is given to FD relations
which show no local optimum in the correlation plot, as is the case
for the relation induced by Horn's longpath. We give an inventory of
different types of GA behavior found within a class of fitness
functions with a common correlation plot. We finally show that GA
behavior can be very sensitive to small modifications of the
fitness--distance relation.
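The FDC statistic the paper builds on can be sketched as the Pearson correlation between fitness and Hamming distance to a known optimum, over a random sample (an illustrative sketch, not the paper's code):

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fdc(points, fitness, optimum):
    """Pearson correlation between fitness and distance-to-optimum."""
    d = [hamming(p, optimum) for p in points]
    f = [fitness(p) for p in points]
    n = len(points)
    md, mf = sum(d) / n, sum(f) / n
    cov = sum((di - md) * (fi - mf) for di, fi in zip(d, f)) / n
    sd = (sum((di - md) ** 2 for di in d) / n) ** 0.5
    sf = (sum((fi - mf) ** 2 for fi in f) / n) ** 0.5
    return cov / (sd * sf)

# OneMax: fitness is the number of ones, optimum is the all-ones string,
# so fitness = n - distance and the FDC is exactly -1.
random.seed(0)
n = 30
sample = [[random.randint(0, 1) for _ in range(n)] for _ in range(200)]
assert abs(fdc(sample, sum, [1] * n) + 1.0) < 1e-6
```

The paper's point is precisely that this single scalar can be identical for landscapes on which a GA behaves very differently, which is why the full correlation plot is examined instead.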
How to detect all maxima of a function?
J. Garnier and L. Kallel
This paper starts with a theoretical investigation of a family of
landscapes characterized by the number N of their local optima and
the distribution of the sizes alpha_j of their attraction
basins. We then propose a practical methodology for identifying
these quantities (N and the alpha_j distribution)
for an unknown
landscape, given a random sample on that landscape and a local
steepest ascent search.
This methodology applies to any landscape specified with a
modification operator and provides bounds on search complexity (to
detect all local optima) when using the modification operator at
hand. Experiments demonstrate the efficiency of this methodology for
guiding the choice of modification operators, leading to the design
of problem-dependent optimization heuristics.
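The sampling methodology can be sketched as repeated steepest ascents from random starts: the set of distinct optima reached estimates N, and the hit counts estimate the relative basin sizes alpha_j (an illustrative sketch on a toy landscape, not the paper's estimator):

```python
import random
from collections import Counter

def steepest_ascent(x, fitness, neighbors):
    """Climb to a strict local optimum under the given neighborhood."""
    while True:
        best = max(neighbors(x), key=fitness)
        if fitness(best) <= fitness(x):
            return x
        x = best

def flip_neighbors(x):
    """One-bit-flip neighborhood of a bitstring."""
    return [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(len(x))]

# Two-peak toy landscape on 8 bits: optima at all-zeros and all-ones.
f = lambda x: max(sum(x), len(x) - sum(x))

random.seed(0)
hits = Counter()
for _ in range(200):
    start = [random.randint(0, 1) for _ in range(8)]
    hits[tuple(steepest_ascent(start, f, flip_neighbors))] += 1

assert set(hits) == {(0,) * 8, (1,) * 8}   # N = 2 local optima found
# hits[opt] / 200 estimates the relative size alpha_j of each basin
```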
Evolutionary Algorithms as Fitness Function Debuggers
F. Mansanne, F. Carrère, A. Ehinger, and M. Schoenauer
All experienced Evolutionary Algorithms practitioners
emphasize the need
for a careful design of the fitness function.
It is commonly heard, for instance, that {\em ``If
there is a bug in your fitness function, the EA will find it''.}
This paper presents a case study of such a situation
in the domain of geophysical
underground identification: some weird solutions are found by the
Evolutionary Algorithm, obviously physically
absurd, but fulfilling almost perfectly the geophysical criterion.
Inside GA Dynamics: Ground Basis for Comparison
Leila Kallel
This paper reviews some of the commonly used methods to compare GA
performance on different problems, or to compare the suitability of GA
parameters and operators. This points out the scaling issue that arises when
differently distributed fitness functions are compared. Hamming
fitness functions are then proposed as a basis to relax the
dependence of GA-performance-measures on fitness scale or
distribution, and to account for any GA parameters bias. Different
measures are also proposed, distinguishing convergence time from GA
radial and effective trajectory in Hamming space.
Revisiting the Memory of Evolution
Michèle Sebag, Marc Schoenauer and Mathieu Peyral
A new evolution scheme is presented, memorizing the extreme
(best and worst) past
individuals through distributions over the binary search space. These
distributions are used to bias the mutation operator in a
($\mu+\lambda$) Evolution Strategy, guiding the
generation of newborn offspring: different {\em mimetic strategies}
are defined,
combining attraction, indifference or repulsion with respect to
the two distributions. These
distributions are then updated from the best and the worst individuals
in the current population. Experiments on large-size binary problems
allow one to delineate the niche of each of these mimetic strategies.
Non-parametric identification of geological models
M. Schoenauer, A. Ehinger and B. Braunschweig
Many problems to be solved in geophysical processing can be expressed
in terms of identification of spatial geological models: given a
function Phi applied to a geological model Gamma, producing a
result R, the problem is to find Gamma* such that
Phi(Gamma*) = R*, where R* is the expected
result: a seismogram, a pressure curve, a seismic cross-section, etc.
The presented research deals with the joint use of evolutionary
algorithms and
Voronoi diagrams to address some non-parametric instances of
identification
problems in geophysics, i.e. without a priori hypotheses about the
geometrical layout of possible solutions. In this paper, a first
application in velocity determination for seismic imaging demonstrates
the ability of this approach to identify both the geometry and the
velocities of the underground from experimental seismograms.
Sphere Operators and Their Applicability for Constrained
Parameter Optimization Problems
Marc Schoenauer and Zbigniew Michalewicz
In this paper we continue our earlier studies
on boundary operators for constrained parameter optimization
problems. We discuss thoroughly one particular class of boundary operators ---
sphere operators --- and argue about their applicability to
general constrained problems through a
mapping between the boundary of the feasible region of the search
space and a sphere. We provide some experimental evaluation
of these transformations, on some problems from our earlier studies,
as well as on problems with convex search space.
Evolutionary Computation: An Introduction
Marc Schoenauer and Zbigniew Michalewicz
Evolutionary computation techniques have received a lot of attention
regarding their potential as optimization techniques for complex
real-world problems. These techniques, based on the powerful principle
of ``survival of the fittest'', model some natural phenomena of
genetic inheritance and Darwinian strife for survival; they also constitute
an interesting category of modern heuristic search. This introductory
article presents the main paradigms of evolutionary algorithms
(genetic algorithms, evolution strategies, evolutionary programming,
genetic programming) as well as other (hybrid) methods of evolutionary
computation. Two particular research directions (parallel
evolutionary techniques and self-adaptation) are discussed further in
the last part of this paper.
A priori comparison of binary crossover operators:
No universal statistical measure, but a set of hints
Leila Kallel and Marc Schoenauer
The choice of an operator in Evolutionary Computation is generally
based on comparative runs of the algorithms. However, some
statistical a priori measures, based on samples of
(parent, offspring) pairs,
have been proposed to avoid such brute-force comparisons.
In this paper we survey some of these works in the context of
binary crossover operators. We first extend these measures to overcome
some of
their limitations.
Unfortunately, experimental results on well-known binary problems
suggest that
any of the measures used here can give false indications. Being all
defined as averages, they can miss the important parts:
none of the tested measures can pretend to be a good indicator alone.
However, considering together the Expected Mean Improvement to a
target value and the Fitness Operator Correlation gives the best
predictive results.
Moreover, detailed insights on the samples, based on some graphical
layouts of the offspring fitnesses, seem to allow more accurate
predictions on the respective performances of binary crossover operators.
Parametric and non-parametric identification of macro-mechanical models
Michèle Sebag, Marc Schoenauer and Habibou Maitournam
This chapter presents evolutionary identification of a particular form of behavioral laws for elasto-visco-plastic materials termed rheological models.
First, rheological models with known graph of connections are used,
and the identification amounts to identifying real-valued parameters.
But the more challenging task of identifying also the topology of
the rheological model can be tackled using Genetic Programming, after
turning those models into parse-trees. Both approaches are compared
and discussed on a simple artificial example.
Tractable Induction and Classification in First Order
Logic Via Stochastic Matching.
Michèle Sebag and Céline Rouveirol
Learning in first-order logic (FOL) languages suffers from a specific
difficulty: both
induction and classification are potentially exponential in the size of
hypotheses. This difficulty is usually dealt with
by limiting the size of hypotheses, via either syntactic restrictions
or search strategies.
This paper is concerned with polynomial induction and use of
FOL hypotheses with no size restrictions. This is done via stochastic matching:
instead of exhaustively exploring the set of matchings between any example
and any short candidate hypothesis, one
stochastically explores the set of matchings between any example
and any candidate hypothesis. The user sets the number of matching
samples to consider and thereby controls the cost of induction
and classification.
One advantage of this heuristic is to allow for resource-bounded learning,
without any a priori knowledge about the problem domain.
Experiments on a real-world problem pertaining to organic chemistry
fully demonstrate the potential of the
approach regarding both predictive accuracy
and computational cost.
Toward Civilized Evolution: Developing Inhibitions.
Michèle Sebag and Marc Schoenauer
Most evolutionary algorithms concerned with a memory of evolution
aim at memorizing and reusing the recipes of past successes
(e.g. fruitful operators or fruitful mutation directions).
The scheme proposed here follows the opposite track, and memorizes the
past failures of evolution (unfit offspring) through a
virtual individual termed the {\em virtual loser}. The underlying
metaphor is that offspring should attempt to get further away from the
virtual loser
than their parents did. This is done by a new evolution operator, termed
{\em flee-mutation}, as an alternative to standard recombination and mutation.
Special attention is paid to adjusting the flee-mutation step size.
Experiments on large-size problems validate this approach, and unexpectedly
show that a constant flee-mutation step size over a population is desirable.
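A minimal deterministic sketch of the virtual-loser idea (the update rate, step size, and bit-scoring rule below are our illustrative choices; the paper's step-size adaptation is more involved):

```python
def update_loser(loser, unfit, eta=0.1):
    """Relax the per-bit loser distribution toward a new unfit offspring."""
    return [(1 - eta) * p + eta * b for p, b in zip(loser, unfit)]

def flee_mutation(parent, loser, step=3):
    """Flip `step` bits, preferring bits where the parent agrees most
    with the virtual loser's per-bit distribution."""
    agreement = [p if b == 1 else 1 - p for b, p in zip(parent, loser)]
    order = sorted(range(len(parent)), key=lambda i: -agreement[i])
    child = list(parent)
    for i in order[:step]:
        child[i] ^= 1
    return child

# Bit 0 of past losers is usually 1, bit 1 usually 0; bits 2-3 carry no signal.
loser = [0.9, 0.1, 0.5, 0.5]
parent = [1, 0, 1, 0]
assert flee_mutation(parent, loser, step=2) == [0, 1, 1, 0]  # loser-like bits flipped
```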
Alternative Random Initialization in Genetic Algorithms.
Leila Kallel and Marc Schoenauer
Though unanimously recognized as a crucial step in Evolutionary
Algorithms, initialization procedures have received little attention so far.
In bitstring Genetic Algorithms, for instance, the standard
0/1 equiprobable choice for every bit is rarely discussed,
as the resulting distribution probability over the whole bitstring
space is uniform.
However, uniformity is relative to a measure on the search space.
First, considering the measure given by the density of 1's leads
naturally to the {\em Uniform Covering} initialization procedure. Second,
taking into account the probability of appearance of sequences of
identical bits leads to another alternative initialization
procedure, the {\em Homogeneous Block} procedure.
These procedures are compared with the standard initialization procedure
on several problems.
An a priori comparison is achieved using
Fitness-Distance Correlation. Actual experiments
demonstrate the accuracy of these FDC-based comparisons, and emphasize the
usefulness of the two proposed procedures.
Boundary Operators for Constrained Parameter
Optimization Problems.
Marc Schoenauer and Zbigniew Michalewicz
In this paper we continue our earlier study [in PPSN96] on
boundary operators for constrained parameter optimization
problems. The significance of this line of research is based on the
observation that usually the global solution for many optimization
problems lies on the boundary of the feasible region. Thus, for many
constrained numerical optimization problems it might be beneficial to
search just the boundary of the search space defined by a set of
constraints (some other algorithm might be used for searching the
interior of the search space, if activity of a constraint is not
certain). We present a few boundary operators for a sphere and provide
their experimental evaluation.
Identification of Mechanical Inclusions
Marc Schoenauer, Francois Jouve and Leila Kallel
Evolutionary Algorithms provide a general approach
to inverse problem solving: As optimization methods, they
only require the computation of values of the function to optimize.
Thus, the only prerequisite to efficiently handle inverse problems
is a good numerical model of the direct problem, and
a representation for potential solutions.
The identification of mechanical inclusions, even in the linear
elasticity framework, is a difficult, theoretically ill-posed problem:
Evolutionary Algorithms are in that context a good candidate
for a robust numerical method, as standard deterministic
algorithms have proven inaccurate and unstable.
However, great attention must be given to the implementation.
The representation, which determines the search space, is
critical for a successful application of Evolutionary Algorithms
to any problem.
Two original representations are presented for the inclusion
identification problem, together with the associated
evolution operators (crossover and mutation).
Both provide outstanding results on simple instances of the identification
problem, including experimental robustness in the presence of noise.
A Society of Hill-Climbers
Michèle Sebag and Marc Schoenauer
This paper is concerned with function optimization in binary search spaces.
It focuses on how
hill-climbers can work together and/or use their past trials in
order to speed up the search.
A hill-climber is viewed as a set of mutations. The challenge is
twofold: one must determine how many bits should be mutated, and
which bits should preferably be mutated, or in other words,
which climbing directions are to be preferred.
Two algorithms implementing societies of hill-climbers, with and without memory
of past trials, are described. They are tested on several 900-bit
problems, and significantly outperform standard Genetic Algorithms and
Evolution Strategies.
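The hill-climber-as-a-set-of-mutations view can be sketched in a few lines of Python. This is a hypothetical, memoryless variant run on a toy OneMax fitness, not the paper's 900-bit benchmarks; all names are illustrative:

```python
import random

def onemax(bits):
    """Toy fitness: number of ones (stands in for the paper's benchmarks)."""
    return sum(bits)

def hill_climb_step(bits, fitness, k=1):
    """One trial of a hill-climber seen as a set of mutations:
    flip k random bits and keep the result only if fitness does not worsen."""
    child = bits[:]
    for i in random.sample(range(len(bits)), k):
        child[i] = 1 - child[i]
    return child if fitness(child) >= fitness(bits) else bits

def society(pop_size=10, n=50, steps=200, fitness=onemax):
    """A memoryless society: independent climbers, each drawing the number
    of mutated bits k at random at every step."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(steps):
        pop = [hill_climb_step(ind, fitness, k=random.choice([1, 2, 3]))
               for ind in pop]
    return max(pop, key=fitness)
```

The memory-based variants of the paper additionally bias the choice of which bits to mutate using the recorded outcomes of past trials.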
Optimisation topologique de formes par algorithmes génétiques
Couro Kane and Marc Schoenauer
The work presented in this paper focuses on the problem of topological
shape optimization, and uses genetic algorithms.
A population of shapes evolves according to the Darwinian principle
that the fittest survive, the fitness function driving the individuals
of this population toward optimal structures.
We demonstrate the flexibility of this approach, which yields results
beyond the reach of the standard algorithms of the field;
the price to pay is a high computational cost.
Thus, several near-optimal solutions can be obtained for the
same problem; the optimization can be carried out under constraints
on more than one loading; the loading can be applied
on the unknown boundary of the final structure (as, for instance,
when optimizing an underwater dome); finally, this method
can be applied to any kind of material and any mechanical
model, as witnessed by the results in nonlinear elasticity,
the first of their kind to our knowledge.
The keys to these successes are adaptations of the general
genetic-algorithm method to the problem at hand. First,
specialized genetic operators were used in place of the
standard ones. Second, the fitness function had to be modified
after the first results using the large-displacement elasticity
model led to non-viable designs:
a constraint on the mechanical stress field was incorporated
into the fitness function.
Inductive Learning of Mutation Step-size in Evolutionary Parameter Optimization
Michèle Sebag, Marc Schoenauer, and Caroline Ravisé
The problem of setting the mutation step-size for real-coded evolutionary
algorithms has received different answers: exogenous
rules like the $1/5$ rule, or endogenous factors
like the self-adaptation of the step-size in the Gaussian
mutation of modern Evolution Strategies.
On the other hand, in the bitstring framework,
the control of both crossover and mutation by means of
Inductive Learning has proven beneficial to evolution,
mostly by recognizing -- and forbidding -- past errors (i.e., crossovers or
mutations leading to offspring that will not survive the next
selection step).
This Inductive Learning-based control is transposed to the control
of mutation step-size in evolutionary parameter optimization,
and the resulting algorithm is experimentally compared to
the self-adaptive step-size of Evolution Strategies.
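For reference, the exogenous $1/5$ rule mentioned above can be sketched as follows: a minimal (1+1)-ES for minimization, where the window length and update factors are common textbook choices, not values taken from the paper:

```python
import random

def one_fifth_rule(f, x, sigma=1.0, iters=500, window=10):
    """Minimize f with a (1+1)-ES driven by the 1/5th success rule:
    if more than 1/5 of the mutations in the last window succeeded,
    enlarge the step-size sigma; otherwise shrink it."""
    fx = f(x)
    successes = 0
    for t in range(1, iters + 1):
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:          # keep the offspring only if it improves
            x, fx = y, fy
            successes += 1
        if t % window == 0:  # periodic exogenous step-size update
            if successes > window / 5.0:
                sigma *= 1.22   # success rate too high: take larger steps
            else:
                sigma *= 0.82   # success rate too low: take smaller steps
            successes = 0
    return x, fx
```

The self-adaptive alternative compared in the paper instead encodes sigma inside each individual and lets selection tune it.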
Mechanical Inclusions Identification by Evolutionary Computation.
Marc Schoenauer, Leila Kallel and Francois Jouve
The problem of identifying mechanical inclusions is theoretically ill-posed,
and numerical algorithms to date have proven inaccurate
and unstable. On the other hand,
Evolutionary Algorithms provide a general approach
to inverse problem solving.
However, great care must be taken during the implementation:
The choice of the representation, which determines the search space, is
critical.
Three representations are presented and discussed.
Whereas the straightforward mesh-dependent
representation suffers from strong limitations,
both mesh-independent representations
provide outstanding results on simple instances
of the identification
problem, including experimental robustness in the presence of noise.
Evolutionary Algorithms for Constrained Parameter
Optimization Problems.
Zbigniew Michalewicz and Marc Schoenauer
Evolutionary computation techniques have received a lot of attention
regarding their potential as optimization techniques for complex
numerical functions. However, they have not produced a significant
breakthrough in the area of nonlinear programming due to the fact that
they have not addressed the issue of constraints in a systematic way.
Only recently have several methods been proposed for handling
nonlinear constraints in evolutionary algorithms for numerical
optimization problems; however, these methods have several drawbacks,
and the experimental results on many test cases have been disappointing.
Topological Optimum Design using Genetic Algorithms
C. Kane and M. Schoenauer
Structural topology optimization is addressed
through Genetic Algorithms: A set of designs is evolved
following the Darwinian {\em survival of the fittest} principle.
The goal is to optimize the weight of the structure
under displacement constraints.
This approach demonstrates high
flexibility and breaks many limits of
standard optimization algorithms, in spite of heavy requirements
in terms of computational effort:
Alternate optimal solutions to the same problem can be found;
Structures can be optimized with respect to multiple loadings;
The prescribed loadings can be applied on the unknown boundary
of the solution, rather than on the fixed boundary of the design domain;
Different materials as well as different mechanical models
can be used, as witnessed by the first results of Topological Optimum Design
ever obtained in the large displacements model.
But these results could not have been obtained without
careful specific handling of the specific aspects
of topological genetic optimization:
First, specific genetic operators
(crossover, mutation) were introduced; Second, special
attention was paid to the design of the objective function;
The nonlinear
geometrical effects of the large-displacement model
lead to non-viable solutions
unless some constraints are imposed on the
stress field.
Mutation by Imitation in Boolean Evolution Strategies
M. Sebag and M. Schoenauer
Adaptive heuristics have been developed in the Evolution Strategy
framework for the mutation of real-valued variables.
But these heuristics extend poorly to
discrete variables: when the rate or variance of
mutation gets too small, mutation no longer has any effect.
To overcome this problem, we propose two mutation
operators that use the
worst individuals of the current population as
beacons indicating the limits of the current promising region:
Mutation by differentiation drives individuals away from
the beacon-individuals.
Mutation by imitation inversely assumes that beacon-individuals still
contain relevant information, and aims at moving the individual
at hand nearer to the beacons. Mutation by imitation produces offspring
that share the features of several ``parents''; but in contrast with
classical crossover, it allows one to control the distance between the
offspring and the main parent, by fixing the number of bits to mutate.
Mutation by imitation thus permits a tunable exchange of information
among individuals.
Both operators have been implemented in a boolean (mu + lambda)-ES framework,
and tested on four problems: the Royal Road problem,
a GA-deceptive problem, the combinatorial multiple-knapsack optimization
problem, and the Long Path problem. A comparative validation is presented
and discussed.
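A minimal sketch of the two operators on bitstrings follows (illustrative Python, not the authors' implementation; the beacon would be drawn from the worst individuals of the current population):

```python
import random

def mutate_by_imitation(ind, beacon, k=2):
    """Copy up to k bits from a beacon, moving ind closer to it.
    The number of copied bits tunes the offspring's distance
    to its main parent ind."""
    differing = [i for i in range(len(ind)) if ind[i] != beacon[i]]
    child = ind[:]
    for i in random.sample(differing, min(k, len(differing))):
        child[i] = beacon[i]
    return child

def mutate_by_differentiation(ind, beacon, k=2):
    """Flip up to k bits on which ind agrees with the beacon,
    driving ind away from it."""
    agreeing = [i for i in range(len(ind)) if ind[i] == beacon[i]]
    child = ind[:]
    for i in random.sample(agreeing, min(k, len(agreeing))):
        child[i] = 1 - child[i]
    return child
```

Imitation decreases the Hamming distance to the beacon by exactly k (when at least k bits differ), while differentiation increases it by k, which is what makes the exchange of information tunable.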
Evolutionary Computation at the Edge of
Feasibility
M. Schoenauer and Z. Michalewicz
Numerical optimization problems enjoy significant popularity in the
evolutionary computation community; all major evolutionary techniques
use such problems for various tests and experiments.
However, many of these techniques (as well as other, classical
optimization methods) encounter difficulties in solving some real-world
problems which include non-trivial constraints.
This paper discusses a new development which is based on the
observation that very often the global solution
lies on the boundary of the feasible region. Thus, for many
constrained numerical optimization problems it might be beneficial to
limit the search to that boundary, using problem-specific operators.
Two test cases illustrate this approach: specific operators
are designed from the simple analytical expression of the
constraints. Some possible generalizations
to larger classes of constraints are discussed as well.
Delaying the Choice of Bias: A Disjunctive Version Space Approach
M. Sebag
This paper is concerned with alleviating the choice of learning
biases via a two-step process.
An Advanced Evolution Should Not Repeat Its Past Errors
Caroline Ravisé and Michele Sebag
A safe control of genetic evolution consists in preventing past errors
of evolution from being repeated. This could be done by keeping track
of the history of evolution, but maintaining and exploiting the
complete history is intractable.
Evolutionary Algorithms for Constrained Engineering Problems
Z. Michalewicz, D. Dasgupta, R. Leriche and M. Schoenauer
Evolutionary computation techniques receive ever-increasing attention
regarding their potential as optimization techniques for complex
problems. One of the recent areas of application for these techniques
is in the field of industrial engineering; these include scheduling
and sequencing in manufacturing systems, computer-aided design,
facility layout and location problems, distribution and transportation
problems, and many others.
In this paper we concentrate on the issue of constraint handling
methods for evolutionary computation techniques. This general
discussion is followed by three case studies: a
truss structure optimization problem, the design of a composite laminated
plate, and the unit commitment problem.
Evolutionary Identification of Macro-Mechanical Models
M. Schoenauer, M. Sebag, F. Jouve, B. Lamy, and H. Maitournam
This chapter illustrates the potential of genetic programming (GP) in
the field of macro-mechanical modeling, addressing the problem of
identification of a mechanical model for a material. Two kinds of
models are considered. One-dimensional dynamic models are represented
via symbolic formulations termed {\em rheological models}, which are
directly evolved by GP. Three-dimensional static models of
hyperelastic materials are expressed in terms of strain energy
functions. A model is rated based on the distance between the
behavior predicted by the model, and the actual behavior of the
material given by a set of mechanical experiments. The choice of GP
is motivated by strong arguments, relying on the tree-structure of
rheological models in the first case, and on the need for first and
second order derivatives in the second case. Key issues are the
exploration of viable individuals only, and the use of Gaussian
mutations to optimize numerical constants.
Shape Representations and Evolution Schemes
Marc Schoenauer
Representations for Evolutionary Optimization and Identification
in Structural Mechanics
Marc Schoenauer
Optimization of direction finders by Genetic Algorithms
Lionel Taieb and Marc Schoenauer
Structural topology optimization in linear and nonlinear
elasticity using genetic algorithms
Couro Kane, François Jouve and Marc Schoenauer
A Segregated Genetic Algorithm for Constrained
Structural Optimization
Rodolphe G. Le Riche, Catherine Knopf-Lenoir
and Raphael T. Haftka
Evolutionary Chromatographic Law
Identification by Recurrent Neural Nets
Alessandro Fadda and Marc Schoenauer
Genetic Lander: An Experiment in Accurate Neuro-Genetic Control
Edmund Ronald and Marc Schoenauer
Controlling Crossover through Inductive Learning
Michele Sebag and Marc Schoenauer
Genetic Extensions of Neural Net Learning:
Transfer Functions and Renormalisation Coefficients
Marc Schoenauer and Edmund Ronald
A Rule-Based Similarity Measure
Michele Sebag and Marc Schoenauer
A Constraint-Based Induction Algorithm in FOL
Michele Sebag
Neuro-Genetic Truck Backer-Upper Controller
Marc Schoenauer and Edmund Ronald
Using Constraints to Building Version Spaces
Michele Sebag and Marc Schoenauer
Inductive Learning of Membership Functions
and Fuzzy Rules
Michele Sebag and Marc Schoenauer
Constrained GA optimization
Marc Schoenauer and Spyros Xanthakis