Available papers (chronological order)
Genetic Programming and Evolvable Machines, 1(4), pp 339-361, 2000
A New Genetic Algorithm Working on State Domain Order Statistics
Daniel Delahaye and Stéphane Puechmorel - PPSN 2000
M. Schoenauer et al. Eds., Proceedings of Parallel Problem Solving from Nature VI, pp 777-789, LNCS 1917, Springer Verlag, 2000
M. Schoenauer et al. Eds., Proceedings of Parallel Problem Solving from Nature VI, pp 211-220, LNCS 1917, Springer Verlag, 2000
M. Schoenauer et al. Eds., Proceedings of Parallel Problem Solving from Nature VI, pp 529-538, LNCS 1917, Springer Verlag, 2000
Proceedings of Congress on Evolutionary Computation, pp 896-901, IEEE Press, 2000
ESAIM: Proceedings, 2000
I. Parmee, Ed., Evolutionary Design and Manufacture, Springer Verlag, 2000
Evolutionary Computation
Proceedings of Congress on Evolutionary Computation, pp 1110-1116, IEEE Press, 1999
Proceedings of Congress on Evolutionary Computation, pp 1910-1916, IEEE Press, 1999
Evonet Summer School 1999
Z. Ras and A. Skowron editors, Proceedings of ISMIS'99, Foundation of Intelligent Systems 11th International Symposium LNAI 1609, Springer Verlag, pp 639-647, 1999
Proceedings of PPSN'98 Springer Verlag, LNCS 1498, pp 57-66, 1998.
Fundamenta Informaticae
Proceedings of ICEC'98, IEEE International Conference on Evolutionary Computation, LNAI 1609, Springer Verlag, pp 639-647, 1999
Presented at the Seventh Annual Conference on Evolutionary Programming - EP'98
Control and Cybernetics, Special Issue on Evolutionary Computation, 26(3), pp 307-338, 1997
Artificial Evolution'97, pp 287-299, LNCS 1363, Springer Verlag, 1997
Genetic Algorithms and Evolution Strategies in Engineering and Computer Sciences pp 327-340, John Wiley 1997
Proceedings of IJCAI97, pp 888-896, 1997.
Proceedings of the Seventh International Conference on Genetic Algorithms, Morgan Kaufmann, pp 291-298, 1997.
Proceedings of the Seventh International Conference on Genetic Algorithms, Morgan Kaufmann, pp 268-275, 1997.
Proceedings of the Seventh International Conference on Genetic Algorithms, Morgan Kaufmann, pp 322-329, 1997.
in D. Dasgupta and Z. Michalewicz, Eds., Evolutionary Algorithms in Engineering Applications, pp 479-496, Springer Verlag, 1997.
The latter question is addressed by recording the last worst trials of
the hill-climbers within an individual termed the repoussoir. The hill-climbers
further explore the neighborhood of their current point so as to get away from
the repoussoir.
As to the former question, no definite answer is proposed. Nevertheless, we
experimentally show that hill-climbers behave quite differently depending on
whether one sets a mutation rate per bit, or sets the exact number of
bits to mutate per individual.
Two algorithms implementing societies of hill-climbers, with and without memory
of past trials, are described. These are tested on several 900-bit
problems, and significantly outperform standard Genetic Algorithms and
Evolution Strategies.
Proceedings of 1997 IEEE International Conference on Evolutionary Computation, pp 319-324, Indianapolis, April 1997.
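The repoussoir mechanism described in the abstract above can be sketched as follows. This is a minimal single-hill-climber illustration: the per-bit averaging update of the repoussoir, the relaxation factor `alpha`, and the flip-weighting scheme are all assumptions made for the sketch, not the authors' exact algorithm.

```python
import random

def repoussoir_search(fitness, n_bits=32, steps=200, seed=0):
    """Toy hill-climber biased away from a 'repoussoir'.

    The repoussoir is a per-bit running average of the worst recent
    (rejected) trials; mutation preferentially flips positions where the
    current point agrees with the repoussoir, so accepted moves tend to
    move away from it. Illustrative sketch only.
    """
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_fit = current[:], fitness(current)
    repoussoir = [0.5] * n_bits   # per-bit estimate of "bad" values
    alpha = 0.1                   # relaxation factor (assumed value)
    for _ in range(steps):
        # prefer flipping positions where current agrees with the repoussoir
        weights = [1.0 - abs(current[i] - repoussoir[i]) + 0.01
                   for i in range(n_bits)]
        i = rng.choices(range(n_bits), weights=weights)[0]
        trial = current[:]
        trial[i] ^= 1
        f = fitness(trial)
        if f >= fitness(current):
            current = trial
            if f > best_fit:
                best, best_fit = trial[:], f
        else:
            # record the rejected (worse) trial in the repoussoir
            for j in range(n_bits):
                repoussoir[j] = (1 - alpha) * repoussoir[j] + alpha * trial[j]
    return best, best_fit
```

On a OneMax-style fitness (`sum`), the sketch steadily accumulates set bits while the repoussoir drifts toward rejected configurations.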
Revue Française de Mécanique 4 pp 237-246, 1997
Proceedings of the Sixth Annual Conference on Evolutionary Programming, LNCS 1213, pp 247-261, Indianapolis, April 1997.
Revue européenne des éléments finis. Vol 5(5-6), pp 619-648, 1996.
In this paper we (1) discuss difficulties connected with solving the general nonlinear programming problem, (2) survey several approaches which have emerged in the evolutionary computation community, and (3) provide a set of eleven interesting test cases, which may serve as a handy reference for future methods.
Evolutionary Computation Vol 4(1), pp 1-32, 1996.
Control and Cybernetics, Vol.25 No.5, pp 1059-1088, 1996.
Proceedings of Parallel Problem Solving from Nature, Berlin, Sept. 1996. LNCS 1141
Proceedings of Parallel Problem Solving from Nature, Berlin, Sept. 1996. LNCS 1141
- The set of all hypotheses that are consistent with the data and cover at least one training example is given an implicit characterization of polynomial complexity. The only bias governing this induction phase is that of the language of hypotheses.
- Classification of further examples is done via interpreting this implicit theory; the interpretation mechanism allows one to relax the consistency requirement and tune the specificity of the theory at no extra induction cost.
Experimental validations demonstrate very good results on both nominal and numerical datasets.
Proceedings of International Conference on Machine Learning, Bari, July 1996
This paper investigates the use of machine learning in order to extract manageable information from this history. More precisely, induction from examples of past trials and errors provides rules discriminating errors from successful trials. Such rules make it possible to estimate a priori the desirability of future trials; this knowledge can support powerful control strategies.
Several strategies of ML-based control are applied to a genetic algorithm, and tested on the Royal Road, a GA-deceptive, and a combinatorial optimization problem. Comparing mutation control with crossover control yields unexpected results.
Proceedings of International Conference on Machine Learning, Bari, July 1996
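The ML-based control of mutation described above can be sketched as follows. The "learner" here is a toy per-position frequency estimate standing in for the induction step of the paper; the veto threshold and probability are assumed values, not the authors' settings.

```python
import random

def controlled_mutation(fitness, n_bits=20, generations=300, seed=1):
    """Toy sketch of ML-based operator control: past trials are logged as
    (flipped position, improved?) examples, and a rule derived from them
    vetoes mutations at historically unsuccessful positions."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    good = [1] * n_bits   # Laplace-smoothed counts of improving flips
    bad = [1] * n_bits    # ... and of worsening flips
    for _ in range(generations):
        pos = rng.randrange(n_bits)
        # control rule: mostly skip positions with a low estimated success rate
        if good[pos] / (good[pos] + bad[pos]) < 0.2 and rng.random() < 0.8:
            continue
        y = x[:]
        y[pos] ^= 1          # trial: flip one bit
        if fitness(y) > fitness(x):
            good[pos] += 1
            x = y
        else:
            bad[pos] += 1
    return x
```

With a OneMax fitness the control rule quickly learns to avoid re-flipping already-set bits, concentrating trials on the remaining positions.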
Computers & Industrial Engineering Journal, vol 30(2), April 1996
in P. J. Angeline and K. E. Kinnear, Jr, Editors, Advances in Genetic Programming II, MIT Press, Cambridge, MA, 1996.
The choice of a representation, i.e. the definition of the search space, is of vital importance in all Evolutionary Optimization processes. In the context of Topological Optimum Design (TOD) in Structural Mechanics, this paper investigates possible representations for evolutionary shape design. The goal is the identification of a shape in R2 having optimal mechanical properties. Evolutionary Computation has been demonstrated to be a valuable tool for TOD problems. However, all past results are based on the straightforward bitstring representation, whose complexity increases with that of the underlying mechanical model. To overcome this difficulty, different representations for shapes are introduced and compared on the benchmark problem of TOD using various evolutionary schemes. The results are discussed with respect to the different degrees of epistasis of the representations.
Presented at the Fifth Annual Conference on Evolutionary Programming, San Diego, March 1996
This paper presents an Evolutionary approach to problems of shape optimization and identification: the aim is to find a partition of a given design domain of the 2D-plane or the 3D-space into two subsets (e.g. material and void for the Optimal Design problems). The central issue of this paper is the representation of such repartition suitable for its evolutionary optimization. Three different representations are introduced and discussed, and comparative experimental results are presented.
In J. Périaux and G. Winter, editors, Genetic Algorithms in Engineering and Computer Sciences, John Wiley, 1995
The relative positions of the antennas of a direction finder are critical to the efficiency and the robustness of the indications of the apparatus. The state of the art in direction finder optimization is limited to direction finders with 3 antennas when classical methods are used, and to naive trial-and-error methods for more complex cases. The use of Genetic Algorithms brings a tremendous advance to this domain, allowing the optimization of direction finders with up to 10 antennas in a reasonable computing time. Moreover, as GAs provide multiple optimal solutions on multi-modal problems, a thorough numerical investigation of all possible optima provides new insight into the underlying optimization problem, finally leading to the derivation of a formula for optimal direction finders.
Presented at GALESIA 95, First IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications, Sheffield, September 1995
In this paper, structural topology optimization is addressed through Genetic Algorithms. A set of designs is evolved following the Darwinian survival-of-the-fittest principle. The standard crossover and mutation operators are tailored to the needs of 2D topology optimization. The genetic algorithm based on these operators is tested on plane stress problems of cantilever plates: the goal is to optimize the weight of the structure under displacement constraints. The main advantage of this approach is that it can both find alternative optimal solutions, as experimentally demonstrated on a problem with multiple solutions, and handle different kinds of mechanical models: some results in elasticity with large displacements are presented. In that case, the nonlinear geometrical effects of the model lead to non-viable solutions, unless some constraints are imposed on the stress field.
Presented at the 21st ASME Design Automation Conference, Boston MA, ASME Press, 1995
The problem of minimizing by genetic algorithms the weight of a composite laminate subjected to various failure constraints is considered. Constraints are accounted for through penalty functions. The amount of penalty for each constraint violation is typically controlled by a penalty parameter that has a crucial influence on the performance of the genetic algorithm. An optimal value of each penalty parameter exists. It is larger than the smallest value of the penalty for which the global optimum is feasible. A generational elitist genetic algorithm is found to be less efficient for laminate optimization than a genetic algorithm with a more conservative selection procedure (a ``superelitist'' algorithm). A segregated genetic algorithm is proposed that uses a double penalty strategy and is superelitist. The segregated genetic algorithm performs as well as the superelitist genetic algorithm for optimal amounts of penalty. In addition, the segregated genetic algorithm is less sensitive to the choice of penalty parameters.
ICGA95, the Sixth International Conference on Genetic Algorithms, Pittsburgh, USA, Morgan Kaufmann, July 1995
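The existence of a critical penalty value described in the abstract above, below which the penalized optimum is infeasible, can be illustrated with a toy two-design example. The weights, violation magnitudes, and the resulting critical value (6.0 here) are made-up numbers, not data from the paper.

```python
def penalized_cost(weight, violations, penalty):
    # Minimization: laminate weight plus penalty-weighted constraint violations.
    return weight + penalty * sum(violations)

# Hypothetical designs: (weight, list of constraint violations)
feasible = (10.0, [0.0])     # heavier, but satisfies all failure constraints
infeasible = (7.0, [0.5])    # lighter, but violates a failure constraint

def best_design(penalty):
    """Return the design with the lowest penalized cost."""
    return min([feasible, infeasible],
               key=lambda d: penalized_cost(*d, penalty))
```

Below the critical penalty (10.0 = 7.0 + p * 0.5, i.e. p = 6.0) the infeasible design wins; above it, the feasible one does, matching the paper's observation that the penalty must exceed the smallest value making the global optimum feasible.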
Analytic chromatography is a physical process whose aim is the separation of the components of a chemical mixture, based on their different affinities for some porous medium through which they are percolated. This paper presents an application of evolutionary recurrent neural net optimization to the identification of the internal law of chromatography. New mutation operators involving the parameters of a single neuron are introduced. Furthermore, the strategy for using the different kinds of mutation takes into account the past history of the neural net at hand. The first results for one- and two-component mixtures demonstrate the basic feasibility of the recurrent neural net approach. A strategy to improve the robustness of the results is presented.
Presented at EP95, the fourth annual conference on Evolutionary Programming, San Diego, USA, MIT Press, March 1995
The control problem of soft-landing a toy lunar module simulation is investigated in the context of neural nets. While traditional supervised back-propagation training is inappropriate for lack of training exemplars, genetic algorithms allow a controller to be evolved without difficulty: Evolution is a form of unsupervised learning. A novelty introduced in this paper is the presentation of additional renormalized inputs to the net; experiments indicate that the presence of such inputs allows precision of control to be attained faster, when learning time is measured by the number of generations for which the GA must run to attain a certain mean performance.
Presented at PPSN94, the third Parallel Problem Solving from Nature conference, Jerusalem, Israel, Springer Verlag, October 1994
Crossover may achieve the fast combination of performant building blocks; but as a counterpart, if a crossover point falls within a newly discovered schema, this schema may simply disappear. We propose to use inductive learning to control such destructive effects of crossover. The idea is to periodically gather examples of crossovers, labelled as "good" or "bad" according to their effects given the current population. From these examples, inductive learning builds rules discriminating good from bad crossovers. This ruleset is then used to control further crossovers: only good crossovers --- according to the rule set --- are performed. Experiments on the Royal Road problem are discussed.
Presented at PPSN94, the third Parallel Problem Solving from Nature conference, Jerusalem, Israel, Springer Verlag, October 1994
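The crossover-control loop described above can be sketched as follows. The "ruleset" is reduced to a toy per-cut-point success statistic, standing in for the induced rules of the paper; the labelling criterion (child at least as fit as both parents) is an assumption made for the sketch.

```python
import random

def controlled_crossover(pop, fitness, cut_stats, rng):
    """Toy sketch: one-point crossover steered by statistics induced from
    past crossover examples. cut_stats[c] = [good, bad] plays the role of
    the learned ruleset: cut points that historically disrupted good
    schemata are sampled less often."""
    n = len(pop[0])
    p1, p2 = rng.sample(pop, 2)
    # choose a cut point with a good historical record
    cuts = list(range(1, n))
    weights = [cut_stats[c][0] / sum(cut_stats[c]) for c in cuts]
    c = rng.choices(cuts, weights=weights)[0]
    child = p1[:c] + p2[c:]
    # label the example and record it for future control decisions
    label_good = fitness(child) >= max(fitness(p1), fitness(p2))
    cut_stats[c][0 if label_good else 1] += 1
    return child
```

Each call both produces a child and adds one labelled example to the statistics, so the bias against destructive cut points sharpens as the run proceeds.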
This paper deals with technical issues relevant to artificial neural net (ANN) training by genetic algorithms. Neural nets have applications ranging from perception to control; in the context of control, achieving great precision is more critical than in pattern recognition or classification tasks. In previous work, the authors have found that when employing genetic search to train a net, both precision and training speed can be greatly enhanced by an input renormalisation technique. In this paper we investigate the automatic tuning of such renormalisation coefficients, as well as the tuning of the slopes of the transfer functions of the individual neurons in the net. Waiting time analysis is presented as an alternative to the classical "mean performance" interpretation of GA experiments. It is felt that it provides a more realistic evaluation of the real-world usefulness of a GA.
Presented at EA94, the first Evolution Artificielle conference, Toulouse, France, Cepadues, September 1994
An induction-based method for retrieving similar cases and/or easily adaptable cases is presented as a three-step process: first, a rule set is learned from a data set; second, a reformulation of the problem domain is derived from this rule set; third, a surface similarity with respect to the reformulated problem turns out to be a structural similarity with respect to the initial representation of the domain. This method achieves some integration between machine learning and case-based reasoning: it uses both compiled knowledge (through the similarity measure and the rule set it is derived from) and instantiated knowledge (through the cases).
Presented at CBR94
We present a bottom-up generalization algorithm which builds the maximally general terms covering a positive example and rejecting negative examples in first-order logic (FOL), i.e., in terms of Version Spaces, the set G. This algorithm is based on rewriting negative examples as constraints upon the generalization of the positive example at hand. The constraint space is partially ordered, inducing a partial order on negative examples; the near-misses defined by Winston can then be formalized in FOL as negative examples minimal with respect to this partial order. As expected, only near-misses are necessary to build the set G. Moreover, the constraints can be used directly to classify further examples.
Presented at ICML94, the International Conference on Machine Learning, New Brunswick, USA, Morgan Kaufmann, July 1994
The precise docking of a truck at a loading dock has been proposed by (Nguyen & Widrow 90) as a benchmark problem for non-linear control by neural-nets. The main difficulty is that back-propagation is not a priori suitable as a learning paradigm, because no set of training vectors is available: It is non-trivial to find solution trajectories that dock the truck from anywhere in the loading yard.
In this paper we show how a genetic algorithm can evolve the weights of a feedforward 3-layer neural net that solves the control problem for a given starting state, achieving a short trajectory from starting point to goal. The fitness of a net in the population is a function of both the nearest position from the goal and the distance travelled. The influence of input data renormalisation on trajectory precision is also discussed.
Presented at ICEC94, the first International Conference on Evolutionary Computation, Orlando, USA, IEEE Press, June 1994
Our concern is building the set G of maximally general terms covering positive examples and rejecting negative examples in propositional logic. Negative examples are represented as constraints on the search space. This representation allows for defining a partial order on the negative examples, and on attributes too. It is shown that only minimal negative examples and minimal attributes need to be considered when building the set G. These results hold in the case of a non-convergent data set. Constraints can be used directly for a polynomial characterization of G. They also allow for detecting erroneous examples in a data set.
Presented at ECML94, the European Conference on Machine Learning, Catania, Italy, April 1994
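For a single positive example over Boolean attributes, the constraint view described above can be sketched as follows: each negative example induces a constraint (the set of attributes where it differs from the positive example), a maximally general term must retain at least one attribute from every constraint, and only inclusion-minimal constraints (the near-misses) matter. The brute-force hitting-set enumeration below is an illustration of the characterization, not the paper's polynomial algorithm.

```python
from itertools import combinations

def g_set(pos, negs):
    """Toy sketch: build the set G for one positive example `pos` and
    negative examples `negs`, all Boolean tuples of equal length.
    Returns the minimal attribute sets to retain, and the minimal
    constraints (near-misses)."""
    n_attr = len(pos)
    # constraint of each negative example: attributes where it differs from pos
    diffs = [frozenset(i for i in range(n_attr) if pos[i] != neg[i])
             for neg in negs]
    # only inclusion-minimal constraints (near-misses) need to be considered
    minimal = [d for d in diffs if not any(e < d for e in diffs)]
    g = []
    for k in range(1, n_attr + 1):   # increasing size guarantees minimality
        for attrs in combinations(range(n_attr), k):
            s = set(attrs)
            # s must hit every constraint, and not contain a smaller solution
            if all(s & d for d in minimal) and not any(h <= s for h in g):
                g.append(s)
    return g, minimal
```

Each attribute set in `g` corresponds to a maximally general term: keep those attributes at their values in `pos`, drop the rest.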
By means of membership functions, measurable information (size expressed in inches, age expressed in years) is converted into linguistic qualifiers (tall, young). It is emphasized that the meaning of linguistic qualifiers is context-dependent and may depend on several attributes. Therefore, the difference between membership functions and fuzzy rules vanishes. A method inspired by inductive learning is presented: it performs the extraction of membership functions and/or fuzzy rules from examples.
in Uncertainty Modelling, Theory and Applications, B. Ayyub and M.M. Gupta Eds, series Machine Intelligence and Pattern Recognition, North Holland, 1994
We present a general method of handling constraints in genetic optimization, based on the Behavioural Memory paradigm. Instead of requiring the problem-dependent design of either repair operators (projection onto the feasible region) or a penalty function (a weighted sum of the constraint violations and the objective function), we sample the feasible region by evolving from an initially random population, successively applying a series of different fitness functions which embody constraint satisfaction. The final step is the optimization of the objective function restricted to the feasible region. The success of the whole process is highly dependent on the genetic diversity maintained during the first steps, ensuring a uniform sampling of the feasible region. This method succeeded on some truss structure optimization problems where the other genetic techniques for handling constraints failed to give good results. Moreover, in some domains, such as the automatic generation of software test data, no other technique can easily be applied, as some constraints are not even computable until others are satisfied.
Presented at ICGA93, the fourth International Conference on Genetic Algorithms, Urbana Champaign, USA, Morgan Kaufmann, July 1993
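The staged schedule of fitness functions described above can be sketched as follows. The inner loop is a minimal steady-state, mutation-only EA chosen for brevity; the tournament size, mutation rate, and one-constraint-per-stage schedule are assumptions for the sketch, not the method's actual operators.

```python
import random

def behavioural_memory(objective, constraints, n_bits=16, pop_size=30,
                       steps=200, seed=0):
    """Toy sketch of the Behavioural Memory paradigm: the population is
    evolved through a schedule of fitness functions, each stage adding
    one constraint, before the objective is optimized over the
    individuals sampling the feasible region."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def evolve(fit):
        for _ in range(steps):
            parent = max(rng.sample(pop, 2), key=fit)   # binary tournament
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]
            worst = min(range(pop_size), key=lambda i: fit(pop[i]))
            pop[worst] = child                          # steady-state replace

    # stage k rewards satisfying the first k constraints
    for k in range(1, len(constraints) + 1):
        evolve(lambda x: sum(bool(c(x)) for c in constraints[:k]))
    # final stage: the objective, restricted to feasible individuals
    feasible = [x for x in pop if all(c(x) for c in constraints)]
    return max(feasible or pop, key=objective)
```

Because each stage's fitness only counts constraint satisfaction, the population drifts into the feasible region before the objective is ever consulted, which is the core of the paradigm.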