Selected Publications

Marc Schoenauer

Below are quick presentations of my 5 most significant publications in the period 2002-2008, in chronological order.

  1. Universal Consistency and Bloat in GP.
    Sylvain Gelly, Olivier Teytaud, Nicolas Bredeche, and Marc Schoenauer.
    Revue d'Intelligence Artificielle, 20:805-827, 2006.
    http://hal.inria.fr/inria-00112840/en/

    The behaviour of Genetic Programming on symbolic regression problems is studied within the Statistical Learning framework. A first issue is the Universal Consistency (UC) property, namely convergence toward the optimal solution as the number of examples goes to infinity. The second issue is that of bloat, i.e. uncontrolled code-size growth. The paper proves that (1) UC is obtained provided some regularisation is used, e.g. adding some parsimony pressure to the fitness function; (2) as the number of examples goes to infinity, bloat occurs if the target solution does not belong to the search space, and can occur even if it does, in the absence of regularisation; (3) a specific regularisation term can be devised that both enforces UC and prevents bloat.
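
    The parsimony-pressure idea can be sketched as follows. This is a minimal illustration, not the specific regularisation term derived in the paper: the weight `lam`, the `mse` error measure, and the toy program representation are all hypothetical placeholders.

```python
def mse(program, examples):
    """Mean squared error of a candidate program on the training examples."""
    return sum((program(x) - y) ** 2 for x, y in examples) / len(examples)

def regularised_fitness(program, examples, size, lam=0.01):
    """Parsimony-pressure fitness: empirical error plus a size penalty.

    `lam` weights the penalty on program size; larger values push
    evolution toward smaller programs, counteracting bloat.
    """
    return mse(program, examples) + lam * size
```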

  2. Robust Multi-Cellular Developmental Design.
    Alexandre Devert, Nicolas Bredèche, and Marc Schoenauer.
    In D. Thierens et al., editor, Proc. GECCO'07, pages 982-989. ACM Press, 2007.
    http://hal.inria.fr/inria-00145336/en/

    This paper tackles the "flag" problem: arriving at a target 2D pattern by iterating local interaction rules among the pixels. The interaction rules are implemented as Neural Networks, using as optimization engine the well-known NEAT (NeuroEvolution of Augmenting Topologies) algorithm, which evolves both the topology and the weights of the network (NEAT was later replaced by an evolutionarily-optimized ESN in further work published at EA'07, http://hal.inria.fr/inria-00174593/en/). The main finding concerns the stopping criterion involved in the fixed-point-based fitness computation: a stabilization-based stopping criterion is shown to be significantly more efficient than the standard criterion based on a fixed number of iterations; further, this new criterion leads to remarkably stable solutions, displaying outstanding self-healing capabilities with respect to lesions.
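
    The stabilization-based criterion amounts to iterating the local rules until the pattern stops changing, rather than for a fixed number of steps. A minimal sketch, with states modelled as lists of floats and hypothetical `eps` and `max_iters` settings:

```python
def iterate_to_fixpoint(update, state, eps=1e-6, max_iters=1000):
    """Iterate `update` on `state` until the per-step change falls
    below `eps` (stabilization) or `max_iters` is reached.

    Returns the (approximately) stabilized state.
    """
    for _ in range(max_iters):
        new_state = update(state)
        change = max(abs(a - b) for a, b in zip(new_state, state))
        state = new_state
        if change < eps:
            break  # the pattern has stabilized
    return state
```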

  3. Ensemble Learning for Free with Evolutionary Algorithms?
    Christian Gagné, Michèle Sebag, Marc Schoenauer, and Marco Tomassini.
    In Dirk Thierens et al., editor, GECCO, pages 1782-1789. ACM SIGEVO, ACM, 2007.
    http://hal.inria.fr/inria-00144010/en/

    Ensemble learning, building and aggregating a population of hypotheses, is among the most efficient approaches in Machine Learning. As EC-based learning also produces a great many hypotheses during evolution, the goal of Evolutionary Ensemble Learning is to adapt EC to the direct production of an ensemble of hypotheses. The diversity of the ensemble is obtained through a co-evolution mechanism; further, a margin-based criterion is used to extract the best ensemble, either during evolution or from the final population.
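
    The margin of a voting ensemble on an example can be sketched as below. This is the standard majority-vote margin, shown for illustration; the exact criterion used in the paper may differ.

```python
def margin(ensemble, x, y):
    """Voting margin of an ensemble of classifiers on example (x, y):
    the fraction of correct votes minus the largest fraction of votes
    for any wrong class. Positive means a correct majority vote."""
    votes = {}
    for h in ensemble:
        pred = h(x)
        votes[pred] = votes.get(pred, 0) + 1
    correct = votes.get(y, 0)
    wrong = max((v for c, v in votes.items() if c != y), default=0)
    return (correct - wrong) / len(ensemble)

def min_margin(ensemble, examples):
    """A margin-based selection criterion: worst-case margin over a set."""
    return min(margin(ensemble, x, y) for x, y in examples)
```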

  4. Divide-and-Evolve: a Sequential Hybridization Strategy using Evolutionary Algorithms.
    Marc Schoenauer, Pierre Savéant, and Vincent Vidal.
    In Z. Michalewicz and P. Siarry, editors, Advances in Metaheuristics for Hard Optimization, Natural Computing Series, pages 179-198. Springer Verlag, 2007.
    http://hal.inria.fr/inria-00176967/en/

    A memetic algorithm is proposed for temporal planning problems, hybridizing an evolutionary algorithm with a direct solver. The principle of the Divide-and-Evolve algorithm is to decompose a complex problem into a sequence of hopefully simpler planning problems, each of which can be solved by the solver. A plan for the initial problem is then obtained by concatenating all intermediate plans. The approach is validated on classical benchmarks, and DAE can indeed solve problems that the embedded direct planner CPT cannot solve alone. Further comparative results have been obtained since then (work submitted, see http://hal.inria.fr/inria-00322880/en/).
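
    The decompose-solve-concatenate principle can be sketched as follows, with `solve` standing for the embedded planner (CPT in the paper) and `waypoints` for an evolved sequence of intermediate states; both names and the state representation are illustrative assumptions.

```python
def divide_and_evolve(solve, initial, goal, waypoints):
    """Chain intermediate states initial -> s1 -> ... -> sk -> goal,
    calling the embedded solver on each (hopefully easier) subproblem
    and concatenating the resulting plans.

    `solve(a, b)` returns a plan (list of actions) from state a to
    state b, or None on failure.
    """
    plan = []
    states = [initial] + list(waypoints) + [goal]
    for a, b in zip(states, states[1:]):
        sub_plan = solve(a, b)
        if sub_plan is None:
            return None  # subproblem unsolvable: infeasible individual
        plan.extend(sub_plan)
    return plan
```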

  5. Extreme Value Based Adaptive Operator Selection.
    Álvaro Fialho, Luis Da Costa, Marc Schoenauer, and Michèle Sebag.
    In Proc. 10th International Conference on Parallel Problem Solving From Nature (PPSN X), pages 175-184, September 2008.
    http://hal.inria.fr/inria-00287355/en/

    This paper extends earlier work published at GECCO'08 (http://hal.inria.fr/inria-00278542/en/) on Adaptive Operator Selection (AOS). The earlier paper focused on tracking the best operator using dynamic Multi-Armed Bandit algorithms; this approach compared favourably to other AOS mechanisms from the literature in an artificial setting with well-specified dynamic rewards for the operators. The present paper tackles the other issue in Adaptive Operator Selection, namely characterizing an operator's contribution as a (scalar) reward. Its contribution is an extreme-value-based reward, considering the extreme fitness improvement over a time window, as opposed to the average improvement.
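
    The extreme-value reward can be sketched as below: keep a sliding window of an operator's recent fitness improvements and reward it with the maximum over that window rather than the mean. The window length is a hypothetical setting, not the paper's tuned value.

```python
from collections import deque

def make_extreme_reward(window=50):
    """Credit-assignment sketch for AOS: the reward of an operator is
    the maximum fitness improvement it achieved over its last
    `window` applications (extreme value), not the average."""
    improvements = deque(maxlen=window)

    def reward(improvement):
        improvements.append(max(improvement, 0.0))  # ignore deteriorations
        return max(improvements)

    return reward
```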