Below are quick presentations of my five most significant publications in the period 2002-2008, in chronological order.

**Universal Consistency and Bloat in GP.**

Sylvain Gelly, Olivier Teytaud, Nicolas Bredeche, and Marc Schoenauer.

*Revue d'Intelligence Artificielle*, 20:805-827, 2006.

`http://hal.inria.fr/inria-00112840/en/`

This paper studies two issues in Genetic Programming. The first is the *Universal Consistency* (UC) property, namely the convergence toward the optimal solution when the number of examples goes to infinity. The second issue is that of *bloat*, i.e. uncontrolled code size growth. The paper proves that: 1) UC is obtained conditionally on regularisation, e.g. adding some parsimony pressure to the fitness function; 2) when the number of examples goes to infinity, bloat occurs if the target solution does not belong to the search space, and can occur even when it does, in the absence of regularisation; 3) a specific regularisation term can be devised that enforces UC while preventing bloat.

**Robust Multi-Cellular Developmental Design.**

Alexandre Devert, Nicolas Bredèche, and Marc Schoenauer.

In D. Thierens et al., editor, *Proc. GECCO'07*, pages 982-989. ACM Press, 2007.

`http://hal.inria.fr/inria-00145336/en/`

`http://hal.inria.fr/inria-00174593/en/`

The main finding of this paper concerns the stopping criterion involved in the fixed-point based fitness computation. It is shown that a stabilization-based stopping criterion is significantly more efficient than the standard criterion based on a fixed number of iterations; further, this new criterion leads to remarkably stable solutions, displaying outstanding self-healing capabilities with respect to lesions.

**Ensemble Learning for Free with Evolutionary Algorithms?**

Christian Gagné, Michèle Sebag, Marc Schoenauer, and Marco Tomassini.

In Dirk Thierens et al., editor, *GECCO*, pages 1782-1789. ACM SIGEVO, ACM, 2007.

`http://hal.inria.fr/inria-00144010/en/`

**Divide-and-Evolve: a Sequential Hybridization Strategy using Evolutionary Algorithms.**

Marc Schoenauer, Pierre Savéant, and Vincent Vidal.

In Z. Michalewicz and P. Siarry, editors, *Advances in Metaheuristics for Hard Optimization*, Natural Computing Series, pages 179-198. Springer Verlag, 2007.

`http://hal.inria.fr/inria-00176967/en/`

The idea of the *Divide and Evolve* (DAE) algorithm is to decompose a complex planning problem into a sequence of hopefully simpler planning problems, each of which can be solved by an embedded planner. A plan for the initial problem is then obtained by concatenating all intermediate plans. The approach is validated on classical benchmarks, and indeed DAE can solve problems that the embedded direct planner CPT cannot solve alone. Further comparative results have been obtained since then (work submitted, see `http://hal.inria.fr/inria-00322880/en/`).

**Extreme Value Based Adaptive Operator Selection.**

Álvaro Fialho, Luis Da Costa, Marc Schoenauer, and Michèle Sebag.

In *10th International Conference on Parallel Problem Solving From Nature (PPSN X)*, pages 175-184, September 2008.

`http://hal.inria.fr/inria-00287355/en/`

This paper follows an earlier one (`http://hal.inria.fr/inria-00278542/en/`), concerned with Adaptive Operator Selection (AOS). That earlier paper focused on tracking the best operator using dynamic Multi-Armed Bandit algorithms; the approach was successfully compared to other AOS mechanisms from the literature in an artificial setting, considering well-specified dynamic rewards for the operators. The present paper tackles the other issue in Adaptive Operator Selection, namely characterizing an operator's contribution in terms of a (scalar) reward. The contribution of the paper is an extreme-value based reward: the reward of an operator is the extreme (maximal) fitness improvement observed in a time window, as opposed to the average improvement.
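The extreme-value reward can be sketched in a few lines of Python. This is a minimal illustration only, not the paper's implementation: it assumes each operator keeps a sliding window of its recent fitness improvements and is rewarded with the *maximum* improvement in that window, and it pairs the reward with a plain epsilon-greedy choice (the paper couples it with Multi-Armed Bandit selection instead); the class name and all parameters are illustrative.

```python
from collections import deque
import random


class ExtremeValueAOS:
    """Sketch of an extreme-value reward for Adaptive Operator Selection."""

    def __init__(self, n_ops, window=10):
        # One sliding window of recent fitness improvements per operator.
        self.windows = [deque(maxlen=window) for _ in range(n_ops)]

    def update(self, op, improvement):
        # Only non-negative improvements count; deteriorations contribute 0.
        self.windows[op].append(max(0.0, improvement))

    def reward(self, op):
        # Extreme-value reward: the *best* improvement seen in the window,
        # as opposed to the average improvement.
        w = self.windows[op]
        return max(w) if w else 0.0

    def select(self, eps=0.1):
        # Illustrative epsilon-greedy selection on the extreme-value rewards.
        if random.random() < eps:
            return random.randrange(len(self.windows))
        rewards = [self.reward(i) for i in range(len(self.windows))]
        return max(range(len(rewards)), key=rewards.__getitem__)


aos = ExtremeValueAOS(n_ops=2, window=5)
# Operator 0 yields steady small improvements; operator 1 rare large jumps.
for delta in [0.1, 0.1, 0.1]:
    aos.update(0, delta)
for delta in [0.0, 0.0, 2.0]:
    aos.update(1, delta)
print(aos.reward(0))  # 0.1
print(aos.reward(1))  # 2.0 -- the rare large jump dominates the reward
```

Under an average-based reward, operator 0 would look better here (mean 0.1 vs about 0.67); the extreme-value reward instead favours operator 1, whose occasional large jumps are what actually drive search progress.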