The behaviour of Genetic Programming for symbolic regression problems is studied within the Statistical Learning framework. The first issue concerns the Universal Consistency (UC) property, namely convergence toward the optimal solution as the number of examples goes to infinity. The second issue is that of bloat, i.e. uncontrolled code-size growth. The paper proves that: 1- UC is obtained conditional on regularisation, e.g. adding some parsimony pressure to the fitness function; 2- as the number of examples goes to infinity, bloat occurs if the target solution does not belong to the search space, and bloat can occur even if the target solution belongs to the search space in the absence of regularisation; 3- a specific regularisation term is proposed, enforcing UC while preventing bloat.
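The parsimony-pressure idea above can be illustrated by a minimal sketch; the penalty coefficient `alpha` and the size measure are illustrative stand-ins, not the paper's exact regularisation term:

```python
def regularised_fitness(errors, size, alpha=0.01):
    """Mean squared error plus a penalty proportional to expression size.

    'size' would typically be the number of nodes in the GP tree;
    'alpha' controls the parsimony pressure (hypothetical value here).
    """
    mse = sum(e * e for e in errors) / len(errors)
    return mse + alpha * size

# Two candidate expressions with equal prediction error on the examples:
# the smaller expression gets the better (lower) regularised fitness,
# which is what counteracts bloat.
f_small = regularised_fitness([0.1, -0.1], size=5)
f_large = regularised_fitness([0.1, -0.1], size=50)
```

Under such a penalty, among equally accurate expressions the selection pressure favours the shortest one, which is the mechanism the paper analyses for both UC and bloat control.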
This paper tackles the ``flag'' problem: reaching a target 2D pattern through iterated local interaction rules among the pixels. The interaction rules are implemented as Neural Networks, using as optimization engine the well-known NEAT (NeuroEvolution of Augmenting Topologies), which evolves both the topology and the weights of the network (it will be replaced by an evolutionarily optimized ESN in further work published at EA'07, http://hal.inria.fr/inria-00174593/en/). The main finding of this paper concerns the stopping criterion involved in the fixed-point-based fitness computation. It is shown that a stabilization-based stopping criterion is significantly more efficient than the standard criterion based on a fixed number of iterations; further, this new criterion leads to remarkably stable solutions, displaying outstanding self-healing capabilities with respect to lesions.
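The stabilization-based stopping criterion can be sketched as follows; the contraction threshold `eps` and the flat list representation of the grid are assumptions for illustration, not the paper's setup:

```python
def iterate_to_fixpoint(state, update, eps=1e-6, max_iter=1000):
    """Iterate local update rules until the grid stabilises.

    Stops when the largest cell-wise change falls below 'eps'
    (stabilization-based criterion), instead of always running a
    fixed number of iterations. Returns the final state and the
    number of iterations actually performed.
    """
    for step in range(max_iter):
        new_state = update(state)
        delta = max(abs(a - b) for a, b in zip(new_state, state))
        state = new_state
        if delta < eps:
            return state, step + 1
    return state, max_iter

# Toy update rule contracting every cell toward 0.5: the iteration
# halts as soon as the pattern has stabilised.
state, steps = iterate_to_fixpoint(
    [0.0, 1.0], lambda s: [0.5 + 0.5 * (x - 0.5) for x in s])
```

Stopping on stabilisation rather than on an iteration budget is also what makes the evolved solutions robust: a lesioned pattern is simply re-iterated until it stabilises again.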
Ensemble learning, which builds and aggregates a population of hypotheses, is among the most efficient approaches in Machine Learning. As EC-based learning also produces a great many hypotheses along evolution, the goal of Evolutionary Ensemble Learning is to adapt EC to the direct production of an ensemble of hypotheses. The diversity of the ensemble is obtained through a co-evolution mechanism; further, a margin-based criterion is used to extract the best ensemble, either along evolution or from the final population.
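A minimal sketch of a margin-based criterion for comparing candidate ensembles, assuming binary classifiers with labels in {-1, +1} and majority voting; the exact margin definition used in the paper may differ:

```python
def ensemble_margin(votes, label):
    """Voting margin of an ensemble on one example:
    (correct votes - wrong votes) / total votes, in [-1, 1]."""
    correct = sum(1 for v in votes if v == label)
    return (2 * correct - len(votes)) / len(votes)

def mean_margin(hypotheses, examples):
    """Average voting margin of an ensemble over labelled examples;
    a higher mean margin indicates a more confident, better ensemble."""
    total = 0.0
    for x, y in examples:
        votes = [h(x) for h in hypotheses]
        total += ensemble_margin(votes, y)
    return total / len(examples)
```

Extracting the best ensemble then amounts to ranking candidate subsets of the evolved hypotheses by `mean_margin`, whether the candidates are gathered along evolution or taken from the final population.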
A memetic algorithm is proposed for temporal planning problems, hybridizing an evolutionary algorithm and a direct solver. The principle of the Divide and Evolve algorithm is to decompose a complex problem into a sequence of hopefully simpler planning problems, each of which can be solved by the solver. A plan for the initial problem is then obtained by concatenating all intermediate plans. The approach is validated on classical benchmarks, and DAE indeed solves problems that the embedded direct planner CPT cannot solve. Further comparative results have been obtained since then (work submitted, see http://hal.inria.fr/inria-00322880/en/).
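The decompose-solve-concatenate principle can be sketched as below; `solve` is a hypothetical stand-in for the embedded planner (CPT in the paper), and in the actual algorithm the sequence of intermediate states is what the evolutionary algorithm evolves:

```python
def divide_and_evolve(initial, goal, waypoints, solve):
    """Chain an external planner through intermediate states.

    'waypoints' is one candidate decomposition (one individual of the EA);
    'solve(a, b)' returns a partial plan (list of actions) from state a
    to state b, or None if it fails. The global plan is the concatenation
    of the partial plans; an infeasible decomposition yields None.
    """
    plan = []
    current = initial
    for target in waypoints + [goal]:
        partial = solve(current, target)
        if partial is None:
            return None  # this decomposition is penalised by the EA
        plan.extend(partial)
        current = target
    return plan

# Toy planner: states are integers, and the solver only manages
# short hops, so the full problem needs a good decomposition.
def toy_solve(a, b):
    return ["move {}->{}".format(a, b)] if abs(b - a) <= 2 else None
```

With `toy_solve`, going from 0 to 6 directly fails, while the decomposition through waypoints [2, 4] succeeds, mirroring how DAE lets a limited planner reach targets it cannot reach in one shot.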
This paper follows up on earlier work published at GECCO'08 (http://hal.inria.fr/inria-00278542/en/) concerned with Adaptive Operator Selection. That earlier paper focused on tracking the best operator using dynamic Multi-Armed Bandit algorithms; the approach compared favourably to other AOS mechanisms from the literature in an artificial setting with well-specified dynamic rewards for the operators. The present paper tackles the other issue in Adaptive Operator Selection, namely characterizing an operator's contribution as a (scalar) reward. The contribution of the paper is to devise an extreme-value-based reward, considering the extreme fitness improvement within a time window, as opposed to the average improvement.
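The extreme-value reward can be sketched as a sliding window of recent fitness improvements from which the maximum, rather than the mean, is reported; the window size and the clipping of deleterious applications to zero are illustrative assumptions:

```python
from collections import deque

class ExtremeReward:
    """Credit an operator with the largest fitness improvement observed
    over its last 'window' applications, instead of the average.

    Rare but large improvements, which average-based rewards dilute,
    dominate this reward signal.
    """
    def __init__(self, window=50):
        self.improvements = deque(maxlen=window)

    def update(self, delta):
        """Record one application's fitness improvement (clipped at 0)."""
        self.improvements.append(max(delta, 0.0))

    def reward(self):
        """Extreme-value reward: max improvement in the current window."""
        return max(self.improvements) if self.improvements else 0.0
```

Such a reward is then fed to the operator-selection mechanism (e.g. the bandit-based AOS of the earlier paper), so that an operator producing occasional large jumps keeps being selected.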