Major result: BEST STUDENT PAPER AWARD
22 February 2013

Wang Chen and Pablo Adasme received the Best Student Paper Award at ICORES 2013.
In this paper, we propose a distributionally robust model for a (0-1) stochastic quadratic bi-level programming problem. For this purpose, we first transform the stochastic bi-level problem into an equivalent deterministic formulation. We then use this formulation to derive a bi-level distributionally robust model, which accounts for the set of all possible distributions of the input random parameters. Finally, we transform both the deterministic and the distributionally robust models into single-level optimization problems, which allows us to compare the optimal solutions of the proposed models. Our preliminary numerical results indicate that only slightly conservative solutions are obtained when the number of binary variables in the upper-level problem is larger than the number of variables in the follower.
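To make the bi-level structure and the single-level reduction concrete, here is a minimal sketch. All objectives, coefficients, scenarios, and the ambiguity set below are made up for illustration; this is not the paper's model. For a toy (0-1) problem, the single-level reformulation can be done exactly by enumeration: plug the follower's optimal response into the leader's objective, then optimize over the leader's binary choices. A distributionally robust variant is shown the same way, with the leader maximizing the worst-case expectation over a small finite ambiguity set of distributions.

```python
from itertools import product

# Toy (0-1) bi-level problem (illustrative only, not the paper's model).

def leader_obj(x, y):
    # Upper-level payoff; depends on both decision vectors.
    return 3 * x[0] + 2 * x[1] + y[0] - y[1]

def follower_obj(x, y):
    # Lower-level payoff for a fixed leader decision x.
    return (x[0] + x[1]) * y[0] + (1 - x[0]) * y[1]

def follower_best_response(x):
    # The follower solves its own 0-1 problem given x.
    return max(product((0, 1), repeat=2), key=lambda y: follower_obj(x, y))

# Single-level reduction by enumeration: substitute the follower's optimal
# response, then optimize the leader's binary variables.
x_star = max(product((0, 1), repeat=2),
             key=lambda x: leader_obj(x, follower_best_response(x)))
value = leader_obj(x_star, follower_best_response(x_star))
print(x_star, value)  # → (1, 1) 6

# Distributionally robust variant (again a toy): the leader's coefficients
# are random, the distribution is only known to lie in a small ambiguity
# set, and the leader optimizes the worst-case expected payoff.
scenarios = ((3, 2), (1, 1))          # possible coefficient vectors
ambiguity = ((0.5, 0.5), (0.8, 0.2))  # candidate distributions

def expected_leader_obj(x, y, p):
    return sum(pi * (c0 * x[0] + c1 * x[1])
               for pi, (c0, c1) in zip(p, scenarios)) + y[0] - y[1]

def robust_value(x):
    y = follower_best_response(x)
    return min(expected_leader_obj(x, y, p) for p in ambiguity)

robust_x = max(product((0, 1), repeat=2), key=robust_value)
print(robust_x, robust_value(robust_x))  # → (1, 1) 4.5
```

Enumeration is exact here only because both levels are tiny binary problems; the paper instead derives algebraic single-level reformulations that scale to larger instances.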


