Faculty habilitation of Caroline Appert
Group: Human-Centered Computing

Defended on 26/06/2017, committee:
- Michel Beaudouin-Lafon, Professor, Université Paris-Sud, France
- Stephen Brewster, Professor, University of Glasgow, Scotland
- Géry Casiez, Professor, Université Lille 1, Lille, France
- Andy Cockburn, Professor, University of Canterbury, New Zealand
- Jean-Claude Martin, Professor, Université Paris-Sud, France
- Laurence Nigay, Professor, Université Grenoble Alpes, Grenoble, France
- Shumin Zhai, Senior Staff Research Scientist, Google, Mountain View, CA, USA

Abstract:
Optimizing the bandwidth of the communication channel between users and the system is fundamental to designing efficient interactive systems. Apart from speech-based interfaces that rely on users' natural language, this entails designing an efficient language that users can adopt and that the system can understand. My research has focused on studying and optimizing the following two types of language: interfaces that allow users to trigger actions through the direct manipulation of on-screen objects, and interactive systems that allow users to invoke commands by performing specific movements. Direct manipulation requires encoding most information in the graphical representation, relying mostly on users' ability to recognize visual elements; gesture-based interaction, by contrast, interprets the shape and dynamics of users' movements, relying mostly on users' ability to recall specific movements. I will present my main research projects on these two types of language, and discuss how we can increase the efficiency of interactive systems that make use of them. With direct manipulation, achieving high expressive power and a good level of usability depends on the interface's ability to accommodate large graphical scenes while enabling easy selection and manipulation of objects in the scene. With gestures, it depends on the number of different gestures in the system's vocabulary, as well as on the simplicity of those gestures, which should remain easy to learn and execute. I will conclude with directions for future work on interaction with tangible objects.
