WILD - projects

Here are some of the research projects developed in the context of the WILD room.

Remote interaction


We design and test a number of interaction techniques for manipulating content on the wall display at a distance. So far we have studied pointing and navigation (pan-and-zoom).

We have developed and compared several dual-mode techniques to point at a distance with very high precision: first, the user moves the cursor in the vicinity of the target, using a laser-like pointing technique; then, the user switches mode to activate precision pointing. We have shown that we can efficiently and reliably point at targets as small as 2mm from several meters away from the screen.
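The sketch below illustrates the general structure of such a dual-mode technique: a coarse, laser-like absolute phase followed by a low-gain relative phase activated by an explicit mode switch. It is a hypothetical sketch, not the specific techniques evaluated in our study, and the gain value is purely illustrative.

from dataclasses import dataclass

@dataclass
class DualModePointer:
    # Illustrative parameter: motion scaling applied in precision mode.
    precision_gain: float = 0.1
    precise: bool = False   # current mode
    x: float = 0.0          # cursor position on the wall (mm)
    y: float = 0.0

    def point_coarse(self, ray_x: float, ray_y: float) -> None:
        """Laser-like mode: the cursor jumps to where the hand-held ray hits the wall."""
        if not self.precise:
            self.x, self.y = ray_x, ray_y

    def toggle_precision(self) -> None:
        """Mode switch (e.g. a button press): freeze absolute pointing."""
        self.precise = not self.precise

    def point_precise(self, dx: float, dy: float) -> None:
        """Precision mode: relative motion is scaled down so very small targets stay reachable."""
        if self.precise:
            self.x += dx * self.precision_gain
            self.y += dy * self.precision_gain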

We have also developed and compared a number of techniques to pan and zoom content on the wall display. We tested 12 techniques by combining three factors: unimanual vs. bimanual control, linear vs. rotational motion, and free-hand vs. 2D-constrained vs. 1D-constrained movements. A paper published at the ACM CHI 2011 conference (see Publications) describes this work.
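As an illustration, the 12 conditions are simply the cross-product of the three factors (2 x 2 x 3 = 12); the short sketch below enumerates them. The labels are ours, not the exact condition names used in the paper.

from itertools import product

hands = ["unimanual", "bimanual"]
motions = ["linear", "rotational"]
guidance = ["free-hand", "2D-constrained", "1D-constrained"]

# Cross-product of the three factors gives the 12 tested techniques.
techniques = [" / ".join(t) for t in product(hands, motions, guidance)]
assert len(techniques) == 12
for name in techniques:
    print(name)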

Multiscale interfaces


Even though the WILD wall display is very large and has an ultra-high resolution, it is often necessary to display images larger than the wall. This is the case, for example, with astronomy imagery, gigapixel images and maps, as well as very large graphs and tables.

The first issue to address to support multiscale interaction is how to display large imagery on the WILD wall in real time. We have developed a version of our ZVTM toolkit (see the Software page) that runs on the WILD cluster: each machine runs a replica that displays one part of the overall image. Through caching and other techniques, we achieve smooth panning and zooming of extremely large images, such as a 27-gigapixel image of Paris or a 400,000-pixel-wide image of the center of the galaxy.
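As a rough illustration of how replicas split the work, the sketch below computes which region of the virtual scene each tile of the wall shows, given a pan and zoom shared by all machines. The grid size and tile resolution in the example are hypothetical and do not describe the actual WILD configuration or the ZVTM implementation.

def tile_viewport(col: int, row: int, tile_w: int, tile_h: int,
                  pan_x: float, pan_y: float, zoom: float):
    """Return the region of the virtual scene shown by tile (col, row).

    pan_x/pan_y: scene coordinates at the top-left corner of the whole wall.
    zoom: scene units per screen pixel (identical on every replica).
    """
    x = pan_x + col * tile_w * zoom
    y = pan_y + row * tile_h * zoom
    return (x, y, tile_w * zoom, tile_h * zoom)

# Example: a hypothetical 8x4 grid of 2560x1600 tiles, with the same
# pan/zoom state broadcast to all replicas.
regions = [tile_viewport(c, r, 2560, 1600, pan_x=0.0, pan_y=0.0, zoom=4.0)
           for r in range(4) for c in range(8)]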

The second issue is how to interact with such imagery. As described above, we have tested a variety of techniques for panning and zooming. We are also looking at other interactions with the content, such as using magnifying lenses and applying filters to the images and/or the underlying data.
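For instance, a basic flat-top magnifying lens can be expressed as a per-pixel remapping of screen coordinates around a focus point. The sketch below shows that general idea only; it is not the toolkit's actual lens code, and the radius and magnification values are illustrative.

import math

def lens_sample(px, py, focus_x, focus_y, radius=150.0, magnification=4.0):
    """Return the scene coordinate to sample for screen pixel (px, py)."""
    dx, dy = px - focus_x, py - focus_y
    if math.hypot(dx, dy) <= radius:
        # Inside the lens: shrink the offset so a smaller scene area
        # fills the lens, i.e. the content appears magnified.
        return focus_x + dx / magnification, focus_y + dy / magnification
    # Outside the lens: the context is shown at the normal scale.
    return px, py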

Multisurface applications


We explore interactions that involve multiple input-output surfaces (wall display, multitouch table, personal devices) and multiple users. This involves creating a software architecture to support interaction that is distributed among multiple computers, and developing the interaction techniques themselves.

We have developed the Substance middleware to support multisurface applications (see the Videos, Software and Publications pages). Substance uses a data-oriented programming model that separates data from behavior, and a sharing model that supports sharing of data and/or behavior. We have used Substance to create applications on the WILD platform with over 30 processes running on the cluster, front-end computers, mobile devices and users' laptops.
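The sketch below illustrates the general idea of separating data from behavior: plain data nodes hold state, while behaviors are attached to them from the outside and could therefore be shared or replaced independently of the data. All names are hypothetical and do not reflect the actual Substance API.

class Node:
    """Pure data: a named node with attributes and children, no behavior of its own."""
    def __init__(self, name, **attrs):
        self.name = name
        self.attrs = dict(attrs)
        self.children = []

class Behavior:
    """Behavior lives outside the data and is attached to nodes at run time."""
    def attach(self, node: Node) -> None:
        raise NotImplementedError

class Draggable(Behavior):
    def attach(self, node: Node) -> None:
        # Ensure the data carries the state this behavior operates on.
        node.attrs.setdefault("x", 0.0)
        node.attrs.setdefault("y", 0.0)

    def drag(self, node: Node, dx: float, dy: float) -> None:
        node.attrs["x"] += dx
        node.attrs["y"] += dy

# One process might hold the data while another supplies the behavior.
photo = Node("photo", src="paris.jpg")
dragging = Draggable()
dragging.attach(photo)
dragging.drag(photo, 10.0, 5.0)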

We have explored multisurface interaction with two applications built with Substance. SubstanceCanvas supports the display of arbitrary images and documents from a variety of sources. The user can move, scale and rotate them directly on the multitouch table or through a touch-enabled mobile device such as an iPod Touch or iPad. Multiple users can manipulate objects in parallel. SubstanceGrise displays an array of 3D brain scans for use by neuroanatomists. The brains can be organized on the wall by swapping them with a pointing device or through the multitouch table. They can be oriented in 3D using a plastic model of a brain.
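As an illustration of the kind of direct manipulation involved, the sketch below derives a translation, uniform scale and rotation from how a pair of touch points moves between two frames, which is the standard way multitouch canvases implement move/scale/rotate. It is not code from SubstanceCanvas itself.

import math

def two_finger_transform(p1, p2, q1, q2):
    """Map a touch pair moving from (p1, p2) to (q1, q2) onto (translation, scale, rotation)."""
    # Translation: displacement of the centroid of the two touches.
    cx0, cy0 = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    cx1, cy1 = (q1[0] + q2[0]) / 2, (q1[1] + q2[1]) / 2
    translation = (cx1 - cx0, cy1 - cy0)

    # Scale: ratio of the distances between the two touches.
    d0 = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    d1 = math.hypot(q2[0] - q1[0], q2[1] - q1[1])
    scale = d1 / d0 if d0 else 1.0

    # Rotation: change in the angle of the segment joining the touches.
    a0 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a1 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    rotation = a1 - a0
    return translation, scale, rotation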