2024-2025 Evaluation campaign - Group E

IaH department - Human-Centered Interaction

Portfolio of team AViz
Visual Analytics

Book: Mobile Data Visualization

Bongshin Lee, Raimund Dachselt, Petra Isenberg, Eun Kyoung Choe. Mobile Data Visualization. Chapman and Hall/CRC, pp.346, 2021. DOI: 10.1201/9781003090823 - HAL: hal-03457381

Context

This book is an outcome of a Dagstuhl seminar co-organized by Aviz members. It brought together a large community of international researchers around our major research axis “Physicality in Input and Output.” Aviz members contributed to several of the book's chapters and edited the entire volume.

Contribution

Examples of mobile devices: a smartphone and a tablet.

Over the past few decades, the visualization research community has conducted extensive research, designing and developing a large number of visualization techniques and systems, mostly for desktop environments. However, the accumulated knowledge may not transfer readily to mobile devices because of fundamental differences in display size, interaction, and target audience, among others. The small display of mobile devices is more vulnerable to scalability issues and poses a well-known challenge, the fat-finger problem. Mouse-over interaction, which is prevalent in interactive visualization systems on the desktop, is not available on mobile devices. While traditional visualizations mainly target data-savvy groups such as scientists and researchers, visualizations on mobile devices must account for a broader range of audiences, including lay people who may have low data and visualization literacy.

Impact

This book is the first book on mobile data visualization. It has inspired a community of researchers to continue working on the topic. Google Scholar does not count citations for the book itself but for its individual chapters; summing the citation counts of the chapters yields 60 citations in total since 2021 (as of May 2024).

We have presented work related to the contents of the book in invited talks at the University of Bergen and the University of Bremen, and will present it in a keynote at the British Computer Graphics & Visual Computing (cgvc.org.uk) conference in September 2024.

Article: Perception! Immersion! Empowerment! Superpowers as Inspiration for Visualization

Wesley Willett, Bon Adriel Aseniero, Sheelagh Carpendale, Pierre Dragicevic, Yvonne Jansen, Lora Oehlberg, Petra Isenberg, IEEE Transactions on Visualization and Computer Graphics, 2022, 28 (1), pp.22–32.

Context

Examples of superpowers: Ultra Boy's penetra-vision, the Black Lanterns' emotion vision, the Mad Thinker's prediction, and Amadeus Cho's multiple powers.

We often talk about visualizations as tools for amplifying cognition—but what if we took that analogy a bit further, looking to superhero comics and other science fiction as sources of inspiration for visualizations that can enhance human abilities in new and surprising ways? Based on a deep dive into perceptual and cognitive superpowers in fiction, we propose two ways of thinking about the relationship between these powers and visualization, and describe what it means for a visualization to feel empowering. We also illustrate a set of new “visualization superpowers” that highlight opportunities for new empowering data visualizations, as well as the challenges they must confront.

This paper is a direct outcome of our Inria-Calgary associated team SEVEN and a good example of our continuing collaboration with former postdocs, PhD students, and visiting professors. In this paper, we describe how the lens of fictional superpowers can help characterize how visualizations empower people and provide inspiration for new visualization systems.

Contribution

Examples of enhancements to people's vision or cognition.

The scope of all superpowers in fiction is extremely broad and the majority of powers tend to be pragmatic ones that let characters change the world around them—including physical enhancements like super-strength and matter manipulation or mental abilities like thought projection. We chose largely to ignore these, instead focusing on what we call epistemic superpowers—superhuman abilities that let characters gain knowledge of the world without necessarily altering it. Specifically, we consider epistemic abilities that are either visual (where characters see in enhanced ways) or that are illustrated visually in the source media, even if the powers themselves aren’t strictly visual.

Our first framework attempts to capture low-level building blocks that underpin many different superpowers. These include abilities that enhance vision—including ones that increase humans’ ability to use their visual system to observe the surrounding world—as well as examples that enhance cognition and amplify humans’ capacity to process or reason about observations. We highlight seven different kinds of enhancements—but this isn’t an exhaustive list. Instead, it’s meant to provide a starting point for discussing the fictional superpowers that are most likely to inspire new visualization approaches.

Enter the Many Dimensions of Empowerment!

As we’ve seen, technologies that resemble many superpowers already exist. However, depending on how they are implemented, some systems create a much stronger sense of empowerment than others. With that in mind, our second framework highlights seven dimensions of empowerment: scope, access, spatial relevance, temporal relevance, information richness, degree of control, and environmental reality, exploring the ways in which each can alter people’s sense of empowerment or agency. We illustrate these dimensions by comparing well-known epistemic tools—existing technological systems that augment humans’ ability to learn about the world.

Ultimately, our frameworks, examples, and reflections are a provocation more than anything else. We need to look more broadly for sources of inspiration, and for opportunities for the tools we create to play a role in settings beyond traditional analytic ones. Vis needs more creative and divergent thinking, especially as new platforms and use cases knit it ever closer to the fabric of everyday life!

Impact

The paper won a best paper award at the IEEE Visualization conference. The award paragraph read:

This paper is bold, innovative, fun, thorough and persuasive, and a wonderful and rare example of breakthrough, creative thinking about data visualization research. It introduces two theoretical frameworks to explore the dual issue of epistemic powers (i.e. that advance knowledge) in vision and cognition, and their important key mechanics, which span instruments to fairness and accessibility. It provides an original, important and legitimate perspective on what we do, including a wider perspective on evaluating the benefits of visualization. This paper will be remembered for many years for its original perspective, thought-provoking messages and persuasive explanations.

The paper has been cited 37 times according to Google Scholar since its publication in 2021 (as of May 2024).

Article: Scalability in Visualization

Gaëlle Richer, Alexis Pister, Moataz Abdelaal, Jean-Daniel Fekete, Michael Sedlmair, Daniel Weiskopf, IEEE Transactions on Visualization and Computer Graphics, December 2022, to appear.

Context

We introduce a conceptual model for scalability designed for visualization research.

Scalability is a frequent topic in visualization, with many papers claiming to improve scalability or to achieve scalable—or sometimes more scalable—techniques. The visualization research community has a long tradition of acknowledging the need for scalable solutions, for example in summaries of grand research challenges for the various visualization communities.

The recent restructuring of the IEEE VIS conferences into a single conference with multiple areas attests to the fact that visualization research is becoming more diverse and trying to be more integrated. While some articles will remain targeted to a distinct audience well aware of its own meaning of scalability, a growing number of articles will cross boundaries to address multiple meanings of scalability, leading to more diverse reviewers and readers, with different backgrounds. We aim to help authors, reviewers, and readers navigate the different aspects of scalability.

This article sets a theoretical background for the activities of Aviz related to scalability.

Contribution

We clarify what “scalability” means when applied in the field of visualization and provide guidance on how to report scalability claims in articles.

The scalability model represents the scalability of a visualization process that tackles a specific problem by a function with four components: problem size, resources, assumptions, and effort.

The function maps the problem size, which is expected to vary or grow across applications, to the effort associated with the process’s solution to the problem, given an amount of resources and assumptions specific to the particular problem addressed.

Conceptual model: problem-size variables S and resource variables R as inputs to the function f, assumptions A bounding its validity, and effort variables E as its output.

The problem size variables are properties that characterize the complexity of the problem targeted or solved by the process. The resource variables are properties related to the material components of the system or application environment. Assumptions define the validity bounds of the function f for the chosen research context and problem definition. The effort variables are properties describing the performance of the process.
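A minimal formalization of this description (the symbols S, R, A, and E follow the figure above; the tuple notation is our shorthand, not the paper's):

```latex
\[
E = f(S, R) \quad \text{valid under assumptions } A,
\]
\[
S = (s_1, \dots, s_k), \qquad
R = (r_1, \dots, r_m), \qquad
E = (e_1, \dots, e_n),
\]
```

where each tuple collects the problem-size, resource, and effort variables of the process, and the assumptions A restrict the domain over which f meaningfully describes the process.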

There are multiple interpretations of the model, as shown in the figure below. Sometimes the shape of the effort function changes in nature, e.g., from quadratic to linear, as in (a). Sometimes the same effort can be achieved for a larger problem size, e.g., the refresh time of an interactive application can be kept under 20 ms for a larger problem. Claiming that a novel algorithm or system is more scalable therefore requires specifying the intended meaning of scalability.
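These interpretations can be made concrete with a small sketch. The effort functions below are hypothetical (the coefficients and the 20 ms budget are illustrative assumptions, not values from the article); f_old and f_new map a problem size S to an effort E, with resources R and assumptions A left implicit:

```python
def f_old(size: int) -> float:
    """Old process: effort (e.g., milliseconds) grows quadratically with size."""
    return 0.001 * size ** 2

def f_new(size: int) -> float:
    """New process: effort grows linearly with size."""
    return 0.05 * size

# Meaning 1: the effort function changes in nature (quadratic -> linear).
# Meaning 2: under a fixed effort budget, the new process handles a larger
# problem size.
def max_size(effort_fn, budget: float, limit: int = 10**6) -> int:
    """Largest problem size whose effort stays within the budget (linear scan)."""
    size = 0
    while size < limit and effort_fn(size + 1) <= budget:
        size += 1
    return size

budget = 20.0  # e.g., keep each refresh of an interactive view under 20 ms
print(max_size(f_old, budget))  # old process handles sizes up to 141
print(max_size(f_new, budget))  # new process handles sizes up to 400

# Meaning 3: at the same problem size, the new process needs less effort.
print(f_new(1000) < f_old(1000))  # True
```

Each print statement corresponds to one way of substantiating a "more scalable" claim; an article should state which of these meanings it intends.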

Examples of effort functions, f_new in solid blue and f_old in dashed green, with f_new being more scalable than f_old according to three meanings of scalable.

With this model, we systematically analyze over 120 visualization publications from 1990 to 2020 to characterize the different notions of scalability in these works.

The article provides a set of eight recommendations to help authors fully explain what they mean by "scalability" and provide the right information in a form understandable by anyone in the visualization community. It also asks reviewers to clarify their requests for scalability and, more generally, offers recommendations to the research community for communicating better about scalability issues.

Impact

This work is recent, but it will help clarify what researchers with different backgrounds call "scalability" in visualization. Scalability is not well handled in visualization research due to the difficulty of the topic and the lack of a clear definition. This article fills that gap.

New research can now talk about scalability with a shared vocabulary. It allows researchers in human-computer interaction to use the term to discuss, e.g., the limits of latency humans can deal with, and algorithm designers in high-performance computing to discuss scalability in terms of the shape of runtime performance relative to the number of processors, for example. Such discussions were hardly possible before because researchers in each domain relied on implicit assumptions about their own concept of scalability, which were hard to understand from outside that domain.

Multiscale Unfolding: Illustratively Visualizing the Whole Genome at a Glance

Sarkis Halladjian, David Kouřil, Haichao Miao, M. Eduard Gröller, Ivan Viola, and Tobias Isenberg. IEEE Transactions on Visualization and Computer Graphics, 28(10):3456–3470, October 2022. DOI: 10.1109/TVCG.2021.3065443

Multiscale Unfolding YouTube videos:

ScaleTrotter YouTube/Vimeo videos:

Context

This publication and its predecessor nicely exemplify our work in illustrative visualization and visual abstraction, and also connect to our work on linking 3D visualizations with 2D representations.

Contribution

Multiscale Unfolding is an interactive technique for illustratively visualizing multiple hierarchical scales of DNA in a single view, showing the genome at different scales and demonstrating how one scale spatially folds into the next. The DNA's extremely long sequential structure—arranged differently on several distinct scale levels—is often lost in traditional 3D depictions, mainly due to its multiple levels of dense spatial packing and the resulting occlusion. Furthermore, interactive exploration of this complex structure is cumbersome, requiring visibility management like cut-aways. In contrast to existing temporally controlled multiscale data exploration, with Multiscale Unfolding we allow viewers to always see and interact with any of the involved scales. For this purpose we separate the depiction into constant-scale and scale transition zones. Constant-scale zones maintain a single-scale representation, while still linearly unfolding the DNA. Inspired by illustration, scale transition zones connect adjacent constant-scale zones via level unfolding, scaling, and transparency. We thus represent the spatial structure of the whole DNA macro-molecule, maintain its local organizational characteristics, linearize its higher-level organization, and use spatially controlled, understandable interpolation between neighboring scales. We also contribute interaction techniques that provide viewers with coarse-to-fine control for navigating within our all-scales-in-one-view representations, and visual aids to illustrate the size differences. Overall, Multiscale Unfolding allows viewers to grasp the DNA's structural composition from chromosomes to atoms, with increasing levels of “unfoldedness,” and can be applied in data-driven illustration and communication.

Example of Multiscale Unfolding

This Multiscale Unfolding work builds upon the time-controlled scaling method ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains like astronomy that require a multi-scale representation as well. First, genome data has intertwined scale levels—the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out—instead the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We addressed these challenges by creating a new multi-scale visualization concept using a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter is roaming between 2D and 3D visual representations that are depicted in integrated visuals.

Example of the zoom stages of ScaleTrotter

Impact

Following the presentations of the two papers at IEEE VIS in 2021 and 2019, respectively, we have presented this work in a number of invited talks, including at Peking University (China), Linköping University (Sweden), Indiana University (USA), and the Shonan Seminar on Toughening the Foundations of Visual Abstraction (Japan).