IEEE VIS Publication Dataset

Vis
2011
WYSIWYG (What You See is What You Get) Volume Visualization
10.1109/TVCG.2011.261
2. 2114
J
In this paper, we propose a volume visualization system that accepts direct manipulation through a sketch-based What You See Is What You Get (WYSIWYG) approach. Similar to the operations in painting applications for 2D images, our system provides a full set of tools for directly manipulating the color, transparency, contrast, brightness, and other optical properties of a volume rendering by brushing a few strokes on top of the rendered volume image. To identify the targeted features of the volume from such sparse input, our system matches the sketched strokes with clustered features in both image space and volume space. To achieve interactivity, special algorithms that accelerate input identification and feature matching have been developed and implemented in our system. Without resorting to tuning transfer function parameters, the proposed system accepts sparse stroke inputs and provides users with intuitive, flexible, and effective interaction during volume data exploration and visualization.
Hanqi Guo;Ningyu Mao;Xiaoru Yuan
Key Lab. of Machine Perception (Minist. of Educ.), Peking Univ., Beijing, China|c|;;
10.1109/TVCG.2010.145;10.1109/VISUAL.1998.745319;10.1109/TVCG.2008.120;10.1109/VISUAL.2003.1250414;10.1109/TVCG.2007.70591;10.1109/VISUAL.1996.568113;10.1109/TVCG.2009.189;10.1109/TVCG.2008.162;10.1109/VISUAL.2003.1250386;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.2000.885694;10.1109/VISUAL.2005.1532856;10.1109/VISUAL.1997.663875;10.1109/TVCG.2006.148
Volume rendering, Sketching input, Human-computer interaction, Transfer functions, Feature space
InfoVis
2010
A Visual Backchannel for Large-Scale Events
10.1109/TVCG.2010.129
1. 1138
J
We introduce the concept of a Visual Backchannel as a novel way of following and exploring online conversations about large-scale events. Microblogging communities, such as Twitter, are increasingly used as digital backchannels for timely exchange of brief comments and impressions during political speeches, sport competitions, natural disasters, and other large events. Currently, shared updates are typically displayed in the form of a simple list, making it difficult to get an overview of these fast-paced discussions as they happen in the moment and evolve over time. In contrast, our Visual Backchannel design provides an evolving, interactive, and multi-faceted visual overview of large-scale ongoing conversations on Twitter. To visualize a continuously updating information stream, we include visual saliency for what is happening now and what has just happened, set in the context of the evolving conversation. As part of a fully web-based coordinated-view system, we introduce Topic Streams, a temporally adjustable stacked graph visualizing topics over time; a People Spiral, representing participants and their activity; and an Image Cloud, encoding the popularity of event photos by size. Together with a post listing, these mutually linked views support cross-filtering along topics, participants, and time ranges. We discuss our design considerations, in particular with respect to evolving visualizations of dynamically changing data. Initial feedback indicates significant interest and suggests several unanticipated uses.
Dork, M.;Gruen, D.;Williamson, C.;Carpendale, S.
;;;
10.1109/VAST.2009.5333443;10.1109/TVCG.2007.70541;10.1109/TVCG.2008.166;10.1109/TVCG.2008.175;10.1109/INFVIS.2005.1532133;10.1109/INFVIS.2003.1249028;10.1109/VAST.2008.4677364;10.1109/VAST.2009.5333437
Backchannel, information visualization, events, multiple views, microblogging, information retrieval, World Wide Web
InfoVis
2010
An Extension of Wilkinson's Algorithm for Positioning Tick Labels on Axes
10.1109/TVCG.2010.130
1. 1043
J
The non-data components of a visualization, such as axes and legends, can often be just as important as the data itself. They provide contextual information essential to interpreting the data. In this paper, we describe an automated system for choosing positions and labels for axis tick marks. Our system extends Wilkinson's optimization-based labeling approach to create a more robust, full-featured axis labeler. We define an expanded space of axis labelings by automatically generating additional nice numbers as needed and by permitting the extreme labels to occur inside the data range. These changes provide flexibility in problematic cases, without degrading quality elsewhere. We also propose an additional optimization criterion, legibility, which allows us to simultaneously optimize over label formatting, font size, and orientation. To solve this revised optimization problem, we describe the optimization function and an efficient search algorithm. Finally, we compare our method to previous work using both quantitative and qualitative metrics. This paper is a good example of how ideas from automated graphic design can be applied to information visualization.
Talbot, J.;Lin, S.;Hanrahan, P.
;;
Axis labeling, nice numbers
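The abstract above describes an optimization-based labeler: candidate tick sequences built from "nice" step sizes are enumerated and scored, and the best-scoring labeling is chosen. The sketch below illustrates that general idea only; the candidate list, scoring terms, and weights are illustrative assumptions, not the exact formulation from Wilkinson or Talbot et al.

```python
# Minimal sketch of an optimization-based axis labeler in the spirit of the
# approach described above. NICE_STEPS, the scoring terms, and the weights are
# illustrative assumptions, not the paper's formulation.
import math

NICE_STEPS = [1, 5, 2, 2.5, 4, 3]          # preference-ordered "nice" numbers

def candidate_labelings(lo, hi, max_ticks=10):
    """Enumerate tick sequences built from nice step sizes times powers of ten."""
    span = hi - lo
    for nice in NICE_STEPS:
        for exp in range(int(math.floor(math.log10(span))) - 1,
                         int(math.ceil(math.log10(span))) + 1):
            step = nice * 10 ** exp
            first = math.floor(lo / step) * step
            ticks, t = [], first
            while t <= hi + 1e-9 and len(ticks) <= max_ticks:
                if t >= lo - 1e-9:          # extreme labels may fall inside the data range
                    ticks.append(round(t, 10))
                t += step
            if 2 <= len(ticks) <= max_ticks:
                yield nice, ticks

def score(nice, ticks, lo, hi, target=5):
    simplicity = 1 - NICE_STEPS.index(nice) / (len(NICE_STEPS) - 1)
    coverage = (ticks[-1] - ticks[0]) / (hi - lo)      # fraction of the data range spanned
    density = 1 - abs(len(ticks) - target) / target    # closeness to a target tick count
    return 0.4 * simplicity + 0.3 * min(coverage, 1.0) + 0.3 * density

def label_axis(lo, hi):
    return max(candidate_labelings(lo, hi), key=lambda c: score(*c, lo, hi))[1]

print(label_axis(0.0, 73.0))   # -> [0, 10, 20, 30, 40, 50, 60, 70] with these illustrative weights
```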
InfoVis
2010
behaviorism: a framework for dynamic data visualization
10.1109/TVCG.2010.126
1. 1171
J
While a number of information visualization software frameworks exist, creating new visualizations, especially those that involve novel visualization metaphors, interaction techniques, data analysis strategies, and specialized rendering algorithms, is still often a difficult process. To facilitate the creation of novel visualizations, we present a new software framework, behaviorism, which provides a wide range of flexibility when working with dynamic information on visual, temporal, and ontological levels, while at the same time providing appropriate abstractions that allow developers to quickly create prototypes that can then easily be turned into robust systems. The core of the framework is a set of three interconnected graphs, each with associated operators: a scene graph for high-performance 3D rendering, a data graph for different layers of semantically linked heterogeneous data, and a timing graph for sophisticated control of scheduling, interaction, and animation. In particular, the timing graph provides a unified system for adding behaviors to both data and visual elements, as well as to the behaviors themselves. To evaluate the framework, we look briefly at three different projects, all of which required novel visualizations in different domains and all of which worked with dynamic data in different ways: an interactive ecological simulation, an information art installation, and an information visualization technique.
Forbes, A.;Hollerer, T.;Legrady, G.
Media Arts & Technol. Dept., Univ. of California, Santa Barbara, CA, USA|c|;;
10.1109/INFVIS.2004.64;10.1109/VISUAL.1996.567752;10.1109/INFVIS.1997.636761;10.1109/TVCG.2009.111
Frameworks, information visualization, information art, dynamic data
InfoVis
2010
Comparative Analysis of Multidimensional, Quantitative Data
10.1109/TVCG.2010.138
1. 1035
J
When analyzing multidimensional, quantitative data, the comparison of two or more groups of dimensions is a common task. Typical sources of such data are experiments in biology, physics or engineering, which are conducted in different configurations and use replicates to ensure statistically significant results. One common way to analyze this data is to filter it using statistical methods and then run clustering algorithms to group similar values. The clustering results can be visualized using heat maps, which show differences between groups as changes in color. However, in cases where groups of dimensions have an a priori meaning, it is not desirable to cluster all dimensions combined, since a clustering algorithm can fragment continuous blocks of records. Furthermore, identifying relevant elements in heat maps becomes more difficult as the number of dimensions increases. To aid in such situations, we have developed Matchmaker, a visualization technique that allows researchers to arbitrarily arrange and compare multiple groups of dimensions at the same time. We create separate groups of dimensions which can be clustered individually, and place them in an arrangement of heat maps reminiscent of parallel coordinates. To identify relations, we render bundled curves and ribbons between related records in different groups. We then allow interactive drill-downs using enlarged detail views of the data, which enable in-depth comparisons of clusters between groups. To reduce visual clutter, we minimize crossings between the views. This paper concludes with two case studies. The first demonstrates the value of our technique for the comparison of clustering algorithms. In the second, biologists use our system to investigate why certain strains of mice develop liver disease while others remain healthy, informally showing the efficacy of our system when analyzing multidimensional data containing distinct groups of dimensions.
Lex, A.;Streit, M.;Partl, C.;Kashofer, K.;Schmalstieg, D.
;;;;
10.1109/VISUAL.1996.568118;10.1109/VISUAL.1990.146402;10.1109/TVCG.2006.147;10.1109/TVCG.2007.70556;10.1109/TVCG.2007.70529;10.1109/TVCG.2009.167;10.1109/INFVIS.2000.885086
Multidimensional data, cluster comparison, bioinformatics visualization
InfoVis
2010
Declarative Language Design for Interactive Visualization
10.1109/TVCG.2010.144
1. 1156
J
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
Heer, J.;Bostock, M.
Comput. Sci. Dept., Stanford Univ., Stanford, CA, USA|c|;
10.1109/TVCG.2009.174;10.1109/TVCG.2006.178;10.1109/INFVIS.2004.12;10.1109/TVCG.2007.70577;10.1109/TVCG.2009.128;10.1109/VISUAL.1992.235219;10.1109/TVCG.2009.191;10.1109/TVCG.2009.110;10.1109/TVCG.2007.70539;10.1109/INFVIS.2004.64;10.1109/INFVIS.2000.885086
Information visualization, user interfaces, toolkits, domain specific languages, declarative languages, optimization
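As a conceptual illustration of the separation of specification from execution described above (and emphatically not the Protovis API, which the paper implements in Java), the sketch below treats a mark specification as plain data whose per-datum property functions are resolved by a small runtime at render time.

```python
# Conceptual sketch, not the Protovis API: the declarative spec is plain data,
# and a toy runtime resolves each declared property per datum when "rendering".
data = [1, 1.2, 1.7, 1.5, 0.7, 0.3]

bar_spec = {
    "mark": "bar",
    "left":   lambda d, i: i * 25,          # layout resolved at render time
    "height": lambda d, i: d * 80,
    "width":  lambda d, i: 20,
    "fill":   lambda d, i: "steelblue",
}

def render(spec, data):
    """Walk the data and resolve every declared property into a scene item."""
    scene = []
    for i, d in enumerate(data):
        item = {k: (v(d, i) if callable(v) else v) for k, v in spec.items()}
        scene.append(item)
    return scene            # a real backend would now draw or retarget this scene

for item in render(bar_spec, data):
    print(item)
```

Because the specification itself is inert data, a backend is free to optimize, parallelize, or retarget it, which is the property the abstract argues for.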
InfoVis
2010
eSeeTrack—Visualizing Sequential Fixation Patterns
10.1109/TVCG.2010.149
9. 962
J
We introduce eSeeTrack, an eye-tracking visualization prototype that facilitates exploration and comparison of sequential gaze orderings in a static or a dynamic scene. It extends current eye-tracking data visualizations by extracting patterns of sequential gaze orderings, displaying these patterns in a way that does not depend on the number of fixations on a scene, and enabling users to compare patterns from two or more sets of eye-gaze data. Extracting such patterns was very difficult with previous visualization techniques. eSeeTrack combines a timeline and a tree-structured visual representation to embody three aspects of eye-tracking data that users are interested in: duration, frequency and orderings of fixations. We demonstrate the usefulness of eSeeTrack via two case studies on surgical simulation and retail store chain data. We found that eSeeTrack allows ordering of fixations to be rapidly queried, explored and compared. Furthermore, our tool provides an effective and efficient mechanism to determine pattern outliers. This approach can be effective for behavior analysis in a variety of domains that are described at the end of this paper.
Hoi Ying Tsang;Tory, M.;Swindells, C.
;;
10.1109/TVCG.2009.117;10.1109/TVCG.2009.181;10.1109/TVCG.2008.172
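A minimal sketch of the pattern-extraction idea in the abstract above: counting ordered n-grams of fixated areas of interest gives the frequency of sequential gaze orderings independent of the total number of fixations. The AOI labels and the n-gram formulation are assumptions for illustration, not eSeeTrack's actual algorithm.

```python
# Illustrative sketch: ordered n-grams over a fixation sequence approximate the
# "sequential gaze orderings" idea; AOI names below are made up.
from collections import Counter

def fixation_ngrams(aoi_sequence, n=3):
    """Count every ordered pattern of n consecutive fixations."""
    return Counter(tuple(aoi_sequence[i:i + n])
                   for i in range(len(aoi_sequence) - n + 1))

gaze = ["shelf", "price", "product", "price", "product", "shelf", "price", "product"]
for pattern, count in fixation_ngrams(gaze, n=2).most_common(3):
    print(pattern, count)
```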
InfoVis
2010
Evaluating the impact of task demands and block resolution on the effectiveness of pixel-based visualization
10.1109/TVCG.2010.150
9. 972
J
Pixel-based visualization is a popular method of conveying large amounts of numerical data graphically. Application scenarios include business and finance, bioinformatics and remote sensing. In this work, we examined how the usability of such visual representations varied across different tasks and block resolutions. The main stimuli consisted of temporal pixel-based visualization with a white-red color map, simulating monthly temperature variation over a six-year period. In the first study, we included 5 separate tasks to exert different perceptual loads. We found that performance varied considerably as a function of task, ranging from 75% correct in low-load tasks to below 40% in high-load tasks. There was a small but consistent effect of resolution, with the uniform patch improving performance by around 6% relative to higher block resolution. In the second user study, we focused on a high-load task for evaluating month-to-month changes across different regions of the temperature range. We tested both CIE L*u*v* and RGB color spaces. We found that the nature of the change-evaluation errors related directly to the distance between the compared regions in the mapped color space. We were able to reduce such errors by using multiple color bands for the same data range. In a final study, we examined more fully the influence of block resolution on performance, and found block resolution had a limited impact on the effectiveness of pixel-based visualization.
Borgo, R.;Proctor, K.;Chen, M.;Jänicke, H.;Murray, T.;Thornton, I.M.
Comput. Sci., Swansea Univ., Swansea, UK|c|;;;;;
10.1109/VISUAL.1995.480803
Pixel-based visualization, evaluation, user study, visual search, change detection
InfoVis
2010
FacetAtlas: Multifaceted Visualization for Rich Text Corpora
10.1109/TVCG.2010.154
1. 1181
J
Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often consists of different facets such as symptom, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may have different relations based on different facets. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or across document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, (3) automated opacity pattern enhancement for highlighting visual patterns, and (4) interactive context switching between facets. In addition, we demonstrate the power of FacetAtlas through a case study that targets patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.
Nan Cao;Jimeng Sun;Yu-Ru Lin;Gotz, D.;Shixia Liu;Huamin Qu
Dept. of Comput. Sci. & Eng., Hong Kong Univ. of Sci. & Technol., Hong Kong, China|c|;;;;;
10.1109/TVCG.2006.122;10.1109/VAST.2009.5333443;10.1109/INFVIS.2000.885098;10.1109/TVCG.2009.140;10.1109/TVCG.2008.135;10.1109/TVCG.2008.172;10.1109/TVCG.2009.139;10.1109/TVCG.2009.171;10.1109/INFVIS.2005.1532126;10.1109/TVCG.2006.142;10.1109/TVCG.2006.147;10.1109/VISUAL.1998.745302;10.1109/TVCG.2009.165;10.1109/INFVIS.1995.528686;10.1109/TVCG.2006.185
Multi-facet visualization, Text visualization, Multi-relational Graph, Search UI
InfoVis
2010
GeneaQuilts: A System for Exploring Large Genealogies
10.1109/TVCG.2010.159
1. 1081
J
GeneaQuilts is a new visualization technique for representing large genealogies of up to several thousand individuals. The visualization takes the form of a diagonally-filled matrix, where rows are individuals and columns are nuclear families. After identifying the major tasks performed in genealogical research and the limits of current software, we present an interactive genealogy exploration system based on GeneaQuilts. The system includes an overview, a timeline, search and filtering components, and a new interaction technique called Bring & Slide that allows fluid navigation in very large genealogies. We report on preliminary feedback from domain experts and show how our system supports a number of their tasks.
Bezerianos, A.;Dragicevic, P.;Fekete, J.;Juhee Bae;Watson, B.
;;;;
10.1109/INFVIS.2002.1173156;10.1109/INFVIS.2005.1532124
Genealogy visualization, interaction
InfoVis
2010
Graphical inference for infovis
10.1109/TVCG.2010.161
9. 979
J
How do we know if what we see is really there? When visualizing data, how do we avoid falling into the trap of apophenia where we see patterns in random noise? Traditionally, infovis has been concerned with discovering new relationships, and statistics with preventing spurious relationships from being reported. We pull these opposing poles closer with two new techniques for rigorous statistical inference of visual discoveries. The "Rorschach" helps the analyst calibrate their understanding of uncertainty and "line-up" provides a protocol for assessing the significance of visual discoveries, protecting against the discovery of spurious structure.
Wickham, H.;Cook, D.;Hofmann, H.;Buja, A.
Rice Univ., Houston, TX, USA|c|;;;
10.1109/TVCG.2007.70577
Statistics, visual testing, permutation tests, null hypotheses, data plots
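The "line-up" protocol described above can be sketched as follows, assuming a simple permutation null (shuffling y destroys any x-y association); the panel count and the null model are illustrative choices, not the only ones the paper permits.

```python
# Sketch of the line-up protocol: hide the real plot among permutation-null
# decoys; if a viewer picks it out, the visible structure is unlikely to be noise.
import random

def lineup(x, y, n_panels=20, seed=None):
    """Return n_panels (x, y) datasets: one real, the rest permutation nulls,
    plus the index of the real panel (kept secret from the viewer)."""
    rng = random.Random(seed)
    real_pos = rng.randrange(n_panels)
    panels = []
    for i in range(n_panels):
        if i == real_pos:
            panels.append((list(x), list(y)))          # the observed data
        else:
            y_null = list(y)
            rng.shuffle(y_null)                        # null: x-y association destroyed
            panels.append((list(x), y_null))
    return panels, real_pos

# Reliably picking the real panel out of 20 corresponds to significance at
# roughly the 1/20 = 0.05 level under the permutation null.
x = list(range(30))
y = [xi * 0.5 + random.gauss(0, 2) for xi in x]
panels, secret = lineup(x, y, seed=42)
print("real data hidden at panel", secret)
```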
InfoVis
2010
Graphical Perception of Multiple Time Series
10.1109/TVCG.2010.162
9. 934
J
Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759-1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series--such as small multiples and horizon graphs--are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques--like standard line graphs--are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced.
Javed, W.;McDonnel, B.;Elmqvist, N.
Purdue Univ. in West Lafayette, West Lafayette, IN, USA|c|;;
10.1109/TVCG.2008.166;10.1109/TVCG.2007.70583;10.1109/TVCG.2007.70535;10.1109/INFVIS.1999.801851;10.1109/TVCG.2008.125;10.1109/INFVIS.2005.1532144
Line graphs, braided graphs, horizon graphs, small multiples, stacked graphs, evaluation, design guidelines
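For readers unfamiliar with horizon graphs, the sketch below shows the mirror-and-band transformation they rely on: values are split into uniform bands that are overplotted so a tall series fits a short strip. The band count and the textual output are assumptions for illustration, not the stimuli used in the study above.

```python
# Illustrative mirror-and-band transformation behind horizon graphs.
def horizon_bands(values, n_bands=3):
    """Map each value to per-band fill fractions plus a sign flag. Rendering
    overplots the bands in one strip; darker color = higher band."""
    vmax = max(abs(v) for v in values) or 1.0
    band_size = vmax / n_bands
    rows = []
    for v in values:
        mag = abs(v)
        bands = [min(max(mag - b * band_size, 0.0), band_size) / band_size
                 for b in range(n_bands)]   # fraction of each band covered
        rows.append((bands, v < 0))
    return rows

series = [0.2, 1.4, -0.8, 2.9, -2.1, 0.5]
for bands, neg in horizon_bands(series):
    print(("-" if neg else "+"), [round(f, 2) for f in bands])
```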
InfoVis
2010
Gremlin: An Interactive Visualization Model for Analyzing Genomic Rearrangements
10.1109/TVCG.2010.163
9. 926
J
In this work we present, apply, and evaluate a novel, interactive visualization model for comparative analysis of structural variants and rearrangements in human and cancer genomes, with emphasis on data integration and uncertainty visualization. To support both global trend analysis and local feature detection, this model enables explorations continuously scaled from the high-level, complete genome perspective, down to the low-level, structural rearrangement view, while preserving global context at all times. We have implemented these techniques in Gremlin, a genomic rearrangement explorer with multi-scale, linked interactions, which we apply to four human cancer genome data sets for evaluation. Using an insight-based evaluation methodology, we compare Gremlin to Circos, the state-of-the-art in genomic rearrangement visualization, through a small user study with computational biologists working in rearrangement analysis. Results from user study evaluations demonstrate that this visualization model enables more total insights, more insights per minute, and more complex insights than the current state-of-the-art for visual analysis and exploration of genome rearrangements.
O'Brien, T.M.;Ritz, A.M.;Raphael, B.J.;Laidlaw, D.H.
Comput. Sci. Dept., Brown Univ., Providence, RI, USA|c|;;;
10.1109/TVCG.2009.174;10.1109/TVCG.2009.167
Information visualization, bioinformatics, insight-based evaluation
InfoVis
2010
How Information Visualization Novices Construct Visualizations
10.1109/TVCG.2010.164
9. 952
J
It remains challenging for information visualization novices to rapidly construct visualizations during exploratory data analysis. We conducted an exploratory laboratory study in which information visualization novices explored fictitious sales data by communicating visualization specifications to a human mediator, who rapidly constructed the visualizations using commercial visualization software. We found that three activities were central to the iterative visualization construction process: data attribute selection, visual template selection, and visual mapping specification. The major barriers faced by the participants were translating questions into data attributes, designing visual mappings, and interpreting the visualizations. Partial specification was common, and the participants used simple heuristics and preferred visualizations they were already familiar with, such as bar, line and pie charts. We derived abstract models from our observations that describe barriers in the data exploration process and uncovered how information visualization novices think about visualization specifications. Our findings support the need for tools that suggest potential visualizations and support iterative refinement, that provide explanations and help with learning, and that are tightly integrated into tool support for the overall visual analytics process.
Grammel, L.;Tory, M.;Storey, M.
Univ. of Victoria, Victoria, BC, Canada|c|;;
10.1109/TVCG.2007.70515;10.1109/TVCG.2006.163;10.1109/TVCG.2007.70541;10.1109/VAST.2009.5333878;10.1109/TVCG.2008.109;10.1109/VAST.2006.261428;10.1109/TVCG.2007.70577;10.1109/VAST.2008.4677358;10.1109/VAST.2008.4677365;10.1109/TVCG.2007.70535;10.1109/INFVIS.2005.1532136;10.1109/INFVIS.1998.729560;10.1109/TVCG.2007.70594;10.1109/INFVIS.2000.885086;10.1109/INFVIS.2001.963289;10.1109/INFVIS.2000.885092;10.1109/TVCG.2008.137
Empirical study, visualization, visualization construction, visual analytics, visual mapping, novices
InfoVis
2010
Laws of Attraction: From Perceptual Forces to Conceptual Similarity
10.1109/TVCG.2010.174
1. 1016
J
Many of the pressing questions in information visualization deal with how exactly a user reads a collection of visual marks as information about relationships between entities. Previous research has suggested that people see parts of a visualization as objects, and may metaphorically interpret apparent physical relationships between these objects as suggestive of data relationships. We explored this hypothesis in detail in a series of user experiments. Inspired by the concept of implied dynamics in psychology, we first studied whether perceived gravity acting on a mark in a scatterplot can lead to errors in a participant's recall of the mark's position. The results of this study suggested that such position errors exist, but may be more strongly influenced by attraction between marks. We hypothesized that such apparent attraction may be influenced by elements used to suggest relationship between objects, such as connecting lines, grouping elements, and visual similarity. We further studied what visual elements are most likely to cause this attraction effect, and whether the elements that best predicted attraction errors were also those which suggested conceptual relationships most strongly. Our findings show a correlation between attraction errors and intuitions about relatedness, pointing towards a possible mechanism by which the perception of visual marks becomes an interpretation of data relationships.
Ziemkiewicz, C.;Kosara, R.
;
10.1109/TVCG.2008.125
Perceptual cognition, visualization models, laboratory studies, cognition theory
InfoVis
2010
ManiWordle: Providing Flexible Control over Wordle
10.1109/TVCG.2010.175
1. 1197
J
Among the multifarious tag-clouding techniques, Wordle stands out by providing an aesthetic layout, eliciting the emergence of a participatory culture and the use of tag-clouding in artistic creations. In this paper, we introduce ManiWordle, a Wordle-based visualization tool that revamps interactions with the layout by supporting custom manipulations. ManiWordle allows people to manipulate typography, color, and composition not only for the layout as a whole, but also for individual words, enabling them to have better control over the layout result. We first describe our design rationale along with the interaction techniques for tweaking the layout. We then present the results both from a preliminary usability study and from a comparative study between ManiWordle and Wordle. The results suggest that ManiWordle provides higher user satisfaction and an efficient method of creating the desired "art work," harnessing the power behind the ever-increasing popularity of Wordle.
Koh, K.;Bongshin Lee;Bohyoung Kim;Jinwook Seo
Seoul Nat. Univ., Seoul, South Korea|c|;;;
10.1109/TVCG.2007.70541;10.1109/TVCG.2007.70515;10.1109/TVCG.2009.171;10.1109/VAST.2009.5333443;10.1109/INFVIS.2003.1249031
Interaction design, direct manipulation, flexibility-usability tradeoff, tag-cloud, participatory visualization, user study
InfoVis
2010
Matching Visual Saliency to Confidence in Plots of Uncertain Data
10.1109/TVCG.2010.176
9. 989
J
Conveying data uncertainty in visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. This paper proposes a methodology for efficiently generating density plots of uncertain multivariate data sets that draws viewers to preattentively identify values of high certainty while not calling attention to uncertain values. We demonstrate how to augment scatter plots and parallel coordinates plots to incorporate statistically modeled uncertainty and show how to integrate them with existing multivariate analysis techniques, including outlier detection and interactive brushing. Computing high quality density plots can be expensive for large data sets, so we also describe a probabilistic plotting technique that summarizes the data without requiring explicit density plot computation. These techniques have been useful for identifying brain tumors in multivariate magnetic resonance spectroscopy data and we describe how to extend them to visualize ensemble data sets.
Feng, D.;Kwock, L.;Yueh Lee;Taylor, R.M.
Univ. of North Carolina at Chapel Hill, Chapel Hill, NC, USA|c|;;;
10.1109/TVCG.2008.119;10.1109/INFVIS.2001.963286;10.1109/TVCG.2008.167;10.1109/TVCG.2009.179;10.1109/INFVIS.2002.1173145;10.1109/TVCG.2009.131;10.1109/VISUAL.1999.809866;10.1109/TVCG.2009.114;10.1109/TVCG.2006.170;10.1109/INFVIS.2004.3;10.1109/VISUAL.1994.346302;10.1109/TVCG.2008.153;10.1109/TVCG.2009.118
Uncertainty visualization, brushing, scatter plots, parallel coordinates, multivariate data
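A small sketch of the saliency-confidence mapping idea above, under the assumption of a simple linear opacity ramp (not the paper's statistical model): more certain points receive more opaque, and hence more preattentively salient, marks, while uncertain points fade toward the background.

```python
# Illustrative certainty-to-saliency mapping: low standard deviation -> opaque mark.
def confidence_to_rgba(std_devs, base_rgb=(0.8, 0.1, 0.1), min_alpha=0.1):
    """Map each point's standard deviation to an RGBA color."""
    max_sd = max(std_devs) or 1.0
    return [base_rgb + (min_alpha + (1.0 - min_alpha) * (1.0 - sd / max_sd),)
            for sd in std_devs]

colors = confidence_to_rgba([0.05, 0.4, 1.2, 0.0, 2.0])
for c in colors:
    print(tuple(round(v, 2) for v in c))   # alpha shrinks as uncertainty grows
```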
InfoVis
2010
Mental Models, Visual Reasoning and Interaction in Information Visualization: A Top-down Perspective
10.1109/TVCG.2010.177
9. 1008
J
Although previous research has suggested that examining the interplay between internal and external representations can benefit our understanding of the role of information visualization (InfoVis) in human cognitive activities, there has been little work detailing the nature of internal representations, the relationship between internal and external representations and how interaction is related to these representations. In this paper, we identify and illustrate a specific kind of internal representation, mental models, and outline the high-level relationships between mental models and external visualizations. We present a top-down perspective of reasoning as model construction and simulation, and discuss the role of visualization in model based reasoning. From this perspective, interaction can be understood as active modeling for three primary purposes: external anchoring, information foraging, and cognitive offloading. Finally we discuss the implications of our approach for design, evaluation and theory development.
Zhicheng Liu;Stasko, J.
Sch. of Interactive Comput., Georgia Inst. of Technol., Atlanta, GA, USA|c|;
10.1109/TVCG.2009.187;10.1109/TVCG.2008.155;10.1109/INFVIS.2001.963289;10.1109/TVCG.2009.109;10.1109/TVCG.2007.70515;10.1109/TVCG.2009.180;10.1109/TVCG.2008.109;10.1109/TVCG.2008.171;10.1109/TVCG.2008.121;10.1109/VAST.2008.4677365
Mental model, model-based reasoning, distributed cognition, interaction, theory, information visualization
InfoVis
2010
MulteeSum: A Tool for Comparative Spatial and Temporal Gene Expression Data
10.1109/TVCG.2010.137
9. 917
J
Cells in an organism share the same genetic information in their DNA, but have very different forms and behavior because of the selective expression of subsets of their genes. The widely used approach of measuring gene expression over time from a tissue sample, using techniques such as microarrays or sequencing, does not provide information about the spatial position within the tissue where these genes are expressed. In contrast, we are working with biologists who use techniques that measure gene expression in every individual cell of entire fruitfly embryos over an hour of their development, and do so for multiple closely related subspecies of Drosophila. These scientists are faced with the challenge of integrating temporal gene expression data with the spatial location of cells and, moreover, comparing this data across multiple related species. We have worked with these biologists over the past two years to develop MulteeSum, a visualization system that supports inspection and curation of data sets showing gene expression over time, in conjunction with the spatial location of the cells where the genes are expressed; it is the first tool to support comparisons across multiple such data sets. MulteeSum is part of a general and flexible framework we developed with our collaborators that is built around multiple summaries for each cell, allowing the biologists to explore the results of computations that mix spatial information, gene expression measurements over time, and data from multiple related species or organisms. We justify our design decisions based on specific descriptions of the analysis needs of our collaborators, and provide anecdotal evidence of the efficacy of MulteeSum through a series of case studies.
Meyer, M.;Munzner, T.;DePace, A.;Pfister, H.
;;;
10.1109/TVCG.2006.178;10.1109/TVCG.2007.70583;10.1109/TVCG.2007.70589;10.1109/TVCG.2009.167
Spatial data, temporal data, gene expression
InfoVis
2010
Narrative Visualization: Telling Stories with Data
10.1109/TVCG.2010.179
1. 1148
J
Data visualization is regularly promoted for its ability to reveal stories within data, yet these "data stories" differ in important ways from traditional forms of storytelling. Storytellers, especially online journalists, have increasingly been integrating visualizations into their narratives, in some cases allowing the visualization to function in place of a written story. In this paper, we systematically review the design space of this emerging class of visualizations. Drawing on case studies from news media to visualization research, we identify distinct genres of narrative visualization. We characterize these design differences, together with interactivity and messaging, in terms of the balance between the narrative flow intended by the author (imposed by graphical elements and the interface) and story discovery on the part of the reader (often through interactive exploration). Our framework suggests design strategies for narrative visualization, including promising under-explored approaches to journalistic storytelling and educational media.
Segel, E.;Heer, J.
Stanford Univ., Stanford, CA, USA|c|;
10.1109/TVCG.2007.70577;10.1109/TVCG.2007.70539;10.1109/TVCG.2008.137;10.1109/VAST.2007.4388992
Narrative visualization, storytelling, design methods, case study, journalism, social data analysis