IEEE VIS Publication Dataset

VAST
2012
A generic model for the integration of interactive visualization and statistical computing using R
10.1109/VAST.2012.6400537
2. 234
M
This poster describes general concepts of integrating the statistical computation package R into a coordinated multiple views framework. The integration is based on a cyclic analysis workflow. In this model, interactive selections are a key aspect to trigger and control computations in R. Dynamic updates of data columns are a generic mechanism to transfer computational results back to the interactive visualization. Further aspects include the integration of the R console and an R object browser as views in our system. We illustrate our approach by means of an interactive modeling process.
Kehrer, J.;Boubela, R.N.;Filzmoser, P.;Piringer, H.
VRVis Res. Center, Vienna, Austria|c|;;;
VAST
2012
A Visual Analytics Approach to Multiscale Exploration of Environmental Time Series
10.1109/TVCG.2012.191
2. 2907
J
We present a Visual Analytics approach that addresses the detection of interesting patterns in numerical time series, specifically from environmental sciences. Crucial for the detection of interesting temporal patterns are the time scale and the starting points one is looking at. Our approach makes no assumption about time scale and starting position of temporal patterns and consists of three main steps: an algorithm to compute statistical values for all possible time scales and starting positions of intervals, visual identification of potentially interesting patterns in a matrix visualization, and interactive exploration of detected patterns. We demonstrate the utility of this approach in two scientific scenarios and explain how it allowed scientists to gain new insight into the dynamics of environmental systems.
Sips, M.;Kothur, P.;Unger, A.;Hege, H.-C.;Dransch, D.
German Res. Center for Geosci. GFZ, Germany|c|;;;;
10.1109/INFVIS.2001.963273;10.1109/INFVIS.1995.528685
Time series analysis, multiscale visualization, visual analytics
VAST
2012
A visual analytics approach to understanding cycling behaviour
10.1109/VAST.2012.6400550
2. 208
M
Existing research into cycling behaviours has either relied on detailed ethnographic studies or larger public attitude surveys [1] [9]. Instead, following recent contributions from information visualization [13] and data mining [5] [7], this design study uses visual analytics techniques to identify, describe and explain cycling behaviours within a large and attribute-rich transactional dataset. Using data from London's bike share scheme, customer-level classifications will be created, which consider the regularity of scheme use, journey length and travel times. Monitoring customer usage over time, user classifications will attend to the dynamics of cycling behaviour, asking substantive questions about how behaviours change under varying conditions. The 3-year PhD project will contribute to academic and strategic discussions around sustainable travel policy. A programme of research is outlined, along with an early visual analytics prototype for rapidly querying customer journeys.
Beecham, R.;Wood, J.;Bowerman, A.
City Univ. London, London, UK|c|;;
VAST
2012
AlVis: Situation awareness in the surveillance of road tunnels
10.1109/VAST.2012.6400556
1. 162
C
In the surveillance of road tunnels, video data plays an important role for a detailed inspection and as an input to systems for an automated detection of incidents. In disaster scenarios like major accidents, however, the increased amount of detected incidents may lead to situations where human operators lose a sense of the overall meaning of that data, a problem commonly known as a lack of situation awareness. The primary contribution of this paper is a design study of AlVis, a system designed to increase situation awareness in the surveillance of road tunnels. The design of AlVis is based on a simplified tunnel model which enables an overview of the spatiotemporal development of scenarios in real-time. The visualization explicitly represents the present state, the history, and predictions of potential future developments. Concepts for situation-sensitive prioritization of information ensure scalability from normal operation to major disaster scenarios. The visualization enables an intuitive access to live and historic video for any point in time and space. We illustrate AlVis by means of a scenario and report qualitative feedback by tunnel experts and operators. This feedback suggests that AlVis is suitable to save time in recognizing dangerous situations and helps to maintain an overview in complex disaster scenarios.
Piringer, H.;Buchetics, M.;Benedik, R.
;;
10.1109/INFVIS.2002.1173149;10.1109/INFVIS.2005.1532134;10.1109/VAST.2011.6102456;10.1109/TVCG.2007.70544;10.1109/TVCG.2007.70521;10.1109/TVCG.2007.70621;10.1109/INFVIS.2004.27;10.1109/INFVIS.1995.528685;10.1109/TVCG.2008.185;10.1109/VAST.2007.4388994;10.1109/VAST.2007.4388998;10.1109/VAST.2008.4677353
VAST
2012
An adaptive parameter space-filling algorithm for highly interactive cluster exploration
10.1109/VAST.2012.6400493
1. 22
C
For a user to perceive continuous interactive response time in a visualization tool, the rule of thumb is that it must process, deliver, and display rendered results for any given interaction in under 100 milliseconds. In many visualization systems, successive interactions trigger independent queries and caching of results. Consequently, computationally expensive queries like multidimensional clustering cannot keep up with rapid sequences of interactions, precluding visual benefits such as motion parallax. In this paper, we describe a heuristic prefetching technique to improve the interactive response time of KMeans clustering in dynamic query visualizations of multidimensional data. We address the tradeoff between high interaction and intense query computation by observing how related interactions on overlapping data subsets produce similar clustering results, and characterizing these similarities within a parameter space of interaction. We focus on the two-dimensional parameter space defined by the minimum and maximum values of a time range manipulated by dragging and stretching a one-dimensional filtering lens over a plot of time series data. Using calculation of nearest neighbors of interaction points in parameter space, we reuse partial query results from prior interaction sequences both to calculate an immediate best-effort clustering result and to schedule calculation of an exact result. The method adapts to user interaction patterns by reprioritizing the interaction neighbors of visited points in the parameter space. A performance study on Mesonet meteorological data demonstrates that the method is a significant improvement over the baseline scheme in which interaction triggers on-demand, exact-range clustering with LRU caching. We also present initial evidence that approximate, temporary clustering results are sufficiently accurate (compared to exact results) to convey useful cluster structure during rapid and protracted interaction.
Ahmed, Z.;Weaver, C.
Sch. of Comput. Sci. & Center for Spatial Anal., Univ. of Oklahoma, Norman, OK, USA|c|;
10.1109/TVCG.2007.70515;10.1109/INFVIS.2004.12;10.1109/VAST.2009.5332629;10.1109/VAST.2008.4677357;10.1109/TVCG.2011.188;10.1109/VISUAL.1994.346302;10.1109/INFVIS.1998.729559;10.1109/VAST.2007.4388999
VAST
2012
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration
10.1109/TVCG.2012.195
2. 2868
J
Visual Analytics is “the science of analytical reasoning facilitated by visual interactive interfaces” [70]. The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human- and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
Crouser, R.J.;Chang, R.
;
10.1109/VAST.2010.5652398;10.1109/VAST.2011.6102461;10.1109/TVCG.2009.199;10.1109/VAST.2010.5652910;10.1109/VAST.2010.5652484;10.1109/VAST.2009.5332584;10.1109/VAST.2010.5652885;10.1109/VAST.2009.5333564;10.1109/VAST.2010.5652392;10.1109/VAST.2009.5332586;10.1109/VAST.2011.6102451;10.1109/VAST.2009.5333023;10.1109/VAST.2009.5333020;10.1109/VAST.2009.5332628;10.1109/TVCG.2011.173;10.1109/TVCG.2011.218;10.1109/TVCG.2011.231;10.1109/VAST.2010.5652443;10.1109/VAST.2010.5653598;10.1109/VAST.2011.6102447
Human computation, human complexity, theory, framework
VAST
2012
Analyst's Workspace: An embodied sensemaking environment for large, high-resolution displays
10.1109/VAST.2012.6400559
1. 131
C
Distributed cognition and embodiment provide compelling models for how humans think and interact with the environment. Our examination of the use of large, high-resolution displays from an embodied perspective has led directly to the development of a new sensemaking environment called Analyst's Workspace (AW). AW leverages the embodied resources made more accessible through the physical nature of the display to create a spatial workspace. By combining spatial layout of documents and other artifacts with an entity-centric, explorative investigative approach, AW aims to allow the analyst to externalize elements of the sensemaking process as a part of the investigation, integrated into the visual representations of the data itself. In this paper, we describe the various capabilities of AW and discuss the key principles and concepts underlying its design, emphasizing unique design principles for designing visual analytic tools for large, high-resolution displays.
Andrews, C.;North, C.
Virginia Tech, Blacksburg, VA, USA|c|;
10.1109/TVCG.2008.121;10.1109/VAST.2008.4677362;10.1109/INFVIS.2004.27;10.1109/VAST.2008.4677358;10.1109/TVCG.2006.184;10.1109/VAST.2007.4388992;10.1109/VAST.2010.5652880;10.1109/VAST.2011.6102449;10.1109/VAST.2007.4389006;10.1109/VAST.2011.6102438;10.1109/VAST.2009.5333878
Embodiment, distributed cognition, large and high-resolution display, sensemaking, space
VAST
2012
Augmenting visual representation of affectively charged information using sound graphs
10.1109/VAST.2012.6400547
2. 214
M
Within the Visual Analytics research agenda there is an interest in studying the applicability of multimodal information representation and interaction techniques for the analytical reasoning process. The present study summarizes a pilot experiment conducted to understand the effects of augmenting visualizations of affectively-charged information using auditory graphs. We designed an audiovisual representation of social comments made on different news items posted on a popular website, and their affective dimension using a sentiment analysis tool for short texts. Participants of the study were asked to assess the affective valence trend (positive or negative) of the news articles using the visualizations and sonifications. The conditions were tested looking for a speed/accuracy trade-off, comparing the visual representation with an audiovisual one. We discuss our preliminary findings regarding the design of augmented information representation.
Calderon, N.A.;Riecke, B.E.;Fisher, B.
;;
VAST
2012
Dis-function: Learning distance functions interactively
10.1109/VAST.2012.6400486
8. 92
C
The world's corpora of data grow in size and complexity every day, making it increasingly difficult for experts to make sense out of their data. Although machine learning offers algorithms for finding patterns in data automatically, they often require algorithm-specific parameters, such as an appropriate distance function, which are outside the purview of a domain expert. We present a system that allows an expert to interact directly with a visual representation of the data to define an appropriate distance function, thus avoiding direct manipulation of obtuse model parameters. Adopting an iterative approach, our system first assumes a uniformly weighted Euclidean distance function and projects the data into a two-dimensional scatterplot view. The user can then move incorrectly-positioned data points to locations that reflect his or her understanding of the similarity of those data points relative to the other data points. Based on this input, the system performs an optimization to learn a new distance function and then re-projects the data to redraw the scatter-plot. We illustrate empirically that with only a few iterations of interaction and optimization, a user can achieve a scatterplot view and its corresponding distance function that reflect the user's knowledge of the data. In addition, we evaluate our system to assess scalability in data size and data dimension, and show that our system is computationally efficient and can provide an interactive or near-interactive user experience.
Brown, E.T.;Jingjing Liu;Brodley, C.E.;Chang, R.
Dept. of Comput. Sci., Tufts Univ., Medford, MA, USA|c|;;;
10.1109/VISUAL.1990.146402;10.1109/VAST.2011.6102449;10.1109/VAST.2007.4388999;10.1109/VAST.2009.5332584;10.1109/VAST.2011.6102448;10.1109/VAST.2008.4677352;10.1109/VAST.2010.5652443
VAST
2012
Enterprise Data Analysis and Visualization: An Interview Study
10.1109/TVCG.2012.219
2. 2926
J
Organizations rely on data analysts to model customer engagement, streamline operations, improve production, inform business decisions, and combat fraud. Though numerous analysis and visualization tools have been built to improve the scale and efficiency at which analysts can work, there has been little research on how analysis takes place within the social and organizational context of companies. To better understand the enterprise analysts' ecosystem, we conducted semi-structured interviews with 35 data analysts from 25 organizations across a variety of sectors, including healthcare, retail, marketing and finance. Based on our interview data, we characterize the process of industrial data analysis and document how organizational features of an enterprise impact it. We describe recurring pain points, outstanding challenges, and barriers to adoption for visual analytic tools. Finally, we discuss design implications and opportunities for visual analysis research.
Kandel, S.;Paepcke, A.;Hellerstein, J.M.;Heer, J.
Stanford Univ., Stanford, CA, USA|c|;;;
10.1109/TVCG.2008.137;10.1109/VAST.2008.4677365;10.1109/VAST.2011.6102438;10.1109/INFVIS.2005.1532136;10.1109/VAST.2010.5652880;10.1109/VAST.2009.5333878;10.1109/VAST.2007.4389011;10.1109/VAST.2011.6102435
Data, analysis, visualization, enterprise
VAST
2012
Examining the Use of a Visual Analytics System for Sensemaking Tasks: Case Studies with Domain Experts
10.1109/TVCG.2012.224
2. 2878
J
While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.
Youn-ah Kang;Stasko, J.
;
10.1109/VAST.2008.4677362;10.1109/VAST.2006.261416;10.1109/INFVIS.2004.5;10.1109/VAST.2011.6102438;10.1109/VAST.2012.6400559;10.1109/VAST.2007.4389006;10.1109/VAST.2009.5333878
Visual analytics, case study, qualitative evaluation
VAST
2012
Exploring cyber physical data streams using Radial Pixel Visualizations
10.1109/VAST.2012.6400541
2. 226
M
Cyber physical systems (CPS), such as smart buildings and data centers, are richly instrumented systems composed of tightly coupled computational and physical elements that generate large amounts of data. To explore CPS data and obtain actionable insights, we construct a Radial Pixel Visualization (RPV) system, which uses multiple concentric rings to show the data in a compact circular layout of small polygons (pixel cells), each of which represents an individual data value. RPV provides an effective visual representation of locality and periodicity of the high volume, multivariate data streams, and seamlessly combines them with the results of an automated analysis. In the outermost ring, the results of correlation analysis and peak point detection are highlighted. Our explorations demonstrate how RPV can help administrators to identify periodic thermal hot spots, understand data center energy consumption, and optimize IT workload.
Hao, M.C.;Marwah, M.;Mittelstadt, S.;Janetzko, H.;Keim, D.A.;Dayal, U.;Bash, C.;Felix, C.;Patel, C.;Hsu, M.;Chen, Y.
Hewlett-Packard Labs., Palo Alto, CA, USA|c|;;;;;;;;;;
VAST
2012
Exploring the impact of emotion on visual judgement
10.1109/VAST.2012.6400540
2. 228
M
Existing research suggests that individual personality differences can influence performance with visualizations. In addition to stable traits such as locus of control, research in psychology has found that temporary changes in affect (emotion) can significantly impact individual performance on cognitive tasks. We examine the relationship between fundamental visual judgement tasks and affect through a crowdsourced user study that combines affective-priming techniques from psychology with longstanding graphical perception experiments. Our results suggest that affective priming can significantly influence accuracy in visual judgements, and that some chart types may be more affected than others.
Harrison, L.;Chang, R.;Aidong Lu
UNC-Charlotte, Charlotte, NC, USA|c|;;
VAST
2012
Feature-similarity visualization of MRI cortical surface data
10.1109/VAST.2012.6400548
2. 212
M
We present an analytics-based framework for simultaneous visualization of large surface data collections arising in clinical neuroimaging studies. Termed Informatics Visualization for Neuroimaging (INVIZIAN), this framework allows the visualization of both cortical surface characteristics and feature relatedness in unison. It also uses dimension reduction methods to derive new coordinate systems using a Jensen-Shannon divergence metric for positioning cortical surfaces in a metric space such that the proximity in location is proportional to neuroanatomical similarity. Feature data such as thickness and volume are colored on the cortical surfaces and used to display both subject-specific feature values and global trends within the population. Additionally, a query-based framework allows the neuroscience researcher to investigate probable correlations between neuroanatomical and subject attribute values such as age and diagnosis.
Bowman, I.;Joshi, S.H.;Greer, V.;Van Horn, J.D.
Sch. of Med., Lab. of Neuro Imaging, UCLA, Los Angeles, CA, USA|c|;;;
VAST
2012
iLAMP: Exploring high-dimensional spacing through backward multidimensional projection
10.1109/VAST.2012.6400489
5. 62
C
Ever improving computing power and technological advances are greatly augmenting data collection and scientific observation. This has directly contributed to increased data complexity and dimensionality, motivating research of exploration techniques for multidimensional data. Consequently, a recent influx of work dedicated to techniques and tools that aid in understanding multidimensional datasets can be observed in many research fields, including biology, engineering, physics and scientific computing. While the effectiveness of existing techniques to analyze the structure and relationships of multidimensional data varies greatly, few techniques provide flexible mechanisms to simultaneously visualize and actively explore high-dimensional spaces. In this paper, we present an inverse linear affine multidimensional projection, coined iLAMP, that enables a novel interactive exploration technique for multidimensional data. iLAMP operates in reverse to traditional projection methods by mapping low-dimensional information into a high-dimensional space. This allows users to extrapolate instances of a multidimensional dataset while exploring a projection of the data to the planar domain. We present experimental results that validate iLAMP, measuring the quality and coherence of the extrapolated data; as well as demonstrate the utility of iLAMP to hypothesize the unexplored regions of a high-dimensional space.
Portes dos Santos Amorim, E.;Brazil, E.V.;Daniels, J.;Joia, P.;Nonato, L.G.;Sousa, M.C.
;;;;;
10.1109/INFVIS.2005.1532138;10.1109/TVCG.2008.116;10.1109/TVCG.2010.213;10.1109/TVCG.2009.140;10.1109/TVCG.2011.220;10.1109/VISUAL.1999.809866;10.1109/TVCG.2006.170;10.1109/INFVIS.2000.885086;10.1109/INFVIS.2004.15;10.1109/TVCG.2008.153;10.1109/INFVIS.2002.1173159;10.1109/INFVIS.2003.1249015;10.1109/VISUAL.1990.146402;10.1109/VISUAL.1996.567787;10.1109/TVCG.2010.170;10.1109/TVCG.2007.70580;10.1109/TVCG.2010.207;10.1109/INFVIS.2002.1173161
VAST
2012
Incorporating GOMS analysis into the design of an EEG data visual analysis tool
10.1109/VAST.2012.6400542
2. 224
M
In this paper, we present a case study where we incorporate GOMS (Goals, Operators, Methods, and Selection rules) [2] task analysis into the design process of a visual analysis tool. We performed GOMS analysis on an Electroencephalography (EEG) analyst's current data analysis strategy to identify important user tasks and unnecessary user actions in his current workflow. We then designed an EEG data visual analysis tool based on the GOMS analysis result. Evaluation results show that the tool we have developed, EEGVis, allows the user to analyze EEG data with reduced subjective cognitive load, faster speed, and increased confidence in the analysis quality. The positive evaluation results suggest that our design process demonstrates an effective application of GOMS analysis to discover opportunities for designing better tools to support the user's visual analysis process.
Hua Guo;Tran, D.;Laidlaw, D.H.
Dept. of Comput. Sci., Brown Univ., Providence, RI, USA|c|;;
VAST
2012
Infographics at the Congressional Budget Office
10.1109/VAST.2012.6400533
2. 242
M
The Congressional Budget Office (CBO) is an agency of the federal government with about 240 employees that provides the U.S. Congress with timely, nonpartisan analysis of important budgetary and economic issues. Recently, CBO began producing static infographics to present its headline stories and to provide information to the Congress in different ways.
Schwabish, J.A.
VAST
2012
Information retrieval failure analysis: Visual analytics as a support for interactive “what-if” investigation
10.1109/VAST.2012.6400551
2. 206
M
This poster provides an analytical model for examining the performance of IR systems, based on the discounted cumulative gain family of metrics, and a visualization for interacting with and exploring the performance of the system under examination. Moreover, we propose a machine learning approach to learn the ranking model of the examined system in order to be able to conduct a “what-if” analysis and visually explore what can happen if you adopt a given solution before having to actually implement it.
Angelini, M.;Ferro, N.;Granato, G.;Santucci, G.;Silvello, G.
Sapienza Univ. of Roma, Rome, Italy|c|;;;;
VAST
2012
Inter-active learning of ad-hoc classifiers for video visual analytics
10.1109/VAST.2012.6400492
2. 32
C
Learning of classifiers to be used as filters within the analytical reasoning process leads to new challenges and aggravates existing ones. Such classifiers are typically trained ad-hoc, with tight time constraints that affect the amount and the quality of annotation data and, thus, also the users' trust in the classifier trained. We approach the challenges of ad-hoc training by inter-active learning, which extends active learning by integrating human experts' background knowledge to a greater extent. In contrast to active learning, not only does inter-active learning include the users' expertise by posing queries of data instances for labeling, but it also supports the users in comprehending the classifier model by visualization. Besides the annotation of manually or automatically selected data instances, users are empowered to directly adjust complex classifier models. Therefore, our model visualization facilitates the detection and correction of inconsistencies between the classifier model trained by examples and the user's mental model of the class definition. Visual feedback of the training process helps the users assess the performance of the classifier and, thus, build up trust in the filter created. We demonstrate the capabilities of inter-active learning in the domain of video visual analytics and compare its performance with the results of random sampling and uncertainty sampling of training sets.
Hoferlin, B.;Netzel, R.;Hoferlin, M.;Weiskopf, D.;Heidemann, G.
;;;;
10.1109/VAST.2010.5652398;10.1109/TVCG.2012.277
VAST
2012
Just-in-time annotation of clusters, outliers, and trends in point-based data visualizations
10.1109/VAST.2012.6400487
7. 82
C
We introduce the concept of just-in-time descriptive analytics as a novel application of computational and statistical techniques performed at interaction-time to help users easily understand the structure of data as seen in visualizations. Fundamental to just-in-time descriptive analytics is (a) automatically identifying visual features, such as clusters, outliers, and trends, that a user might observe in visualizations, (b) determining the semantics of such features by performing statistical analysis as the user is interacting, and (c) enriching visualizations with annotations that not only describe the semantics of visual features but also facilitate interaction to support high-level understanding of data. In this paper, we demonstrate just-in-time descriptive analytics applied to a point-based multi-dimensional visualization technique to identify and describe clusters, outliers, and trends. We argue that it provides a novel user experience of computational techniques working alongside users, allowing them to build qualitative mental models of data faster, and we demonstrate its application on a few use cases. Techniques used to facilitate just-in-time descriptive analytics are described in detail along with their runtime performance characteristics. We believe this is just a starting point and much remains to be researched, as we discuss open issues and opportunities in improving accessibility and collaboration.
Kandogan, E.
10.1109/INFVIS.2003.1249015;10.1109/INFVIS.2005.1532142;10.1109/INFVIS.2004.3;10.1109/TVCG.2011.220;10.1109/INFVIS.2004.15;10.1109/INFVIS.1998.729559;10.1109/VAST.2006.261423;10.1109/TVCG.2009.153;10.1109/VAST.2010.5652885;10.1109/VAST.2009.5332628;10.1109/TVCG.2011.229
Just-in-time descriptive analytics, feature identification and characterization, point-based visualizations