IEEE VIS Publication Dataset

VAST
2014
Integrating Predictive Analytics and Social Media
10.1109/VAST.2014.7042495
1. 202
C
A key analytical task across many domains is model building and exploration for predictive analysis. Data is collected, parsed and analyzed for relationships, and features are selected and mapped to estimate the response of a system under exploration. As social media data has grown more abundant, data can be captured that may represent behavioral patterns in society. In turn, this unstructured social media data can be parsed and integrated as a key factor for predictive intelligence. In this paper, we present a framework for the development of predictive models utilizing social media data. We combine feature selection mechanisms, similarity comparisons and model cross-validation through a variety of interactive visualizations to support analysts in model building and prediction. In order to explore how predictions might be performed in such a framework, we present results from a user study focusing on social media data as a predictor for movie box-office success.
Yafeng Lu;Kruger, R.;Thom, D.;Feng Wang;Koch, S.;Ertl, T.;Maciejewski, R.
Arizona State Univ., Tempe, AZ, USA|c|;;;;;;
10.1109/VAST.2012.6400557;10.1109/TVCG.2011.185;10.1109/VAST.2012.6400486;10.1109/TVCG.2013.162;10.1109/TVCG.2013.221;10.1109/VAST.2011.6102456;10.1109/TVCG.2012.291;10.1109/TVCG.2013.125;10.1109/INFVIS.2004.10;10.1109/VAST.2011.6102448;10.1109/VAST.2010.5652443;10.1109/INFVIS.2004.3
Social Media, Predictive Analytics, Feature Selection
VAST
2014
Interactive Visual Analysis of Image-Centric Cohort Study Data
10.1109/TVCG.2014.2346591
1. 1682
J
Epidemiological population studies gather information about a set of subjects (a cohort) to characterize disease-specific risk factors. Cohort studies comprise heterogeneous variables describing the medical condition as well as demographic and lifestyle factors and, more recently, medical image data. We propose an Interactive Visual Analysis (IVA) approach that enables epidemiologists to rapidly investigate the entire data pool for hypothesis validation and generation. We incorporate image data, which involves shape-based object detection and the derivation of attributes describing the object shape. The concurrent investigation of image-based and non-image data is realized in a web-based multiple coordinated view system, comprising standard views from information visualization and epidemiological data representations such as pivot tables. The views are equipped with brushing facilities and augmented by 3D shape renderings of the segmented objects, e.g., each bar in a histogram is overlaid with a mean shape of the associated subgroup of the cohort. We integrate an overview visualization, clustering of variables and object shape for data-driven subgroup definition, and statistical key figures for measuring the association between variables. We demonstrate the IVA approach by validating and generating hypotheses related to lower back pain as part of a qualitative evaluation.
Klemm, P.;Oeltze-Jafra, S.;Lawonn, K.;Hegenscheid, K.;Volzke, H.;Preim, B.
Otto-von-Guericke Univ. Magdeburg, Magdeburg, Germany|c|;;;;;
10.1109/TVCG.2013.160;10.1109/TVCG.2011.185;10.1109/VISUAL.2000.885739;10.1109/TVCG.2011.217;10.1109/TVCG.2007.70569
Interactive Visual Analysis, Epidemiology, Spine
VAST
2014
Knowledge Generation Model for Visual Analytics
10.1109/TVCG.2014.2346481
1. 1613
J
Visual analytics enables us to analyze huge information spaces in order to support complex decision making and data exploration. Humans play a central role in generating knowledge from the snippets of evidence emerging from visual data analysis. Although prior research provides frameworks that generalize this process, their scope is often narrow, so they do not encompass different perspectives at different levels. This paper proposes a knowledge generation model for visual analytics that ties together these diverse frameworks, yet retains previously developed models (e.g., the KDD process) to describe individual segments of the overall visual analytic process. To test its utility, a real-world visual analytics system is compared against the model, demonstrating that the knowledge generation process model provides a useful guideline when developing and evaluating such systems. The model is used to effectively compare different data analysis systems. Furthermore, the model provides a common language and description of visual analytic processes, which can be used for communication between researchers. Finally, the model highlights areas of research that future researchers can pursue.
Sacha, D.;Stoffel, A.;Stoffel, F.;Bum Chul Kwon;Ellis, G.;Keim, D.A.
Data Anal. & Visualization Group, Univ. of Konstanz, Konstanz, Germany|c|;;;;;
10.1109/VISUAL.2005.1532781;10.1109/TVCG.2013.124;10.1109/VAST.2009.5333023;10.1109/TVCG.2011.229;10.1109/TVCG.2008.109;10.1109/VAST.2008.4677361;10.1109/VAST.2008.4677365;10.1109/VAST.2010.5652879;10.1109/TVCG.2012.273;10.1109/VAST.2008.4677358;10.1109/TVCG.2008.121;10.1109/VAST.2007.4389006;10.1109/VAST.2011.6102435;10.1109/TVCG.2013.120
Visual Analytics, Knowledge Generation, Reasoning, Visualization Taxonomies and Models, Interaction
VAST
2014
LoyalTracker: Visualizing Loyalty Dynamics in Search Engines
10.1109/TVCG.2014.2346912
1. 1742
J
The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, analyzing this behavior and gleaning insights from such complex, large-scale data also poses a great challenge. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys an effective visual summary of the loyalty dynamics of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts were conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
Conglei Shi;Yingcai Wu;Shixia Liu;Hong Zhou;Huamin Qu
Hong Kong Univ. of Sci. & Technol., Hong Kong, China|c|;;;;
10.1109/VAST.2010.5652931;10.1109/TVCG.2009.171;10.1109/VAST.2007.4389008;10.1109/TVCG.2012.253;10.1109/INFVIS.1996.559227;10.1109/TVCG.2012.212;10.1109/TVCG.2010.129;10.1109/TVCG.2012.225;10.1109/TVCG.2011.239;10.1109/VAST.2012.6400494;10.1109/INFVIS.2005.1532150;10.1109/TVCG.2008.166
Time-series visualization, stacked graphs, log data visualization, text visualization
VAST
2014
Multi-Model Semantic Interaction for Text Analytics
10.1109/VAST.2014.7042492
1. 172
C
Semantic interaction offers an intuitive communication mechanism between human users and complex statistical models. By shielding users from manipulating model parameters, it lets them focus instead on directly manipulating the spatialization, thus remaining in their cognitive zone. However, this technique is not inherently scalable past hundreds of text documents. To remedy this, we present the concept of multi-model semantic interaction, where semantic interactions can be used to steer multiple models at multiple levels of data scale, enabling users to tackle larger data problems. We also present an updated visualization pipeline model for generalized multi-model semantic interaction. To demonstrate multi-model semantic interaction, we introduce StarSPIRE, a visual text analytics prototype that transforms user interactions on documents into both small-scale display layout updates and large-scale relevancy-based document selection.
Bradel, L.;North, C.;House, L.;Leman, S.
;;;
10.1109/VAST.2011.6102449;10.1109/TVCG.2013.188;10.1109/VAST.2012.6400559;10.1109/VAST.2012.6400486;10.1109/INFVIS.1995.528686;10.1109/VAST.2007.4389006
Visual analytics, Semantic Interaction, Sensemaking, Text Analytics
VAST
2014
Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations
10.1109/TVCG.2014.2346578
1. 1652
J
An increasing number of interactive visualization tools stress the integration with computational software like MATLAB and R to access a variety of proven algorithms. In many cases, however, the algorithms are used as black boxes that run to completion in isolation, which contradicts the needs of interactive data exploration. This paper structures, formalizes, and discusses possibilities to enable user involvement in ongoing computations. Based on a structured characterization of needs regarding intermediate feedback and control, the main contribution is a formalization and comparison of strategies for achieving user involvement for algorithms with different characteristics. In the context of integration, we describe considerations for implementing these strategies either as part of the visualization tool or as part of the algorithm, and we identify requirements and guidelines for the design of algorithmic APIs. To assess the practical applicability, we provide a survey of frequently used algorithm implementations within R regarding the fulfillment of these guidelines. While echoing previous calls for analysis modules which support data exploration more directly, we conclude that a range of pragmatic options for enabling user involvement in ongoing computations exists on both the visualization and algorithm side and should be used.
Muhlbacher, T.;Piringer, H.;Gratzl, S.;Sedlmair, M.;Streit, M.
VRVis Res. Center, Vienna, Austria|c|;;;;
10.1109/VAST.2012.6400486;10.1109/VAST.2007.4388999;10.1109/VAST.2011.6102449;10.1109/TVCG.2007.70515;10.1109/TVCG.2009.151;10.1109/TVCG.2014.2346321;10.1109/VAST.2008.4677350;10.1109/TVCG.2006.171;10.1109/TVCG.2013.212;10.1109/TVCG.2013.125;10.1109/INFVIS.2002.1173145;10.1109/TVCG.2009.110;10.1109/INFVIS.2004.60;10.1109/VAST.2011.6102453;10.1109/TVCG.2012.195;10.1109/INFVIS.2003.1249014;10.1109/TVCG.2011.229
Visual analytics infrastructures, integration, interactive algorithms, user involvement, problem subdivision
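Illustrative note (not from the paper above): the abstract discusses strategies for giving users intermediate feedback and control over running algorithms and guidelines for algorithmic APIs. The following Python sketch shows one generic pattern of this kind under assumed names, a hypothetical iterative k-means routine exposing a per-iteration callback through which a visualization tool could display intermediate centroids and cancel the run early.

    import numpy as np

    def kmeans_with_feedback(points, k, iterations=100, on_iteration=None, seed=0):
        """Toy k-means that reports intermediate results and allows cancellation."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for step in range(iterations):
            # Assign each point to its nearest center.
            dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = np.argmin(dists, axis=1)
            # Recompute centers, keeping the old one if a cluster runs empty.
            centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
            # Intermediate feedback and control: the caller may stop the run.
            if on_iteration is not None and not on_iteration(step, centers.copy(), labels.copy()):
                break
        return centers, labels

    # A visualization tool would redraw after every callback; here the caller
    # simply stops once ten intermediate results have been inspected.
    data = np.random.default_rng(1).normal(size=(500, 2))
    centers, labels = kmeans_with_feedback(data, k=3,
                                           on_iteration=lambda step, c, l: step < 10)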
VAST
2014
OpinionFlow: Visual Analysis of Opinion Diffusion on Social Media
10.1109/TVCG.2014.2346920
1. 1772
J
Analyzing and exploring the diffusion of public opinions on social media is important for many applications, such as government and business intelligence. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to the effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey the diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.
Yingcai Wu;Shixia Liu;Kai Yan;Mengchen Liu;Fangzhao Wu
Microsoft Res., Redmond, WA, USA|c|;;;;
10.1109/TVCG.2011.239;10.1109/TVCG.2013.162;10.1109/TVCG.2013.221;10.1109/TVCG.2014.2346433;10.1109/INFVIS.2005.1532152;10.1109/TVCG.2012.291;10.1109/VAST.2006.261431;10.1109/TVCG.2010.129;10.1109/TVCG.2013.196;10.1109/TVCG.2014.2346919;10.1109/TVCG.2010.183;10.1109/VAST.2009.5333919
Opinion visualization, opinion diffusion, opinion flow, influence estimation, kernel density estimation, level-of-detail
VAST
2014
PEARL: An Interactive Visual Analytic Tool for Understanding Personal Emotion Style Derived from Social Media
10.1109/VAST.2014.7042496
2. 212
C
Hundreds of millions of people leave digital footprints on social media (e.g., Twitter and Facebook). Such data not only disclose a person's demographics and opinions, but also reveal their emotional style. Emotional style captures a person's patterns of emotions over time, including their overall emotional volatility and resilience. Understanding one's emotional style can provide great benefits for individuals and businesses alike, including the support of self-reflection and the delivery of individualized customer care. We present PEARL, a timeline-based visual analytic tool that allows users to interactively discover and examine a person's emotional style derived from this person's social media text. Compared to other visual text analytic systems, our work offers three unique contributions. First, it supports multi-dimensional emotion analysis from social media text to automatically detect a person's expressed emotions at different time points and summarize those emotions to reveal the person's emotional style. Second, it effectively visualizes complex, multi-dimensional emotion analysis results to create a visual emotional profile of an individual, which helps users browse and interpret one's emotional style. Third, it supports rich visual interactions that allow users to interactively explore and validate emotion analysis results. We have evaluated our work extensively through a series of studies. The results demonstrate the effectiveness of our tool both in emotion analysis from social media and in support of interactive visualization of the emotion analysis results.
Jian Zhao;Liang Gou;Fei Wang;Zhou, M.X.
;;;
10.1109/TVCG.2011.239;10.1109/VAST.2012.6400485;10.1109/TVCG.2010.129;10.1109/TVCG.2011.185;10.1109/TVCG.2010.183
Personal emotion analytics, affective and mood modeling, social media text, Twitter, information visualization
VAST
2014
Proactive Spatiotemporal Resource Allocation and Predictive Visual Analytics for Community Policing and Law Enforcement
10.1109/TVCG.2014.2346926
1. 1872
J
In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved in such predictive analytics processes include end users' understanding and application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.
Malik, A.;Maciejewski, R.;Towers, S.;McCullough, S.;Ebert, D.S.
Purdue Univ., West Lafayette, IN, USA|c|;;;;
10.1109/TVCG.2013.125;10.1109/TVCG.2013.206;10.1109/VAST.2012.6400491;10.1109/VAST.2007.4389006;10.1109/TVCG.2013.200
Visual Analytics, Natural Scales, Seasonal Trend decomposition based on Loess (STL), Law Enforcement
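Illustrative note (not the authors' implementation): the forecasting technique named in the abstract above is Seasonal-Trend decomposition based on Loess (STL). The Python sketch below only shows, under assumed column names and a simple seasonal-naive extrapolation, how an STL decomposition of a daily incident-count series can yield a rough forecast with statsmodels.

    import pandas as pd
    from statsmodels.tsa.seasonal import STL

    # Hypothetical input: one incident count per day, indexed by date.
    counts = pd.read_csv("incidents.csv", parse_dates=["date"],
                         index_col="date")["count"].asfreq("D", fill_value=0)

    # Decompose into trend, weekly seasonality, and remainder.
    decomposition = STL(counts, period=7, robust=True).fit()

    # Seasonal-naive extrapolation (an assumption for illustration): carry the
    # last trend value forward and repeat the most recent weekly cycle.
    horizon = 14
    last_trend = decomposition.trend.iloc[-1]
    weekly_cycle = decomposition.seasonal.iloc[-7:].to_numpy()
    forecast = [last_trend + weekly_cycle[i % 7] for i in range(horizon)]
    future_dates = pd.date_range(counts.index[-1] + pd.Timedelta(days=1), periods=horizon)
    print(pd.Series(forecast, index=future_dates, name="predicted_count"))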
VAST
2014
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics
10.1109/TVCG.2014.2346574
1. 1662
J
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and to provide interactions that support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
Stolper, C.D.;Perer, A.;Gotz, D.
Sch. of Interactive Comput., Georgia Inst. of Technol., Atlanta, GA, USA|c|;;
10.1109/VAST.2006.261421;10.1109/TVCG.2013.227;10.1109/TVCG.2009.187;10.1109/TVCG.2011.179;10.1109/INFVIS.2005.1532133;10.1109/TVCG.2012.225;10.1109/TVCG.2013.179;10.1109/INFVIS.2000.885097;10.1109/TVCG.2013.200
Progressive visual analytics, information visualization, interactive machine learning, electronic medical records
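Illustrative note (not the Progressive Insights system): the paradigm described above asks analytics to emit meaningful partial results the analyst can act on before the computation finishes. The generic Python sketch below counts adjacent event pairs over a collection of hypothetical event sequences chunk by chunk and yields the running top patterns after each chunk, so a consuming visualization could update continuously and stop the run early.

    from collections import Counter

    def progressive_pair_counts(event_sequences, chunk_size=1000):
        """Yield (sequences processed, current top adjacent event pairs) per chunk."""
        counts = Counter()
        for start in range(0, len(event_sequences), chunk_size):
            for seq in event_sequences[start:start + chunk_size]:
                counts.update(zip(seq, seq[1:]))  # adjacent event pairs in one sequence
            yield min(start + chunk_size, len(event_sequences)), counts.most_common(10)

    # Hypothetical event sequences (e.g., simplified medical event records).
    sequences = [["admit", "lab", "med", "discharge"],
                 ["admit", "med", "med", "discharge"]] * 5000
    for processed, top_pairs in progressive_pair_counts(sequences):
        print(f"after {processed} sequences:", top_pairs[:3])
        if processed >= 5000:  # the analyst decides the partial result already suffices
            break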
VAST
2014
Run Watchers: Automatic Simulation-Based Decision Support in Flood Management
10.1109/TVCG.2014.2346930
1. 1882
J
In this paper, we introduce a simulation-based approach to designing protection plans for flood events. Existing solutions require a lot of computation time for an exhaustive search or demand time-consuming expert supervision and steering. We present a faster alternative based on the automated control of multiple parallel simulation runs. Run Watchers are dedicated system components authorized to monitor simulation runs, terminate them, and start new runs originating from existing ones according to domain-specific rules. This approach allows for a more efficient traversal of the search space and overall performance improvements due to the re-use of simulated states and the early termination of failed runs. In the course of the search, Run Watchers generate large and complex decision trees. We visualize the entire set of decisions made by Run Watchers using interactive, clustered timelines. In addition, we present visualizations to explain the resulting response plans. Run Watchers automatically generate storyboards to convey plan details and to justify the underlying decisions, including those which leave particular buildings unprotected. We evaluate our solution with domain experts.
Konev, A.;Waser, J.;Sadransky, B.;Cornel, D.;Perdigao, R.A.P.;Horvath, Z.;Groller, E.
VRVis Vienna, Vienna, Austria|c|;;;;;;
10.1109/INFVIS.2002.1173149;10.1109/VISUAL.2000.885727;10.1109/TVCG.2010.190;10.1109/TVCG.2011.248;10.1109/TVCG.2010.223;10.1109/TVCG.2008.145
Disaster management, simulation control, decision making, visual evidence, storytelling
VAST
2014
Serendip: Topic Model-Driven Visual Exploration of Text Corpora
10.1109/VAST.2014.7042493
1. 182
C
Exploration and discovery in a large text corpus requires investigation at multiple levels of abstraction, from a zoomed-out view of the entire corpus down to close-ups of individual passages and words. At each of these levels, there is a wealth of information that can inform inquiry - from statistical models, to metadata, to the researcher's own knowledge and expertise. Joining all this information together can be a challenge, and there are issues of scale to be combatted along the way. In this paper, we describe an approach to text analysis that addresses these challenges of scale and multiple information sources, using probabilistic topic models to structure exploration through multiple levels of inquiry in a way that fosters serendipitous discovery. In implementing this approach into a tool called Serendip, we incorporate topic model data and metadata into a highly reorderable matrix to expose corpus level trends; extend encodings of tagged text to illustrate probabilistic information at a passage level; and introduce a technique for visualizing individual word rankings, along with interaction techniques and new statistical methods to create links between different levels and information types. We describe example uses from both the humanities and visualization research that illustrate the benefits of our approach.
Alexander, E.;Kohlmann, J.;Valenza, R.;Witmore, M.;Gleicher, M.
Dept. of Comput. Sci., Univ. of Wisconsin-Madison, Madison, WI, USA|c|;;;;
10.1109/VAST.2011.6102449;10.1109/INFVIS.2000.885098;10.1109/INFVIS.1998.729568;10.1109/TVCG.2011.239;10.1109/TVCG.2011.220;10.1109/VAST.2012.6400486;10.1109/VAST.2011.6102461;10.1109/TVCG.2013.157;10.1109/TVCG.2013.162;10.1109/VAST.2007.4389006;10.1109/VAST.2007.4389004
Text visualization, topic modeling
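Illustrative note (not the Serendip tool): the abstract above builds exploration on probabilistic topic models across corpus, passage, and word levels. The Python sketch below only shows, on a toy corpus, how a topic model fitted with scikit-learn yields the two tables such a tool could visualize: a document-topic matrix for corpus-level trends and per-topic word rankings for the word level.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # Toy corpus standing in for a collection of text passages.
    documents = [
        "whale sea ship captain voyage",
        "ship sea storm sail voyage",
        "love letter marriage family estate",
        "family estate inheritance marriage letter",
    ]
    vectorizer = CountVectorizer()
    term_counts = vectorizer.fit_transform(documents)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topic = lda.fit_transform(term_counts)      # corpus level: documents x topics
    print("document-topic matrix:\n", np.round(doc_topic, 2))

    # Word level: rank the most probable words within each topic.
    vocabulary = vectorizer.get_feature_names_out()
    for t, weights in enumerate(lda.components_):
        top_words = vocabulary[np.argsort(weights)[::-1][:5]]
        print(f"topic {t}:", ", ".join(top_words))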
VAST
2014
Supporting Communication and Coordination in Collaborative Sensemaking
10.1109/TVCG.2014.2346573
1. 1642
J
When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a 'collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.
Mahyar, N.;Tory, M.
Univ. of Victoria, Victoria, BC, Canada|c|;
10.1109/VAST.2009.5333245;10.1109/VAST.2006.261439;10.1109/VAST.2008.4677358;10.1109/TVCG.2013.197;10.1109/VAST.2011.6102438;10.1109/VAST.2009.5333878;10.1109/VAST.2006.261430;10.1109/VAST.2007.4389011;10.1109/VAST.2007.4389006;10.1109/VAST.2011.6102447
Sensemaking, Collaboration, Externalization, Linked common work, Collaborative thinking space
VAST
2014
The Spinel Explorer - Interactive Visual Analysis of Spinel Group Minerals
10.1109/TVCG.2014.2346754
1. 1922
J
Geologists usually deal with rocks that are up to several thousand million years old. They try to reconstruct the tectonic settings where these rocks were formed and the history of events that affected them through geological time. The spinel group minerals provide useful information regarding the geological environment in which the host rocks were formed. They constitute excellent indicators of geological environments (tectonic settings) and are of invaluable help in the search for mineral deposits of economic interest. The current workflow requires the scientists to work with different applications to analyze spinel data. They do use specific diagrams, but these are usually not interactive. The current workflow hinders domain experts from fully exploiting the potential of tediously and expensively collected data. In this paper, we introduce the Spinel Explorer, an interactive visual analysis application for spinel group minerals. The design of the Spinel Explorer and of the newly introduced interactions is a result of a careful study of geologists' tasks. The Spinel Explorer includes most of the diagrams commonly used for analyzing spinel group minerals, including 2D binary plots, ternary plots, and 3D Spinel prism plots. Besides specific plots, conventional information visualization views are also integrated in the Spinel Explorer. All views are interactive and linked. The Spinel Explorer supports conventional statistics commonly used in spinel minerals exploration. The statistics views and different data derivation techniques are fully integrated in the system. Besides introducing the Spinel Explorer as a new interactive exploration system, we also describe the identified analysis tasks and propose a new workflow. We evaluate the Spinel Explorer using real-life data from two locations in Argentina: the Frontal Cordillera in the Central Andes and Patagonia. We describe the geologists' new findings, which would have been much more difficult to achieve using the current workflow alone. Very positive feedback from geologists confirms the usefulness of the Spinel Explorer.
Lujan Ganuza, M.;Ferracutti, G.;Gargiulo, M.F.;Castro, S.;Bjerg, E.;Groller, E.;Matkovic, K.
VyGLab Res. Lab., Univ. Nac. del Sur, Bahia Blanca, Argentina|c|;;;;;;
10.1109/INFVIS.2000.885086;10.1109/TVCG.2009.155;10.1109/VISUAL.1995.485139
Interactive visual analysis, visualization in earth/space/ and environmental sciences, coordinated and multiple views, design studies
VAST
2014
TopicPanorama: A Full Picture of Relevant Topics
10.1109/VAST.2014.7042494
1. 192
C
We present a visual analytics approach to developing a full picture of relevant topics discussed in multiple sources such as news, blogs, or micro-blogs. The full picture consists of a number of common topics among multiple sources as well as distinctive topics. The key idea behind our approach is to jointly match the topics extracted from each source together in order to interactively and effectively analyze common and distinctive topics. We start by modeling each textual corpus as a topic graph. These graphs are then matched together with a consistent graph matching method. Next, we develop an LOD-based visualization for better understanding and analysis of the matched graph. The major feature of this visualization is that it combines a radially stacked tree visualization with a density-based graph visualization to facilitate the examination of the matched topic graph from multiple perspectives. To compensate for the deficiency of the graph matching algorithm and meet different users' needs, we allow users to interactively modify the graph matching result. We have applied our approach to various data including news, tweets, and blog data. Qualitative evaluation and a real-world case study with domain experts demonstrate the promise of our approach, especially in support of analyzing a topic-graph-based full picture at different levels of detail.
Shixia Liu;Xiting Wang;Jianfei Chen;Jun Zhu;Baining Guo
;;;;
10.1109/VAST.2011.6102461;10.1109/TVCG.2011.239;10.1109/TVCG.2009.111;10.1109/TVCG.2010.154;10.1109/VAST.2011.6102439;10.1109/TVCG.2006.193;10.1109/TVCG.2012.285;10.1109/TVCG.2013.221;10.1109/TVCG.2013.162;10.1109/TVCG.2012.279;10.1109/TVCG.2010.183;10.1109/TVCG.2014.2346433;10.1109/TVCG.2007.70521;10.1109/TVCG.2013.233;10.1109/TVCG.2013.212;10.1109/TVCG.2007.70582;10.1109/TVCG.2013.232;10.1109/TVCG.2010.129;10.1109/TVCG.2014.2346919
Topic graph, graph matching, graph visualization, user interactions, level-of-detail
VAST
2014
Towards Interactive, Intelligent, and Integrated Multimedia Analytics
10.1109/VAST.2014.7042476
3. 12
C
The size and importance of visual multimedia collections have grown rapidly over the last few years, creating a need for sophisticated multimedia analytics systems enabling large-scale, interactive, and insightful analysis. These systems need to integrate the human's natural expertise in analyzing multimedia with the machine's ability to process large-scale data. The paper starts with a comprehensive overview of representation, learning, and interaction techniques from both the human's and the machine's point of view. To this end, hundreds of references from the related disciplines (visual analytics, information visualization, computer vision, multimedia information retrieval) have been surveyed. Based on the survey, a novel general multimedia analytics model is synthesized. In the model, the need for semantic navigation of the collection is emphasized and multimedia analytics tasks are placed on the exploration-search axis. The axis is composed of both exploration and search in a certain proportion, which changes as the analyst progresses towards insight. Categorization is proposed as a suitable umbrella task realizing the exploration-search axis in the model. Finally, the pragmatic gap, defined as the difference between the tight machine categorization model and the flexible human categorization model, is identified as a crucial multimedia analytics topic.
Zahalka, J.;Worring, M.
University of Amsterdam|c|;
10.1109/VAST.2006.261425;10.1109/VAST.2007.4389003;10.1109/INFVIS.2005.1532136;10.1109/TVCG.2010.136;10.1109/TVCG.2007.70515;10.1109/TVCG.2007.70541;10.1109/TVCG.2013.168
Multimedia (image/video/music) visualization, machine learning
VAST
2014
Transforming Scagnostics to Reveal Hidden Features
10.1109/TVCG.2014.2346572
1. 1632
J
Scagnostics (Scatterplot Diagnostics) were developed by Wilkinson et al. based on an idea of Paul and John Tukey, in order to discern meaningful patterns in large collections of scatterplots. The Tukeys' original idea was intended to overcome the impediments involved in examining large scatterplot matrices (multiplicity of plots and lack of detail). Wilkinson's implementation enabled for the first time scagnostics computations on many points as well as many plots. Unfortunately, scagnostics are sensitive to scale transformations. We illustrate the extent of this sensitivity and show how it is possible to pair statistical transformations with scagnostics to enable discovery of hidden structures in data that are not discernible in untransformed visualizations.
Tuan Nhon Dang;Wilkinson, L.
Dept. of Comput. Sci., Univ. of Illinois at Chicago, Chicago, IL, USA|c|;
10.1109/TVCG.2006.163;10.1109/INFVIS.2005.1532142;10.1109/TVCG.2013.187;10.1109/TVCG.2011.167;10.1109/VAST.2006.261423;10.1109/TVCG.2010.184;10.1109/VAST.2011.6102437;10.1109/VAST.2007.4389006
Scagnostics, Scatterplot matrix, Transformation, High-Dimensional Visual Analytics
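Illustrative note (not the authors' code, and omitting the hexagonal binning of the full scagnostics pipeline): the abstract above pairs statistical transformations with scagnostics. The Python sketch below computes just one MST-based measure, the skewness of minimum-spanning-tree edge lengths, on a synthetic skewed scatterplot before and after a log transformation of both axes, to show how transforming the data changes what such a measure can reveal.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def to_unit(values):
        """Rescale one axis to [0, 1], as scagnostics operate on the unit square."""
        return (values - values.min()) / (values.max() - values.min())

    def mst_edge_skewness(x, y):
        points = np.column_stack([to_unit(x), to_unit(y)])
        edges = minimum_spanning_tree(squareform(pdist(points))).data
        q10, q50, q90 = np.percentile(edges, [10, 50, 90])
        return (q90 - q50) / (q90 - q10)

    rng = np.random.default_rng(0)
    x = rng.lognormal(size=500)                  # heavily skewed marginal
    y = x * rng.lognormal(sigma=0.3, size=500)   # related, also skewed

    print("raw axes:        ", round(mst_edge_skewness(x, y), 3))
    print("log-transformed: ", round(mst_edge_skewness(np.log(x), np.log(y)), 3))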
VAST
2014
Using Visualizations to Monitor Changes and Harvest Insights from a Global-Scale Logging Infrastructure at Twitter
10.1109/VAST.2014.7042487
1. 122
C
Logging user activities is essential to data analysis for internet products and services. Twitter has built a unified logging infrastructure that captures user activities across all clients it owns, making it one of the largest datasets in the organization. This paper describes challenges and opportunities in applying information visualization to log analysis at this massive scale, and shows how various visualization techniques can be adapted to help data scientists extract insights. In particular, we focus on two scenarios: (1) monitoring and exploring a large collection of log events, and (2) performing visual funnel analysis on log data with tens of thousands of event types. Two interactive visualizations were developed for these purposes: we discuss design choices and the implementation of these systems, along with case studies of how they are being used in day-to-day operations at Twitter.
Wongsuphasawat, K.;Lin, J.
Twitter, Inc.|c|;
10.1109/INFVIS.2000.885091;10.1109/TVCG.2009.117;10.1109/INFVIS.1997.636718;10.1109/VAST.2007.4389008;10.1109/INFVIS.1996.559227;10.1109/TVCG.2012.225;10.1109/TVCG.2007.70529;10.1109/VAST.2012.6400494;10.1109/TVCG.2013.231;10.1109/INFVIS.2004.64;10.1109/VISUAL.1991.175815;10.1109/TVCG.2013.200;10.1109/VAST.2006.261421;10.1109/TVCG.2011.185
Information Visualization, Visual Analytics, Log Analysis, Log Visualization, Session Analysis, Funnel Analysis
VAST
2014
VAET: A Visual Analytics Approach for E-Transactions Time-Series
10.1109/TVCG.2014.2346913
1. 1752
J
Previous studies on E-transaction time-series have mainly focused on finding temporal trends of transaction behavior. Interesting transactions that are time-stamped and situation-relevant may easily be obscured in a large amount of information. This paper proposes a visual analytics system, Visual Analysis of E-transaction Time-Series (VAET), that allows the analysts to interactively explore large transaction datasets for insights about time-varying transactions. With a set of analyst-determined training samples, VAET automatically estimates the saliency of each transaction in a large time-series using a probabilistic decision tree learner. It provides an effective time-of-saliency (TOS) map where the analysts can explore a large number of transactions at different time granularities. Interesting transactions are further encoded with KnotLines, a compact visual representation that captures both the temporal variations and the contextual connection of transactions. The analysts can thus explore, select, and investigate knotlines of interest. A case study and user study with a real E-transactions dataset (26 million records) demonstrate the effectiveness of VAET.
Cong Xie;Wei Chen;Xinxin Huang;Yueqi Hu;Barlowe, S.;Jing Yang
State Key Lab. of CAD&CG, Zhejiang Univ., Hangzhou, China|c|;;;;;
10.1109/TVCG.2009.123;10.1109/VAST.2007.4389009;10.1109/TVCG.2012.212;10.1109/INFVIS.1995.528685;10.1109/VAST.2012.6400494;10.1109/TVCG.2010.162;10.1109/TVCG.2009.180
Time-Series, Visual Analytics, E-transaction
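Illustrative note (not the VAET implementation; feature names and labels are invented for illustration): the abstract above estimates per-transaction saliency with a probabilistic decision tree trained on analyst-determined samples. The Python sketch below shows how a scikit-learn decision tree can turn a few labeled transactions into a saliency score, here the predicted probability of the "salient" class, for every transaction in a series.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical transaction features: [amount, hour_of_day, buyer_rating].
    transactions = np.array([
        [12.0,  9, 4.8], [980.0, 3, 2.1], [45.5, 14, 4.5],
        [870.0, 2, 1.9], [23.0, 11, 4.9], [15.0, 16, 4.7],
    ])
    # Analyst-determined training labels: 1 = salient (worth inspecting), 0 = not.
    labels = np.array([0, 1, 0, 1, 0, 0])

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(transactions, labels)

    # Probability of the "salient" class acts as the saliency score per transaction.
    saliency = tree.predict_proba(transactions)[:, 1]
    for features, score in zip(transactions, saliency):
        print(features, f"saliency={score:.2f}")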
VAST
2014
VarifocalReader -- In-Depth Visual Analysis of Large Text Documents
10.1109/TVCG.2014.2346677
1. 1732
J
Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem has typically been solved by offering overview-detail techniques, which present different views with different levels of abstraction. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.
Koch, S.;John, M.;Worner, M.;Muller, A.;Ertl, T.
Inst. of Visualization & Interactive Syst., Univ. of Stuttgart, Stuttgart, Germany|c|;;;;
10.1109/VAST.2010.5652926;10.1109/TVCG.2008.172;10.1109/VAST.2012.6400485;10.1109/TVCG.2013.188;10.1109/TVCG.2007.70577;10.1109/VAST.2012.6400486;10.1109/TVCG.2012.277;10.1109/TVCG.2009.165;10.1109/TVCG.2013.162;10.1109/INFVIS.1995.528686;10.1109/VAST.2009.5333248;10.1109/TVCG.2012.260;10.1109/VAST.2007.4389006;10.1109/VAST.2009.5333919;10.1109/VAST.2007.4389004
visual analytics, document analysis, literary analysis, natural language processing, text mining, machine learning, distant reading