IEEE VIS Publication Dataset

VAST
2012
Visual analytics for the big data era---A comparative review of state-of-the-art commercial systems
10.1109/VAST.2012.6400554
1. 182
C
Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.
Leishi Zhang;Stoffel, A.;Behrisch, M.;Mittelstadt, S.;Schreck, T.;Pompl, R.;Weber, S.;Last, H.;Keim, D.A.
Univ. of Konstanz, Konstanz, Germany|c|;;;;;;;;
10.1109/INFVIS.2004.12;10.1109/INFVIS.2004.64;10.1109/INFVIS.2000.885098
VAST
2012
Visual Analytics Methodology for Eye Movement Studies
10.1109/TVCG.2012.276
2. 2898
J
Eye movement analysis is gaining popularity as a tool for evaluation of visual displays and interfaces. However, the existing methods and tools for analyzing eye movements and scanpaths are limited in terms of the tasks they can support and effectiveness for large data and data with high variation. We have performed an extensive empirical evaluation of a broad range of visual analytics methods used in analysis of geographic movement data. The methods have been tested for the applicability to eye tracking data and the capability to extract useful knowledge about users' viewing behaviors. This allowed us to select the suitable methods and match them to possible analysis tasks they can support. The paper describes how the methods work in application to eye tracking data and provides guidelines for method selection depending on the analysis tasks.
Andrienko, G.;Andrienko, N.;Burch, M.;Weiskopf, D.
;;;
10.1109/VAST.2009.5332593;10.1109/TVCG.2011.193;10.1109/INFVIS.2005.1532150
Visual analytics, eye tracking, movement data, trajectory analysis
VAST
2012
Visual analytics methods for categoric spatio-temporal data
10.1109/VAST.2012.6400553
1. 192
C
We focus on visual analysis of space- and time-referenced categorical data, which describe possible states of spatial (geographical) objects or locations and their changes over time. The analysis of these data is difficult as there are only limited possibilities to analyze the three aspects (location, time and category) simultaneously. We present a new approach which interactively combines (a) visualization of categorical changes over time; (b) various spatial data displays; (c) computational techniques for task-oriented selection of time steps. They provide an expressive visualization with regard to either the overall evolution over time or unusual changes. We apply our approach to two use cases, demonstrating its usefulness for a wide variety of tasks. We analyze data from movement tracking and meteorology. Using our approach, expected events could be detected and new insights were gained.
von Landesberger, T.;Bremm, S.;Andrienko, N.;Andrienko, G.;Tekusova, M.
Tech. Univ. Darmstadt, Darmstadt, Germany|c|;;;;
10.1109/TVCG.2011.174;10.1109/TVCG.2009.117;10.1109/TVCG.2009.181;10.1109/INFVIS.2000.885098;10.1109/TVCG.2010.138;10.1109/VAST.2010.5652530;10.1109/INFVIS.2004.27;10.1109/INFVIS.2005.1532152;10.1109/INFVIS.2001.963281;10.1109/TVCG.2008.165;10.1109/TVCG.2009.153
VAST
2012
Visual Classifier Training for Text Document Retrieval
10.1109/TVCG.2012.277
2. 2848
J
Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.
Heimerl, F.;Koch, S.;Bosch, H.;Ertl, T.
Inst. for Visualization & Interactive Syst., Univ. Stuttgart, Stuttgart, Germany|c|;;;
10.1109/VAST.2011.6102449;10.1109/VAST.2011.6102453;10.1109/VAST.2007.4389006;10.1109/VAST.2012.6400492
Visual analytics, human computer interaction, information retrieval, active learning, classification, user evaluation
VAST
2012
Visual cluster exploration of web clickstream data
10.1109/VAST.2012.6400494
3. 12
C
Web clickstream data are routinely collected to study how users browse the web or use a service. It is clear that the ability to recognize and summarize user behavior patterns from such data is valuable to e-commerce companies. In this paper, we introduce a visual analytics system to explore the various user behavior patterns reflected by distinct clickstream clusters. In a practical analysis scenario, the system first presents an overview of clickstream clusters using a Self-Organizing Map with Markov chain models. Then the analyst can interactively explore the clusters through an intuitive user interface. They can either obtain a summarization of a selected group of data or further refine the clustering result. We evaluated our system using two different datasets from eBay. Analysts who were working on the same data have confirmed the system's effectiveness in extracting user behavior patterns from complex datasets and enhancing their ability to reason.
Jishang Wei;Zeqian Shen;Sundaresan, N.;Kwan-Liu Ma
;;;
10.1109/INFVIS.2005.1532145;10.1109/VAST.2007.4389008;10.1109/VAST.2011.6102462;10.1109/VISUAL.1991.175815
VAST
2012
Visual exploration of local interest points in sets of time series
10.1109/VAST.2012.6400534
2. 240
M
Visual analysis of time series data is an important, yet challenging task with many application examples in fields such as financial or news stream data analysis. Many visual time series analysis approaches consider a global perspective on the time series. Fewer approaches consider visual analysis of local patterns in time series, and often rely on interactive specification of the local area of interest. We present initial results of an approach that is based on automatic detection of local interest points. We follow an overview-first approach to find useful parameters for the interest point detection, and details-on-demand to relate the found patterns. We present initial results and detail possible extensions of the approach.
Schreck, T.;Sharalieva, L.;Wanner, F.;Bernard, J.;Ruppert, T.;von Landesberger, T.;Bustos, B.
Univ. of Konstanz, Konstanz, Germany|c|;;;;;;
VAST
2012
Visual pattern discovery using random projections
10.1109/VAST.2012.6400490
4. 52
C
An essential element of exploratory data analysis is the use of revealing low-dimensional projections of high-dimensional data. Projection Pursuit has been an effective method for finding interesting low-dimensional projections of multidimensional spaces by optimizing a score function called a projection pursuit index. However, the technique is not scalable to high-dimensional spaces. Here, we introduce a novel method for discovering noteworthy views of high-dimensional data spaces by using binning and random projections. We define score functions, akin to projection pursuit indices, that characterize visual patterns of the low-dimensional projections that constitute feature subspaces. We also describe an analytic, multivariate visualization platform based on this algorithm that is scalable to extremely large problems.
Anand, A.;Wilkinson, L.;Tuan Nhon Dang
Dept. of Comput. Sci., Univ. of Illinois at Chicago, Chicago, IL, USA|c|;;
10.1109/VAST.2010.5652433;10.1109/TVCG.2011.178;10.1109/VAST.2011.6102437;10.1109/VAST.2007.4389006;10.1109/VAST.2010.5652392;10.1109/INFVIS.2005.1532142;10.1109/VAST.2009.5332629
Random Projections, High-dimensional Data
VAST
2012
Visualising variations in household energy consumption
10.1109/VAST.2012.6400545
2. 218
M
There is limited understanding of the relationship between neighbourhoods, demographic characteristics and domestic energy consumption habits. We report upon research that combines datasets relating to household energy use with geodemographics to enable better understanding of UK energy user types. A novel interactive interface is planned to evaluate the performance of specifically created energy-based data classifications. The research aims to help local governments and the energy industry in targeting households and populations for new energy saving schemes and in improving efforts to promote sustainable energy consumption. The new classifications may also stimulate consumption awareness amongst domestic users. This poster reports on initial visual findings and describes the research methodology, data sources and future visualisation requirements.
Goodwin, S.;Dykes, J.
giCentre, City Univ. London, London, UK|c|;
VAST
2012
Visualizing flows of images in social media
10.1109/VAST.2012.6400539
2. 230
M
Mass and social media provide flows of images for real world events. It is sometimes difficult to represent realities and impressions of events using only text. However, even a single photo might remind us of complex events. Along with events in the real world, there are representative images, such as product designs and commercial pictures. We can therefore recognize changes in trends of people's ideas, experiences, and interests through observing the flows of such representative images. This paper presents a novel 3D visualization system, called Image Bricks, to explore temporal changes in trends using images associated with different topics. We show case studies using images extracted from our six-year blog archive. We first extract clusters of images as topics related to given keywords. We then visualize them on multiple timelines in a 3D space. Users can visually read stories of topics by exploring the visualized images.
Itoh, M.;Toyoda, M.;Kamijo, T.;Kitsuregawa, M.
;;;
VAST
2012
Watch this: A taxonomy for dynamic data visualization
10.1109/VAST.2012.6400552
1. 202
C
Visualizations embody design choices about data access, data transformation, visual representation, and interaction. To interpret a static visualization, a person must identify the correspondences between the visual representation and the underlying data. These correspondences become moving targets when a visualization is dynamic. Dynamics may be introduced in a visualization at any point in the analysis and visualization process. For example, the data itself may be streaming, shifting subsets may be selected, visual representations may be animated, and interaction may modify presentation. In this paper, we focus on the impact of dynamic data. We present a taxonomy and conceptual framework for understanding how data changes influence the interpretability of visual representations. Visualization techniques are organized into categories at various levels of abstraction. The salient characteristics of each category and task suitability are discussed through examples from the scientific literature and popular practices. Examining the implications of dynamically updating visualizations warrants attention because it directly impacts the interpretability (and thus utility) of visualizations. The taxonomy presented provides a reference point for further exploration of dynamic data visualization techniques.
Cottam, J.A.;Lumsdaine, A.;Weaver, C.
Indiana Univ., Bloomington, IN, USA|c|;;
10.1109/TVCG.2009.123;10.1109/INFVIS.2004.65;10.1109/TVCG.2007.70539;10.1109/TVCG.2008.125;10.1109/INFVIS.2000.885092
Dynamic Data, Interpretation
Vis
2012
A Data-Driven Approach to Hue-Preserving Color-Blending
10.1109/TVCG.2012.186
2. 2129
J
Color mapping and semitransparent layering play an important role in many visualization scenarios, such as information visualization and volume rendering. The combination of color and transparency is still dominated by standard alpha-compositing using the Porter-Duff over operator which can result in false colors with deceiving impact on the visualization. Other more advanced methods have also been proposed, but the problem is still far from being solved. Here we present an alternative to these existing methods specifically devised to avoid false colors and preserve visual depth ordering. Our approach is data driven and follows the recently formulated knowledge-assisted visualization (KAV) paradigm. Preference data, that have been gathered in web-based user surveys, are used to train a support-vector machine model for automatically predicting an optimized hue-preserving blending. We have applied the resulting model to both volume rendering and a specific information visualization technique, illustrative parallel coordinate plots. Comparative renderings show a significant improvement over previous approaches in the sense that false colors are completely removed and important properties such as depth ordering and blending vividness are better preserved. Due to the generality of the defined data-driven blending operator, it can be easily integrated also into other visualization frameworks.
Kuhne, L.;Giesen, J.;Zhiyuan Zhang;Sungsoo Ha;Mueller, K.
;;;;
10.1109/TVCG.2009.150;10.1109/TVCG.2008.118;10.1109/TVCG.2007.70623;10.1109/TVCG.2012.234;10.1109/VISUAL.2003.1250362
Color blending, hue preservation, knowledge-assisted visualization, volume rendering, parallel coordinates
Vis
2012
A Novel Approach to Visualizing Dark Matter Simulations
10.1109/TVCG.2012.187
2. 2087
J
In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
Kaehler, R.;Hahn, O.;Abel, T.
;;
10.1109/TVCG.2010.148;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.2004.85;10.1109/TVCG.2006.154;10.1109/VISUAL.2003.1250404;10.1109/TVCG.2011.216;10.1109/TVCG.2009.142;10.1109/VISUAL.2001.964512;10.1109/VISUAL.2003.1250404
Astrophysics, dark matter, n-body simulations, tetrahedral grids
Vis
2012
A Perceptual-Statistics Shading Model
10.1109/TVCG.2012.188
2. 2274
J
The process of surface perception is complex and based on several influencing factors, e.g., shading, silhouettes, occluding contours, and top down cognition. The accuracy of surface perception can be measured and the influencing factors can be modified in order to decrease the error in perception. This paper presents a novel concept of how a perceptual evaluation of a visualization technique can contribute to its redesign with the aim of improving the match between the distal and the proximal stimulus. During analysis of data from previous perceptual studies, we observed that the slant of 3D surfaces visualized on 2D screens is systematically underestimated. The visible trends in the error allowed us to create a statistical model of the perceived surface slant. Based on this statistical model we obtained from user experiments, we derived a new shading model that uses adjusted surface normals and aims to reduce the error in slant perception. The result is a shape-enhancement of visualization which is driven by an experimentally-founded statistical model. To assess the efficiency of the statistical shading model, we repeated the evaluation experiment and confirmed that the error in perception was decreased. Results of both user experiments are publicly-available datasets.
Solteszova, V.;Turkay, C.;Price, M.C.;Viola, I.
Dept. of Inf., Univ. of Bergen, Bergen, Norway|c|;;;
10.1109/TVCG.2011.161
Shading, perception, evaluation, surface slant, statistical analysis
Vis
2012
A Visual Analysis Concept for the Validation of Geoscientific Simulation Models
10.1109/TVCG.2012.190
2. 2225
J
Geoscientific modeling and simulation helps to improve our understanding of the complex Earth system. During the modeling process, validation of the geoscientific model is an essential step. In validation, it is determined whether the model output shows sufficient agreement with observation data. Measures for this agreement are called goodness of fit. In the geosciences, analyzing the goodness of fit is challenging due to its manifold dependencies: 1) The goodness of fit depends on the model parameterization, whose precise values are not known. 2) The goodness of fit varies in space and time due to the spatio-temporal dimension of geoscientific models. 3) The significance of the goodness of fit is affected by resolution and preciseness of available observational data. 4) The correlation between goodness of fit and underlying modeled and observed values is ambiguous. In this paper, we introduce a visual analysis concept that targets these challenges in the validation of geoscientific models - specifically focusing on applications where observation data is sparse, unevenly distributed in space and time, and imprecise, which hinders a rigorous analytical approach. Our concept, developed in close cooperation with Earth system modelers, addresses the four challenges by four tailored visualization components. The tight linking of these components supports a twofold interactive drill-down in model parameter space and in the set of data samples, which facilitates the exploration of the numerous dependencies of the goodness of fit. We exemplify our visualization concept for geoscientific modeling of glacial isostatic adjustments in the last 100,000 years, validated against sea level indicators - a prominent example for sparse and imprecise observation data. An initial use case and feedback from Earth system modelers indicate that our visualization concept is a valuable complement to the range of validation methods.
Unger, A.;Schulte, S.;Klemann, V.;Dransch, D.
GFZ German Research Center for Geosci., Potsdam, Germany|c|;;;
10.1109/TVCG.2010.192;10.1109/VAST.2010.5652895;10.1109/TVCG.2011.248;10.1109/TVCG.2008.145;10.1109/TVCG.2011.225;10.1109/TVCG.2010.223;10.1109/TVCG.2010.171;10.1109/TVCG.2010.190;10.1109/VISUAL.1993.398859;10.1109/TVCG.2010.181;10.1109/TVCG.2008.139
Earth science visualization, model validation, coordinated multiple views, spatio-temporal visualization, sea level indicators
Vis
2012
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data
10.1109/TVCG.2012.194
2. 2304
J
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
Fout, N.;Kwan-Liu Ma
UC Davis, Davis, CA, USA|c|;
10.1109/VISUAL.1996.568138;10.1109/TVCG.2006.143;10.1109/VISUAL.1994.346332
Volume compression, lossless compression, floating-point compression
Vis
2012
Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains
10.1109/TVCG.2012.198
2. 2148
J
Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular Finite-Time Lyapunov Exponents and discuss the discrepancies.
Reich, W.;Scheuermann, G.
Univ. of Leipzig, Leipzig, Germany|c|;
10.1109/VISUAL.1999.809896
Vector field topology, flow visualization, feature extraction, uncertainty
Vis
2012
Augmented Topological Descriptors of Pore Networks for Material Science
10.1109/TVCG.2012.200
2. 2050
J
One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity and estimates of permeability, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction, and makes novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.
Ushizima, D.;Morozov, D.;Weber, G.H.;Bianchi, A.G.C.;Sethian, J.A.;Bethel, E.W.
Comput. Res. Div., Lawrence Berkeley Nat. Lab., Berkeley, CA, USA|c|;;;;;
10.1109/TVCG.2010.218;10.1109/TVCG.2007.70603;10.1109/VISUAL.2005.1532795
Reeb graph, persistent homology, topological data analysis, geometric algorithms, segmentation, microscopy
Vis
2012
Automatic Detection and Visualization of Qualitative Hemodynamic Characteristics in Cerebral Aneurysms
10.1109/TVCG.2012.202
2. 2187
J
Cerebral aneurysms are pathological vessel dilatations that bear a high risk of rupture. For the understanding and evaluation of the risk of rupture, the analysis of hemodynamic information plays an important role. Besides quantitative hemodynamic information, also qualitative flow characteristics, e.g., the inflow jet and impingement zone are correlated with the risk of rupture. However, the assessment of these two characteristics is currently based on an interactive visual investigation of the flow field, obtained by computational fluid dynamics (CFD) or blood flow measurements. We present an automatic and robust detection as well as an expressive visualization of these characteristics. The detection can be used to support a comparison, e.g., of simulation results reflecting different treatment options. Our approach utilizes local streamline properties to formalize the inflow jet and impingement zone. We extract a characteristic seeding curve on the ostium, on which an inflow jet boundary contour is constructed. Based on this boundary contour we identify the impingement zone. Furthermore, we present several visualization techniques to depict both characteristics expressively. Thereby, we consider accuracy and robustness of the extracted characteristics, minimal visual clutter and occlusions. An evaluation with six domain experts confirms that our approach detects both hemodynamic characteristics reasonably.
Gasteiger, R.;Lehmann, D.J.;van Pelt, R.;Janiga, G.;Beuing, O.;Vilanova, A.;Theisel, H.;Preim, B.
Dept. of Simulation & Graphics, Univ. of Magdeburg, Magdeburg, Germany|c|;;;;;;;
10.1109/TVCG.2011.215;10.1109/TVCG.2011.159;10.1109/TVCG.2011.243;10.1109/TVCG.2009.138;10.1109/TVCG.2010.153;10.1109/TVCG.2010.173
Cerebral aneurysm, Hemodynamic, Inflow jet, Impingement zone, Visualization, Glyph
Vis
2012
Automatic Tuning of Spatially Varying Transfer Functions for Blood Vessel Visualization
10.1109/TVCG.2012.203
2. 2354
J
Computed Tomography Angiography (CTA) is commonly used in clinical routine for diagnosing vascular diseases. The procedure involves the injection of a contrast agent into the blood stream to increase the contrast between the blood vessels and the surrounding tissue in the image data. CTA is often visualized with Direct Volume Rendering (DVR) where the enhanced image contrast is important for the construction of Transfer Functions (TFs). For increased efficiency, clinical routine heavily relies on preset TFs to simplify the creation of such visualizations for a physician. In practice, however, TF presets often do not yield optimal images due to variations in mixture concentration of contrast agent in the blood stream. In this paper we propose an automatic, optimization-based method that shifts TF presets to account for general deviations and local variations of the intensity of contrast enhanced blood vessels. Some of the advantages of this method are the following. It computationally automates large parts of a process that is currently performed manually. It performs the TF shift locally and can thus optimize larger portions of the image than is possible with manual interaction. The method is based on a well known vesselness descriptor in the definition of the optimization criterion. The performance of the method is illustrated by clinically relevant CT angiography datasets displaying both improved structural overviews of vessel trees and improved adaption to local variations of contrast concentration.
Lathen, G.;Lindholm, S.;Lenz, R.;Persson, A.;Borga, M.
Center for Med. Image Sci. & Visualization (CMIV), Linkoping Univ., Linkoping, Sweden|c|;;;;
10.1109/VISUAL.2003.1250414;10.1109/TVCG.2009.120;10.1109/VISUAL.2001.964516;10.1109/VISUAL.1996.568113;10.1109/TVCG.2008.162;10.1109/TVCG.2010.195;10.1109/TVCG.2008.123
Direct volume rendering, transfer functions, vessel visualization
Vis
2012
Coherency-Based Curve Compression for High-Order finite Element Model Visualization
10.1109/TVCG.2012.206
2. 2324
J
Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
Bock, A.;Sunden, E.;Bingchen Liu;Wunsche, B.;Ropinski, T.
Sci. Visualization Group, Linkoping Univ., Linkoping, Sweden|c|;;;;
10.1109/VISUAL.1998.745310;10.1109/VISUAL.2004.91;10.1109/TVCG.2011.206;10.1109/TVCG.2006.110
finite element visualization, GPU-based ray-casting