IEEE VIS Publication Dataset

Vis
2009
Parameter Sensitivity Visualization for DTI Fiber Tracking
10.1109/TVCG.2009.170
1. 1448
J
Fiber tracking of diffusion tensor imaging (DTI) data offers a unique insight into the three-dimensional organisation of white matter structures in the living brain. However, fiber tracking algorithms require a number of user-defined input parameters that strongly affect the output results. Usually the fiber tracking parameters are set once and are then re-used for several patient datasets. However, the stability of the chosen parameters is not evaluated, and a small change in the parameter values can give very different results. The user remains completely unaware of such effects. Furthermore, it is difficult to reproduce output results between different users. We propose a visualization tool that allows the user to visually explore how small variations in parameter values affect the output of fiber tracking. With this knowledge the user can not only assess the stability of commonly used parameter values but also evaluate in a more reliable way the output results between different patients. Existing tools do not provide such information. A small user evaluation of our tool has been done to show the potential of the technique.
Brecheisen, R.;Vilanova, A.;Platel, B.;ter Haar Romeny, B.
Tech. Univ. Eindhoven, Eindhoven, Netherlands|c|;;;
10.1109/TVCG.2008.147;10.1109/VISUAL.2005.1532853;10.1109/TVCG.2007.70518;10.1109/VISUAL.2005.1532778;10.1109/VISUAL.2005.1532779;10.1109/VISUAL.1996.568116;10.1109/VISUAL.1999.809894;10.1109/VISUAL.2004.30;10.1109/VISUAL.2001.964552
Fiber Tracking, Parameter Sensitivity, Stopping Criteria, Diffusion Tensor Imaging, Uncertainty Visualization
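The sensitivity idea in the abstract above lends itself to a compact illustration. The sketch below is not from the paper's tool; it assumes a synthetic FA volume and a deliberately crude output measure (the fraction of voxels passing a stopping threshold), and merely shows how sweeping one tracking parameter and differentiating the output exposes unstable parameter ranges.

# Illustrative sketch (not the paper's tool): sweep a fiber-tracking
# stopping threshold and measure how strongly the output reacts.
# The "output" here is simply the fraction of voxels whose fractional
# anisotropy (FA) exceeds the threshold in a synthetic volume; a real
# tracker would count fibers or tract volume instead.
import numpy as np

rng = np.random.default_rng(0)
fa = rng.beta(2.0, 5.0, size=(32, 32, 32))   # synthetic FA volume in [0, 1]

thresholds = np.linspace(0.05, 0.5, 46)
output = np.array([(fa > t).mean() for t in thresholds])

# Central-difference sensitivity: large |d(output)/d(threshold)| flags
# parameter values where small changes give very different results.
sensitivity = np.gradient(output, thresholds)
for t, s in zip(thresholds[::9], sensitivity[::9]):
    print(f"threshold={t:.2f}  d(output)/d(threshold)={s:+.3f}")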
Vis
2009
Perception-Based Transparency Optimization for Direct Volume Rendering
10.1109/TVCG.2009.172
1. 1290
J
The semi-transparent nature of direct volume rendered images is useful to depict layered structures in a volume. However, obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment of opacity and other rendering parameters. Furthermore, the visual quality of layers also depends on various perceptual factors. In this paper, we propose an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the effectiveness and robustness of our method.
Ming-Yuen Chan;Yingcai Wu;Wai-Ho Mak;Wei Chen;Huamin Qu
Dept. of Comput. Sci. & Eng., Hong Kong Univ. of Sci. & Technol., Hong Kong, China|c|;;;;
10.1109/VISUAL.1998.745319;10.1109/VISUAL.2000.885694;10.1109/TVCG.2008.118;10.1109/VISUAL.2003.1250414;10.1109/TVCG.2007.70591;10.1109/VISUAL.2004.62;10.1109/TVCG.2008.162;10.1109/TVCG.2006.183;10.1109/TVCG.2008.159;10.1109/TVCG.2006.148
Direct volume rendering, image enhancement, layer perception
Vis
2009
Predictor-Corrector Schemes for Visualization of Smoothed Particle Hydrodynamics Data
10.1109/TVCG.2009.173
1. 1250
J
In this paper we present a method for vortex core line extraction which operates directly on the smoothed particle hydrodynamics (SPH) representation and, by this, generates smoother and more (spatially and temporally) coherent results in an efficient way. The underlying predictor-corrector scheme is general enough to be applied to other line-type features and it is extendable to the extraction of surfaces such as isosurfaces or Lagrangian coherent structures. The proposed method exploits temporal coherence to speed up computation for subsequent time steps. We show how the predictor-corrector formulation can be specialized for several variants of vortex core line definitions including two recent unsteady extensions, and we contribute a theoretical and practical comparison of these. In particular, we reveal a close relation between the unsteady extensions of Fuchs et al. and Weinkauf et al. and we give a proof of the Galilean invariance of the latter. When visualizing SPH data, it is possible to use the same interpolation method for visualization that was used for the simulation. This is different from the case of finite volume simulation results, where it is not possible to recover from the results the spatial interpolation that was used during the simulation. Such data are typically interpolated using the basic trilinear interpolant, and if smoothness is required, some artificial processing is added. In SPH data, however, the smoothing kernels are specified by the simulation, and they provide an exact and smooth interpolation of data or gradients at arbitrary points in the domain.
Schindler, B.;Fuchs, R.;Biddiscombe, J.;Peikert, R.
Inst. of Visual Comput., ETH Zurich, Zurich, Switzerland|c|;;;
10.1109/VISUAL.1999.809896;10.1109/TVCG.2007.70595;10.1109/VISUAL.2005.1532851;10.1109/VISUAL.1998.745296;10.1109/VISUAL.2004.59;10.1109/TVCG.2007.70545;10.1109/TVCG.2008.164
Smoothed particle hydrodynamics, flow visualization, unsteady flow, feature extraction, vortex core lines
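The abstract's closing point, that SPH kernels give exact, smooth interpolation at arbitrary points, can be made concrete. A minimal sketch, assuming a Gaussian kernel and random toy particles rather than real simulation output:

# Minimal sketch of the SPH interpolation idea the abstract relies on:
# field values at arbitrary points come directly from the simulation's
# smoothing kernel, f(x) ~ sum_j (m_j / rho_j) f_j W(|x - x_j|, h).
# A Gaussian kernel stands in for the simulation's actual kernel.
import numpy as np

def sph_interpolate(x, pos, mass, rho, f, h):
    """Evaluate an SPH field at query points x (k,3) from particles (n,3)."""
    r = np.linalg.norm(x[:, None, :] - pos[None, :, :], axis=-1)   # (k, n)
    w = np.exp(-(r / h) ** 2) / (np.pi ** 1.5 * h ** 3)            # kernel W
    return (w * (mass / rho) * f).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.uniform(0, 1, (500, 3))
mass = np.full(500, 1.0 / 500)
rho = np.full(500, 1.0)
f = np.sin(2 * np.pi * pos[:, 0])            # particle-carried quantity
query = np.array([[0.5, 0.5, 0.5], [0.25, 0.5, 0.5]])
print(sph_interpolate(query, pos, mass, rho, f, h=0.1))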
Vis
2009
Quantitative Texton Sequences for Legible Bivariate Maps
10.1109/TVCG.2009.175
1. 1530
J
Representing bivariate scalar maps is a common but difficult visualization problem. One solution has been to use two-dimensional color schemes, but the results are often hard to interpret and are read inaccurately. An alternative is to use a color sequence for one variable and a texture sequence for another. This has been used, for example, in geology, but it has been much less studied than the two-dimensional color scheme, although theory suggests that it should lead to easier perceptual separation of information relating to the two variables. To make a texture sequence more clearly readable, the concept of the quantitative texton sequence (QTonS) is introduced. A QTonS is defined as a sequence of small graphical elements, called textons, where each texton represents a different numerical value and sets of textons can be densely displayed to produce visually differentiable textures. An experiment was carried out to compare two bivariate color coding schemes with two schemes using a QTonS for one bivariate map component and a color sequence for the other. Two different key designs were investigated (a key being a sequence of colors or textures used in obtaining quantitative values from a map). The first design used two separate keys, one for each dimension, in order to measure how accurately subjects could independently estimate the underlying scalar variables. The second key design was two-dimensional and intended to measure the overall integral accuracy that could be obtained. The results show that the accuracy is substantially higher for the QTonS/color sequence schemes. A hypothesis that texture/color sequence combinations are better for independent judgments of mapped quantities was supported. A second experiment probed the limits of spatial resolution for QTonSs.
Ware, C.
Center for Coastal & Ocean Mapping, Univ. of New Hampshire, Durham, NH, USA|c|
10.1109/VISUAL.1995.480803;10.1109/VISUAL.1998.745292;10.1109/TVCG.2007.70623;10.1109/VISUAL.2000.885679;10.1109/VISUAL.1997.663874;10.1109/VISUAL.1990.146383
Bivariate maps, texture, texton, legibility, quantitative texton sequence, QTonS
Vis
2009
Sampling and Visualizing Creases with Scale-Space Particles
10.1109/TVCG.2009.177
1. 1424
J
Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI.
Kindlmann, G.;Estepar, R.S.J.;Smith, S.;Westin, C.-F.
Dept. of Comput. Sci., Univ. of Chicago, Chicago, IL, USA|c|;;;
10.1109/TVCG.2008.154;10.1109/VISUAL.1993.398880;10.1109/TVCG.2007.70604;10.1109/TVCG.2008.148;10.1109/VISUAL.1997.663930;10.1109/TVCG.2008.167;10.1109/VISUAL.1999.809896
Particle Systems, Crease Features, Ridge and Valley Detection, Lung CT, Diffusion Tensor MRI
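The scale-space stack described above is straightforward to mock up. The following sketch is illustrative only: it tracks a scale-normalized Laplacian (a standard blob detector, not the paper's crease measure) across a few precomputed blurring levels of a toy image, showing how a feature selects its scale of maximal strength; the paper instead uses spline interpolation across optimally chosen scales and a particle system in continuous scale-space.

# Sketch of the scale-space idea: an n-D image becomes an (n+1)-D stack
# of progressively blurred copies, and features are sought at the scale
# of strongest response.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0                       # a small bright structure
img += 0.05 * rng.standard_normal(img.shape)

sigmas = [1.0, 2.0, 4.0, 8.0]
# Scale-normalized Laplacian response (sigma^2 * -LoG), one per scale.
stack = np.stack([sigma**2 * -gaussian_laplace(img, sigma) for sigma in sigmas])

best = stack[:, 32, 32]                       # response at the blob center
print("per-scale response:", np.round(best, 3))
print("strongest scale: sigma =", sigmas[int(best.argmax())])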
Vis
2009
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
10.1109/TVCG.2009.178
1. 1514
J
Recent advances in scanning technology provide high resolution EM (electron microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data are usually difficult and very time-consuming tasks. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes.
Jeong, W.-K.;Beyer, J.;Hadwiger, M.;Vazquez, A.;Pfister, H.;Whitaker, R.T.
Sch. of Eng. & Appl. Sci., Harvard Univ., Cambridge, MA, USA|c|;;;;;
10.1109/TVCG.2008.169;10.1109/VISUAL.2003.1250357;10.1109/TVCG.2008.179;10.1109/VISUAL.1999.809912;10.1109/TVCG.2007.70532
Segmentation, neuroscience, connectome, volume rendering, implicit surface rendering, graphics hardware
Vis
2009
Stress Tensor Field Visualization for Implant Planning in Orthopedics
10.1109/TVCG.2009.184
1. 1406
J
We demonstrate the application of advanced 3D visualization techniques to determine the optimal implant design and position in hip joint replacement planning. Our methods take as input the physiological stress distribution inside a patient's bone under load and the stress distribution inside this bone under the same load after a simulated replacement surgery. The visualization aims at showing principal stress directions and magnitudes, as well as differences in both distributions. By visualizing changes of normal and shear stresses with respect to the principal stress directions of the physiological state, a comparative analysis of the physiological stress distribution and the stress distribution with implant is provided, and the implant parameters that most closely replicate the physiological stress state in order to avoid stress shielding can be determined. Our method combines volume rendering for the visualization of stress magnitudes with the tracing of short line segments for the visualization of stress directions. To improve depth perception, transparent, shaded, and antialiased lines are rendered in correct visibility order, and they are attenuated by the volume rendering. We use a focus+context approach to visually guide the user to relevant regions in the data, and to support a detailed stress analysis in these regions while preserving spatial context information. Since all of our techniques have been realized on the GPU, they can immediately react to changes in the simulated stress tensor field and thus provide an effective means for optimal implant selection and positioning in a computational steering environment.
Dick, C.;Georgii, J.;Burgkart, R.;Westermann, R.
Comput. Graphics & Visualization Group, Tech. Univ. Munchen, Munich, Germany|c|;;;
10.1109/VISUAL.2005.1532780;10.1109/TVCG.2006.124;10.1109/VISUAL.2003.1250379;10.1109/VISUAL.2004.80;10.1109/VISUAL.1998.745294;10.1109/TVCG.2007.70532;10.1109/VISUAL.2002.1183797;10.1109/VISUAL.1994.346326;10.1109/VISUAL.2005.1532771;10.1109/VISUAL.2002.1183798;10.1109/VISUAL.2002.1183799;10.1109/TVCG.2006.151;10.1109/VISUAL.1998.745316
Stress Tensor Fields, Biomedical Visualization, Comparative Visualization, Implant Planning, GPU Techniques
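The tensor math underlying the comparison is standard and small enough to sketch. Assuming made-up 3x3 stress tensors, the principal directions come from an eigendecomposition, and normal stresses along the physiological principal directions can be compared before and after the simulated implant:

# Sketch of the per-voxel math behind the visualization: principal
# stress directions and magnitudes are the eigenvectors/eigenvalues of
# the symmetric 3x3 stress tensor. Values below are invented.
import numpy as np

def principal_stresses(sigma):
    """Eigenvalues (descending) and eigenvectors of a stress tensor."""
    vals, vecs = np.linalg.eigh(sigma)        # symmetric -> eigh
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

physiological = np.array([[ 5.0,  1.0, 0.0],
                          [ 1.0, -2.0, 0.5],
                          [ 0.0,  0.5, 1.0]])
with_implant  = physiological + np.diag([1.5, -0.5, 0.2])

vals_p, dirs_p = principal_stresses(physiological)
# Normal stress along each physiological principal direction, before/after:
for i in range(3):
    n = dirs_p[:, i]
    before = n @ physiological @ n
    after = n @ with_implant @ n
    print(f"principal dir {i}: normal stress {before:+.2f} -> {after:+.2f}")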
Vis
2009
Structuring Feature Space: A Non-Parametric Method for Volumetric Transfer Function Generation
10.1109/TVCG.2009.185
1. 1480
J
The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box, circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation. We apply a non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work enables users to effectively explore the structures found within a feature space of the volume and provides a context in which the user can understand how these structures relate to their volumetric data. We provide tools for enhanced exploration and manipulation of the transfer function, and we show that the initial transfer function generation serves as a reasonable base for volumetric rendering, reducing the trial-and-error overhead typically found in transfer function design.
Maciejewski, R.;Insoo Woo;Wei Chen;Ebert, D.S.
Rendering & Perceptualization Lab., Purdue Univ., West Lafayette, IN, USA|c|;;;
10.1109/TVCG.2008.119;10.1109/VISUAL.1998.745319;10.1109/VISUAL.2003.1250414;10.1109/VISUAL.2003.1250371;10.1109/VISUAL.2005.1532807;10.1109/TVCG.2008.162;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.2005.1532858;10.1109/TVCG.2006.148
Volume rendering, kernel density estimation, transfer function design, temporal volume rendering
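A rough sketch of the generation step, under simplifying assumptions (a toy volume, and binned Gaussian smoothing standing in for the paper's kernel density estimator): build the 2D value/gradient-magnitude histogram, estimate density, and quantize it into seed regions a user could grow or shrink.

# Illustrative only, not the paper's implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
vol = gaussian_filter(rng.standard_normal((48, 48, 48)), 3.0)  # toy volume
gx, gy, gz = np.gradient(vol)
gradmag = np.sqrt(gx**2 + gy**2 + gz**2)

hist, _, _ = np.histogram2d(vol.ravel(), gradmag.ravel(), bins=128)
density = gaussian_filter(hist, sigma=2.0)    # binned Gaussian KDE

# Quantize the density into a few levels; each level is a seed region
# a user could grow/shrink and assign color and opacity to.
levels = np.digitize(density, np.quantile(density[density > 0], [0.5, 0.8, 0.95]))
print("bins per density level:", np.bincount(levels.ravel()))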
Vis
2009
Supercubes: A High-Level Primitive for Diamond Hierarchies
10.1109/TVCG.2009.186
1. 1610
J
Volumetric datasets are often modeled using a multiresolution approach based on a nested decomposition of the domain into a polyhedral mesh. Nested tetrahedral meshes generated through the longest edge bisection rule are commonly used to decompose regular volumetric datasets since they produce highly adaptive crack-free representations. Efficient representations for such models have been achieved by clustering the set of tetrahedra sharing a common longest edge into a structure called a diamond. The alignment and orientation of the longest edge can be used to implicitly determine the geometry of a diamond and its relations to the other diamonds within the hierarchy. We introduce the supercube as a high-level primitive within such meshes that encompasses all unique types of diamonds. A supercube is a coherent set of edges corresponding to three consecutive levels of subdivision. Diamonds are uniquely characterized by the longest edge of the tetrahedra forming them and are clustered in supercubes through the association of the longest edge of a diamond with a unique edge in a supercube. Supercubes are thus a compact and highly efficient means of associating information with a subset of the vertices, edges and tetrahedra of the meshes generated through longest edge bisection. We demonstrate the effectiveness of the supercube representation when encoding multiresolution diamond hierarchies built on a subset of the points of a regular grid. We also show how supercubes can be used to efficiently extract meshes from diamond hierarchies and to reduce the storage requirements of such variable-resolution meshes.
Weiss, K.;De Floriani, L.
Dept. of Comput. Sci., Univ. of Maryland, College Park, MD, USA|c|;
10.1109/VISUAL.2002.1183810;10.1109/VISUAL.2000.885681;10.1109/VISUAL.2000.885703;10.1109/VISUAL.1997.663860;10.1109/VISUAL.1997.663869
Longest edge bisection, diamonds, hierarchy of diamonds, multiresolution models, selective refinement
Vis
2009
The Occlusion Spectrum for Volume Classification and Visualization
10.1109/TVCG.2009.189
1. 1472
J
Despite the ever-growing improvements in graphics processing units and computational power, classifying 3D volume data remains a challenge. In this paper, we present a new method for classifying volume data based on the ambient occlusion of voxels. This information stems from the observation that most volumes of a certain type, e.g., CT, MRI or flow simulation, contain occlusion patterns that reveal the spatial structure of their materials or features. Furthermore, these patterns appear to emerge consistently for different data sets of the same type. We call this collection of patterns the occlusion spectrum of a dataset. We show that using this occlusion spectrum leads to better two-dimensional transfer functions that can help classify complex data sets in terms of the spatial relationships among features. In general, the ambient occlusion of a voxel can be interpreted as a weighted average of the intensities in a spherical neighborhood around the voxel. Different weighting schemes determine the ability to separate structures of interest in the occlusion spectrum. We present a general methodology for finding such a weighting. We show results of our approach in 3D imaging for different applications, including brain and breast tumor detection and the visualization of turbulent flow.
Correa, C.;Kwan-Liu Ma
Univ. of California at Davis, Davis, CA, USA|c|;
10.1109/VISUAL.2003.1250414;10.1109/VISUAL.1999.809932;10.1109/TVCG.2008.162;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2004.64;10.1109/VISUAL.2003.1250413;10.1109/TVCG.2006.115;10.1109/VISUAL.1997.663875;10.1109/TVCG.2006.148
Transfer functions, Ambient Occlusion, Volume Rendering, Interactive Classification
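The weighted-average interpretation invites a small sketch. Assuming a Gaussian weighting (one of many possible schemes, as the abstract notes) and a toy volume, the occlusion of every voxel is a smoothed copy of the volume, and aggregating it per intensity bin gives a crude analogue of the occlusion spectrum:

# Sketch of the core quantity: per-voxel "occlusion" as a weighted
# average of intensities in a spherical neighborhood, here a Gaussian
# weighting implemented as a convolution.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
vol = gaussian_filter(rng.random((48, 48, 48)), 2.0)    # toy volume

occlusion = gaussian_filter(vol, sigma=4.0)  # weighted neighborhood average

# Aggregate occlusion over intensity: one slice through the 2D
# (intensity, occlusion) space used for classification in the paper.
bins = np.linspace(vol.min(), vol.max(), 32)
idx = np.digitize(vol.ravel(), bins)
spectrum = [occlusion.ravel()[idx == i].mean() for i in range(1, 32) if (idx == i).any()]
print(np.round(spectrum, 3))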
Vis
2009
Time and Streak Surfaces for Flow Visualization in Large Time-Varying Data Sets
10.1109/TVCG.2009.190
1. 1274
J
Time and streak surfaces are ideal tools to illustrate time-varying vector fields since they directly appeal to the intuition about coherently moving particles. However, efficient generation of high-quality time and streak surfaces for complex, large and time-varying vector field data has been elusive due to the computational effort involved. In this work, we propose a novel algorithm for computing such surfaces. Our approach is based on a decoupling of surface advection and surface adaptation, yields improved efficiency over other surface tracking methods, and allows us to leverage inherent parallelization opportunities in the surface advection, resulting in more rapid parallel computation. Moreover, we obtain as a result of our algorithm the entire evolution of a time or streak surface in a compact representation, allowing for interactive, high-quality rendering, visualization and exploration of the evolving surface. Finally, we discuss a number of ways to improve surface depiction through advanced rendering and texturing, while preserving interactivity, and provide a number of examples for real-world datasets and analyze the behavior of our algorithm on them.
Krishnan, H.;Garth, C.;Joy, K.I.
Inst. of Data Anal. & Visualization, Univ. of California, Davis, CA, USA|c|;;
10.1109/TVCG.2007.70557;10.1109/VISUAL.1992.235211;10.1109/VISUAL.1993.398875;10.1109/VISUAL.2001.964506;10.1109/TVCG.2008.163;10.1109/VISUAL.2000.885688;10.1109/TVCG.2008.133
3D vector field visualization, flow visualization, time-varying, time and streak surfaces, surface extraction
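The advection half of the decoupling is easy to illustrate (the adaptation half, which refines the surface mesh, is the harder part and is omitted). A sketch, assuming an analytic 2D unsteady field and a line of seed particles standing in for a surface:

# Every vertex of a time surface is just a particle advected through
# the vector field, here with RK4 steps in a toy field for brevity.
import numpy as np

def v(p, t):
    """Toy unsteady vector field: a rotating flow with drift."""
    x, y = p[..., 0], p[..., 1]
    return np.stack([-y + 0.3 * np.sin(t), x], axis=-1)

def rk4_step(p, t, dt):
    k1 = v(p, t)
    k2 = v(p + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = v(p + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = v(p + dt * k3, t + dt)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Seed a line of particles (a 1D "surface" in 2D) and advect it.
seed = np.stack([np.linspace(-1, 1, 11), np.zeros(11)], axis=-1)
surface, t, dt = seed.copy(), 0.0, 0.05
for _ in range(40):
    surface = rk4_step(surface, t, dt)
    t += dt
print(np.round(surface[:3], 3))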
Vis
2009
Verifiable Visualization for Isosurface Extraction
10.1109/TVCG.2009.194
1. 1234
J
Visual representations of isosurfaces are ubiquitous in the scientific and engineering literature. In this paper, we present techniques to assess the behavior of isosurface extraction codes. Where applicable, these techniques allow us to distinguish whether anomalies in isosurface features can be attributed to the underlying physical process or to artifacts from the extraction process. Such scientific scrutiny is at the heart of verifiable visualization - subjecting visualization algorithms to the same verification process that is used in other components of the scientific pipeline. More concretely, we derive formulas for the expected order of accuracy (or convergence rate) of several isosurface features, and compare them to experimentally observed results in the selected codes. This technique is practical: in two cases, it exposed actual problems in implementations. We provide the reader with the range of responses they can expect to encounter with isosurface techniques, both under "normal operating conditions" and also under adverse conditions. Armed with this information - the results of the verification process - practitioners can judiciously select the isosurface extraction technique appropriate for their problem of interest, and have confidence in its behavior.
Etiene, T.;Scheidegger, C.E.;Nonato, L.G.;Kirby, R.M.;Silva, C.T.
Sch. of Comput. & Sci. Comput., Univ. of Utah, Salt Lake City, UT, USA|c|;;;;
10.1109/TVCG.2006.149;10.1109/VISUAL.1994.346331
Verification, V&V, Isosurface Extraction, Marching Cubes
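The notion of an observed order of accuracy can be demonstrated in a few lines. This sketch is not one of the paper's verified codes; it finds an isovalue crossing by linear interpolation (marching-cubes style) at several resolutions and checks that the error convergence rate approaches the expected order of 2:

# Verification sketch: compute a feature at several grid resolutions,
# measure the error against a known reference, and check the observed
# order of accuracy p = log2(e_h / e_{h/2}).
import numpy as np
from math import log2

def zero_crossing(n):
    x = np.linspace(0.0, 1.0, n)
    f = np.cos(x) - x
    i = np.flatnonzero(np.signbit(f[:-1]) != np.signbit(f[1:]))[0]
    t = f[i] / (f[i] - f[i + 1])              # linear interpolation
    return x[i] + t * (x[i + 1] - x[i])

reference = 0.7390851332151607                # true root of cos(x) = x
errors = [abs(zero_crossing(n) - reference) for n in (17, 33, 65, 129)]
orders = [log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
print("observed orders:", np.round(orders, 2))   # should approach ~2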
Vis
2009
VisMashup: Streamlining the Creation of Custom Visualization Applications
10.1109/TVCG.2009.195
1. 1546
J
Visualization is essential for understanding the increasing volumes of digital data. However, the process required to create insightful visualizations is involved and time consuming. Although several visualization tools are available, including tools with sophisticated visual interfaces, they are out of reach for users who have little or no knowledge of visualization techniques and/or who do not have programming expertise. In this paper, we propose VisMashup, a new framework for streamlining the creation of customized visualization applications. Because these applications can be customized for very specific tasks, they can hide much of the complexity in a visualization specification and make it easier for users to explore visualizations by manipulating a small set of parameters. We describe the framework and how it supports the various tasks a designer needs to carry out to develop an application, from mining and exploring a set of visualization specifications (pipelines), to the creation of simplified views of the pipelines, and the automatic generation of the application and its interface. We also describe the implementation of the system and demonstrate its use in two real application scenarios.
Santos, E.;Lins, L.;Ahrens, J.;Freire, J.;Silva, C.T.
Sci. Comput. & Imaging (SCI) Inst., Univ. of Utah, Salt Lake City, UT, USA|c|;;;;
10.1109/TVCG.2007.70584;10.1109/VISUAL.2005.1532795;10.1109/TVCG.2007.70577
Scientific Visualization, Dataflow, Visualization Systems
Vis
2009
Visual Exploration of Climate Variability Changes Using Wavelet Analysis
10.1109/TVCG.2009.197
1. 1382
J
Due to its nonlinear nature, the climate system shows quite high natural variability on different time scales, including multiyear oscillations such as the El Nino Southern Oscillation phenomenon. Besides a shift of the mean states and of extreme values of climate variables, climate change may also change the frequency or the spatial patterns of these natural climate variations. Wavelet analysis is a well-established tool to investigate variability in the frequency domain. However, due to the size and complexity of the analysis results, only a few time series are commonly analyzed concurrently. In this paper we explore different techniques to visually assist the user in the analysis of variability and variability changes to allow for a holistic analysis of a global climate model data set consisting of several variables and extending over 250 years. Our new framework and data from the IPCC AR4 simulations with the coupled climate model ECHAM5/MPI-OM are used to explore the temporal evolution of El Nino due to climate change.
Jänicke, H.;Bottinger, M.;Mikolajewicz, U.;Scheuermann, G.
Univ. of Leipzig, Leipzig, Germany|c|;;;
10.1109/TVCG.2008.116;10.1109/VISUAL.2003.1250383;10.1109/VISUAL.1997.663871
Wavelet analysis, multivariate data, time-dependent data, climate variability change visualization, El Nino
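For readers unfamiliar with the analysis step, a minimal continuous wavelet transform sketch follows. It assumes a complex Morlet wavelet and a synthetic series with an ENSO-like multiyear oscillation; the paper's contribution is the visual exploration of such results over many grid points and variables at once, not the transform itself.

# Complex Morlet CWT of a single time series, giving power per scale
# and time; implemented by direct convolution for self-containment.
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Rows of the result correspond to scales."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s) * np.pi ** 0.25
        out[i] = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
    return out

# Toy "climate" series: a 4-year-ish oscillation whose amplitude grows.
rng = np.random.default_rng(5)
t = np.arange(1200)                           # months
x = (1 + t / 1200) * np.sin(2 * np.pi * t / 48) + 0.3 * rng.standard_normal(1200)

scales = np.array([12, 24, 48, 96], dtype=float)
power = np.abs(morlet_cwt(x, scales)) ** 2
print("mean power per scale (months):",
      dict(zip(scales.astype(int), np.round(power.mean(axis=1), 2))))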
Vis
2009
Visual Exploration of Nasal Airflow
10.1109/TVCG.2009.198
1. 1414
J
Rhinologists are often faced with the challenge of assessing nasal breathing from a functional point of view to derive effective therapeutic interventions. While the complex nasal anatomy can be revealed by visual inspection and medical imaging, only vague information is available regarding the nasal airflow itself: Rhinomanometry delivers rather unspecific integral information on the pressure gradient as well as on total flow and nasal flow resistance. In this article we demonstrate how the understanding of physiological nasal breathing can be improved by simulating and visually analyzing nasal airflow, based on an anatomically correct model of the upper human respiratory tract. In particular we demonstrate how various information visualization (InfoVis) techniques, such as a highly scalable implementation of parallel coordinates, time series visualizations, as well as unstructured grid multi-volume rendering, all integrated within a multiple linked views framework, can be utilized to gain a deeper understanding of nasal breathing. Evaluation is accomplished by visual exploration of spatio-temporal airflow characteristics that include not only information on flow features but also on accompanying quantities such as temperature and humidity. To our knowledge, this is the first in-depth visual exploration of the physiological function of the nose over several simulated breathing cycles under consideration of a complete model of the nasal airways, realistic boundary conditions, and all physically relevant time-varying quantities.
Zachow, S.;Muigg, P.;Hildebrandt, T.;Doleisch, H.;Hege, H.-C.
Zuse Inst. Berlin (ZIB), Berlin, Germany|c|;;;;
10.1109/TVCG.2008.139;10.1109/TVCG.2007.70588;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.2000.885739;10.1109/VISUAL.1990.146402;10.1109/VISUAL.2005.1532788;10.1109/TVCG.2006.170
Flow visualization, exploratory data analysis, interactive visual analysis of scientific data, time-dependent data
Vis
2009
Visual Human+Machine Learning
10.1109/TVCG.2009.199
1. 1334
J
In this paper we describe a novel method to integrate interactive visual analysis and machine learning to support the insight generation of the user. The suggested approach combines the vast search and processing power of the computer with the superior reasoning and pattern recognition capabilities of the human user. An evolutionary search algorithm has been adapted to assist in the fuzzy logic formalization of hypotheses that aim at explaining features inside multivariate, volumetric data. Up to now, users solely rely on their knowledge and expertise when looking for explanatory theories. However, it often remains unclear whether the selected attribute ranges represent the real explanation for the feature of interest. Other selections hidden in the large number of data variables could potentially lead to similar features. Moreover, as simulation complexity grows, users are confronted with huge multidimensional data sets making it almost impossible to find meaningful hypotheses at all. We propose an interactive cycle of knowledge-based analysis and automatic hypothesis generation. Starting from initial hypotheses, created with linking and brushing, the user steers a heuristic search algorithm to look for alternative or related hypotheses. The results are analyzed in information visualization views that are linked to the volume rendering. Individual properties as well as global aggregates are visually presented to provide insight into the most relevant aspects of the generated hypotheses. This novel approach becomes computationally feasible due to a GPU implementation of the time-critical parts in the algorithm. A thorough evaluation of search times and noise sensitivity as well as a case study on data from the automotive domain substantiate the usefulness of the suggested approach.
Fuchs, R.;Waser, J.;Groller, E.
ETH Zurich, Zurich, Switzerland|c|;;
10.1109/TVCG.2007.70615;10.1109/TVCG.2008.139;10.1109/VAST.2007.4389001;10.1109/VAST.2007.4389000
Interactive Visual Analysis, Volumetric Data, Multiple Competing Hypotheses, Knowledge Discovery, Computer-assisted Multivariate Data Exploration, Curse of Dimensionality, Predictive Analysis, Genetic Algorithm
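The evolutionary search loop can be caricatured compactly. In this sketch everything is assumed: random toy data, a known target feature, hypotheses as per-attribute value ranges, and Jaccard overlap as fitness; the paper's fuzzy-logic formalization and linked views are not reproduced.

# Toy evolutionary search over attribute-range hypotheses.
import numpy as np

rng = np.random.default_rng(6)
data = rng.random((5000, 3))                   # 5000 points, 3 attributes
target = (data[:, 0] > 0.6) & (data[:, 2] < 0.4)   # "feature" to explain

def select(h):                                 # h: (3, 2) array of ranges
    return np.all((data >= h[:, 0]) & (data <= h[:, 1]), axis=1)

def fitness(h):                                # Jaccard overlap with target
    s = select(h)
    return (s & target).sum() / max((s | target).sum(), 1)

pop = [np.sort(rng.random((3, 2)), axis=1) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [np.clip(np.sort(p + 0.05 * rng.standard_normal((3, 2)), axis=1), 0, 1)
                for p in parents for _ in range(3)]
    pop = parents + children
pop.sort(key=fitness, reverse=True)
print("best fitness:", round(fitness(pop[0]), 3))
print("best ranges:\n", np.round(pop[0], 2))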
Vis
2009
Visualization and Exploration of Temporal Trend Relationships in Multivariate Time-Varying Data
10.1109/TVCG.2009.200
1. 1366
J
We present a new algorithm to explore and visualize multivariate time-varying data sets. We identify important trend relationships among the variables based on how the values of the variables change over time and how those changes are related to each other in different spatial regions and time intervals. The trend relationships can be used to describe the correlation and causal effects among the different variables. To identify the temporal trends from a local region, we design a new algorithm called SUBDTW to estimate when a trend appears and vanishes in a given time series. Based on the beginning and ending times of the trends, their temporal relationships can be modeled as a state machine representing the trend sequence. Since a scientific data set usually contains millions of data points, we propose an algorithm to extract important trend relationships in linear time complexity. We design novel user interfaces to explore the trend relationships, to visualize their temporal characteristics, and to display their spatial distributions. We use several scientific data sets to test our algorithm and demonstrate its utility.
Teng-Yok Lee;Han-Wei Shen
Ohio State Univ., Columbus, OH, USA|c|;
10.1109/TVCG.2008.131;10.1109/VISUAL.1999.809864;10.1109/VISUAL.2004.95;10.1109/VISUAL.2003.1250402;10.1109/TVCG.2007.70519;10.1109/INFVIS.1997.636793;10.1109/TVCG.2008.140;10.1109/VAST.2006.261421
SUBDTW, trend sequence, trend sequence clustering
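SUBDTW itself is the paper's contribution, but the dynamic time warping machinery it builds on is standard. A sketch, assuming a synthetic series and a rising-trend template; sliding classic DTW over the series already localizes where the trend occurs:

# Classic DTW distance between a short trend template and windows of a
# time series. SUBDTW (estimating where a trend appears and vanishes
# inside a longer series) is the paper's extension, not reproduced here.
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between 1D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.linspace(0.0, 1.0, 10)           # a rising trend
series = np.concatenate([np.zeros(20), np.linspace(0, 1, 15), np.ones(20)])

# Slide the template over the series; low DTW distance marks where
# the rising trend occurs.
dists = [dtw(template, series[i:i + 15]) for i in range(len(series) - 15)]
print("best match starts at t =", int(np.argmin(dists)))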
Vis
2009
Volume Illustration of Muscle from Diffusion Tensor Images
10.1109/TVCG.2009.203
1. 1432
J
Medical illustration has demonstrated its effectiveness to depict salient anatomical features while hiding the irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional example-based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness.
Wei Chen;Zhicheng Yan;Song Zhang;Crow, J.A.;Ebert, D.S.;McLaughlin, R.M.;Mullins, K.B.;Cooper, R.;Zi'ang Ding;Jun Liao
State Key Lab. of CAD & CG, Zhejiang Univ., Hangzhou, China|c|;;;;;;;;;
10.1109/TVCG.2006.144;10.1109/VISUAL.2005.1532856;10.1109/TVCG.2006.134;10.1109/VISUAL.2005.1532777;10.1109/VISUAL.2003.1250425;10.1109/VISUAL.2005.1532854;10.1109/VISUAL.2000.885694
Illustrative Visualization, Diffusion Tensor Image, Muscle, Solid Texture Synthesis
Vis
2009
Volume Ray Casting with Peak Finding and Differential Sampling
10.1109/TVCG.2009.204
1. 1578
J
Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions.
Knoll, A.;Hijazi, Y.;Westerteiger, R.;Schott, M.;Hansen, C.;Hagen, H.
Univ. of Kaiserslautern, Kaiserslautern, Germany|c|;;;;;
10.1109/VISUAL.1994.346320;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2003.1250412;10.1109/VISUAL.1998.745713;10.1109/TVCG.2006.154;10.1109/VISUAL.2000.885683;10.1109/TVCG.2006.149;10.1109/VISUAL.2001.964490;10.1109/VISUAL.2004.52;10.1109/VISUAL.1998.745300
direct volume rendering, isosurface, ray casting, ray differentials, sampling, transfer function, preintegration, view dependent
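The peak-finding idea can be sketched in one dimension along a ray. The code below is illustrative, not the paper's algorithm: it assumes a toy scalar field along the ray and refines bracketed isovalue crossings by bisection rather than solving within the volume rendering integral.

# Instead of hoping a uniform sample lands on a Dirac-like transfer-
# function peak, detect when consecutive samples bracket the isovalue
# and solve for the crossing explicitly.
import numpy as np

def field_along_ray(t):
    return np.sin(3.0 * t) + 0.5 * t           # toy scalar values on the ray

def find_isovalue_hits(iso, t0, t1, n_samples=32, iters=30):
    ts = np.linspace(t0, t1, n_samples)
    f = field_along_ray(ts) - iso
    hits = []
    for i in np.flatnonzero(np.signbit(f[:-1]) != np.signbit(f[1:])):
        lo, hi = ts[i], ts[i + 1]
        for _ in range(iters):                 # bisection refinement
            mid = 0.5 * (lo + hi)
            if np.signbit(field_along_ray(mid) - iso) == np.signbit(f[i]):
                lo = mid
            else:
                hi = mid
        hits.append(0.5 * (lo + hi))
    return hits

print([round(t, 4) for t in find_isovalue_hits(iso=0.8, t0=0.0, t1=4.0)])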
InfoVis
2008
A Framework of Interaction Costs in Information Visualization
10.1109/TVCG.2008.109
1. 1156
J
Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman's Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) decision costs to form goals; (2) system-power costs to form system operations; (3) multiple-input-mode costs to form physical sequences; (4) physical-motion costs to execute sequences; (5) visual-cluttering costs to perceive state; (6) view-change costs to interpret perception; (7) state-change costs to evaluate interpretation. We also suggest ways to narrow the gulfs of execution (2-4) and evaluation (5-7) based on the collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.
Lam, H.
Univ. of British Columbia, Vancouver, BC, Canada|c|
10.1109/INFVIS.2001.963289;10.1109/INFVIS.2003.1249020;10.1109/INFVIS.2005.1532151;10.1109/INFVIS.2004.21;10.1109/TVCG.2006.187;10.1109/INFVIS.2004.5;10.1109/TVCG.2007.70515;10.1109/TVCG.2006.120;10.1109/INFVIS.2005.1532133;10.1109/INFVIS.2004.19;10.1109/INFVIS.2005.1532126;10.1109/VAST.2006.261426;10.1109/VISUAL.1994.346302;10.1109/TVCG.2007.70589;10.1109/INFVIS.2005.1532132;10.1109/INFVIS.1998.729560;10.1109/TVCG.2007.70583
Interaction, Information Visualization, Framework, Interface Evaluation