IEEE VIS Publication Dataset

Vis
2010
FI3D: Direct-Touch Interaction for the Exploration of 3D Scientific Visualization Spaces
10.1109/TVCG.2010.157
1. 1622
J
We present the design and evaluation of FI3D, a direct-touch data exploration technique for 3D visualization spaces. The exploration of three-dimensional data is core to many tasks and domains involving scientific visualizations. Thus, effective data navigation techniques are essential to enable comprehension, understanding, and analysis of the information space. While evidence exists that touch can provide higher-bandwidth input, somesthetic information that is valuable when interacting with virtual worlds, and awareness when working in collaboration, scientific data exploration in 3D poses unique challenges to the development of effective data manipulations. We present a technique that provides touch interaction with 3D scientific data spaces in 7 DOF. This interaction does not require the presence of dedicated objects to constrain the mapping, a design decision important for many scientific datasets such as particle simulations in astronomy or physics. We report on an evaluation that compares the technique to conventional mouse-based interaction. Our results show that touch interaction is competitive in interaction speed for translation and integrated interaction, is easy to learn and use, and is preferred for exploration and wayfinding tasks. To further explore the applicability of our basic technique for other types of scientific visualizations we present a second case study, adjusting the interaction to the illustrative visualization of fiber tracts of the brain and the manipulation of cutting planes in this context.
Lingyun Yu;Svetachov, P.;Isenberg, P.;Everts, M.H.;Isenberg, T.
Univ. of Groningen, Groningen, Netherlands|c|;;;;
10.1109/VISUAL.2005.1532778;10.1109/TVCG.2007.70515;10.1109/VISUAL.2004.30
Direct-touch interaction, wall displays, 3D navigation and exploration, evaluation, illustrative visualization
Vis
2010
Gradient Estimation Revitalized
10.1109/TVCG.2010.160
1. 1504
J
We investigate the use of a Fourier-domain derivative error kernel to quantify the error incurred while estimating the gradient of a function from scalar point samples on a regular lattice. We use the error kernel to show that gradient reconstruction quality is significantly enhanced merely by shifting the reconstruction kernel to the centers of the principal lattice directions. Additionally, we exploit the algebraic similarities between the scalar and derivative error kernels to design asymptotically optimal gradient estimation filters that can be factored into an infinite impulse response interpolation prefilter and a finite impulse response directional derivative filter. This leads to a significant performance gain both in terms of accuracy and computational efficiency. The interpolation prefilter provides an accurate scalar approximation and can be re-used to cheaply compute directional derivatives on-the-fly without the need to store gradients. We demonstrate the impact of our filters in the context of volume rendering of scalar data sampled on the Cartesian and Body-Centered Cubic lattices. Our results rival those obtained from other competitive gradient estimation methods while incurring no additional computational or storage overhead.
Alim, U.;Moller, T.;Condat, L.
Simon Fraser Univ., Burnaby, BC, Canada|c|;;
10.1109/VISUAL.2001.964498;10.1109/VISUAL.1994.346331;10.1109/VISUAL.1997.663848;10.1109/VISUAL.2004.65
Derivative, Gradient, Reconstruction, Sampling, Lattice, Interpolation, Approximation, Frequency Error Kernel
Vis
2010
Illustrative Stream Surfaces
10.1109/TVCG.2010.166
1. 1338
J
Stream surfaces are an intuitive approach to represent 3D vector fields. In many cases, however, they are challenging objects to visualize and to understand, due to a high degree of self-occlusion. Despite the need for adequate rendering methods, little work has been done so far in this important research area. In this paper, we present an illustrative rendering strategy for stream surfaces. In our approach, we apply various rendering techniques, which are inspired by the traditional flow illustrations drawn by Dallmann and Abraham & Shaw in the early 1980s. Among these techniques are contour lines and halftoning to show the overall surface shape. Flow direction as well as singularities on the stream surface are depicted by illustrative surface streamlines. To go beyond reproducing static textbook images, we provide several interaction features, such as movable cuts and slabs allowing an interactive exploration of the flow and insights into subjacent structures, e.g., the inner windings of vortex breakdown bubbles. These methods take only the parameterized stream surface as input, require no further preprocessing, and can be freely combined by the user. We explain the design, GPU implementation, and combination of the different illustrative rendering and interaction methods and demonstrate the potential of our approach by applying it to stream surfaces from various flow simulations.
Born, S.;Wiebel, A.;Friedrich, J.;Scheuermann, G.;Bartz, D.
Univ. Leipzig, Leipzig, Germany|c|;;;;
10.1109/VISUAL.1990.146395;10.1109/TVCG.2009.190;10.1109/TVCG.2007.70565;10.1109/VISUAL.2005.1532857;10.1109/VISUAL.1999.809905;10.1109/TVCG.2008.133;10.1109/TVCG.2009.138;10.1109/VISUAL.2001.964506;10.1109/VISUAL.2005.1532858;10.1109/VISUAL.2005.1532855;10.1109/TVCG.2008.170;10.1109/VISUAL.2004.113;10.1109/VISUAL.2003.1250376
Flow visualization, Stream surfaces, Illustrative rendering, Silhouettes, GPU technique, 3D vector field data
Vis
2010
Interactive Histology of Large-Scale Biomedical Image Stacks
10.1109/TVCG.2010.168
1. 1395
J
Histology is the study of the structure of biological tissue using microscopy techniques. As digital imaging technology advances, high-resolution microscopy of large tissue volumes is becoming feasible; however, new interactive tools are needed to explore and analyze the enormous datasets. In this paper we present a visualization framework that specifically targets interactive examination of arbitrarily large image stacks. Our framework is built upon two core techniques: display-aware processing and GPU-accelerated texture compression. With display-aware processing, only the currently visible image tiles are fetched and aligned on-the-fly, reducing memory bandwidth and minimizing the need for time-consuming global pre-processing. Our novel texture compression scheme for GPUs is tailored for quick browsing of image stacks. We evaluate the usability of our viewer for two histology applications: digital pathology and visualization of neural structure at nanoscale resolution in serial electron micrographs.
Won-Ki Jeong;Schneider, J.;Turney, S.G.;Faulkner-Jones, B.E.;Meyer, D.;Westermann, R.;Reid, R.C.;Lichtman, J.;Pfister, H.
;;;;;;;;
10.1109/VISUAL.1994.346321;10.1109/VISUAL.2002.1183757;10.1109/VISUAL.2001.964520;10.1109/TVCG.2007.70516;10.1109/VISUAL.1995.480809;10.1109/VISUAL.2001.964531;10.1109/VISUAL.1995.480812;10.1109/VISUAL.2003.1250385
Gigapixel viewer, biomedical image processing, GPU, texture compression
Vis
2010
Interactive Separating Streak Surfaces
10.1109/TVCG.2010.169
1. 1577
J
Streak surfaces are among the most important features to support 3D unsteady flow exploration, but they are also among the computationally most demanding. Furthermore, to enable a feature-driven analysis of the flow, one is mainly interested in streak surfaces that show separation profiles and thus detect unstable manifolds in the flow. The computation of such separation surfaces requires placing seeding structures at the separation locations and letting these structures move with those locations in the unsteady flow. Since little is known about the time evolution of separating streak surfaces, an automated exploration of 3D unsteady flows using such surfaces is not yet feasible. Therefore, in this paper we present an interactive approach for the visual analysis of separating streak surfaces. Our method draws upon recent work on the extraction of Lagrangian coherent structures (LCS) and the real-time visualization of streak surfaces on the GPU. We propose an interactive technique for computing ridges in the finite-time Lyapunov exponent (FTLE) field at each time step, and we use these ridges as seeding structures to track streak surfaces in the time-varying flow. By showing separation surfaces in combination with particle trajectories, and by letting the user interactively change seeding parameters such as particle density and position, visually guided exploration of separation profiles in 3D is provided. To the best of our knowledge, this is the first time that the reconstruction and display of semantically separable surfaces in 3D unsteady flows can be performed interactively, giving rise to new possibilities for gaining insight into complex flow phenomena.
Ferstl, F.;Burger, K.;Theisel, H.;Westermann, R.
Comput. Graphics & Visualization group, Tech. Univ. München, München, Germany|c|;;;
10.1109/TVCG.2009.190;10.1109/TVCG.2007.70557;10.1109/VISUAL.1992.235211;10.1109/TVCG.2008.133;10.1109/VISUAL.2001.964506;10.1109/TVCG.2007.70554;10.1109/VISUAL.1993.398875;10.1109/TVCG.2009.177;10.1109/TVCG.2009.154;10.1109/TVCG.2006.151;10.1109/TVCG.2007.70551;10.1109/TVCG.2008.163;10.1109/VISUAL.2005.1532780
Unsteady flow visualization, feature extraction, streak surface generation, GPUs
Vis
2010
Interactive Vector Field Feature Identification
10.1109/TVCG.2010.170
1. 1568
J
We introduce a flexible technique for interactive exploration of vector field data through classification derived from user-specified feature templates. Our method is founded on the observation that, while similar features within the vector field may be spatially disparate, they share similar neighborhood characteristics. Users generate feature-based visualizations by interactively highlighting well-accepted and domain specific representative feature points. Feature exploration begins with the computation of attributes that describe the neighborhood of each sample within the input vector field. Compilation of these attributes forms a representation of the vector field samples in the attribute space. We project the attribute points onto the canonical 2D plane to enable interactive exploration of the vector field using a painting interface. The projection encodes the similarities between vector field points within the distances computed between their associated attribute points. The proposed method is performed at interactive rates for enhanced user experience and is completely flexible as showcased by the simultaneous identification of diverse feature types.
Daniels II, J.;Anderson, E.W.;Nonato, L.G.;Silva, C.T.
Sch. of Comput. & Sci. Comput., Univ. of Utah, Salt Lake City, UT, USA|c|;;;
10.1109/TVCG.2009.138;10.1109/VISUAL.1993.398846;10.1109/TVCG.2007.70579;10.1109/VISUAL.1991.175794;10.1109/VISUAL.1998.745333;10.1109/VISUAL.1992.235211;10.1109/TVCG.2008.116;10.1109/TVCG.2009.190;10.1109/VISUAL.1999.809917;10.1109/VISUAL.1997.663910;10.1109/VISUAL.1998.745296;10.1109/VISUAL.1991.175771;10.1109/VISUAL.2000.885690;10.1109/VISUAL.2000.885689
Vector field, data clustering, feature classification, high-dimensional data, user interaction
Vis
2010
Interactive Visual Analysis of Multiple Simulation Runs Using the Simulation Model View: Understanding and Tuning of an Electronic Unit Injector
10.1109/TVCG.2010.171
1. 1457
J
Multiple simulation runs using the same simulation model with different values of control parameters generate a large data set that captures the behavior of the modeled phenomenon. However, there is a conceptual and visual gap between the simulation model behavior and the data set that makes data analysis more difficult. We propose a simulation model view that helps to bridge that gap by visually combining the simulation model description and the generated data. The simulation model view provides a visual outline of the simulation process and the corresponding simulation model. The view is integrated in a Coordinated Multiple Views (CMV) system. As the simulation model view provides a limited display space, we use three levels of detail. We explored the use of the simulation model view, in close collaboration with a domain expert, to understand and tune an electronic unit injector (EUI). We also developed analysis procedures based on the view. The EUI is mostly used in heavy-duty diesel engines. We were mainly interested in understanding the model and how to tune it for three different operation modes: low emission, low consumption, and high power. Very positive feedback from the domain expert shows that the use of the simulation model view and the corresponding analysis procedures within a CMV system represents an effective technique for interactive visual analysis of multiple simulation runs.
Matkovic, K.;Gracanin, D.;Jelovic, M.;Ammer, A.;Lez, A.;Hauser, H.
VRVis Res. Center, Vienna, Austria|c|;;;;;
10.1109/TVCG.2009.155;10.1109/INFVIS.2002.1173149;10.1109/INFVIS.1995.528685;10.1109/INFVIS.2002.1173157
Visualization in physical sciences and engineering, time series data, coordinated multiple views
Vis
2010
Interactive Visualization of Hyperspectral Images of Historical Documents
10.1109/TVCG.2010.172
1. 1448
J
This paper presents an interactive visualization tool to study and analyze hyperspectral images (HSI) of historical documents. This work is part of a collaborative effort with the Nationaal Archief of the Netherlands (NAN) and Art Innovation, a manufacturer of hyperspectral imaging hardware designed for old and fragile documents. The NAN is actively capturing HSI of historical documents for use in a variety of tasks related to the analysis and management of archival collections, from ink and paper analysis to monitoring the effects of environmental aging. To assist their work, we have developed a comprehensive visualization tool that offers an assortment of visualization and analysis methods, including interactive spectral selection, spectral similarity analysis, time-varying data analysis and visualization, and selective spectral band fusion. This paper describes our visualization software and how it is used to facilitate the tasks needed by our collaborators. Evaluation feedback from our collaborators on how this tool benefits their work is included.
Seon Joo Kim;Shaojie Zhuo;Fanbo Deng;Chi-Wing Fu;Brown, M.S.
Nat. Univ. of Singapore, Singapore, Singapore|c|;;;;
10.1109/TVCG.2008.139;10.1109/TVCG.2008.161;10.1109/TVCG.2008.146;10.1109/TVCG.2006.155;10.1109/VISUAL.1995.485155;10.1109/TVCG.2008.182
Hyperspectral visualization, data exploration, image fusion, document processing and analysis
Vis
2010
IRIS: Illustrative Rendering for Integral Surfaces
10.1109/TVCG.2010.173
1. 1328
J
Integral surfaces are ideal tools to illustrate vector fields and fluid flow structures. However, these surfaces can be visually complex and exhibit difficult geometric properties, owing to strong stretching, shearing and folding of the flow from which they are derived. Many techniques for non-photorealistic rendering have been presented previously. It is, however, unclear how these techniques can be applied to integral surfaces. In this paper, we examine how transparency and texturing techniques can be used with integral surfaces to convey both shape and directional information. We present a rendering pipeline that combines these techniques aimed at faithfully and accurately representing integral surfaces while improving visualization insight. The presented pipeline is implemented directly on the GPU, providing real-time interaction for all rendering modes, and does not require expensive preprocessing of integral surfaces after computation.
Hummel, M.;Garth, C.;Hamann, B.;Hagen, H.;Joy, K.I.
Univ. of Kaiserslautern, Kaiserslautern, Germany|c|;;;;
10.1109/TVCG.2006.124;10.1109/VISUAL.2003.1250414;10.1109/TVCG.2008.133;10.1109/VISUAL.1992.235211;10.1109/TVCG.2009.190;10.1109/VISUAL.1993.398875;10.1109/VISUAL.2001.964506;10.1109/VISUAL.2000.885694;10.1109/TVCG.2008.163;10.1109/TVCG.2009.154
Flow visualization, integral surfaces, illustrative rendering
Vis
2010
Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty
10.1109/TVCG.2010.181
1. 1430
J
Numerical weather prediction ensembles are routinely used for operational weather forecasting. The members of these ensembles are individual simulations with either slightly perturbed initial conditions or different model parameterizations, or occasionally both. Multi-member ensemble output is usually large, multivariate, and challenging to interpret interactively. Forecast meteorologists are interested in understanding the uncertainties associated with numerical weather prediction, specifically the variability between the ensemble members. Currently, visualization of ensemble members is mostly accomplished through spaghetti plots of a single mid-troposphere pressure surface height contour. In order to explore new uncertainty visualization methods, the Weather Research and Forecasting (WRF) model was used to create a 48-hour, 18-member parameterization ensemble of the 13 March 1993 "Superstorm". A tool was designed to interactively explore the ensemble uncertainty of three important weather variables: water-vapor mixing ratio, perturbation potential temperature, and perturbation pressure. Uncertainty was quantified using individual ensemble member standard deviation, inter-quartile range, and the width of the 95% confidence interval. Bootstrapping was employed to overcome the dependence on normality in the uncertainty metrics. A coordinated view of ribbon and glyph-based uncertainty visualization, spaghetti plots, iso-pressure colormaps, and data transect plots was provided to two meteorologists for expert evaluation. They found it useful in assessing uncertainty in the data, especially in finding outliers in the ensemble run and therefore avoiding the WRF parameterizations that lead to these outliers. Additionally, the meteorologists could identify spatial regions where the uncertainty was significantly high, allowing for identification of poorly simulated storm environments and physical interpretation of these model issues.
Sanyal, J.;Song Zhang;Dyer, J.;Mercer, A.;Amburn, P.;Moorhead, R.J.
;;;;;
10.1109/TVCG.2009.114;10.1109/INFVIS.2002.1173145
Uncertainty visualization, weather ensemble, geographic/geospatial visualization, glyph-based techniques, time-varying data, qualitative evaluation
Vis
2010
On the Fractal Dimension of Isosurfaces
10.1109/TVCG.2010.182
1. 1205
J
A (3D) scalar grid is a regular n_1 × n_2 × n_3 grid of vertices where each vertex v is associated with some scalar value s_v. Applying trilinear interpolation, the scalar grid determines a scalar function g where g(v) = s_v for each grid vertex v. An isosurface with isovalue σ is a triangular mesh which approximates the level set g^{-1}(σ). The fractal dimension of an isosurface represents the growth in the isosurface as the number of grid cubes increases. We define and discuss the fractal isosurface dimension. Plotting the fractal dimension as a function of the isovalues in a data set provides information about the isosurfaces determined by the data set. We present statistics on the average fractal dimension of 60 publicly available benchmark data sets. We also show the fractal dimension is highly correlated with topological noise in the benchmark data sets, measuring the topological noise by the number of connected components in the isosurface. Lastly, we present a formula predicting the fractal dimension as a function of noise and validate the formula with experimental results.
Khoury, M.;Wenger, R.
Comput. & Inf. Sci. Dept., Ohio State Univ., Columbus, OH, USA|c|;
10.1109/TVCG.2006.168;10.1109/TVCG.2008.160;10.1109/VISUAL.2004.28;10.1109/VISUAL.1996.568103;10.1109/VISUAL.2001.964515;10.1109/VISUAL.1991.175782;10.1109/VISUAL.1997.663875
Isosurfaces, scalar data, fractal dimension
Vis
2010
Pre-Integrated Volume Rendering with Non-Linear Gradient Interpolation
10.1109/TVCG.2010.187
1. 1494
J
Shading is an important feature for the comprehension of volume datasets, but is difficult to implement accurately. Current techniques based on pre-integrated direct volume rendering approximate the volume rendering integral by ignoring non-linear gradient variations between front and back samples, which might result in accumulated shading errors when gradient variations are important and/or when the illumination function features high frequencies. In this paper, we explore a simple approach for pre-integrated volume rendering with non-linear gradient interpolation between front and back samples. We consider that the gradient smoothly varies along a quadratic curve instead of a segment in-between consecutive samples. This not only allows us to compute more accurate shaded pre-integrated look-up tables, but also allows us to more efficiently process shading amplifying effects, based on gradient filtering. An interesting property is that the pre-integration tables we use remain two-dimensional as for usual pre-integrated classification. We conduct experiments using a full hardware approach with the Blinn-Phong illumination model as well as with a non-photorealistic illumination model.
Guetat, A.;Ancel, A.;Marchesin, S.;Dischler, J.-M.
;;;
10.1109/VISUAL.2000.885683;10.1109/VISUAL.2000.885694;10.1109/VISUAL.1990.146391;10.1109/TVCG.2009.149
Ray casting, pre-integration, Phong shading, volume rendering
Vis
2010
Projector Placement Planning for High Quality Visualizations on Real-World Colored Objects
10.1109/TVCG.2010.189
1. 1641
J
Many visualization applications benefit from displaying content on real-world objects rather than on a traditional display (e.g., a monitor). This type of visualization display is achieved by projecting precisely controlled illumination from multiple projectors onto the real-world colored objects. For such a task, the placement of the projectors is critical in assuring that the desired visualization is possible. Using ad hoc projector placement may cause some appearances to suffer from color shifting due to insufficient projector light radiance being exposed onto the physical surface. This leads to an incorrect appearance and ultimately to a false and potentially misleading visualization. In this paper, we present a framework to discover the optimal position and orientation of the projectors for such projection-based visualization displays. An optimal projector placement should be able to achieve the desired visualization with minimal projector light radiance. When determining optimal projector placement, object visibility, surface reflectance properties, and projector-surface distance and orientation need to be considered. We first formalize a theory for appearance editing image formation and construct a constrained linear system of equations that express when a desired novel appearance or visualization is possible given a geometric and surface reflectance model of the physical surface. Then, we show how to apply this constrained system in an adaptive search to efficiently discover the optimal projector placement which achieves the desired appearance. Constraints can be imposed on the maximum radiance allowed by the projectors and the projectors' placement to support specific goals of various visualization applications. We perform several real-world and simulated appearance edits and visualizations to demonstrate the improvement obtained by our discovered projector placement over ad hoc projector placement.
Law, A.J.;Aliaga, D.;Majumder, A.
Dept. of Comput. Sci., Purdue Univ., West Lafayette, IN, USA|c|;;
10.1109/TVCG.2009.124;10.1109/VISUAL.2002.1183793;10.1109/TVCG.2006.121;10.1109/TVCG.2007.70586;10.1109/VISUAL.2000.885684
Large and High-resolution Displays, Interaction Design, Mobile and Ubiquitous Visualization
Vis
2010
Result-Driven Exploration of Simulation Parameter Spaces for Visual Effects Design
10.1109/TVCG.2010.190
1. 1476
J
Graphics artists commonly employ physically-based simulation for the generation of effects such as smoke, explosions, and similar phenomena. The task of finding the correct parameters for a desired result, however, is difficult and time-consuming as current tools provide little to no guidance. In this paper, we present a new approach for the visual exploration of such parameter spaces. Given a three-dimensional scene description, we utilize sampling and spatio-temporal clustering techniques to generate a concise overview of the achievable variations and their temporal evolution. Our visualization system then allows the user to explore the simulation space in a goal-oriented manner. Animation sequences with a set of desired characteristics can be composed using a novel search-by-example approach and interactive direct volume rendering is employed to provide instant visual feedback.
Bruckner, S.;Moller, T.
GrUVi (Graphics, Usability, & Visualization Lab.), Simon Fraser Univ., Burnaby, BC, Canada|c|;
10.1109/VISUAL.1992.235222;10.1109/VISUAL.1999.809871;10.1109/VISUAL.2003.1250401;10.1109/TVCG.2006.164;10.1109/VISUAL.2003.1250402;10.1109/INFVIS.1998.729559;10.1109/TVCG.2009.200;10.1109/VAST.2007.4389013;10.1109/VISUAL.1993.398859;10.1109/TVCG.2009.153;10.1109/TVCG.2007.70581;10.1109/VAST.2006.261421
Visual exploration, visual effects, clustering, time-dependent volume data
Vis
2010
Scalable Multi-variate Analytics of Seismic and Satellite-based Observational Data
10.1109/TVCG.2010.192
1. 1420
J
Over the past few years, large human populations around the world have been affected by an increase in significant seismic activities. For both conducting basic scientific research and for setting critical government policies, it is crucial to be able to explore and understand seismic and geographical information obtained through all scientific instruments. In this work, we present a visual analytics system that enables explorative visualization of seismic data together with satellite-based observational data, and introduce a suite of visual analytical tools. Seismic and satellite data are integrated temporally and spatially. Users can select temporal and spatial ranges to zoom in on specific seismic events, as well as to inspect changes both during and after the events. Tools for designing high-dimensional transfer functions have been developed to enable efficient and intuitive comprehension of the multi-modal data. Spreadsheet-style comparisons are used for data drill-down as well as presentation. Comparisons between distinct seismic events are also provided for characterizing event-wise differences. Our system has been designed for scalability in terms of data size, complexity (i.e. number of modalities), and varying form factors of display environments.
Xiaoru Yuan;He Xiao;Hanqi Guo;Peihong Guo;Kendall, W.;Huang, J.;Yongxian Zhang
Key Lab. of Machine Perception, Peking Univ., Beijing, China|c|;;;;;;
10.1109/TVCG.2009.179;10.1109/VISUAL.2003.1250412;10.1109/VISUAL.1990.146402;10.1109/VISUAL.2002.1183814;10.1109/TVCG.2008.170;10.1109/TVCG.2008.184
Earth Science Visualization, Multivariate Visualization, Seismic Data, Scalable Visualization
Vis
2010
Spatial Conditioning of Transfer Functions Using Local Material Distributions
10.1109/TVCG.2010.195
1. 1310
J
In many applications of Direct Volume Rendering (DVR) the importance of a certain material or feature is highly dependent on its relative spatial location. For instance, in the medical diagnostic procedure, the patient's symptoms often lead to specification of features, tissues and organs of particular interest. One such example is pockets of gas which, if found inside the body at abnormal locations, are a crucial part of a diagnostic visualization. This paper presents an approach that enhances DVR transfer function design with spatial localization based on user specified material dependencies. Semantic expressions are used to define conditions based on relations between different materials, such as only render iodine uptake when close to liver. The underlying methods rely on estimations of material distributions which are acquired by weighing local neighborhoods of the data against approximations of material likelihood functions. This information is encoded and used to influence rendering according to the user's specifications. The result is improved focus on important features by allowing the user to suppress spatially less-important data. In line with requirements from actual clinical DVR practice, the methods do not require explicit material segmentation that would be impossible or prohibitively time-consuming to achieve in most real cases. The scheme scales well to higher dimensions which accounts for multi-dimensional transfer functions and multivariate data. Dual-Energy Computed Tomography, an important new modality in radiology, is used to demonstrate this scalability. In several examples we show significantly improved focus on clinically important aspects in the rendered images.
Lindholm, S.;Ljung, P.;Lundstrom, C.;Persson, A.;Ynnerman, A.
;;;;
10.1109/TVCG.2009.185;10.1109/TVCG.2009.120;10.1109/TVCG.2008.147;10.1109/VISUAL.2003.1250412;10.1109/TVCG.2007.70591;10.1109/VISUAL.2001.964516;10.1109/TVCG.2009.189;10.1109/TVCG.2008.162;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.1999.809932;10.1109/TVCG.2006.148
Direct Volume Rendering, Transfer Function, Spatial Conditioning, Neighborhood Meta-Data
Vis
2010
Special Relativistic Visualization by Local Ray Tracing
10.1109/TVCG.2010.196
1. 1250
J
Special relativistic visualization offers the possibility of experiencing the optical effects of traveling near the speed of light, including apparent geometric distortions as well as Doppler and searchlight effects. Early high-quality computer graphics images of relativistic scenes were created using offline, computationally expensive CPU-side 4D ray tracing. Alternate approaches such as image-based rendering and polygon-distortion methods are able to achieve interactivity, but exhibit inferior visual quality due to sampling artifacts. In this paper, we introduce a hybrid rendering technique based on polygon distortion and local ray tracing that facilitates interactive high-quality visualization of multiple objects moving at relativistic speeds in arbitrary directions. The method starts by calculating tight image-space footprints for the apparent triangles of the 3D scene objects. The final image is generated using a single image-space ray tracing step incorporating Doppler and searchlight effects. Our implementation uses GPU shader programming and hardware texture filtering to achieve high rendering speed.
Müller, T.;Grottel, S.;Weiskopf, D.
Visualization Res. Center (VISUS), Univ. of Stuttgart, Stuttgart, Germany|c|;;
10.1109/VISUAL.2000.885709
Poincare transformation, aberration of light, Doppler effect, illumination, searchlight effect, special relativity, GPU ray tracing
Vis
2010
Streak Lines as Tangent Curves of a Derived Vector Field
10.1109/TVCG.2010.198
1. 1234
J
Characteristic curves of vector fields include stream, path, and streak lines. Stream and path lines can be obtained by a simple vector field integration of an autonomous ODE system, i.e., they can be described as tangent curves of a vector field. This facilitates their mathematical analysis including the extraction of core lines around which stream or path lines exhibit swirling motion, or the computation of their curvature for every point in the domain without actually integrating them. Such a description of streak lines is not yet available, which excludes them from most of the feature extraction and analysis tools that have been developed in our community. In this paper, we develop the first description of streak lines as tangent curves of a derived vector field - the streak line vector field - and show how it can be computed from the spatial and temporal gradients of the flow map, i.e., a dense path line integration is required. We demonstrate the high accuracy of our approach by comparing it to solutions where the ground truth is analytically known and to solutions where the ground truth has been obtained using the classic streak line computation. Furthermore, we apply a number of feature extraction and analysis tools to the new streak line vector field including the extraction of cores of swirling streak lines and the computation of streak line curvature fields. These first applications foreshadow the large variety of possible future research directions based on our new mathematical description of streak lines.
Weinkauf, T.;Theisel, H.
Courant Inst. of Math. Sci., New York Univ., New York, NY, USA|c|;
10.1109/TVCG.2007.70557;10.1109/VISUAL.2005.1532851;10.1109/TVCG.2007.70545;10.1109/TVCG.2009.154;10.1109/TVCG.2007.70554;10.1109/VISUAL.2004.99;10.1109/TVCG.2008.133;10.1109/TVCG.2009.190;10.1109/VISUAL.2005.1532832;10.1109/VISUAL.1999.809896;10.1109/VISUAL.1992.235211;10.1109/TVCG.2008.163;10.1109/TVCG.2007.70551
Unsteady flow visualization, streak lines, streak surfaces, feature extraction
Vis
2010
Superquadric Glyphs for Symmetric Second-Order Tensors
10.1109/TVCG.2010.199
1. 1604
J
Symmetric second-order tensor fields play a central role in scientific and biomedical studies as well as in image analysis and feature-extraction methods. The utility of displaying tensor field samples has driven the development of visualization techniques that encode the tensor shape and orientation into the geometry of a tensor glyph. With some exceptions, these methods work only for positive-definite tensors (i.e. having positive eigenvalues, such as diffusion tensors). We expand the scope of tensor glyphs to all symmetric second-order tensors in two and three dimensions, gracefully and unambiguously depicting any combination of positive and negative eigenvalues. We generalize a previous method of superquadric glyphs for positive-definite tensors by drawing upon a larger portion of the superquadric shape space, supplemented with a coloring that indicates the tensor's quadratic form. We show that encoding arbitrary eigenvalue sign combinations requires design choices that differ fundamentally from those in previous work on traceless tensors (arising in the study of liquid crystals). Our method starts with a design of 2-D tensor glyphs guided by principles of symmetry and continuity, and creates 3-D glyphs that include the 2-D glyphs in their axis-aligned cross-sections. A key ingredient of our method is a novel way of mapping from the shape space of three-dimensional symmetric second-order tensors to the unit square. We apply our new glyphs to stress tensors from mechanics, geometry tensors and Hessians from image analysis, and rate-of-deformation tensors in computational fluid dynamics.
Schultz, T.;Kindlmann, G.
Comput. Sci. Dept., Univ. of Chicago, Chicago, IL, USA|c|;
10.1109/VISUAL.1999.809905;10.1109/TVCG.2006.134;10.1109/VISUAL.1998.745294;10.1109/TVCG.2006.181;10.1109/VISUAL.1991.175773;10.1109/TVCG.2009.184;10.1109/TVCG.2006.182;10.1109/TVCG.2009.177;10.1109/TVCG.2010.166;10.1109/VISUAL.1993.398849;10.1109/VISUAL.2003.1250414;10.1109/VISUAL.1994.346326;10.1109/VISUAL.1997.663929;10.1109/VISUAL.2002.1183797;10.1109/VISUAL.2003.1250376;10.1109/VISUAL.2005.1532774;10.1109/VISUAL.2004.80;10.1109/TVCG.2006.115
Tensor Glyphs, Stress Tensors, Rate-of-Deformation Tensors, Geometry Tensors, Glyph Design
Vis
2010
Supine and Prone Colon Registration Using Quasi-Conformal Mapping
10.1109/TVCG.2010.200
1. 1357
J
In virtual colonoscopy, CT scans are typically acquired with the patient in both supine (facing up) and prone (facing down) positions. The registration of these two scans is desirable so that the user can clarify situations or confirm polyp findings at a location in one scan with the same location in the other, thereby improving polyp detection rates and reducing false positives. However, this supine-prone registration is challenging because of the substantial distortions in the colon shape due to the patient's change in position. We present an efficient algorithm and framework for performing this registration through the use of conformal geometry to guarantee that the registration is a diffeomorphism (a one-to-one and onto mapping). The taeniae coli and colon flexures are automatically extracted for each supine and prone surface, employing the colon geometry. The two colon surfaces are then divided into several segments using the flexures, and each segment is cut along a taenia coli and conformally flattened to the rectangular domain using holomorphic differentials. The mean curvature is color encoded as texture images, from which feature points are automatically detected using graph cut segmentation, mathematical morphology operations, and principal component analysis. Corresponding feature points are found between supine and prone and are used to adjust the conformal flattening to be quasi-conformal, such that the features become aligned. We present multiple methods of visualizing our results, including 2D flattened rendering, corresponding 3D endoluminal views, and rendering of distortion measurements. We demonstrate the efficiency and efficacy of our registration method by illustrating matched views on both the 2D flattened colon images and in the 3D volume rendered colon endoluminal view. We analytically evaluate the correctness of the results by measuring the distance between features on the registered colons.
Wei Zeng;Marino, J.;Chaitanya Gurijala, K.;Xianfeng Gu;Kaufman, A.
Comput. Sci. Dept. at, Stony Brook Univ., Stony Brook, NY, USA|c|;;;;
10.1109/VISUAL.2005.1532806;10.1109/TVCG.2006.112;10.1109/VISUAL.2004.75;10.1109/TVCG.2006.158
Data registration, geometry-based techniques, medical visualization, mathematical foundations for visualization