IEEE VIS Publication Dataset

Vis
2011
Automatic Transfer Functions Based on Informational Divergence
10.1109/TVCG.2011.173
1. 1941
J
In this paper we present a framework to define transfer functions from a target distribution provided by the user. A target distribution can reflect the data importance, a highly relevant data value interval, or a spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions, including gradient information. The transfer functions are obtained by minimizing the informational divergence or Kullback-Leibler distance between the visibility distribution captured by the viewpoints and a target distribution selected by the user. The use of the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed together with importance-driven and view-based techniques.
Ruiz, M.;Bardera, A.;Boada, I.;Viola, I.;Feixas, M.;Sbert, M.
;;;;;
10.1109/TVCG.2010.132;10.1109/TVCG.2006.137;10.1109/TVCG.2006.159;10.1109/TVCG.2010.131;10.1109/TVCG.2006.152;10.1109/VISUAL.2003.1250414;10.1109/TVCG.2007.70576;10.1109/TVCG.2009.120;10.1109/VISUAL.1996.568113;10.1109/TVCG.2008.140;10.1109/VISUAL.2005.1532834;10.1109/VISUAL.2002.1183785;10.1109/TVCG.2006.148
Transfer function, Information theory, Informational divergence, Kullback-Leibler distance
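For the transfer-function paper above (Ruiz et al.), the following minimal Python sketch illustrates the core idea of fitting an opacity transfer function by minimizing the Kullback-Leibler divergence between a visibility distribution and a user-given target. It is not the authors' implementation: the paper uses the analytic derivative of the informational divergence, whereas this sketch uses a slow finite-difference gradient, and estimate_visibility is a hypothetical renderer callback assumed to return the per-bin visibility averaged over the viewpoints.

import numpy as np

def kl_divergence(target, visibility, eps=1e-12):
    # D_KL(target || visibility) over data-value bins
    t = target / target.sum()
    v = visibility / visibility.sum()
    return float(np.sum(t * np.log((t + eps) / (v + eps))))

def fit_opacity(target, estimate_visibility, n_bins=256, steps=100, lr=0.05, h=1e-3):
    # Gradient descent on a 1D opacity transfer function (one opacity per bin).
    opacity = np.full(n_bins, 0.5)
    for _ in range(steps):
        base = kl_divergence(target, estimate_visibility(opacity))
        grad = np.zeros(n_bins)
        for i in range(n_bins):  # finite differences stand in for the analytic derivative
            probe = opacity.copy()
            probe[i] = min(probe[i] + h, 1.0)
            grad[i] = (kl_divergence(target, estimate_visibility(probe)) - base) / h
        opacity = np.clip(opacity - lr * grad, 0.0, 1.0)
    return opacity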
Vis
2011
Branching and Circular Features in High Dimensional Data
10.1109/TVCG.2011.177
1. 1911
J
Large observations and simulations in scientific research give rise to high-dimensional data sets that present many challenges and opportunities in data analysis and visualization. Researchers in application domains such as engineering, computational biology, climate study, imaging and motion capture are faced with the problem of how to discover compact representations of high-dimensional data while preserving their intrinsic structure. In many applications, the original data is projected onto low-dimensional space via dimensionality reduction techniques prior to modeling. One problem with this approach is that the projection step in the process can fail to preserve structure in the data that is only apparent in high dimensions. Conversely, such techniques may create structural illusions in the projection, implying structure not present in the original high-dimensional data. Our solution is to utilize topological techniques to recover important structures in high-dimensional data that contains non-trivial topology. Specifically, we are interested in high-dimensional branching structures. We construct local circle-valued coordinate functions to represent such features. Subsequently, we perform dimensionality reduction on the data while ensuring such structures are visually preserved. Additionally, we study the effects of global circular structures on visualizations. Our results reveal never-before-seen structures on real-world data sets from a variety of applications.
Bei Wang;Summa, B.;Pascucci, V.;Vejdemo-Johansson, M.
SCI Inst., Univ. of Utah, Salt Lake City, UT, USA|c|;;;
10.1109/TVCG.2010.213;10.1109/TVCG.2009.119;10.1109/TVCG.2007.70601;10.1109/TVCG.2010.139;10.1109/VAST.2010.5652940
Dimensionality reduction, circular coordinates, visualization, topological analysis
Vis
2011
Context Preserving Maps of Tubular Structures
10.1109/TVCG.2011.182
1. 2004
J
When visualizing tubular 3D structures, external representations are often used for guidance and display, and such views in 2D can often contain occlusions. Virtual dissection methods have been proposed where the entire 3D structure can be mapped to the 2D plane, though these will lose context by straightening curved sections. We present a new method of creating maps of 3D tubular structures that yield a succinct view while preserving the overall geometric structure. Given a dominant view plane for the structure, its curve skeleton is first projected to a 2D skeleton. This 2D skeleton is adjusted to account for distortions in length, modified to remove intersections, and optimized to preserve the shape of the original 3D skeleton. Based on this shaped 2D skeleton, a boundary for the map of the object is obtained based on a slicing path through the structure and the radius around the skeleton. The sliced structure is conformally mapped to a rectangle and then deformed via harmonic mapping to match the boundary placement. This flattened map preserves the general geometric context of a 3D object in a 2D display, and rendering of this flattened map can be accomplished using volumetric ray casting. We have evaluated our method on real datasets of human colon models.
Marino, J.;Wei Zeng;Xianfeng Gu;Kaufman, A.
Comput. Sci. Dept., Stony Brook Univ., Stony Brook, NY, USA|c|;;;
10.1109/TVCG.2006.112;10.1109/TVCG.2010.200;10.1109/VISUAL.2001.964540
Geometry-based technique, volume rendering, biomedical visualization, medical visualization, conformal mapping
Vis
2011
Crepuscular Rays for Tumor Accessibility Planning
10.1109/TVCG.2011.184
2. 2172
J
In modern clinical practice, planning access paths to volumetric target structures remains one of the most important and most complex tasks, and a physician's insufficient experience in this can lead to severe complications or even the death of the patient. In this paper, we present a method for safety evaluation and the visualization of access paths to assist physicians during preoperative planning. As a metaphor for our method, we employ a well-known, and thus intuitively perceivable, natural phenomenon that is usually called crepuscular rays. Using this metaphor, we propose several ways to compute the safety of paths from the region of interest to all tumor voxels and show how this information can be visualized in real-time using a multi-volume rendering system. Furthermore, we show how to estimate the extent of connected safe areas to improve common medical 2D multi-planar reconstruction (MPR) views. We evaluate our method by means of expert interviews, an online survey, and a retrospective evaluation of 19 real abdominal radio-frequency ablation (RFA) interventions, with expert decisions serving as a gold standard. The evaluation results show clear evidence that our method can be successfully applied in clinical practice without introducing substantial overhead work for the acting personnel. Finally, we show that our method is not limited to medical applications and that it can also be useful in other fields.
Khlebnikov, R.;Kainz, B.;Muehl, J.;Schmalstieg, D.
Graz Univ. of Technol., Graz, Austria|c|;;;
10.1109/TVCG.2007.70560
Accessibility, ray casting, medical visualization
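The abstract above (Khlebnikov et al.) leaves the exact safety measure open ("several ways to compute the safety of paths"). As a hedged illustration only, the sketch below rates one straight access path by its minimum clearance to critical structures, sampled from a precomputed clearance (distance) field; entry, target, origin and spacing are hypothetical names for world-space positions and the field's voxel grid.

import numpy as np

def path_safety(entry, target, clearance_field, origin, spacing, n_samples=128):
    # March a straight path from an entry point to a tumor voxel and return the
    # minimum distance to the nearest critical structure encountered on the way.
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    pts = entry + np.linspace(0.0, 1.0, n_samples)[:, None] * (target - entry)
    idx = np.clip(((pts - origin) / spacing).astype(int),
                  0, np.array(clearance_field.shape) - 1)
    return float(clearance_field[idx[:, 0], idx[:, 1], idx[:, 2]].min())

Evaluating such a measure for every tumor voxel and candidate entry point yields a per-path safety value that could then be mapped to color, in the spirit of the crepuscular-ray metaphor described above.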
Vis
2011
Distance Visualization for Interactive 3D Implant Planning
10.1109/TVCG.2011.189
2. 2182
J
An instant and quantitative assessment of spatial distances between two objects plays an important role in interactive applications such as virtual model assembly, medical operation planning, or computational steering. While some research has been done on the development of distance-based measures between two objects, only very few attempts have been reported to visualize such measures in interactive scenarios. In this paper we present two different approaches for this purpose, and we investigate the effectiveness of these approaches for intuitive 3D implant positioning in a medical operation planning system. The first approach uses cylindrical glyphs to depict distances, which smoothly adapt their shape and color to changing distances when the objects are moved. This approach computes distances directly on the polygonal object representations by means of ray/triangle mesh intersection. The second approach introduces a set of slices as additional geometric structures, and uses color coding on surfaces to indicate distances. This approach obtains distances from a precomputed distance field of each object. The major findings of the performed user study indicate that a visualization that can facilitate an instant and quantitative analysis of distances between two objects in interactive 3D scenarios is demanding, yet can be achieved by including additional monocular cues into the visualization.
Dick, C.;Burgkart, R.;Westermann, R.
Comput. Graphics & Visualization Group, Tech. Univ. Munchen, Munich, Germany|c|;;
10.1109/VISUAL.2002.1183752;10.1109/TVCG.2009.184
Distance visualization, biomedical visualization, implant planning, glyphs, distance fields
Vis
2011
Evaluation of Trend Localization with Multi-Variate Visualizations
10.1109/TVCG.2011.194
2. 2062
J
Multi-valued data sets are increasingly common, with the number of dimensions growing. A number of multi-variate visualization techniques have been presented to display such data. However, evaluating the utility of such techniques for general data sets remains difficult. Thus most techniques are studied on only one data set. Another criticism that could be levied against previous evaluations of multi-variate visualizations is that the task doesn't require the presence of multiple variables. At the same time, the taxonomy of tasks that users may perform visually is extensive. We designed a task, trend localization, that required comparison of multiple data values in a multi-variate visualization. We then conducted a user study with this task, evaluating five multivariate visualization techniques from the literature (Brush Strokes, Data-Driven Spots, Oriented Slivers, Color Blending, Dimensional Stacking) and juxtaposed grayscale maps. We report the results and discuss the implications for both the techniques and the task.
Livingston, M.A.;Decker, J.W.
;
10.1109/TVCG.2009.126;10.1109/VISUAL.1998.745292;10.1109/VISUAL.1990.146387;10.1109/VISUAL.1990.146386;10.1109/TVCG.2007.70623;10.1109/VISUAL.1991.175795;10.1109/VISUAL.1999.809905;10.1109/VISUAL.2003.1250362;10.1109/VISUAL.1998.745294
User study, multi-variate visualization, visual task design, visual analytics
Vis
2011
Extinction-Based Shading and Illumination in GPU Volume Ray-Casting
10.1109/TVCG.2011.198
1. 1802
J
Direct volume rendering has become a popular method for visualizing volumetric datasets. Even though computers are continually getting faster, it remains a challenge to incorporate sophisticated illumination models into direct volume rendering while maintaining interactive frame rates. In this paper, we present a novel approach for advanced illumination in direct volume rendering based on GPU ray-casting. Our approach features directional soft shadows taking scattering into account, ambient occlusion and color bleeding effects while achieving very competitive frame rates. In particular, multiple dynamic lights and interactive transfer function changes are fully supported. Commonly, direct volume rendering is based on a very simplified discrete version of the original volume rendering integral, including the development of the original exponential extinction into α-blending. In contrast to α-blending forming a product when sampling along a ray, the original exponential extinction coefficient is an integral and its discretization a Riemann sum. The fact that it is a sum can cleverly be exploited to implement volume lighting effects, i.e. soft directional shadows, ambient occlusion and color bleeding. We will show how this can be achieved and how it can be implemented on the GPU.
Schlegel, P.;Makhinya, M.;Pajarola, R.
Dept. of Inf., Univ. of Zurich, Zurich, Switzerland|c|;;
10.1109/TVCG.2007.70555;10.1109/VISUAL.2002.1183764;10.1109/VISUAL.2003.1250384
Volume Rendering, Shadows, Ambient Occlusion, GPU Ray-Casting, Exponential Extinction
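For the Schlegel et al. abstract above, the distinction between α-blending's product and the Riemann-sum exponent can be made concrete with a small Python sketch (our illustration, not the paper's GPU code): because the optical depth is a sum of extinction samples, prefix sums give the transparency of any ray segment in constant time, which is the kind of additive structure that cheap shadow and ambient-occlusion lookups can exploit.

import numpy as np

tau = np.random.uniform(0.0, 2.0, size=128)         # extinction samples along one ray
dt = 0.01                                            # sampling step
prefix = np.concatenate(([0.0], np.cumsum(tau)))     # S_0 .. S_n, running Riemann sum

def segment_transparency(a, b):
    # T(a, b) = exp(-(S_b - S_a) * dt): transparency between sample indices a <= b
    return np.exp(-(prefix[b] - prefix[a]) * dt)

full_ray = segment_transparency(0, len(tau))         # whole ray
partial = segment_transparency(32, 96)               # any sub-segment, same prefix sums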
Vis
2011
Feature-Based Statistical Analysis of Combustion Simulation Data
10.1109/TVCG.2011.199
1. 1831
J
We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion science; however, it is applicable to many other science domains.
Bennett, J.C.;Krishnamoorthy, V.;Shusen Liu;Grout, R.W.;Hawkes, E.R.;Chen, J.H.;Shepherd, J.;Pascucci, V.;Bremer, P.-T.
Sandia Nat. Labs., Albuquerque, NM, USA|c|;;;;;;;;
10.1109/VISUAL.2004.96;10.1109/VISUAL.2003.1250386;10.1109/TVCG.2007.70603;10.1109/TVCG.2006.186;10.1109/VISUAL.1997.663875
Topology, Statistics, Data analysis, Data exploration, Visualization in Physical Sciences and Engineering, Multi-variate Data
Vis
2011
Features in Continuous Parallel Coordinates
10.1109/TVCG.2011.200
1. 1921
J
Continuous Parallel Coordinates (CPC) are a contemporary visualization technique for combining several scalar fields given over a common domain. They facilitate a continuous view for parallel coordinates by considering a smooth scalar field instead of a finite number of straight lines. We show that there are feature curves in CPC which appear to be the dominant structures of a CPC. We present methods to extract and classify them and demonstrate their usefulness to enhance the visualization of CPCs. In particular, we show that these feature curves are related to discontinuities in Continuous Scatterplots (CSP). We show this by exploiting a curve-curve duality between parallel and Cartesian coordinates, which is a generalization of the well-known point-line duality. Furthermore, we illustrate the theoretical considerations. Finally, we discuss relations and aspects of the CPC/CSP features concerning the data analysis.
Lehmann, D.J.;Theisel, H.
Dept. of Simulation & Graphics, Univ. of Magdeburg, Magdeburg, Germany|c|;
10.1109/TVCG.2008.119;10.1109/TVCG.2010.146;10.1109/VISUAL.1998.745284;10.1109/TVCG.2009.131;10.1109/VISUAL.1999.809896
Features, Parallel Coordinates, Topology, Visualization
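The curve-curve duality mentioned in the Lehmann and Theisel abstract generalizes the standard point-line duality, which can be written out as follows (a sketch in our own parameterization, with the two parallel axes placed at horizontal positions 0 and d):

\[
  (x_1, x_2) \;\longmapsto\; \text{segment from } (0, x_1) \text{ to } (d, x_2),
  \qquad
  x_2 = m\,x_1 + b \;\longmapsto\; \Bigl(\tfrac{d}{1-m},\, \tfrac{b}{1-m}\Bigr), \quad m \neq 1 ,
\]

since the segment height y(u) = x_1\bigl(1 + \tfrac{u}{d}(m-1)\bigr) + \tfrac{u}{d}\,b becomes independent of x_1 exactly at u = d/(1-m), so all points on the Cartesian line map to segments through that single dual point.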
Vis
2011
Flow Radar Glyphs—Static Visualization of Unsteady Flow with Uncertainty
10.1109/TVCG.2011.203
1. 1958
J
A new type of glyph is introduced to visualize unsteady flow with static images, allowing easier analysis of time-dependent phenomena compared to animated visualization. Adopting the visual metaphor of radar displays, this glyph represents flow directions by angles and time by radius in spherical coordinates. Dense seeding of flow radar glyphs on the flow domain naturally lends itself to multi-scale visualization: zoomed-out views show aggregated overviews, zooming-in enables detailed analysis of spatial and temporal characteristics. Uncertainty visualization is supported by extending the glyph to display possible ranges of flow directions. The paper focuses on 2D flow, but includes a discussion of 3D flow as well. Examples from CFD and the field of stochastic hydrogeology show that it is easy to discriminate regions of different spatiotemporal flow behavior and regions of different uncertainty variations in space and time. The examples also demonstrate that parameter studies can be analyzed because the glyph design facilitates comparative visualization. Finally, different variants of interactive GPU-accelerated implementations are discussed.
Hlawatsch, M.;Leube, P.;Nowak, W.;Weiskopf, D.
Visualization Res. Center (VISUS), Univ. of Stuttgart, Stuttgart, Germany|c|;;;
10.1109/VISUAL.2005.1532853;10.1109/VISUAL.1993.398849;10.1109/INFVIS.2001.963273;10.1109/VISUAL.2003.1250402;10.1109/VISUAL.1996.568139;10.1109/VISUAL.2005.1532857;10.1109/VISUAL.1991.175792;10.1109/VISUAL.1995.480819;10.1109/VISUAL.1996.568116;10.1109/TVCG.2009.182
Visualization, glyph, uncertainty, unsteady flow
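As a small illustration of the glyph construction described in the Hlawatsch et al. abstract (our sketch; names and the radius normalization are assumptions, not the paper's code), the Python below maps a time series of 2D flow vectors at one seed point to a radar-style polyline, with flow direction encoded as angle and time as radius:

import numpy as np

def flow_radar_polyline(vectors, r_min=0.2, r_max=1.0):
    # vectors: (T, 2) flow vectors over T time steps at one seed point.
    # Returns (T, 2) points in glyph space forming the radar polyline.
    vectors = np.asarray(vectors, dtype=float)
    radius = np.linspace(r_min, r_max, len(vectors))    # time mapped to radius
    angle = np.arctan2(vectors[:, 1], vectors[:, 0])    # direction mapped to angle
    return np.column_stack((radius * np.cos(angle), radius * np.sin(angle)))

# Example: a flow direction rotating steadily over 50 time steps.
phi = np.linspace(0.0, np.pi, 50)
glyph = flow_radar_polyline(np.column_stack((np.cos(phi), np.sin(phi))))

An uncertainty-aware variant, as described above, would replace the single angle per time step by an angular range.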
Vis
2011
FoamVis: Visualization of 2D Foam Simulation Data
10.1109/TVCG.2011.204
2. 2105
J
Research in the field of complex fluids such as polymer solutions, particulate suspensions and foams studies how the flow of fluids with different material parameters changes as a result of various constraints. Surface Evolver, the standard solver software used to generate foam simulations, provides large, complex, time-dependent data sets with hundreds or thousands of individual bubbles and thousands of time steps. However, this software has limited visualization capabilities, and no foam-specific visualization software exists. We describe the foam research application area where, we believe, visualization has an important role to play. We present a novel application that provides various techniques for visualization, exploration and analysis of time-dependent 2D foam simulation data. We show new features in foam simulation data and new insights into foam behavior discovered using our application.
Lipsa, D.R.;Laramee, R.S.;Cox, S.J.;Davies, I.T.
Swansea Univ., Swansea, UK|c|;;;
10.1109/TVCG.2008.147;10.1109/TVCG.2008.139
Surface Evolver, bubble-scale simulation, time-dependent visualizations
Vis
2011
GPU-Based Interactive Cut-Surface Extraction From High-Order Finite Element Fields
10.1109/TVCG.2011.206
1. 1811
J
We present a GPU-based ray-tracing system for the accurate and interactive visualization of cut-surfaces through 3D simulations of physical processes created from spectral/hp high-order finite element methods. When used by the numerical analyst to debug the solver, the ability for the imagery to precisely reflect the data is critical. In practice, the investigator interactively selects from a palette of visualization tools to construct a scene that can answer a query of the data. This is effective as long as the implicit contract of image quality between the individual and the visualization system is upheld. OpenGL rendering of scientific visualizations has worked remarkably well for exploratory visualization for most solver results. This is due to the consistency between the use of first-order representations in the simulation and the linear assumptions inherent in OpenGL (planar fragments and color-space interpolation). Unfortunately, the contract is broken when the solver discretization is of higher-order. There have been attempts to mitigate this through the use of spatial adaptation and/or texture mapping. These methods do a better job of approximating what the imagery should be but are not exact and tend to be view-dependent. This paper introduces new rendering mechanisms that specifically deal with the kinds of native data generated by high-order finite element solvers. The exploratory visualization tools are reassessed and cast in this system with the focus on image accuracy. This is accomplished in a GPU setting to ensure interactivity.
Nelson, B.;Haimes, R.;Kirby, R.M.
Sci. Comput. & Imaging Inst., Univ. of Utah, Salt Lake City, UT, USA|c|;;
10.1109/VISUAL.2005.1532776;10.1109/VISUAL.2004.91;10.1109/TVCG.2006.154
High-order finite elements, spectral/hp elements, cut-plane extraction, GPU-based root-finding, GPU ray-tracing, cut-surface extraction
Vis
2011
GPU-based Real-Time Approximation of the Ablation Zone for Radiofrequency Ablation
10.1109/TVCG.2011.207
1. 1821
J
Percutaneous radiofrequency ablation (RFA) is becoming a standard minimally invasive clinical procedure for the treatment of liver tumors. However, planning the applicator placement such that the malignant tissue is completely destroyed is a demanding task that requires considerable experience. In this work, we present a fast GPU-based real-time approximation of the ablation zone incorporating the cooling effect of liver vessels. Weighted distance fields of varying RF applicator types are derived from complex numerical simulations to allow a fast estimation of the ablation zone. Furthermore, the heat-sink effect of the cooling blood flow close to the applicator's electrode is estimated by means of a preprocessed thermal equilibrium representation of the liver parenchyma and blood vessels. Utilizing the graphics card, the weighted distance field incorporating the cooling blood flow is calculated using a modular shader framework, which facilitates the real-time visualization of the ablation zone in projected slice views and in volume rendering. The proposed methods are integrated in our software assistant prototype for planning RFA therapy. The software allows the physician to interactively place virtual RF applicator models. The real-time visualization of the corresponding approximated ablation zone facilitates interactive evaluation of the tumor coverage in order to optimize the applicator's placement such that all cancer cells are destroyed by the ablation.
Rieder, C.;Kroeger, T.;Schumann, C.;Hahn, H.K.
;;;
10.1109/TVCG.2010.208;10.1109/VISUAL.1998.745311;10.1109/VISUAL.2000.885694
Radiofrequency ablation, ablation zone visualization, distance field, volume rendering, GPU, interaction
Vis
2011
Hierarchical Event Selection for Video Storyboards with a Case Study on Snooker Video Visualization
10.1109/TVCG.2011.208
1. 1756
J
Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard: (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application-specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas.
Parry, M.L.;Legg, P.A.;Chung, D.H.S.;Griffiths, I.W.;Chen, M.
Dept. of Comput. Sci., Swansea Univ., Swansea, UK|c|;;;;
10.1109/TVCG.2008.185;10.1109/INFVIS.2004.27;10.1109/VISUAL.2003.1250401;10.1109/TVCG.2007.70544;10.1109/TVCG.2006.194
Multimedia visualization, Time series data, Illustrative visualization
Vis
2011
Image Plane Sweep Volume Illumination
10.1109/TVCG.2011.211
2. 2134
J
In recent years, many volumetric illumination models have been proposed, which have the potential to simulate advanced lighting effects and thus support improved image comprehension. Although volume ray-casting is widely accepted as the volume rendering technique which achieves the highest image quality, so far no volumetric illumination algorithm has been designed to be directly incorporated into the ray-casting process. In this paper we propose image plane sweep volume illumination (IPSVI), which allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. Thus, we are able to reduce the problem complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. Since all illumination computations are performed directly within a single rendering pass, IPSVI does not require any preprocessing nor does it need to store intermediate results within an illumination volume. It therefore has a significantly lower memory footprint than other techniques. This makes IPSVI directly applicable to large data sets. Furthermore, the integration into a GPU-based ray-caster allows for high image quality as well as improved rendering performance by exploiting early ray termination. This paper discusses the theory behind IPSVI, describes its implementation, demonstrates its visual results and provides performance measurements.
Sunden, E.;Ynnerman, A.;Ropinski, T.
Sci. Visualization Group, Linkoping Univ., Linkoping, Sweden|c|;;
10.1109/TVCG.2011.161;10.1109/VISUAL.2002.1183761;10.1109/VISUAL.2003.1250394;10.1109/VISUAL.2002.1183764;10.1109/TVCG.2007.70573;10.1109/VISUAL.2003.1250384;10.1109/TVCG.2009.164
Interactive volume rendering, GPU-based ray-casting, Advanced illumination
Vis
2011
Interactive Multiscale Tensor Reconstruction for Multiresolution Volume Visualization
10.1109/TVCG.2011.214
2. 2143
J
Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.
Suter, S.K.;Iglesias Guitian, J.A.;Marton, F.;Agus, M.;Elsener, A.;Zollikofer, C.P.E.;Gopi, M.;Gobbetti, E.;Pajarola, R.
Univ. of Zurich, Zurich, Switzerland|c|;;;;;;;;
10.1109/VISUAL.2002.1183757;10.1109/VISUAL.1997.663900;10.1109/TVCG.2007.70516;10.1109/VISUAL.1998.745311;10.1109/TVCG.2006.146;10.1109/VISUAL.2003.1250385
GPU/CUDA, multiscale, tensor reconstruction, interactive volume visualization, multiresolution rendering
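The Suter et al. abstract above centers on reconstructing volume bricks from a tensor approximation. A minimal NumPy sketch of a rank-reduced Tucker-style reconstruction is given below for orientation; it is our simplification (dense mode products via einsum), not the authors' CUDA kernel or their quantization scheme.

import numpy as np

def tucker_reconstruct(core, U1, U2, U3):
    # Brick = core x1 U1 x2 U2 x3 U3 (successive mode products)
    b = np.einsum('abc,ia->ibc', core, U1)
    b = np.einsum('ibc,jb->ijc', b, U2)
    b = np.einsum('ijc,kc->ijk', b, U3)
    return b

# Example: a 64^3 brick approximated with a rank-(8, 8, 8) core.
rng = np.random.default_rng(0)
core = rng.standard_normal((8, 8, 8))
U1, U2, U3 = (rng.standard_normal((64, 8)) for _ in range(3))
brick = tucker_reconstruct(core, U1, U2, U3)   # shape (64, 64, 64)

Storing only the core and factor matrices instead of the full brick is what yields the reduced memory footprint and transfer bandwidth discussed in the abstract.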
Vis
2011
Interactive Virtual Probing of 4D MRI Blood-Flow
10.1109/TVCG.2011.215
2. 2162
J
Better understanding of hemodynamics conceivably leads to improved diagnosis and prognosis of cardiovascular diseases. Therefore, an elaborate analysis of the blood-flow in heart and thoracic arteries is essential. Contemporary MRI techniques enable acquisition of quantitative time-resolved flow information, resulting in 4D velocity fields that capture the blood-flow behavior. Visual exploration of these fields provides comprehensive insight into the unsteady blood-flow behavior, and precedes a quantitative analysis of additional blood-flow parameters. The complete inspection requires accurate segmentation of anatomical structures, encompassing a time-consuming and hard-to-automate process, especially for malformed morphologies. We present a way to avoid the laborious segmentation process in case of qualitative inspection, by introducing an interactive virtual probe. This probe is positioned semi-automatically within the blood-flow field, and serves as a navigational object for visual exploration. The difficult task of determining position and orientation along the view-direction is automated by a fitting approach, aligning the probe with the orientations of the velocity field. The aligned probe provides an interactive seeding basis for various flow visualization approaches. We demonstrate illustration-inspired particles, integral lines and integral surfaces, conveying distinct characteristics of the unsteady blood-flow. Lastly, we present the results of an evaluation with domain experts, valuing the practical use of our probe and flow visualization techniques.
van Pelt, R.;Olivan Bescos, J.;Breeuwer, M.;Clough, R.E.;Groller, E.;ter Haar Romenij, B.;Vilanova, A.
Dept. of Biomed. Eng., Eindhoven Univ. of Technol., Eindhoven, Netherlands|c|;;;;;;
10.1109/VISUAL.1993.398849;10.1109/TVCG.2009.154;10.1109/TVCG.2010.173;10.1109/TVCG.2010.153;10.1109/TVCG.2007.70576;10.1109/TVCG.2008.133;10.1109/TVCG.2009.138;10.1109/VISUAL.2005.1532847;10.1109/TVCG.2010.166;10.1109/VISUAL.2005.1532857
Probing, Flow visualization, Illustrative visualization, Multivalued images, Phase-contrast cine MRI
Vis
2011
Interactive Volume Visualization of General Polyhedral Grids
10.1109/TVCG.2011.216
2. 2124
J
This paper presents a novel framework for visualizing volumetric data specified on complex polyhedral grids, without the need to perform any kind of a priori tetrahedralization. These grids are composed of polyhedra that often are non-convex and have an arbitrary number of faces, where the faces can be non-planar with an arbitrary number of vertices. The importance of such grids in state-of-the-art simulation packages is increasing rapidly. We propose a very compact, face-based data structure for representing such meshes for visualization, called two-sided face sequence lists (TSFSL), as well as an algorithm for direct GPU-based ray-casting using this representation. The TSFSL data structure is able to represent the entire mesh topology in a 1D TSFSL data array of face records, which facilitates the use of efficient 1D texture accesses for visualization. In order to scale to large data sizes, we employ a mesh decomposition into bricks that can be handled independently, where each brick is then composed of its own TSFSL array. This bricking enables memory savings and performance improvements for large meshes. We illustrate the feasibility of our approach with real-world application results, by visualizing highly complex polyhedral data from commercial state-of-the-art simulation packages.
Muigg, P.;Hadwiger, M.;Doleisch, H.;Groller, E.
Vienna Univ. of Technol., Vienna, Austria|c|;;;
10.1109/VISUAL.2005.1532796;10.1109/TVCG.2006.171;10.1109/TVCG.2007.70588;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.2001.964514;10.1109/TVCG.2006.154;10.1109/VISUAL.2005.1532850;10.1109/VISUAL.2001.964511;10.1109/VISUAL.1999.809908
Volume rendering, unstructured grids, polyhedral grids, GPU-based visualization
Vis
2011
Interactive, Graph-based Visual Analysis of High-dimensional, Multi-parameter Fluorescence Microscopy Data in Toponomics
10.1109/TVCG.2011.217
1. 1891
J
In Toponomics, the functional protein pattern in cells or tissue (the toponome) is imaged and analyzed for applications in toxicology, new drug development and patient-drug-interaction. The most advanced imaging technique is robot-driven multi-parameter fluorescence microscopy. This technique is capable of co-mapping hundreds of proteins and their distribution and assembly in protein clusters across a cell or tissue sample by running cycles of fluorescence tagging with monoclonal antibodies or other affinity reagents, imaging, and bleaching in situ. The imaging results in complex multi-parameter data composed of one slice or a 3D volume per affinity reagent. Biologists are particularly interested in the localization of co-occurring proteins, the frequency of co-occurrence and the distribution of co-occurring proteins across the cell. We present an interactive visual analysis approach for the evaluation of multi-parameter fluorescence microscopy data in toponomics. Multiple, linked views facilitate the definition of features by brushing multiple dimensions. The feature specification result is linked to all views establishing a focus+context visualization in 3D. In a new attribute view, we integrate techniques from graph visualization. Each node in the graph represents an affinity reagent while each edge represents two co-occurring affinity reagent bindings. The graph visualization is enhanced by glyphs which encode specific properties of the binding. The graph view is equipped with brushing facilities. By brushing in the spatial and attribute domain, the biologist achieves a better understanding of the functional protein patterns of a cell. Furthermore, an interactive table view is integrated which summarizes unique fluorescence patterns. We discuss our approach with respect to a cell probe containing lymphocytes and a prostate tissue section.
Oeltze, S.;Freiler, W.;Hillert, R.;Doleisch, H.;Preim, B.;Schubert, W.
Univ. of Magdeburg, Magdeburg, Germany|c|;;;;;
10.1109/VAST.2009.5333911;10.1109/TVCG.2006.195;10.1109/TVCG.2006.147;10.1109/TVCG.2007.70569;10.1109/TVCG.2009.167
Visual Analytics, Fluorescence Microscopy, Toponomics, Protein Interaction, Graph Visualization
Vis
2011
iView: A Feature Clustering Framework for Suggesting Informative Views in Volume Visualization
10.1109/TVCG.2011.218
1. 1968
J
The unguided visual exploration of volumetric data can be both a challenging and a time-consuming undertaking. Identifying a set of favorable vantage points at which to start exploratory expeditions can greatly reduce this effort and can also ensure that no important structures are being missed. Recent research efforts have focused on entropy-based viewpoint selection criteria that depend on scalar values describing the structures of interest. In contrast, we propose a viewpoint suggestion pipeline that is based on feature-clustering in high-dimensional space. We use gradient/normal variation as a metric to identify interesting local events and then cluster these via k-means to detect important salient composite features. Next, we compute the maximum possible exposure of these composite features for different viewpoints and calculate a 2D entropy map parameterized in longitude and latitude to point out promising view orientations. Superimposed onto an interactive track-ball interface, users can then directly use this entropy map to quickly navigate to potentially interesting viewpoints where visibility-based transfer functions can be employed to generate volume renderings that minimize occlusions. To give full exploration freedom to the user, the entropy map is updated on the fly whenever a view has been selected, pointing to new and promising but so far unseen view directions. Alternatively, our system can also use a set-cover optimization algorithm to provide a minimal set of views needed to observe all features. The views so generated could then be saved into a list for further inspection or into a gallery for a summary presentation.
Ziyi Zheng;Ahmed, N.;Mueller, K.
Comput. Sci. Dept., Stony Brook Univ., Stony Brook, NY, USA|c|;;
10.1109/TVCG.2009.156;10.1109/TVCG.2007.70576;10.1109/TVCG.2008.162;10.1109/TVCG.2008.159;10.1109/TVCG.2010.214;10.1109/TVCG.2009.172;10.1109/VISUAL.2005.1532833;10.1109/VISUAL.2005.1532818;10.1109/TVCG.2006.124;10.1109/TVCG.2009.185;10.1109/TVCG.2009.189;10.1109/VISUAL.2003.1250414;10.1109/VISUAL.2005.1532834
Direct volume rendering, k-means, entropy, view suggestion, set-cover problem, ant colony optimization
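To make the 2D entropy map of the iView pipeline above concrete, the following Python sketch (our assumptions about data layout, not the authors' system) computes a view entropy per longitude/latitude bin from the exposure of each k-means feature cluster; the viewpoint where the entropy is maximal is a natural first suggestion.

import numpy as np

def view_entropy_map(exposure, eps=1e-12):
    # exposure: (n_lon, n_lat, n_features) visible exposure of each composite
    # feature from each view direction; returns (n_lon, n_lat) entropies.
    p = exposure / (exposure.sum(axis=-1, keepdims=True) + eps)
    return -(p * np.log2(p + eps)).sum(axis=-1)

# Example: 36 x 18 view grid, 5 composite features with random exposures.
rng = np.random.default_rng(1)
entropy = view_entropy_map(rng.random((36, 18, 5)))
best_lon, best_lat = np.unravel_index(np.argmax(entropy), entropy.shape)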