IEEE VIS Publication Dataset

Vis
2008
Particle-based Sampling and Meshing of Surfaces in Multimaterial Volumes
10.1109/TVCG.2008.154
1. 1546
J
Methods that faithfully and robustly capture the geometry of complex material interfaces in labeled volume data are important for generating realistic and accurate visualizations and simulations of real-world objects. The generation of such multimaterial models from measured data poses two unique challenges: first, the surfaces must be well-sampled with regular, efficient tessellations that are consistent across material boundaries; and second, the resulting meshes must respect the nonmanifold geometry of the multimaterial interfaces. This paper proposes a strategy for sampling and meshing multimaterial volumes using dynamic particle systems, including a novel, differentiable representation of the material junctions that allows the particle system to explicitly sample corners, edges, and surfaces of material intersections. The distributions of particles are controlled by fundamental sampling constraints, allowing Delaunay-based meshing algorithms to reliably extract watertight meshes of consistently high-quality.
Meyer, M.;Whitaker, R.T.;Kirby, R.M.;Ledergerber, C.;Pfister, H.
Initiative in Innovative Comput., Harvard Univ., Cambridge, MA|c|;;;;
10.1109/VISUAL.2002.1183808;10.1109/TVCG.2007.70604;10.1109/VISUAL.1997.663930;10.1109/TVCG.2007.70572;10.1109/TVCG.2007.70543;10.1109/TVCG.2006.149;10.1109/VISUAL.1997.663887
Sampling, meshing, visualizations
Vis
2008
Query-Driven Visualization of Time-Varying Adaptive Mesh Refinement Data
10.1109/TVCG.2008.157
1. 1722
J
The visualization and analysis of AMR-based simulations is integral to the process of obtaining new insight in scientific research. We present a new method for performing query-driven visualization and analysis on AMR data, with specific emphasis on time-varying AMR data. Our work introduces a new method that directly addresses the dynamic spatial and temporal properties of AMR grids that challenge many existing visualization techniques. Further, we present the first implementation of query-driven visualization on the GPU that uses a GPU-based indexing structure to both answer queries and efficiently utilize GPU memory. We apply our method to two different science domains to demonstrate its broad applicability.
Gosink, L.;Anderson, J.C.;Bethel, E.W.;Joy, K.I.
Inst. for Data Anal. & Visualization, Univ. of California, Davis, CA|c|;;;
10.1109/VISUAL.2000.885704;10.1109/VISUAL.2005.1532792;10.1109/VAST.2006.261437;10.1109/VISUAL.2002.1183820;10.1109/VISUAL.2003.1250402;10.1109/TVCG.2007.70519;10.1109/VISUAL.1993.398869;10.1109/VISUAL.2005.1532793
AMR, Query-Driven Visualization, Multitemporal Visualization
Vis
2008
Relation-Aware Volume Exploration Pipeline
10.1109/TVCG.2008.159
1. 1690
J
Volume exploration is an important issue in scientific visualization. Research on volume exploration has been focused on revealing hidden structures in volumetric data. While the information of individual structures or features is useful in practice, spatial relations between structures are also important in many applications and can provide further insights into the data. In this paper, we systematically study the extraction, representation, exploration, and visualization of spatial relations in volumetric data and propose a novel relation-aware visualization pipeline for volume exploration. In our pipeline, various relations in the volume are first defined and measured using region connection calculus (RCC) and then represented using a graph interface called relation graph. With RCC and the relation graph, relation query and interactive exploration can be conducted in a comprehensive and intuitive way. The visualization process is further assisted with relation-revealing viewpoint selection and color and opacity enhancement. We also introduce a quality assessment scheme which evaluates the perception of spatial relations in the rendered images. Experiments on various datasets demonstrate the practical use of our system in exploratory visualization.
Ming-Yuen Chan;Huamin Qu;Ka-Kei Chung;Wai-Ho Mak;Yingcai Wu
Dept. of Comput. Sci. & Eng., Hong Kong Univ. of Sci. & Technol., Hong Kong|c|;;;;
10.1109/TVCG.2007.70584;10.1109/TVCG.2007.70515;10.1109/TVCG.2006.144;10.1109/VISUAL.1999.809871;10.1109/TVCG.2007.70535;10.1109/TVCG.2007.70576;10.1109/VISUAL.2000.885694;10.1109/INFVIS.2003.1249009;10.1109/TVCG.2007.70555;10.1109/VISUAL.2005.1532835;10.1109/VISUAL.2005.1532788;10.1109/TVCG.2007.70591;10.1109/VISUAL.2005.1532834;10.1109/VISUAL.2005.1532856;10.1109/TVCG.2007.70572;10.1109/VISUAL.2005.1532833
Exploratory Visualization, Relation-Based Visualization, Visualization Pipeline
Vis
2008
Revisiting Histograms and Isosurface Statistics
10.1109/TVCG.2008.160
1. 1666
J
Recent results have shown a link between geometric properties of isosurfaces and statistical properties of the underlying sampled data. However, this has two defects: not all of the properties described converge to the same solution, and the statistics computed are not always invariant under isosurface-preserving transformations. We apply Federer's Coarea Formula from geometric measure theory to explain these discrepancies. We describe an improved substitute for histograms based on weighting with the inverse gradient magnitude, develop a statistical model that is invariant under isosurface-preserving transformations, and argue that this provides a consistent method for algorithm evaluation across multiple datasets based on histogram equalization. We use our corrected formulation to reevaluate recent results on average isosurface complexity, and show evidence that noise is one cause of the discrepancy between the expected figure and the observed one.
Scheidegger, C.E.;Schreiner, J.;Duffy, B.;Carr, H.;Silva, C.T.
Inst. of Sci. Comput. & Imaging, Utah Univ., Salt Lake City, UT|c|;;;;
10.1109/TVCG.2006.168;10.1109/TVCG.2008.119;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2001.964515;10.1109/VISUAL.1997.663875;10.1109/VISUAL.2001.964516
Isosurfaces, Histograms, Coarea Formula
Vis
2008
Sinus Endoscopy - Application of Advanced GPU Volume Rendering for Virtual Endoscopy
10.1109/TVCG.2008.161
1. 1498
J
For difficult cases in endoscopic sinus surgery, a careful planning of the intervention is necessary. Due to the reduced field of view during the intervention, the surgeons have less information about the surrounding structures in the working area compared to open surgery. Virtual endoscopy enables the visualization of the operating field and additional information, such as risk structures (e.g., optical nerve and skull base) and target structures to be removed (e.g., mucosal swelling). The Sinus Endoscopy system provides the functional range of a virtual endoscopic system with special focus on a realistic representation. Furthermore, by using direct volume rendering, we avoid time-consuming segmentation steps for the use of individual patient datasets. However, the image quality of the endoscopic view can be adjusted in a way that a standard computer with a modern standard graphics card achieves interactive frame rates with low CPU utilization. Thereby, characteristics of the endoscopic view are systematically used for the optimization of the volume rendering speed. The system design was based on a careful analysis of the endoscopic sinus surgery and the resulting needs for computer support. As a small standalone application it can be instantly used for surgical planning and patient education. First results of a clinical evaluation with ENT surgeons were employed to fine-tune the user interface, in particular to reduce the number of controls by using appropriate default values wherever possible. The system was used for preoperative planning in 102 cases, provides useful information for intervention planning (e.g., anatomic variations of the Rec. Frontalis), and closely resembles the intraoperative situation.
Kruger, A.;Kubisch, C.;Strauss, G.;Preim, B.
Dept. of Simulation & Graphics, Otto-von-Guericke-Univ. of Magdeburg, Magdeburg|c|;;;
10.1109/VISUAL.2003.1250370;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2004.98
medical visualization, sinus surgery, operation planning, virtual endoscopy, volume rendering
Vis
2008
Size-based Transfer Functions: A New Volume Exploration Technique
10.1109/TVCG.2008.162
1. 1387
J
The visualization of complex 3D images remains a challenge, a fact that is magnified by the difficulty to classify or segment volume data. In this paper, we introduce size-based transfer functions, which map the local scale of features to color and opacity. Features in a data set with similar or identical scalar values can be classified based on their relative size. We achieve this with the use of scale fields, which are 3D fields that represent the relative size of the local feature at each voxel. We present a mechanism for obtaining these scale fields at interactive rates, through a continuous scale-space analysis and a set of detection filters. Through a number of examples, we show that size-based transfer functions can improve classification and enhance volume rendering techniques, such as maximum intensity projection. The ability to classify objects based on local size at interactive rates proves to be a powerful method for complex data exploration.
Correa, C.;Kwan-Liu Ma
Univ. of California, Davis, CA|c|;
10.1109/VISUAL.2003.1250414;10.1109/VISUAL.1999.809932;10.1109/VISUAL.2000.885694;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2004.64;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.1995.480812;10.1109/VISUAL.2003.1250369;10.1109/VISUAL.2005.1532817
Transfer Functions, Interactive Visualization, Volume Rendering, Scale Space, GPU Techniques
Vis
2008
Smoke Surfaces: An Interactive Flow Visualization Technique Inspired by Real-World Flow Experiments
10.1109/TVCG.2008.163
1. 1403
J
Smoke rendering is a standard technique for flow visualization. Most approaches are based on a volumetric, particle based, or image based representation of the smoke. This paper introduces an alternative representation of smoke structures: as semi-transparent streak surfaces. In order to make streak surface integration fast enough for interactive applications, we avoid expensive adaptive retriangulations by coupling the opacity of the triangles to their shapes. This way, the surface shows a smoke-like look even in rather turbulent areas. Furthermore, we show modifications of the approach to mimic smoke nozzles, wool tufts, and time surfaces. The technique is applied to a number of test data sets.
von Funck, W.;Weinkauf, T.;Theisel, H.;Seidel, H.-P.
MPI Informatik, Saarbrucken|c|;;;
10.1109/VISUAL.1995.485141;10.1109/VISUAL.1993.398846;10.1109/VISUAL.1992.235211;10.1109/VISUAL.2001.964506;10.1109/VISUAL.1993.398877
Unsteady flow visualization, streak surfaces, smoke visualization
Vis
2008
Smooth Surface Extraction from Unstructured Point-based Volume Data Using PDEs
10.1109/TVCG.2008.164
1. 1546
J
Smooth surface extraction using partial differential equations (PDEs) is a well-known and widely used technique for visualizing volume data. Existing approaches operate on gridded data and mainly on regular structured grids. When considering unstructured point-based volume data where sample points do not form regular patterns nor are they connected in any form, one would typically resample the data over a grid prior to applying the known PDE-based methods. We propose an approach that directly extracts smooth surfaces from unstructured point-based volume data without prior resampling or mesh generation. When operating on unstructured data one needs to quickly derive neighborhood information. The respective information is retrieved by partitioning the 3D domain into cells using a kd-tree and operating on its cells. We exploit neighborhood information to estimate gradients and mean curvature at every sample point using a four-dimensional least-squares fitting approach. Gradients and mean curvature are required for applying the chosen PDE-based method that combines hyperbolic advection to an isovalue of a given scalar field and mean curvature flow. Since we are using an explicit time-integration scheme, time steps and neighbor locations are bounded to ensure convergence of the process. To avoid small global time steps, one can use asynchronous local integration. We extract a smooth surface by successively fitting a smooth auxiliary function to the data set. This auxiliary function is initialized as a signed distance function. For each sample and for every time step we compute the respective gradient, the mean curvature, and a stable time step. With this information, the auxiliary function is manipulated using an explicit Euler time integration. The process successively continues with the next sample point in time. If the norm of the auxiliary function gradient in a sample exceeds a given threshold at some time, the auxiliary function is reinitialized to a signed distance function. After convergence of the evolution, the resulting smooth surface is obtained by extracting the zero isosurface from the auxiliary function using direct isosurface extraction from unstructured point-based volume data and rendering the extracted surface using point-based rendering methods.
Rosenthal, P.;Linsen, L.
Jacobs Univ. Bremen, Bremen|c|;
10.1109/VISUAL.2002.1183773;10.1109/VISUAL.2003.1250357
PDEs, surface extraction, level sets, point-based visualization
Vis
2008
Surface Extraction from Multi-field Particle Volume Data Using Multi-dimensional Cluster Visualization
10.1109/TVCG.2008.167
1. 1490
J
Data sets resulting from physical simulations typically contain a multitude of physical variables. It is, therefore, desirable that visualization methods take into account the entire multi-field volume data rather than concentrating on one variable. We present a visualization approach based on surface extraction from multi-field particle volume data. The surfaces segment the data with respect to the underlying multi-variate function. Decisions on segmentation properties are based on the analysis of the multi-dimensional feature space. The feature space exploration is performed by an automated multi-dimensional hierarchical clustering method, whose resulting density clusters are shown in the form of density level sets in a 3D star coordinate layout. In the star coordinate layout, the user can select clusters of interest. A selected cluster in feature space corresponds to a segmenting surface in object space. Based on the segmentation property induced by the cluster membership, we extract a surface from the volume data. Our driving applications are smoothed particle hydrodynamics (SPH) simulations, where each particle carries multiple properties. The data sets are given in the form of unstructured point-based volume data. We directly extract our surfaces from such data without prior resampling or grid generation. The surface extraction computes individual points on the surface, which is supported by an efficient neighborhood computation. The extracted surface points are rendered using point-based rendering operations. Our approach combines methods in scientific visualization for object-space operations with methods in information visualization for feature-space operations.
Linsen, L.;Van Long, T.;Rosenthal, P.;Rosswog, S.
Sch. of Eng. & Sci., Jacobs Univ., Bremen|c|;;;
10.1109/TVCG.2007.70615;10.1109/TVCG.2006.164;10.1109/TVCG.2007.70569;10.1109/TVCG.2006.165;10.1109/TVCG.2007.70526
Multi-field and multi-variate visualization, isosurfaces and surface extraction, point-based visualization, star coordinates, visualization in astrophysics, particle simulations
Vis
2008
Text Scaffolds for Effective Surface Labeling
10.1109/TVCG.2008.168
1. 1682
J
In this paper we introduce a technique for applying textual labels to 3D surfaces. An effective labeling must balance the conflicting goals of conveying the shape of the surface while being legible from a range of viewing directions. Shape can be conveyed by placing the text as a texture directly on the surface, providing shape cues, meaningful landmarks and minimally obstructing the rest of the model. But rendering such surface text is problematic both in regions of high curvature, where text would be warped, and in highly occluded regions, where it would be hidden. Our approach achieves both labeling goals by applying surface labels to a 'text scaffold', a surface explicitly constructed to hold the labels. Text scaffolds conform to the underlying surface whenever possible, but can also float above problem regions, allowing them to be smooth while still conveying the overall shape. This paper provides methods for constructing scaffolds from a variety of input sources, including meshes, constructive solid geometry, and scalar fields. These sources are first mapped into a distance transform, which is then filtered and used to construct a new mesh on which labels are either manually or automatically placed. In the latter case, annotated regions of the input surface are associated with proximal regions on the new mesh, and labels placed using cartographic principles.
Cipriano, G.;Gleicher, M.
Dept. of Comput. Sci., Wisconsin Univ., Madison, WI|c|;
10.1109/VISUAL.2000.885705
surface labeling, computational cartography, text authoring, annotation
Vis
2008
Texture-based Transfer Functions for Direct Volume Rendering
10.1109/TVCG.2008.169
1. 1371
J
Visualization of volumetric data faces the difficult task of finding effective parameters for the transfer functions. Those parameters can determine the effectiveness and accuracy of the visualization. Frequently, volumetric data includes multiple structures and features that need to be differentiated. However, if those features have the same intensity and gradient values, existing transfer functions are limited at effectively illustrating those similar features with different rendering properties. We introduce texture-based transfer functions for direct volume rendering. In our approach, the voxel's resulting opacity and color are based on local textural properties rather than individual intensity values. For example, if the intensity values of the vessels are similar to those on the boundary of the lungs, our texture-based transfer function will analyze the textural properties in those regions and color them differently even though they have the same intensity values in the volume. The use of texture-based transfer functions has several benefits. First, structures and features with the same intensity and gradient values can be automatically visualized with different rendering properties. Second, segmentation or prior knowledge of the specific features within the volume is not required for classifying these features differently. Third, textural metrics can be combined and/or maximized to capture and better differentiate similar structures. We demonstrate our texture-based transfer function for direct volume rendering with synthetic and real-world medical data to show the strength of our technique.
Caban, J.J.;Rheingans, P.
Dept. of Comput. Sci., Maryland Univ., College Park, MD|c|;
10.1109/VISUAL.2001.964519;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.2003.1250414
visualization, statistical analysis, volume rendering, data variability, medical imaging
Vis
2008
The Seismic Analyzer: Interpreting and Illustrating 2D Seismic Data
10.1109/TVCG.2008.170
1. 1578
J
We present a toolbox for quickly interpreting and illustrating 2D slices of seismic volumetric reflection data. Searching for oil and gas involves creating a structural overview of seismic reflection data to identify hydrocarbon reservoirs. We improve the search of seismic structures by precalculating the horizon structures of the seismic data prior to interpretation. We improve the annotation of seismic structures by applying novel illustrative rendering algorithms tailored to seismic data, such as deformed texturing and line and texture transfer functions. The illustrative rendering results in multi-attribute and scale invariant visualizations where features are represented clearly in both highly zoomed in and zoomed out views. Thumbnail views in combination with interactive appearance control allow for a quick overview of the data before detailed interpretation takes place. These techniques help reduce the work of seismic illustrators and interpreters.
Patel, D.;Giertsen, C.;Thurmond, J.;Gjelberg, J.;Groller, E.
Christian Michelsen Res., Bergen|c|;;;;
10.1109/VISUAL.1999.809905;10.1109/VISUAL.2005.1532802;10.1109/VISUAL.1991.175811
Seismic interpretation, Illustrative rendering, Seismic attributes, Top-down interpretation
Vis
2008
Vectorized Radviz and Its Application to Multiple Cluster Datasets
10.1109/TVCG.2008.173
1. 1427
J
Radviz is a radial visualization with dimensions assigned to points called dimensional anchors (DAs) placed on the circumference of a circle. Records are assigned locations within the circle as a function of their relative attraction to each of the DAs. The DAs can be moved either interactively or algorithmically to reveal different meaningful patterns in the dataset. In this paper we describe Vectorized Radviz (VRV), which extends the number of dimensions through data flattening. We show how VRV increases the power of Radviz through these extra dimensions by enhancing the flexibility in the layout of the DAs. We apply VRV to the problem of analyzing the results of multiple clusterings of the same data set, called multiple cluster sets or cluster ensembles. We show how features of VRV help discern patterns across the multiple cluster sets. We use the Iris data set to explain VRV and a newt gene microarray data set used in studying limb regeneration to show its utility. We then discuss further applications of VRV.
Sharko, J.;Grinstein, G.;Marx, K.
Dept. of Comput. Sci., Univ. of Massachusetts - Lowell, Lowell, MA|c|;;
10.1109/INFVIS.2004.15;10.1109/VISUAL.1997.663916;10.1109/INFVIS.1998.729559
Visualization, Radviz, Vectorized Radviz, Clustering, Multiple Clustering, Cluster Ensembles, Flattening Datasets
Vis
2008
VisComplete: Automating Suggestions for Visualization Pipelines
10.1109/TVCG.2008.174
1. 1698
J
Building visualization and analysis pipelines is a large hurdle in the adoption of visualization and workflow systems by domain scientists. In this paper, we propose techniques to help users construct pipelines by consensus: automatically suggesting completions based on a database of previously created pipelines. In particular, we compute correspondences between existing pipeline subgraphs from the database, and use these to predict sets of likely pipeline additions to a given partial pipeline. By presenting these predictions in a carefully designed interface, users can create visualizations and other data products more efficiently because they can augment their normal work patterns with the suggested completions. We present an implementation of our technique in a publicly-available, open-source scientific workflow system and demonstrate efficiency gains in real-world situations.
Koop, D.;Scheidegger, C.E.;Callahan, S.P.;Freire, J.;Silva, C.T.
Sch. of Comput., Univ. of Utah, Salt Lake City, UT|c|;;;;
10.1109/TVCG.2007.70584;10.1109/TVCG.2007.70577;10.1109/VISUAL.2005.1532834;10.1109/VISUAL.2005.1532833;10.1109/VISUAL.2005.1532788;10.1109/VISUAL.2005.1532795
Scientific Workflows, Scientific Visualization, Auto Completion
Vis
2008
Visibility-driven Mesh Analysis and Visualization through Graph Cuts
10.1109/TVCG.2008.176
1. 1674
J
In this paper we present an algorithm that operates on a triangular mesh and classifies each face as either inside or outside. We present three example applications of this core algorithm: normal orientation, inside removal, and layer-based visualization. The distinguishing feature of our algorithm is its robustness even if a difficult input model that includes holes, coplanar triangles, intersecting triangles, and lost connectivity is given. Our algorithm works with the original triangles of the input model and uses sampling to construct a visibility graph that is then segmented using graph cut.
Zhou, K.;Zhang, E.;Bittner, J.;Wonka, P.
Arizona State Univ., Tempe, AZ|c|;;;
10.1109/VISUAL.2002.1183784
Interior/Exterior Classification, Normal Orientation, Layer Classification, Inside Removal, Graph Cut
Vis
2008
Visiting the Gödel Universe
10.1109/TVCG.2008.177
1. 1570
J
Visualization of general relativity illustrates aspects of Einstein's insights into the curved nature of space and time to the expert as well as the layperson. One of the most interesting models which came up with Einstein's theory was developed by Kurt Gödel in 1949. The Gödel universe is a valid solution of Einstein's field equations, making it a possible physical description of our universe. It offers remarkable features like the existence of an optical horizon beyond which time travel is possible. Although we know that our universe is not a Gödel universe, it is interesting to visualize physical aspects of a world model resulting from a theory which is highly confirmed in scientific history. Standard techniques to adopt an egocentric point of view in a relativistic world model have shortcomings with respect to the time needed to render an image as well as difficulties in applying a direct illumination model. In this paper we want to face both issues to reduce the gap between common visualization standards and relativistic visualization. We will introduce two techniques to speed up recalculation of images by means of preprocessing and lookup tables and to increase image quality through a special optimization applicable to the Gödel universe. The first technique allows the physicist to understand the different effects of general relativity faster and better by generating images from existing datasets interactively. By using the intrinsic symmetries of Gödel's spacetime which are expressed by the Killing vector field, we are able to reduce the necessary calculations to simple cases using the second technique. This even makes it feasible to account for a direct illumination model during the rendering process. Although the presented methods are applied to Gödel's universe, they can also be extended to other manifolds, for example light propagation in moving dielectric media. Therefore, other areas of research can benefit from these generic improvements.
Grave, F.;Buser, M.
Inst. for Theor. Phys. & VISUS, Univ. of Stuttgart, Stuttgart|c|;
10.1109/TVCG.2006.176;10.1109/VISUAL.2005.1532803;10.1109/TVCG.2007.70530
General relativity, Gödel universe, nonlinear ray tracing, time travel
Vis
2008
Visualization of Cellular and Microvascular Relationships
10.1109/TVCG.2008.179
1. 1618
J
Understanding the structure of microvasculature structures and their relationship to cells in biological tissue is an important and complex problem. Brain microvasculature in particular is known to play an important role in chronic diseases. However, these networks are only visible at the microscopic level and can span large volumes of tissue. Due to recent advances in microscopy, large volumes of data can be imaged at the resolution necessary to reconstruct these structures. Due to the dense and complex nature of microscopy data sets, it is important to limit the amount of information displayed. In this paper, we describe methods for encoding the unique structure of microvascular data, allowing researchers to selectively explore microvascular anatomy. We also identify the queries most useful to researchers studying microvascular and cellular relationships. By associating cellular structures with our microvascular framework, we allow researchers to explore interesting anatomical relationships in dense and complex data sets.
Mayerich, D.;Abbott, L..;Keyser, J.
Dept. of Comput. Sci., Texas A&M Univ., College Station, TX|c|;;
10.1109/VISUAL.2005.1532859;10.1109/VISUAL.1997.663917;10.1109/TVCG.2006.197;10.1109/TVCG.2007.70532
microscopy, biomedical, medical, blood vessels, cells
Vis
2008
Visualization of Myocardial Perfusion Derived from Coronary Anatomy
10.1109/TVCG.2008.180
1. 1602
J
Visually assessing the effect of the coronary artery anatomy on the perfusion of the heart muscle in patients with coronary artery disease remains a challenging task. We explore the feasibility of visualizing this effect on perfusion using a numerical approach. We perform a computational simulation of the way blood is perfused throughout the myocardium purely based on information from a three-dimensional anatomical tomographic scan. The results are subsequently visualized using both three-dimensional visualizations and bull's eye plots, partially inspired by approaches currently common in medical practice. Our approach results in a comprehensive visualization of the coronary anatomy that compares well to visualizations commonly used for other scanning technologies. We demonstrate techniques giving detailed insight in blood supply, coronary territories and feeding coronary arteries of a selected region. We demonstrate the advantages of our approach through visualizations that show information which commonly cannot be directly observed in scanning data, such as a separate visualization of the supply from each coronary artery. We thus show that the results of a computational simulation can be effectively visualized and facilitate visually correlating these results to, for example, perfusion data.
Termeer, M.;Bescos, J.O.;Breeuwer, M.;Vilanova, A.;Gerritsen, F.;Groller, E.;Nagel, E.
Vienna Univ. of Technol., Vienna|c|;;;;;;
10.1109/TVCG.2007.70550;10.1109/VISUAL.2002.1183754
Cardiac visualization, coronary artery territories, myocardial perfusion
Vis
2008
Visualizing Multiwavelength Astrophysical Data
10.1109/TVCG.2008.182
1. 1562
J
With recent advances in the measurement technology for allsky astrophysical imaging, our view of the sky is no longer limited to the tiny visible spectral range over the 2D Celestial sphere. We now can access a third dimension corresponding to a broad electromagnetic spectrum with a wide range of allsky surveys; these surveys span frequency bands including long wavelength radio, microwaves, very short X-rays, and gamma rays. These advances motivate us to study and examine multiwavelength visualization techniques to maximize our capabilities to visualize and exploit these informative image data sets. In this work, we begin with the processing of the data themselves, uniformizing the representations and units of raw data obtained from varied detector sources. Then we apply tools to map, convert, color-code, and format the multiwavelength data in forms useful for applications. We explore different visual representations for displaying the data, including such methods as textured image stacks, the horseshoe representation, and GPU-based volume visualization. A family of visual tools and analysis methods are introduced to explore the data, including interactive data mapping on the graphics processing unit (GPU), the mini-map explorer, and GPU-based interactive feature analysis.
Hongwei Li;Chi-Wing Fu;Hanson, A.J.
Hong Kong Univ. of Sci. & Technol., Hong Kong|c|;;
10.1109/VISUAL.2003.1250404;10.1109/TVCG.2006.155;10.1109/VISUAL.1995.485155;10.1109/TVCG.2006.176;10.1109/VISUAL.1992.235222;10.1109/VISUAL.2002.1183824;10.1109/TVCG.2007.70530;10.1109/VISUAL.2003.1250401;10.1109/VISUAL.2005.1532803;10.1109/VISUAL.2004.18;10.1109/TVCG.2007.70526
Astrophysical visualization, multiwavelength data, astronomy
Vis
2008
Visualizing Particle/Flow Structure Interactions in the Small Bronchial Tubes
10.1109/TVCG.2008.183
1. 1427
J
Particle deposition in the small bronchial tubes (generations six through twelve) is strongly influenced by the vortex-dominated secondary flows that are induced by axial curvature of the tubes. In this paper, we employ particle destination maps in conjunction with two-dimensional, finite-time Lyapunov exponent maps to illustrate how the trajectories of finite-mass particles are influenced by the presence of vortices. We consider two three-generation bronchial tube models: a planar, asymmetric geometry and a non-planar, asymmetric geometry. Our visualizations demonstrate that these techniques, coupled with judiciously seeded particle trajectories, are effective tools for studying particle/flow structure interactions.
Soni, B.;Thompson, D.;Machiraju, R.
Mississippi State Univ., Oxford, MS|c|;;
10.1109/TVCG.2007.70551;10.1109/TVCG.2007.70554
FTLE, particle trajectory, visualization, bronchial tube