IEEE VIS Publication Dataset

Vis
2009
Exploring 3D DTI Fiber Tracts with Linked 2D Representations
10.1109/TVCG.2009.141
1. 1456
J
We present a visual exploration paradigm that facilitates navigation through complex fiber tracts by combining traditional 3D model viewing with lower dimensional representations. To this end, we create standard streamtube models along with two two-dimensional representations, an embedding in the plane and a hierarchical clustering tree, for a given set of fiber tracts. We then link these three representations using both interaction and color obtained by embedding fiber tracts into a perceptually uniform color space. We describe an anecdotal evaluation with neuroscientists to assess the usefulness of our method in exploring anatomical and functional structures in the brain. Expert feedback indicates that, while a standalone clinical use of the proposed method would require anatomical landmarks in the lower dimensional representations, the approach would be particularly useful in accelerating tract bundle selection. Results also suggest that combining traditional 3D model viewing with lower dimensional representations can ease navigation through the complex fiber tract models, improving exploration of the connectivity in the brain.
Jianu, R.;Demiralp, C.;Laidlaw, D.H.
Brown Univ., Providence, RI, USA|c|;;
10.1109/VISUAL.2000.885739;10.1109/VISUAL.1991.175794;10.1109/VISUAL.1996.567787;10.1109/VISUAL.2004.30;10.1109/TVCG.2009.112;10.1109/VISUAL.1994.346302;10.1109/VISUAL.2005.1532779
DTI fiber tracts, embedding, coloring, interaction
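The record above links 3D streamtubes, a planar embedding, and a clustering tree through colors obtained by embedding fiber tracts into a perceptually uniform color space. The sketch below is a minimal, hypothetical illustration of that coloring idea in Python: 2D embedding coordinates are placed on the a*-b* plane of CIE L*a*b* at a fixed lightness and converted to sRGB. The lightness of 70 and chroma radius of 60 are arbitrary choices for illustration, not values from the paper.

    def lab_to_srgb(L, a, b):
        """Convert a CIE L*a*b* color (D65 white point) to gamma-encoded sRGB."""
        fy = (L + 16.0) / 116.0
        fx = fy + a / 500.0
        fz = fy - b / 200.0
        def finv(t):
            return t ** 3 if t ** 3 > 0.008856 else (t - 16.0 / 116.0) / 7.787
        X, Y, Z = 0.95047 * finv(fx), 1.00000 * finv(fy), 1.08883 * finv(fz)
        lin = ( 3.2406 * X - 1.5372 * Y - 0.4986 * Z,
               -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
                0.0557 * X - 0.2040 * Y + 1.0570 * Z)
        def gamma(c):
            c = min(max(c, 0.0), 1.0)
            return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return tuple(gamma(c) for c in lin)

    def embedding_to_colors(points_2d, lightness=70.0, chroma=60.0):
        """Map 2D embedding coordinates onto the a*-b* plane so that tracts that
        are close in the embedding receive perceptually similar colors."""
        cx = sum(p[0] for p in points_2d) / len(points_2d)
        cy = sum(p[1] for p in points_2d) / len(points_2d)
        scale = max(max(abs(p[0] - cx), abs(p[1] - cy)) for p in points_2d) or 1.0
        return [lab_to_srgb(lightness,
                            chroma * (p[0] - cx) / scale,
                            chroma * (p[1] - cy) / scale) for p in points_2d]

    print(embedding_to_colors([(0.1, 0.4), (0.2, 0.5), (0.9, -0.3)]))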
Vis
2009
Exploring the Millennium Run - Scalable Rendering of Large-Scale Cosmological Datasets
10.1109/TVCG.2009.142
1. 1258
J
In this paper we investigate scalability limitations in the visualization of large-scale particle-based cosmological simulations, and we present methods to reduce these limitations on current PC architectures. To minimize the amount of data to be streamed from disk to the graphics subsystem, we propose a visually continuous level-of-detail (LOD) particle representation based on a hierarchical quantization scheme for particle coordinates and rules for generating coarse particle distributions. Given the maximal world-space error per level, our LOD selection technique guarantees a sub-pixel screen-space error during rendering. A brick-based page-tree allows us to further reduce the number of disk seek operations to be performed. Additional particle quantities like density, velocity dispersion, and radius are compressed with no visible loss using vector quantization of logarithmically encoded floating-point values. Fine-grained view-frustum culling and presence acceleration in a geometry shader significantly reduce the required geometry throughput on the GPU. We validate the quality and scalability of our method by presenting visualizations of a particle-based cosmological dark-matter simulation exceeding 10 billion elements.
Fraedrich, R.;Schneider, J.;Westermann, R.
Comput. Graphics & Visualization Group, Tech. Univ. Munchen, Munich, Germany|c|;;
10.1109/VISUAL.2003.1250404;10.1109/VISUAL.2002.1183824;10.1109/VISUAL.2003.1250404;10.1109/VISUAL.2005.1532795;10.1109/VISUAL.2003.1250385;10.1109/TVCG.2006.176;10.1109/TVCG.2007.70530;10.1109/VISUAL.1997.663888;10.1109/TVCG.2007.70526;10.1109/VISUAL.2004.112;10.1109/TVCG.2006.155
Particle Visualization, Scalability, Cosmology
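The LOD representation above quantizes particle coordinates hierarchically and bounds the resulting world-space error per level. The sketch below illustrates only the basic building block, quantizing positions relative to a node's bounding box with a known per-axis error bound; it is a hedged simplification, not the paper's hierarchical scheme, page-tree, or vector quantization of the other particle quantities.

    def quantize(p, box_min, box_max, bits=10):
        """Quantize a 3D position to `bits` bits per axis relative to a node's
        bounding box. The per-axis reconstruction error is at most
        (box_size / (2**bits - 1)) / 2, which a renderer can keep sub-pixel by
        choosing `bits` per LOD level."""
        levels = (1 << bits) - 1
        q = []
        for x, lo, hi in zip(p, box_min, box_max):
            t = 0.0 if hi == lo else (x - lo) / (hi - lo)
            q.append(min(levels, max(0, round(t * levels))))
        return tuple(q)

    def dequantize(q, box_min, box_max, bits=10):
        levels = (1 << bits) - 1
        return tuple(lo + (qi / levels) * (hi - lo)
                     for qi, lo, hi in zip(q, box_min, box_max))

    box_min, box_max = (0.0, 0.0, 0.0), (100.0, 100.0, 100.0)
    p = (12.34, 56.78, 90.12)
    print(dequantize(quantize(p, box_min, box_max), box_min, box_max))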
Vis
2009
Focus+Context Route Zooming and Information Overlay in 3D Urban Environments
10.1109/TVCG.2009.144
1. 1554
J
In this paper we present a novel focus+context zooming technique, which allows users to zoom into a route and its associated landmarks in a 3D urban environment from a 45-degree bird's-eye view. Through the creative utilization of the empty space in an urban environment, our technique can informatively reveal the focus region and minimize distortions to the context buildings. We first create more empty space in the 2D map by broadening the road with an adapted seam carving algorithm. A grid-based zooming technique is then used to enlarge the landmarks to reclaim the created empty space and thus reduce distortions to the other parts. Finally, an occlusion-free route visualization scheme adaptively scales the buildings occluding the route to keep the route always visible to users. Our method can be conveniently integrated into Google Earth and Virtual Earth to provide seamless route zooming and help users better explore a city and plan their tours. It can also be used in other applications, such as overlaying information on a virtual city.
Huamin Qu;Haomian Wang;Weiwei Cui;Yingcai Wu;Ming-Yuen Chan
Hong Kong Univ. of Sci. & Technol., Hong Kong, China|c|;;;;
10.1109/TVCG.2008.124;10.1109/TVCG.2006.163;10.1109/TVCG.2006.167;10.1109/INFVIS.1998.729558;10.1109/TVCG.2008.132
focus+context visualization, zooming, 3D virtual environment, seam carving
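The route zooming above broadens roads with an adapted seam carving algorithm to create empty space. The sketch below is the textbook vertical-seam computation (dynamic programming over an energy map plus backtracking); the paper's adaptation, which effectively inserts seams to widen the road rather than removing them, is not shown.

    def find_vertical_seam(energy):
        """Return, for each row of an energy map (list of rows of floats),
        the column index of a minimum-energy 8-connected vertical seam."""
        rows, cols = len(energy), len(energy[0])
        cost = [row[:] for row in energy]               # cumulative minimum cost
        for r in range(1, rows):
            for c in range(cols):
                lo, hi = max(0, c - 1), min(cols - 1, c + 1)
                cost[r][c] += min(cost[r - 1][lo:hi + 1])
        seam = [min(range(cols), key=lambda c: cost[rows - 1][c])]
        for r in range(rows - 2, -1, -1):               # backtrack upward
            c = seam[-1]
            lo, hi = max(0, c - 1), min(cols - 1, c + 1)
            seam.append(min(range(lo, hi + 1), key=lambda cc: cost[r][cc]))
        return list(reversed(seam))

    energy = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
    print(find_vertical_seam(energy))                    # [1, 0, 0]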
Vis
2009
GL4D: A GPU-based Architecture for Interactive 4D Visualization
10.1109/TVCG.2009.147
1. 1594
J
This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user.
Chu, A.;Chi-Wing Fu;Hanson, A.J.;Pheng-Ann Heng
Chinese Univ. of Hong Kong, Hong Kong, China|c|;;;
10.1109/VISUAL.1994.346318;10.1109/VISUAL.2000.885704;10.1109/VISUAL.1992.235222;10.1109/VISUAL.2005.1532804;10.1109/TVCG.2007.70593;10.1109/VISUAL.1994.346324;10.1109/VISUAL.1993.398869
Mathematical visualization, four-dimensional visualization, graphics hardware, interactive illumination
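GL4D projects 4D tetrahedral geometry into volume images before slicing and compositing it. The sketch below shows only the first step in spirit: a perspective projection from 4D to 3D that divides by the distance along the viewing axis w. The camera placement and parameters are assumptions for illustration; GL4D performs the corresponding 4D modelview and projection per vertex in shaders.

    import numpy as np

    def project_4d_to_3d(points4, eye_w=5.0, focal=2.0):
        """Perspective-project 4D points onto the w = 0 hyperplane, assuming a
        4D camera at (0, 0, 0, eye_w) looking down the -w axis; each (x, y, z)
        is scaled by focal / (eye_w - w), the 4D analogue of the perspective
        divide used in 3D graphics."""
        pts = np.asarray(points4, dtype=float)
        depth = eye_w - pts[:, 3]
        if np.any(depth <= 0.0):
            raise ValueError("points behind the 4D camera are not handled here")
        return pts[:, :3] * (focal / depth)[:, None]

    # Example: project the 16 vertices of the unit tesseract.
    tesseract = np.array([[x, y, z, w] for x in (0, 1) for y in (0, 1)
                          for z in (0, 1) for w in (0, 1)], dtype=float)
    print(project_4d_to_3d(tesseract).shape)             # (16, 3)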
Vis
2009
High-Quality, Semi-Analytical Volume Rendering for AMR Data
10.1109/TVCG.2009.149
1. 1618
J
This paper presents a pipeline for high quality volume rendering of adaptive mesh refinement (AMR) datasets. We introduce a new method allowing high quality visualization of hexahedral cells in this context; this method avoids artifacts like discontinuities in the isosurfaces. To achieve this, we choose the number and placement of sampling points over the cast rays according to the analytical properties of the reconstructed signal inside each cell. We extend our method to handle volume shading of such cells. We propose an interpolation scheme that guarantees continuity between adjacent cells of different AMR levels. We introduce an efficient hybrid CPU-GPU mesh traversal technique. We present an implementation of our AMR visualization method on current graphics hardware, and show results demonstrating both the quality and performance of our method.
Marchesin, S.;de Verdiere, G.C.
DAM, CEA, Arpajon, France|c|;
10.1109/VISUAL.2000.885683;10.1109/VISUAL.2004.85;10.1109/VISUAL.2005.1532793;10.1109/TVCG.2008.157;10.1109/VISUAL.2003.1250384;10.1109/TVCG.2008.186
Volume rendering, AMR data, Volume shading
Vis
2009
Hue-Preserving Color Blending
10.1109/TVCG.2009.150
1. 1282
J
We propose a new perception-guided compositing operator for color blending. The operator maintains the same rules for achromatic compositing as standard operators (such as the over operator), but it modifies the computation of the chromatic channels. Chromatic compositing aims at preserving the hue of the input colors; color continuity is achieved by reducing the saturation of colors that are to change their hue value. The main benefit of hue preservation is that color can be used for proper visual labeling, even under the constraint of transparency rendering or image overlays. Therefore, the visualization of nominal data is improved. Hue-preserving blending can be used in any existing compositing algorithm, and it is particularly useful for volume rendering. The usefulness of hue-preserving blending and its visual characteristics are shown for several examples of volume visualization.
Chuang, J.;Weiskopf, D.;Moller, T.
Simon Fraser Univ., Burnaby, BC, Canada|c|;;
10.1109/VISUAL.1996.568118;10.1109/TVCG.2008.118;10.1109/TVCG.2006.183
Image compositing, perceptual transparency, color blending, volume rendering, illustrative visualization
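The operator above keeps achromatic compositing unchanged and, when input hues differ, reduces saturation rather than shifting hue, so blends pass through gray instead of through intermediate hues. The sketch below is a simplified reading of that description on (luminance, chroma, hue) triples with ordinary over weights; it is not the published operator, which is defined in a perceptually motivated color space.

    def hue_preserving_over(front, back, alpha):
        """Composite two (luminance, chroma, hue_in_degrees) colors, where
        `alpha` is the opacity of the front color. Luminance follows the
        standard over operator; if the hues agree, chroma blends linearly,
        otherwise the result keeps the hue of the stronger weighted chroma
        and the opposing chroma cancels toward gray (saturation reduction)."""
        lf, cf, hf = front
        lb, cb, hb = back
        lum = alpha * lf + (1.0 - alpha) * lb
        wf, wb = alpha * cf, (1.0 - alpha) * cb
        if abs(hf - hb) < 1e-9:                   # same hue: ordinary blending
            return (lum, wf + wb, hf)
        hue = hf if wf >= wb else hb              # keep the dominant input hue
        return (lum, abs(wf - wb), hue)           # conflicting chroma cancels

    # A half-transparent reddish layer over a greenish background.
    print(hue_preserving_over((70.0, 40.0, 20.0), (60.0, 30.0, 140.0), 0.5))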
Vis
2009
Interactive Coordinated Multiple-View Visualization of Biomechanical Motion Data
10.1109/TVCG.2009.152
1. 1390
J
We present an interactive framework for exploring space-time and form-function relationships in experimentally collected high-resolution biomechanical data sets. These data describe complex 3D motions (e.g. chewing, walking, flying) performed by animals and humans and captured via high-speed imaging technologies, such as biplane fluoroscopy. In analyzing these 3D biomechanical motions, interactive 3D visualizations are important, in particular, for supporting spatial analysis. However, as researchers in information visualization have pointed out, 2D visualizations can also be effective tools for multi-dimensional data analysis, especially for identifying trends over time. Our approach, therefore, combines techniques from both 3D and 2D visualizations. Specifically, it utilizes a multi-view visualization strategy including a small multiples view of motion sequences, a parallel coordinates view, and detailed 3D inspection views. The resulting framework follows an overview first, zoom and filter, then details-on-demand style of analysis, and it explicitly targets a limitation of current tools, namely, supporting analysis and comparison at the level of a collection of motions rather than sequential analysis of a single or small number of motions. Scientific motion collections appropriate for this style of analysis exist in clinical work in orthopedics and physical rehabilitation, in the study of functional morphology within evolutionary biology, and in other contexts. An application is described based on a collaboration with evolutionary biologists studying the mechanics of chewing motions in pigs. Interactive exploration of data describing a collection of more than one hundred experimentally captured pig chewing cycles is described.
Keefe, D.F.;Ewert, M.;Ribarsky, W.;Chang, R.
Dept. of Comput. Sci. & Eng., Univ. of Minnesota, Minneapolis, MN, USA|c|;;;
10.1109/TVCG.2008.125;10.1109/TVCG.2007.70569;10.1109/TVCG.2008.109
Scientific visualization, information visualization, coordinated multiple views, biomechanics
Vis
2009
Interactive Streak Surface Visualization on the GPU
10.1109/TVCG.2009.154
1. 1266
J
In this paper we present techniques for the visualization of unsteady flows using streak surfaces, which allow for the first time an adaptive integration and rendering of such surfaces in real-time. The techniques consist of two main components, which are both realized on the GPU to exploit computational and bandwidth capacities for numerical particle integration and to minimize bandwidth requirements in the rendering of the surface. In the construction stage, an adaptive surface representation is generated. Surface refinement and coarsening strategies are based on local surface properties like distortion and curvature. We compare two different methods to generate a streak surface: a) by computing a patch-based surface representation that avoids any interdependence between patches, and b) by computing a particle-based surface representation including particle connectivity, and by updating this connectivity during particle refinement and coarsening. In the rendering stage, the surface is either rendered as a set of quadrilateral surface patches using high-quality point-based approaches, or a surface triangulation is built in turn from the given particle connectivity and the resulting triangle mesh is rendered. We perform a comparative study of the proposed techniques with respect to surface quality, visual quality and performance by visualizing streak surfaces in real flows using different rendering options.
Burger, K.;Ferstl, F.;Theisel, H.;Westermann, R.
Comput. Graphics & Visualization group, Tech. Univ. Munchen, Munich, Germany|c|;;;
10.1109/VISUAL.1992.235211;10.1109/VISUAL.2001.964506;10.1109/TVCG.2008.133;10.1109/VISUAL.1993.398875;10.1109/TVCG.2008.163
Unsteady flow visualization, streak surface generation, GPUs
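Streak surfaces are swept out by particles released continuously from a seed curve and advected through the unsteady flow. The sketch below advects such particle generations with a fourth-order Runge-Kutta step on the CPU; the analytic `velocity` field is a stand-in for real data, and the paper's adaptive refinement, connectivity updates, and GPU implementation are omitted.

    import numpy as np

    def velocity(p, t):
        """Toy unsteady 2D flow (placeholder for real vector-field data)."""
        x, y = p[..., 0], p[..., 1]
        return np.stack([-y + 0.3 * np.sin(t), x], axis=-1)

    def rk4_step(p, t, dt):
        """Advance particle positions by one RK4 step through the unsteady flow."""
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        return p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    def streak_lines(seed_points, t0, t1, dt):
        """Release a new generation of particles from the seeds at every step
        and advect all live particles; the per-seed histories approximate
        streak lines (the rows of a streak surface when seeds lie on a curve)."""
        particles = np.empty((0, seed_points.shape[0], 2))
        t = t0
        while t < t1:
            particles = np.concatenate([particles, seed_points[None]], axis=0)
            particles = rk4_step(particles, t, dt)
            t += dt
        return particles  # shape: (generations, seeds, 2)

    seeds = np.stack([np.linspace(0.5, 1.5, 8), np.zeros(8)], axis=-1)
    print(streak_lines(seeds, 0.0, 2.0, 0.05).shape)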
Vis
2009
Interactive Visual Analysis of Complex Scientific Data as Families of Data Surfaces
10.1109/TVCG.2009.155
1. 1358
J
The widespread use of computational simulation in science and engineering provides challenging research opportunities. Multiple independent variables are considered and large and complex data are computed, especially in the case of multi-run simulation. Classical visualization techniques deal well with 2D or 3D data and also with time-dependent data. Additional independent dimensions, however, provide interesting new challenges. We present an advanced visual analysis approach that enables a thorough investigation of families of data surfaces, i.e., datasets, with respect to pairs of independent dimensions. While it is almost trivial to visualize one such data surface, the visual exploration and analysis of many such data surfaces is a grand challenge, stressing the users' perception and cognition. We propose an approach that integrates projections and aggregations of the data surfaces at different levels (one scalar aggregate per surface, a 1D profile per surface, or the surface as such). We demonstrate the necessity for a flexible visual analysis system that integrates many different (linked) views for making sense of this highly complex data. To demonstrate its usefulness, we exemplify our approach in the context of a meteorological multi-run simulation data case and in the context of the engineering domain, where our collaborators are working with the simulation of elastohydrodynamic (EHD) lubrication bearings in the automotive industry.
Matkovic, K.;Gracanin, D.;Klarin, B.;Hauser, H.
VRVis Res. Center, Vienna, Austria|c|;;;
10.1109/VISUAL.1997.663867;10.1109/TVCG.2008.145;10.1109/INFVIS.2001.963273;10.1109/TVCG.2006.170
Interactive visual analysis, family of surfaces, coordinated multiple views, multidimensional multivariate data
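The analysis above works with aggregations of each data surface at three levels: one scalar per surface, a 1D profile per surface, and the surface itself. A minimal sketch of those levels for a hypothetical multi-run ensemble stored as a run x dim1 x dim2 array; the choice of the mean as the aggregate is a placeholder.

    import numpy as np

    # Hypothetical ensemble: 50 simulation runs, each yielding a data surface
    # sampled over a 32 x 24 grid of two independent dimensions.
    runs = np.random.rand(50, 32, 24)

    surfaces = runs                       # the data surfaces themselves
    profiles = runs.mean(axis=2)          # one 1D profile per surface: (50, 32)
    scalars  = runs.mean(axis=(1, 2))     # one scalar aggregate per surface: (50,)

    # Linked views would show, e.g., a scatter plot or histogram of `scalars`,
    # line plots of `profiles`, and the full surface for runs brushed elsewhere.
    print(surfaces.shape, profiles.shape, scalars.shape)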
Vis
2009
Interactive Visual Optimization and Analysis for RFID Benchmarking
10.1109/TVCG.2009.156
1. 1342
J
Radio frequency identification (RFID) is a powerful automatic remote identification technique with a wide range of applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex, time-varying, multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With these techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for RFID benchmarking, with a focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in a user evaluation.
Yingcai Wu;Ka-Kei Chung;Huamin Qu;Xiaoru Yuan;Cheung, S.C.
Dept. of Comput. Sci. & Eng., Hong Kong Univ. of Sci. & Technol., Kowloon, China|c|;;;;
10.1109/TVCG.2008.131;10.1109/VISUAL.1990.146402;10.1109/TVCG.2007.70535;10.1109/INFVIS.2005.1532141;10.1109/VISUAL.1996.567800;10.1109/INFVIS.2004.2
RFID visualization, visual analytics, visual optimization
Vis
2009
Interactive Visualization of Molecular Surface Dynamics
10.1109/TVCG.2009.157
1. 1398
J
Molecular dynamics simulations of proteins play a growing role in various fields such as pharmaceutical, biochemical and medical research. Accordingly, the need for high-quality visualization of these protein systems rises. Highly interactive visualization techniques are especially needed for the analysis of time-dependent molecular simulations. Besides various other molecular representations, surface representations are of high importance for these applications. So far, users had to accept a trade-off between rendering quality and performance, particularly when visualizing trajectories of time-dependent protein data. We present a new approach for visualizing the solvent excluded surface of proteins using a GPU ray casting technique, thus achieving interactive frame rates even for long protein trajectories where conventional methods based on precomputation are not applicable. Furthermore, we propose a semantic simplification of the raw protein data to reduce the visual complexity of the surface and thereby accelerate the rendering without impeding perception of the protein's basic shape. We also demonstrate the application of our solvent excluded surface method to visualize the spatial probability density of the protein atoms over the whole period of the trajectory in one frame, providing a qualitative analysis of the protein flexibility.
Krone, M.;Bidmon, K.;Ertl, T.
Visualization Res. Center VISUS, Univ. Stuttgart, Stuttgart, Germany|c|;;
10.1109/VISUAL.2004.103;10.1109/TVCG.2006.115
Point-based Data, Time-varying Data, GPU, Ray Casting, Molecular Visualization, Surface Extraction, Isosurfaces
Vis
2009
Interactive Volume Rendering of Functional Representations in Quantum Chemistry
10.1109/TVCG.2009.158
1. 5186
J
Simulation and computation in chemistry have improved as computational power has increased over the decades. Many types of chemistry simulation results are available, from atomic-level bonding to volumetric representations of electron density. However, tools for the visualization of the results from quantum chemistry computations are still limited to showing atomic bonds and isosurfaces or isocontours corresponding to certain isovalues. In this work, we study the volumetric representations of the results from quantum chemistry computations, and evaluate and visualize the representations directly on the GPU without resampling the results into grid structures. Our visualization tool handles the direct evaluation of the approximated wavefunctions described as a combination of Gaussian-like primitive basis functions. For visualization, we use a slice-based volume rendering technique with a 2D transfer function, volume clipping, and illustrative rendering in order to reveal and enhance the quantum chemistry structure. Since there is no need to resample the volume from the functional representations, two issues, data transfer and resampling resolution, can be ignored; it is therefore possible to interactively explore a large amount of different information in the computation results.
Yun Jang;Varetto, U.
ETH Zurich, Zurich, Switzerland|c|;
10.1109/TVCG.2007.70614;10.1109/TVCG.2007.70517;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2002.1183780;10.1109/TVCG.2006.133;10.1109/VISUAL.2005.1532811;10.1109/VISUAL.2000.885694;10.1109/VISUAL.2004.23;10.1109/TVCG.2007.70578;10.1109/TVCG.2006.150;10.1109/VISUAL.2004.36;10.1109/TVCG.2006.115;10.1109/VISUAL.2005.1532858;10.1109/VISUAL.2004.103
Quantum Chemistry, GTO, Volume Rendering, GPU
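The tool above evaluates wavefunctions composed of Gaussian-like primitive basis functions directly on the GPU instead of resampling them onto a grid. The sketch below evaluates such a sum on the CPU for a batch of sample positions; the Cartesian Gaussian form and the example primitives are generic textbook choices, not the tool's internal data format.

    import numpy as np

    def eval_gto_sum(points, centers, exponents, coeffs, powers):
        """Evaluate psi(r) = sum_i c_i (x-Xi)^l (y-Yi)^m (z-Zi)^n exp(-a_i |r-Ri|^2)
        at an array of sample points of shape (N, 3); the per-primitive arrays
        (centers, exponents, coeffs, powers) all have length P."""
        pts = np.asarray(points, dtype=float)[:, None, :]          # (N, 1, 3)
        d = pts - np.asarray(centers, dtype=float)[None, :, :]      # (N, P, 3)
        r2 = np.sum(d * d, axis=-1)                                  # (N, P)
        ang = np.prod(d ** np.asarray(powers, dtype=float)[None, :, :], axis=-1)
        return np.sum(np.asarray(coeffs) * ang *
                      np.exp(-np.asarray(exponents) * r2), axis=-1)

    # Two illustrative s- and p-type primitives centred at the origin.
    centers   = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    exponents = [1.2, 0.4]
    coeffs    = [0.7, 0.3]
    powers    = [[0, 0, 0], [1, 0, 0]]          # s and p_x angular parts
    samples   = np.random.uniform(-2, 2, size=(5, 3))
    print(eval_gto_sum(samples, centers, exponents, coeffs, powers))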
Vis
2009
Intrinsic Geometric Scale Space by Shape Diffusion
10.1109/TVCG.2009.159
1. 1200
J
This paper formalizes a novel, intrinsic geometric scale space (IGSS) of 3D surface shapes. The intrinsic geometry of a surface is diffused by means of the Ricci flow for the generation of a geometric scale space. We rigorously prove that this multiscale shape representation satisfies the axiomatic causality property. Within the theoretical framework, we further present a feature-based shape representation derived from IGSS processing, which is shown to be theoretically plausible and practically effective. By integrating the concept of scale-dependent saliency into the shape description, this representation is not only highly descriptive of the local structures, but also exhibits several desired characteristics of global shape representations, such as being compact, robust to noise and computationally efficient. We demonstrate the capabilities of our approach through salient geometric feature detection and highly discriminative matching of 3D scans.
Guangyu Zou;Jing Hua;Zhaoqiang Lai;Xianfeng Gu;Ming Dong
Wayne State Univ., Detroit, MI, USA|c|;;;;
10.1109/TVCG.2008.134
Scale space, feature extraction, geometric flow, Riemannian manifolds
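The scale space above is generated by diffusing the intrinsic geometry with the Ricci flow. For reference, a standard way to write the surface case (a hedged summary of textbook material, not the paper's discrete formulation): the metric evolves proportionally to its Gaussian curvature K, and in a conformal representation g = e^{2u} g_0 this becomes an evolution of the scalar factor u; normalized variants add the average curvature to keep the total area fixed.

    \frac{\partial g_{ij}}{\partial t} = -2\,K\,g_{ij},
    \qquad
    g = e^{2u}\,g_0 \;\Longrightarrow\; \frac{\partial u}{\partial t} = -K .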
Vis
2009
Isosurface Extraction and View-Dependent Filtering from Time-Varying Fields Using Persistent Time-Octree (PTOT)
10.1109/TVCG.2009.160
1. 1374
J
We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel persistent time-octree (PTOT) indexing structure. Previously, the persistent octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the branch-on-need octree (BONO, for view-dependent filtering), but it only works for steady-state (i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried time step and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing. In this paper, we develop a novel persistent time-octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with output-sensitive and optimal searching. In addition, when we query the same isovalue q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data being concurrently computed on the GPU. This results in an efficient algorithm for isosurface extraction with view-dependent filtering utilizing a state-of-the-art programmable GPU for time-varying fields larger than main memory. Our experiments on datasets as large as 192 GB (with 4 GB per time step), using no more than 870 MB of memory footprint in both preprocessing and run-time phases, demonstrate the efficacy of our new technique.
Cong Wang;Yi-Jen Chiang
CSE Dept, Polytech. Inst. of New York Univ., Brooklyn, NY, USA|c|;
10.1109/VISUAL.2003.1250375;10.1109/VISUAL.1998.745299;10.1109/VISUAL.1997.663895;10.1109/VISUAL.1998.745300;10.1109/VISUAL.2003.1250373
Isosurface extraction, time-varying fields, persistent data structure, view-dependent filtering, out-of-core methods
Vis
2009
Kd-Jump: a Path-Preserving Stackless Traversal for Faster Isosurface Raytracing on GPUs
10.1109/TVCG.2009.161
1. 1562
J
Stackless traversal techniques are often used to circumvent memory bottlenecks by avoiding a stack and replacing return traversal with extra computation. This paper addresses whether stackless traversal approaches are useful on newer hardware and technology (such as CUDA). To this end, we present a novel stackless approach for implicit kd-trees, which exploits the benefits of index-based node traversal without incurring extra node visitation. This approach, which we term Kd-Jump, enables the traversal to immediately return to the next valid node, like a stack, but without the extra node visitation of kd-restart. Also, Kd-Jump does not require global memory (stack) at all and only requires a small matrix in fast constant memory. We report that Kd-Jump outperforms a stack by 10 to 20% and kd-restart by 100%. We also present a Hybrid Kd-Jump, which utilizes a volume stepper for leaf testing and a run-time depth threshold to define where kd-tree traversal stops and volume stepping occurs. By using both methods, we gain the benefits of empty-space removal, fast texture caching and the real-time ability to determine the best threshold for the current isosurface and view direction.
Hughes, D.M.;Ik Soo Lim
Sch. of Comput. Sci., Bangor Univ., Bangor, UK|c|;
10.1109/VISUAL.2004.48
Raytracing, isosurface, GPU, parallel computing, volume visualization
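Kd-Jump returns to the next pending node of an implicit kd-tree by index arithmetic instead of a stack. The sketch below shows the underlying idea on an array-indexed binary tree (children of node i at 2i+1 and 2i+2): after finishing a subtree, climb while the current node is a right child and then step to the right sibling. This illustrates path-preserving stackless return in general; it is not the paper's precomputed jump scheme, its constant-memory layout, or the ray-dependent child ordering used in traversal.

    def next_pending(i):
        """Index of the next node to visit after finishing the subtree rooted at i,
        in a depth-first left-to-right traversal of an implicit binary tree where
        node i has children 2*i + 1 and 2*i + 2. Returns None when the traversal
        is complete. No stack is needed: the path is encoded in the index itself."""
        while i > 0 and i % 2 == 0:   # even index -> right child: keep climbing
            i = (i - 2) // 2
        if i == 0:
            return None               # climbed back past the root: done
        return i + 1                  # odd index -> left child: jump to right sibling

    def depth_first_order(num_levels):
        """Enumerate node indices of a complete tree depth-first without a stack."""
        first_leaf = (1 << (num_levels - 1)) - 1
        order, i = [], 0
        while i is not None:
            order.append(i)
            i = 2 * i + 1 if i < first_leaf else next_pending(i)  # descend or jump
        return order

    print(depth_first_order(3))   # [0, 1, 3, 4, 2, 5, 6]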
Vis
2009
Loop surgery for volumetric meshes: Reeb graphs reduced to contour trees
10.1109/TVCG.2009.163
1. 1184
J
This paper introduces an efficient algorithm for computing the Reeb graph of a scalar function f defined on a volumetric mesh M in R^3. We introduce a procedure called "loop surgery" that transforms M into a mesh M' by a sequence of cuts and guarantees the Reeb graph of f(M') to be loop free. Therefore, loop surgery reduces Reeb graph computation to the simpler problem of computing a contour tree, for which well-known algorithms exist that are theoretically efficient (O(n log n)) and fast in practice. Inverse cuts reconstruct the loops removed at the beginning. The time complexity of our algorithm is that of a contour tree computation plus a loop surgery overhead, which depends on the number of handles of the mesh. Our systematic experiments confirm that for real-life data this overhead is comparable to the computation of the contour tree, demonstrating virtually linear scalability on meshes ranging from 70 thousand to 3.5 million tetrahedra. Performance numbers show that our algorithm, although restricted to volumetric data, has an average speedup factor of 6,500 over the previous fastest techniques, handling larger and more complex datasets. We demonstrate the versatility of our approach by extending fast topologically clean isosurface extraction to non-simply connected domains. We apply this technique in the context of pressure analysis for mechanical design. In this case, our technique produces results in a matter of seconds even for the largest meshes. For the same models, previous Reeb graph techniques do not produce a result.
Tierny, J.;Gyulassy, A.;Simon, E.;Pascucci, V.
Sci. Comput. & Imaging Inst., Univ. of Utah, Salt Lake City, UT, USA|c|;;;
10.1109/VISUAL.2004.96;10.1109/TVCG.2007.70601;10.1109/VISUAL.1997.663875
Reeb graph, scalar field topology, isosurfaces, topological simplification
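Loop surgery reduces Reeb graph computation to a contour tree computation. As context, the sketch below shows the join-tree half of the classic contour-tree sweep (vertices processed from high to low value with a union-find); the symmetric split tree, the merge into a contour tree, and the loop surgery itself are omitted.

    def join_tree(values, neighbors):
        """Join-tree half of the classic contour-tree algorithm: sweep vertices
        from high to low scalar value and track connected components with a
        union-find. `values[v]` is the scalar at vertex v and `neighbors[v]`
        lists its adjacent vertices; returns the join-tree edges."""
        parent, lowest, edges = {}, {}, []

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]        # path compression
                v = parent[v]
            return v

        for v in sorted(range(len(values)), key=lambda i: values[i], reverse=True):
            parent[v], lowest[v] = v, v              # v starts its own component
            for u in neighbors[v]:
                if u in parent and find(u) != v:     # u already swept, other component
                    edges.append((lowest[find(u)], v))
                    parent[find(u)] = v              # merge into v's component
        return edges

    # Scalar values on a path graph 0 - 1 - 2.
    print(join_tree([1.0, 3.0, 2.0], {0: [1], 1: [0, 2], 2: [1]}))  # [(1, 2), (2, 0)]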
Vis
2009
Mapping High-Fidelity Volume Rendering for Medical Imaging to CPU, GPU and Many-Core Architectures
10.1109/TVCG.2009.164
1. 1570
J
Medical volumetric imaging requires high fidelity, high performance rendering algorithms. We motivate and analyze new volumetric rendering algorithms that are suited to modern parallel processing architectures. First, we describe the three major categories of volume rendering algorithms and confirm through an imaging scientist-guided evaluation that ray-casting is the most acceptable. We describe a thread- and data-parallel implementation of ray-casting that makes it amenable to key architectural trends of three modern commodity parallel architectures: multi-core, GPU, and an upcoming many-core Intel architecture code-named Larrabee. We achieve more than an order of magnitude performance improvement on a number of large 3D medical datasets. We further describe a data compression scheme that significantly reduces data-transfer overhead. This allows our approach to scale well to large numbers of Larrabee cores.
Smelyanskiy, M.;Holmes, D.;Chhugani, J.;Larson, A.;Carmean, D.M.;Hanson, D.;Dubey, P.;Augustine, K.;Daehyun Kim;Kyker, A.;Lee, V.W.;Nguyen, A.D.;Seiler, L.;Robb, R.
Intel Corp., Santa Clara, CA, USA|c|;;;;;;;;;;;;;
10.1109/VISUAL.2003.1250384;10.1109/VISUAL.1998.745309
Volume Compositing, Parallel Processing, Many-core Computing, Medical Imaging, Graphics Architecture, GPGPU
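The evaluation above settles on ray-casting and then maps it to multi-core CPUs, GPUs, and Larrabee. The sketch below shows only the per-ray kernel that each thread or SIMD lane would execute (front-to-back compositing with early ray termination); the thread- and data-parallel partitioning and the compression scheme that are the subject of the abstract are not shown.

    import numpy as np

    def composite_ray(samples_rgba):
        """Front-to-back compositing of (r, g, b, a) samples along one ray,
        with early ray termination once the ray is nearly opaque."""
        color = np.zeros(3)
        alpha = 0.0
        for r, g, b, a in samples_rgba:
            color += (1.0 - alpha) * a * np.array([r, g, b])
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                 # early ray termination
                break
        return color, alpha

    samples = [(0.9, 0.2, 0.1, 0.3), (0.2, 0.8, 0.1, 0.5), (0.1, 0.1, 0.9, 0.8)]
    print(composite_ray(samples))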
Vis
2009
Markerless View-Independent Registration of Multiple Distorted Projectors on Extruded Surfaces Using an Uncalibrated Camera
10.1109/TVCG.2009.166
1. 1316
J
In this paper, we present the first algorithm to geometrically register multiple projectors in a view-independent manner (i.e. wallpapered) on a common type of curved surface, vertically extruded surface, using an uncalibrated camera without attaching any obtrusive markers to the display screen. Further, it can also tolerate large non-linear geometric distortions in the projectors as is common when mounting short throw lenses to allow a compact set-up. Our registration achieves sub-pixel accuracy on a large number of different vertically extruded surfaces and the image correction to achieve this registration can be run in real time on the GPU. This simple markerless registration has the potential to have a large impact on easy set-up and maintenance of large curved multi-projector displays, common for visualization, edutainment, training and simulation applications.
Sajadi, B.;Majumder, A.
Comput. Sci. Dept., Univ. of California, Irvine, Irvine, CA, USA|c|;
10.1109/VISUAL.2001.964508;10.1109/VISUAL.2002.1183793;10.1109/VISUAL.1999.809883;10.1109/TVCG.2009.124;10.1109/TVCG.2007.70586
Registration, Calibration, Multi-Projector Displays, Tiled Displays
Vis
2009
Multi-Scale Surface Descriptors
10.1109/TVCG.2009.168
1. 1208
J
Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors statistically capture the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis.
Cipriano, G.;Phillips, G.N.;Gleicher, M.
Dept. of Comput. Sci., Univ. of Wisconsin, Madison, WI, USA|c|;;
10.1109/VISUAL.2003.1250414;10.1109/VISUAL.2002.1183787;10.1109/VISUAL.2002.1183785
Curvature, descriptors, NPR, stylized rendering, shape matching
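The descriptors above fit a quadratic surface to a neighborhood around a central point and thereby mimic curvature at a chosen scale. A minimal numpy sketch under simplifying assumptions: the neighborhood is already expressed in a local frame (origin at the center, z along the normal), so a least-squares fit of z = a x^2 + b x y + c y^2 yields principal-curvature-like values as the eigenvalues of the quadratic's second-derivative matrix. The multi-scale aspect, achieved by growing the neighborhood, and the paper's mesh-based estimation are not shown.

    import numpy as np

    def quadric_descriptor(local_pts):
        """Fit z = a x^2 + b x y + c y^2 to neighborhood points given in a local
        frame (center at the origin, z along the surface normal) and return the
        two principal-curvature-like eigenvalues, sorted ascending."""
        P = np.asarray(local_pts, dtype=float)
        x, y, z = P[:, 0], P[:, 1], P[:, 2]
        A = np.column_stack([x * x, x * y, y * y])
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
        shape_matrix = np.array([[2 * a, b], [b, 2 * c]])
        return tuple(np.linalg.eigvalsh(shape_matrix))

    # Neighborhood sampled from the paraboloid z = 0.5*x^2 + 0.1*y^2.
    u = np.random.uniform(-0.3, 0.3, size=(200, 2))
    pts = np.column_stack([u, 0.5 * u[:, 0] ** 2 + 0.1 * u[:, 1] ** 2])
    print(quadric_descriptor(pts))   # approximately (0.2, 1.0)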
Vis
2009
Multimodal Vessel Visualization of Mouse Aorta PET/CT Scans
10.1109/TVCG.2009.169
1. 1522
J
In this paper, we present a visualization system for the visual analysis of PET/CT scans of aortic arches of mice. The system has been designed in close collaboration between researchers from the areas of visualization and molecular imaging, with the objective of gaining deeper insights into the structural and molecular processes which take place during plaque development. Understanding the development of plaques might lead to a better and earlier diagnosis of cardiovascular diseases, which are still the main cause of death in the Western world. After motivating our approach, we will briefly describe the multimodal data acquisition process before explaining the visualization techniques used. The main goal is to develop a system which supports visual comparison of the data of different species. Therefore, we have chosen a linked multi-view approach which, among others, integrates a specialized straightened multipath curved planar reformation and a multimodal vessel flattening technique. We have applied the visualization concepts to multiple data sets, and we will present the results of this investigation.
Ropinski, T.;Hermann, S.;Reich, R.;Schafers, M.;Hinrichs, K.
Visualization & Comput. Graphics Res. Group (VisCG), Univ. of Munster, Munster, Germany|c|;;;;
10.1109/VISUAL.2003.1250353;10.1109/VISUAL.1992.235203;10.1109/TVCG.2007.70576;10.1109/VISUAL.2004.104;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2001.964538;10.1109/TVCG.2007.70560;10.1109/VISUAL.2002.1183754;10.1109/VISUAL.2003.1250396
Vessel visualization, plaque growth, multipath CPR, vessel flattening
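One of the linked views above is a specialized straightened multipath curved planar reformation. The sketch below is a hedged, single-path toy version: the volume is resampled along lines perpendicular to a vessel centerline so that the vessel appears straightened in the output image. It relies on scipy.ndimage.map_coordinates and a synthetic centerline, and it ignores the multipath and vessel-flattening parts of the paper's approach.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def straightened_cpr(volume, centerline, half_width=15, spacing=1.0):
        """Resample `volume` along lines perpendicular to a 3D centerline,
        producing a straightened single-path CPR image (toy version)."""
        centerline = np.asarray(centerline, dtype=float)
        tangents = np.gradient(centerline, axis=0)
        tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
        up = np.array([0.0, 0.0, 1.0])
        offsets = np.arange(-half_width, half_width + 1) * spacing
        rows = []
        for p, t in zip(centerline, tangents):
            n = np.cross(t, up)
            if np.linalg.norm(n) < 1e-6:            # tangent parallel to up vector
                n = np.cross(t, np.array([0.0, 1.0, 0.0]))
            n /= np.linalg.norm(n)
            samples = p[None, :] + offsets[:, None] * n[None, :]
            rows.append(map_coordinates(volume, samples.T, order=1))
        return np.stack(rows)        # shape: (len(centerline), 2*half_width + 1)

    vol = np.random.rand(64, 64, 64)
    t = np.linspace(0.0, 1.0, 100)
    curve = np.column_stack([20 + 20 * t, 32 + 10 * np.sin(4 * t), 10 + 40 * t])
    print(straightened_cpr(vol, curve).shape)        # (100, 31)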