IEEE VIS Publication Dataset

Vis
2005
OpenGL multipipe SDK: a toolkit for scalable parallel rendering
10.1109/VISUAL.2005.1532786
1. 126
C
We describe OpenGL multipipe SDK (MPK), a toolkit for scalable parallel rendering based on OpenGL. MPK provides a uniform application programming interface (API) to manage scalable graphics applications across many different graphics subsystems. MPK-based applications run seamlessly from single-processor, single-pipe desktop systems to large multi-processor, multipipe scalable graphics systems. The application is oblivious of the system configuration, which can be specified through a configuration file at run time. To scale application performance, MPK uses a decomposition system that supports different modes for task partitioning and implements optimized CPU-based composition algorithms. MPK also provides a customizable image composition interface, which can be used to apply post-processing algorithms on raw pixel data obtained from executing sub-tasks on multiple graphics pipes in parallel. This can be used to implement parallel versions of any CPU-based algorithm, not necessarily used for rendering. In this paper, we motivate the need for a scalable graphics API and discuss the architecture of MPK. We present MPK's graphics configuration interface, introduce the notion of compound-based decomposition schemes and describe our implementation. We present some results from our work on a couple of target system architectures and conclude with future directions of research in this area.
Bhaniramka, P.;Robert, P.C.D.;Eilemann, S.
;;
10.1109/VISUAL.1999.809890
Scalable Rendering, Parallel Rendering, Immersive Environments, Scalable Graphics Hardware
Vis
2005
Opening the black box - data driven visualization of neural networks
10.1109/VISUAL.2005.1532820
3. 390
C
Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, neural networks have been increasingly adopted by many large-scale information processing applications, but there is no set of well-defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task.
Tzeng, F.-Y.;Kwan-Liu Ma
Dept. of Comput. Sci., California Univ., Davis, CA, USA|c|;
10.1109/INFVIS.2002.1173157;10.1109/VISUAL.1999.809866
Artificial Neural Network, Information Visualization, Visualization Application, Classification, Machine Learning
Vis
2005
Opening the can of worms: an exploration tool for vortical flows
10.1109/VISUAL.2005.1532830
4. 470
C
Gaining a comprehensive understanding of turbulent flows still poses one of the great challenges in fluid dynamics. A well-established approach to advance this research is the analysis of the vortex structures contained in the flow. In order to be able to perform this analysis efficiently, supporting visualization tools with clearly defined requirements are needed. In this paper, we present a visualization system which matches these requirements to a large extent. The system consists of two components. The first component analyzes the flow by means of a novel combination of vortex core line detection and the λ2 method. The second component is a vortex browser which allows for an interactive exploration and manipulation of the vortices detected and separated during the first phase. Our system improves the reliability and applicability of existing vortex detection methods and allows for a more efficient study of vortical flows which is demonstrated in an evaluation performed by experts.
Stegmaier, S.;Rist, U.;Ertl, T.
Inst. for Visualization & Interactive Syst., Stuttgart Univ., Germany|c|;;
10.1109/VISUAL.1990.146359;10.1109/VISUAL.1998.745297;10.1109/VISUAL.1994.346327;10.1109/VISUAL.2004.3;10.1109/VISUAL.1998.745288;10.1109/VISUAL.1998.745296
Flow Features, Vortex Detection, Interactive Manipulation, 3D Vector Field Visualization
Vis
2005
Particle and texture based spatiotemporal visualization of time-dependent vector fields
10.1109/VISUAL.2005.1532852
6. 646
C
We propose a hybrid particle and texture based approach for the visualization of time-dependent vector fields. The underlying space-time framework builds a dense vector field representation in a two-step process: 1) particle-based forward integration of trajectories in spacetime for temporal coherence, and 2) texture-based convolution along another set of paths through the spacetime for spatially correlated patterns. Particle density is controlled by stochastically injecting and removing particles, taking into account the divergence of the vector field. Alternatively, a uniform density can be maintained by placing exactly one particle in each cell of a uniform grid, which leads to particle-in-cell forward advection. Moreover, we discuss strategies of previous visualization methods for unsteady flow and show how they address issues of spatiotemporal coherence and dense visual representations. We demonstrate how our framework is capable of realizing several of these strategies. Finally, we present an efficient GPU implementation that facilitates an interactive visualization of unsteady 2D flow on Shader Model 3 compliant graphics hardware.
Weiskopf, D.;Schramm, F.;Erlebacher, G.;Ertl, T.
Graphics, Usability, & Visualization Lab., Simon Fraser Univ., Burnaby, BC, Canada|c|;;;
10.1109/VISUAL.2003.1250377;10.1109/VISUAL.2003.1250363;10.1109/VISUAL.2003.1250361;10.1109/VISUAL.2000.885689;10.1109/VISUAL.2003.1250402;10.1109/VISUAL.2003.1250364
Unsteady flow visualization, visualization framework, LIC, texture advection, particle systems, GPU methods
Vis
2005
Phonon tracing for auralization and visualization of sound
10.1109/VISUAL.2005.1532790
1. 158
C
We present a new particle tracing approach for the simulation of mid- and high-frequency sound. Inspired by the photorealism obtained by methods like photon mapping, we develop a similar method for the physical simulation of sound within rooms. For given source and listener positions, our method computes a finite-response filter accounting for the different reflections at various surfaces with frequency-dependent absorption coefficients. Convoluting this filter with an anechoic input signal reproduces a realistic aural impression of the simulated room. We do not consider diffraction effects due to low frequencies, since these can be better computed by finite elements. Our method allows the visualization of a wave front propagation using color-coded blobs traversing the paths of individual phonons.
Bertram, M.;Deines, E.;Mohring, J.;Jegorovs, J.;Hagen, H.
TU Kaiserslautern, Germany|c|;;;;
acoustics, auralization, raytracing, photon mapping
Vis
2005
Prefiltered Gaussian reconstruction for high-quality rendering of volumetric data sampled on a body-centered cubic grid
10.1109/VISUAL.2005.1532810
3. 318
C
In this paper a novel high-quality reconstruction scheme is presented. Although our method is mainly proposed to reconstruct volumetric data sampled on an optimal body-centered cubic (BCC) grid, it can be easily adapted to the conventional regular rectilinear grid as well. The reconstruction process is decomposed into two steps. The first step, which is considered to be a preprocessing, is a discrete Gaussian deconvolution performed only once in the frequency domain. Afterwards, the second step is a spatial-domain convolution with a truncated Gaussian kernel, which is used to interpolate arbitrary samples for ray casting. Since the preprocessing is actually a discrete prefiltering, we call our technique prefiltered Gaussian reconstruction (PGR). It is shown that the impulse response of PGR well approximates the ideal reconstruction kernel. Therefore the quality of PGR is much higher than that of previous reconstruction techniques proposed for optimally sampled data, which are based on linear and cubic box splines adapted to the BCC grid. Concerning the performance, PGR is slower than linear box-spline reconstruction but significantly faster than cubic box-spline reconstruction.
Csebfalvi, B.
Dept. of Control Eng. & Inf. Technol., Budapest Univ., Hungary|c|
10.1109/VISUAL.2004.70;10.1109/VISUAL.2004.65;10.1109/VISUAL.2001.964498;10.1109/VISUAL.1997.663848;10.1109/VISUAL.1994.346331;10.1109/VISUAL.2001.964499
Body-Centered Cubic Grid, Reconstruction, Optimal Regular Volume Sampling, Radial Basis Function Interpolation
Vis
2005
Profile Flags: a novel metaphor for probing of T2 maps
10.1109/VISUAL.2005.1532847
5. 606
C
This paper describes a tool for the visualization of T2 maps of knee cartilage. Given the anatomical scan, and the T2 map of the cartilage, we combine the information on the shape and the quality of the cartilage in a single image. The Profile Flag is an intuitive 3D glyph for probing and annotating the underlying data. It comprises a bulletin board pin-like shape with a small flag on top of it. While moving the glyph along the reconstructed surface of an object, the curve data measured along the pin's needle and in its neighborhood are shown on the flag. The application area of the Profile Flag is manifold, enabling the visualization of profile data of dense but inhomogeneous objects. Furthermore, it extracts the essential part of the data without removing or even reducing the context information. By sticking Profile Flags into the investigated structure, one or more significant locations can be annotated by showing the local characteristics of the data at those locations. In this paper we demonstrate the properties of the tool by visualizing T2 maps of knee cartilage.
Mlejnek, M.;Ermest, P.;Vilanova, A.;van der Rijt, R.;van den Bosch, H.;Gerritsen, F.;Groller, E.
Inst. of Comput. Graphics & Algorithms, Vienna Univ. of Technol., Austria|c|;;;;;;
10.1109/VISUAL.2000.885733;10.1109/VISUAL.1993.398849;10.1109/VISUAL.2002.1183752;10.1109/VISUAL.2004.56
visualization in medicine, applications of visualization
Vis
2005
Quality mesh generation for molecular skin surfaces using restricted union of balls
10.1109/VISUAL.2005.1532822
3. 405
C
Quality surface meshes for molecular models are desirable in the studies of protein shapes and functionalities. However, there is still no robust software that is capable of generating such meshes with good quality. In this paper, we present a Delaunay-based surface triangulation algorithm generating quality surface meshes for the molecular skin model. We expand the restricted union of balls along the surface and generate an ε-sampling of the skin surface incrementally. At the same time, a quality surface mesh is extracted from the Delaunay triangulation of the sample points. The algorithm supports robust and efficient implementation and guarantees the mesh quality and topology as well. Our results facilitate molecular visualization and have made a contribution towards generating quality volumetric tetrahedral meshes for the macromolecules.
Cheng, H.-L.;Shi, X.
Sch. of Comput., Nat. Univ. of Singapore, Singapore|c|;
10.1109/VISUAL.2004.36
Smooth surfaces, meshing, restricted union of balls, Delaunay triangulation, guaranteed quality triangulation, homeomorphism
Vis
2005
Query-driven visualization of large data sets
10.1109/VISUAL.2005.1532792
1. 174
C
We present a practical and general-purpose approach to large and complex visual data analysis where visualization processing, rendering and subsequent human interpretation is constrained to the subset of data deemed interesting by the user. In many scientific data analysis applications, “interesting” data can be defined by compound Boolean range queries of the form (temperature > 1000) AND (70 < pressure < 90). As data sizes grow larger, a central challenge is to answer such queries as efficiently as possible. Prior work in the visualization community has focused on answering range queries for scalar fields within the context of accelerating the search phase of isosurface algorithms. In contrast, our work describes an approach that leverages state-of-the-art indexing technology from the scientific data management community called “bitmap indexing.” Our implementation, which we call “DEX” (short for dextrous data explorer), uses bitmap indexing to efficiently answer multivariate, multidimensional data queries to provide input to a visualization pipeline. We present an analysis overview and benchmark results that show bitmap indexing offers significant storage and performance improvements when compared to previous approaches for accelerating the search phase of isosurface algorithms. More importantly, since bitmap indexing supports complex multidimensional, multivariate range queries, it is more generally applicable to scientific data visualization and analysis problems. In addition to benchmark performance and analysis, we apply DEX to a typical scientific visualization problem encountered in combustion simulation data analysis.
Stockinger, K.;Shalf, J.;Kesheng Wu;Bethel, E.W.
Computational Res. Div., Lawrence Berkeley Lab., CA, USA|c|;;;
10.1109/VISUAL.1999.809864;10.1109/VISUAL.2004.95;10.1109/VISUAL.1998.745299;10.1109/VISUAL.1996.568121
query-driven visualization, visual analytics, bitmap index, multivariate visualization, large data visualization, data analysis, scientific data management
Vis
2005
Reconstructing manifold and non-manifold surfaces from point clouds
10.1109/VISUAL.2005.1532824
4. 422
C
This paper presents a novel approach for surface reconstruction from point clouds. The proposed technique is general in the sense that it naturally handles both manifold and non-manifold surfaces, providing a consistent way for reconstructing closed surfaces as well as surfaces with boundaries. It is also robust in the presence of noise, irregular sampling and surface gaps. Furthermore, it is fast, parallelizable and easy to implement because it is based on simple local operations. In this approach, surface reconstruction consists of three major steps: first, the space containing the point cloud is subdivided, creating a voxel representation. Then, a voxel surface is computed using gap filling and topological thinning operations. Finally, the resulting voxel surface is converted into a polygonal mesh. We demonstrate the effectiveness of our approach by reconstructing polygonal models from range scans of real objects as well as from synthetic data.
Wang, J.;Oliveira, M.M.;Kaufman, A.
Stony Brook Univ., NY, USA|c|;;
10.1109/VISUAL.2001.964489;10.1109/VISUAL.2001.964528
surface reconstruction, non-manifold surfaces, topological thinning
Vis
2005
Reflection nebula visualization
10.1109/VISUAL.2005.1532803
2. 262
C
Stars form in dense clouds of interstellar gas and dust. The residual dust surrounding a young star scatters and diffuses its light, making the star's "cocoon" of dust observable from Earth. The resulting structures, called reflection nebulae, are commonly very colorful in appearance due to wavelength-dependent effects in the scattering and extinction of light. The intricate interplay of scattering and extinction causes the color hues, brightness distributions, and the apparent shapes of such nebulae to vary greatly with viewpoint. We describe an interactive visualization tool for realistically rendering the appearance of arbitrary 3D dust distributions surrounding one or more illuminating stars. Our rendering algorithm is based on the physical models used in astrophysics research. The tool can be used to create virtual fly-throughs of reflection nebulae for interactive desktop visualizations, or to produce scientifically accurate animations for educational purposes, e.g., in planetarium shows. The algorithm is also applicable to investigate on-the-fly the visual effects of physical parameter variations, exploiting visualization technology to help gain a deeper and more intuitive understanding of the complex interaction of light and dust in real astrophysical settings.
Magnor, M.;Hildebrand, K.;Lintu, A.;Hanson, A.J.
;;;
10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2004.18
volume rendering, global illumination, dust, nebula,astronomy
Vis
2005
Rendering tetrahedral meshes with higher-order attenuation functions for digital radiograph reconstruction
10.1109/VISUAL.2005.1532809
3. 310
C
This paper presents a novel method for computing simulated x-ray images, or DRRs (digitally reconstructed radiographs), of tetrahedral meshes with higher-order attenuation functions. DRRs are commonly used in computer assisted surgery (CAS), with the attenuation function consisting of a voxelized CT study, which is viewed from different directions. Our application of DRRs is in intra-operative "2D-3D" registration, i.e., finding the pose of the CT dataset given a small number of patient radiographs. We register 2D patient images with a statistical tetrahedral model, which encodes the CT intensity numbers as Bernstein polynomials, and includes knowledge about typical shape variation modes. The unstructured grid is more suitable for applying deformations than a rectilinear grid, and the higher-order polynomials provide a better approximation of the actual density than constant or linear models. The intra-operative environment demands a fast method for creating the DRRs, which we present here. We demonstrate this application through the creation and use of a deformable atlas of human pelvis bones. Compared with other works on rendering unstructured grids, the main contributions of this work are: 1) Simple and perspective-correct interpolation of the thickness of a tetrahedral cell. 2) Simple and perspective-correct interpolation of front and back barycentric coordinates with respect to the cell. 3) Computing line integrals of higher-order functions. 4) Capability of applying shape deformations and variations in the attenuation function without significant performance loss. The method does not depend on pre-integration, and does not require depth-sorting of the visualized cells. We present imaging and timing results of implementing the algorithm, and discuss the impact of using higher-order functions on the quality of the result and the performance.
Sadowsky, O.;Cohen, J.D.;Taylor, R.H.
Johns Hopkins Univ., Laurel, MD, USA|c|;;
10.1109/VISUAL.2000.885683;10.1109/VISUAL.2004.85
volume rendering, unstructured grids, projected tetrahedra, DRR, higher-order volumetric functions
Vis
2005
Scale-invariant volume rendering
10.1109/VISUAL.2005.1532808
2. 302
C
As standard volume rendering is based on an integral in physical space (or "coordinate space"), it is inherently dependent on the scaling of this space. Although this dependency is appropriate for the realistic rendering of semitransparent volumetric objects, it has several unpleasant consequences for volume visualization. In order to overcome these disadvantages, a new variant of the volume rendering integral is proposed, which is defined in data space instead of physical space. Apart from achieving scale invariance, this new method supports the rendering of isosurfaces of uniform opacity and color, independently of the local gradient of the visualized scalar field. Moreover, it reveals certain structures in scalar fields even with constant transfer functions. Furthermore, it can be defined as the limit of infinitely many semitransparent isosurfaces, and is therefore based on an intuitive and at the same time precise definition. In addition to the discussion of these features of scale-invariant volume rendering, efficient adaptations of existing volume rendering algorithms and extensions for silhouette enhancement and local illumination by transmitted light are presented.
Kraus, M.
10.1109/VISUAL.2000.885683;10.1109/VISUAL.2000.885694;10.1109/VISUAL.1994.346331
volume visualization, volume rendering, isosurfaces, silhouette enhancement, volume shading, translucence
Vis
2005
Sort-middle multi-projector immediate-mode rendering in Chromium
10.1109/VISUAL.2005.1532784
1. 110
C
Traditionally, sort-middle is a technique that has been difficult to attain on clusters because of the tight coupling of geometry and rasterization processes on commodity graphics hardware. In this paper, we describe the implementation of a new sort-middle approach for performing immediate-mode rendering in Chromium. The Chromium Rendering System is used extensively to drive multi-projector displays on PC clusters with inexpensive commodity graphics components. By default, Chromium uses a sort-first approach to distribute rendering work to individual nodes in a PC cluster. While this sort-first approach works effectively in retained-mode rendering, it suffers from various network bottlenecks when rendering in immediate-mode. Current techniques avoid these bottlenecks by sorting vertex data as a pre-processing step and grouping vertices into specific bounding boxes, using Chromium's bounding box extension. These steps may be expensive, especially if the dataset is dynamic. In our approach, we utilize standard programmable graphics hardware and extend standard APIs to achieve a separation in the rendering pipeline. The pre-processing of vertex data or the grouping of vertices into bounding boxes is not required. Additionally, the number of OpenGL state commands transmitted over the network is reduced. Our results indicate that the approach can attain twice the frame rates as compared to Chromium's sort-first approach when rendering in immediate-mode.
Williams, J.L.;Hiromoto, R.E.
Dept. of Comput. Sci., Idaho Univ., Moscow, ID, USA|c|;
Cluster Rendering, Sort-Middle, Multi-Projector, Tile Displays, Immediate-Mode Rendering
Vis
2005
Statistically quantitative volume visualization
10.1109/VISUAL.2005.1532807
2. 294
C
Visualization users are increasingly in need of techniques for assessing quantitative uncertainty and error in the images produced. Statistical segmentation algorithms compute these quantitative results, yet volume rendering tools typically produce only qualitative imagery via transfer function-based classification. This paper presents a visualization technique that allows users to interactively explore the uncertainty, risk, and probabilistic decision of surface boundaries. Our approach makes it possible to directly visualize the combined "fuzzy" classification results from multiple segmentations by combining these data into a unified probabilistic data space. We represent this unified space, the combination of scalar volumes from numerous segmentations, using a novel graph-based dimensionality reduction scheme. The scheme both dramatically reduces the dataset size and is suitable for efficient, high quality, quantitative visualization. Lastly, we show that the statistical risk arising from overlapping segmentations is a robust measure for visualizing features and assigning optical properties.
Kniss, J.;Van Uitert, R.;Stephens, A.;Li, G.-S.;Tasdizen, T.;Hansen, C.
Utah Univ., Salt Lake City, UT, USA|c|;;;;;
10.1109/VISUAL.2003.1250386;10.1109/VISUAL.1998.745311;10.1109/VISUAL.2004.48;10.1109/VISUAL.1997.663875
volume visualization, uncertainty, classification, risk analysis
Vis
2005
Strategy for seeding 3D streamlines
10.1109/VISUAL.2005.1532831
4. 478
C
This paper presents a strategy for seeding streamlines in 3D flow fields. Its main goal is to capture the essential flow patterns and to provide sufficient coverage in the field while reducing clutter. First, critical points of the flow field are extracted to identify regions with important flow patterns that need to be presented. Different seeding templates are then used around the vicinity of the different critical points. Because there is significant variability in the flow pattern even for the same type of critical point, our template can change shape depending on how far the critical point is from transitioning into another type of critical point. To accomplish this, we introduce the α-β map of 3D critical points. Next, we use Poisson seeding to populate the empty regions. Finally, we filter the streamlines based on their geometric and spatial properties. Altogether, this multi-step strategy reduces clutter and yet captures the important 3D flow features.
Xiangong Ye;Kao, D.;Pang, A.
Comput. Sci. Dept., UCSC, USA|c|;;
10.1109/VISUAL.2000.885690;10.1109/VISUAL.1996.567777;10.1109/VISUAL.2000.885688;10.1109/VISUAL.1999.809865;10.1109/VISUAL.1991.175771;10.1109/VISUAL.2003.1250376
streamlines, flow guided, feature based, filtering, critical points, variable templates
Vis
2005
Stream-processing points
10.1109/VISUAL.2005.1532801
2. 246
C
With the growing size of captured 3D models it has become increasingly important to provide basic efficient processing methods for large unorganized raw surface-sample point data sets. In this paper we introduce a novel stream-based (and out-of-core) point processing framework. The proposed approach processes points in an orderly sequential way by sorting them and sweeping along a spatial dimension. The major advantages of this new concept are: (1) support of extensible and concatenable local operators called stream operators, (2) low main-memory usage and (3) applicability to process very large data sets out-of-core.
Pajarola, R.
Dept. of Informatics, Zurich Univ., Switzerland|c|
10.1109/VISUAL.2001.964489;10.1109/VISUAL.2000.885721;10.1109/VISUAL.2002.1183770;10.1109/VISUAL.2003.1250408;10.1109/VISUAL.2005.1532800;10.1109/VISUAL.2002.1183771
point processing, sequential processing, normal estimation, curvature estimation, fairing
Vis
2005
Streaming meshes
10.1109/VISUAL.2005.1532800
2. 238
C
Recent years have seen an immense increase in the complexity of geometric data sets. Today's gigabyte-sized polygon models can no longer be completely loaded into the main memory of common desktop PCs. Unfortunately, current mesh formats, which were designed years ago when meshes were orders of magnitudes smaller, do not account for this. Using such formats to store large meshes is inefficient and complicates all subsequent processing. We describe a streaming format for polygon meshes that is simple enough to replace current offline mesh formats and is more suitable for representing large data sets. Furthermore, it is an ideal input and output format for I/O-efficient out-of-core algorithms that process meshes in a streaming, possibly pipelined, fashion. This paper chiefly concerns the underlying theory and the practical aspects of creating and working with this new representation. In particular, we describe desirable qualities for streaming meshes and methods for converting meshes from a traditional to a streaming format. A central theme of this paper is the issue of coherent and compatible layouts of the mesh vertices and polygons. We present metrics and diagrams that characterize the coherence of a mesh layout and suggest appropriate strategies for improving its "streamability". To this end, we outline several out-of-core algorithms for reordering meshes with poor coherence, and present results for a menagerie of well known and generally incoherent surface meshes.
Isenburg, M.;Lindstrom, P.
North Carolina Univ., Chapel Hill, NC, USA|c|;
10.1109/INFVIS.2002.1173159;10.1109/VISUAL.1997.663895;10.1109/VISUAL.2001.964532;10.1109/VISUAL.2003.1250408
Vis
2005
Surface reconstruction via contour metamorphosis: an Eulerian approach with Lagrangian particle tracking
10.1109/VISUAL.2005.1532823
4. 414
C
We present a robust method for 3D reconstruction of closed surfaces from sparsely sampled parallel contours. A solution to this problem is especially important for medical segmentation, where manual contouring of 2D imaging scans is still extensively used. Our proposed method is based on a morphing process applied to neighboring contours that sweeps out a 3D surface. Our method is guaranteed to produce closed surfaces that exactly pass through the input contours, regardless of the topology of the reconstruction. Our general approach consecutively morphs between sets of input contours using an Eulerian formulation (i.e. fixed grid) augmented with Lagrangian particles (i.e. interface tracking). This is numerically accomplished by propagating the input contours as 2D level sets with carefully constructed continuous speed functions. Specifically this involves particle advection to estimate distances between the contours, monotonicity constrained spline interpolation to compute continuous speed functions without overshooting, and state-of-the-art numerical techniques for solving the level set equations. We demonstrate the robustness of our method on a variety of medical, topographic and synthetic data sets.
Nilsson, O.;Breen, D.;Museth, K.
Linkoping Univ., Sweden|c|;;
10.1109/VISUAL.1996.567812;10.1109/VISUAL.2002.1183773;10.1109/VISUAL.1998.745281;10.1109/VISUAL.1995.480820
3D reconstruction, contours, level sets
Vis
2005
Teniae coli guided navigation and registration for virtual colonoscopy
10.1109/VISUAL.2005.1532806
2. 285
C
We present a new method for guiding virtual colonoscopic navigation and registration by using teniae coli as anatomical landmarks. As most existing protocols require a patient to be scanned in both supine and prone positions to increase sensitivity in detecting colonic polyps, reference and registration between scans are necessary. However, the conventional centerline approach, generating only the longitudinal distance along the colon, lacks the necessary orientation information to synchronize the virtual navigation cameras in both scanned positions. In this paper we describe a semi-automatic method to detect teniae coli from a colonic surface model reconstructed from CT colonography. Teniae coli are three bands of longitudinal smooth muscle on the surface of the colon. They form a triple helix structure from the appendix to the sigmoid colon and are ideal references for virtual navigation. Our method was applied to 3 patients resulting in 6 data sets (supine and prone scans). The detected teniae coli matched well with our visual inspection. In addition, we demonstrate that polyps visible on both scans can be located and matched more efficiently with the aid of a teniae coli guided navigation implementation.
Huang, A.;Roy, D.;Franaszek, M.;Summers, R.M.
Diagnostic Radiol. Dept., Nat. Inst. of Health, Bethesda, MD, USA|c|;;;
10.1109/VISUAL.2002.1183808;10.1109/VISUAL.2001.964540
virtual colonoscopy, CT colonography, virtual endoscopy, camera control, computer-aided diagnosis, colon flattening, parameterization