IEEE VIS Publication Dataset

Vis
2006
Detection and Visualization of Defects in 3D Unstructured Models of Nematic Liquid Crystals
10.1109/TVCG.2006.133
1. 1052
J
A method for the semi-automatic detection and visualization of defects in models of nematic liquid crystals (NLCs) is introduced; this method is suitable for unstructured models, a previously unsolved problem. The detected defects - also known as disclinations - are regions where the alignment of the liquid crystal rapidly changes over space; these defects play a large role in the physical behavior of the NLC substrate. Defect detection is based upon a measure of total angular change of crystal orientation (the director) over a node neighborhood via the use of a nearest-neighbor path. Visualizations based upon the detection algorithm clearly identify complete defect regions as opposed to incomplete visual descriptions provided by cutting-plane and isosurface approaches. The introduced techniques are currently in use by scientists studying the dynamics of defect change
Mehta, K.;Jankun-Kelly, T.J.
Mississippi State Univ., MS|c|;
10.1109/TVCG.2006.181;10.1109/TVCG.2006.182;10.1109/VISUAL.1997.663894;10.1109/VISUAL.2004.23;10.1109/TVCG.2010.212;10.1109/VISUAL.2001.964507
scientific visualization, disclination, nematic liquid crystal, defects, unstructured grid, feature extraction
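
A minimal Python sketch of the total-angular-change measure described in the abstract above; the greedy nearest-neighbor ordering, the per-node input layout, and the threshold value are assumptions for illustration, not details from the paper.

import numpy as np

def angle_between_directors(d1, d2):
    # Directors are headless (n and -n are equivalent), so use |dot|.
    return np.arccos(abs(np.clip(np.dot(d1, d2), -1.0, 1.0)))

def greedy_neighbor_path(points):
    # Approximate a nearest-neighbor path through the neighborhood.
    remaining = list(range(1, len(points)))
    path = [0]
    while remaining:
        last = points[path[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        remaining.remove(nxt)
        path.append(nxt)
    return path

def total_angular_change(dirs, pts):
    # Sum director rotation between consecutive nodes on the path.
    order = greedy_neighbor_path(pts)
    return sum(angle_between_directors(dirs[a], dirs[b])
               for a, b in zip(order, order[1:]))

def detect_defects(neighborhoods, threshold=np.pi / 2):
    # neighborhoods: per node, (directors, positions) of its neighbors.
    # The threshold is a placeholder, not the paper's calibrated value.
    return [i for i, (d, p) in enumerate(neighborhoods)
            if total_angular_change(np.asarray(d), np.asarray(p)) > threshold]
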
Vis
2006
Diffusion Tensor Visualization with Glyph Packing
10.1109/TVCG.2006.134
1. 1336
J
A common goal of multivariate visualization is to enable data inspection at discrete points, while also illustrating larger-scale continuous structures. In diffusion tensor visualization, glyphs are typically used to meet the first goal, and methods such as texture synthesis or fiber tractography can address the second. We adapt particle systems originally developed for surface modeling and anisotropic mesh generation to enhance the utility of glyph-based tensor visualizations. By carefully distributing glyphs throughout the field (either on a slice, or in the volume) into a dense packing, using potential energy profiles shaped by the local tensor value, we remove undue visual emphasis of the regular sampling grid of the data, and the underlying continuous features become more apparent. The method is demonstrated on a DT-MRI scan of a patient with a brain tumor
Kindlmann, G.;Westin, C.-F.
Dept. of Radiol., Harvard Med. Sch., Boston, MA|c|;
10.1109/VISUAL.2004.25;10.1109/VISUAL.1998.745294;10.1109/VISUAL.2004.80;10.1109/VISUAL.2002.1183797;10.1109/VISUAL.1995.485141;10.1109/VISUAL.1999.809905;10.1109/VISUAL.2003.1250379
Diffusion tensor, glyphs, particle systems, anisotropic sampling, fiber tractography
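
The packing idea above lends itself to a toy sketch: particles repel each other through a potential whose reach is warped by the local tensor. Everything concrete below (2D, one constant diagonal tensor, a linear-falloff force, plain gradient steps) is an assumption for brevity; the paper shapes its potential energy profiles more carefully from the tensor field.

import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(64, 2))          # particle positions
T = np.diag([1.0, 0.25])                       # toy local diffusion tensor
Tinv = np.linalg.inv(T)

def step(pts, radius=0.15, lr=0.01):
    forces = np.zeros_like(pts)
    for i in range(len(pts)):
        d = pts[i] - pts                       # offsets from all particles
        # Tensor-warped distance: glyphs pack tighter along the minor axis.
        r = np.sqrt(np.einsum('nj,jk,nk->n', d, Tinv, d))
        mask = (r > 1e-9) & (r < radius)
        # Linear-falloff repulsion inside the interaction radius.
        w = (radius - r[mask]) / r[mask]
        forces[i] = (w[:, None] * d[mask]).sum(axis=0)
    return np.clip(pts + lr * forces, 0, 1)

for _ in range(200):                           # relax toward a dense packing
    pts = step(pts)
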
Vis
2006
Distributed Shared Memory for Roaming Large Volumes
10.1109/TVCG.2006.135
1. 1306
J
We present a cluster-based volume rendering system for roaming very large volumes. This system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real-time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming
Castanie, L.;Mion, C.;Cavin, X.;Levy, B.
ALICE Group, INRIA, Lorraine|c|;;;
10.1109/VISUAL.2005.1532794;10.1109/VISUAL.1997.663888;10.1109/VISUAL.2005.1532785;10.1109/VISUAL.2005.1532802
Large volumes, volume roaming, out-of-core, hierarchical caching, distributed shared memory, hardware-accelerated volume visualization, graphics hardware, parallel rendering, graphics cluster
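
The cache-miss policy is the heart of the scheme above and is easy to sketch. The classes and the in-memory "disk" below are hypothetical stand-ins; only the lookup order (local cache, then peer nodes, then local disk) comes from the abstract.

from typing import Optional

class Node:
    def __init__(self, disk):
        self.cache = {}            # page_id -> bytes held in local memory
        self.peers = []            # other cluster nodes
        self.disk = disk           # page_id -> bytes (simulated local disk)

    def lookup_local(self, page_id) -> Optional[bytes]:
        return self.cache.get(page_id)

    def fetch(self, page_id) -> bytes:
        # 1. Local cache hit: cheapest path.
        page = self.lookup_local(page_id)
        if page is not None:
            return page
        # 2. Check page residency on the other cluster nodes first;
        #    network fetches beat local-disk reads (the paper reports
        #    roughly a 4x speedup from this policy).
        for peer in self.peers:
            page = peer.lookup_local(page_id)
            if page is not None:
                self.cache[page_id] = page
                return page
        # 3. Fall back to local disk only when no node holds the page.
        page = self.disk[page_id]
        self.cache[page_id] = page
        return page
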
Vis
2006
Dynamic View Selection for Time-Varying Volumes
10.1109/TVCG.2006.137
1. 1116
J
Animation is an effective way to show how time-varying phenomena evolve over time. A key issue in generating a good animation is selecting ideal views through which the user can perceive the maximum amount of information from the time-varying dataset. In this paper, we first propose an improved view selection method for static data. The method measures the quality of a static view by analyzing the opacity, color and curvature distributions of the corresponding volume rendering images from the given view. Our view selection metric prefers an even opacity distribution with a larger projection area, a larger area of salient features' colors with an even distribution among the salient features, and more perceived curvatures. We use this static view selection method and a dynamic programming approach to select time-varying views. The time-varying view selection maximizes the information perceived from the time-varying dataset based on the constraints that the time-varying view should show smooth changes of direction and near-constant speed. We also introduce a method that allows the user to generate a smooth transition between any two views in a given time step, with the perceived information maximized as well. By combining the static and dynamic view selection methods, users are able to generate a time-varying view that shows the maximum amount of information from a time-varying dataset
Guangfeng Ji;Han-Wei Shen
Ohio State Univ., Columbus, OH|c|;
10.1109/VISUAL.2003.1250414;10.1109/VISUAL.1996.567807;10.1109/VISUAL.1999.809893;10.1109/INFVIS.2003.1249004;10.1109/VISUAL.2005.1532834;10.1109/VISUAL.2005.1532857;10.1109/VISUAL.2005.1532833;10.1109/VISUAL.2002.1183785;10.1109/VISUAL.2003.1250402
Static view selection, image based method, dynamic view selection, information entropy, optimization
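
The dynamic-programming step can be sketched independently of the image-based metric. Here `score` stands in for the static view-quality measure and `neighbors` encodes the smooth-direction/near-constant-speed constraint; both are assumed interfaces, not the paper's.

import numpy as np

def best_view_path(score, neighbors):
    # score: (T, V) array of per-timestep view quality; neighbors[v]
    # lists views reachable from v in one step (include v to allow rest).
    T, V = score.shape
    best = np.full((T, V), -np.inf)
    back = np.zeros((T, V), dtype=int)
    best[0] = score[0]
    for t in range(1, T):
        for v in range(V):
            prev = max(neighbors[v], key=lambda u: best[t - 1][u])
            best[t, v] = best[t - 1, prev] + score[t, v]
            back[t, v] = prev
    # Trace back the information-maximizing path.
    path = [int(np.argmax(best[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: 10 time steps, 6 candidate views on a ring.
score = np.random.default_rng(0).random((10, 6))
neighbors = {v: [v, (v - 1) % 6, (v + 1) % 6] for v in range(6)}
path = best_view_path(score, neighbors)
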
Vis
2006
Enhancing Depth Perception in Translucent Volumes
10.1109/TVCG.2006.139
1. 1124
J
We present empirical studies that consider the effects of stereopsis and simulated aerial perspective on depth perception in translucent volumes. We consider a purely absorptive lighting model, in which light is not scattered or reflected, but is simply absorbed as it passes through the volume. A purely absorptive lighting model is used, for example, when rendering digitally reconstructed radiographs (DRRs), which are synthetic X-ray images reconstructed from CT volumes. Surgeons make use of DRRs in planning and performing operations, so an improvement of depth perception in DRRs may help diagnosis and surgical planning
Kersten, M.A.;Stewart, A.J.;Troje, N.;Ellis, R.
Med. Comput. Lab., Queen's Univ.|c|;;;
Stereo, Stereopsis, X-ray, Radiograph, Volume Rendering
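
The purely absorptive model mentioned above is just Beer-Lambert attenuation, which a few lines of Python make concrete; the axis-aligned parallel projection and the toy sphere volume are assumptions for brevity (clinical DRRs trace perspective rays from the X-ray source).

import numpy as np

def render_drr(mu, ds=1.0, I0=1.0):
    # mu: (Z, Y, X) volume of attenuation coefficients. Light is neither
    # scattered nor reflected, only absorbed: I = I0 * exp(-sum(mu * ds)).
    optical_depth = mu.sum(axis=0) * ds
    return I0 * np.exp(-optical_depth)

# Toy example: a dense sphere inside an otherwise empty volume.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
mu = 0.05 * ((x**2 + y**2 + z**2) < 20**2)
image = render_drr(mu)
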
Vis
2006
Exploded Views for Volume Data
10.1109/TVCG.2006.140
1. 1084
J
Exploded views are an illustration technique where an object is partitioned into several segments. These segments are displaced to reveal otherwise hidden detail. In this paper we apply the concept of exploded views to volumetric data in order to solve the general problem of occlusion. In many cases an object of interest is occluded by other structures. While transparency or cutaways can be used to reveal a focus object, these techniques remove parts of the context information. Exploded views, on the other hand, do not suffer from this drawback. Our approach employs a force-based model: the volume is divided into a part configuration controlled by a number of forces and constraints. The focus object exerts an explosion force causing the parts to arrange according to the given constraints. We show that this novel and flexible approach allows for a wide variety of explosion-based visualizations including view-dependent explosions. Furthermore, we present a high-quality GPU-based volume ray casting algorithm for exploded views which allows rendering and interaction at several frames per second
Bruckner, S.;Groller, E.
Inst. of Comput. Graphics & Algorithms, Vienna Univ. of Technol.|c|;
10.1109/VISUAL.2003.1250400;10.1109/VISUAL.2005.1532783;10.1109/VISUAL.2005.1532856;10.1109/VISUAL.2005.1532807;10.1109/VISUAL.2003.1250384;10.1109/INFVIS.1996.559215;10.1109/VISUAL.2004.104;10.1109/VISUAL.2005.1532817
Illustrative visualization, exploded views, volume rendering
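
A toy sketch of the force-based model above: a radial explosion force from the focus object balanced against a spring pulling each part back toward its rest position. The force laws, constants, and rigid-centroid simplification are assumptions; the paper's constraint system is considerably richer.

import numpy as np

def explode(part_centroids, focus, strength=1.0, spring=0.5,
            steps=100, dt=0.1):
    rest = part_centroids.copy()
    pos = part_centroids.copy()
    for _ in range(steps):
        d = pos - focus                       # direction away from focus
        r = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        f_explode = strength * d / (r * r)    # falls off with distance
        f_spring = spring * (rest - pos)      # constraint pulling parts back
        pos += dt * (f_explode + f_spring)
    return pos

parts = np.array([[0.5, 0.0, 0.0], [-0.5, 0.1, 0.0], [0.0, 0.6, 0.0]])
exploded = explode(parts, focus=np.zeros(3))
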
Vis
2006
Extensions of the Zwart-Powell Box Spline for Volumetric Data Reconstruction on the Cartesian Lattice
10.1109/TVCG.2006.141
1. 1344
J
In this article we propose a box spline and its variants for reconstructing volumetric data sampled on the Cartesian lattice. In particular we present a tri-variate box spline reconstruction kernel that is superior to tensor product reconstruction schemes in terms of recovering the proper Cartesian spectrum of the underlying function. This box spline produces a C² reconstruction that can be considered as a three dimensional extension of the well known Zwart-Powell element in 2D. While its smoothness and approximation power are equivalent to those of the tri-cubic B-spline, we illustrate the superiority of this reconstruction on functions sampled on the Cartesian lattice and contrast it to tensor product B-splines. Our construction is validated through a Fourier domain analysis of the reconstruction behavior of this box spline. Moreover, we present a stable method for evaluation of this box spline by means of a decomposition. Through a convolution, this decomposition reduces the problem to evaluation of a four directional box spline that we previously published in its explicit closed form.
Entezari, A.;Möller, T.
Sch. of Comput. Sci., Simon Fraser Univ., Burnaby, BC|c|;
10.1109/VISUAL.1994.346331;10.1109/VISUAL.1993.398851;10.1109/VISUAL.2005.1532811;10.1109/VISUAL.2004.65;10.1109/VISUAL.1997.663848
Volumetric data interpolation, reconstruction, box splines
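
For reference, the construction the title alludes to: a box spline M_Xi is built by successive directional averaging, and the 2D Zwart-Powell element uses the four-direction set below. The paper's tri-variate kernel extends this scheme to 3D; its exact direction matrix is not reproduced here.

M_{\Xi \cup \{\zeta\}}(x) = \int_0^1 M_{\Xi}(x - t\,\zeta)\,dt,
\qquad
\Xi_{\mathrm{ZP}} = \left\{ \begin{pmatrix}1\\0\end{pmatrix},
\begin{pmatrix}0\\1\end{pmatrix},
\begin{pmatrix}1\\1\end{pmatrix},
\begin{pmatrix}1\\-1\end{pmatrix} \right\}
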
Vis
2006
Fast and Efficient Compression of Floating-Point Data
10.1109/TVCG.2006.143
1. 1250
J
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data
Lindstrom, P.;Isenburg, M.
Lawrence Livermore Nat. Lab., Berkeley, CA|c|;
10.1109/VISUAL.1999.809868;10.1109/VISUAL.2000.885711;10.1109/VISUAL.2002.1183768;10.1109/VISUAL.1996.568138
High throughput, lossless compression, file compaction for I/O efficiency, fast entropy coding, range coder, predictive coding, large scale simulation and visualization
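
The predict-then-code structure is easy to sketch losslessly in Python. The trivial previous-value predictor and byte-count coder below are placeholders for the paper's data-dependent predictors and fast entropy coder; what carries over is the exact bit-level mapping that avoids quantization.

import struct

def float_to_bits(x: float) -> int:
    return struct.unpack('<Q', struct.pack('<d', x))[0]

def compress(values):
    out, prev = [], 0
    for v in values:
        bits = float_to_bits(v)
        resid = bits ^ prev            # good prediction -> many zero bits
        prev = bits
        nbytes = max(1, (resid.bit_length() + 7) // 8)
        out.append(bytes([nbytes]) + resid.to_bytes(nbytes, 'little'))
    return b''.join(out)

def decompress(blob, count):
    vals, prev, i = [], 0, 0
    for _ in range(count):
        nbytes = blob[i]; i += 1
        resid = int.from_bytes(blob[i:i + nbytes], 'little'); i += nbytes
        prev ^= resid                  # invert the XOR prediction mapping
        vals.append(struct.unpack('<d', struct.pack('<Q', prev))[0])
    return vals

data = [1.0, 1.0000001, 1.0000002, 1.0000004]
assert decompress(compress(data), len(data)) == data   # exactly lossless
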
Vis
2006
Feature Aligned Volume Manipulation for Illustration and Visualization
10.1109/TVCG.2006.144
1. 1076
J
In this paper we describe a GPU-based technique for creating illustrative visualization through interactive manipulation of volumetric models. It is partly inspired by medical illustrations, where it is common to depict cuts and deformation in order to provide a better understanding of anatomical and biological structures or surgical processes, and partly motivated by the need for a real-time solution that supports the specification and visualization of such illustrative manipulation. We propose two new feature-aligned techniques, namely surface alignment and segment alignment, and compare them with the axis-aligned techniques which were reported in previous work on volume manipulation. We also present a mechanism for defining features using texture volumes, and methods for computing correct normals for the deformed volume with respect to different alignments. We describe a GPU-based implementation to achieve real-time performance of the techniques and a collection of manipulation operators including peelers, retractors, pliers and dilators which are adaptations of the metaphors and tools used in surgical procedures and medical illustrations. Our approach is directly applicable in medical and biological illustration, and we demonstrate how it works as an interactive tool for focus+context visualization, as well as a generic technique for volume graphics
Correa, C.;Silver, D.;Chen, M.
Dept. of Electr. & Comput. Eng., State Univ. of New Jersey, NJ|c|;;
10.1109/VISUAL.2003.1250400;10.1109/VISUAL.2000.885694
Illustrative visualization, Illustrative manipulation, GPU computing, volume rendering, volume deformation, computer-assisted medical illustration
Vis
2006
Fine-Grained Visualization Pipelines and Lazy Functional Languages
10.1109/TVCG.2006.145
9. 980
J
The pipeline model in visualization has evolved from a conceptual model of data processing into a widely used architecture for implementing visualization systems. In the process, a number of capabilities have been introduced, including streaming of data in chunks, distributed pipelines, and demand-driven processing. Visualization systems have invariably been built on stateful programming technologies, and these capabilities have had to be implemented explicitly within the lower layers of a complex hierarchy of services. The good news for developers is that applications built on top of this hierarchy can access these capabilities without concern for how they are implemented. The bad news is that by freezing capabilities into low-level services, expressive power and flexibility are lost. In this paper we express visualization systems in a programming language that more naturally supports this kind of processing model. Lazy functional languages support fine-grained demand-driven processing, a natural form of streaming, and pipeline-like function composition for assembling applications. The technology thus appears well suited to visualization applications. Using surface extraction algorithms as illustrative examples, and the lazy functional language Haskell, we argue the benefits of clear and concise expression combined with fine-grained, demand-driven computation. Just as visualization provides insight into data, functional abstraction provides new insight into visualization
Duke, D.;Wallace, M.;Borgo, R.;Runciman, C.
Sch. of Comput., Leeds Univ.|c|;;;
10.1109/VISUAL.1994.346311;10.1109/VISUAL.1999.809864;10.1109/VISUAL.1993.398880;10.1109/VISUAL.1999.809891;10.1109/VISUAL.2005.1532800;10.1109/VISUAL.1997.663888
Pipeline model, laziness, functional programming
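
The paper's examples are in Haskell, where laziness is pervasive; Python generators give a rough flavor of the same demand-driven streaming, shown here with hypothetical slice-processing stages that compute nothing the consumer never requests.

def read_slices(volume):
    for z, s in enumerate(volume):        # stage 1: stream slices on demand
        print(f"loaded slice {z}")
        yield s

def threshold(slices, iso):
    for s in slices:                      # stage 2: per-slice processing
        yield [[v >= iso for v in row] for row in s]

volume = [[[z * 10 + y for y in range(4)] for _ in range(4)]
          for z in range(100)]
pipeline = threshold(read_slices(volume), iso=15)
first = next(pipeline)                    # only slice 0 is ever loaded
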
Vis
2006
Full Body Virtual Autopsies using a State-of-the-art Volume Rendering Pipeline
10.1109/TVCG.2006.146
8. 876
J
This paper presents a procedure for virtual autopsies based on interactive 3D visualizations of large scale, high resolution data from CT-scans of human cadavers. The procedure is described using examples from forensic medicine, and the added value and future potential of virtual autopsies are shown from a medical and forensic perspective. Based on the technical demands of the procedure, state-of-the-art volume rendering techniques are applied and refined to enable real-time, full body virtual autopsies involving gigabyte sized data on standard GPUs. The techniques applied include transfer function based data reduction using level-of-detail selection and multi-resolution rendering techniques. The paper also describes a data management component for large, out-of-core data sets and an extension to the GPU-based raycaster for efficient dual TF rendering. Detailed benchmarks of the pipeline are presented using data sets from forensic cases
Ljung, P.;Winskog, C.;Persson, A.;Lundstrom, C.;Ynnerman, A.
Div. for Visual Inf. Technol. & Applications, Linkoping Univ.|c|;;;;
10.1109/VISUAL.2002.1183757;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2005.1532794;10.1109/VISUAL.2003.1250391;10.1109/VISUAL.2005.1532799;10.1109/VISUAL.1999.809908
Forensics, autopsies, medical visualization, volume rendering, large scale data
Vis
2006
High-Level User Interfaces for Transfer Function Design with Semantics
10.1109/TVCG.2006.148
1. 1028
J
Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for non-expert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation
Salama, C.R.;Keller, M.;Kohlmann, P.
Comput. Graphics & Multimedia Syst. Group, Siegen Univ.|c|;;
10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.2002.1183764;10.1109/VISUAL.1998.745319;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2003.1250412;10.1109/VISUAL.1996.568113;10.1109/VISUAL.1997.663875
Volume rendering, transfer function design, semantic models
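
A sketch of how PCA can turn a set of expert-designed transfer-function presets into one semantic slider. The flat parameter vectors and the random presets are stand-ins; the paper's parametric TF models and its animation-style blending are not modeled.

import numpy as np

def fit_semantic_axis(presets):
    X = np.asarray(presets, dtype=float)    # (n_presets, n_params)
    mean = X.mean(axis=0)
    # First principal component via SVD of the centered preset matrix.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[0]

def slider_to_tf(mean, axis, t):
    # t in [-1, 1] moves the transfer function along the learned axis,
    # hiding the individual low-level parameters from the user.
    return mean + t * axis

presets = np.random.default_rng(1).normal(size=(12, 32))
mean, axis = fit_semantic_axis(presets)
tf_params = slider_to_tf(mean, axis, 0.5)
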
Vis
2006
High-Quality Extraction of Isosurfaces from Regular and Irregular Grids
10.1109/TVCG.2006.149
1. 1212
J
Isosurfaces are ubiquitous in many fields, including visualization, graphics, and vision. They are often the main computational component of important processing pipelines (e.g., surface reconstruction), and are heavily used in practice. The classical approach to compute isosurfaces is to apply the Marching Cubes algorithm, which although robust and simple to implement, generates surfaces that require additional processing steps to improve triangle quality and mesh size. An important issue is that in some cases, the surfaces generated by Marching Cubes are irreparably damaged, and important details are lost that cannot be recovered by subsequent processing. The main motivation of this work is to develop a technique capable of constructing high-quality and high-fidelity isosurfaces. We propose a new advancing front technique that is capable of creating high-quality isosurfaces from regular and irregular volumetric datasets. Our work extends the guidance field framework of Schreiner et al. to implicit surfaces, and improves it in significant ways. In particular, we describe a set of sampling conditions that guarantee that surface features will be captured by the algorithm. We also describe an efficient technique to compute a minimal guidance field, which greatly improves performance. Our experimental results show that our technique can generate high-quality meshes from complex datasets
Schreiner, J.;Scheidegger, C.E.;Silva, C.T.
SCI Inst., Utah Univ., Salt Lake City, UT|c|;;
10.1109/VISUAL.1991.175782;10.1109/VISUAL.2000.885705;10.1109/VISUAL.1997.663930;10.1109/VISUAL.2003.1250414;10.1109/VISUAL.2004.52
Isosurface Extraction, Curvature, Advancing Front
Vis
2006
Hub-based Simulation and Graphics Hardware Accelerated Visualization for Nanotechnology Applications
10.1109/TVCG.2006.150
1. 1068
J
The Network for Computational Nanotechnology (NCN) has developed a science gateway at nanoHUB.org for nanotechnology education and research. Remote users can browse through online seminars and courses, and launch sophisticated nanotechnology simulation tools, all within their Web browser. Simulations are supported by a middleware that can route complex jobs to grid supercomputing resources. But what is truly unique about the middleware is the way that it uses hardware-accelerated graphics to support both problem setup and result visualization. This paper describes the design and integration of a remote visualization framework into the nanoHUB for interactive visual analytics of nanotechnology simulations. Our services flexibly handle a variety of nanoscience simulations, render them utilizing graphics hardware acceleration in a scalable manner, and deliver them seamlessly through the middleware to the user. Rendering is done only on demand, so each graphics hardware unit can simultaneously support many user sessions. Additionally, a novel node distribution scheme further improves our system's scalability. Our approach is not only efficient but also cost-effective. Only a half-dozen render nodes are anticipated to support hundreds of active tool sessions on the nanoHUB. Moreover, this architecture and visual analytics environment provides capabilities that can serve many areas of scientific simulation and analysis beyond nanotechnology with its ability to interactively analyze and visualize multivariate scalar and vector fields
Wei Qiao;McLennan, M.;Kennell, R.;Ebert, D.S.;Klimeck, G.
Purdue Univ., West Lafayette, IN|c|;;;;
10.1109/VISUAL.2002.1183758;10.1109/VISUAL.2003.1250377;10.1109/VISUAL.2005.1532795;10.1109/VISUAL.1992.235211;10.1109/VISUAL.2005.1532811;10.1109/VISUAL.2003.1250361;10.1109/VISUAL.2000.885683;10.1109/VISUAL.2005.1532794;10.1109/VISUAL.2005.1532793;10.1109/VISUAL.1994.346315;10.1109/VISUAL.2000.885689
remote visualization, volume visualization, flow visualization, graphics hardware, nanotechnology simulation
Vis
2006
Hybrid Visualization for White Matter Tracts using Triangle Strips and Point Sprites
10.1109/TVCG.2006.151
1. 1188
J
Diffusion tensor imaging is of high value in neurosurgery, providing information about the location of white matter tracts in the human brain. For their reconstruction, streamline techniques, commonly referred to as fiber tracking, model the underlying fiber structures and have therefore gained interest. To meet the requirements of surgical planning and to overcome the visual limitations of line representations, a new real-time visualization approach of high visual quality is introduced. For this purpose, textured triangle strips and point sprites are combined in a hybrid strategy employing GPU programming. The triangle strips follow the fiber streamlines and are textured to obtain a tube-like appearance. A vertex program is used to orient the triangle strips towards the camera. In order to avoid triangle flipping in the case of fiber segments where the viewing and segment directions are parallel, a correct visual representation is achieved in these areas by chains of point sprites. As a result, high-quality visualization similar to tubes is provided, allowing for interactive multimodal inspection. Overall, the presented approach is faster than existing techniques of similar visualization quality and at the same time allows for real-time rendering of dense bundles encompassing a high number of fibers, which is of high importance for diagnosis and surgical planning
Merhof, D.;Sonntag, M.;Enders, F.;Nimsky, C.;Hastreiter, P.;Greiner, G.
Dept. of Neurosurgery, Univ. Erlangen|c|;;;;;
10.1109/VISUAL.2005.1532859;10.1109/VISUAL.2005.1532772;10.1109/VISUAL.2002.1183799;10.1109/VISUAL.2005.1532773;10.1109/VISUAL.2005.1532778;10.1109/VISUAL.1996.567777;10.1109/VISUAL.2005.1532779;10.1109/VISUAL.2004.30
Diffusion tensor data, fiber tracking, streamline visualization
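
The camera-orientation step of the vertex program can be sketched on the CPU: each strip vertex pair is offset perpendicular to both the local tangent and the view direction, and the degenerate case flags exactly where the point-sprite fallback applies. The function name and width parameter are illustrative.

import numpy as np

def strip_vertices(points, eye, width=0.05):
    points = np.asarray(points, dtype=float)
    left, right = [], []
    for i, p in enumerate(points):
        nxt = points[min(i + 1, len(points) - 1)]
        prv = points[max(i - 1, 0)]
        tangent = nxt - prv
        view = eye - p
        side = np.cross(tangent, view)      # perpendicular to both
        n = np.linalg.norm(side)
        if n < 1e-6:
            side = np.zeros(3)              # degenerate: point-sprite case
        else:
            side *= width / (2 * n)
        left.append(p + side)
        right.append(p - side)
    return np.array(left), np.array(right)
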
Vis
2006
Importance-Driven Focus of Attention
10.1109/TVCG.2006.152
9. 940
J
This paper introduces a concept for automatic focusing on features within a volumetric data set. The user selects a focus, i.e., an object of interest, from a set of pre-defined features. Our system automatically determines the most expressive view on this feature. A characteristic viewpoint is estimated by a novel information-theoretic framework which is based on the mutual information measure. Viewpoints change smoothly by switching the focus from one feature to another. This mechanism is controlled by changes in the importance distribution among features in the volume. The highest importance is assigned to the feature in focus. Apart from viewpoint selection, the focusing mechanism also steers visual emphasis by assigning a visually more prominent representation. To allow a clear view on features that are normally occluded by other parts of the volume, the focusing, for example, incorporates cut-away views
Viola, I.;Feixas, M.;Sbert, M.;Groller, E.
Inst. of Comput. Graphics & Algorithms, Vienna Univ. of Technol.|c|;;;
10.1109/VISUAL.2005.1532856;10.1109/VISUAL.2005.1532834;10.1109/INFVIS.2001.963286;10.1109/VISUAL.2005.1532833
Illustrative visualization, volume visualization, interacting with volumetric datasets, characteristic viewpoint estimation, focus+context techniques
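
One standard form of viewpoint mutual information, sketched for intuition; the visibility matrix is a toy stand-in, and the paper's framework additionally weights features by their importance distribution, which is not modeled here.

import numpy as np

def viewpoint_scores(vis):
    # vis[v, o]: how much of feature o is visible from viewpoint v
    # (e.g., projected area); normalizing rows gives p(o|v).
    p_o_given_v = vis / vis.sum(axis=1, keepdims=True)
    p_v = np.full(vis.shape[0], 1.0 / vis.shape[0])   # uniform viewpoints
    p_o = p_v @ p_o_given_v                           # marginal over views
    ratio = np.where(p_o_given_v > 0, p_o_given_v / p_o, 1.0)
    # One mutual-information term per viewpoint.
    return (p_o_given_v * np.log2(ratio)).sum(axis=1)

vis = np.array([[4.0, 1.0, 0.0],      # view 0 sees mostly feature 0
                [1.0, 1.0, 1.0],      # view 1 sees everything equally
                [0.0, 2.0, 3.0]])
scores = viewpoint_scores(vis)
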
Vis
2006
Interactive Point-based Isosurface Exploration and High-quality Rendering
10.1109/TVCG.2006.153
1. 1274
J
We present an efficient point-based isosurface exploration system with high quality rendering. Our system incorporates two point-based isosurface extraction and visualization methods: edge splatting and the edge kernel method. In a volume, two neighboring voxels define an edge. The intersection points between the active edges and the isosurface are used for exact isosurface representation. The point generation is incorporated in the GPU-based hardware-accelerated rendering, thus avoiding any overhead when changing the isovalue in the exploration. We call this method edge splatting. In order to generate high quality isosurface rendering regardless of the volume resolution and the view, we introduce an edge kernel method. The edge kernel upsamples the isosurface by subdividing every active cell of the volume data. Enough sample points are generated to preserve the exact shape of the isosurface defined by the trilinear interpolation of the volume data. By employing these two methods, we can achieve interactive isosurface exploration with high quality rendering
Zhang, H.;Kaufman, A.
Stony Brook Univ., NY|c|;
10.1109/VISUAL.1998.745713;10.1109/VISUAL.2004.29;10.1109/VISUAL.1996.568121;10.1109/VISUAL.2004.52;10.1109/VISUAL.1998.745300
Isosurface, point-based visualization, isosurface extraction, hardware acceleration, GPU acceleration
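
The active-edge stage is straightforward to sketch on the CPU: two neighboring voxels define an edge, an edge is active when the isovalue separates its endpoint values, and linear interpolation yields the exact intersection point. The paper generates these points on the GPU during rendering; the toy volume below is an assumption.

import numpy as np

def active_edge_points(vol, iso):
    pts = []
    for axis in range(3):                       # x-, y-, z-aligned edges
        a = vol
        b = np.roll(vol, -1, axis=axis)
        # Trim the wrapped-around last layer along this axis.
        sl = [slice(None)] * 3
        sl[axis] = slice(0, vol.shape[axis] - 1)
        a, b = a[tuple(sl)], b[tuple(sl)]
        active = (np.minimum(a, b) <= iso) & (iso < np.maximum(a, b))
        idx = np.argwhere(active).astype(float)
        t = (iso - a[active]) / (b[active] - a[active])
        idx[:, axis] += t                       # interpolate along the edge
        pts.append(idx)
    return np.vstack(pts)

z, y, x = np.mgrid[-8:8, -8:8, -8:8]
vol = (x**2 + y**2 + z**2).astype(float)
points = active_edge_points(vol, iso=30.0)
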
Vis
2006
Interactive Point-Based Rendering of Higher-Order Tetrahedral Data
10.1109/TVCG.2006.154
1. 1236
J
Computational simulations frequently generate solutions defined over very large tetrahedral volume meshes containing many millions of elements. Furthermore, such solutions may often be expressed using non-linear basis functions. Certain solution techniques, such as discontinuous Galerkin methods, may even produce non-conforming meshes. Such data is difficult to visualize interactively, as it is far too large to fit in memory and many common data reduction techniques, such as mesh simplification, cannot be applied to non-conforming meshes. We introduce a point-based visualization system for interactive rendering of large, potentially non-conforming, tetrahedral meshes. We propose methods for adaptively sampling points from non-linear solution data and for decimating points at run time to fit GPU memory limits. Because these are streaming processes, memory consumption is independent of the input size. We also present an order-independent point rendering method that can efficiently render volumes on the order of 20 million tetrahedra at interactive rates
Zhou, Y.;Garland, M.
Dept. of Comput. Sci., Illinois Univ., Urbana, IL|c|;
10.1109/VISUAL.2003.1250406;10.1109/VISUAL.2005.1532796;10.1109/VISUAL.2005.1532776;10.1109/VISUAL.2005.1532809;10.1109/VISUAL.2003.1250404;10.1109/VISUAL.2002.1183757;10.1109/VISUAL.2002.1183771;10.1109/VISUAL.2004.91;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.1999.809868;10.1109/VISUAL.2004.38;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2000.885683;10.1109/VISUAL.2002.1183778;10.1109/VISUAL.2005.1532808;10.1109/VISUAL.2003.1250389;10.1109/VISUAL.1995.480790;10.1109/VISUAL.2004.81;10.1109/VISUAL.2005.1532801
Interactive large higher-order tetrahedral volume visualization, point-based visualization
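
The run-time decimation constraint above (bounded memory over a point stream of unknown length) can be sketched with reservoir sampling; the paper's adaptive, solution-aware sampling is more elaborate, and the budget below is a made-up parameter.

import random

def decimate_stream(points, budget, seed=0):
    # Keep a uniform sample of at most `budget` points from a stream,
    # using O(budget) memory independent of the input size.
    rng = random.Random(seed)
    reservoir = []
    for n, p in enumerate(points):
        if n < budget:
            reservoir.append(p)
        else:
            j = rng.randrange(n + 1)
            if j < budget:
                reservoir[j] = p
    return reservoir

sampled = decimate_stream(((i, i * 0.5, 0.0) for i in range(10**6)),
                          budget=4096)
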
Vis
2006
Interactive Visualization of Intercluster Galaxy Structures in the Horologium-Reticulum Supercluster
10.1109/TVCG.2006.155
1. 1156
J
We present GyVe, an interactive visualization tool for understanding structure in sparse three-dimensional (3D) point data. The scientific goal driving the tool's development is to determine the presence of filaments and voids as defined by inferred 3D galaxy positions within the Horologium-Reticulum Supercluster (HRS). GyVe provides visualization techniques tailored to examine structures defined by the intercluster galaxies. Specific techniques include: interactive user control to move between a global overview and local viewpoints, labelled axes and curved drop lines to indicate positions in the astronomical RA-DEC-cz coordinate system, torsional rocking and stereo to enhance 3D perception, and geometrically distinct glyphs to show potential correlation between intercluster galaxies and known clusters. We discuss the rationale for each design decision and review the success of the techniques in accomplishing the scientific goals. In practice, GyVe has been useful for gaining intuition about structures that were difficult to perceive with 2D projection techniques alone. For example, during their initial session with GyVe, our collaborators quickly confirmed scientific conclusions regarding the large-scale structure of the HRS previously obtained over months of study with 2D projections and statistical techniques. Further use of GyVe revealed the spherical shape of voids and showed that a presumed filament was actually two disconnected structures
Miller, J.;Quammen, C.W.;Fleenor, M.C.
Dept of Comput. Sci., North Carolina Univ., Chapel Hill, NC|c|;;
10.1109/VISUAL.1992.235181;10.1109/VISUAL.2002.1183824;10.1109/VISUAL.2003.1250404
Sparse point visualization, astronomy, cosmology
Vis
2006
Isosurface Extraction and Spatial Filtering using Persistent Octree (POT)
10.1109/TVCG.2006.157
1. 1290
J
We propose a novel persistent octree (POT) indexing structure for accelerating isosurface extraction and spatial filtering from volumetric data. This data structure efficiently handles a wide range of visualization problems such as the generation of view-dependent isosurfaces, ray tracing, and isocontour slicing for high dimensional data. POT can be viewed as a hybrid data structure between the interval tree and the branch-on-need octree (BONO) in the sense that it achieves the asymptotic bound of the interval tree for identifying the active cells corresponding to an isosurface and is more efficient than BONO for handling spatial queries. We encode a compact octree for each isovalue. Each such octree contains only the corresponding active cells, in such a way that the combined structure has linear space. The inherent hierarchical structure associated with the active cells enables very fast filtering of the active cells based on spatial constraints. We demonstrate the effectiveness of our approach by performing view-dependent isosurfacing on a wide variety of volumetric data sets and 4D isocontour slicing on the time-varying Richtmyer-Meshkov instability dataset
Shi, Q.;JaJa, J.
Dept. of Electr. & Comput. Eng., Maryland Univ., College Park, MD|c|;
10.1109/VISUAL.1998.745713;10.1109/VISUAL.1991.175780;10.1109/VISUAL.1998.745299;10.1109/VISUAL.1996.568121;10.1109/VISUAL.1999.809910;10.1109/VISUAL.2002.1183810;10.1109/VISUAL.1998.745298;10.1109/VISUAL.1999.809879;10.1109/VISUAL.2004.52;10.1109/VISUAL.1998.745300;10.1109/VISUAL.2003.1250373
scientific visualization, isosurface extraction, indexing
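
A sketch of the BONO-style half of the hybrid described above: a min/max octree whose traversal skips subtrees that cannot contain the isovalue, with an optional box constraint standing in for the spatial filtering the hierarchy makes cheap. POT's per-isovalue compact octrees and its interval-tree bound are not modeled; cube-shaped input is assumed.

import numpy as np

class MinMaxTree:
    def __init__(self, vol, lo=(0, 0, 0), size=None):
        size = size or max(s - 1 for s in vol.shape)  # cells along an axis
        self.lo, self.size = lo, size
        block = vol[lo[0]:lo[0] + size + 1,
                    lo[1]:lo[1] + size + 1,
                    lo[2]:lo[2] + size + 1]
        self.vmin, self.vmax = float(block.min()), float(block.max())
        self.kids = []
        if size > 1:
            h = size // 2
            for dz in (0, h):
                for dy in (0, h):
                    for dx in (0, h):
                        o = (lo[0] + dz, lo[1] + dy, lo[2] + dx)
                        if all(o[i] < vol.shape[i] - 1 for i in range(3)):
                            self.kids.append(MinMaxTree(vol, o, h))

    def active_cells(self, iso, box=None, out=None):
        out = [] if out is None else out
        if not (self.vmin <= iso <= self.vmax):
            return out                   # value range excludes the isovalue
        if box and any(self.lo[i] + self.size <= box[0][i]
                       or self.lo[i] >= box[1][i] for i in range(3)):
            return out                   # spatial filter: outside the box
        if not self.kids:
            out.append(self.lo)          # leaf = one active cell
        for k in self.kids:
            k.active_cells(iso, box, out)
        return out

z, y, x = np.mgrid[-4:5, -4:5, -4:5]
tree = MinMaxTree((x**2 + y**2 + z**2).astype(float))
cells = tree.active_cells(iso=10.0, box=((0, 0, 0), (4, 4, 4)))
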