IEEE VIS Publication Dataset

Vis
1995
Astronomers and their shady algorithms
10.1109/VISUAL.1995.485155
3. 377, 477
C
The vast quantities of data which may be produced by modern radio telescopes have outstripped conventional visualisation techniques available to astronomers. While research in other areas of visualisation finds some application in astronomy, problems peculiar to the field require new techniques. This paper presents a brief overview of some of the problems of visualisation for astronomy and compares different shading algorithms. A more comprehensive overview may be found in Norris (1994) and Gooch (1995)
Gooch, R.E.
CSIRO Australia Telescope Nat. Facility, Sydney, NSW, Australia|c|
Vis
1995
Authenticity analysis of wavelet approximations in visualization
10.1109/VISUAL.1995.480811
1. 191, 452
C
Wavelet transforms include data decompositions and reconstructions. This paper is concerned with the authenticity issues of the data decomposition, particularly for data visualization. A total of six datasets are used to clarify the approximation characteristics of compactly supported orthogonal wavelets. We present an error tracking mechanism, which uses the available wavelet resources to measure the quality of the wavelet approximations
Pak Chung Wong;Bergeron, R.D.
Dept. of Comput. Sci., New Hampshire Univ., Durham, NH, USA|c|;
10.1109/VISUAL.1994.346333;10.1109/VISUAL.1994.346332
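The abstract above is about measuring the quality of wavelet approximations. A minimal sketch of that idea, assuming a 1-D signal, the Haar basis (the simplest compactly supported orthogonal wavelet), and NumPy; the function names and the RMS error measure are my own choices, not the authors' error-tracking mechanism.

```python
# Minimal sketch (not the authors' code): one level of a Haar wavelet
# decomposition of a 1-D signal, reconstruction from the approximation
# coefficients only, and a simple error measure for the lossy approximation.
import numpy as np

def haar_decompose(signal):
    """One analysis step: returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """One synthesis step: exactly inverts haar_decompose."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

data = np.sin(np.linspace(0.0, 8.0, 256)) + 0.05 * np.random.randn(256)
a, d = haar_decompose(data)
lossy = haar_reconstruct(a, np.zeros_like(d))   # drop the detail band
error = np.sqrt(np.mean((data - lossy) ** 2))   # RMS approximation error
print(f"RMS error of the level-1 approximation: {error:.4f}")
```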
Vis
1995
Automated generation of visual simulation databases using remote sensing and GIS
10.1109/VISUAL.1995.480799
8. 93, 442
C
This paper reports on the development of a strategy to generate databases used for real-time interactive landscape visualization. The database construction from real world data is intended to be as automated as possible. The primary sources of information are remote sensing imagery recorded by Landsat's Thematic Mapper (TM) and digital elevation models (DEM). Additional datasets (traffic networks and buildings) are added to extend the database. In a first step the TM images are geocoded and then segmented into areas of different land coverage. During the visual simulation highly detailed photo textures are applied onto the terrain based on the classification results to increase the apparent amount of detail. The data processing and integration is carried out using custom image processing and geographic information systems (GIS) software. Finally, a sample visual simulation application is implemented. Emphasis is put on practical implementation to test the feasibility of the approach as a whole
Suter, M.;Nuesch, D.
Dept. of Geogr., Zurich Univ., Switzerland|c|;
remote sensing, geographic information systems, geographic databases, satellite images, classification, visual simulation, level of detail
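The record above describes segmenting TM imagery into land-cover classes and then draping per-class photo textures over the terrain. As a rough, hypothetical stand-in for that step (the paper uses custom image-processing and GIS software), the sketch below classifies pixels with a nearest-centroid rule and looks up a texture ID per class; the band count, centroids, and texture table are invented.

```python
# Illustrative sketch only: classify multispectral pixels into land-cover
# classes with a nearest-centroid rule, then look up a texture ID per class.
import numpy as np

centroids = {            # mean spectral signature per class (3 bands here)
    "water":  np.array([20.0, 15.0, 10.0]),
    "forest": np.array([40.0, 60.0, 35.0]),
    "urban":  np.array([90.0, 85.0, 80.0]),
}
texture_for_class = {"water": 0, "forest": 1, "urban": 2}   # texture atlas IDs

def classify(pixels):
    """pixels: (H, W, bands) array -> (H, W) array of class labels."""
    names = list(centroids)
    dists = np.stack([np.linalg.norm(pixels - centroids[n], axis=-1)
                      for n in names], axis=-1)
    return np.array(names, dtype=object)[np.argmin(dists, axis=-1)]

image = np.random.rand(4, 4, 3) * 100.0          # stand-in for a TM subimage
labels = classify(image)
texture_ids = np.vectorize(texture_for_class.get)(labels)
print(texture_ids)
```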
Vis
1995
Automatic generation of triangular irregular networks using greedy cuts
10.1109/VISUAL.1995.480813
2. 208, 453
C
Proposes a new approach to the automatic generation of triangular irregular networks (TINs) from dense terrain models. We have developed and implemented an algorithm based on the greedy principle used to compute minimum-link paths in polygons. Our algorithm works by taking greedy cuts ("bites") out of a simple closed polygon that bounds the yet-to-be triangulated region. The algorithm starts with a large polygon, bounding the whole extent of the terrain to be triangulated, and works its way inward, performing at each step one of three basic operations: ear cutting, greedy biting, and edge splitting. We give experimental evidence that our method is competitive with current algorithms and has the potential to be faster and to generate many fewer triangles. Also, it is able to keep the structural terrain fidelity at almost no extra cost in running time and it requires very little memory beyond that for the input height array
Silva, C.T.;Mitchell, J.S.B.;Kaufman, A.
Dept. of Comput. Sci., State Univ. of New York, Stony Brook, NY, USA|c|;;
10.1109/VISUAL.1992.235191;10.1109/VISUAL.1990.146379
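Of the three basic operations named in the abstract above, ear cutting is the easiest to illustrate. The sketch below is plain 2-D ear clipping of a simple counter-clockwise polygon; the terrain-error tests, greedy biting, and edge splitting of the actual algorithm are not modeled, and all function names are mine.

```python
# Sketch of the "ear cutting" operation alone: 2-D ear clipping of a simple
# CCW polygon. Greedy biting and edge splitting are not modeled here.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

def ear_clip(polygon):
    """Triangulate a simple CCW polygon; returns a list of vertex triples."""
    verts = list(polygon)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            prev, cur, nxt = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(prev, cur, nxt) <= 0:          # reflex corner, not an ear
                continue
            others = [v for v in verts if v not in (prev, cur, nxt)]
            if any(point_in_triangle(p, prev, cur, nxt) for p in others):
                continue                            # another vertex inside
            triangles.append((prev, cur, nxt))      # cut the ear
            del verts[i]
            break
        else:
            break                                   # degenerate input; stop
    triangles.append(tuple(verts))
    return triangles

print(ear_clip([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]))
```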
Vis
1995
Case study: an integrated approach for steering, visualization, and analysis of atmospheric simulations
10.1109/VISUAL.1995.485157
3. 387, 479
C
In the research described, we have constructed a tightly coupled set of methods for monitoring, steering, and applying visual analysis to large scale simulations. The work shows how a collaborative, interdisciplinary process that teams application and computer scientists can result in a powerful integrated approach. The integrated design allows great flexibility in the development and use of analysis tools. The work also shows that visual analysis is a necessary component for full understanding of spatially complex, time dependent atmospheric processes
Jean, Y.;Kindler, T.;Ribarsky, W.;Weiming Gu;Eisenhauer, G.;Schwan, K.;Alyea, F.
Coll. of Comput., Georgia Inst. of Technol., Atlanta, GA, USA|c|;;;;;;
Vis
1995
Case study: using spatial access methods to support the visualization of environmental data
10.1109/VISUAL.1995.485171
4. 403, 483
C
As part of a large effort evaluating the effect of the Exxon Valdez oil spill, we are using the spatial selection features of an object relational database management system to support the visualization of the ecological data. The effort, called the Sound Ecosystem Assessment project (SEA), is collecting and analyzing oceanographic and biological data from Prince William Sound in Alaska. To support visualization of the SEA data we are building a data management system which includes a spatial index over a bounding polygon for all of the datasets which are collected. In addition to other selection criteria the prototype provides several methods for selecting data within an arbitrary region. This case study presents the requirements and the implementation for the application prototype which combines visualization and database technology. The spatial indexing features of the Illustra object relational database management system are linked with the visualization capabilities of AVS to create an interactive environment for analysis of SEA data
Falkenberg, C.;Kulkarni, R.
Dept. of Comput. Sci., Maryland Univ., College Park, MD, USA|c|;
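As a toy stand-in for the spatial selection described above (the real system links the Illustra object-relational spatial index with AVS), the sketch below filters a catalog of datasets by bounding-box intersection with a query rectangle; the dataset names and coordinates are invented.

```python
# Illustrative stand-in for spatial selection: keep only the datasets whose
# bounding box intersects a user-chosen query rectangle (a linear scan, not
# a real spatial index).
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    xmin: float
    ymin: float
    xmax: float
    ymax: float          # bounding box of the dataset's sampling locations

def intersects(d, qxmin, qymin, qxmax, qymax):
    return not (d.xmax < qxmin or d.xmin > qxmax or
                d.ymax < qymin or d.ymin > qymax)

catalog = [   # made-up entries, roughly in the Prince William Sound area
    Dataset("ctd_cast_17",    -147.8, 60.3, -147.5, 60.6),
    Dataset("plankton_tow_4", -148.4, 60.0, -148.1, 60.2),
]
query = (-148.0, 60.1, -147.4, 60.7)          # user-drawn selection region
selected = [d.name for d in catalog if intersects(d, *query)]
print(selected)   # -> ['ctd_cast_17']
```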
Vis
1995
Compression domain rendering of time-resolved volume data
10.1109/VISUAL.1995.480809
1. 175, 450
C
An important challenge in the visualization of three-dimensional volume data is the efficient processing and rendering of time-resolved sequences. Only the use of compression techniques, which allow the reconstruction of the original domain from the compressed one locally, makes it possible to evaluate these sequences in their entirety. In this paper, a new approach for the extraction and visualization of so-called time features from within time-resolved volume data is presented. Based on the asymptotic decay of multiscale representations of spatially localized time evolutions of the data, singular points can be discriminated. Also, the corresponding Lipschitz exponents, which describe the signals' local regularity, can be determined, and can be taken as a measure of the variation in time. The compression ratio and the comprehension of the underlying signal are improved if we first restore the extracted regions which contain the most important information
Westermann, R.
German Nat. Res. Center for Comput. Sci., St. Augustin, Germany|c|
10.1109/VISUAL.1990.146391;10.1109/VISUAL.1992.235230
volume rendering, wavelet transforms, singularities, Lipschitz exponents
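A rough numerical illustration of the idea that local regularity can be read off the decay of wavelet coefficient magnitudes across scales, assuming a 1-D signal and Haar details; the relation slope ≈ α + 1/2 and the line-fitting procedure are a simplification for illustration, not the authors' formulation.

```python
# Rough illustration: near a point where the signal is Lipschitz alpha, the
# magnitude of the wavelet detail coefficient covering that point behaves
# roughly like 2**(j*(alpha + 1/2)) across dyadic levels j, so the slope of
# log2|d_j| versus j gives a crude estimate related to alpha.
import numpy as np

def haar_details_at(signal, position, levels=6):
    """Return |detail coefficient| covering `position` at each dyadic level."""
    s = np.asarray(signal, dtype=float)
    mags, pos = [], position
    for _ in range(levels):
        approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        pos //= 2
        mags.append(abs(detail[pos]) + 1e-12)     # avoid log(0)
        s = approx
    return np.array(mags)

n = 1024
x = np.linspace(0.0, 1.0, n)
signal = np.where(x < 1.0 / 3.0, 0.0, 1.0)        # step: Lipschitz exponent 0
mags = haar_details_at(signal, position=n // 3)
levels = np.arange(1, mags.size + 1)
slope = np.polyfit(levels, np.log2(mags), 1)[0]   # ~ alpha + 1/2
print(f"estimated alpha ~ {slope - 0.5:.2f}")     # expect a value near 0
```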
Vis
1995
Defining, computing, and visualizing molecular interfaces
10.1109/VISUAL.1995.480793
3. 43, 436
C
A parallel, analytic approach for defining and computing the inter and intra molecular interfaces in three dimensions is described. The molecular interface surfaces are derived from approximations to the power diagrams over the participating molecular units. For a given molecular interface our approach can generate a family of interface surfaces parametrized by α and β, where α is the radius of the solvent molecule (also known as the probe radius) and β is the interface radius that defines the size of the molecular interface. Molecular interface surfaces provide biochemists with a powerful tool to study surface complementarity and to efficiently characterize the interactions during a protein substrate docking. The complexity of our algorithm for molecular environments is O(nk log2 k), where n is the number of atoms in the participating molecular units and k is the average number of neighboring atoms (a constant, given α and β).
Varshney, A.;Brooks, F.P., Jr.;Richardson, D.C.;Wright, W.V.;Manocha, D.
Dept. of Comput. Sci., State Univ. of New York, Stony Brook, NY, USA|c|;;;;
10.1109/VISUAL.1993.398878
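A tiny sketch of the power-diagram primitives that such interface definitions build on, assuming the standard power distance of a point to a weighted sphere; the radical-plane helper and the example atom radii are mine, and the paper's α/β parametrization is not modeled.

```python
# Sketch of power-diagram primitives (my own illustration, not the authors'
# algorithm): power distance of a point to a weighted atom, and the radical
# plane separating the power cells of two atoms.
import numpy as np

def power_distance(p, center, radius):
    """pow(p) = |p - c|^2 - r^2  (negative inside the ball)."""
    d = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    return float(d @ d) - radius ** 2

def radical_plane(c1, r1, c2, r2):
    """Return (n, b) with n.p = b on the plane of equal power distance."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    n = 2.0 * (c2 - c1)
    b = float(c2 @ c2 - c1 @ c1) - r2 ** 2 + r1 ** 2
    return n, b

# Two "atoms" with made-up radii, 3 Angstroms apart along x.
n, b = radical_plane((0.0, 0.0, 0.0), 1.7, (3.0, 0.0, 0.0), 1.5)
p = (1.6, 0.0, 0.0)
side = n @ np.asarray(p) - b          # = pow(p, atom1) - pow(p, atom2)
print(power_distance(p, (0, 0, 0), 1.7), side)   # side < 0: atom 1 owns p
```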
Vis
1995
Direct rendering of Laplacian pyramid compressed volume data
10.1109/VISUAL.1995.480812
1. 199
C
Volume rendering generates 2D images by ray tracing 3D volume data. This technique imposes considerable demands on storage space as the data set grows in size. In this paper, we describe a method to render compressed volume data directly to reduce the memory requirements of the rendering process. The volume data was compressed by a technique called the Laplacian pyramid. A compression ratio of 10:1 was achieved by uniform quantization over the Laplacian pyramid. The quality of the images obtained by this technique was virtually indistinguishable from that of the images generated from the uncompressed volume data. A significant improvement in computational performance was achieved by using a cache algorithm to temporarily retain the reconstructed voxels to be used by the adjacent rays
Ghavamnia, M.H.;Yang, X.D.
Dept. of Comput. Sci., Regina Univ., Sask., Canada|c|;
10.1109/VISUAL.1993.398845
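A minimal 1-D sketch of the compression idea, assuming simple pairwise averaging for the pyramid and a made-up quantization step; the paper works on full 3-D volumes, renders directly from the compressed representation, and adds a voxel cache, none of which is reproduced here.

```python
# Minimal 1-D sketch: build a two-level Laplacian pyramid, uniformly quantize
# the detail level, and reconstruct. Sizes and the step are illustrative.
import numpy as np

def reduce_(x):          # coarser level: average pairs of samples
    return 0.5 * (x[0::2] + x[1::2])

def expand(x):           # back to fine resolution by sample duplication
    return np.repeat(x, 2)

def quantize(x, step):   # uniform scalar quantization of the detail band
    return np.round(x / step) * step

voxel_line = np.sin(np.linspace(0.0, 6.0, 64)) * 100.0    # one row of voxels
coarse = reduce_(voxel_line)
laplacian = voxel_line - expand(coarse)                    # detail residual
laplacian_q = quantize(laplacian, step=4.0)                # lossy part
reconstructed = expand(coarse) + laplacian_q
rms = np.sqrt(np.mean((voxel_line - reconstructed) ** 2))
print(f"RMS reconstruction error after quantization: {rms:.3f}")
```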
Vis
1995
Enhanced spot noise for vector field visualization
10.1109/VISUAL.1995.480817
2. 239, 457
C
Spot noise is a technique for texture synthesis, which is very useful for vector field visualization. This paper describes improvements and extensions of the basic principle of spot noise. First, better visualization of highly curved vector fields with spot noise is achieved, by adapting the shape of the spots to the local velocity field. Second, filtering of spots is proposed to eliminate undesired low frequency components from the spot noise texture. Third, methods are described to utilize graphics hardware to generate the texture, and to produce variable viewpoint animations of spot noise on surfaces. Fourth, the synthesis of spot noise on grids with highly irregular cell sizes is described
de Leeuw, W.;van Wijk, J.J.
Fac. of Tech. Math. & Inf., Delft Univ. of Technol., Netherlands|c|;
10.1109/VISUAL.1994.346312;10.1109/VISUAL.1994.346313;10.1109/VISUAL.1993.398877
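A basic spot-noise sketch, assuming a toy 2-D vector field and Gaussian spots stretched along the local velocity direction; spot count, sizes, and the field itself are arbitrary, and the spot filtering, hardware texture generation, and irregular-grid synthesis from the paper are not modeled.

```python
# Basic spot noise: scatter many randomly weighted elliptical spots, each
# elongated along the local velocity direction, onto a 2-D texture.
import numpy as np

H = W = 128
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:H, 0:W].astype(float)

def velocity(x, y):                       # toy circular flow field
    cx, cy = W / 2.0, H / 2.0
    return -(y - cy), (x - cx)

texture = np.zeros((H, W))
for _ in range(1500):
    x0, y0 = rng.uniform(0, W), rng.uniform(0, H)
    a = rng.uniform(-1.0, 1.0)            # random spot intensity
    u, v = velocity(x0, y0)
    norm = np.hypot(u, v) + 1e-9
    u, v = u / norm, v / norm
    t = (xx - x0) * u + (yy - y0) * v     # coordinate along the flow
    s = -(xx - x0) * v + (yy - y0) * u    # coordinate across the flow
    spot = np.exp(-(t / 8.0) ** 2 - (s / 2.0) ** 2)   # elongated Gaussian spot
    texture += a * spot
print(texture.min(), texture.max())
```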
Vis
1995
Enhancing transparent skin surfaces with ridge and valley lines
10.1109/VISUAL.1995.480795
5. 59, 438
C
There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in such a way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines. We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display
Interrante, V.;Fuchs, H.;Pizer, S.
Dept. of Comput. Sci., North Carolina Univ., Chapel Hill, NC, USA|c|;;
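A much-simplified 2-D analogue of ridge and valley detection, assuming a height field and thresholded eigenvalues of its Hessian; the paper works with principal curvatures on a 3-D skin surface, so this sketch only shows the flavor of the computation, with arbitrary thresholds.

```python
# Simplified 2-D analogue (not the paper's surface method): flag ridge/valley
# candidates on a height field from the eigenvalues of its Hessian. A strongly
# negative eigenvalue -> ridge-like, strongly positive -> valley-like.
import numpy as np

def ridge_valley(height, spacing=1.0, threshold=0.5):
    gy, gx = np.gradient(height, spacing)
    gyy, gyx = np.gradient(gy, spacing)
    gxy, gxx = np.gradient(gx, spacing)
    # eigenvalues of the symmetric 2x2 Hessian [[gxx, gxy], [gxy, gyy]]
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    lam_min, lam_max = tr / 2.0 - disc, tr / 2.0 + disc
    return lam_min < -threshold, lam_max > threshold   # ridge-like, valley-like

y, x = np.mgrid[-2:2:128j, -2:2:128j]
height = np.exp(-4.0 * x ** 2)                 # a single ridge along the y axis
spacing = float(x[0, 1] - x[0, 0])
ridges, valleys = ridge_valley(height, spacing, threshold=0.5)
print("ridge pixels:", int(ridges.sum()), "valley pixels:", int(valleys.sum()))
```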
Vis
1995
Fast Algorithms for Visualizing Fluid Motion in Steady Flow on Unstructured Grids
10.1109/VISUAL.1995.485144
3.
C
The plotting of streamlines is an effective way of visualizing fluid motion in steady flows. Additional information about the flowfield, such as local rotation and expansion, can be shown by drawing the streamlines in the form of ribbons or tubes. In this paper, we present efficient algorithms for the construction of streamlines, streamribbons and streamtubes on unstructured grids. A specialized version of the Runge-Kutta method has been developed to speed up the integration of particle paths. We have also derived closed-form solutions for calculating the angular rotation rate and radius to construct streamribbons and streamtubes, respectively. According to our analysis and test results, these formulations are two to four times better in performance than previous numerical methods. As a large number of traces are calculated, the improved performance could be significant.
Ueng, S.K.;Sikorski, K.;Kwan-Liu Ma
;;
10.1109/VISUAL.1992.235211;10.1109/VISUAL.1994.346329;10.1109/VISUAL.1991.175789;10.1109/VISUAL.1993.398876
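A sketch of streamline construction by Runge-Kutta integration, assuming a generic fixed-step RK4 and an analytic velocity function; the specialized integrator and the unstructured-grid point location described in the paper are not reproduced.

```python
# Streamline by fixed-step RK4 integration of a steady velocity field.
import numpy as np

def rk4_step(velocity, p, h):
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def streamline(velocity, seed, h=0.05, steps=200):
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        pts.append(rk4_step(velocity, pts[-1], h))
    return np.array(pts)

def swirl(p):                    # toy steady flow: rotation about the z axis
    x, y, z = p
    return np.array([-y, x, 0.1])

line = streamline(swirl, seed=(1.0, 0.0, 0.0))
print(line.shape, line[-1])      # points along one streamline
```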
Vis
1995
Fast multiresolution surface meshing
10.1109/VISUAL.1995.480805
1. 142, 446
C
Presents a new method for adaptive surface meshing and triangulation which controls the local level-of-detail of the surface approximation by local spectral estimates. These estimates are determined by a wavelet representation of the surface data. The basic idea is to decompose the initial data set by means of an orthogonal or semi-orthogonal tensor product wavelet transform (WT) and to analyze the resulting coefficients. In surface regions where the partial energy of the resulting coefficients is low, the polygonal approximation of the surface can be performed with larger triangles without losing too much fine-grain detail. However, since the localization of the WT is bound by the Heisenberg principle, the meshing method has to be controlled by the detail signals rather than directly by the coefficients. The dyadic scaling of the WT stimulated us to build a hierarchical meshing algorithm which transforms the initially regular data grid into a quadtree representation by rejection of unimportant mesh vertices. The optimum triangulation of the resulting quadtree cells is carried out by selection from a look-up table. The tree grows recursively, as controlled by the detail signals, which are computed from a modified inverse WT. In order to control the local level-of-detail, we introduce a new class of wavelet space filters acting as "magnifying glasses" on the data
Gross, M.;Gatti, R.;Staadt, O.
Dept. of Comput. Sci., Eidgenossische Tech. Hochschule, Zurich, Switzerland|c|;;
10.1109/VISUAL.1994.346333
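A simplified stand-in for detail-driven quadtree meshing, assuming the per-cell "detail" is measured as the deviation from bilinear interpolation of the cell corners rather than the wavelet detail signals of the paper; grid size and tolerance are made up.

```python
# Simplified stand-in (not the wavelet scheme of the paper): recursively split
# a quadtree cell of a height grid whenever the heights inside deviate too
# much from bilinear interpolation of its four corners.
import numpy as np

def refine(grid, x0, y0, size, tol, cells):
    x1, y1 = x0 + size, y0 + size
    block = grid[y0:y1 + 1, x0:x1 + 1]
    u = np.linspace(0.0, 1.0, size + 1)
    bilinear = ((1 - u)[:, None] * ((1 - u) * grid[y0, x0] + u * grid[y0, x1])
                + u[:, None] * ((1 - u) * grid[y1, x0] + u * grid[y1, x1]))
    if size == 1 or np.max(np.abs(block - bilinear)) <= tol:
        cells.append((x0, y0, size))            # keep this cell as one quad
    else:
        half = size // 2
        for dx in (0, half):
            for dy in (0, half):
                refine(grid, x0 + dx, y0 + dy, half, tol, cells)

n = 64                                           # grid is (n+1) x (n+1)
y, x = np.mgrid[0:n + 1, 0:n + 1] / float(n)
terrain = np.exp(-40.0 * ((x - 0.3) ** 2 + (y - 0.7) ** 2))   # one bump
cells = []
refine(terrain, 0, 0, n, tol=0.01, cells=cells)
print(len(cells), "cells instead of", n * n, "unit quads")
```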
Vis
1995
Fast normal estimation using surface characteristics
10.1109/VISUAL.1995.480808
1. 166, 449
C
To visualize the volume data acquired from computation or sampling, it is necessary to estimate normals at the points corresponding to object surfaces. Volume data does not hold geometric information about the points comprising those surfaces, so normals must be calculated using local information at each point. Existing normal estimation methods can produce incorrect normals at discontinuous, aliased or noisy points. Yagel et al. (1992) solved some of these problems using their context-sensitive method. However, this method requires too much processing time and it loses some information on detailed parts of the object surfaces. This paper proposes the surface-characteristic-sensitive normal estimation method, which applies different operators according to the characteristics of each surface for the normal calculation. This method has the same advantages as the context-sensitive method, along with others such as less processing time and reduced information loss on detailed parts
Byeong Seok Shin;Yeong Gil Shin
Dept. of Comput. Eng., Seoul Nat. Univ., South Korea|c|;
10.1109/VISUAL.1990.146378;10.1109/VISUAL.1993.398848
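For context, the common baseline that such methods build on is the central-difference gradient normal; the sketch below shows only that baseline (not the surface-characteristic-sensitive operators proposed in the paper), on an invented spherical test volume.

```python
# Baseline normal estimation: central-difference gradient of the scalar field
# at a voxel. The gradient points toward increasing density; negate it for an
# outward surface normal when "inside" has the higher values.
import numpy as np

def gradient_normal(volume, x, y, z):
    gx = volume[z, y, x + 1] - volume[z, y, x - 1]
    gy = volume[z, y + 1, x] - volume[z, y - 1, x]
    gz = volume[z + 1, y, x] - volume[z - 1, y, x]
    g = np.array([gx, gy, gz], dtype=float)
    n = np.linalg.norm(g)
    return g / n if n > 0 else g          # unit vector (or zero vector)

# Synthetic volume: a filled sphere of radius 10 in a 32^3 grid.
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
volume = ((xx - 16) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2 <= 100).astype(float)
print(gradient_normal(volume, x=26, y=16, z=16))   # expect roughly (-1, 0, 0)
```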
Vis
1995
Flow visualization in a hypersonic fin/ramp flow
10.1109/VISUAL.1995.485156
3. 382, 478
C
A recent study of a flow detail of an engine intake of future ground-to-orbit transport systems provided extremely complex data from numerical flow simulation and experimental flow visualization. The data posed a challenging problem to flow visualization, computational flow imaging (CFI), and the comparison of experimental imaging techniques versus computational imaging techniques. Some new visualization techniques have been implemented to provide compact representations of the complex features in the data. It turned out to be most useful to combine various specialized techniques for an icon-like representation of phenomena in a single image in order to study the interaction of flow features. Some lessons were learned by simulating experimental visualization techniques on the numerical data
Pagendarm, H.-G.;Gerhold, T.
German Aerosp. Res. Establ., Gottingen, Germany|c|;
10.1109/VISUAL.1994.346329;10.1109/VISUAL.1993.398875
Vis
1995
High Dimensional Brushing for Interactive Exploration of Multivariate Data
10.1109/VISUAL.1995.485139
2.
C
Brushing is an operation found in many data visualization systems. It is a mechanism for interactively selecting subsets of the data so that they may be highlighted, deleted, or masked. Traditionally, brushes have been defined in screen space via methods such as painting and rubberband rectangles. In this paper we describe the design of N-dimensional brushes which are defined in data space rather than screen space, and show how they have been integrated into XmdvTool, a visualization package for displaying multivariate data. Depending on the data display technique in use, brushes may be specified and manipulated via direct or indirect methods, and the specification may be demand-driven or data-driven. Various brush operations such as highlighting, linking, masking, moving average, and quantitative display have been developed to apply to the selected data. In addition, we have explored several new brush concepts, such as non-discrete brush boundaries, simultaneous display of multiple brushes, and creating composite brushes via logical operators. Preliminary experimental evaluation with test subjects supports the usefulness of N-dimensional brushes in data exploration tasks.
Martin, A.R.;Ward, M.O.
;
10.1109/VISUAL.1990.146386;10.1109/VISUAL.1990.146402;10.1109/VISUAL.1994.346302
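A minimal sketch of a brush defined in data space rather than screen space, assuming the brush is an axis-aligned range per dimension; XmdvTool's ramped boundaries, composite brushes, and display linkage are not modeled, and the example records are invented.

```python
# N-dimensional brush in data space: a (low, high) range per dimension; a
# record is selected when every attribute falls inside its range.
import numpy as np

def brush_mask(data, brush):
    """data: (rows, dims) array; brush: list of (low, high) per dimension."""
    lows = np.array([b[0] for b in brush])
    highs = np.array([b[1] for b in brush])
    return np.all((data >= lows) & (data <= highs), axis=1)

records = np.array([         # toy records: [mpg, horsepower, weight]
    [31.0,  65.0, 1773.0],
    [14.0, 220.0, 4354.0],
    [24.0,  95.0, 2372.0],
])
brush = [(20.0, 35.0), (60.0, 120.0), (1500.0, 3000.0)]
mask = brush_mask(records, brush)
print(mask)                  # -> [ True False  True]
highlighted = records[mask]  # e.g. rows to draw in the highlight color
```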
Vis
1995
High-speed volume rendering using redundant block compression
10.1109/VISUAL.1995.480810
1. 183, 451
C
Presents a novel volume rendering method which offers high rendering speed on standard workstations. It is based on a lossy data compression scheme which drastically reduces the memory bandwidth and computing requirements of perspective raycasting. Starting from classified and shaded data sets, we use block truncation coding or color cell compression to compress a block of 12 voxels into 32 bits. All blocks of the data set are processed redundantly, yielding a data structure which avoids multiple memory accesses per raypoint. As a side effect, the tri-linear interpolation of data coded in such a way is very much simplified. These techniques allow us to perform walkthroughs at interactive frame rates. Furthermore, the algorithm provides depth-cueing and the semi-transparent display of different materials. The algorithm achieves a sustained frame generation rate of about 2 Hz for large data sets (~200³) at an acceptable image quality on an SGI Indy workstation. A number of examples are shown.
Knittel, G.
Wilhelm-Schickard-Inst. fur Inf., Tubingen Univ., Germany|c|
10.1109/VISUAL.1993.398845;10.1109/VISUAL.1993.398852;10.1109/VISUAL.1992.235231
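A sketch of block truncation coding on a small block of values, assuming the classical mean- and variance-preserving two-level quantizer; the block size, the data, and the paper's specific 32-bit redundant-block layout are not reproduced.

```python
# Block truncation coding sketch: represent a block by two reconstruction
# values plus one bit per voxel, chosen so that mean and variance are kept.
import numpy as np

def btc_encode(block):
    """Encode a block as (low value, high value, bitmask)."""
    m, s = block.mean(), block.std()
    bits = block >= m
    q, n = int(bits.sum()), block.size
    if q == 0 or q == n:                       # flat block
        return m, m, bits
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return low, high, bits

def btc_decode(low, high, bits):
    return np.where(bits, high, low)

block = np.array([12.0, 14.0, 15.0, 80.0, 82.0, 85.0, 13.0, 81.0])
low, high, bits = btc_encode(block)
approx = btc_decode(low, high, bits)
print(np.round(approx, 1), "mean/var preserved:",
      np.isclose(block.mean(), approx.mean()),
      np.isclose(block.var(), approx.var()))
```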
Vis
1995
Iconic techniques for feature visualization
10.1109/VISUAL.1995.485141
2. 295, 464
C
Presents a conceptual framework and a process model for feature extraction and iconic visualization. Feature extraction is viewed as a process of data abstraction, which can proceed in multiple stages, and corresponding data abstraction levels. The features are represented by attribute sets, which play a key role in the visualization process. Icons are symbolic parametric objects, designed as visual representations of features. The attributes are mapped to the parameters (or degrees of freedom) of an icon. We describe some generic techniques to generate attribute sets, such as volume integrals and medial axis transforms. A simple but powerful modeling language was developed to create icons, and to link the attributes to the icon parameters. We present illustrative examples of iconic visualization created with the techniques described, showing the effectiveness of this approach
Post, F.J.;Van Walsum, T.;Post, F.H.;Silver, D.
Fac. of Tech. Math. & Inf., Delft Univ. of Technol., Netherlands|c|;;;
10.1109/VISUAL.1993.398849;10.1109/VISUAL.1991.175809;10.1109/VISUAL.1992.235174
scientific visualization, feature extraction, iconic visualization, attribute calculation
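A toy illustration of mapping a feature's attribute set onto the degrees of freedom of an icon, assuming a simple ellipse icon and invented attribute names; the paper's modeling language and icon library are not reproduced.

```python
# Toy attribute-to-icon mapping: a feature's attribute set drives the
# parameters (degrees of freedom) of a simple ellipse icon.
from dataclasses import dataclass

@dataclass
class EllipseIcon:
    x: float               # position
    y: float
    length: float          # major axis
    width: float           # minor axis
    angle: float           # orientation in degrees
    hue: float             # color encodes a scalar attribute, in [0, 1]

def icon_from_feature(attrs):
    """attrs: attribute set produced by feature extraction (invented keys)."""
    return EllipseIcon(
        x=attrs["centroid"][0], y=attrs["centroid"][1],
        length=attrs["major_axis_length"],
        width=attrs["minor_axis_length"],
        angle=attrs["orientation_deg"],
        hue=min(attrs["mean_vorticity"] / 10.0, 1.0),   # normalize
    )

feature = {"centroid": (12.0, 40.0), "major_axis_length": 9.0,
           "minor_axis_length": 3.5, "orientation_deg": 30.0,
           "mean_vorticity": 4.2}
print(icon_from_feature(feature))
```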
Vis
1995
IFS fractal interpolation for 2D and 3D visualization
10.1109/VISUAL.1995.480798
7. 84, 441
C
Reconstruction is used frequently in visualization of one, two, and three dimensional data. Data uncertainty is typically ignored, and a deficiency of many interpolation schemes is smoothing, which may indicate features or characteristics of the data that are not there. The author investigates the use of iterated function systems (IFS's) for interpolation. He shows new derivations for fractal interpolation in two and three dimensional scalar data, and new point and polytope rendering algorithms with tremendous speed advantages over ray tracing. The interpolations may be used to give an indication of the uncertainty of the data, statistically represent the data at a variety of scales, allow tunability from the data, and may allow more accurate data analysis
Wittenbrink, C.M.
Baskin Center for Comput. Eng. & Inf. Sci., California Univ., Santa Cruz, CA, USA|c|
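A sketch of 1-D fractal interpolation with an iterated function system, assuming the standard affine-map construction pinned to the interpolation points and rendered with the chaos game; data points and vertical scaling factors are made up, and the paper's 2-D/3-D point and polytope renderers are not modeled.

```python
# 1-D fractal interpolation: one affine map per interval, constructed so that
# each map sends the whole graph onto the piece over that interval while
# hitting the interpolation points; |d| < 1 controls vertical roughness.
import numpy as np

def ifs_maps(xs, ys, d):
    """Return a list of affine maps (a, e, c, f, d_i), one per interval."""
    x0, xN, y0, yN, span = xs[0], xs[-1], ys[0], ys[-1], xs[-1] - xs[0]
    maps = []
    for i in range(1, len(xs)):
        a = (xs[i] - xs[i - 1]) / span
        e = (xN * xs[i - 1] - x0 * xs[i]) / span
        c = (ys[i] - ys[i - 1] - d[i - 1] * (yN - y0)) / span
        f = (xN * ys[i - 1] - x0 * ys[i] - d[i - 1] * (xN * y0 - x0 * yN)) / span
        maps.append((a, e, c, f, d[i - 1]))
    return maps

def chaos_game(maps, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    pts, x, y = [], 0.0, 0.0
    for _ in range(iters):
        a, e, c, f, di = maps[rng.integers(len(maps))]
        x, y = a * x + e, c * x + di * y + f
        pts.append((x, y))
    return np.array(pts[100:])            # drop transient points

xs = np.array([0.0, 1.0, 2.0, 3.0])      # interpolation points (made up)
ys = np.array([0.0, 1.5, 0.5, 2.0])
attractor = chaos_game(ifs_maps(xs, ys, d=[0.3, -0.4, 0.3]))
print(attractor.shape, attractor[:, 1].min(), attractor[:, 1].max())
```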
Vis
1995
Interactive 3D visualization of actual anatomy and simulated chemical time-course data for fish
10.1109/VISUAL.1995.485169
3. 396, 481
C
Outputs from a physiologically based toxicokinetic (PB-TK) model for fish were visualized by mapping time series data for specific tissues onto a three dimensional representation of a rainbow trout. The trout representation was generated in stepwise fashion: cross sectional images were obtained from an anesthetized fish using a magnetic resonance imaging (MRI) system; images were processed to classify tissue types; images were stacked and processed to create a three dimensional representation of the fish, encapsulating five volumes corresponding to the liver, kidney, muscle, gastrointestinal tract, and fat. Kinetic data for the disposition of pentachloroethane in trout were generated using a PB-TK model. Model outputs were mapped onto corresponding tissue volumes, representing chemical concentration as color intensity. The visualization was then animated, to show the accumulation of pentachloroethane in each tissue during a continuous branchial (gill) exposure
Rheingans, P.;Marietta, M.;Nichols, J.
Sci. Visualization Lab., US Environ. Protection Agency, Research Triangle Park, NC, USA|c|;;
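A tiny sketch of the final mapping step described above, assuming concentration is normalized by its peak and scales a per-tissue base color; the tissue name, values, and color are invented, and the PB-TK model itself is not shown.

```python
# Map a tissue's chemical concentration time series to a color intensity per
# animation frame by scaling a base tissue color with the normalized value.
import numpy as np

def concentration_to_rgb(conc, conc_max, base_rgb=(1.0, 0.2, 0.1)):
    """Scale a base tissue color by the normalized concentration."""
    t = np.clip(conc / conc_max, 0.0, 1.0)
    return tuple(t * channel for channel in base_rgb)

liver_timecourse = np.array([0.0, 0.8, 2.5, 4.9, 6.6, 7.4])   # mg/L over time
peak = liver_timecourse.max()
frames = [concentration_to_rgb(c, peak) for c in liver_timecourse]
print(frames[0], frames[-1])   # dark at t=0, full tissue color at the peak
```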