IEEE VIS Publication Dataset

Vis
2004
Lighting transfer functions using gradient aligned sampling
10.1109/VISUAL.2004.64
2. 296
C
An important task in volume rendering is the visualization of boundaries between materials. This is typically accomplished using transfer functions that increase opacity based on a voxel's value and gradient. Lighting also plays a crucial role in illustrating surfaces. In this paper we present a multi-dimensional transfer function method for enhancing surfaces, not through the variation of opacity, but through the modification of surface shading. The technique uses a lighting transfer function that takes into account the distribution of values along a material boundary and features a novel interface for visualizing and specifying these transfer functions. With our method, the user is given a means of visualizing boundaries without modifying opacity, allowing opacity to be used for illustrating the thickness of homogeneous materials through the absorption of light.
Lum, E.B.;Kwan-Liu Ma
California Univ., Davis, CA, USA|c|;
10.1109/VISUAL.2001.964519;10.1109/VISUAL.2000.885697;10.1109/VISUAL.1990.146391;10.1109/VISUAL.1999.809886
direct volume rendering, volume visualization, multi-dimensional transfer functions, shading, transfer functions
Vis
2004
Linear and cubic box splines for the body centered cubic lattice
10.1109/VISUAL.2004.65
1. 18
C
We derive piecewise linear and piecewise cubic box spline reconstruction filters for data sampled on the body centered cubic (BCC) lattice. We analytically derive a time domain representation of these reconstruction filters and using the Fourier slice-projection theorem we derive their frequency responses. The quality of these filters, when used in reconstructing BCC sampled volumetric data, is discussed and is demonstrated with a raycaster. Moreover, to demonstrate the superiority of the BCC sampling, the resulting reconstructions are compared with those produced from similar filters applied to data sampled on the Cartesian lattice.
Entezari, A.;Dyer, R.;Moller, T.
Graphics, Usability, & Visualization Lab., Simon Fraser Univ., Burnaby, BC, Canada|c|;;
10.1109/VISUAL.1993.398851;10.1109/VISUAL.2001.964498;10.1109/VISUAL.1997.663848;10.1109/VISUAL.1994.346331;10.1109/VISUAL.2001.964499
Body Centered Cubic Lattice, Reconstruction, Optimal Regular Sampling
Vis
2004
Linking Representation with Meaning
10.1109/VISUAL.2004.66
5. 5
M
The purpose of visualization is not just to depict data, but to gain or present insight into the domain represented in data. However in visualization systems, this link between features in the data and the meaning of those features is often missing or implicit. It is assumed that the user, through looking at the output, will close the loop between representation and insight. An alternative is to view visualization tools as interfaces between data and insight, and to enrich this interface with capabilities linked to users’ conceptual models of the data. Preliminary work has been carried out to develop such an interface as a modular component that can be installed in a pipelined architecture. This poster expands the motivation for this work, and describes the initial implementation carried out within the Visualization Toolkit (VTK).
Duke, D.
University of Leeds|c|
Vis
2004
Live Range Visibility Constraints for Adaptive Terrain Visualization
10.1109/VISUAL.2004.67
1. 12
M
Although there is a remarkable pace in the advance of computational resources and storage for real-time visualization, the immensity of the input data continues to outstrip any advances. The task for interactively visualizing such a massive terrain is to render a triangulated mesh using a view-dependent error tolerance, thus intelligently and perceptually managing the scene's geometric complexity. At any particular instance in time (i.e. displayed frame), this level-of-detail (LOD) terrain surface consists of a mesh composed of hundreds of thousands of dynamically selected triangles. The triangles are selected using the current time-step's view parameters and the view-dependent error tolerance. Massive terrain data easily exceeds main memory storage capacity such that out-of-core rendering must be performed. This further complicates the triangle selection and terrain rendering owing to tertiary storage's relatively poor performance.
Xiaohong Bao;Pajarola, R.;Shafae, M.
University of California at Irvine|c|;;
Vis
2004
Local and global comparison of continuous functions
10.1109/VISUAL.2004.68
2. 280
C
We introduce local and global comparison measures for a collection of k ≤ d real-valued smooth functions on a common d-dimensional Riemannian manifold. For k = d = 2 we relate the measures to the set of critical points of one function restricted to the level sets of the other. The definition of the measures extends to piecewise linear functions for which they are easy to compute. The computation of the measures forms the centerpiece of a software tool which we use to study scientific datasets.
Edelsbrunner, H.;Harer, J.;Natarajan, V.;Pascucci, V.
Dept. of Comput. Sci. & Math., Duke Univ., Durham, NC, USA|c|;;;
Visualization, Riemannian manifolds, smooth functions, time-varying data, comparison measure, differential forms
Vis
2004
LoD volume rendering of FEA data
10.1109/VISUAL.2004.69
4. 424
C
A new multiple resolution volume rendering method for finite element analysis (FEA) data is presented. Our method is composed of three stages: in the first stage, the Gauss points of the FEA cells are calculated. The function values, gradients, diffusions, and influence scopes of the Gauss points are computed. By representing the Gauss points as graph vertices and connecting adjacent Gauss points with edges, an adjacency graph is created. The adjacency graph is used to represent the FEA data in the subsequent computation. In the second stage, a hierarchical structure is established upon the adjacency graph. Any two neighboring vertices with similar function values are merged into a new vertex. The similarity is measured by using a user-defined threshold. Consequently, a new adjacency graph is constructed. Then the threshold is increased, and the graph reduction is triggered again to generate another adjacency graph. By repeating this process, multiple adjacency graphs are computed, and a level of detail (LoD) representation of the FEA data is established. In the third stage, the LoD structure is rendered by using a splatting method. At first, a level of adjacency graph is selected by users. The graph vertices are sorted based on their visibility orders and projected onto the image plane in back-to-front order. Billboards are used to render the vertices in the projection. The function values, gradients, and influence scopes of the vertices are utilized to decide the colors, opacities, orientations, and shapes of the billboards. The billboards are then modulated with texture maps to generate the footprints of the vertices. Finally, these footprints are composited to produce the volume rendering image.
Shyh-Kuang Ueng;Yan-Jen Su;Chi-Tang Chang
Dept. of Comput. Sci., Nat. Taiwan Ocean Univ., Keelung, Taiwan|c|;;
10.1109/VISUAL.1999.809908;10.1109/VISUAL.1995.485144;10.1109/VISUAL.2002.1183767;10.1109/VISUAL.2000.885680;10.1109/VISUAL.2000.885682;10.1109/VISUAL.2001.964490;10.1109/VISUAL.1993.398877;10.1109/VISUAL.1992.235228;10.1109/VISUAL.1998.745309;10.1109/VISUAL.1999.809909
Volume rendering, splatting method, level-of-detail, unstructured data, scientific visualization
Vis
2004
Methods for efficient, high quality volume resampling in the frequency domain
10.1109/VISUAL.2004.70
3. 10
C
Resampling is a frequent task in visualization and medical imaging. It occurs whenever images or volumes are magnified, rotated, translated, or warped. Resampling is also an integral procedure in the registration of multimodal datasets, such as CT, PET, and MRI, in the correction of motion artifacts in MRI, and in the alignment of temporal volume sequences in fMRI. It is well known that the quality of the resampling result depends heavily on the quality of the interpolation filter used. However, high-quality filters are rarely employed in practice due to their large spatial extents. We explore a new resampling technique that operates in the frequency-domain where high-quality filtering is feasible. Further, unlike previous methods of this kind, our technique is not limited to integer-ratio scaling factors, but can resample image and volume datasets at any rate. This would usually require the application of slow discrete Fourier transforms (DFT) to return the data to the spatial domain. We studied two methods that successfully avoid these delays: the chirp-z transform and the FFTW package. We also outline techniques to avoid the ringing artifacts that may occur with frequency-domain filtering. Thus, our method can achieve high-quality interpolation at speeds that are usually associated with spatial filters of far lower quality.
Li, A.;Mueller, K.;Ernst, T.
Comput. Sci. Dept., State Univ. of New York, Stony Brook, NY, USA|c|;;
10.1109/VISUAL.1994.346331
resampling, filters, Fourier Transform
Vis
2004
Modeling Decomposing Objects under Combustion
10.1109/VISUAL.2004.71
1. 14
M
We present a simple yet effective method for modeling of object decomposition under combustion. A separate simulation models the flame production and generates heat from a combustion process, which is used to trigger pyrolysis of the solid object. The decomposition is modeled using level set methods, and can handle complex topological changes. Even with a very simple flame model on a coarse grid, we can achieve a plausible decomposition of the burning object.
Melek, Z.;Keyser, J.
Texas A&M University|c|;
Vis
2004
Non-linear model fitting to parameterize diseased blood vessels
10.1109/VISUAL.2004.72
3. 400
C
Accurate estimation of vessel parameters is a prerequisite for automated visualization and analysis of healthy and diseased blood vessels. The objective of this research is to estimate the dimensions of lower extremity arteries, imaged by computed tomography (CT). These parameters are required to get a good quality visualization of healthy as well as diseased arteries using a visualization technique such as curved planar reformation (CPR). The vessel is modeled using an elliptical or cylindrical structure with specific dimensions, orientation and blood vessel mean density. The model separates two homogeneous regions: its inner side represents a region of density for vessels, and its outer side a region for background. Taking into account the point spread function (PSF) of a CT scanner, a function is modeled with a Gaussian kernel, in order to smooth the vessel boundary in the model. A new strategy for vessel parameter estimation is presented. It stems from the vessel model and the optimization of its parameters by a nonlinear optimization procedure, i.e., the Levenberg-Marquardt technique. The method provides center location, diameter and orientation of the vessel as well as blood and background mean density values. The method is tested on synthetic data and real patient data with encouraging results.
La Cruz, A.;Straka, M.;Kochl, A.;Sramek, M.;Groller, E.;Fleischmann, D.
Vienna Univ. of Technol., Austria|c|;;;;;
10.1109/VISUAL.2001.964555
Visualization, Segmentation, Blood Vessel Detection
Vis
2004
On the role of color in the perception of motion in animated visualizations
10.1109/VISUAL.2004.73
3. 312
C
Although luminance contrast plays a predominant role in motion perception, significant additional effects are introduced by chromatic contrasts. In this paper, relevant results from psychophysical and physiological research are described to clarify the role of color in motion detection. Interpreting these psychophysical experiments, we propose guidelines for the design of animated visualizations, and a calibration procedure that improves the reliability of visual motion representation. The guidelines are applied to examples from texture-based flow visualization, as well as graph and tree visualization.
Weiskopf, D.
Inst. of Visualization & Interactive Syst., Stuttgart Univ., Germany|c|
10.1109/VISUAL.2002.1183788;10.1109/VISUAL.2003.1250362;10.1109/VISUAL.2003.1250361;10.1109/VISUAL.1997.663874
Color, luminance, motion detection, perception, human visual system, flow visualization, information visualization
Vis
2004
On the Visualization of Time-Varying Structured Grids Using a 3D Warp Texture
10.1109/VISUAL.2004.74
1. 17
M
We present a novel scheme to interactively visualize time-varying scalar fields defined on a structured grid. The underlying approach is to maximize the use of current graphics hardware by using 3D texture mapping. This approach commonly suffers from an expensive voxelization of each time-step as well as from the large size of the voxel array approximating each step. Hence, in our scheme, instead of explicitly voxelizing each scalar field, we directly store each time-step as a three dimensional texture in its native form. We create the function that warps a voxel grid into the given structured grid. At rendering time, we reconstruct the function at each pixel using hardware-based trilinear interpolation. The resulting coordinates allow us to compute the scalar value at this pixel using a second texture lookup. For fixed grids, the function remains constant across time-steps and only the scalar field table needs to be re-loaded as a texture. Our new approach achieves excellent performance with relatively low texture memory requirements and low approximation error.
Yuan Chen;Cohen, J.D.;Subodh Kumar
Johns Hopkins University|c|;;
Vis
2004
Optimal global conformal surface parameterization
10.1109/VISUAL.2004.75
2. 274
C
All orientable metric surfaces are Riemann surfaces and admit global conformal parameterizations. Riemann surface structure is a fundamental structure and governs many natural physical phenomena, such as heat diffusion and electro-magnetic fields on the surface. A good parameterization is crucial for simulation and visualization. This paper provides an explicit method for finding optimal global conformal parameterizations of arbitrary surfaces. It relies on certain holomorphic differential forms and conformal mappings from differential geometry and Riemann surface theories. Algorithms are developed to modify topology, locate zero points, and determine cohomology types of differential forms. The implementation is based on a finite dimensional optimization method. The optimal parameterization is intrinsic to the geometry, preserves angular structure, and can play an important role in various applications including texture mapping, remeshing, morphing and simulation. The method is demonstrated by visualizing the Riemann surface structure of real surfaces represented as triangle meshes.
Miao Jin;Yalin Wang;Shing-Tung Yau;Gu, X.
Dept. of Comput. Sci., State Univ. of New York, Stony Brook, NY, USA|c|;;;
Computational geometry and object modeling, Curve / surface / solid and object representations, Surface parameterization
Vis
2004
Panel 1: Can We Determine the Top Unresolved Problems of Visualization?
10.1109/VISUAL.2004.76
5. 566
M
Many of us working in visualization have our own list of our top 5 or 10 unresolved problems in visualization. We have assembled a group of panelists to debate and perhaps reach consensus on the top problems in visualization that still need to be explored. We include panelists from both the information and scientific visualization domains. After our presentations, we encourage interaction with the audience to see if we can further formulate and perhaps finalize our list of top unresolved problems in visualization.
Rhyne, T.M.;Hibbard, B.;Johnson, C.R.;Chen, C.;Eick, S.G.
North Carolina State University|c|;;;;
Vis
2004
Panel 2: In the Eye of the Beholder: The Role of Perception in Scientific Visualization
10.1109/VISUAL.2004.77
5. 568
M
The evolution of computational science over the last decade has resulted in a dramatic increase in raw problem solving capabilities. This growth has given rise to advances in scientific and engineering simulations that have put a high demand on tools for high-performance large-scale data exploration and analysis. These simulations have the potential to generate large amounts of data. Humans, however, are relatively poor at gaining insight from raw numerical data, and as a result, have used visualization as a tool for understanding, interpreting and exploring data of all types and sizes. Allowing for efficient visual explorations of data, however, requires that the ratio of knowledge gained versus the cost of the visualization be maximized. This, in turn, mandates the integration of principles from human perception. Understanding perception as it relates to visualization requires that we understand not only the biology of the human visual system, but principles from vision theory, and perceptual psychology as well. This panel is the result of bringing together practitioners and researchers from a broad spectrum of interests relating to the ability to maximize the amount of information that is effectively perceived from a given visualization. Position statements will be given by researchers interested in perceptual psychology and the perception of natural images, integrating art and design principles, non-photorealistic rendering techniques, and the use of global illumination methods to provide beneficial perceptual cues.
Gaither, K.;Ebert, D.S.;Gaither, K.;Geisler, B.;Laidlaw, D.H.
University of Texas at Austin|c|;;;;
Vis
2004
Panel 3: The Future Visualization Platform
10.1109/VISUAL.2004.78
5. 571
M
Advances in graphics hardware and rendering methods are shaping the future of visualization. For example, programmable graphics processors are redefining the traditional visualization cycle. In some cases it is now possible to run the computational simulation and associated visualization side-by-side on the same chip. Moreover, global illumination and non-photorealistic effects promise to deliver imagery which enables greater insight into high resolution, multivariate, and higher-dimensional data. The panelists will offer distinct viewpoints on the direction of future graphics hardware and its potential impact on visualization, and on the nature of advanced visualization-related tools and techniques. Presentation of these viewpoints will be followed by audience participation in the form of a question and answer period moderated by the panel organizer.
Johnson, G.P.;Ebert, D.S.;Hansen, C.;Kirk, D.;Mark, B.;Pfister, H.
University of Texas at Austin|c|;;;;;
Vis
2004
Panel 4: What Should We Teach in a Scientific Visualization Class?
10.1109/VISUAL.2004.79
5. 575
M
Scientific Visualization (SciVis) has evolved past the point where one undergraduate course can cover all of the necessary topics. So the question becomes "how do we teach SciVis to this generation of students?" Some examples of current courses are: A graduate Computer Science (CS) course that prepares the next generation of SciVis researchers. An undergraduate CS course that prepares the future software architects/developers of packages such as vtk, vis5D and AVS. A class that teaches students how to do SciVis with existing software packages and how to deal with the lack of interoperability between those packages (via either a CS service course or a supercomputing center training course). An inter-disciplinary course designed to prepare computer scientists to work with the "real" scientists (via either a CS or Computational Science course). In this panel, we will discuss these types of courses and the advantages and disadvantages of each. We will also talk about some issues that you have probably encountered at your university: How do we keep the graphics/vis-oriented students from going to industry? How does SciVis fit in with evolving Computational Science programs? Is SciVis destined to be a service course at most universities? How do we deal with the diverse backgrounds of students that need SciVis?
Genetti, J.D.;Bailey, M.;Genetti, J.D.;Laidlaw, D.H.;Moorhead, R.J.;Whitaker, R.T.
University of Alaska Fairbanks|c|;;;;;
Vis
2004
Physically based methods for tensor field visualization
10.1109/VISUAL.2004.80
1. 130
C
The physical interpretation of mathematical features of tensor fields is highly application-specific. Existing visualization methods for tensor fields only cover a fraction of the broad application areas. We present a visualization method tailored specifically to the class of tensor fields exhibiting properties similar to stress and strain tensors, which are commonly encountered in geomechanics. Our technique is a global method that represents the physical meaning of these tensor fields with their central features: regions of compression or expansion. The method is based on two steps: first, we define a positive definite metric, with the same topological structure as the tensor field; second, we visualize the resulting metric. The eigenvector fields are represented using a texture-based approach resembling line integral convolution (LIC) methods. The eigenvalues of the metric are encoded in free parameters of the texture definition. Our method supports an intuitive distinction between positive and negative eigenvalues. We have applied our method to synthetic and some standard data sets, and "real" data from earth science and mechanical engineering applications.
Hotz, I.;Feng, L.;Hagen, H.;Hamann, B.;Joy, K.I.;Jeremic, B.
IDAV, California Univ., Davis, CA, USA|c|;;;;;
10.1109/VISUAL.1998.745316;10.1109/VISUAL.1999.809894;10.1109/VISUAL.1993.398849;10.1109/VISUAL.2002.1183798;10.1109/VISUAL.1994.346326;10.1109/VISUAL.2002.1183782;10.1109/VISUAL.2002.1183799;10.1109/VISUAL.2003.1250379
tensors field, stress tensor, strain tensor, LIC
Vis
2004
Pixel-exact rendering of spacetime finite element solutions
10.1109/VISUAL.2004.81
4. 432
C
Computational simulation of time-varying physical processes is of fundamental importance for many scientific and engineering applications. Most frequently, time-varying simulations are performed over multiple spatial grids at discrete points in time. We investigate a new approach to time-varying simulation: spacetime discontinuous Galerkin finite element methods. The result of this simulation method is a simplicial tessellation of spacetime with per-element polynomial solutions for physical quantities such as strain, stress, and velocity. To provide accurate visualizations of the resulting solutions, we have developed a method for per-pixel evaluation of solution data on the GPU. We demonstrate the importance of per-pixel rendering versus simple linear interpolation for producing high quality visualizations. We also show that our system can accommodate reasonably large datasets - spacetime meshes containing up to 20 million tetrahedra are not uncommon in this domain.
Zhou, Y.;Garland, M.;Haber, R.
Dept. of Comput. Sci., Illinois Univ., Urbana, IL, USA|c|;;
10.1109/VISUAL.2000.885704;10.1109/VISUAL.1990.146361;10.1109/VISUAL.2003.1250354;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2003.1250386
pixel-exact visualization, pixel shaders, spacetime finite elements, discontinuous Galerkin methods
Vis
2004
PQuad: visualization of predicted peptides and proteins
10.1109/VISUAL.2004.82
4. 480
C
New high-throughput proteomic techniques generate data faster than biologists can analyze it. Hidden within this massive and complex data are answers to basic questions about how cells function. The data afford an opportunity to take a global or systems approach studying whole proteomes comprising all the proteins in an organism. However, the tremendous size and complexity of the high-throughput data make it difficult to process and interpret. Existing tools for studying a few proteins at a time are not suitable for global analysis. Visualization provides powerful analysis capabilities for enormous, complex data at multiple resolutions. We developed a novel interactive visualization tool, PQuad, for the visual analysis of proteins and peptides identified from high-throughput data on biological samples. PQuad depicts the peptides in the context of their source protein and DNA, thereby integrating proteomic and genomic information. A wrapped line metaphor is applied across key resolutions of the data, from a compressed view of an entire chromosome to the actual nucleotide sequence. PQuad provides a difference visualization for comparing peptides from samples prepared under different experimental conditions. We describe the requirements for such a visual analysis tool, the design decisions, and the novel aspects of PQuad.
Havre, S.;Singhal, M.;Payne, D.A.;Webb-Robertson, B.-J.M.
Pacific Northwest Nat. Lab., Richland, WA, USA|c|;;;
10.1109/INFVIS.1995.528685
visualization, metaphor, context, proteomics, differential proteomics, difference visualization
Vis
2004
Projecting tetrahedra without rendering artifacts
10.1109/VISUAL.2004.85
2. 34
C
Hardware-accelerated direct volume rendering of unstructured volumetric meshes is often based on tetrahedral cell projection, in particular, the projected tetrahedra (PT) algorithm and its variants. Unfortunately, even implementations of the most advanced variants of the PT algorithm are very prone to rendering artifacts. In this work, we identify linear interpolation in screen coordinates as a cause for significant rendering artifacts and implement the correct perspective interpolation for the PT algorithm with programmable graphics hardware. We also demonstrate how to use features of modern graphics hardware to improve the accuracy of the coloring of individual tetrahedra and the compositing of the resulting colors, in particular, by employing a logarithmic scale for the preintegrated color lookup table, using textures with high color resolution, rendering to floating-point color buffers, and alpha dithering. Combined with a correct visibility ordering, these techniques result in the first implementation of the PT algorithm without objectionable rendering artifacts. Apart from the important improvement in rendering quality, our approach also provides a test bed for different implementations of the PT algorithm that allows us to study the particular rendering artifacts introduced by these variants.
Kraus, M.;Wei Qiao;Ebert, D.S.
Purdue Univ., West Lafayette, IN, USA|c|;;
10.1109/VISUAL.2000.885683;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.2001.964514;10.1109/VISUAL.2003.1250384
volume visualization, volume rendering, cell projection, projected tetrahedra, perspective interpolation, dithering, programmable graphics hardware