IEEE VIS Publication Dataset

Vis
2005
Eyelet particle tracing - steady visualization of unsteady flow
10.1109/VISUAL.2005.1532848
6. 614
C
It is a challenging task to visualize the behavior of time-dependent 3D vector fields. Most of the time an overview of unsteady fields is provided via animations, but, unfortunately, animations provide only transient impressions of momentary flow. In this paper we present two approaches to visualize time-varying fields with fixed geometry. Path lines and streak lines represent such a steady visualization of unsteady vector fields, but because of occlusion and visual clutter it is useless to draw them all over the spatial domain. A selection is needed. We show how bundles of streak lines and path lines, running at different times through one point in space, like through an eyelet, yield an insightful visualization of flow structure ("eyelet lines"). To provide a more intuitive and appealing visualization we also explain how to construct a surface from these lines. As a second approach, we use a simple measurement of local changes of a field over time to determine regions with strong changes. We visualize these regions with isosurfaces to give an overview of the activity in the dataset. Finally, we use the regions as a guide for placing eyelets.
Wiebel, A.;Scheuermann, G.
Dept. of Comput. Sci., Leipzig Univ., Germany|c|;
10.1109/VISUAL.2001.964493;10.1109/VISUAL.1995.485146;10.1109/VISUAL.2004.107;10.1109/VISUAL.1992.235211;10.1109/VISUAL.1993.398848;10.1109/VISUAL.2004.113;10.1109/VISUAL.1993.398850;10.1109/VISUAL.2003.1250372;10.1109/VISUAL.1996.568121;10.1109/VISUAL.2004.99
3D Vector Field Visualization, Time-Varying Data Visualization, Flow Visualization, Vector/Tensor Visualization
Vis
2005
Farthest point seeding for efficient placement of streamlines
10.1109/VISUAL.2005.1532832
4. 486
C
We propose a novel algorithm for placement of streamlines from two-dimensional steady vector or direction fields. Our method consists of placing one streamline at a time by numerical integration starting at the point farthest away from all previously placed streamlines. Such a farthest point seeding strategy leads to high quality placements by favoring long streamlines, while retaining uniformity with the increasing density. Our greedy approach generates placements of comparable quality with respect to the optimization approach from Turk and Banks, while being 200 times faster. Simplicity, robustness as well as efficiency are achieved through the use of a Delaunay triangulation to model the streamlines, address proximity queries and determine the biggest voids by exploiting the empty circle property. Our method handles variable density and extends to multiresolution.
Mebarki, A.;Alliez, P.;Devillers, O.
INRIA Sophia-Antipolis, France|c|;;
10.1109/VISUAL.2000.885690
Streamline placement, farthest point seeding, Delaunay triangulation, variable density, multiresolution
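
A minimal sketch of the farthest-point seeding step described in the abstract above, for the 2D case and assuming numpy and scipy are available: the largest void among the sample points of the already placed streamlines is located as the largest empty circumcircle of their Delaunay triangulation (the empty-circle property), and its circumcenter is returned as the next seed. The streamline integrator, the density tests, and the incremental triangulation update of the actual method are omitted; sample_points is a hypothetical array of points on previously placed streamlines.

import numpy as np
from scipy.spatial import Delaunay

def circumcircle(a, b, c):
    """Circumcenter and circumradius of the 2D triangle (a, b, c)."""
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - a)

def next_seed(sample_points):
    """Return the center and radius of the biggest void among the sample points."""
    tri = Delaunay(sample_points)
    best_center, best_radius = None, -1.0
    for ia, ib, ic in tri.simplices:
        center, radius = circumcircle(sample_points[ia], sample_points[ib], sample_points[ic])
        if radius > best_radius:
            best_center, best_radius = center, radius
    return best_center, best_radius

# Usage: seed, void_radius = next_seed(points_on_existing_streamlines)
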
Vis
2005
Fast and reproducible fiber bundle selection in DTI visualization
10.1109/VISUAL.2005.1532778
5. 64
C
Diffusion tensor imaging (DTI) is an MRI-based technique for quantifying water diffusion in living tissue. In the white matter of the brain, water diffuses more rapidly along the neuronal axons than in the perpendicular direction. By exploiting this phenomenon, DTI can be used to determine trajectories of fiber bundles, or neuronal connections between regions, in the brain. The resulting bundles can be visualized. However, the resulting visualizations can be complex and difficult to interpret. An effective approach is to pre-determine trajectories from a large number of positions throughout the white matter (full brain fiber tracking) and to offer facilities to aid the user in selecting fiber bundles of interest. Two factors are crucial for the use and acceptance of this technique in clinical studies: firstly, the selection of the bundles by brain experts should be interactive, supported by real-time visualization of the trajectories registered with anatomical MRI scans. Secondly, the fiber selections should be reproducible, so that different experts will achieve the same results. In this paper we present a practical technique for the interactive selection of fiber-bundles using multiple convex objects that is an order of magnitude faster than similar techniques published earlier. We also present the results of a clinical study with ten subjects that show that our selection approach is highly reproducible for fractional anisotropy (FA) calculated over the selected fiber bundles.
Blaas, J.;Botha, C.P.;Peters, B.;Vos, F.M.;Post, F.H.
Data Visualization Group, Delft Univ. of Technol., Netherlands|c|;;;;
10.1109/VISUAL.1999.809894;10.1109/VISUAL.2004.30
diffusion tensor imaging, tractography, white matter
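
A minimal sketch of the fiber selection idea summarized above, assuming fibers from full-brain tracking are stored as (N, 3) numpy arrays and approximating the paper's convex selection objects by axis-aligned boxes (the box test is only an illustrative stand-in): a fiber belongs to the bundle if it passes through every selection object.

import numpy as np

def passes_through(fiber, box_min, box_max):
    """True if any point of the fiber lies inside the axis-aligned box."""
    inside = np.all((fiber >= box_min) & (fiber <= box_max), axis=1)
    return bool(inside.any())

def select_bundle(fibers, boxes):
    """Keep fibers that pass through all selection objects (logical AND of ROIs)."""
    return [f for f in fibers
            if all(passes_through(f, bmin, bmax) for bmin, bmax in boxes)]

# fibers: list of (N_i, 3) arrays of trajectory points
# boxes:  list of (box_min, box_max) pairs, each a length-3 array
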
Vis
2005
Fast visualization by shear-warp on quadratic super-spline models using wavelet data decompositions
10.1109/VISUAL.2005.1532816
3. 358
C
We develop the first approach for interactive volume visualization based on a sophisticated rendering method of shear-warp type, wavelet data encoding techniques, and a trivariate spline model, which has been introduced recently. As a first step of our algorithm, we apply standard wavelet expansions to represent and decimate the given gridded three-dimensional data. Based on this data encoding, we give a sophisticated version of the shear-warp based volume rendering method. Our new algorithm visits each voxel only once, taking advantage of the particular data organization of octrees. In addition, the hierarchies of the data guide the local (re)construction of the quadratic super-spline models, which we apply as a pure visualization tool. The low total degree of the polynomial pieces allows us to numerically approximate the volume rendering integral efficiently. Since the coefficients of the splines are almost immediately available from the given data, Bernstein-Bezier techniques can be fully employed in our algorithms. In this way, we demonstrate that these models can be successfully applied to full volume rendering of hierarchically organized data. Our computational results show that (even when hierarchical approximations are used) the new approach leads to almost artifact-free visualizations of high quality for complicated and noise-contaminated volume data sets, while the computational effort is considerably low, i.e. our current implementation yields 1-2 frames per second for parallel perspective rendering a 256³ volume data set (using simple opacity transfer functions) in a 512² view-port.
Schlosser, G.;Hesser, J.;Zeilfelder, F.;Rossl, C.;Nurnberger, G.;Seidel, H.-P.;Männer, R.
ICM, Mannheim Univ., Germany|c|;;;;;;
10.1109/VISUAL.1998.745713;10.1109/VISUAL.2001.964513;10.1109/VISUAL.1994.346331;10.1109/VISUAL.1990.146391
Volume Rendering, Quadratic Super-Splines, Shear-Warp Algorithm, Hierarchical Data Encoding
Vis
2005
Framework for visualizing higher-order basis functions
10.1109/VISUAL.2005.1532776
4. 50
C
Techniques in numerical simulation such as the finite element method depend on basis functions for approximating the geometry and variation of the solution over discrete regions of a domain. Existing visualization systems can visualize these basis functions if they are linear, or for a small set of simple non-linear bases. However, newer numerical approaches often use basis functions of elevated and mixed order or complex form; hence existing visualization systems cannot directly process them. In this paper we describe an approach that supports automatic, adaptive tessellation of general basis functions using a flexible and extensible software architecture in conjunction with an on demand, edge-based recursive subdivision algorithm. The framework supports the use of functions implemented in external simulation packages, eliminating the need to reimplement the bases within the visualization system. We demonstrate our method on several examples, and have implemented the framework in the open-source visualization system VTK.
Schroeder, W.J.;Bertel, F.;Malaterre, M.;Thompson, D.;Pebay, P.P.;O'Bara, R.;Saurabh Tendulkar
;;;;;;
10.1109/VISUAL.1997.663886;10.1109/VISUAL.2004.15;10.1109/VISUAL.1991.175818;10.1109/VISUAL.1995.480821
finite element, basis function, tessellation, framework
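
A minimal sketch of edge-based recursive subdivision for tessellating a non-linear basis along one parametric edge, with evaluate standing in for the external simulation package's evaluation callback (a hypothetical interface, not the framework's actual API): the edge is split at its midpoint whenever the true value there deviates from linear interpolation of the endpoints by more than a tolerance.

def tessellate_edge(p0, p1, evaluate, tol=1e-3, depth=0, max_depth=8):
    """Adaptively subdivide the parametric edge [p0, p1].

    evaluate(t) returns the field value from the higher-order basis at
    parameter t; the edge is split where linear interpolation is too coarse.
    Returns the ordered list of tessellation parameters."""
    mid = 0.5 * (p0 + p1)
    linear = 0.5 * (evaluate(p0) + evaluate(p1))    # linear guess at the midpoint
    if depth >= max_depth or abs(evaluate(mid) - linear) <= tol:
        return [p0, p1]
    left = tessellate_edge(p0, mid, evaluate, tol, depth + 1, max_depth)
    right = tessellate_edge(mid, p1, evaluate, tol, depth + 1, max_depth)
    return left[:-1] + right                        # do not duplicate the midpoint

# Example with a quadratic basis function on [0, 1]:
# points = tessellate_edge(0.0, 1.0, evaluate=lambda t: t * (1.0 - t))
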
Vis
2005
Hardware-accelerated 3D visualization of mass spectrometry data
10.1109/VISUAL.2005.1532827
4. 446
C
We present a system for three-dimensional visualization of complex liquid chromatography-mass spectrometry (LCMS) data. Every LCMS data point has three attributes: time, mass, and intensity. Instead of the traditional visualization of two-dimensional subsets of the data, we visualize it as a height field or terrain in 3D. Unlike traditional terrains, LCMS data has non-linear sampling and consists mainly of tall needle-like features. We adapt the level-of-detail techniques of geometry clipmaps for hardware-accelerated rendering of LCMS data. The data is cached in video memory as a set of nested rectilinear grids centered about the view frustum. We introduce a simple compression scheme and dynamically stream data from the CPU to the GPU as the viewpoint moves. Our system allows interactive investigation of complex LCMS data with close to one billion data points at up to 130 frames per second, depending on the view conditions.
de Corral, J.;Pfister, H.
Waters Corp., Milford, MA, USA|c|;
10.1109/VISUAL.1996.567600;10.1109/VISUAL.1998.745282;10.1109/VISUAL.1997.663860
Mass Spectrometry, Terrain Rendering, GPU Rendering
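
A minimal sketch of the nested-grid caching idea from the abstract, assuming the LC-MS intensities are stored as a dense 2D array over (time, mass) and that center is the sample index the view is focused on (both names are illustrative, not the system's interface): each level spans twice the extent of the previous one at half the sampling rate, so every cached level has roughly the same memory footprint.

import numpy as np

def build_clipmap(intensity, center, window=256, levels=4):
    """Nested rectilinear grids around `center`; level k covers 2**k times the
    base window but is subsampled by 2**k."""
    rows, cols = intensity.shape
    pyramid = []
    for k in range(levels):
        half = (window // 2) * 2 ** k
        r0, r1 = max(0, center[0] - half), min(rows, center[0] + half)
        c0, c1 = max(0, center[1] - half), min(cols, center[1] + half)
        pyramid.append(intensity[r0:r1:2 ** k, c0:c1:2 ** k])
    return pyramid

# As the viewpoint moves, only the borders of each level would need to be
# re-streamed to the GPU; the sketch rebuilds whole levels for clarity.
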
Vis
2005
Hardware-accelerated simulated radiography
10.1109/VISUAL.2005.1532815
3. 350
C
We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester.
Laney, D.;Callahan, S.P.;Max, N.;Silva, C.T.;Langer, S.;Frank, R.
Lawrence Livermore Nat. Lab., Livermore, CA, USA|c|;;;;;
10.1109/VISUAL.2000.885683;10.1109/VISUAL.2004.85;10.1109/VISUAL.2003.1250390
volume rendering, hardware acceleration
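
A minimal sketch of the absorption-only regime mentioned in the abstract, on a regular voxel grid of attenuation coefficients rather than the paper's curvilinear hexahedral meshes: each ray accumulates optical depth and attenuates the source intensity following Beer-Lambert, I = I0 * exp(-∫ mu ds).

import numpy as np

def absorption_radiograph(mu, i0=1.0, ds=1.0):
    """Simulated radiograph of a voxel grid `mu` (z, y, x) with rays cast
    along the x axis: I = I0 * exp(-sum(mu * ds))."""
    optical_depth = mu.sum(axis=2) * ds    # integrate attenuation along each ray
    return i0 * np.exp(-optical_depth)     # Beer-Lambert attenuation

# Example: a weakly absorbing block with a dense inclusion.
mu = np.full((64, 64, 64), 0.01)
mu[24:40, 24:40, 24:40] = 0.2
image = absorption_radiograph(mu)          # (64, 64) simulated radiograph
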
Vis
2005
High dynamic range volume visualization
10.1109/VISUAL.2005.1532812
3. 334
C
High resolution volumes require high precision compositing to preserve detailed structures. This is even more desirable for volumes with high dynamic range values. After the high precision intermediate image has been computed, simply rounding up pixel values to regular display scales loses the computed details. In this paper, we present a novel high dynamic range volume visualization method for rendering volume data with both high spatial and intensity resolutions. Our method performs high precision volume rendering followed by dynamic tone mapping to preserve details on regular display devices. By leveraging available high dynamic range image display algorithms, this dynamic tone mapping can be automatically adjusted to enhance selected features for the final display. We also present a novel transfer function design interface with nonlinear magnification of the density range and logarithmic scaling of the color/opacity range to facilitate high dynamic range volume visualization. By leveraging modern commodity graphics hardware and out-of-core acceleration, our system can produce an effective visualization of huge volume data.
Xiaoru Yuan;Nguyen, M.Z.;Baoquan Chen;Porter, D.H.
Dept. of Comput. Sci. & Eng., Minnesota Univ., MN, USA|c|;;;
10.1109/INFVIS.1997.636718;10.1109/VISUAL.1999.809908
Volume Rendering, High Dynamic Range, Focus+Context Techniques, User Interfaces, Transfer Function Design, Non-linear Magnification
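
A minimal sketch of the pipeline the abstract outlines, assuming per-ray color and opacity samples are already classified: compositing is done in floating point, and the high-precision result is mapped to display range afterwards; the simple logarithmic tone map below is only a stand-in for the HDR display operators the paper leverages.

import numpy as np

def composite_front_to_back(colors, alphas):
    """Float-precision front-to-back compositing of one ray's samples."""
    c_out, a_out = 0.0, 0.0
    for c, a in zip(colors, alphas):
        c_out += (1.0 - a_out) * a * c
        a_out += (1.0 - a_out) * a
        if a_out > 0.999:                  # early ray termination
            break
    return c_out

def log_tone_map(hdr):
    """Map high-dynamic-range intensities into [0, 1] for a regular display."""
    hdr = np.asarray(hdr, dtype=float)
    return np.log1p(hdr) / np.log1p(hdr.max())
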
Vis
2005
High performance volume splatting for visualization of neurovascular data
10.1109/VISUAL.2005.1532805
2. 278
C
A new technique is presented to increase the performance of volume splatting by using hardware accelerated point sprites. This allows creating screen aligned elliptical splats for high quality volume splatting at very low cost on the GPU. Only one vertex per splat is stored on the graphics card. GPU generated point sprite texture coordinates are used for computing splats and per-fragment 3D-texture coordinates on the fly. Thus, only 6 bytes per splat are stored on the GPU and vertex shader load is 25% in comparison to applying textured quads. For eight predefined viewing directions, depth-sorting of the splats is performed in a pre-processing step where the resulting indices are stored on the GPU. Thereby, there is no data transfer between CPU and GPU during rendering. Post-classificative two dimensional transfer functions with lighting for scalar data and tagged volumes were implemented. Thereby, we focused on the visualization of neurovascular structures, where typically no more than 2% of the voxels contribute to the resulting 3D-representation. A comparison with a 3D-texture-based slicing algorithm showed frame rates up to 11 times higher for the presented approach on current CPUs. The presented technique was evaluated with a broad medical database and its value for highly sparse volume visualization is shown.
Vega-Higuera, F.;Hastreiter, P.;Fahlbusch, R.;Greiner, G.
Dept. of Neurosurg. & Comput. Graphics Group, Univ. of Erlangen, Germany|c|;;;
10.1109/VISUAL.2004.38;10.1109/VISUAL.1997.663882;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.1996.567608;10.1109/VISUAL.2003.1250404;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2003.1250388;10.1109/VISUAL.2001.964490;10.1109/VISUAL.1999.809909;10.1109/VISUAL.2003.1250386
volume visualization, volume splatting, neurovascular structures, segmented data
Vis
2005
HOT-lines: tracking lines in higher order tensor fields
10.1109/VISUAL.2005.1532773
2. 34
C
Tensors occur in many areas of science and engineering. In particular, they are used to describe charge, mass and energy transport (i.e. electrical conductivity tensor, diffusion tensor, thermal conduction tensor, resp.). If the local transport pattern is complicated, the usual second-order tensor representation is not sufficient. So far, there are no appropriate visualization methods for this case. We point out similarities of symmetric higher-order tensors and spherical harmonics. A spherical harmonic representation is used to improve tensor glyphs. This paper unites the definition of streamlines and tensor lines and generalizes tensor lines to those applications where second-order tensor representations fail. The algorithm is tested on the tractography problem in diffusion tensor magnetic resonance imaging (DT-MRI) and improved for this special application.
Hlawitschka, M.;Scheuermann, G.
Inst. for Comput. Sci., Leipzig Univ., Germany|c|;
10.1109/VISUAL.2004.105;10.1109/VISUAL.1992.235193;10.1109/VISUAL.2000.885716;10.1109/VISUAL.2002.1183797;10.1109/VISUAL.1994.346326
Higher order tensors, spherical harmonics, tensor lines, tractography, vector/tensor visualization, visualization in medicine, DT-MRI
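
For reference, the directional function induced on the unit sphere by a totally symmetric tensor of order ℓ, which underlies the comparison with spherical harmonics made in the abstract (the expansion is the standard spherical-harmonic series of a degree-ℓ homogeneous polynomial restricted to the sphere, not a formula quoted from the paper; only degrees of the same parity as ℓ occur):

D(\mathbf{g}) = \sum_{i_1,\dots,i_\ell=1}^{3} T_{i_1\cdots i_\ell}\, g_{i_1}\cdots g_{i_\ell}, \qquad \mathbf{g}\in S^2,
\qquad
D(\mathbf{g}) = \sum_{\substack{l\le\ell\\ l\equiv\ell\ (\mathrm{mod}\ 2)}} \; \sum_{m=-l}^{l} c_{lm}\, Y_l^m(\mathbf{g}).
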
Vis
2005
Illuminated lines revisited
10.1109/VISUAL.2005.1532772
1. 26
C
For the rendering of vector and tensor fields, several texture-based volumetric rendering methods have been presented in recent years. While they have indisputable merits, the classical vertex-based rendering of integral curves has the advantage of better zooming capabilities as it is not bound to a fixed resolution. It has been shown that lighting can improve spatial perception of lines significantly, especially if lines appear in bundles. Although OpenGL does not directly support lighting of lines, fast rendering of illuminated lines can be achieved by using basic texture mapping. This existing technique is based on a maximum principle which gives a good approximation of specular reflection. Diffuse reflection, however, is essentially limited to bidirectional lights at infinity. We show how the realism can be further increased by improving diffuse reflection. We present simplified expressions for the Phong/Blinn lighting of infinitesimally thin cylindrical tubes. Based on these, we propose a fast rendering technique with diffuse and specular reflection for orthographic and perspective views and for multiple local and infinite lights. The method requires commonly available programmable vertex and fragment shaders and only two-dimensional lookup textures.
Mallo, O.;Peikert, R.;Sigg, C.;Sadlo, F.
Eidgenossische Tech. Hochschule, Zurich, Switzerland|c|;;;
10.1109/VISUAL.2004.5;10.1109/VISUAL.2003.1250378;10.1109/VISUAL.2002.1183797;10.1109/VISUAL.1996.567777;10.1109/VISUAL.1997.663912
Field lines, illumination, vector field visualization, texture mapping, graphics hardware
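
A minimal sketch of the classical maximum-principle line lighting that the abstract revisits (the paper's texture-based evaluation and its improved diffuse term are not reproduced here): since an infinitesimally thin line has no unique normal, the Phong/Blinn terms are written in terms of the tangent T, replacing N·L by its maximum sqrt(1 - (L·T)^2) over all normals perpendicular to T, and treating the half-vector term analogously.

import numpy as np

def illuminated_line(tangent, light, view, ka=0.1, kd=0.7, ks=0.3, shininess=32):
    """Phong/Blinn-style intensity for an infinitesimally thin line segment.

    tangent, light, view are 3-vectors (line tangent T, light direction L,
    view direction V); N.L is replaced by sqrt(1 - (L.T)^2)."""
    t = tangent / np.linalg.norm(tangent)
    l = light / np.linalg.norm(light)
    h = l + view / np.linalg.norm(view)
    h /= np.linalg.norm(h)                               # Blinn half-vector
    diffuse = np.sqrt(max(0.0, 1.0 - np.dot(l, t) ** 2))
    specular = np.sqrt(max(0.0, 1.0 - np.dot(h, t) ** 2)) ** shininess
    return ka + kd * diffuse + ks * specular
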
Vis
2005
Illustration and photography inspired visualization of flows and volumes
10.1109/VISUAL.2005.1532858
6. 694
C
Understanding and analyzing complex volumetrically varying data is a difficult problem. Many computational visualization techniques have had only limited success in succinctly portraying the structure of three-dimensional turbulent flow. Motivated by both the extensive history and success of illustration and photographic flow visualization techniques, we have developed a new interactive volume rendering and visualization system for flows and volumes that simulates and enhances traditional illustration, experimental advection, and photographic flow visualization techniques. Our system uses a combination of varying focal and contextual illustrative styles, new advanced two-dimensional transfer functions, enhanced Schlieren and shadowgraphy shaders, and novel oriented structure enhancement techniques to allow interactive visualization, exploration, and comparative analysis of scalar, vector, and time-varying volume datasets. Both traditional illustration techniques and photographic flow visualization techniques effectively reduce visual clutter by using compact oriented structure information to convey three-dimensional structures. Therefore, a key to the effectiveness of our system is using one-dimensional (Schlieren and shadowgraphy) and two-dimensional (silhouette) oriented structural information to reduce visual clutter, while still providing enough three-dimensional structural information for the user's visual system to understand complex three-dimensional flow data. By combining these oriented feature visualization techniques with flexible transfer function controls, we can visualize scalar and vector data, allow comparative visualization of flow properties in a succinct, informative manner, and provide continuity for visualizing time-varying datasets.
Svakhine, N.;Yun Jang;Ebert, D.S.;Gaither, K.
Purdue Univ., West Lafayette, IN, USA|c|;;;
10.1109/VISUAL.1995.485141;10.1109/VISUAL.1993.398846;10.1109/VISUAL.2003.1250378;10.1109/VISUAL.1996.567777;10.1109/VISUAL.1997.663912;10.1109/VISUAL.2003.1250361;10.1109/VISUAL.1999.809905;10.1109/VISUAL.2000.885694;10.1109/VISUAL.2000.885689;10.1109/VISUAL.2005.1532857;10.1109/VISUAL.2000.885696;10.1109/VISUAL.1993.398877
interactive volume illustration, flow visualization, non-photorealistic rendering, photographic techniques
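
A minimal sketch of what Schlieren- and shadowgraph-style shaders respond to, computed here on a 2D slice of a scalar density field (the paper's GPU shaders, transfer functions, and structure-enhancement machinery are not reproduced): Schlieren imaging is sensitive to the first derivative of density (gradient magnitude), shadowgraphy to the second derivative (Laplacian).

import numpy as np

def schlieren(density):
    """Gradient-magnitude image: bright where density changes fastest."""
    gy, gx = np.gradient(density)
    return np.hypot(gx, gy)

def shadowgraph(density):
    """Laplacian image: responds to the second spatial derivative of density."""
    gy, gx = np.gradient(density)
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    return gxx + gyy

# Both images would normally be remapped through a transfer function for display.
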
Vis
2005
Illustration-inspired techniques for visualizing time-varying data
10.1109/VISUAL.2005.1532857
6. 686
C
Traditionally, time-varying data has been visualized using snapshots of the individual time steps or an animation of the snapshots shown in a sequential manner. For larger datasets with many time-varying features, animation can be limited in its use, as an observer can only track a limited number of features over the last few frames. Visually inspecting each snapshot is not practical either for a large number of time-steps. We propose new techniques inspired by the illustration literature to convey change over time more effectively in a time-varying dataset. Speedlines are used extensively by cartoonists to convey motion, speed, or change over different panels. Flow ribbons are another technique used by cartoonists to depict motion in a single frame. Strobe silhouettes are used to convey the previous positions of an object to the user. These illustration-inspired techniques can be used in conjunction with animation to convey change over time.
Joshi, A.;Rheingans, P.
Maryland Univ., Baltimore, MD, USA|c|;
10.1109/VISUAL.2001.964520;10.1109/VISUAL.1999.809910;10.1109/VISUAL.1994.346321;10.1109/VISUAL.1996.567807;10.1109/VISUAL.1999.809879;10.1109/VISUAL.2005.1532858;10.1109/VISUAL.2002.1183777;10.1109/VISUAL.2000.885694;10.1109/VISUAL.2003.1250386
Flow visualization, Non-photorealistic rendering, time-varying data, illustration
Vis
2005
Illustrative display of hidden iso-surface structures
10.1109/VISUAL.2005.1532855
6. 670
C
Indirect volume rendering is a widespread method for the display of volume datasets. It is based on the extraction of polygonal iso-surfaces from volumetric data, which are then rendered using conventional rasterization methods. Whereas this rendering approach is fast and relatively easy to implement, it cannot easily provide an understandable display of structures occluded by the directly visible iso-surface. Simple approaches like alpha-blending for transparency when drawing the iso-surface often generate a visually complex output, which is difficult to interpret. Moreover, such methods can significantly increase the computational complexity of the rendering process. In this paper, we therefore propose a new approach for the illustrative indirect rendering of volume data in real-time. This algorithm emphasizes the silhouette of objects represented by the iso-surface. Additionally, shading intensities on objects are reproduced with a monochrome hatching technique. Using a specially designed two-pass rendering process, structures behind the front layer of the iso-surface are automatically extracted with a depth peeling method. The shapes of these hidden structures are also displayed as silhouette outlines. As an additional option, the geometry of explicitly specified inner objects can be displayed with constant translucency. Although these inner objects always remain visible, a specific shading and depth attenuation method is used to convey the depth relationships. We describe the implementation of the algorithm, which exploits the programmability of state-of-the-art graphics processing units (GPUs). The algorithm described in this paper does not require any preprocessing of the input data or a manual definition of inner structures. Since the presented method works on iso-surfaces, which are stored as polygonal datasets, it can also be applied to other types of polygonal models.
Fischer, J.;Bartz, D.;Strasser, W.
Visual Comput. for Medicine, Tubingen Univ., Germany|c|;;
10.1109/VISUAL.2002.1183777;10.1109/VISUAL.2000.885694;10.1109/VISUAL.2003.1250387;10.1109/VISUAL.2000.885723;10.1109/VISUAL.2004.48
illustrative rendering, non-photorealistic rendering, transparency, indirect volume rendering, hatching, shading language
Vis
2005
Interactive rendering of large unstructured grids using dynamic level-of-detail
10.1109/VISUAL.2005.1532796
1. 206
C
We describe a new dynamic level-of-detail (LOD) technique that allows real-time rendering of large tetrahedral meshes. Unlike approaches that require hierarchies of tetrahedra, our approach uses a subset of the faces that compose the mesh. No connectivity is used for these faces so our technique eliminates the need for topological information and hierarchical data structures. By operating on a simple set of triangular faces, our algorithm allows a robust and straightforward graphics hardware (GPU) implementation. Because the subset of faces processed can be constrained to arbitrary size, interactive rendering is possible for a wide range of data sets and hardware configurations.
Callahan, S.P.;Comba, J.L.D.;Shirley, P.;Silva, C.T.
Sci. Comput. & Imaging Inst., Univ. of Utah, Salt Lake City, UT, USA|c|;;;
10.1109/VISUAL.2004.102;10.1109/VISUAL.1999.809908;10.1109/VISUAL.1998.745283;10.1109/VISUAL.2004.85;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.2002.1183778;10.1109/VISUAL.1998.745329;10.1109/VISUAL.2002.1183767;10.1109/VISUAL.2000.885711
interactive volume rendering, multiresolution meshes, level-of-detail, tetrahedral meshes
Vis
2005
Interactive visual analysis and exploration of injection systems simulations
10.1109/VISUAL.2005.1532821
3. 398
C
Simulations often generate large amounts of data that require the use of SciVis techniques for effective exploration of simulation results. In some cases, like 1D theory of fluid dynamics, conventional SciVis techniques are not very useful. One such example is the simulation of injection systems, which is becoming more and more important due to increasingly restrictive emission regulations. There are many parameters, and correlations among them, that influence the simulation results. We describe how basic information visualization techniques can help in visualizing, understanding and analyzing this kind of data. The ComVis tool is developed and used to analyze and explore the data. ComVis supports multiple linked views and common information visualization displays such as 2D and 3D scatter plots, histograms, parallel coordinates, pie charts, etc. A diesel common rail injector with a 2/2-way valve is used for a case study. Data sets were generated using the commercially available AVL HYDSIM simulation tool for dynamic analysis of hydraulic and hydro-mechanical systems, with the main application area in the simulation of fuel injection systems.
Matkovic, K.;Jelovic, M.;Juric, J.;Konyha, Z.;Gracanin, D.
VRVis Res. Center, Vienna, Austria|c|;;;;
10.1109/INFVIS.1997.636793;10.1109/VISUAL.2000.885739;10.1109/VISUAL.1990.146402
Information visualization, visual exploration, simulation, injection system
Vis
2005
Interpolation and visualization for advected scalar fields
10.1109/VISUAL.2005.1532849
6. 622
C
Doppler radars are useful facilities for weather forecasting. The data sampled by using Doppler radars are used to measure the distributions and densities of rain drops, snow crystals, hail stones, or even insects in the atmosphere. In this paper, we propose to build up a graphics-based software system for visualizing Doppler radar data. In the system, the reflectivity data gathered by using Doppler radars are post-processed to generate virtual cloud images which reveal the densities of precipitation in the air. An optical flow based method is adopted to compute the velocities of clouds, advected by winds. Therefore, the movement of clouds is depicted. The cloud velocities are also used to interpolate reflectivities for arbitrary time steps. Therefore, the reflectivities at any time can be produced. Our system consists of three stages. At the first stage, the raw radar data are re-sampled and filtered to create a multiple resolution data structure, based on a pyramid structure. At the second stage, a numeric method is employed to compute cloud velocities in the air and to interpolate radar reflectivity data at given time steps. The radar reflectivity data and cloud velocities are displayed at the last stage. The reflectivities are rendered by using splatting methods to produce semi-transparent cloud images. Two kinds of media are created for analyzing the reflectivity data. The first kind of media consists of a group of still images of clouds which displays the distribution and density of water in the air. The second type of media is a short animation of cloud images to show the formation and movement of the clouds. To show the advection of clouds, the cloud velocities are displayed by using two dimensional images. In these images, the velocities are represented by arrows and superimposed on cloud images. To enhance image quality, gradients and diffusion of the radar data are computed and used in the rendering process. Therefore the cloud structures are better portrayed. In order to achieve interactive visualization, our system also includes a view-dependent visualization module. The radar data at far distances are rendered in lower resolutions, while the data closer to the eye position are rendered in detail.
Shyh-Kuang Ueng;Sheng-Chuan Wang
Dept. of Comput. Sci., Nat. Taiwan Ocean Univ., Taipei, Taiwan|c|;
10.1109/VISUAL.2004.69;10.1109/VISUAL.2001.964490;10.1109/VISUAL.1999.809916;10.1109/VISUAL.2002.1183823
Doppler radar, volume rendering, optical flow, level of details, vector field visualization
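
A minimal sketch of the temporal interpolation step described in the abstract, assuming per-pixel optical-flow displacements (u along columns, v along rows, in grid cells per frame) have already been estimated between two reflectivity frames; the simple backward/forward advection and blending below is an illustrative scheme, not the paper's exact numeric method.

import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_reflectivity(r0, r1, u, v, t):
    """Reflectivity at fractional time t in [0, 1] between frames r0 and r1:
    advect both frames along the flow toward time t, then blend linearly."""
    rows, cols = r0.shape
    yy, xx = np.mgrid[0:rows, 0:cols].astype(float)
    f0 = map_coordinates(r0, [yy - t * v, xx - t * u], order=1, mode='nearest')
    f1 = map_coordinates(r1, [yy + (1.0 - t) * v, xx + (1.0 - t) * u],
                         order=1, mode='nearest')
    return (1.0 - t) * f0 + t * f1
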
Vis
2005
Marching diamonds for unstructured meshes
10.1109/VISUAL.2005.1532825
4. 429
C
We present a higher-order approach to the extraction of isosurfaces from unstructured meshes. Existing methods use linear interpolation along each mesh edge to find isosurface intersections. In contrast, our method determines intersections by performing barycentric interpolation over diamonds formed by the tetrahedra incident to each edge. Our method produces smoother, more accurate isosurfaces. Additionally, interpolating over diamonds, rather than linearly interpolating edge endpoints, enables us to identify up to two isosurface intersections per edge. This paper details how our new technique extracts isopoints, and presents a simple connection strategy for forming a triangle mesh isosurface.
Anderson, J.C.;Bennett, J.C.;Joy, K.I.
Comput. Sci. Dept., California Univ., Davis, CA, USA|c|;;
10.1109/VISUAL.1994.346331;10.1109/VISUAL.1991.175782
isosurface extraction, interpolation, unstructured mesh
Vis
2005
Multimodal exploration of the fourth dimension
10.1109/VISUAL.2005.1532804
2. 270
C
We present a multimodal paradigm for exploring topological surfaces embedded in four dimensions; we exploit haptic methods in particular to overcome the intrinsic limitations of 3D graphics images and 3D physical models. The basic problem is that, just as 2D shadows of 3D curves lose structure where lines cross, 3D graphics projections of smooth 4D topological surfaces are interrupted where one surface intersects another. Furthermore, if one attempts to trace real knotted ropes or plastic models of self-intersecting surfaces with a fingertip, one inevitably collides with parts of the physical artifact. In this work, we exploit the free motion of a computer-based haptic probe to support a continuous motion that follows the local continuity of the object being explored. For our principal test case of 4D-embedded surfaces projected to 3D, this permits us to follow the full local continuity of the surface as though in fact we were touching an actual 4D object. We exploit additional sensory cues to provide supplementary or redundant information. For example, we can use audio tags to note the relative 4D depth of illusory 3D surface intersections produced by projection from 4D, as well as providing automated refinement of the tactile exploration path to eliminate jitter and snagging, resulting in a much cleaner exploratory motion than a bare uncorrected motion. Visual enhancements provide still further improvement to the feedback: by opening a view-direction-defined cutaway into the interior of the 3D surface projection, we allow the viewer to keep the haptic probe continuously in view as it traverses any touchable part of the object. Finally, we extend the static tactile exploration framework using a dynamic mode that links each stylus motion to a change in orientation that creates at each instant a maximal-area screen projection of a neighborhood of the current point of interest. This minimizes 4D distortion and permits true metric sizes to be deduced locally at any point. All these methods combine to reveal the full richness of the complex spatial relationships of the target shapes, and to overcome many expected perceptual limitations in 4D visualization.
Hanson, A.J.;Hui Zhang
Dept. of Comput. Sci., Indiana Univ., Bloomington, IN, USA|c|;
10.1109/VISUAL.1995.480804
multimodal, haptics, visualization
Vis
2005
On the optimization of visualizations of complex phenomena
10.1109/VISUAL.2005.1532782
8. 94
C
The problem of perceptually optimizing complex visualizations is a difficult one, involving perceptual as well as aesthetic issues. In our experience, controlled experiments are quite limited in their ability to uncover interrelationships among visualization parameters, and thus may not be the most useful way to develop rules-of-thumb or theory to guide the production of high-quality visualizations. In this paper, we propose a new experimental approach to optimizing visualization quality that integrates some of the strong points of controlled experiments with methods more suited to investigating complex highly-coupled phenomena. We use human-in-the-loop experiments to search through visualization parameter space, generating large databases of rated visualization solutions. This is followed by data mining to extract results such as exemplar visualizations, guidelines for producing visualizations, and hypotheses about strategies leading to strong visualizations. The approach can easily address both perceptual and aesthetic concerns, and can handle complex parameter interactions. We suggest a genetic algorithm as a valuable way of guiding the human-in-the-loop search through visualization parameter space. We describe our methods for using clustering, histogramming, principal component analysis, and neural networks for data mining. The experimental approach is illustrated with a study of the problem of optimal texturing for viewing layered surfaces so that both surfaces are maximally observable.
House, D.;Bair, A.;Ware, C.
Texas A&M Univ., College Station, TX, USA|c|;;
10.1109/TVCG.2009.126;10.1109/VISUAL.1996.568113;10.1109/VISUAL.1996.567784
perception, visualization evaluation, layered surfaces, genetic algorithm, data mining, principal component analysis, neural networks
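
A minimal sketch of the human-in-the-loop genetic search the abstract proposes, assuming each visualization is encoded as a vector of rendering/texturing parameters in [0, 1] and that rate(individual) renders it and returns the observer's rating (both the encoding and the rating callback are illustrative placeholders); every rated individual is logged so the resulting database can be mined afterwards.

import random

def evolve(rate, n_params=8, pop_size=12, generations=20, sigma=0.1):
    """Human-in-the-loop genetic search over visualization parameter space."""
    population = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    database = []                                     # (params, rating) pairs for data mining
    for _ in range(generations):
        scored = sorted(((rate(ind), ind) for ind in population), reverse=True)
        database.extend((ind, s) for s, ind in scored)
        parents = [ind for _, ind in scored[:pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
                     if random.random() < 0.3 else g for g in child]
            children.append(child)
        population = parents + children
    return database
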