IEEE VIS Publication Dataset

Vis
2005
A contract based system for large data visualization
10.1109/VISUAL.2005.1532795
1. 198
C
VisIt is a richly featured visualization tool that is used to visualize some of the largest simulations ever run. The scale of these simulations requires that optimizations are incorporated into every operation VisIt performs. But the set of applicable optimizations that VisIt can perform is dependent on the types of operations being done. Complicating the issue, VisIt has a plugin capability that allows new, unforeseen components to be added, making it even harder to determine which optimizations can be applied. We introduce the concept of a contract to the standard data flow network design. This contract enables each component of the data flow network to modify the set of optimizations used. In addition, the contract allows for new components to be accommodated gracefully within VisIt's data flow network system.
Childs, H.;Brugger, E.;Bonnell, K.;Meredith, J.;Miller, M.;Whitlock, B.;Max, N.
California Univ., Davis, CA, USA|c|;;;;;;
10.1109/VISUAL.1996.567752;10.1109/VISUAL.1990.146416;10.1109/VISUAL.1995.480821;10.1109/VISUAL.1991.175794;10.1109/VISUAL.1997.663895
large data set visualization, data flow networks, contract-based system
Vis
2005
A feature-driven approach to locating optimal viewpoints for volume visualization
10.1109/VISUAL.2005.1532834
4. 502
C
Optimal viewpoint selection is an important task because it considerably influences the amount of information contained in the 2D projected images of 3D objects, and thus dominates their first impressions from a psychological point of view. Although several methods have been proposed that calculate the optimal positions of viewpoints especially for 3D surface meshes, none has been developed for solid objects such as volumes. This paper presents a new method of locating such optimal viewpoints when visualizing volumes using direct volume rendering. The major idea behind our method is to decompose an entire volume into a set of feature components, and then find a globally optimal viewpoint by finding a compromise between locally optimal viewpoints for the components. As the feature components, the method employs interval volumes and their combinations that characterize the topological transitions of isosurfaces according to the scalar field. Furthermore, opacity transfer functions are also utilized to assign different weights to the decomposed components so that users can emphasize features of specific interest in the volumes. Several examples of volume datasets together with their optimal positions of viewpoints are exhibited in order to demonstrate that the method can effectively guide naive users to find optimal projections of volumes.
Takahashi, S.;Fujishiro, I.;Takeshima, Y.;Nishita, T.
Tokyo Univ., Japan|c|;;;
10.1109/VISUAL.1995.480789;10.1109/VISUAL.2004.96;10.1109/VISUAL.2002.1183774;10.1109/VISUAL.2005.1532833;10.1109/VISUAL.1997.663875;10.1109/VISUAL.2002.1183785
viewpoint selection, viewpoint entropy, direct volume rendering, interval volumes, level-set graphs
Vis
2005
A handheld flexible display system
10.1109/VISUAL.2005.1532846
5. 597
C
A new close range virtual reality system is introduced that allows intuitive and immersive user interaction with computer generated objects. A projector with a special spherical lens is combined with a flexible, tracked rear projection screen that users hold in their hands. Unlike normal projectors, the spherical lens allows for a 180 degree field of view and nearly infinite depth of focus. This allows the user to move the screen around the environment and use it as a virtual "slice" to examine the interior of 3D volumes. This provides a concrete correspondence between the virtual representation of the 3D volume and how that volume would actually appear if its real counterpart was sliced open. The screen can also be used as a "magic window" to view the mesh of the volume from different angles prior to taking cross sections of it. Real time rendering of the desired 3D volume or mesh is accomplished using current graphics hardware. Additional applications of the system are also discussed.
Konieczny, J.;Shimizu, C.;Meyer, G.;Colucci, D.
Digital Technol. Center, Minnesota Univ., Minneapolis, MN, USA|c|;;;
10.1109/VISUAL.2001.964508;10.1109/VISUAL.2003.1250351
visualization, virtual reality, user interfaces, projectors, volume rendering, curved sections
Vis
2005
A shader-based parallel rendering framework
10.1109/VISUAL.2005.1532787
1. 134
C
Existing parallel or remote rendering solutions rely on communicating pixels, OpenGL commands, scene-graph changes or application-specific data. We propose an intermediate solution based on a set of independent graphics primitives that use hardware shaders to specify their visual appearance. Compared to an OpenGL based approach, it reduces the complexity of the model by eliminating most fixed function parameters while giving access to the latest functionalities of graphics cards. It also suppresses the OpenGL state machine that creates data dependencies making primitive re-scheduling difficult. Using a retained-mode communication protocol transmitting changes between each frame, combined with the possibility to use shaders to implement interactive data processing operations instead of sending final colors and geometry, we are able to optimize the network load. High level information such as bounding volumes is used to setup advanced schemes where primitives are issued in parallel, routed according to their visibility, merged and re-ordered when received for rendering. Different optimization algorithms can be efficiently implemented, saving network bandwidth or reducing texture switches for instance. We present performance results based on two VTK applications, a parallel iso-surface extraction and a parallel volume renderer. We compare our approach with Chromium. Results show that our approach leads to significantly better performance and scalability, while offering easy access to hardware accelerated rendering algorithms.
Allard, J.;Raffin, B.
ID-IMAG, CNRS, France|c|;
10.1109/VISUAL.1995.480821;10.1109/VISUAL.2002.1183812
Distributed Rendering, Shaders, Volume Rendering
Vis
2005
Batched multi triangulation
10.1109/VISUAL.2005.1532797
2. 214
C
The multi triangulation framework (MT) is a very general approach for managing adaptive resolution in triangle meshes. The key idea is arranging mesh fragments at different resolution in a directed acyclic graph (DAG) which encodes the dependencies between fragments, thereby encompassing a wide class of multiresolution approaches that use hierarchies or DAGs with predefined topology. On current architectures, the classic MT is however unfit for real-time rendering, since DAG traversal costs vastly dominate raw rendering costs. In this paper, we redesign the MT framework in a GPU friendly fashion, moving its granularity from triangles to precomputed optimized triangle patches. The patches can be conveniently tri-stripped and stored in secondary memory to be loaded on demand, ready to be sent to the GPU using preferential paths. In this manner, central memory only contains the DAG structure and CPU workload becomes negligible. The major contributions of this work are: a new out-of-core multiresolution framework, that, just like the MT, encompasses a wide class of multiresolution structures; a robust and elegant way to build a well conditioned MT DAG by introducing the concept of V-partitions, that can encompass various state of the art multiresolution algorithms; an efficient multithreaded rendering engine and a general subsystem for the external memory processing and simplification of huge meshes.
Cignoni, P.;Ganovelli, F.;Gobbetti, E.;Marton, F.;Ponchio, F.;Scopigno, R.
ISTI, CNR, Italy|c|;;;;;
10.1109/VISUAL.1997.663860;10.1109/VISUAL.2002.1183783;10.1109/VISUAL.1998.745282;10.1109/VISUAL.1996.567600;10.1109/VISUAL.2002.1183796;10.1109/VISUAL.2004.86
Vis
2005
Build-by-number: rearranging the real world to visualize novel architectural spaces
10.1109/VISUAL.2005.1532789
1. 150
C
We present build-by-number, a technique for quickly designing architectural structures that can be rendered photorealistically at interactive rates. We combine image-based capturing and rendering with procedural modeling techniques to allow the creation of novel structures in the style of real-world structures. Starting with a simple model recovered from a sparse image set, the model is divided into feature regions, such as doorways, windows, and brick. These feature regions essentially comprise a mapping from model space to image space, and can be recombined to texture a novel model. Procedural rules for the growth and reorganization of the model are automatically derived to allow for very fast editing and design. Further, the redundancies marked by the feature labeling can be used to perform automatic occlusion replacement and color equalization in the finished scene, which is rendered using view-dependent texture mapping on standard graphics hardware. Results using four captured scenes show that a great variety of novel structures can be created very quickly once a captured scene is available, and rendered with a degree of realism comparable to the original scene.
Bekins, D.;Aliaga, D.
Dept. of Comput. Sci., Purdue Univ., West Lafayette, IN, USA|c|;
Vis
2005
COTS cluster-based sort-last rendering: performance evaluation and pipelined implementation
10.1109/VISUAL.2005.1532785
1. 118
C
Sort-last parallel rendering is an efficient technique to visualize huge datasets on COTS clusters. The dataset is subdivided and distributed across the cluster nodes. For every frame, each node renders a full resolution image of its data using its local GPU, and the images are composited together using a parallel image compositing algorithm. In this paper, we present a performance evaluation of standard sort-last parallel rendering methods and of the different improvements proposed in the literature. This evaluation is based on a detailed analysis of the different hardware and software components. We present a new implementation of sort-last rendering that fully overlaps CPU(s), GPU and network usage all along the algorithm. We present experiments on a 3-year-old 32-node PC cluster and on a 1.5-year-old 5-node PC cluster, both with Gigabit interconnect, showing volume rendering at 13 and 31 frames per second, respectively, and polygon rendering at 8 and 17 frames per second, respectively, on a 1024 x 768 render area, and we show that our implementation outperforms or equals many other implementations and specialized visualization clusters.
Cavin, X.;Mion, C.;Filbois, A.
;;
cluster-based visualization, sort-last rendering, parallel image compositing
Vis
2005
Curve-skeleton applications
10.1109/VISUAL.2005.1532783
9. 102
C
Curve-skeletons are a 1D subset of the medial surface of a 3D object and are useful for many visualization tasks including virtual navigation, reduced-model formulation, visualization improvement, mesh repair, animation, etc. There are many algorithms in the literature describing extraction methodologies for different applications; however, it is unclear how general and robust they are. In this paper, we provide an overview of many curve-skeleton applications and compile a set of desired properties of such representations. We also give a taxonomy of methods and analyze the advantages and drawbacks of each class of algorithms.
Cornea, N.D.;Silver, D.;Min, P.
Rutgers Univ., NJ, USA|c|;;
10.1109/VISUAL.2004.34;10.1109/VISUAL.2004.104;10.1109/VISUAL.2002.1183754;10.1109/VISUAL.1994.346327;10.1109/VISUAL.1999.809912;10.1109/VISUAL.2003.1250353;10.1109/VISUAL.2001.964517
skeleton, curve-skeleton
Vis
2005
Dataset traversal with motion-controlled transfer functions
10.1109/VISUAL.2005.1532817
3. 366
C
In this paper, we describe a methodology and implementation for interactive dataset traversal using motion-controlled transfer functions. Dataset traversal here refers to the process of translating a transfer function along a specific path. In scientific visualization, it is often necessary to manipulate transfer functions in order to visualize datasets more effectively. This manipulation of transfer functions is usually performed globally, i.e., a new transfer function is applied to the entire dataset. Our approach allows one to locally manipulate transfer functions while controlling their movement along a traversal path. The method we propose allows the user to select a traversal path within the dataset, based on the shape of the volumetric model, and to manipulate a transfer function along this path. Examples of dataset traversal include the animation of transfer functions along a pre-defined path, the simulation of flow in vascular structures, and the visualization of convoluted shapes. For example, this type of traversal is often used in medical illustration to highlight flow in blood vessels. We present an interactive implementation of our method using graphics hardware, based on the decomposition of the volume. We show examples of our approach using a variety of volumetric datasets, and we also demonstrate that with our novel decomposition, the rendering process is faster.
Correa, C.;Silver, D.
Dept. of Electr. & Comput. Eng., State Univ. of New Jersey, Newark, NJ, USA|c|;
10.1109/VISUAL.2003.1250388;10.1109/VISUAL.2002.1183820;10.1109/VISUAL.2002.1183777;10.1109/VISUAL.2003.1250386;10.1109/VISUAL.2004.48;10.1109/VISUAL.2001.964517
Dataset traversal, illustrative visualization, volume manipulation, animation, transfer functions
Vis
2005
Differential protein expression analysis via liquid-chromatography/mass-spectrometry data visualization
10.1109/VISUAL.2005.1532828
4. 454
C
Differential protein expression analysis is one of the main challenges in proteomics. It denotes the search for proteins, whose encoding genes are differentially expressed under a given experimental setup. An important task in this context is to identify the differentially expressed proteins or, more generally, all proteins present in the sample. One of the most promising and recently widely used approaches for protein identification is to cleave proteins into peptides, separate the peptides using liquid chromatography, and determine the masses of the separated peptides using mass spectrometry. The resulting data needs to be analyzed and matched against protein sequence databases. The analysis step is typically done by searching for intensity peaks in a large number of 2D graphs. We present an interactive visualization tool for the exploration of liquid-chromatography/mass-spectrometry data in a 3D space, which allows for the understanding of the data in its entirety and a detailed analysis of regions of interest. We compute differential expression over the liquid-chromatography/mass-spectrometry domain and embed it visually in our system. Our exploration tool can treat single liquid-chromatography/mass-spectrometry data sets as well as data acquired using multi-dimensional protein identification technology. For efficiency purposes we perform a peak-preserving data resampling and multiresolution hierarchy generation prior to visualization.
Linsen, L.;Locherbach, J.;Berth, M.;Bernhardt, J.;Becher, D.
Dept. of Math. & Comput. Sci., Ernst-Moritz-Arndt-Univ., Greifswald, Germany|c|;;;;
10.1109/VISUAL.1997.663907
interactive visual exploration, hierarchical data representation, visualization in bioinformatics, proteomics
Vis
2005
Distributed data management for large volume visualization
10.1109/VISUAL.2005.1532794
1. 189
C
We propose a distributed data management scheme for large data visualization that emphasizes efficient data sharing and access. To minimize data access time and support users with a variety of local computing capabilities, we introduce an adaptive data selection method based on an "enhanced time-space partitioning" (ETSP) tree that assists with effective visibility culling, as well as multiresolution data selection. By traversing the tree, our data management algorithm can quickly identify the visible regions of data, and, for each region, adaptively choose the lowest resolution satisfying user-specified error tolerances. Only necessary data elements are accessed and sent to the visualization pipeline. To further address the issue of sharing large-scale data among geographically distributed collaborative teams, we have designed an infrastructure for integrating our data management technique with a distributed data storage system provided by logistical networking (LoN). Data sets at different resolutions are generated and uploaded to LoN for wide-area access. We describe a parallel volume rendering system that verifies the effectiveness of our data storage, selection and access scheme.
Gao, J.;Huang, J.;Johnson, C.R.;Atchley, S.
Oak Ridge Nat. Lab., TN, USA|c|;;;
10.1109/VISUAL.2002.1183758;10.1109/VISUAL.2002.1183757;10.1109/VISUAL.1999.809910;10.1109/VISUAL.1998.745300;10.1109/VISUAL.2004.110;10.1109/VISUAL.2004.112;10.1109/VISUAL.1999.809879
large data visualization, distributed storage, logistical networking, visibility culling, volume rendering, multiresolution rendering
Vis
2005
Effectively visualizing large networks through sampling
10.1109/VISUAL.2005.1532819
3. 382
C
We study the problem of visualizing large networks and develop techniques for effectively abstracting a network and reducing the size to a level that can be clearly viewed. Our size reduction techniques are based on sampling, where only a sample instead of the full network is visualized. We propose a randomized notion of "focus" that specifies a part of the network and the degree to which it needs to be magnified. Visualizing a sample allows our method to overcome the scalability issues inherent in visualizing massive networks. We report some characteristics that frequently occur in large networks and the conditions under which they are preserved when sampling from a network. This can be useful in selecting a proper sampling scheme that yields a sample with similar characteristics as the original network. Our method is built on top of a relational database, thus it can be easily and efficiently implemented using any off-the-shelf database software. As a proof of concept, we implement our methods and report some of our experiments over the movie database and the connectivity graph of the Web.
Rafiei, D.
Dept. of Comput. Sci., Alberta Univ., Edmonton, Alta., Canada|c|
10.1109/INFVIS.2001.963282;10.1109/INFVIS.2004.66;10.1109/INFVIS.2002.1173148;10.1109/INFVIS.2003.1249011
visualizing the Web, large network visualization, network sampling
Vis
2005
Evaluation of fiber clustering methods for diffusion tensor imaging
10.1109/VISUAL.2005.1532779
6. 72
C
Fiber tracking is a standard approach for the visualization of the results of diffusion tensor imaging (DTI). If fibers are reconstructed and visualized individually through the complete white matter, the display easily gets cluttered, making it difficult to gain insight into the data. Various clustering techniques have been proposed to automatically obtain bundles that should represent anatomical structures, but it is unclear which clustering methods and parameter settings give the best results. We propose a framework to validate clustering methods for white-matter fibers. Clusters are compared with a manual classification, which is used as ground truth. For the quantitative evaluation of the methods, we developed a new measure to assess the difference between the ground truth and the clusterings. The measure was validated and calibrated by presenting different clusterings to physicians and asking them for their judgement. We found that the values of our new measure for different clusterings match well with the opinions of physicians. Using this framework, we have evaluated different clustering algorithms, including shared nearest neighbor clustering, which has not been used before for this purpose. We found that hierarchical clustering with single-link and a fiber similarity measure based on the mean distance between fibers gave the best results.
Moberts, B.;Vilanova, A.;van Wijk, J.J.
Dept. of Math. & Comput. Sci., Technische Univ. Eindhoven, Netherlands|c|;;
10.1109/VISUAL.2001.964549
Diffusion Tensor Imaging, Fiber tracking, Clustering, Clustering Validation, External Indices
Vis
2005
Evolutionary morphing
10.1109/VISUAL.2005.1532826
4. 438
C
We introduce a technique to visualize the gradual evolutionary change of the shapes of living things as a morph between known three-dimensional shapes. Given geometric computer models of anatomical shapes for some collection of specimens - here the skulls of some of the extant members of a family of monkeys - an evolutionary tree for the group implies a hypothesis about the way in which the shape changed through time. We use a statistical model which expresses the value of some continuous variable at an internal point in the tree as a weighted average of the values at the leaves. The framework of geometric morphometrics can then be used to define a shape-space, based on the correspondences of landmark points on the surfaces, within which these weighted averages can be realized as actual surfaces. Our software provides tools for performing and visualizing such an analysis in three dimensions. Beginning with laser range scans of crania, we use our landmark editor to interactively place landmark points on the surface. We use these to compute a "tree-morph" that smoothly interpolates the shapes across the tree. Each intermediate shape in the morph is a linear combination of all of the input surfaces. We create a surface model for an intermediate shape by warping all the input meshes towards the correct shape and then blending them together. To do the blending, we compute a weighted average of their associated trivariate distance functions and then extract a surface from the resulting function. We implement this idea using the squared distance function, rather than the usual signed distance function, in a novel way.
Wiley, D.F.;Amenta, N.;Alcantara, D.A.;Ghosh, D.;Kil, Y.J.;Delson, E.;Harcourt-Smith, W.;Rohlf, F.J.;St John, K.;Hamann, B.
Dept. of Comput. Sci., California Univ., Davis, CA, USA|c|;;;;;;;;;
morphometrics, morphing, surface blending, merging, warping, distance fields, extremal surface
Vis
2005
Example-based volume illustrations
10.1109/VISUAL.2005.1532854
6. 662
C
Scientific illustrations use accepted conventions and methodologies to effectively convey object properties and improve our understanding. We present a method to illustrate volume datasets by emulating example illustrations. As with technical illustrations, our volume illustrations more clearly delineate objects, enrich details, and artistically visualize volume datasets. For both color and scalar 3D volumes, we have developed an automatic color transfer method based on the clustering and similarities in the example illustrations and volume sources. As an extension to 2D Wang tiles, we provide a new, general texture synthesis method for Wang cubes that solves the edge discontinuity problem. We have developed a 2D illustrative slice viewer and a GPU-based direct volume rendering system that uses these non-periodic 3D textures to generate illustrative results similar to the 2D examples. Both applications simulate scientific illustrations to provide more information than the original data and visualize objects more effectively, while only requiring simple user interaction.
Lu, A.;Ebert, D.S.
Purdue Univ., West Lafayette, IN, USA|c|;
10.1109/VISUAL.2004.35;10.1109/VISUAL.2000.885694;10.1109/VISUAL.2003.1250386;10.1109/VISUAL.1999.809905
Volume Illustration, Example-based Rendering, Wang Cubes, Texture Synthesis, Color Transfer
Vis
2005
Exploiting frame-to-frame coherence for accelerating high-quality volume raycasting on graphics hardware
10.1109/VISUAL.2005.1532799
2. 230
C
GPU-based raycasting offers an interesting alternative to conventional slice-based volume rendering due to the inherent flexibility and the high quality of the generated images. Recent advances in graphics hardware allow for the ray traversal and volume sampling to be executed on a per-fragment level completely on the GPU leading to interactive framerates. In this work we present optimization techniques that improve the performance and quality of GPU-based volume raycasting. We apply a hybrid image/object space approach to accelerate the ray traversal in animation sequences that works for both isosurface rendering and semi-transparent volume rendering. An empty-space-leaping technique that exploits the spatial coherence between consecutively rendered images is used to estimate the optimal initial ray sampling point for each image pixel. These optimizations can double the rendering performance for typical volumetric data sets without sacrificing image quality. The achieved speed-up allows for further improvements of image quality. We demonstrate an object space antialiasing technique based on selective super-sampling at sharp creases and silhouette edges which also benefits from exploiting frame-to-frame coherence.
Klein, T.;Strengert, M.;Stegmaier, S.;Ertl, T.
Inst. for Visualization & Interactive Syst., Stuttgart Univ., Germany|c|;;;
10.1109/VISUAL.2002.1183764;10.1109/VISUAL.1993.398852;10.1109/VISUAL.2001.964521;10.1109/VISUAL.2003.1250388;10.1109/VISUAL.2002.1183775;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.2002.1183776;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2004.63
Volume Raycasting, Programmable Graphics Hardware, Frame-to-Frame Coherence, Space Leaping
Vis
2005
Exploring 2D tensor fields using stress nets
10.1109/VISUAL.2005.1532771
1. 18
C
In this article we describe stress nets, a technique for exploring 2D tensor fields. Our method allows a user to examine simultaneously the tensors' eigenvectors (both major and minor) as well as scalar-valued tensor invariants. By avoiding noise-advection techniques, we are able to display both principal directions of the tensor field as well as the derived scalars without cluttering the display. We present a CPU-only implementation of stress nets as well as a hybrid CPU/GPU approach and discuss the relative strengths and weaknesses of each. Stress nets have been used as part of an investigation into crack propagation. They were used to display the directions of maximum shear in a slab of material under tension as well as the magnitude of the shear forces acting on each point. Our methods allowed users to find new features in the data that were not visible on standard plots of tensor invariants. These features disagree with commonly accepted analytical crack propagation solutions and have sparked renewed investigation. Though developed for a materials mechanics problem, our method applies equally well to any 2D tensor field having unique characteristic directions.
Wilson, A.;Brannon, R.
Sandia Nat. Labs., Albuquerque, NM, USA|c|;
10.1109/VISUAL.1998.745316;10.1109/VISUAL.1992.235193;10.1109/VISUAL.2002.1183799;10.1109/VISUAL.1994.346326;10.1109/VISUAL.1999.809894;10.1109/VISUAL.2000.885690;10.1109/VISUAL.1993.398849
tensor field, stress tensor, streamlines, controlled density streamlines, crack propagation
Vis
2005
Extracting higher order critical points and topological simplification of 3D vector fields
10.1109/VISUAL.2005.1532842
5. 566
C
This paper presents an approach to extracting and classifying higher order critical points of 3D vector fields. To do so, we place a closed convex surface S around the area of interest. Then we show that the complete 3D classification of a critical point into areas of different flow behavior is equivalent to extracting the topological skeleton of an appropriate 2D vector field on S, if each critical point is equipped with an additional bit of information. Out of this skeleton, we create an icon which replaces the complete topological structure inside S for the visualization. We apply our method to find a simplified visual representation of clusters of critical points, leading to expressive visualizations of topologically complex 3D vector fields.
Weinkauf, T.;Theisel, H.;Kuangyu Shi;Hege, H.-C.;Seidel, H.-P.
ZIB, Berlin, Germany|c|;;;;
10.1109/VISUAL.1999.809907;10.1109/VISUAL.2002.1183786;10.1109/VISUAL.2000.885714;10.1109/VISUAL.1991.175773;10.1109/VISUAL.2000.885716;10.1109/VISUAL.2001.964507;10.1109/VISUAL.2003.1250376
Vis
2005
Extraction of parallel vector surfaces in 3D time-dependent fields and application to vortex core line tracking
10.1109/VISUAL.2005.1532851
6. 638
C
We introduce an approach to tracking vortex core lines in time-dependent 3D flow fields which are defined by the parallel vectors approach. They build surface structures in the 4D space-time domain. To extract them, we introduce two 4D vector fields which act as feature flow fields, i.e., their integration gives the vortex core structures. As part of this approach, we extract and classify local bifurcations of vortex core lines in space-time. Based on a 4D stream surface integration, we provide an algorithm to extract the complete vortex core structure. We apply our technique to a number of test data sets.
Theisel, H.;Sahner, J.;Weinkauf, T.;Hege, H.-C.;Seidel, H.-P.
MPI Saarbrucken, Germany|c|;;;;
10.1109/VISUAL.2004.99;10.1109/VISUAL.1994.346327;10.1109/VISUAL.1999.809896;10.1109/VISUAL.1992.235211;10.1109/VISUAL.1993.398875;10.1109/VISUAL.2001.964506;10.1109/VISUAL.1998.745290;10.1109/VISUAL.1998.745296
flow visualization, vortex core lines, bifurcations
Vis
2005
Eyegaze analysis of displays with combined 2D and 3D views
10.1109/VISUAL.2005.1532837
5. 526
C
Displays combining both 2D and 3D views have been shown to support higher performance on certain visualization tasks. However, it is not clear how best to arrange a combination of 2D and 3D views spatially in a display. In this study, we analyzed the eyegaze strategies of participants using two arrangements of 2D and 3D views to estimate the relative position of objects in a 3D scene. Our results show that the 3D view was used significantly more often than individual 2D views in both displays, indicating the importance of the 3D view for successful task completion. However, viewing patterns were significantly different between the two displays: transitions through centrally-placed views were always more frequent, and users avoided saccades between views that were far apart. Although the change in viewing strategy did not result in significant performance differences, error analysis indicates that a 3D overview in the center may reduce the number of serious errors compared to a 3D overview placed off to the side.
Tory, M.;Atkins, M.S.;Kirkpatrick, A.E.;Nicolaou, M.;Guang-Zhong Yang
Dept. of Comput. Sci., British Columbia Univ., Vancouver, BC, Canada|c|;;;;
10.1109/VISUAL.2003.1250396;10.1109/VISUAL.1997.663914
visualization, 2D/3D combination display, user study, experiment, eyegaze analysis