IEEE VIS Publication Dataset

VAST
2006
Visual Analysis of Historic Hotel Visitation Patterns
10.1109/VAST.2006.261428
3. 42
C
Understanding the space and time characteristics of human interaction in complex social networks is a critical component of visual tools for intelligence analysis, consumer behavior analysis, and human geography. Visual identification and comparison of patterns of recurring events is an essential feature of such tools. In this paper, we describe a tool for exploring hotel visitation patterns in and around Rebersburg, Pennsylvania from 1898-1900. The tool uses a wrapping spreadsheet technique, called reruns, to display cyclic patterns of geographic events in multiple overlapping natural and artificial calendars. Implemented as an Improvise visualization, the tool is in active development through an iterative process of data collection, hypothesis, design, discovery, and evaluation in close collaboration with historical geographers. Several discoveries have inspired ongoing data collection and plans to expand exploration to include historic weather records and railroad schedules. Distributed online evaluations of usability and usefulness have resulted in numerous feature and design recommendations.
Weaver, C.;Fyfe, D.;Robinson, A.;Holdsworth, D.;Peuquet, D.;MacEachren, A.M.
Dept. of Geogr., Pennsylvania State Univ.|c|;;;;;
10.1109/INFVIS.2004.12;10.1109/INFVIS.2002.1173155;10.1109/INFVIS.2004.64
Geovisualization, exploratory visualization, historical geography, coordinated multiple views, travel pattern analysis
VAST
2006
Visual Analytics of Paleoceanographic Conditions
10.1109/VAST.2006.261452
1. 26
C
Decade-scale oceanic phenomena like El Nino are correlated with weather anomalies all over the globe. Only by understanding the events that produced the climatic conditions in the past will it be possible to forecast abrupt climate changes and prevent disastrous consequences for human beings and their environment. Paleoceanography research is a collaborative effort that requires the analysis of paleo time-series, which are obtained from a number of independent techniques and instruments and produced by a variety of different researchers and/or laboratories. The complexity of these phenomena, which involve massive, dynamic, and often conflicting data, can only be faced by means of analytical reasoning supported by a highly interactive visual interface. This paper presents an interactive visual analysis environment for paleoceanography that allows researchers to gain insight into the paleodata and to control and steer the analytical methods involved in reconstructing the climatic conditions of the past.
Theron, R.
Departamento de Informatica y Automatica, Univ. de Salamanca|c|
Infovis, parallel coordinates, multiple linked views, exploratory analysis
VAST
2006
Visual Exploration of Spatio-temporal Relationships for Scientific Data
10.1109/VAST.2006.261451
1. 18
C
Spatio-temporal relationships among features extracted from temporally-varying scientific datasets can provide useful information about the evolution of an individual feature and its interactions with other features. However, extracting such useful relationships without user guidance is a cumbersome and often error-prone process. In this paper, we present a visual analysis system that interactively discovers such relationships from the trajectories of derived features. We describe analysis algorithms to derive various spatial and spatio-temporal relationships. We present a visual interface with which the user can interactively select spatial and temporal extents to guide the knowledge discovery process. We show the usefulness of our proposed algorithms on datasets originating from computational fluid dynamics. We also demonstrate how the derived relationships can help in explaining the occurrence of critical events like the merging and bifurcation of vortices.
Mehta, S.;Parthasarathy, S.;Machiraju, R.
Comput. Sci. & Eng., Ohio State Univ., Columbus, OH|c|;;
10.1109/VISUAL.2002.1183789
Knowledge Discovery, Scientific Analytics, Trajectory Analysis, Feature Extraction, Spatio-temporal Predicates, Visual Analytics
VAST
2006
Visualizing the Performance of Computational Linguistics Algorithms
10.1109/VAST.2006.261417
1. 157
C
We have built a visualization system and analysis portal for evaluating the performance of computational linguistics algorithms. Our system focuses on algorithms that classify and cluster documents by assigning weights to words and scoring each document against high-dimensional reference concept vectors. The visualization and algorithm analysis techniques include confusion matrices, ROC curves, document visualizations showing word importance, and interactive reports. One of the unique aspects of our system is that the visualizations are thin-client, Web-based components built using SVG.
Eick, S.G.;Mauger, J.;Ratner, A.
SSS Res., Inc., Naperville, IL|c|;;
AJAX, thin-client, SVG, ROC curves, confusion matrices, document categorization
Vis
2006
A Generic and Scalable Pipeline for GPU Tetrahedral Grid Rendering
10.1109/TVCG.2006.110
1. 1352
J
Recent advances in algorithms and graphics hardware have opened the possibility to render tetrahedral grids at interactive rates on commodity PCs. This paper extends this work by presenting a direct volume rendering method for such grids which supports both current and upcoming graphics hardware architectures, large and deformable grids, as well as different rendering options. At the core of our method is the idea to perform the sampling of tetrahedral elements along the view rays entirely in local barycentric coordinates. Sampling then requires minimal GPU memory and texture access operations, and it maps efficiently onto a feed-forward pipeline of multiple stages performing computation and geometry construction. We propose to spawn rendered elements from one single vertex. This makes the method amenable to upcoming Direct3D 10 graphics hardware, which allows geometry to be created on the GPU. With only slight modifications, the algorithm can be used to render per-pixel iso-surfaces and to perform tetrahedral cell projection. As our method requires neither pre-processing nor an intermediate grid representation, it can efficiently deal with dynamic and large 3D meshes.
Georgii, J.;Westermann, R.
Comput. Graphics & Visualization Group, Technische Univ. Munchen|c|;
10.1109/VISUAL.2003.1250390;10.1109/VISUAL.1997.663853;10.1109/VISUAL.2000.885683;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2001.964512;10.1109/VISUAL.1996.567606
Direct volume rendering, unstructured grids, programmable graphics hardware
Vis
2006
A Novel Visualization Model for Web Search Results
10.1109/TVCG.2006.111
9. 988
J
This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added to, or deleted from the visual space.
Nguyen, T.;Zhang, J.
Dept. of Electr. & Comput. Eng., Iowa State Univ., Ames, IA|c|;
10.1109/INFVIS.1995.528691;10.1109/INFVIS.2001.963287;10.1109/INFVIS.1995.528692;10.1109/INFVIS.1998.729553;10.1109/INFVIS.1999.801864;10.1109/INFVIS.2000.885099
Visualization model, Web search results, movement, speed
Vis
2006
A Pipeline for Computer Aided Polyp Detection
10.1109/TVCG.2006.112
8. 868
J
We present a novel pipeline for computer-aided detection (CAD) of colonic polyps by integrating texture and shape analysis with volume rendering and conformal colon flattening. Using our automatic method, the 3D polyp detection problem is converted into a 2D pattern recognition problem. The colon surface is first segmented and extracted from the CT data set of the patient's abdomen and then mapped to a 2D rectangle using conformal mapping. This flattened image is rendered using a direct volume rendering technique with a translucent electronic biopsy transfer function. The polyps are detected by a 2D clustering method on the flattened image. The false positives are further reduced by analyzing volumetric shape and texture features. Compared with shape-based methods, our method is much more efficient because it does not need to compute curvature and other shape parameters for the whole colon surface. The final detection results are stored in the 2D image, which can be easily incorporated into a virtual colonoscopy (VC) system to highlight the polyp locations. The extracted colon surface mesh can be used to accelerate the volumetric ray casting algorithm used to generate the VC endoscopic view. The proposed automatic CAD pipeline is incorporated into an interactive VC system, with the goal of helping radiologists detect polyps faster and with higher accuracy.
Hong, W.;Qiu, F.;Kaufman, A.
Dept. of Comput. Sci., Stony Brook Univ., NY|c|;;
10.1109/VISUAL.2001.964540;10.1109/VISUAL.2004.27;10.1109/VISUAL.1992.235231;10.1109/VISUAL.2003.1250384
Computer Aided Detection, Virtual Colonoscopy, Texture Analysis, Volume Rendering
Vis
2006
A Spectral Analysis of Function Composition and its Implications for Sampling in Direct Volume Visualization
10.1109/TVCG.2006.113
1. 1360
J
In this paper we investigate the effects of function composition in the form g(f(x)) = h(x) by means of a spectral analysis of h. We decompose the spectral description of h(x) into a scalar product of the spectral description of g(x) and a term that solely depends on f(x) and that is independent of g(x). We then use the method of stationary phase to derive the essential maximum frequency of g(f(x)) bounding the main portion of the energy of its spectrum. This limit is the product of the maximum frequency of g(x) and the maximum derivative of f(x). This leads to a proper sampling of the composition h of the two functions g and f. We apply our theoretical results to a fundamental open problem in volume rendering: the proper sampling of the rendering integral after the application of a transfer function. In particular, we demonstrate how the sampling criterion can be incorporated in adaptive ray integration, visualization with multi-dimensional transfer functions, and pre-integrated volume rendering.
Bergner, S.;Moller, T.;Weiskopf, D.;Muraki, D.J.
GrUVi-Lab, Simon Fraser Univ., Burnaby, BC|c|;;;
10.1109/VISUAL.2003.1250388;10.1109/VISUAL.2005.1532812;10.1109/VISUAL.2003.1250412;10.1109/VISUAL.1993.398852;10.1109/VISUAL.2000.885683;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.1999.809908
volume rendering, transfer function, signal processing, Fourier transform, adaptive sampling
Vis
2006
A Trajectory-Preserving Synchronization Method for Collaborative Visualization
10.1109/TVCG.2006.114
9. 996
J
In the past decade, a great deal of research has been conducted to support collaborative visualization among remote users over networks, allowing them to visualize and manipulate shared data for problem solving. There are many applications of collaborative visualization, such as oceanography, meteorology, and medical science. To facilitate user interaction, a critical system requirement for collaborative visualization is to ensure that remote users perceive a synchronized view of the shared data. Failing this requirement, the users' ability to perform the desired collaborative tasks would be affected. In this paper, we propose a synchronization method to support collaborative visualization. It considers how interaction with dynamic objects is perceived by application participants in the presence of network latency, and remedies the motion trajectory of the dynamic objects. It also handles the false positive and false negative collision detection problems. The new method is particularly well designed for handling content changes due to unpredictable user interventions or object collisions. We demonstrate the effectiveness of our method through a number of experiments.
Li, L.W.F.;Li, F.W.B.;Lau, R.W.H.
Dept. of Comput. Sci., City Univ. of Hong Kong|c|;;
10.1109/VISUAL.1997.663890;10.1109/VISUAL.1997.663896
Collaborative visualization, network latency, motion synchronization, distributed synchronization
Vis
2006
Ambient Occlusion and Edge Cueing for Enhancing Real Time Molecular Visualization
10.1109/TVCG.2006.115
1. 1244
J
The paper presents a set of combined techniques to enhance the real-time visualization of simple or complex molecules (up to the order of 10^6 atoms) in space-fill mode. The proposed approach includes an innovative technique for efficient computation and storage of ambient occlusion terms, a small set of GPU-accelerated procedural impostors for space-fill and ball-and-stick rendering, and novel edge-cueing techniques. As a result, the user's understanding of the three-dimensional structure under inspection is strongly increased (even for still images), while the rendering still occurs in real time.
Tarini, M.;Cignoni, P.;Montani, C.
Universita dell'Insubria, Varese|c|;;
10.1109/VISUAL.2000.885694;10.1109/VISUAL.2003.1250394
Vis
2006
An Advanced Evenly-Spaced Streamline Placement Algorithm
10.1109/TVCG.2006.116
9. 972
J
This paper presents an advanced evenly-spaced streamline placement algorithm for fast, high-quality, and robust layout of flow lines. A fourth-order Runge-Kutta integrator with adaptive step size and error control is employed for rapid, accurate streamline advection. Cubic Hermite polynomial interpolation with large sample spacing is adopted to create fewer evenly-spaced samples along each streamline to reduce the amount of distance checking. We propose two methods to enhance placement quality. Double queues are used to prioritize topological seeding and to favor long streamlines to minimize discontinuities. Adaptive distance control based on the local flow variance is explored to reduce cavities. Furthermore, we propose a universal, effective, fast, and robust loop detection strategy to address closed and spiraling streamlines. Our algorithm is an order of magnitude faster than Jobard and Lefer's algorithm with better placement quality, and over 5 times faster than Mebarki et al.'s algorithm with comparable placement quality but a more robust solution to loop detection.
Liu, Z.;Moorhead, R.J.;Groner, J.
HPC, Mississippi State Univ., MS|c|;;
10.1109/VISUAL.2000.885690;10.1109/VISUAL.1998.745295;10.1109/VISUAL.2005.1532831;10.1109/VISUAL.2005.1532832
Flow visualization, evenly-spaced streamlines, streamline placement, seeding strategy, closed streamlines
Vis
2006
An Atmospheric Visual Analysis and Exploration System
10.1109/TVCG.2006.117
1. 1164
J
Meteorological research involves the analysis of multi-field, multi-scale, and multi-source data sets. Unfortunately, traditional atmospheric visualization systems only provide tools to view a limited number of variables and small segments of the data. These tools are often restricted to 2D contour or vector plots or 3D isosurfaces. The meteorologist must mentally synthesize the data from multiple plots to glean the information needed to produce a coherent picture of the weather phenomenon of interest. In order to provide better tools to meteorologists and reduce system limitations, we have designed an integrated atmospheric visual analysis and exploration system for interactive analysis of weather data sets. Our system allows for the integrated visualization of 1D, 2D, and 3D atmospheric data sets in common meteorological grid structures and utilizes a variety of rendering techniques, ranging from physics-based atmospheric rendering to illustrative rendering containing particles and glyphs. These tools provide meteorologists with new abilities to analyze their data and answer questions about regions of interest. In this paper, we discuss the use and performance of our visual analysis system for two important meteorological applications. The first application is warm rain formation in small cumulus clouds, where our three-dimensional, interactive visualization of modeled drop trajectories within spatially correlated fields from a cloud simulation has provided researchers with new insight. Our second application is improving and validating severe storm models, specifically the weather research and forecasting (WRF) model. This is done through correlative visualization of WRF model and experimental Doppler storm data.
Song, Y.;Ye, J.;Svakhine, N.;Lasher-Trapp, S.;Baldwin, M.;Ebert, D.S.
Purdue Univ., West Lafayette, IN|c|;;;;;
10.1109/VISUAL.2000.885745;10.1109/VISUAL.2003.1250390;10.1109/VISUAL.1998.745330;10.1109/VISUAL.1992.235215;10.1109/VISUAL.1996.568113;10.1109/VISUAL.2003.1250383;10.1109/VISUAL.1990.146361
weather visualization, grid structures, transfer function, volume rendering, volume visualization, glyph rendering, warm rain entrainment process
Vis
2006
Analyzing Complex FTMS Simulations: a Case Study in High-Level Visualization of Ion Motions
10.1109/TVCG.2006.118
1. 1044
J
Current practice in particle visualization renders particle position data directly onto the screen as points or glyphs. Using a camera placed at a fixed position, particle motions can be visualized by rendering trajectories or by animations. Applying such direct techniques to large, time-dependent particle data sets often results in cluttered images in which the dynamic properties of the underlying system are difficult to interpret. In this case study we take an alternative approach to the visualization of ion motions. Instead of rendering ion position data directly, we first extract meaningful motion information from the ion position data and then map this information onto geometric primitives. Our goal is to produce high-level visualizations that reflect the physicists' way of thinking about ion dynamics. Parameterized geometric icons are defined to encode motion information of clusters of related ions. In addition, a parameterized camera control mechanism is used to analyze relative instead of only absolute ion motions. We apply the techniques to simulations of Fourier transform mass spectrometry (FTMS) experiments. The data produced by such simulations can amount to 5x10^4 ions and 10^5 timesteps. This paper discusses the requirements, design, and informal evaluation of the implemented system.
Burakiewicz, W.;van Liere, R.
;
10.1109/VISUAL.2001.964552;10.1109/VISUAL.2004.121;10.1109/VISUAL.2000.885733;10.1109/VISUAL.2000.885734
Particle visualization, motion, motion features
Vis
2006
Analyzing Vortex Breakdown Flow Structures by Assignment of Colors to Tensor Invariants
10.1109/TVCG.2006.119
1. 1196
J
Topological methods are often used to describe flow structures in fluid dynamics, and topological flow field analysis usually relies on the invariants of the associated tensor fields. A visual impression of the local properties of tensor fields is often complex, and the search for a suitable technique for achieving this is an ongoing topic in visualization. This paper introduces and assesses a method of representing the topological properties of tensor fields and their respective flow patterns with the use of colors. First, a tensor norm is introduced, which preserves the properties of the tensor and assigns the tensor invariants to values of the RGB color space. Secondly, the RGB colors of the tensor invariants are transferred to corresponding hue values as an alternative color representation. The vectorial tensor invariants field is reduced to a scalar hue field, and visualization of iso-surfaces of this hue value field allows us to identify locations with equivalent flow topology. Additionally, highlighting by the maximum of the eigenvalue difference field reflects the magnitude of the structural change of the flow. The method is applied to a vortex breakdown flow structure inside a cylinder with a rotating lid.
Rutten, M.;Chong, M.S.
German Aerosp. Center|c|;
10.1109/VISUAL.2003.1250379;10.1109/VISUAL.1997.663858;10.1109/VISUAL.2004.99;10.1109/VISUAL.2004.80;10.1109/VISUAL.2004.113;10.1109/VISUAL.1994.346326;10.1109/VISUAL.1993.398849
Flow visualization, Tensor field Topology, Invariants
Vis
2006
Asynchronous Distributed Calibration for Scalable and Reconfigurable Multi-Projector Displays
10.1109/TVCG.2006.121
1. 1108
J
Centralized techniques have been used until now when automatically calibrating (both geometrically and photometrically) large high-resolution displays created by tiling multiple projectors in a 2D array. A centralized server managed all the projectors and also the camera(s) used to calibrate the display. In this paper, we propose an asynchronous distributed calibration methodology via a display unit called the plug-and-play projector (PPP). The PPP consists of a projector, camera, computation, and communication unit, thus creating a self-sufficient module that enables an asynchronous distributed architecture for multi-projector displays. We present a single-program-multiple-data (SPMD) calibration algorithm that runs on each PPP and achieves a truly scalable and reconfigurable display without any input from the user. It provides novel capabilities such as adding or removing PPPs from the display dynamically, detecting faults, and reshaping the display to a reasonable rectangular shape in response to such additions, removals, or faults. To the best of our knowledge, this is the first attempt to realize a completely asynchronous and distributed calibration architecture and methodology for multi-projector displays.
Bhasker, E.;Sinha, P.;Majumder, A.
Dept. of Comput. Sci., California Univ., Irvine, CA|c|;;
10.1109/VISUAL.2002.1183793;10.1109/VISUAL.2000.885685;10.1109/VISUAL.2000.885684;10.1109/VISUAL.1999.809883;10.1109/VISUAL.2001.964508
Multi-projector displays, projector-camera systems, geometric and color calibration, distributed algorithms
Vis
2006
Caricaturistic Visualization
10.1109/TVCG.2006.123
1. 1092
J
Caricatures are pieces of art depicting persons or sociological conditions in a non-veridical way. In both cases, caricatures refer to a reference model. The deviations from the reference model are the characteristic features of the depicted subject. Good caricatures exaggerate the characteristics of a subject in order to accentuate them. The concept of caricaturistic visualization is based on the caricature metaphor. The aim of caricaturistic visualization is an illustrative depiction of the characteristics of a given dataset by exaggerating deviations from the reference model. We present the general concept of caricaturistic visualization as well as a variety of examples. We investigate different visual representations for the depiction of caricatures. Further, we present the caricature matrix, a technique to make differences between datasets easily identifiable.
Rautek, P.;Viola, I.;Groller, E.
Inst. of Comput. Graphics & Algorithms, Vienna Univ. of Technol.|c|;;
10.1109/VISUAL.2005.1532856;10.1109/VISUAL.2004.48;10.1109/VISUAL.2005.1532857;10.1109/VISUAL.2005.1532835
Illustrative Visualization, Focus+Context Techniques, Volume Visualization
Vis
2006
ClearView: An Interactive Context Preserving Hotspot Visualization Technique
10.1109/TVCG.2006.124
9. 948
J
Volume-rendered imagery often includes a barrage of 3D information, such as the shape, appearance, and topology of complex structures, and thus quickly overwhelms the user. In particular, when focusing on a specific region a user cannot observe the relationships between various structures without a mental picture of the entire data set. In this paper we present ClearView, a GPU-based, interactive framework for texture-based volume ray-casting that allows users who do not have the visualization skills for this mental exercise to quickly obtain a picture of the data in a very intuitive and user-friendly way. ClearView is designed to enable the user to focus on particular areas in the data while preserving context information without visual clutter. ClearView does not require additional feature volumes, as it derives any features in the data from image information only. A simple point-and-click interface enables the user to interactively highlight structures in the data. ClearView provides an easy-to-use interface to complex volumetric data, as it only uses transparency in combination with a few specific shaders to convey focus and context information.
Kruger, J.;Schneider, J.;Westermann, R.
Comput. Graphics & Visualization Group, Technische Univ. Munchen|c|;;
10.1109/VISUAL.2000.885694;10.1109/VISUAL.2003.1250400;10.1109/VISUAL.1996.568110;10.1109/VISUAL.2002.1183762;10.1109/VISUAL.2002.1183777;10.1109/VISUAL.1999.809882;10.1109/VISUAL.2004.48;10.1109/VISUAL.2003.1250384;10.1109/VISUAL.2005.1532856;10.1109/VISUAL.2005.1532818
Focus & Context, GPU rendering, volume raycasting
Vis
2006
Comparative Visualization for Wave-based and Geometric Acoustics
10.1109/TVCG.2006.125
1. 1180
J
We present a comparative visualization of the acoustic simulation results obtained by two different approaches that were combined into a single simulation algorithm. The first method solves the wave equation on a volume grid based on finite elements. The second method, phonon tracing, is a geometric approach that we have previously developed for interactive simulation, visualization and modeling of room acoustics. Geometric approaches of this kind are more efficient than FEM in the high and medium frequency range. For low frequencies they fail to represent diffraction, which on the other hand can be simulated properly by means of FEM. When combining both methods we need to calibrate them properly and estimate in which frequency range they provide comparable results. For this purpose we use an acoustic metric called gain and display the resulting error. Furthermore, we visualize interference patterns, since these depend not only on diffraction, but also exhibit phase-dependent amplification and neutralization effects.
Deines, E.;Bertram, M.;Mohring, J.;Jegorovs, J.;Michel, F.;Hagen, H.;Nielson, G.M.
IRTG, Kaiserslautern|c|;;;;;;
10.1109/VISUAL.2005.1532790
Acoustic simulation, comparative visualization, ray tracing, finite element method, phonon map
Vis
2006
Composite Rectilinear Deformation for Stretch and Squish Navigation
10.1109/TVCG.2006.127
9. 908
J
We present the first scalable algorithm that supports the composition of successive rectilinear deformations. Earlier systems that provided stretch and squish navigation could only handle small datasets. More recent work featuring rubber sheet navigation for large datasets has focused on rendering and on application-specific issues. However, no algorithm has yet been presented for carrying out such navigation methods; our paper addresses this problem. For maximum flexibility with large datasets, a stretch and squish navigation algorithm should allow for millions of potentially deformable regions. However, typical usage only changes the extents of a small subset k of these n regions at a time. The challenge is to avoid computations that are linear in n, because a single deformation can affect the absolute screen-space location of every deformable region. We provide an O(k log n) algorithm that supports any application that can lay out a dataset on a generic grid, and show an implementation that allows navigation of trees and gene sequences with millions of items in sub-millisecond time.
Slack, J.;Munzner, T.
Dept. of Comput. Sci., British Columbia Univ., Vancouver, BC|c|;
10.1109/INFVIS.1997.636786;10.1109/INFVIS.2005.1532127;10.1109/INFVIS.2002.1173156;10.1109/VISUAL.2002.1183791
Focus+Context, information visualization, real time rendering, navigation
Vis
2006
Concurrent Visualization in a Production Supercomputing Environment
10.1109/TVCG.2006.128
9. 1004
J
We describe a concurrent visualization pipeline designed for operation in a production supercomputing environment. The facility was initially developed on the NASA Ames "Columbia" supercomputer for a massively parallel forecast model (GEOS4). During the 2005 Atlantic hurricane season, GEOS4 was run four times a day under tight time constraints so that its output could be included in an ensemble prediction that was made available to forecasters at the National Hurricane Center. Given this time-critical context, we designed a configurable concurrent pipeline to visualize multiple global fields without significantly affecting the runtime model performance or reliability. We use MPEG compression of the accruing images to facilitate live low-bandwidth distribution of multiple visualization streams to remote sites. We also describe the use of our concurrent visualization framework with a global ocean circulation model, where it provides an 864-fold increase in the temporal resolution of practically achievable animations. In both the atmospheric and oceanic circulation models, the application scientists gained new insights into their model dynamics, owing to the high temporal resolution of the attainable animations.
Ellsworth, D.;Green, B.;Henze, C.;Moran, P.J.;Sandstrom, T.
AMTI, NASA Ames Res. Center, Moffett Field, CA|c|;;;;
10.1109/VISUAL.2005.1532795
Supercomputing, concurrent visualization, interactive visual computing, time-varying data, high temporal resolution visualization, GEOS4 global climate model, hurricane visualization, ECCO, ocean modeling