IEEE VIS Publication Dataset

VAST
2009
Visual opinion analysis of customer feedback data
10.1109/VAST.2009.5333919
1. 194
C
Today, online stores collect a lot of customer feedback in the form of surveys, reviews, and comments. This feedback is categorized and in some cases responded to, but in general it is underutilized - even though customer satisfaction is essential to the success of their business. In this paper, we introduce several new techniques to interactively analyze customer comments and ratings to determine the positive and negative opinions expressed by the customers. First, we introduce a new discrimination-based technique to automatically extract the terms that are the subject of the positive or negative opinion (such as price or customer service) and that are frequently commented on. Second, we derive a Reverse-Distance-Weighting method to map the attributes to the related positive and negative opinions in the text. Third, the resulting high-dimensional feature vectors are visualized in a new summary representation that provides a quick overview. We also cluster the reviews according to the similarity of the comments. Special thumbnails are used to provide insight into the composition of the clusters and their relationship. In addition, an interactive circular correlation map is provided to allow analysts to detect the relationships of the comments to other important attributes and the scores. We have applied these techniques to customer comments from real-world online stores and product reviews from web sites to identify the strengths and problems of different products and services, and show the potential of our technique.
Oelke, D.;Ming Hao;Rohrdantz, C.;Keim, D.A.;Dayal, U.;Haug, L.;Janetzko, H.
Univ. of Konstanz, Konstanz, Germany|c|;;;;;;
Visual Opinion Analysis, Visual Sentiment Analysis, Visual Document Analysis, Attribute Extraction
VAST
2009
Visualization of uncertainty and analysis of geographical data
10.1109/VAST.2009.5333965
.
M
A team of five worked on this challenge to identify a possible criminal structure within the Flitter social network. Initially we worked on the problem individually, deliberately not sharing any data, results or conclusions. This maximised the chances of spotting any blunders, unjustified assumptions or inferences and allowed us to triangulate any common conclusions. After an agreed period we shared our results, demonstrating the visualization applications we had built and the reasoning behind our conclusions. This sharing of assumptions encouraged us to incorporate uncertainty in our visualization approaches as it became clear that there were a number of possible interpretations of the rules and assumptions governing the challenge. This summary of the work emphasises one of those applications, detailing the geographic analysis and uncertainty handling of the network data.
Wood, J.;Slingsby, A.;Khalili-Shavarini, N.;Dykes, J.;Mountain, D.
Sch. of Inf., City Univ. London, London, UK|c|;;;;
VAST
2009
Visualized subgraph search
10.1109/VAST.2009.5333968
.
M
We present a visually supported search and browsing system for network-type data, especially a novel module for subgraph search with a GUI to define subgraphs for queries. We describe how this prototype was applied to the VAST Challenge 2009 Flitter Mini Challenge.
Erdos, D.;Fekete, Z.;Lukacs, A.
Data Min. & Web Search Group, Hungarian Acad. of Sci., Budapest, Hungary|c|;;
VAST
2009
What's being said near "Martha"? Exploring name entities in literary text collections
10.1109/VAST.2009.5333248
1. 114
C
A common task in literary analysis is to study characters in a novel or collection. Automatic entity extraction, text analysis and effective user interfaces facilitate character analysis. With our interface, called POSvis, the scholar uses word clouds and self-organizing graphs to review vocabulary, to filter by part of speech, and to explore the network of characters located near characters under review. Further, visualizations show word usages within an analysis window (e.g., a book chapter), which can be compared with a reference window (e.g., the whole book). We describe the interface and report on an early case study with a humanities scholar.
Vuillemot, R.;Clement, T.;Plaisant, C.;Kumar, A.
Univ. de Lyon, Lyon, France|c|;;;
10.1109/TVCG.2008.172;10.1109/TVCG.2007.70577;10.1109/VAST.2008.4677359;10.1109/VAST.2007.4389004;10.1109/VAST.2007.4389006
Visual Analytics, Design, Experimentation, Human Factors
VAST
2009
Working memory load as a novel tool for evaluating visual analytics
10.1109/VAST.2009.5333468
2. 218
M
The current visual analytics literature highlights design and evaluation processes that are highly variable and situation dependent, which raises at least two broad challenges. First, lack of a standardized evaluation criterion leads to costly re-designs for each task and specific user community. Second, this inadequacy in criterion validation raises significant uncertainty regarding visualization outputs and their related decisions, which may be especially troubling in high consequence environments like those of the intelligence community. As an attempt to standardize the "apples and oranges" of the extant situation, we propose the creation of standardized evaluation tools using general principles of human cognition. Theoretically, visual analytics enables the user to see information in a way that should attenuate the user's memory load and increase the user's task-available cognitive resources. By using general cognitive abilities like available working memory resources as our dependent measures, we propose to develop standardized evaluative capabilities that can be generalized across contexts, tasks, and user communities.
Dornburg, C.C.;Matzen, L.E.;Bauer, T.L.;McNamara, L.A.
Sandia Nat. Labs., Albuquerque, NM, USA|c|;;;
Vis
2009
A Novel Interface for Interactive Exploration of DTI Fibers
10.1109/TVCG.2009.112
1. 1440
J
Visual exploration is essential to the visualization and analysis of densely sampled 3D DTI fibers in biological specimens, due to the high geometric, spatial, and anatomical complexity of fiber tracts. Previous methods for DTI fiber visualization use zooming, color-mapping, selection, and abstraction to deliver the characteristics of the fibers. However, these schemes mainly focus on the optimization of visualization in the 3D space, where cluttering and occlusion make grasping even a few thousand fibers difficult. This paper introduces a novel interaction method that augments the 3D visualization with a 2D representation containing a low-dimensional embedding of the DTI fibers. This embedding preserves the relationship between the fibers and removes the visual clutter that is inherent in 3D renderings of the fibers. This new interface allows the user to manipulate the DTI fibers as both 3D curves and 2D embedded points and easily compare or validate his or her results in both domains. The implementation of the framework is GPU based to achieve real-time interaction. The framework was applied to several tasks, and the results show that our method reduces the user's workload in recognizing 3D DTI fibers and permits quick and accurate DTI fiber selection.
Wei Chen;Zi'ang Ding;Song Zhang;MacKay-Brandt, A.;Correia, S.;Huamin Qu;Crow, J.A.;Tate, D.F.;Zhicheng Yan;Qunsheng Peng
State Key Lab. of CAD & CG, Zhejiang Univ., Hangzhou, China|c|;;;;;;;;;
10.1109/TVCG.2007.70602;10.1109/TVCG.2009.141;10.1109/VISUAL.2005.1532777;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.2005.1532778;10.1109/VISUAL.2005.1532779;10.1109/VISUAL.2005.1532772;10.1109/VISUAL.2003.1250379;10.1109/VISUAL.2004.30
Diffusion Tensor Imaging, fibers, fiber Clustering, Visualization Interface
Vis
2009
A Physiologically-based Model for Simulation of Color Vision Deficiency
10.1109/TVCG.2009.113
1. 1298
J
Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and normal color vision ones. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals.
Machado, G.;Oliveira, M.M.;Fernandes, L.A.F.
UFRGS, Porto Alegre, Brazil|c|;;
10.1109/VISUAL.1996.568118;10.1109/VISUAL.1995.480803;10.1109/TVCG.2008.112
Models of Color Vision, Color Perception, Simulation of Color Vision Deficiency, Anomalous Trichromacy, Dichromacy
Vis
2009
A User Study to Compare Four Uncertainty Visualization Methods for 1D and 2D Datasets
10.1109/TVCG.2009.114
1. 1218
J
Many techniques have been proposed to show uncertainty in data visualizations. However, very little is known about their effectiveness in conveying meaningful information. In this paper, we present a user study that evaluates the perception of uncertainty amongst four of the most commonly used techniques for visualizing uncertainty in one-dimensional and two-dimensional data. The techniques evaluated are traditional errorbars, scaled size of glyphs, color-mapping on glyphs, and color-mapping of uncertainty on the data surface. The study uses generated data that was designed to represent the systematic and random uncertainty components. Twenty-seven users performed two types of search tasks and two types of counting tasks on 1D and 2D datasets. The search tasks involved finding data points that were least or most uncertain. The counting tasks involved counting data features or uncertainty features. A 4×4 full-factorial ANOVA indicated a significant interaction between the techniques used and the type of tasks assigned for both datasets, indicating that differences in performance between the four techniques depended on the type of task performed. Several one-way ANOVAs were computed to explore the simple main effects. Bonferroni's correction was used to control the family-wise error rate against alpha inflation. Although we did not find a consistent order among the four techniques for all the tasks, there are several findings from the study that we think are useful for uncertainty visualization design. We found a significant difference in user performance between searching for locations of high and searching for locations of low uncertainty. Errorbars consistently underperformed throughout the experiment. Scaling the size of glyphs and color-mapping of the surface performed reasonably well. The efficiency of most of these techniques was highly dependent on the tasks performed. We believe that these findings can be used in future uncertainty visualization design. In addition, the framework developed in this user study presents a structured approach to evaluate uncertainty visualization techniques, as well as provides a basis for future research in uncertainty visualization.
Sanyal, J.;Song Zhang;Bhattacharya, G.;Amburn, P.;Moorhead, R.J.
Geosystems Res. Inst., Mississippi State Univ., Starkville, MS, USA|c|;;;;
10.1109/TVCG.2007.70518;10.1109/VISUAL.1996.568105;10.1109/VISUAL.2000.885679;10.1109/INFVIS.2002.1173145;10.1109/INFVIS.2004.59;10.1109/TVCG.2007.70530
User study, uncertainty visualization
Vis
2009
A Visual Approach to Efficient Analysis and Quantification of Ductile Iron and Reinforced Sprayed Concrete
10.1109/TVCG.2009.115
1. 1350
J
This paper describes advanced volume visualization and quantification for applications in non-destructive testing (NDT), which results in novel and highly effective interactive workflows for NDT practitioners. We employ a visual approach to explore and quantify the features of interest, based on transfer functions in the parameter spaces of specific application scenarios. Examples are the orientations of fibres or the roundness of particles. The applicability and effectiveness of our approach is illustrated using two specific scenarios of high practical relevance. First, we discuss the analysis of Steel Fibre Reinforced Sprayed Concrete (SFRSpC). We investigate the orientations of the enclosed steel fibres and their distribution, depending on the concrete's application direction. This is a crucial step in assessing the material's behavior under mechanical stress, which is still in its infancy and therefore a hot topic in the building industry. The second application scenario is the designation of the microstructure of ductile cast irons with respect to the contained graphite. This corresponds to the requirements of the ISO standard 945-1, which deals with 2D metallographic samples. We illustrate how the necessary analysis steps can be carried out much more efficiently using our system for 3D volumes. Overall, we show that a visual approach with custom transfer functions in specific application domains offers significant benefits and has the potential of greatly improving and optimizing the workflows of domain scientists and engineers.
Fritz, L.;Hadwiger, M.;Geier, G.;Pittino, G.;Groller, E.
VRVis Res. Center, Vienna, Austria|c|;;;;
10.1109/TVCG.2008.147;10.1109/VISUAL.2003.1250418;10.1109/TVCG.2008.162;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2003.1250384;10.1109/TVCG.2007.70603
Non-Destructive Testing, Multi-Dimensional Transfer Functions, Direction Visualization, Volume Rendering
Vis
2009
An interactive visualization tool for multi-channel confocal microscopy data in neurobiology research
10.1109/TVCG.2009.118
1. 1496
J
Confocal microscopy is widely used in neurobiology for studying the three-dimensional structure of the nervous system. Confocal image data are often multi-channel, with each channel resulting from a different fluorescent dye or fluorescent protein; one channel may have dense data while another has sparse data; and there are often structures at several spatial scales: subneuronal domains, neurons, and large groups of neurons (brain regions). Even qualitative analysis can therefore require visualization using techniques and parameters fine-tuned to a particular dataset. Despite the plethora of volume rendering techniques that have been available for many years, the techniques typically used in neurobiological research are somewhat rudimentary, such as looking at image slices or maximum intensity projections. Thus there is a real demand from neurobiologists, and biologists in general, for a flexible visualization tool that allows interactive visualization of multi-channel confocal data, with rapid fine-tuning of parameters to reveal the three-dimensional relationships of structures of interest. Together with neurobiologists, we have designed such a tool, choosing visualization methods to suit the characteristics of confocal data and a typical biologist's workflow. We use interactive volume rendering with intuitive settings for multidimensional transfer functions, multiple render modes and multi-views for multi-channel volume data, and embedding of polygon data into volume data for rendering and editing. As an example, we apply this tool to visualize confocal microscopy datasets of the developing zebrafish visual system.
Yong Wan;Otsuna, H.;Chi-Bin Chien;Hansen, C.
Sci. & Imaging Inst., Univ. of Utah, Salt Lake City, UT, USA|c|;;;
10.1109/VISUAL.1999.809887;10.1109/TVCG.2006.148
Visualization, neurobiology, confocal microscopy, qualitative analysis, volume rendering
Vis
2009
Applying Manifold Learning to Plotting Approximate Contour Trees
10.1109/TVCG.2009.119
1. 1192
J
A contour tree is a powerful tool for delineating the topological evolution of isosurfaces of a single-valued function, and thus has been frequently used as a means of extracting features from volumes and their time-varying behaviors. Several sophisticated algorithms have been proposed for constructing contour trees, but they often complicate the software implementation, especially for higher-dimensional cases such as time-varying volumes. This paper presents a simple yet effective approach to plotting, in 3D space, approximate contour trees from a set of scattered samples embedded in a high-dimensional space. Our main idea is to take advantage of manifold learning so that we can elongate the distribution of high-dimensional data samples to embed it into a low-dimensional space while respecting the local proximity of sample points. The contribution of this paper lies in the introduction of new distance metrics to manifold learning, which allows us to reformulate existing algorithms as a variant of a currently available dimensionality reduction scheme. Efficient reduction of data sizes together with segmentation capability is also developed to equip our approach with a coarse-to-fine analysis even for large-scale datasets. Examples are provided to demonstrate that our proposed scheme can successfully traverse the features of volumes and their temporal behaviors through the constructed contour trees.
Takahashi, S.;Fujishiro, I.;Okada, M.
Univ. of Tokyo, Tokyo, Japan|c|;;
10.1109/VISUAL.2002.1183772;10.1109/TVCG.2007.70601;10.1109/VISUAL.2004.96;10.1109/VISUAL.2002.1183774;10.1109/VISUAL.1997.663875
Contour trees, manifold learning, time-varying volumes, high-dimensional data analysis
Vis
2009
Automatic Transfer Function Generation Using Contour Tree Controlled Residue Flow Model and Color Harmonics
10.1109/TVCG.2009.120
1. 1488
J
Transfer functions facilitate volumetric data visualization by assigning optical properties to various data features and scalar values. Automating transfer function specification still remains a challenge in volume rendering. This paper presents an approach for automating transfer function generation by utilizing topological attributes derived from the contour tree of a volume. The contour tree acts as a visual index to volume segments, and captures associated topological attributes involved in volumetric data. A residue flow model based on Darcy's law is employed to control distributions of opacity between branches of the contour tree. Topological attributes are also used to control color selection in a perceptual color space and create harmonic color transfer functions. The generated transfer functions can depict the inclusion relationships between structures and maximize opacity and color differences between them. The proposed approach allows efficient automation of transfer function generation, and exploration of the data can be carried out by controlling the opacity residue flow rate instead of adjusting complex low-level transfer function parameters. Experiments on various data sets demonstrate the practical use of our approach in transfer function generation.
Jianlong Zhou;Takatsuka, M.
Sch. of Inf. Technol., Univ. of Sydney, Sydney, NSW, Australia|c|;
10.1109/VISUAL.1998.745319;10.1109/TVCG.2008.118;10.1109/VISUAL.1999.809932;10.1109/VISUAL.2003.1250414;10.1109/VISUAL.2004.96;10.1109/TVCG.2007.70591;10.1109/TVCG.2008.162;10.1109/VISUAL.2001.964519;10.1109/VISUAL.2003.1250413;10.1109/VISUAL.1997.663875;10.1109/TVCG.2006.148
Volume Rendering, Transfer Function, Contour Tree, Residue Flow, Harmonic Color
Vis
2009
BrainGazer - Visual Queries for Neurobiology Research
10.1109/TVCG.2009.121
1. 1504
J
Neurobiology investigates how anatomical and physiological relationships in the nervous system mediate behavior. Molecular genetic techniques, applied to species such as the common fruit fly Drosophila melanogaster, have proven to be an important tool in this research. Large databases of transgenic specimens are being built and need to be analyzed to establish models of neural information processing. In this paper we present an approach for the exploration and analysis of neural circuits based on such a database. We have designed and implemented BrainGazer, a system which integrates visualization techniques for volume data acquired through confocal microscopy as well as annotated anatomical structures with an intuitive approach for accessing the available information. We focus on the ability to visually query the data based on semantic as well as spatial relationships. Additionally, we present visualization techniques for the concurrent depiction of neurobiological volume data and geometric objects which aim to reduce visual clutter. The described system is the result of an ongoing interdisciplinary collaboration between neurobiologists and visualization researchers.
Bruckner, S.;Solteszova, V.;Groller, E.;Hladuvka, J.;Buhler, K.;Yu, J.Y.;Dickson, B.J.
Inst. of Comput. Graphics & Algorithms, Vienna Univ. of Technol., Vienna, Austria|c|;;;;;;
10.1109/VISUAL.2004.104;10.1109/VISUAL.1990.146378;10.1109/VISUAL.2003.1250412;10.1109/TVCG.2006.197;10.1109/VISUAL.1995.485139;10.1109/VISUAL.1996.568136;10.1109/TVCG.2006.195;10.1109/VAST.2008.4677354
Biomedical visualization, neurobiology, visual queries, volume visualization
Vis
2009
Color Seamlessness in Multi-Projector Displays Using Constrained Gamut Morphing
10.1109/TVCG.2009.124
1. 1326
J
Multi-projector displays show significant spatial variation in 3D color gamut due to variation in the chromaticity gamuts across the projectors, the vignetting effect of each projector, and the overlap across adjacent projectors. In this paper we present a new constrained gamut morphing algorithm that removes all these variations and results in true color seamlessness across tiled multi-projector displays. Our color morphing algorithm adjusts the intensities of light from each pixel of each projector precisely to achieve a smooth morphing from one projector's gamut to the other's through the overlap region. This morphing is achieved by imposing precise constraints on the perceptual difference between the gamuts of two adjacent pixels. In addition, our gamut morphing assures C1 continuity, yielding a visually pleasing appearance across the entire display. We demonstrate our method successfully on a planar and a curved display using both low- and high-end projectors. Our approach is completely scalable, efficient and automatic. We also demonstrate the real-time performance of our image correction algorithm on GPUs for interactive applications. To the best of our knowledge, this is the first work that presents a scalable method with a strong foundation in perception and realizes, for the first time, a truly seamless display where the number of projectors cannot be deciphered.
Sajadi, B.;Lazarov, M.;Gopi, M.;Majumder, A.
Comput. Sci. Dept., Univ. of California, Irvine, CA, USA|c|;;;
10.1109/VISUAL.2001.964508;10.1109/VISUAL.2002.1183793;10.1109/VISUAL.2000.885684;10.1109/VISUAL.1999.809883;10.1109/TVCG.2007.70586;10.1109/TVCG.2006.121
Color Calibration, Multi-Projector Displays, Tiled Displays
Vis
2009
Coloring 3D Line Fields Using Boy's Real Projective Plane Immersion
10.1109/TVCG.2009.125
1. 1464
J
We introduce a new method for coloring 3D line fields and show results from its application in visualizing orientation in DTI brain data sets. The method uses Boy's surface, an immersion of the real projective plane RP^2 in 3D. This coloring method is smooth and one-to-one except on a set of measure zero, the double curve of Boy's surface.
Demiralp, C.;Hughes, J.F.;Laidlaw, D.H.
Brown Univ., Providence, RI, USA|c|;;
10.1109/VISUAL.1994.346338;10.1109/VISUAL.1993.398867
Line field, colormapping, orientation, real projective plane, tensor field, DTI
Vis
2009
Comparing 3D Vector Field Visualization Methods: A User Study
10.1109/TVCG.2009.126
1. 1226
J
In a user study comparing four visualization methods for three-dimensional vector data, participants used visualizations from each method to perform five simple but representative tasks: 1) determining whether a given point was a critical point, 2) determining the type of a critical point, 3) determining whether an integral curve would advect through two points, 4) determining whether swirling movement is present at a point, and 5) determining whether the vector field is moving faster at one point than another. The visualization methods were line and tube representations of integral curves with both monoscopic and stereoscopic viewing. While participants reported a preference for stereo lines, quantitative results showed performance among the tasks varied by method. Users performed all tasks better with methods that: 1) gave a clear representation with no perceived occlusion, 2) clearly visualized curve speed and direction information, and 3) provided fewer rich 3D cues (e.g., shading, polygonal arrows, overlap cues, and surface textures). These results provide quantitative support for anecdotal evidence on visualization methods. The tasks and testing framework also give a basis for comparing other visualization methods, for creating more effective methods, and for defining additional tasks to explore further the tradeoffs among the methods.
Forsberg, A.;Jian Chen;Laidlaw, D.H.
Comput. Sci. Dept., Brown Univ., RI, USA|c|;;
10.1109/VISUAL.1996.567777;10.1109/VISUAL.2005.1532831;10.1109/VISUAL.2004.59;10.1109/VISUAL.2005.1532772
3D vector fields, visualization, user study, tubes, lines, stereoscopic and monoscopic viewing
Vis
2009
Continuous Parallel Coordinates
10.1109/TVCG.2009.131
1. 1538
J
Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.
Heinrich, J.;Weiskopf, D.
VISUS (Visualization Res. Center), Univ. Stuttgart, Stuttgart, Germany|c|;
10.1109/TVCG.2006.168;10.1109/TVCG.2008.119;10.1109/TVCG.2008.131;10.1109/INFVIS.2005.1532139;10.1109/TVCG.2009.179;10.1109/TVCG.2006.138;10.1109/VISUAL.1990.146402;10.1109/INFVIS.2005.1532138;10.1109/TVCG.2008.160;10.1109/INFVIS.2002.1173157;10.1109/VISUAL.1999.809866;10.1109/TVCG.2006.170;10.1109/INFVIS.2004.68
Parallel coordinates, integrating spatial and non-spatial data visualization, multi-variate visualization, interpolation
Vis
2009
Curve-Centric Volume Reformation for Comparative Visualization
10.1109/TVCG.2009.136
1. 1242
J
We present two visualization techniques for curve-centric volume reformation with the aim of creating compelling comparative visualizations. A curve-centric volume reformation deforms a volume, with regard to a curve in space, to create a new space in which the curve evaluates to zero in two dimensions and spans its arc-length in the third. The volume surrounding the curve is deformed such that the spatial neighborhood of the curve is preserved. The result of the curve-centric reformation produces images where one axis is aligned to arc-length, and thus allows researchers and practitioners to apply their arc-length parameterized data visualizations in parallel for comparison. Furthermore, we show that when visualizing dense data, our technique provides an inside-out projection, from the curve out into the volume, which allows for inspection of what is around the curve. Finally, we demonstrate the usefulness of our techniques in the context of two application cases. We show that existing data visualizations of arc-length parameterized data can be enhanced by using our techniques, in addition to creating a new view and perspective on volumetric data around curves. Additionally, we show how volumetric data can be brought into plotting environments that allow precise readouts. In the first case we inspect streamlines in a flow field around a car, and in the second we inspect seismic volumes and well logs from drilling.
Lampe, O.D.;Correa, C.;Kwan-Liu Ma;Hauser, H.
CMR AS, Univ. of Bergen, Bergen, Norway|c|;;;
10.1109/TVCG.2006.144;10.1109/VISUAL.2002.1183754;10.1109/VISUAL.2001.964540;10.1109/VISUAL.1992.235194;10.1109/VISUAL.2003.1250353
Volume Deformation, Curve-Centric-Reformation, Comparative Visualization, Radial Ray-Casting
Vis
2009
Decoupling Illumination from Isosurface Generation Using 4D Light Transport
10.1109/TVCG.2009.137
1. 1602
J
One way to provide global illumination for the scientist who performs an interactive sweep through a 3D scalar dataset is to pre-compute global illumination, resample the radiance onto a 3D grid, then use it as a 3D texture. The basic approach of repeatedly extracting isosurfaces, illuminating them, and then building a 3D illumination grid suffers from the non-uniform sampling that arises from coupling the sampling of radiance with the sampling of isosurfaces. We demonstrate how the illumination step can be decoupled from the isosurface extraction step by illuminating the entire 3D scalar function as a 3-manifold in 4-dimensional space. By reformulating light transport in a higher dimension, one can sample a 3D volume without requiring the radiance samples to aggregate along individual isosurfaces in the pre-computed illumination grid.
Banks, D.C.;Beason, K.
ORNL Sci. Comput. Group, Univ. of Tennessee, Knoxville, TN, USA|c|;
10.1109/TVCG.2008.108;10.1109/VISUAL.2000.885692;10.1109/VISUAL.2003.1250394
physically-based illumination, isosurface, level set, light transport
Vis
2009
Depth-Dependent Halos: Illustrative Rendering of Dense Line Data
10.1109/TVCG.2009.138
1. 1306
J
We present a technique for the illustrative rendering of 3D line data at interactive frame rates. We create depth-dependent halos around lines to emphasize tight line bundles while less structured lines are de-emphasized. Moreover, the depth-dependent halos combined with depth cueing via line width attenuation increase depth perception, extending techniques from sparse line rendering to the illustrative visualization of dense line data. We demonstrate how the technique can be used, in particular, for illustrating DTI fiber tracts but also show examples from gas and fluid flow simulations and mathematics as well as describe how the technique extends to point data. We report on an informal evaluation of the illustrative DTI fiber tract visualizations with domain experts in neurosurgery and tractography who commented positively about the results and suggested a number of directions for future work.
Everts, M.H.;Bekker, H.;Roerdink, J.B.T.;Isenberg, T.
Univ. of Groningen, Groningen, Netherlands|c|;;;
10.1109/VISUAL.2000.885694;10.1109/TVCG.2007.70532;10.1109/TVCG.2006.172;10.1109/VISUAL.2000.885696;10.1109/VISUAL.2005.1532778;10.1109/TVCG.2006.115;10.1109/VISUAL.2005.1532859;10.1109/TVCG.2006.197;10.1109/VISUAL.2005.1532858;10.1109/TVCG.2007.70555;10.1109/VISUAL.1996.567777;10.1109/VISUAL.2004.48
Illustrative rendering and visualization, NPR, dense line data, DTI, black-and-white rendering, GPU technique