IEEE VIS Publication Dataset

SciVis
2015
A Classification of User Tasks in Visual Analysis of Volume Data
10.1109/SciVis.2015.7429485
1. 8
C
Empirical findings from studies in one scientific domain have very limited applicability to other domains, unless we formally establish deeper insights into the generalizability of task types. We present a domain-independent classification of visual analysis tasks with volume visualizations. This taxonomy will help researchers design experiments, ensure coverage, and generate hypotheses in empirical studies with volume datasets. To develop our taxonomy, we first interviewed scientists working with spatial data in disparate domains. We then ran a survey to evaluate the design, in which participants were scientists and professionals from around the world working with volume data in various scientific domains. Respondents agreed substantially with our taxonomy design, but also suggested important refinements. We report the results in the form of a goal-based generic categorization of visual analysis tasks with volume visualizations. Our taxonomy covers tasks performed with a wide variety of volume datasets.
Laha, B.;Bowman, D.A.;Laidlaw, D.H.;Socha, J.J.
Stanford University|c|;;;
10.1109/INFVIS.2004.10;10.1109/TVCG.2013.124;10.1109/TVCG.2012.216;10.1109/TVCG.2009.126;10.1109/TVCG.2013.130;10.1109/TVCG.2013.120;10.1109/TVCG.2014.2346321;10.1109/INFVIS.2004.59
Task Taxonomy, Empirical Evaluation, Volume Visualization, Scientific Visualization, Virtual Reality, 3D Interaction
SciVis
2015
A proposed multivariate visualization taxonomy from user data
10.1109/SciVis.2015.7429511
1. 158
M
We revisited past user study data on multivariate visualizations, looking at whether image processing measures offer any insight into user performance. While we find statistically significant correlations, some of the greatest insights into user performance came from variables that have strong ties to two key properties of multivariate representations. We discuss our analysis and propose the taxonomy of multivariate visualizations that arises from these properties.
Livingston, M.A.;Decker, J.W.;Ai, Z.
;;
SciVis
2015
A Visual Voting Framework for Weather Forecast Calibration
10.1109/SciVis.2015.7429488
2. 32
C
Numerical weather predictions are widely used for weather forecasting. Many large meteorological centers routinely produce highly accurate ensemble forecasts to provide effective weather forecast services. However, biases frequently exist in forecast products for various reasons, such as imperfections in the weather forecast models. Failure to identify and neutralize these biases would result in unreliable forecast products that might mislead analysts and, consequently, produce unreliable weather predictions. The analog method has commonly been used to overcome such biases. Nevertheless, it has some serious limitations, including the difficulty of finding effective similar past forecasts, the large search space for proper parameters, and the lack of support for interactive, real-time analysis. In this study, we develop a visual analytics system based on a novel voting framework to circumvent these problems. The framework adopts the idea of majority voting to judiciously combine different variants of the analog method for effective retrieval of proper analogs for calibration. The system seamlessly integrates the analog methods into an interactive visualization pipeline with a set of coordinated views that characterize the different methods. Instant visual hints are provided in the views to guide users in finding and refining analogs. We worked closely with domain experts in meteorological research to develop the system. The effectiveness of the system is demonstrated using two case studies, and an informal evaluation with the experts confirms its usability and usefulness.
Liao, H.;Wu, Y.;Chen, L.;Hamill, T.M.;Wang, Y.;Dai, K.;Zhang, H.;Chen, W.
School of Software, Tsinghua National Laboratory for Information Science and Technology, Tsinghua University|c|;;;;;;;
10.1109/TVCG.2013.131;10.1109/TVCG.2013.138;10.1109/TVCG.2013.144;10.1109/TVCG.2009.197;10.1109/TVCG.2008.139;10.1109/TVCG.2014.2346755;10.1109/TVCG.2010.181;10.1109/VISUAL.1994.346298;10.1109/TVCG.2013.143
Weather forecast, analog method, calibration, majority voting, visual analytics
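The majority-voting idea described in the abstract above can be sketched roughly as follows (a minimal Python illustration of majority voting over ranked retrievals; the candidate IDs, `k`, and `min_votes` are hypothetical and not from the paper):

```python
from collections import Counter

def majority_vote_analogs(method_rankings, k=3, min_votes=2):
    """Select analogs that appear in the top-k lists of at least
    `min_votes` of the analog-method variants.

    method_rankings: one ranked list of candidate IDs per method variant.
    Returns the endorsed candidates, best-supported first.
    """
    votes = Counter()
    for ranking in method_rankings:
        for candidate in ranking[:k]:
            votes[candidate] += 1
    return [c for c, v in votes.most_common() if v >= min_votes]

rankings = [
    ["f3", "f7", "f1", "f9"],   # variant A's ranked analogs
    ["f7", "f3", "f2", "f5"],   # variant B
    ["f3", "f2", "f7", "f8"],   # variant C
]
print(majority_vote_analogs(rankings))
```

Raising `min_votes` to the number of methods keeps only unanimously retrieved analogs, trading recall for confidence.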
SciVis
2015
Accurate Interactive Visualization of Large Deformations and Variability in Biomedical Image Ensembles
10.1109/TVCG.2015.2467198
7. 717
J
Large image deformations pose a challenging problem for the visualization and statistical analysis of 3D image ensembles, which have a multitude of applications in biology and medicine. Simple linear interpolation in the tangent space of the ensemble introduces artifactual anatomical structures that hamper the application of targeted visual shape analysis techniques. In this work we make use of the theory of stationary velocity fields to facilitate interactive non-linear image interpolation and plausible extrapolation for high-quality rendering of large deformations, and devise an efficient image warping method on the GPU. This not only improves the quality of existing visualization techniques, but also opens up a field of novel interactive methods for shape ensemble analysis. Taking advantage of the efficient non-linear 3D image warping, we showcase four visualizations: 1) browsing on-the-fly computed group mean shapes to learn about shape differences between specific classes, 2) interactive reformation to investigate complex morphologies in a single view, 3) likelihood volumes to gain a concise overview of variability, and 4) streamline visualization to show variation in detail, specifically uncovering its component tangential to a reference surface. Evaluation on a real-world dataset shows that the presented method outperforms the state of the art in terms of visual quality while retaining interactive frame rates. A case study with a domain expert was performed in which the novel analysis and visualization methods were applied to standard model structures, namely the skull and mandible of different rodents, to investigate and compare the influence of phylogeny, diet, and geography on shape. The visualizations make it possible, for instance, to distinguish (population-)normal and pathological morphology, assist in uncovering correlations with extrinsic factors, and potentially support assessment of model quality.
Hermann, M.;Schunke, A.C.;Schultz, T.;Klein, R.
Inst. fur Inf. II, Univ. Bonn, Bonn, Germany|c|;;;
10.1109/TVCG.2006.140;10.1109/VISUAL.2002.1183754;10.1109/TVCG.2014.2346591;10.1109/TVCG.2014.2346405;10.1109/TVCG.2006.123
Statistical deformation model, stationary velocity fields, image warping, interactive visual analysis
SciVis
2015
Adaptive Multilinear Tensor Product Wavelets
10.1109/TVCG.2015.2467412
9. 994
J
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
Weiss, K.;Lindstrom, P.
Lawrence Livermore Nat. Lab., Livermore, CA, USA|c|;
10.1109/TVCG.2010.145;10.1109/VISUAL.1997.663860;10.1109/VISUAL.2002.1183810;10.1109/TVCG.2011.252;10.1109/VISUAL.1996.568127;10.1109/TVCG.2009.186
Multilinear interpolation, adaptive wavelets, multiresolution models, octrees, continuous reconstruction
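The piecewise multilinear reconstruction these adaptive meshes reproduce can be illustrated in 2D with plain bilinear interpolation over a single cell (a generic sketch, not the paper's wavelet code; function and parameter names are ours):

```python
def bilinear(f00, f10, f01, f11, u, v):
    """Bilinear interpolation inside one 2D cell.

    f00, f10, f01, f11: nodal function values at the cell corners.
    (u, v): local coordinates in [0, 1]^2.
    """
    return (f00 * (1 - u) * (1 - v) + f10 * u * (1 - v)
            + f01 * (1 - u) * v + f11 * u * v)
```

The trilinear (3D) case adds a third blend factor `w` in the same way; evaluating a wavelet-compressed function at any point then reduces to locating the containing cell and calling this reconstruction on its nodal values.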
SciVis
2015
An evaluation of three methods for visualizing uncertainty in architecture and archaeology
10.1109/SciVis.2015.7429507
1. 150
M
This project explores the representation of uncertainty in visualizations for archaeological research and provides insights obtained from user feedback. Our 3D models brought together information from standing architecture and excavated remains, surveyed plans, and ground-penetrating radar (GPR) data from the Carthusian monastery of Bourgfontaine in northern France. We also included information from comparative Carthusian sites and a bird's-eye representation of the site in an early modern painting. Each source was assigned a certainty value which was then mapped to a color or texture for the model. Certainty values between one and zero were assigned by one subject-matter expert and should be considered qualitative. Students and faculty from the fields of architectural history and archaeology at two institutions interacted with the models and answered a short survey with four questions about each. We discovered an equal preference for color and transparency and a strong dislike for the texture model. Discoveries made during model building also led to changes in the excavation plans for summer 2015.
Houde, S.;Bonde, S.;Laidlaw, D.H.
Brown University|c|;;
SciVis
2015
AnimoAminoMiner: Exploration of Protein Tunnels and their Properties in Molecular Dynamics
10.1109/TVCG.2015.2467434
7. 756
J
In this paper we propose a novel method for the interactive exploration of protein tunnels. The basic principle of our approach is that we entirely abstract from the 3D/4D space the simulated phenomenon is embedded in. A complex 3D structure and its curvature information is represented only by a straightened tunnel centerline and its width profile. This representation focuses on a key aspect of the studied geometry and frees up graphical estate to key chemical and physical properties represented by surrounding amino acids. The method shows the detailed tunnel profile and its temporal aggregation. The profile is interactively linked with a visual overview of all amino acids which are lining the tunnel over time. In this overview, each amino acid is represented by a set of colored lines depicting the spatial and temporal impact of the amino acid on the corresponding tunnel. This representation clearly shows the importance of amino acids with respect to selected criteria. It helps the biochemists to select the candidate amino acids for mutation which changes the protein function in a desired way. The AnimoAminoMiner was designed in close cooperation with domain experts. Its usefulness is documented by their feedback and a case study, which are included.
Byska, J.;Le Muzic, M.;Groller, E.;Viola, I.;Kozlikova, B.
Masaryk Univ., Brno, Czech Republic|c|;;;;
10.1109/VISUAL.2002.1183754;10.1109/TVCG.2009.136;10.1109/TVCG.2011.259;10.1109/VISUAL.2001.964540
Protein, tunnel, molecular dynamics, aggregation, interaction
SciVis
2015
Anisotropic Ambient Volume Shading
10.1109/TVCG.2015.2467963
1. 1024
J
We present a novel method to compute anisotropic shading for direct volume rendering to improve the perception of the orientation and shape of surface-like structures. We determine the scale-aware anisotropy of a shading point by analyzing its ambient region. We sample adjacent points with similar scalar values to perform a principal component analysis by computing the eigenvectors and eigenvalues of the covariance matrix. In particular, we estimate the tangent directions, which serve as the tangent frame for anisotropic bidirectional reflectance distribution functions. Moreover, we exploit the ratio of the eigenvalues to measure the magnitude of the anisotropy at each shading point. Altogether, this allows us to model a data-driven, smooth transition from isotropic to strongly anisotropic volume shading. In this way, the shape of volumetric features can be enhanced significantly by aligning specular highlights along the principal direction of anisotropy. Our algorithm is independent of the transfer function, which allows us to compute all shading parameters once and store them with the data set. We integrated our method in a GPU-based volume renderer, which offers interactive control of the transfer function, light source positions, and viewpoint. Our results demonstrate the benefit of anisotropic shading for visualization to achieve data-driven local illumination for improved perception compared to isotropic shading.
Ament, M.;Dachsbacher, C.
Karlsruhe Inst. of Technol., Karlsruhe, Germany|c|;
10.1109/TVCG.2014.2346333;10.1109/TVCG.2013.129;10.1109/TVCG.2014.2346411;10.1109/TVCG.2012.232;10.1109/VISUAL.1999.809886;10.1109/VISUAL.2003.1250414;10.1109/TVCG.2011.161;10.1109/VISUAL.2005.1532772;10.1109/VISUAL.1994.346331;10.1109/VISUAL.2002.1183771;10.1109/TVCG.2011.198;10.1109/VISUAL.2004.5;10.1109/TVCG.2012.267;10.1109/VISUAL.1996.567777
Direct volume rendering, volume illumination, anisotropic shading
SciVis
2015
Association Analysis for Visual Exploration of Multivariate Scientific Data Sets
10.1109/TVCG.2015.2467431
9. 964
J
The heterogeneity and complexity of multivariate characteristics pose a unique challenge to visual exploration of multivariate scientific data sets, as it requires investigating the usually hidden associations between different variables and specific scalar values to understand the data's multi-faceted properties. In this paper, we present a novel association analysis method that guides visual exploration of scalar-level associations in the multivariate context. We model the directional interactions between scalars of different variables as information flows based on association rules. We introduce the concepts of informativeness and uniqueness to describe how information flows between scalars of different variables and how they are associated with each other in the multivariate domain. Based on scalar-level associations represented by a probabilistic association graph, we propose the Multi-Scalar Informativeness-Uniqueness (MSIU) algorithm to evaluate the informativeness and uniqueness of scalars. We present an exploration framework with multiple interactive views to explore the scalars of interest with confident associations in the multivariate spatial domain, and provide guidelines for visual exploration using our framework. We demonstrate the effectiveness and usefulness of our approach through case studies using three representative multivariate scientific data sets.
Xiaotong Liu;Han-Wei Shen
;
10.1109/TVCG.2013.133;10.1109/TVCG.2007.70519;10.1109/TVCG.2008.116;10.1109/TVCG.2007.70615;10.1109/VISUAL.1995.485139;10.1109/TVCG.2006.165;10.1109/VAST.2012.6400488;10.1109/TVCG.2011.178;10.1109/VAST.2007.4389000
Multivariate data, association analysis, visual exploration, multiple views
SciVis
2015
Auto-Calibration of Multi-Projector Displays with a Single Handheld Camera
10.1109/SciVis.2015.7429493
6. 72
C
We present a novel approach that utilizes a simple handheld camera to automatically calibrate multi-projector displays. Most existing studies adopt active structured light patterns to verify the relationship between the camera and the projectors. The utilized camera is typically expensive and requires an elaborate installation process depending on the scalability of its applications. Moreover, the observation of the entire area by the camera is almost impossible for a small space surrounded by walls as there is not enough distance for the camera to capture the entire scene. We tackle these issues by requiring only a portion of the walls to be visible to a handheld camera that is widely used these days. This becomes possible by the introduction of our new structured light pattern scheme based on a perfect submap and a geometric calibration that successfully utilizes the geometric information of multi-planar environments. We demonstrate that immersive display in a small space such as an ordinary room can be effectively created using images captured by a handheld camera.
Park, S.;Seo, H.;Cha, S.;Noh, J.
KAIST|c|;;;
10.1109/VISUAL.2002.1183793;10.1109/VISUAL.2000.885685;10.1109/VISUAL.1999.809883
SciVis
2015
Automated visualization workflow for simulation experiments
10.1109/SciVis.2015.7429509
1. 154
M
Modeling and simulation are often used to predict future events and plan accordingly. Experiments in this domain often produce thousands of results from individual simulations, based on slightly varying input parameters. Geo-spatial visualizations can be a powerful tool to help health researchers and decision-makers take measures during catastrophic and epidemic events such as Ebola outbreaks. This work produced a web-based geo-visualization tool to visualize and compare the spread of Ebola in the West African countries Ivory Coast and Senegal based on multiple simulation results. The visualization is not Ebola-specific and may visualize any time-varying frequencies for given geo-locations.
Leidig, J.P.;Dharmapuri, S.
School of Computing and Information Systems, Grand Valley State University|c|;
SciVis
2015
CAST: Effective and Efficient User Interaction for Context-Aware Selection in 3D Particle Clouds
10.1109/TVCG.2015.2467202
8. 895
J
We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting CAST, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that the CAST family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.
Lingyun Yu;Efstathiou, K.;Isenberg, P.;Isenberg, T.
Hangzhou Dianzi Univ., Hangzhou, China|c|;;;
10.1109/TVCG.2008.153;10.1109/VISUAL.1999.809932;10.1109/TVCG.2013.126;10.1109/TVCG.2012.292;10.1109/INFVIS.1996.559216;10.1109/TVCG.2012.217;10.1109/TVCG.2010.157
Selection, spatial selection, structure-aware selection, context-aware selection, exploratory data visualization and analysis, 3D interaction, user interaction
SciVis
2015
Cluster Analysis of Vortical Flow in Simulations of Cerebral Aneurysm Hemodynamics
10.1109/TVCG.2015.2467203
7. 766
J
Computational fluid dynamic (CFD) simulations of blood flow provide new insights into the hemodynamics of vascular pathologies such as cerebral aneurysms. Understanding the relations between hemodynamics and aneurysm initiation, progression, and risk of rupture is crucial in diagnosis and treatment. Recent studies link the existence of vortices in the blood flow pattern to aneurysm rupture and report observations of embedded vortices - a larger vortex encloses a smaller one flowing in the opposite direction - whose implications are unclear. We present a clustering-based approach for the visual analysis of vortical flow in simulated cerebral aneurysm hemodynamics. We show how embedded vortices develop at saddle-node bifurcations on vortex core lines and convey the participating flow at full manifestation of the vortex by a fast and smart grouping of streamlines and the visualization of group representatives. The grouping result may be refined based on spectral clustering generating a more detailed visualization of the flow pattern, especially further off the core lines. We aim at supporting CFD engineers researching the biological implications of embedded vortices.
Oeltze-Jafra, S.;Cebral, J.R.;Janiga, G.;Preim, B.
Dept. of Simulation & Graphics, Univ. of Magdeburg, Magdeburg, Germany|c|;;;
10.1109/TVCG.2009.138;10.1109/TVCG.2012.202;10.1109/TVCG.2014.2346406;10.1109/TVCG.2006.201;10.1109/VISUAL.2002.1183789;10.1109/TVCG.2013.189;10.1109/VISUAL.2004.59;10.1109/TVCG.2006.199;10.1109/VISUAL.2005.1532830;10.1109/VISUAL.2005.1532859
Blood Flow, Aneurysm, Clustering, Vortex Dynamics, Embedded Vortices
SciVis
2015
Correlation analysis in multidimensional multivariate time-varying datasets
10.1109/SciVis.2015.7429502
1. 140
M
One of the most vital challenges for weather forecasters is understanding the correlation between two geographical phenomena that are distributed continuously in multidimensional multivariate time-varying datasets. In this research, we visualized the correlation between pressure and temperature in climate datasets. Pearson correlation is used in this study to measure the major linear relationship between two variables in the dataset. Using glyphs at each spatial location, we highlighted the significant associations between variables. Based on the positive or negative slope of the correlation lines, we can conclude how strongly the variables are correlated. The principal aim of this research is visualizing the local trend of variables versus each other in multidimensional multivariate time-varying datasets, which need to be visualized with their spatial locations in meteorological datasets. Using glyphs, not only can we visualize the correlation between two variables in the coordinate system, but we can also discern whether any of these variables is separately increasing or decreasing. Moreover, we can use the background color as another variable and see the correlation lines around a particular zone such as a storm area.
Abedzadeh, N.
Mississippi State University|c|
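The per-location Pearson correlation the abstract describes can be computed for every grid cell at once (a generic NumPy sketch; the array shapes and names are assumptions, not from the paper):

```python
import numpy as np

def local_pearson(pressure, temperature):
    """Per-grid-cell Pearson correlation of two time-varying fields.

    pressure, temperature: arrays of shape (time, ny, nx).
    Returns an (ny, nx) field of correlations in [-1, 1]; the sign matches
    the slope direction of the local correlation line drawn by a glyph.
    """
    p = pressure - pressure.mean(axis=0)
    t = temperature - temperature.mean(axis=0)
    cov = (p * t).mean(axis=0)
    denom = p.std(axis=0) * t.std(axis=0)
    # Cells where either variable has zero variance get correlation 0.
    return np.where(denom > 0, cov / np.where(denom > 0, denom, 1.0), 0.0)
```

Each resulting value can then be mapped to a glyph's line slope and the background color to an additional variable, as the abstract suggests.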
SciVis
2015
CPU Ray Tracing Large Particle Data with Balanced P-k-d Trees
10.1109/SciVis.2015.7429492
5. 64
C
We present a novel approach to rendering large particle data sets from molecular dynamics, astrophysics and other sources. We employ a new data structure adapted from the original balanced k-d tree, which allows for representation of data with trivial or no overhead. In the OSPRay visualization framework, we have developed an efficient CPU algorithm for traversing, classifying and ray tracing these data. Our approach is able to render up to billions of particles on a typical workstation, purely on the CPU, without any approximations or level-of-detail techniques, and optionally with attribute-based color mapping, dynamic range query, and advanced lighting models such as ambient occlusion and path tracing.
Wald, I.;Knoll, A.;Johnson, G.P.;Usher, W.;Pascucci, V.;Papka, M.E.
Intel Corporation|c|;;;;;
10.1109/TVCG.2010.148;10.1109/TVCG.2009.142;10.1109/TVCG.2012.282
Ray tracing, Visualization, Particle Data, k-d Trees
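A balanced k-d tree of the kind the P-k-d structure builds on can be constructed by recursive median splits (an explicit-node Python sketch for clarity; the paper's structure instead stores the particles in place with trivial or no overhead, which this sketch does not attempt):

```python
import numpy as np

def build_balanced_kd(points, depth=0):
    """Build a balanced k-d tree over particle positions.

    points: (n, d) array. Splits on axes cyclically at the median,
    so the tree depth is O(log n) regardless of input order.
    """
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    order = np.argsort(points[:, axis])
    mid = len(points) // 2
    return {
        "point": points[order[mid]],      # median particle on this axis
        "axis": axis,
        "left": build_balanced_kd(points[order[:mid]], depth + 1),
        "right": build_balanced_kd(points[order[mid + 1:]], depth + 1),
    }
```

Because every node is the median of its subtree, an in-place variant can drop child pointers entirely and address children implicitly, which is the kind of saving that makes billion-particle datasets fit in memory.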
SciVis
2015
Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis
10.1109/TVCG.2015.2467449
8. 876
J
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
Kindlmann, G.;Chiw, C.;Seltzer, N.;Samuels, L.;Reppy, J.
;;;;
10.1109/TVCG.2009.174;10.1109/TVCG.2011.185;10.1109/VISUAL.2005.1532856;10.1109/TVCG.2014.2346322;10.1109/TVCG.2012.240;10.1109/VISUAL.2003.1250414;10.1109/VISUAL.1999.809896;10.1109/TVCG.2007.70534;10.1109/TVCG.2014.2346318;10.1109/VISUAL.1998.745290;10.1109/TVCG.2008.148;10.1109/TVCG.2008.163
Domain specific language, portable parallel programming, scientific visualization, tensor fields
SciVis
2015
Distribution Driven Extraction and Tracking of Features for Time-varying Data Analysis
10.1109/TVCG.2015.2467436
8. 846
J
Effective analysis of features in time-varying data is essential in numerous scientific applications. Feature extraction and tracking are two important tasks scientists rely upon to get insights about the dynamic nature of the large scale time-varying data. However, often the complexity of the scientific phenomena only allows scientists to vaguely define their feature of interest. Furthermore, such features can have varying motion patterns and dynamic evolution over time. As a result, automatic extraction and tracking of features becomes a non-trivial task. In this work, we investigate these issues and propose a distribution driven approach which allows us to construct novel algorithms for reliable feature extraction and tracking with high confidence in the absence of accurate feature definition. We exploit two key properties of an object, motion and similarity to the target feature, and fuse the information gained from them to generate a robust feature-aware classification field at every time step. Tracking of features is done using such classified fields which enhances the accuracy and robustness of the proposed algorithm. The efficacy of our method is demonstrated by successfully applying it on several scientific data sets containing a wide range of dynamic time-varying features.
Dutta, S.;Han-Wei Shen
;
10.1109/TVCG.2007.70599;10.1109/VISUAL.1993.398877;10.1109/VISUAL.2004.107;10.1109/TVCG.2011.246;10.1109/TVCG.2007.70615;10.1109/VISUAL.2003.1250374;10.1109/TVCG.2013.152;10.1109/TVCG.2014.2346423;10.1109/TVCG.2007.70579;10.1109/VISUAL.1996.567807;10.1109/VISUAL.1998.745288;10.1109/TVCG.2008.163;10.1109/TVCG.2008.140
Gaussian mixture model (GMM), Incremental learning, Feature extraction and tracking, Time-varying data analysis
SciVis
2015
Effective Visualization of Temporal Ensembles
10.1109/TVCG.2015.2468093
7. 796
J
An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble, but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment-based ensemble analysis, which captures important shape transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step-based ensemble analysis, which assumes ensemble members are aligned in time by combining similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying matter transition from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions.
Lihua Hao;Healey, C.;Bass, S.A.
;;
10.1109/TVCG.2014.2346448;10.1109/VISUAL.2005.1532839;10.1109/VISUAL.2005.1532838;10.1109/TVCG.2014.2346751;10.1109/TVCG.2009.155;10.1109/TVCG.2014.2346455;10.1109/TVCG.2010.181;10.1109/TVCG.2013.143
Ensemble visualization
SciVis
2015
Effectiveness of Structured Textures on Dynamically Changing Terrain-like Surfaces
10.1109/TVCG.2015.2467962
9. 934
J
Previous perceptual research and human factors studies have identified several effective methods for texturing 3D surfaces to ensure that their curvature is accurately perceived by viewers. However, most of these studies examined the application of these techniques to static surfaces. This paper explores the effectiveness of applying these techniques to dynamically changing surfaces. When these surfaces change shape, common texturing methods, such as grids and contours, induce a range of different motion cues, which can draw attention and provide information about the size, shape, and rate of change. A human factors study was conducted to evaluate the relative effectiveness of these methods when applied to dynamically changing pseudo-terrain surfaces. The results indicate that, while no technique is most effective for all cases, contour lines generally perform best, and that the pseudo-contour lines induced by banded color scales convey the same benefits.
Butkiewicz, T.;Stevens, A.H.
Center for Coastal & Ocean Mapping, Univ. of New Hampshire, Durham, NH, USA|c|;
Structured textures, terrain, deformation, dynamic surfaces
SciVis
2015
Explicit Frequency Control for High-Quality Texture-Based Flow Visualization
10.1109/SciVis.2015.7429490
4. 48
C
In this work we propose an effective method for frequency-controlled dense flow visualization derived from a generalization of the Line Integral Convolution (LIC) technique. Our approach considers the spectral properties of the dense flow visualization process as an integral operator defined in a local curvilinear coordinate system aligned with the flow. Exploring LIC from this point of view, we suggest a systematic way to design a flow visualization process with particular local spatial frequency properties in the resulting image. Our method is efficient, intuitive, and based on a long-standing model developed as a result of numerous perception studies. The method can be described as an iterative application of line integral convolution, followed by a one-dimensional Gabor filtering orthogonal to the flow. To demonstrate the utility of the technique, we generated novel adaptive multi-frequency flow visualizations that, according to our evaluation, feature a higher level of frequency control and higher quality scores than traditional approaches in texture-based flow visualization.
Matvienko, V.;Kruger, J.
Saarland University|c|;
10.1109/VISUAL.2005.1532853;10.1109/TVCG.2007.70595;10.1109/TVCG.2006.161;10.1109/VISUAL.1994.346313;10.1109/VISUAL.1996.567784;10.1109/VISUAL.2001.964505;10.1109/TVCG.2009.126;10.1109/VISUAL.1999.809892;10.1109/VISUAL.2003.1250362;10.1109/VISUAL.2005.1532781
flow visualization, texture-based visualization, LIC, Gabor filter, spatial frequency, image contrast
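As background, plain LIC, the technique the paper generalizes, averages a noise texture along short streamlines of the vector field. A minimal nearest-neighbour, box-kernel version might look like the following (our own simplified sketch, without the iterative application or the orthogonal Gabor post-filtering the paper adds; for brevity the center pixel is sampled once per tracing direction):

```python
import numpy as np

def lic(noise, vx, vy, length=5, step=0.5):
    """Minimal Line Integral Convolution: for each pixel, average the
    noise texture along a short streamline traced forward and backward
    through the (vx, vy) field using fixed-step Euler integration."""
    ny, nx = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for y in range(ny):
        for x in range(nx):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):          # forward and backward tracing
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(round(px)), int(round(py))
                    if not (0 <= ix < nx and 0 <= iy < ny):
                        break                 # streamline left the domain
                    total += noise[iy, ix]
                    count += 1
                    norm = np.hypot(vx[iy, ix], vy[iy, ix]) or 1.0
                    px += sign * step * vx[iy, ix] / norm
                    py += sign * step * vy[iy, ix] / norm
            out[y, x] = total / max(count, 1)
    return out
```

For a uniform horizontal field this reduces to a box blur along image rows, which is exactly the low-pass behavior whose spatial frequency the paper's method then controls explicitly.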