Biomedical Visual Computing: Case Studies and Challenges

By Chris R. Johnson

NOTE: This is an overview of the article, which appeared in the January/February 2012 issue of Computing in Science & Engineering magazine.

Introduction

A continuing trend across computational science and engineering applications is the increasing size of the resulting datasets. The same trend is evident in data acquisition, especially image acquisition in biology and medical image databases. For example, the author mentions brain electron microscopy (EM) mosaic datasets that are 16 terabytes in size. However, while there’s no foreseeable end to the growth in our ability to produce simulation data or record observational data, our ability to use these data in meaningful ways is constrained by data analysis capabilities that already lag far behind.

Visual data analysis involves creating images that convey salient information about underlying data and processes, enabling the detection and validation of expected results while leading to unexpected discoveries in science. As applied to biomedical systems, visualization plays a crucial role in the ability to comprehend large and complex data.

The article highlights three case studies where biomedical researchers and computational scientists are collaborating to facilitate visual data analysis.

Case Study 1: Simulation of Implantable Cardiac Defibrillators

Here the article describes a general pipeline for image-based model generation, simulation, and visualization that is useful in many different fields, including biomedicine. The first case study’s goal was to calculate the electric potentials in the body, and especially in the fibrillating heart, that arise during a shock from an implantable cardiac defibrillator (ICD), more than 90,000 of which are implanted annually in the US alone. Of special interest was the use of such devices in children, who are much smaller than adults and almost always have some form of anatomical abnormality of the heart that makes patient-specific modeling essential.

Biomedical image-based modeling, simulation, and visualization pipeline. Once created, the pipeline can be used in many different biomedical or other science and engineering fields.
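
To make the simulation step of this pipeline concrete, the sketch below solves for the electric potential in a volume conductor with the two electrodes held at fixed voltages, which is the kind of computation at the heart of an ICD simulation. The grid size, electrode placement, and implicit homogeneous conductivity are illustrative assumptions only; the study itself used patient-specific geometry and tissue-dependent conductivities.

```python
import numpy as np

# Minimal sketch of the core bioelectric computation in an ICD
# simulation: solve Laplace's equation for the potential phi in a
# volume conductor, holding the two electrodes at fixed voltages.
# Grid size, electrode placement, and the implicit homogeneous
# conductivity are illustrative assumptions, not the article's setup.

n = 64
phi = np.zeros((n, n, n))                # potential on a cubic grid

# Hypothetical electrode regions: a chest-wall "can" and a lead tip.
can = (slice(28, 36), slice(1, 3), slice(28, 36))
lead = (slice(30, 34), slice(60, 62), slice(30, 34))

for _ in range(2000):                    # Jacobi relaxation
    phi[1:-1, 1:-1, 1:-1] = (
        phi[:-2, 1:-1, 1:-1] + phi[2:, 1:-1, 1:-1] +
        phi[1:-1, :-2, 1:-1] + phi[1:-1, 2:, 1:-1] +
        phi[1:-1, 1:-1, :-2] + phi[1:-1, 1:-1, 2:]
    ) / 6.0
    phi[can], phi[lead] = 1.0, -1.0      # re-impose electrode voltages

# phi now approximates the shock potential; downstream pipeline stages
# would extract isopotential surfaces and visualize them over anatomy.
```

In the full pipeline, a solve of this kind sits between image-based mesh generation upstream and visualization downstream.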

Case Study 2: Neural Circuit Reconstruction

Case study 2 illuminates the large-scale image analysis and visualization needs associated with better understanding neural connectivity within the brain. Models of neural circuits are essential for studying the central nervous system. However, relatively little is known about the connectivity of neurons, and state-of-the-art models are typically not based on anatomical ground truth. Serial-section EM images can provide the data necessary to reconstruct large-scale neural circuits.

However, the images’ complexity and vast size make human interpretation an extremely labor-intensive task. The pipeline for reconstructing neural circuits from serial-section EM includes preprocessing the images, assembling 3D volumes, segmenting individual neurons, and identifying and visualizing synapses and other structures.
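
As one concrete example of the mosaicking step, the sketch below uses phase correlation, a standard FFT-based technique for estimating the translation between two overlapping tiles. It illustrates the general approach rather than the authors’ exact method; production pipelines add subpixel refinement, distortion correction, and a global optimization over all tile pairs.

```python
import numpy as np

def estimate_shift(moved, ref):
    """Estimate the integer (dy, dx) translation that maps `ref` onto
    `moved` via phase correlation, a standard FFT-based building
    block for registering overlapping image tiles into a mosaic."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12            # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold wrap-around peaks back to signed shifts.
    if dy > moved.shape[0] // 2:
        dy -= moved.shape[0]
    if dx > moved.shape[1] // 2:
        dx -= moved.shape[1]
    return int(dy), int(dx)

# Synthetic check: shift a random "tile" by (5, -8) and recover it.
rng = np.random.default_rng(0)
tile = rng.random((256, 256))
shifted = np.roll(tile, (5, -8), axis=(0, 1))
print(estimate_shift(shifted, tile))          # -> (5, -8)
```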

Assembling the 3D neural image volumes is an important challenge complicated by the numerous 2D images that must first be mosaicked to form 2D sections and then be aligned in 3D to create volumes. The figure below shows a set of approximately 1,000 individual EM images used to form a 2D image mosaic. An individual tile, denoted by the red square, is 4,096 × 4,096 pixels in resolution (as a reference, full HD TV resolution is 1,920 × 1,080 pixels). A 2D image mosaic is approximately 130,000 pixels in diameter.

One section from the 342-section 3D electron microscopy retina connectome dataset. Each section is a mosaic of approximately 1,000 image tiles that are each 4,096 × 4,096 pixels. The in-plane resolution is 2.18 pixels per nanometer and the section thickness is 90 nm. The full 3D connectome dataset is 16.5 terabytes.
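
These figures are mutually consistent, as a quick back-of-envelope calculation (taking the approximate tile count as exact) shows:

```python
# Back-of-envelope check of the dataset figures quoted above.
tiles_per_section = 1_000                # "approximately 1,000" tiles
tile_pixels = 4_096 * 4_096
sections = 342

total_pixels = tiles_per_section * tile_pixels * sections
print(f"{total_pixels:.2e} pixels")      # ~5.7e12 (5.7 trillion) pixels

# 16.5 TB spread over ~5.7 trillion pixels is roughly 3 bytes per
# stored pixel, which is plausible once tile overlap and file-format
# overhead are taken into account.
print(16.5e12 / total_pixels)            # ~2.9
```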

Case Study 3: Medical Visualization on Mobile Platforms

In medicine as elsewhere, there’s an increasing need for visual analysis capabilities on smartphones, tablet computers, and netbooks. Because such devices are small and have comparatively limited processing power, client-server and data-streaming technologies play an important role in enabling interactive visualization on mobile computing platforms.
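
The sketch below illustrates that client-server idea in miniature: the full volume stays on the server, which downsamples each requested slice into something a phone can fetch and display interactively. The endpoint, payload format, and downsampling factor are all hypothetical; this is not ImageVis3D Mobile’s actual protocol.

```python
import io
import numpy as np
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical miniature of the client-server pattern for mobile
# visualization: the full volume lives server-side, and the client
# fetches small, downsampled slices that it can render interactively.

VOLUME = np.random.rand(256, 256, 256).astype(np.float32)  # stand-in data

class SliceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A request path like /slice/120 -> axial slice 120, downsampled 4x.
        z = int(self.path.rsplit("/", 1)[-1])
        small = VOLUME[::4, ::4, z]          # 64x64 instead of 256x256
        buf = io.BytesIO()
        np.save(buf, small)                  # compact binary payload
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        self.wfile.write(buf.getvalue())

if __name__ == "__main__":
    HTTPServer(("", 8000), SliceHandler).serve_forever()
```

Streaming level-of-detail schemes extend the same idea, sending progressively finer data as bandwidth and interaction allow.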

In recent years, researchers have increasingly used patient-specific models to predict the effects of neuromodulation therapies such as deep brain stimulation (DBS). However, translating these models from a research environment to the everyday clinical workflow has been a challenge, primarily because of the models’ complexity and the specialized visualization software they require. The article describes ImageVis3D Mobile, software designed for mobile computing devices such as the iPhone and iPad, which was used to visualize models of four Parkinson’s patients who received DBS therapy. ImageVis3D Mobile delivered the models to movement-disorder clinicians, who were asked to use the software to determine which of the four electrode contacts they would select for therapy and which stimulation settings they would choose.

Biomedical Visual Computing Challenges

The article concludes with some thoughts on the challenges facing visual data analysis. One challenge, certainly, is that the volume of available data is increasing exponentially, so the capabilities for making use of these data must expand accordingly. As the author states, “We’re data rich, but analysis poor.”

The author discusses at some length the issue of reliability: the need for verification and validation of simulation and visualization codes. Likewise, scientists must be given techniques for uncertainty representation and quantification, uncertainty propagation, and uncertainty visualization so that they can better understand the limits of their simulations and visualizations.
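
As a concrete illustration of uncertainty propagation, the sketch below pushes samples of an uncertain input through a stand-in model and summarizes the spread of the outputs, which is the raw material for an uncertainty visualization. The model and its parameters are invented for illustration.

```python
import numpy as np

# Minimal sketch of Monte Carlo uncertainty propagation: sample an
# uncertain input, run each sample through a (stand-in) simulation,
# and summarize the output distribution for visualization. The toy
# model below is invented; in practice it might be, say, a bioelectric
# field solve with uncertain tissue conductivity.

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)               # spatial sample points

def simulate(conductivity):
    """Placeholder for an expensive simulation code."""
    return np.exp(-x / conductivity)

samples = rng.normal(loc=0.25, scale=0.05, size=1_000)  # uncertain input
runs = np.array([simulate(s) for s in samples])

mean, std = runs.mean(axis=0), runs.std(axis=0)
# An uncertainty visualization would draw `mean` with a +/- `std`
# band, showing viewers where the prediction is least reliable.
print(mean[:3], std[:3])
```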

ABOUT THE AUTHOR

Chris R. Johnson directs the Scientific Computing and Imaging (SCI) Institute at the University of Utah where he is a Distinguished Professor of Computer Science and holds faculty appointments in the Departments of Physics and Bioengineering. His research interests are in the areas of scientific computing and scientific visualization. He founded the SCI research group in 1992, which has since grown to become the SCI Institute, employing more than 200 faculty, staff, and students. Johnson has a PhD in biophysics and computing from the University of Utah. He is a recipient of the US NSF Presidential Faculty Fellow award and the IEEE Visualization Career Award. He is a fellow of the American Institute for Medical and Biological Engineering, the American Association for the Advancement of Science, and the Society for Industrial and Applied Mathematics (SIAM). Contact him at crj@sci.utah.edu.