Supercomputer Instrumentation for Biomedical Image Analysis and Simulation
Principal Investigator: Russell Taylor
Funding Agency: National Center for Research Resources
Agency Number: 1-S10-RR023069-01
Abstract
We propose a High-End Instrumentation Biomedical Image Analysis Supercomputer (BIAS) at the University of North Carolina as a critical component of several interdisciplinary Centers and Institutes with strong research programs coupled to biomedical research imaging. This proposal responds to NIH Program Announcement (PA) Number PAR-05-124, which describes typical examples as imaging systems, macromolecular NMR spectrometers, high-resolution mass spectrometers, cryoelectron microscopes, and supercomputers. At the present level of strongly interdisciplinary research, biomedical scientists require ever more powerful tools. As new biomedical instruments with enhanced performance become available, their importance for interdisciplinary research increases, along with their already high cost. This proposal offers a completely new approach to increasing the useful information from imaging measurements such as confocal microscopy, atomic-force microscopy, transmission electron microscopy, scanning electron microscopy, and fluorescence imaging: coupling such measurements to a real-time image-analysis supercomputer that offers significant performance enhancement over standard batch-processed supercomputers. This proposal requests funds for the acquisition of a video analysis and simulation supercomputer that will couple real-time data visualization, analysis, and microscopy to enable on-the-fly decision making in discovery experiments.

Built to support the most demanding applications: The needs of particular NIH-sponsored research projects were used to determine the supercomputer's required capabilities. Target applications range from single-molecule simulation of protein interactions through clotting disorders and lung defense in Cystic Fibrosis to MRI atlas formation for disease diagnosis.
The presumption is that a machine designed to solve particular problems well is likely to be better suited to a wide range of actual applications than a "general purpose" design. The system will consist of 436 general-purpose processors tightly coupled to each other and to 90 programmable graphics-processor (GPU) boards that will function as image- and geometry-calculation accelerators. This provides the equivalent computing power of over four thousand processors for image-intensive applications. Its processors are configured as 16-CPU shared-memory modules. Most modules have 16GB of shared memory, but application requirements dictate that one module have 128GB of shared memory to enable shared-memory solutions to larger problem instances. This provides a total of 548GB of CPU-accessible RAM and 23GB of GPU memory. The shared-memory modules are connected to each other and to 12TB of disk through dual-plane InfiniBand 4X interconnects. The system couples to microscope equipment through five GigE connections, four of which are dedicated links to on-campus Centers and one of which goes to the campus Internet2 uplink.

Fully utilized by a broad user base: Some of the applications described below, such as real-time video analysis of experiments coupled to real-time parallel simulation codes, will require dedicated use of the entire BIAS system. Others, however, can be solved effectively by a fraction of the system. Indeed, several of the target applications have large memory needs but only sequential implementations, and many biomedical researchers benefit greatly from having a machine on which to reliably run standard analysis codes. The BIAS system can be software-partitioned into as many as 77 individual systems, each with 4-8 processors; independent operating systems and application codes can be rapidly loaded from local disk onto each partition, enabling great flexibility in use and load balancing.
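The partitioning figures above are self-consistent; a minimal Python sketch (illustrative arithmetic only, not part of any proposed system software) confirms that 436 processors can indeed be divided into as many as 77 partitions of 4-8 CPUs each:

```python
# Sanity check (illustrative only): can 436 CPUs be split into 77
# partitions, each holding between 4 and 8 processors?

TOTAL_CPUS = 436       # general-purpose processors in the BIAS system
MAX_PARTITIONS = 77
MIN_PER_PART, MAX_PER_PART = 4, 8

def partition_feasible(cpus, parts, lo, hi):
    """True if `cpus` processors can be divided into exactly `parts`
    partitions, each with between `lo` and `hi` processors."""
    return parts * lo <= cpus <= parts * hi

# 77 partitions of 4 CPUs each would use 308 processors; of 8 CPUs each,
# 616 processors. 436 falls between those bounds, so the split is feasible.
print(partition_feasible(TOTAL_CPUS, MAX_PARTITIONS, MIN_PER_PART, MAX_PER_PART))
```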
During the daytime, the machine will be used heavily for real-time calculations tightly coupled to experiments. At night, 1-77 independent simulations will run in batches, with CPU-parallel codes from UNC sharing the compute nodes and GPU-parallel codes from Illinois using the graphics nodes.

