Performance characterization of a molecular dynamics code on PC clusters: Is there any easy parallelism in CHARMM?

Title: Performance characterization of a molecular dynamics code on PC clusters: Is there any easy parallelism in CHARMM?
Publication Type: Conference Paper
Year of Publication: 2002
Authors: Taufer M., Perathoner E., Cavalli A., Caflisch A., Stricker T.
Conference Name: Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS 2002)
Date Published: April 2002
Publisher: IEEE
Conference Location: Ft. Lauderdale, FL
ISBN Number: 0-7695-1573-8
Accession Number: 7342321
Keywords: Analytical models, Biological system modeling, biology computing, CHARMM, computational biology, Computational modeling, design of experiments, digital simulation, Distributed computing, distributed molecular dynamics, distributed processing, Electrostatic analysis, Ethernet, experimental design, Grid computing, Hardware, local area networks, message passing, middleware, molecular biophysics, molecular dynamics code, PC clusters, performance evaluation, performance optimization, performance tuning, Scalability, Software performance, synchronisation, synchronization, workstation clusters
Abstract:

The molecular dynamics code CHARMM is a popular research tool for computational biology. An increasing number of researchers are currently looking for affordable and adequate platforms to execute CHARMM or similar codes. To address this need, we analyze the resource requirements of a CHARMM molecular dynamics simulation on PC clusters with a particle mesh Ewald (PME) treatment of long-range electrostatics, and investigate the scalability of the short-range interactions and PME separately. We examine the workload characterization and the performance gain of CHARMM with different network technologies and different software infrastructures, and show that performance depends more on the software infrastructure than on the hardware components. In the present study, powerful communication systems like Myrinet deliver performance that comes close to that of the MPP supercomputers of the past decade, but improved scalability can also be achieved with better communication system software like SCore, without the additional hardware cost. The experimental method of workload characterization presented here can easily be applied to other codes. The detailed performance figures for the breakdown of the calculation into computation, communication, and synchronization allow one to derive good estimates of the benefits of moving applications to novel computing platforms such as widely distributed computers (the grid).

DOI: 10.1109/IPDPS.2002.1015505
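
The per-timestep breakdown into computation, communication, and synchronization described in the abstract can be approximated with simple wall-clock instrumentation around an MPI timestep loop. The sketch below is not code from the paper: a dummy kernel stands in for the short-range force work, a barrier exposes load-imbalance (synchronization) waiting time, and an MPI_Allreduce stands in for the force exchange (communication).

#include <mpi.h>
#include <stdio.h>
#include <math.h>

/* Stand-in for the local short-range force computation (hypothetical kernel). */
static double compute_short_range(int n) {
    double acc = 0.0;
    for (int i = 1; i <= n; ++i)
        acc += 1.0 / sqrt((double)i);
    return acc;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t_comp = 0.0, t_sync = 0.0, t_comm = 0.0;
    double local = 0.0, global = 0.0;
    const int steps = 100;

    for (int s = 0; s < steps; ++s) {
        double t0 = MPI_Wtime();
        local = compute_short_range(100000);          /* computation */
        double t1 = MPI_Wtime();
        MPI_Barrier(MPI_COMM_WORLD);                  /* synchronization: wait for slowest node */
        double t2 = MPI_Wtime();
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, /* communication: stand-in for force exchange */
                      MPI_SUM, MPI_COMM_WORLD);
        double t3 = MPI_Wtime();

        t_comp += t1 - t0;
        t_sync += t2 - t1;
        t_comm += t3 - t2;
    }

    if (rank == 0)
        printf("compute %.3f s   sync %.3f s   comm %.3f s\n",
               t_comp, t_sync, t_comm);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and run at increasing node counts, the three accumulated totals give a rough profile of where the time goes, which is the kind of measurement the paper uses to compare network technologies and communication system software.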
