Homewood High-Performance Cluster (HHPC)
The Deans and faculties of KSAS and WSE have partnered to create the Homewood
High-Performance Cluster (HHPC). The HHPC integrates the resources of many
PIs into a powerful and adaptive shared facility designed to support
large-scale computations on the Homewood campus.
The HHPC is managed under the aegis of IDIES and operates as a Co-Op.
Compute and database hardware is provided by users, who in turn receive a proportional
share of the pooled resources. The networking infrastructure and a systems
administrator, Jason Williams, are provided by the Deans of the
Whiting School of Engineering and the
Krieger School of Arts and Sciences.
The HHPC came online in December 2008. It currently contains compute
nodes with 1,200 Intel cores and 2 terabytes of RAM. These are connected through
InfiniBand to database servers with over a petabyte of storage, including
the GrayWulf. An additional 50 nodes with 100 NVIDIA graphics processing
units (GPUs) will be connected to the network as part of a project with the IGERT
on Modeling Complex Systems.
Researchers on the Homewood campus are encouraged to consider contributing
hardware to the HHPC. The cluster cannot be expanded in the short term,
but we expect to relocate to new space in the coming months, and to extend the
cluster substantially at that time. Contact
mr -at- pha.jhu.edu or Jason Williams (jasonw -at- jhu.edu) if you are interested
in learning more. The minimum buy-in is currently eight nodes, or about $32,000.
Under the management plan for the HHPC, 10 percent of the compute time can be
allocated by the Deans to faculty on the Homewood campus. The priorities
for use of this time are to meet temporary surges in compute needs for
research projects at Homewood, to provide access to new hires and new
contributors before nodes they have ordered arrive, and to allow potential
members of the HHPC to "kick the tires". For more information, and to apply for
time on the HHPC, see the HHPC application.
For more detailed information about the cluster, see the HHPC wiki.