Overview

The Deans and faculties of KSAS and WSE have partnered to create the Homewood High Performance Cluster (HHPC). The HHPC pools the resources of many PIs to create a powerful, adaptive shared facility designed to support large-scale computations on the Homewood campus.

The HHPC is managed under the aegis of IDIES and operates as a co-op: hardware is provided by users, who in return receive a proportional share of the pooled resources. The networking infrastructure and a systems administrator, Jason Williams, are provided by the Deans of the Whiting School of Engineering and the Krieger School of Arts and Sciences.

The first cluster, HHPCv1, came online in December 2008; it contains 130 compute nodes with over 1000 cores connected by a DDR InfiniBand switch. A second cluster, HHPCv2, came online in the newly renovated Bloomberg 156 Data Center in November 2011; it currently has over 200 nodes, each with 12 cores and 48 GB of RAM, and its QDR InfiniBand switch can accommodate up to 400 additional nodes. Both clusters are connected by high-speed links to other clusters in Bloomberg 156, including the Datascope and the 100 TFlop Graphics Processor Laboratory. Bloomberg 156 has 10 Gb/s connections to the rest of JHU and Internet2, and these are being upgraded to 100 Gb/s connections with funding from the National Science Foundation.

Researchers on the Homewood campus are encouraged to consider contributing hardware to the HHPC; please contact Mark Robbins or Jason Williams if you are interested in learning more. The minimum buy-in is currently eight nodes, or about $33,000.

Under the management plan for the HHPC, 10 percent of the compute time can be allocated by the Deans to faculty on the Homewood campus. The priorities for use of this time are to meet temporary surges in compute needs for research projects at Homewood, to provide access to new hires and new contributors before nodes they have ordered arrive, and to allow potential members of the HHPC to “kick the tires”. For more information, and to apply for time on the HHPC, see the HHPC application below.

For more detailed information about the cluster, see the HHPC wiki.

Information About Your HHPC Application

All users are encouraged to be affiliates of IDIES.
Please attach a .pdf file addressing the following questions:

  1. Please provide your contact information and indicate the Homewood Department of your faculty appointment.
  2. Please describe the software that would be used: whether it runs on Scientific Linux, whether it uses MPI, and whether any special packages would need to be installed by the administrator. (Only basic compilers, MPI, and queue systems are currently on the cluster nodes; a minimal MPI example is sketched after this list for reference.)
  3. Estimated resources required:
    • Please indicate how many nodes you would run on in parallel at a time, the typical job length, and the total number of CPU-hours needed to complete the project (see the example calculation after this list).
    • Have benchmarks been run on comparable processors to generate these estimates?
    • How much disk space would be required? The cluster has ample scratch space, but users are expected to clear this space and store data on their own machines. A small amount of backed-up space (~10 GB) is available for programs, etc.
    • Please provide the names of all individuals who would use the facility. (Accounts cannot be shared.)
    • Is this a one-time need, or are you interested in purchasing cluster nodes in the future?
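
For question 2, "uses MPI" means the program is written against the Message Passing Interface and runs as a set of communicating processes spread across nodes. The following is a minimal, purely illustrative sketch of such a program in C; it assumes only the standard MPI C API and the usual mpicc/mpirun toolchain of a basic MPI installation, and is not an HHPC-specific requirement:

    /* Minimal MPI example in C (illustrative sketch only). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }

On a shared cluster, a program like this would typically be compiled with mpicc and launched across nodes through the queue system rather than run directly on the login node.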
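
For the resource estimate in question 3, total CPU-hours are the number of nodes, times cores per node, times wall-clock hours, summed over all jobs. As an illustration with made-up numbers (HHPCv2 nodes have 12 cores each): a job running on 8 nodes for 24 hours uses 8 × 12 × 24 = 2,304 CPU-hours, so a project consisting of 100 such jobs would need roughly 230,000 CPU-hours.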