
Homewood High-Performance Cluster (HHPC)

Overview

The Deans and faculties of KSAS and WSE have partnered to create the Homewood High-Performance Cluster (HHPC). The HHPC integrates the resources of many PIs to create a powerful and adaptive shared facility designed to support large-scale computation on the Homewood campus.

The HHPC is managed under the aegis of IDIES and operates as a co-op: hardware is contributed by its users, who in return receive a proportional share of the pooled resources. The networking infrastructure and a systems administrator are provided by the Deans of the Whiting School of Engineering and the Krieger School of Arts and Sciences.

The first cluster, HHPCv1, came online in December 2008 and was retired eight years later. It contained almost 200 compute nodes connected by a DDR InfiniBand switch. A new cluster, HHPCv2, came online in the newly renovated Bloomberg 156 Data Center in November 2011. It currently has over 350 nodes, each with 12 cores and 48 GB of RAM; there is no plan for expansion given the newer hardware available at MARCC. HHPCv2 is connected by high-speed links to MARCC and to other clusters in Bloomberg 156, including the Datascope and the 100 TFlop Graphics Processor Laboratory.

Under the management plan for the HHPC, 10 percent of the compute time can be allocated by the Deans to faculty on the Homewood campus. The priorities for use of this time are to meet temporary surges in compute needs for research projects at Homewood, to provide access to new hires and new contributors before nodes they have ordered arrive, and to allow potential members of the HHPC to “kick the tires”. For more information, and to apply for time on the HHPC, see the HHPC application below.

Accessing the Cluster

To access the HHPC, you will first need an HHPC account. Please get approval from your PI and then send an account request by email to hhpc@jhu.edu.

The HHPC login node, login.hhpc.jhu.edu, is where all HHPC users log in to submit their jobs to the HHPC compute nodes. Login.hhpc.jhu.edu is not a compute node; therefore, no compute jobs should be run on it.

The job queue is managed by the Slurm scheduler. Because the HHPC shares similar settings with the MARCC cluster, please see the MARCC documentation to learn how to submit jobs with Slurm: MARCC SLURM.

If you have a question, please email hhpc@jhu.edu.

Information About Your HHPC Application For Deans' Time

Please attach a .pdf file addressing the following questions:

  1. Please provide your contact information and indicate the Homewood Department of your faculty appointment.
  2. Please describe the software that would be used, whether it runs on Scientific Linux, whether it uses MPI, and whether any special packages would need to be installed by the administrator. (Only basic compilers, MPI, and queue systems are currently on the cluster nodes; see the sketch after this list for the kind of code that builds with those tools alone.)
  3. Estimated resources required:
    • Please indicate how many parallel nodes you would run on at a time, the typical job length, and the total number of processor hours needed to complete the project (a worked example follows this list).
    • Have benchmarks been run on comparable processors to generate these estimates?
    • How much disk space would be required? The cluster has ample scratch space, but users are expected to clear this space and store data on their own machines. A small amount of backed-up space (~10 GB) is available for programs, etc.
    • Please provide the names of all individuals who would use the facility. (Accounts cannot be shared.)
    • Is this a one-time need, or are you interested in purchasing cluster nodes in the future?
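As a reference for question 2: the compute nodes offer only basic compilers and an MPI stack, so application codes are expected to build against those. The short sketch below shows the kind of MPI program that should compile with nothing beyond the system MPI compiler wrapper; the wrapper name (mpicc) and the launch details are assumptions about the environment rather than documented HHPC settings.

/* hello_mpi.c - minimal MPI sketch; assumes the system MPI compiler
 * wrapper is mpicc (e.g., "mpicc hello_mpi.c -o hello_mpi"). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks      */
    MPI_Get_processor_name(name, &name_len); /* compute node hosting rank  */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                          /* shut down the MPI runtime  */
    return 0;
}

A program like this would be launched across the compute nodes through the Slurm scheduler described above.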
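For the resource estimate in question 3, a worked example with purely illustrative numbers: a job that runs on 10 of the 12-core nodes for 24 hours consumes 10 × 12 × 24 = 2,880 processor hours, so a project requiring 50 such runs would request roughly 144,000 processor hours in total.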