[Image: Connections in the back of a Cray XC40 supercomputer — connections between components of a supercomputer]

Astroinformatics is a contributor to the Research Computing group at GSU.

We have access to clusters and data servers in the Physics and Astronomy department and the Computer Science department.

Faculty associated with Astroinformatics may request access to the Harlow cluster for MPI-parallel computing. The cluster has been funded by start-up funds and grant funding from GAIN members. The Harlow cluster is named after Frank Harlow, a pioneer credited with establishing computational fluid dynamics (CFD) as a discipline; CFD is among the disciplines committed to exascale computing.

The Harlow cluster is a parallel cluster built from Intel Xeon Silver 4110 ("Skylake") nodes connected with InfiniBand. Software installed on Harlow includes the Intel compilers for Fortran and C++ together with MPI. Currently (11/2019) Harlow has 30 compute nodes, each with 8 physical cores supporting 16 compute tasks, for a total of 480 compute tasks on the machine. Nodes on Harlow have a nominal clock rate of 2.1 GHz and 128 GB of RAM. The shared scratch space on the Harlow cluster is 30 TB. The Harlow cluster was built in November of 2018, and the 2018 Intel compilers are available on the cluster.

Harlow is run in the same framework as national facilities such as the Intel Skylake system Stampede2 at TACC, because its main purpose is research-based code development aimed at production runs at those larger facilities. The user policy for Harlow includes the following:

  1. Users may have an account on Harlow only if they are engaged in scientific research with MPI-parallel code (MPI being the established protocol for multi-node runs).
  2. New users must be trained by either the system administrator, Dr. Justin Cantrell, or by Dr. Jane Pratt before they can be given an account. The natural training path for a student is to enroll in the Computational Physics class taught by Dr. Pratt. The training provided in that class includes how to set up the bash environment, how to access libraries through the module environment, and how to write batch scripts for SLURM (a sample batch script is sketched after this list). At the end of the class, a student may request a research account through their faculty advisor. New users may also be asked to complete the Harlow Driving Test before being given an account.
  3. All jobs on Harlow must be submitted through SLURM.
  4. New users will be given access to the "student" queue. More extensive access must be requested and justified, for example by providing appropriate scaling studies and projections of usage (a simple scaling summary is sketched after this list).
  5. If a faculty member uses Harlow in an ongoing way (including through their students), they should contribute research funding commensurate with that usage. The minimum contribution is currently $7000 per compute node. Adding storage nodes (to provide scratch space) is also possible.
  6. Harlow is a homogeneous cluster, and only the same kind and generation of compute nodes will be added.
  7. Any nodes added will be available for all users.
  8. Data cannot be stored long-term on Harlow's disks. The scratch space is subject to purges at any time. Both scratch and home directories are subject to quotas on disk space and on inodes.
  9. There is no active system administrator for Harlow. It is run as an expert cluster where users must be able to install and debug their own code, as well as any library dependencies.
  10. The number of user accounts granted must always be less than the number of nodes on the machine, in order to maintain Harlow's usefulness as a parallel resource for its user community. Once too many users are actively working on the cluster, further access will be closed.
  11. At this point (11/2019), Harlow has been built entirely with research funding contributed by members of the Astroinformatics cluster. We are generally willing to share resources with faculty outside of the Astroinformatics cluster, whether from the Physics and Astronomy department or from other college departments, provided that they agree to follow these policies, request an account, get trained, and commit to contributing funding to expand Harlow.
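
To make items 2 and 3 concrete, the sketch below shows the general shape of a SLURM batch script for an MPI job. The partition name, module name, and executable are placeholders rather than confirmed Harlow settings; the actual values should be confirmed during training.

    #!/bin/bash
    #SBATCH --job-name=mpi_test         # name shown in the queue
    #SBATCH --partition=student         # placeholder queue name; see item 4
    #SBATCH --nodes=2                   # number of compute nodes requested
    #SBATCH --ntasks-per-node=16        # compute tasks per node (16 on Harlow)
    #SBATCH --time=01:00:00             # wall-clock limit
    #SBATCH --output=mpi_test.%j.out    # output file, %j expands to the job ID

    # Load the compiler/MPI environment; the module name here is a placeholder.
    module load intel

    # Launch the MPI executable on all allocated tasks.
    srun ./my_mpi_program

A script like this is submitted with sbatch and monitored with squeue; both commands are covered in the training described in item 2.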
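
For the scaling studies mentioned in item 4, the usual measurement is strong scaling: the same problem is timed at several task counts, and the speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p are reported. The short Python sketch below summarizes such timings; the numbers are placeholders, not measurements from Harlow.

    # Summarize a strong-scaling study: speedup S(p) = T(1)/T(p),
    # parallel efficiency E(p) = S(p)/p.
    # Placeholder wall-clock times in seconds, keyed by number of MPI tasks.
    timings = {1: 1000.0, 16: 70.0, 64: 20.0, 256: 7.0}

    t1 = timings[1]
    for p in sorted(timings):
        speedup = t1 / timings[p]
        efficiency = speedup / p
        print(f"{p:4d} tasks: speedup {speedup:7.1f}, efficiency {efficiency:5.2f}")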

As a final note, we refer to the Dec 2019 issue of Bits & Bytes: "Being an interpreted and dynamically-typed language, plain Python is not a language suitable per se to achieve high performance." However, Python scripts that use inter-node communication through mpi4py or an alternative may be run on Harlow; a minimal sketch is given below. In this case we refer to points 3 and 9 above. The Bits & Bytes article provides links and advice on both of those points.
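
As an illustration of an mpi4py script that fits points 3 and 9, the minimal example below performs a simple MPI reduction across tasks. It assumes the user has installed mpi4py against the system MPI library themselves (point 9) and launches it through a SLURM batch script like the one sketched above (point 3).

    # Minimal mpi4py example: each task contributes its rank, and the sum
    # of all ranks is collected on rank 0 with an MPI reduction.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Reduce the ranks of all tasks to a single sum on rank 0.
    total = comm.reduce(rank, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} tasks, sum of ranks = {total}")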

Harlow Quick-Start Guide

Harlow Driving Test