Primer for Pan Users

Mahuika is a Cray CS400 cluster. It replaces Pan, an IBM iDataPlex cluster. The technical differences between the two systems are described here, and there are some important differences between the Pan and Mahuika services.

Logging in

Before you can access Mahuika, you will need to open an account, reset its password, and set up two-factor authentication.

Logging in is then a two-step process. First you log in to a lander (or jump) node (using both your password and second factor), after which you can ssh to a Mahuika login node (equivalent to a Pan build node).
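
For example, the two hops might look like the following. The hostnames shown are illustrative assumptions; use the ones given in your account setup instructions.

    # First hop: log in to the lander (jump) node; you will be prompted for
    # your password and your second factor
    ssh <username>@lander.nesi.org.nz      # hostname assumed; use the one you were given

    # Second hop: from the lander node, ssh to a Mahuika login node
    ssh login.mahuika.nesi.org.nz          # hostname assumed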

File Systems

Perhaps the most significant differences between Pan and Mahuika filesystems are:

  1. The availability of a very large (4.4 PB) scratch space at /nesi/nobackup/<project-id>.
  2. A Hierarchical Storage Management system that allows users to move data to and from offline tape storage, which provides a very large (>100 PB) capacity.
  3. On Mahuika, the project file system is /nesi/project/<project-id>.

Other differences include:

  1. The per-job temporary directories SCRATCH_DIR, TMP_DIR and SHM_DIR are not provided on Mahuika. In place of SCRATCH_DIR you can use any location within /nesi/nobackup/<project-id> (see the sketch after this list).
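
A minimal sketch of creating a per-job scratch directory inside the nobackup file system is shown below. The directory layout (one sub-directory per Slurm job ID) is an illustrative assumption, not a prescribed convention.

    # Create a per-job scratch directory under the project's nobackup space
    # (layout is an assumption; adapt it to your project's conventions)
    SCRATCH_DIR=/nesi/nobackup/<project-id>/scratch/${SLURM_JOB_ID}
    mkdir -p "${SCRATCH_DIR}"
    cd "${SCRATCH_DIR}"

    # ... run the job here ...

    # Clean up when finished to keep nobackup usage down
    rm -rf "${SCRATCH_DIR}"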

Programming Environment

  1. In addition to the familiar GNU and Intel compilers, Mahuika also provides the Cray Programming Environment, which includes the Cray compiler and performance analysis tools. For more information see the section on Compiling software on Mahuika; a brief compile example follows this list.
  2. Allinea's DDT (debugging) and MAP (profiling) tools are available.
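
Under the Cray Programming Environment, compilation typically goes through the Cray compiler driver wrappers rather than invoking a compiler directly. The snippet below is a sketch following standard Cray PE conventions; the module name is an assumption, so check the Compiling software on Mahuika section for the exact modules available.

    # Load the Cray programming environment (module name assumed; check `module avail`)
    module load PrgEnv-cray

    # The cc/CC/ftn wrappers invoke the currently loaded compiler and
    # automatically link MPI and other system libraries
    cc  -o hello_c   hello.c      # C
    CC  -o hello_cpp hello.cpp    # C++
    ftn -o hello_f   hello.f90    # Fortran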

More details are provided here.

Hardware

The most significant differences between Pan and Mahuika hardware are:

  1. All nodes have the same processors (Broadwell) and clock speeds (2.1GHz).
  2. Hyperthreading is enabled; accordingly, compute and large memory nodes have 72 logical cores (see the note on Slurm requests after this list).
  3. Compute nodes have 128 GB memory (assume 3 GB per physical core, or 1.5 GB per logical core).
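
Because Slurm counts logical cores when hyperthreading is enabled, it can be worth being explicit about threads and memory in your job scripts. The directives below are standard Slurm options shown as an illustration only; the exact defaults on Mahuika may differ.

    #SBATCH --cpus-per-task=8       # counted in logical CPUs while hyperthreading is on
    #SBATCH --hint=nomultithread    # request one thread per physical core instead
    #SBATCH --mem-per-cpu=1500M     # roughly 1.5 GB per logical core (3 GB per physical core)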

Slurm

The Slurm configuration is described here. Generally, you will need to specify the queue (or partition) that your job will run in. A number of new features are provided, including a debug QoS that you can use to get fast turnaround on very short test jobs. The Slurm node-type constraints used on Pan (wm, sb, avx, fermi, kepler) need to be removed from your Slurm scripts. A minimal example job script is sketched below.
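
The following sketch illustrates these points. The partition name "large" is an assumption made for illustration; check the Slurm configuration page for the actual partition and QoS names on Mahuika.

    #!/bin/bash
    #SBATCH --job-name=test
    #SBATCH --partition=large       # partition name is an assumption; see the Slurm configuration page
    #SBATCH --qos=debug             # debug QoS for fast turnaround on very short test jobs
    #SBATCH --time=00:05:00
    #SBATCH --ntasks=1
    #SBATCH --mem-per-cpu=1500M
    # Note: no node-type constraints (wm, sb, avx, fermi, kepler) -- those were Pan-specific

    srun ./my_program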
