Mahuika is a Cray CS400 cluster that replaces Pan, an IBM iDataPlex cluster. This page describes the technical differences between the two systems, along with some important differences in the services they provide.
Logging in is a two-step process. First you log in to a lander (or jump) node, using both your password and second factor, after which you can ssh onto a Mahuika login node (equivalent to a Pan build node).
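The two-step login can be performed manually, or streamlined with an SSH `ProxyJump` configuration. A minimal sketch is below; the hostnames are illustrative placeholders, so check the current documentation for the actual addresses.

```shell
# Manual two-step login (hostnames illustrative):
#   1. Connect to the lander node -- you will be prompted for
#      your password and second factor here.
#   2. From the lander node, hop onto a Mahuika login node.
ssh your_username@lander.example.nesi
ssh login.mahuika.example.nesi

# Equivalent single command using a jump host:
ssh -J your_username@lander.example.nesi your_username@login.mahuika.example.nesi
```

Adding a `ProxyJump` entry to `~/.ssh/config` makes the hop automatic, so a plain `ssh mahuika` reaches the login node; the second-factor prompt from the lander node still appears.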
Perhaps the most significant differences between the Pan and Mahuika filesystems are:
- The availability of a very large (4.4 PB) scratch space.
- A Hierarchical Storage Management (HSM) system that allows users to move data to and from offline tape storage, which provides a very large (>100 PB) capacity.
- On Mahuika, the project file system is
Other differences include:
- The per-job temporary directories SCRATCH_DIR, TMP_DIR and SHM_DIR are not provided on Mahuika. In place of SCRATCH_DIR you can use any location of your choice within the scratch filesystem.
- In addition to the familiar GNU and Intel compilers, Mahuika also provides the Cray Programming Environment, which includes the Cray compiler and performance analysis tools. For more information see the section on Compiling software on Mahuika.
- Allinea's DDT (debugging) and MAP (profiling) tools are available.
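Since SCRATCH_DIR is no longer set automatically, a job can create its own per-job working directory instead. A minimal sketch, assuming the scratch filesystem is mounted at `/scratch` (the mount point is an assumption, so substitute your site's actual path):

```shell
#!/bin/bash -e
#SBATCH --job-name=example

# Create a per-job working directory on the scratch filesystem.
# The /scratch mount point is illustrative -- use your site's path.
export SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR"

# ... run your job here ...

# Remove the per-job directory once the job is finished.
cd && rm -rf "$SCRATCH_DIR"
```

Using `$SLURM_JOB_ID` in the path keeps concurrent jobs from colliding, which is what the old per-job SCRATCH_DIR provided.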
More details are provided here.
The most significant differences between Pan and Mahuika hardware are:
- All nodes have the same processors (Broadwell) and clock speeds (2.1GHz).
- Hyperthreading is enabled, so compute and large-memory nodes present 72 logical cores. Note that logical cores can only be used in pairs, i.e. both hyperthreads of a physical core.
- Compute nodes have 128 GB memory (assume 3 GB per physical core, or 1.5 GB per logical core).
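The hyperthreading and memory figures above affect how resources are requested in Slurm. A sketch of a matching request (option values illustrative):

```shell
# Logical cores come in pairs, so 4 logical cores = 2 physical cores.
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
# ~1.5 GB per logical core stays within the 128 GB compute-node total.
#SBATCH --mem-per-cpu=1500M
```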
The Slurm configuration is described here. Generally, you will need to specify the queue (or partition) that your job will run in. A number of new features are provided, including a debug QoS that you can use to get fast turnaround on very short test jobs. The Slurm node-type constraints used on Pan (wm, sb, avx, fermi, kepler) need to be removed from your Slurm scripts.
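The points above can be sketched in a job script header; the partition name is a placeholder, and only the debug QoS is named in this page:

```shell
# Partition name is illustrative -- use one listed for Mahuika.
#SBATCH --partition=large
# Debug QoS for fast turnaround on very short test jobs.
#SBATCH --qos=debug
#SBATCH --time=00:05:00

# Remove any Pan-era node-type constraints, e.g. lines such as:
#   #SBATCH --constraint=sb
```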