Māui is a Cray XC50 supercomputer featuring Skylake Xeon nodes, the Aries interconnect, and IBM ESS Spectrum Scale storage. NeSI has access to 57% of Māui's resources.

Māui is designed to provide a capability (supercomputer) High Performance Computing resource, allowing researchers to run simulations and calculations that require large numbers of processing cores working in a tightly-coupled parallel fashion, as well as interactive data analysis. To support workflows that are primarily single-core jobs, and to provide virtual laboratory services, Māui has additional nodes optimised for this purpose: the Māui Ancillary nodes.
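As a sketch of the kind of tightly-coupled workload Māui targets, a minimal Slurm batch script for an MPI job might look like the following. The partition name, project code placeholder, and executable are illustrative assumptions, not taken from this page; check current NeSI documentation for the correct values.

```shell
#!/bin/bash -e
#SBATCH --job-name=mpi_example     # placeholder job name
#SBATCH --partition=nesi_research  # assumed Māui partition name; verify with NeSI docs
#SBATCH --time=01:00:00            # wall-clock limit
#SBATCH --nodes=4                  # 4 of the 464 XC50 compute nodes
#SBATCH --ntasks-per-node=40      # one MPI rank per physical core (40 per node)

# srun is Slurm's parallel launcher; ./my_mpi_app is a placeholder binary
srun ./my_mpi_app
```

Submitted with `sbatch` from a Māui login (build) node, this would request 160 physical cores across four nodes.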

The supercomputer login or build nodes (maui01 and maui02) provide access to the full Cray Programming Environment (e.g. editors, compilers, linkers, and debugging tools). Typically, users ssh to these nodes after logging on to the NeSI Lander node. Jobs can be submitted to the HPC from these nodes.
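Because the login nodes are reached via the Lander node, the two-hop ssh can be captured in an `~/.ssh/config` fragment along these lines. The hostnames below follow NeSI's naming conventions but are assumptions; verify them against current NeSI access documentation.

```
# Assumed hostnames -- verify against current NeSI access documentation
Host lander
    HostName lander.nesi.org.nz
    User your_nesi_username

# Jump through the Lander node to reach the Māui login (build) nodes
Host maui
    HostName login.maui.nesi.org.nz
    User your_nesi_username
    ProxyJump lander
```

With this in place, `ssh maui` performs both hops in one command (ProxyJump requires OpenSSH 7.3 or later).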

Important Notes

  1. The Cray Programming Environment on the XC50 (supercomputer) differs from that on Mahuika and the Māui Ancillary nodes.
  2. The /home, /nesi/project, and /nesi/nobackup filesystems are mounted on Māui.
  3. The I/O subsystem on the XC50 is designed to provide high bandwidth to disk, but not high IOPS (I/O operations per second). Codes that require high IOPS should be run on either Mahuika or the Māui Ancillary nodes (whichever can provide the necessary computational resources).

All Māui supercomputer resources are listed below; the Māui Ancillary Node resources are described on a separate page.

Māui Supercomputer (Cray XC50)

Login nodes (also known as eLogin nodes): 80 cores in 2 × Skylake (Gold 6148, 2.4 GHz, dual socket, 20 cores per socket) nodes

Compute nodes: 18,560 cores in 464 × Skylake (Gold 6148, 2.4 GHz, dual socket, 20 cores per socket) nodes

Hyperthreading: Enabled (accordingly, Slurm will see 37,120 cores)

Theoretical peak performance: 1.425 PFLOPS

Memory capacity per compute node: 232 nodes have 96 GB each; the remaining 232 have 192 GB each

Memory capacity per login (build) node: 768 GB

Total system memory: 66.8 TB

Interconnect: Cray Aries, Dragonfly topology

Workload manager: Slurm (multi-cluster)

Operating system: Cray Linux Environment (CLE) 6.0 UP06, based on SLES 12 SP2


Storage (IBM ESS)

Scratch capacity (accessible from all Māui, Mahuika, and Ancillary nodes): 4,412 TB (IBM Spectrum Scale, version 5.0); total I/O bandwidth to disk is 130 GB/s

Persistent storage (accessible from all Māui, Mahuika, and Ancillary nodes): 1,765 TB (IBM Spectrum Scale, version 5.0) shared storage, i.e. the /home and /nesi/project filesystems; total I/O bandwidth to disk is 65 GB/s

Offline storage (accessible from all Māui, Mahuika, and Ancillary nodes): of the order of 100 PB (compressed)

Note: Although hyperthreading is enabled, projects are charged only for the physical cores they use.
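For codes that do not benefit from hyperthreading, Slurm can be asked to place one task per physical core. A minimal sketch using standard Slurm directives (the directive names are generic Slurm options, not specific to this page):

```shell
#SBATCH --ntasks-per-node=40     # 40 physical cores per XC50 compute node
#SBATCH --hint=nomultithread     # one task per physical core; ignore the second hardware thread
```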


