Mahuika

Mahuika is a Cray CS400 Cluster featuring Intel Xeon Broadwell nodes, FDR InfiniBand interconnect, and NVIDIA GPGPUs.

Mahuika is designed to provide a capacity (high-throughput) HPC resource that allows researchers to run many small compute jobs (from one to a few hundred cores) simultaneously, and to conduct interactive data analysis. To support jobs that require large (up to 500 GB) or huge (up to 4 TB) amounts of memory, or GPGPUs, and to provide virtual laboratory services, Mahuika has additional nodes optimised for these purposes: the Mahuika Ancillary Nodes.
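
Jobs targeting the large-memory ancillary nodes request their memory through Slurm. The script below is a minimal illustrative sketch only: the partition name "hugemem", the memory figure, and the executable are assumptions, not confirmed Mahuika configuration.

```sh
#!/bin/bash -e
# Illustrative Slurm script for a large-memory job on an ancillary node.
# The partition name "hugemem" and the executable are assumptions.
#SBATCH --job-name=large-mem-example
#SBATCH --partition=hugemem     # assumed partition for the large-memory nodes
#SBATCH --mem=1000G             # request ~1 TB of memory on a single node
#SBATCH --time=02:00:00
srun ./my_analysis              # hypothetical placeholder executable
```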

The Mahuika login (or build) nodes (mahuika01 and mahuika02) provide access to the GNU, Intel and Cray programming environments (e.g. editors, compilers, linkers, debugging tools). Typically, users ssh to these nodes after logging on to the NeSI Lander node.
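
In practice the two-hop login looks something like the following; the hostnames shown are illustrative assumptions, not confirmed addresses.

```sh
# Two-hop login sketch; hostnames are assumptions, not confirmed addresses.
ssh username@lander.nesi.org.nz   # first hop: the NeSI Lander node
ssh mahuika01                     # second hop: a Mahuika login (build) node
```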

Notes

  1. The Cray Programming Environment on Mahuika differs from that on Māui.
  2. The /home, /nesi/project, and /nesi/nobackup filesystems are mounted on Mahuika.
  3. To learn how to compile and link code on Mahuika, see the section entitled "Compiling software on Mahuika"; a brief sketch follows this list.
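
As a rough sketch of a build on the login nodes: the module names and the use of a Cray-style compiler wrapper below are assumptions; the "Compiling software on Mahuika" section is the authoritative reference.

```sh
# Hypothetical compile-and-link sketch; module names and the "cc" wrapper
# are assumptions. See "Compiling software on Mahuika" for the real workflow.
module load PrgEnv-cray    # or PrgEnv-gnu / PrgEnv-intel
cc -O2 -o hello hello.c    # the wrapper supplies the right flags and libraries
```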

Mahuika HPC Cluster (Cray CS400)

Login nodes: 72 cores in 2 × Broadwell (E5-2695v4, 2.1 GHz, dual socket, 18 cores per socket) nodes
Compute nodes: 8,136 cores in 226 × Broadwell (E5-2695v4, 2.1 GHz, dual socket, 18 cores per socket) nodes
Compute nodes (reserved for NeSI Cloud): 288 cores in 8 × Broadwell (E5-2695v4, 2.1 GHz, dual socket, 18 cores per socket) nodes
Hyperthreading: Enabled (accordingly, Slurm will see 16,272 cores)
Theoretical peak performance: 308.6 TFLOPS
Memory capacity per compute node: 128 GB
Memory capacity per login (build) node: 512 GB
Total system memory: 31.0 TB
Interconnect: FDR (54.5 Gb/s) InfiniBand to EDR (100 Gb/s) core fabric; 3.97:1 fat-tree topology
Workload manager: Slurm (multi-cluster; see the submission sketch after this table)
Operating system: CentOS 7.4
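
Because hyperthreading is enabled and Slurm runs in multi-cluster mode, submissions can name the cluster explicitly and pin one task per physical core. The flags below are standard Slurm; that the cluster is registered under the name "mahuika", and the script name, are assumptions.

```sh
# Standard Slurm multi-cluster usage; "job.sl" is a placeholder script name.
sbatch --clusters=mahuika --hint=nomultithread job.sl  # one task per physical core
squeue --clusters=mahuika --user=$USER                 # queue status on this cluster
```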

Storage (IBM ESS)

Scratch capacity (accessible from Mahuika, Māui, and Ancillary nodes): 4,412 TB (IBM Spectrum Scale, version 5.0). Total I/O bandwidth to disks is ~130 GB/s.
Persistent storage (accessible from Mahuika, Māui, and Ancillary nodes): 1,765 TB (IBM Spectrum Scale, version 5.0), shared between Mahuika and Māui; hosts the /home and /nesi/project filesystems. Total I/O bandwidth to disks is ~65 GB/s.
Offline storage (accessible from Mahuika, Māui, and Ancillary nodes): of the order of 100 PB (compressed).
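
Since the /home, /nesi/project, and /nesi/nobackup filesystems are mounted on the login nodes, their capacity and usage can be inspected with ordinary tools; this is a generic sketch, not a NeSI-specific command.

```sh
# Generic check of the shared Spectrum Scale (GPFS) mounts from a login node.
df -h /home /nesi/project /nesi/nobackup
```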
