Mahuika Ancillary Nodes

The Mahuika Ancillary Nodes provide resources for:

  1. The bigmem Slurm partition, for jobs that need up to approximately 500 GB of memory (a submission sketch follows this list);
  2. The hugemem Slurm partition, for jobs that require more than 500 GB of memory;
  3. The gpu Slurm partition;
  4. A virtualised environment that supports:
    • Virtual laboratories that provide interactive access to data stored on the Mahuika and Māui filesystems, together with domain analysis toolsets (e.g. Seismic, Genomics, Climate);
    • Remote visualisation of data resident on the filesystems.
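
As an illustrative sketch only: the partition names above come from this article, while the account code, job name, resource values, and program name below are placeholders you would replace with your own. A batch script for the bigmem partition might look like:

    #!/bin/bash -e
    # All names and values below are placeholders, except the partition
    # name, which this article defines.
    #SBATCH --job-name=bigmem-example
    #SBATCH --account=nesi99999
    #SBATCH --partition=bigmem
    #SBATCH --mem=400G
    #SBATCH --cpus-per-task=8
    #SBATCH --time=02:00:00

    # Replace with the actual large-memory workload.
    srun ./my_memory_hungry_program

A job needing more than 500 GB would instead specify --partition=hugemem; the gpu partition is sketched at the end of this article.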

Scientific workflows may access resources across the Mahuika HPC Cluster and any (multi-cluster) Slurm queues on the Mahuika or Māui Ancillary Nodes, as sketched below. The /home, /nesi/project, and /nesi/nobackup filesystems are mounted on the Mahuika Ancillary Nodes.
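
A minimal sketch of this multi-cluster behaviour, assuming Slurm's standard --clusters (-M) flag and a placeholder script name: a job submitted from a Mahuika login node can target an Ancillary Node partition and read the same mounted filesystems.

    # Submit an existing batch script to the bigmem partition through
    # Slurm's multi-cluster interface (-M / --clusters).
    sbatch --clusters=mahuika --partition=bigmem my_job.sl

    # Because /home, /nesi/project, and /nesi/nobackup are mounted on the
    # Ancillary Nodes, the job can use the same paths as on the HPC
    # cluster, e.g. /nesi/project/<project_code>/input.dat (placeholder).

    # Monitor the queue on the target cluster.
    squeue --clusters=mahuika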

Notes

  1. The Mahuika Ancillary Nodes use Broadwell processors, while the Māui Ancillary Nodes have Skylake processors.
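
A practical implication of this difference (an inference, not stated in this article): Skylake supports instructions, such as AVX-512, that Broadwell lacks, so binaries built for the Māui Ancillary Nodes may fail on the Mahuika Ancillary Nodes. A hedged GCC example targeting the common architecture, with a placeholder program name:

    # -march=broadwell limits code generation to instructions that both
    # the Broadwell (Mahuika) and Skylake (Māui) Ancillary Nodes support.
    gcc -O2 -march=broadwell -o my_tool my_tool.c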

Mahuika Ancillary Nodes (Cray CS400)

  Large memory nodes: 576 cores in 16 × Broadwell (E5-2695 v4, 2.1 GHz, dual socket, 18 cores per socket) nodes
  Huge memory node: 64 cores in 1 × Broadwell (E7-4850 v4, 2.1 GHz, quad socket, 16 cores per socket) node
  Hyperthreading: Enabled
  Local disk: 1.2 TB SSD (on all Ancillary Nodes)
  Operating system: CentOS 7.4
  GPGPUs: 4 large memory nodes, each with 2 NVIDIA Tesla P100s (a request sketch follows this table)
  Memory capacity per large memory node: 512 GB
  Memory capacity per huge memory node: 4 TB (4,096 GB)
  Interconnect: FDR (54.5 Gb/s) InfiniBand to EDR (100 Gb/s)
  Workload manager: Slurm (multi-cluster)
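
For the gpu partition and the P100 GPGPUs listed above, a minimal request sketch, assuming Slurm's generic --gres=gpu syntax (the job name, resource values, and program name are placeholders):

    #!/bin/bash -e
    # The partition name comes from this article; everything else is a
    # placeholder to adapt to your own job.
    #SBATCH --job-name=gpu-example
    #SBATCH --partition=gpu
    #SBATCH --gres=gpu:1
    #SBATCH --mem=32G
    #SBATCH --time=01:00:00

    # Replace with the actual GPU workload.
    srun ./my_gpu_program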
