Job Prioritisation & Project classes

The priority assigned to each job is determined by:

  • Whether your job is a debug job;
  • The Project Class your job is assigned to (see Table 1 below);
  • How much compute resource you have used in the recent past;
  • How long your job has been waiting in the queue (i.e. partition);
  • Whether your job is small enough to run without impacting other scheduled jobs, and
  • Whether you have exhausted your HPC Project's core-h/node-h allocation.

Table 1: Base job priorities on NeSI platforms (both Maui and Mahuika)

| Base Priority    | Project Class            | QoS                  | Comment                                                                                             |
|------------------|--------------------------|----------------------|-----------------------------------------------------------------------------------------------------|
| Highest priority | Merit                    | merit                | Ensures that whichever partition your job runs in, it will get the highest base priority.            |
| High priority    | Institution, Subscriber  | institution          | Use this QoS if your job is accessing institutional resources, i.e. as a Collaborator or Subscriber. |
| Low priority     | Proposal Development     | proposal-development | Use this QoS if your job is part of a proposal development project.                                  |
| Lowest priority  | Post-graduate            | post-graduate        | Use this QoS if your job is part of a post-graduate project.                                         |
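
For illustration only, here is a minimal sketch of a batch script that requests one of these QoS values with Slurm's --qos option. The project code nesi99999, the resource requests and the chosen QoS are placeholders; use your own project code and the QoS that matches your allocation class.

```bash
#!/bin/bash -e
#SBATCH --job-name=qos-example          # illustrative job name
#SBATCH --account=nesi99999             # placeholder project code
#SBATCH --qos=proposal-development      # one of the QoS names in Table 1
#SBATCH --time=00:10:00                 # wall-time limit
#SBATCH --ntasks=1
#SBATCH --mem=512MB

# SLURM_JOB_QOS is set by Slurm inside the job environment
echo "Running under QoS: ${SLURM_JOB_QOS}"
```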

The base priorities in the table above do not mean that all jobs in high-priority classes will run ahead of all jobs in lower-priority classes. They do, however, contribute to the overall priority: for example, a job using the Merit QoS will start before a job using the Institution QoS if both jobs were submitted at the same time and are otherwise equal.

The order in which jobs run within a partition is based on their run-time priority, which depends on the project's fair-share score and on whether the job can run as backfill. The base priority is modulated by the following factors:

  1. Job priority decreases as the project accumulates core-hours over the last 30 days, across all partitions. This "fair share" policy means that projects that have consumed many CPU core hours in the recent past compared to their expected rate of use (either by submitting and running many jobs, or by submitting and running large jobs) will have a lower priority, while projects with little recent activity compared to their expected rate of use will see their waiting jobs start sooner. We do not have a strict "first-in-first-out" queue policy. A way to inspect your project's fair-share standing is sketched after this list.
  2. Job priority increases with job wait time in the partition (unless the project has exhausted its CPU core hour allocation, in which case this does not apply). After the history-based fair-share calculation in point 1, the next most important factor in each job's priority is the amount of time the job has already spent waiting in the partition. Among the jobs belonging to a single project, scheduling therefore most closely follows a "first-in-first-out" policy.
  3. Within any partition, job priority increases with job size, in cores. This least important factor slightly favours larger jobs, to partly offset the inherently longer wait before enough cores become free to start a single large job.
  4. Where a job can run as backfill, without holding up any higher-priority job, it will start immediately.
  5. All projects have the same priority in the debug QoS, so jobs submitted using "debug" are scheduled on a first-in, first-out basis.
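
As a rough way to inspect the fair-share standing described in point 1, standard Slurm provides the sshare command, which reports recent usage and the resulting fair-share factor per account. The project code below is a placeholder, and the exact columns shown depend on the site's configuration:

```bash
# Fair-share summary for one project (nesi99999 is a placeholder code)
sshare -A nesi99999

# The same, broken down by individual users within the project
sshare -A nesi99999 -a
```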

To view the weights applied to each priority factor, run: sprio -w

To see the priorities of your currently pending jobs, run: sprio -u $USER
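
If you want to see where your pending jobs sit relative to other jobs waiting in the same partition, standard Slurm also lets you sort squeue output by priority; the partition name below is a placeholder:

```bash
# List pending jobs in a partition, highest priority first
squeue --partition=large --state=PENDING --sort=-p
```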

For an overview of how Slurm is configured on Mahuika and Maui, see the Slurm Job Scheduler Design section.
