
Submit Jobs

Submitting jobs on the cluster

The Slurm workload scheduler is used to manage the compute nodes on the cluster. Jobs must be submitted through the scheduler to have access to compute resources on the system. NOTE: Running jobs on the login nodes is prohibited and may result in account suspension.

There are two groups of partitions (aka "queues") currently configured on the system. One group is specifically designated for CPU jobs, i.e., non-GPU jobs:

  • defq - default compute nodes, CPU only, with either 64, 128, or 256GB of memory
  • 128gb, 256gb - explicitly request compute nodes with 128GB or 256GB of memory, respectively, for larger-memory jobs
  • debug - please see DebugPartition for more details
  • short - has access to 128GB Ivy Bridge nodes with a shorter (currently 2-day) time limit, designed for quicker turnaround of shorter-running jobs. Some nodes here overlap with defq.
  • 2tb - a special-purpose machine with 2TB of RAM and 48 3GHz CPU cores. Access to this partition is restricted; please email hpchelp@gwu.edu if you have applications appropriate for this unique system.
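
As a minimal sketch, a job is directed to one of the CPU partitions above with the -p (or --partition) flag. The script name, the executable ./my_program, and the resource sizes below are placeholders for illustration only:

#!/bin/bash
# required time limit: two hours
#SBATCH -t 2:00:00
# request a node from the 256gb partition for a larger-memory job
#SBATCH -p 256gb
# one node, four tasks
#SBATCH -N 1 -n 4

./my_program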

The other group is specifically for jobs requiring GPU resources and should not be used for jobs that do not need them:

  • gpu - has access to the GPU nodes, each has two NVIDIA K20 GPUs
  • gpu-noecc - has access to the same GPU nodes, but disables error-correction on the GPU memory before the job runs
  • ivygpu-noecc - has the same NVIDIA K20 GPUs, but with newer Ivy Bridge Xeon processors
  • allgpu-noecc - combines both ivygpu-noecc and gpu-noecc
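
A minimal sketch of a GPU job on the gpu partition is shown below. It assumes GPUs are requested with Slurm's generic-resource (--gres) syntax, which may not be necessary if whole GPU nodes are allocated; the module name cuda and the executable ./gpu_program are placeholders:

#!/bin/bash
# required time limit: one hour
#SBATCH -t 1:00:00
# one node in the gpu partition
#SBATCH -p gpu -N 1
# request one of the node's two K20 GPUs (assumes GPUs are configured as a GRES)
#SBATCH --gres=gpu:1

# load a CUDA toolkit module (module name is an assumption)
module load cuda

./gpu_program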

Note that you must set a time limit (with the -t flag; for example, -t 1:00:00 would set a limit of one hour) for your jobs when submitting, otherwise they will be immediately rejected. This allows the Slurm scheduler to keep the system busy by backfilling while trying to allocate resources for larger jobs.

The maximum time limit for any job is 14 days, but you are encouraged to keep jobs limited to a day; longer-running processes should checkpoint and restart to avoid losing significant amounts of outstanding work if there is a problem with the hardware or cluster configuration. We will not under any circumstances increase the time limit of a job that has already begun, so please estimate your required time carefully and request a time limit accordingly. If you are unsure, we will provide guidance to help you better understand your job's requirements. In addition, a single job may not exceed 224 node*days (the product of nodes and days), which prevents any single job from allocating more than 16 nodes for the full 14 days.
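
For example, a one-hour limit can be set (or overridden) at submission time on the command line; job_script.sh here is a placeholder for your own script:

sbatch -t 1:00:00 job_script.sh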


Below is an example of a simple MPI job script, which could be submitted with sbatch job_script.sh:

#!/bin/sh
# one hour timelimit:
#SBATCH --time 1:00:00
# default queue, 32 processors (two nodes worth)
#SBATCH -p defq -n 32

# load the OpenMPI module to provide mpirun and the MPI libraries
module load openmpi

# launch the MPI program "test" on the allocated processors
mpirun ./test

The Slurm documentation describes the more advanced features; the Slurm Quick-Start User Guide provides a good overview. The use of Job Arrays (see Job Array Support) is mandatory for anyone submitting a large quantity of similar jobs; a sketch is shown below.
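
As a minimal sketch of a job array, each array task receives its own index in $SLURM_ARRAY_TASK_ID; the script name, the program ./process_file, and the input-file naming scheme are hypothetical:

#!/bin/bash
# required time limit for each array task
#SBATCH -t 1:00:00
#SBATCH -p defq
# run 10 tasks, indexed 1 through 10, as a single array job
#SBATCH --array=1-10
# %A is the array job ID, %a the task index
#SBATCH -o array_%A_%a.out

# each task processes a different input file, selected by its index
./process_file input_${SLURM_ARRAY_TASK_ID}.dat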

Matlab Example

#!/bin/bash

# set output and error output filenames, %j will be replaced by Slurm with the jobid
#SBATCH -o testing%j.out
#SBATCH -e testing%j.err 

# single node in the "short" partition
#SBATCH -N 1
#SBATCH -p short

# half hour timelimit
#SBATCH -t 0:30:00

module load matlab
# forward the MATLAB license server ports (27000 and 27001) through login4
# so that the compute node can reach the license server
ssh login4 -L 27000:128.164.84.113:27000 -L 27001:128.164.84.113:27001 -N &
# point MATLAB at the locally forwarded license port
export LM_LICENSE_FILE="27000@localhost"

# test.m is your matlab code
matlab -nodesktop < test.m
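
Assuming the script above is saved as, e.g., matlab_job.sh (a placeholder name), it is submitted like any other job:

sbatch matlab_job.sh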