What is Slurm?

Many research software applications consume large amounts of compute resources—CPU, GPU, or RAM. At SCI, a number of high-performance machines provide substantial resources. We use a job scheduling infrastructure, based on Slurm, to manage access to those resources so that they can be used consistently and efficiently.

The general approach when using Slurm is that instead of manually starting processes that consume large amounts of resources, you submit a “job” to the scheduling infrastructure (Slurm), along with a specification of the resources needed to run the job (e.g., GPU, CPU, RAM). That infrastructure is then responsible for finding appropriate computer resources as soon as they become available and running the job on your behalf. This prevents computers from becoming overloaded and bogged down and eliminates the need to hunt around for free resources.

How does it work?

There are several computers and services that comprise the Slurm infrastructure, but there are really only two components that you need to be aware of when submitting jobs through Slurm:

User Node – Instead of logging directly into individual computers to run jobs, you orchestrate the job scheduler from one central machine, which at SCI is compute.sci.utah.edu. When logged into this machine (via SSH), you have access to all the Slurm commands, such as sinfo, sbatch, etc. This is where you go to do your work and manage your jobs. NOTE: this machine is not meant for high-intensity compute workloads; that is what the compute servers are for.

Compute Server – Compute servers are the machines with the capacity to run jobs (large amounts of CPU/RAM, GPUs, etc.); they are the nodes that actually do the computing. As a user, you should be less concerned with which specific machine your job runs on and more with what resources you need to get your work done. Slurm figures out the best resources to use given your stated needs and allocates/reserves those resources for your job and your job alone. When those resources are available, your job is run.

By orchestrating all of the jobs across all of the available resources, Slurm is able to make efficient use of the available compute power, particularly when dealing with long-running jobs that require a lot of capacity.

How do I use it?

First off, you need a Slurm account created for you. If you are new to SCI, your account has already been created. If you started at SCI before June of 2025, you may need to contact SCI IT to get your account created, either by email or on Slack at #slurm-users.

Any powerful tool requires a bit of learning, understanding, and familiarity to use well—and Slurm is no exception. There are many resources available online, but a basic quick-start guide is given here.

Overview

Once you SSH into compute, which is just a normal interactive Linux computer, you’ll have access to a number of commands that allow you to start and stop jobs, monitor running jobs and resource usage, and more. When a scheduled job completes, your program’s output will be logged to a file.

A few basic pieces of Slurm terminology:

Node – an individual computer that provides some compute resources such as CPU, GPU and/or RAM
Partition – a group of nodes with similar sets of resources across which Slurm can schedule jobs
Account – a Slurm entity that groups users and controls which partitions they can submit jobs to (see Accounts and Partitions below)
Job – a submitted task or set of tasks given to Slurm to be scheduled and run whenever resources are available; job steps are discrete (and possibly parallelized) tasks that make up a job
Queue – the list of jobs waiting to be scheduled

Basic Commands

There are many commands and options for using Slurm, which can feel overwhelming at first. However, the basics are relatively straightforward.

One of the first things you may want to do after logging into compute.sci.utah.edu is to see a list of available resources. This is done with the sinfo command:

$ sinfo
PARTITION        AVAIL  TIMELIMIT  NODES  STATE NODELIST
general-cpu*        up 1-00:00:00      8  idle  cibcsm[1-4],spartacus-[4-7]
general-cpu-long    up 5-00:00:00      8  idle  cibcsm[1-4],spartacus-[4-7]
general-gpu         up 1-00:00:00      3  idle  cibcgpu[1-2,4]
general-gpu         up 1-00:00:00      2  mix   spartacus-[10,12]
general-gpu         up 1-00:00:00      2  alloc spartacus-[1-2]
preempt-gpu         up 1-00:00:00      4  idle  atlas,eris,pegasus,spartacus-3
preempt-cpu         up 1-00:00:00      2  idle  spartacus-[8-9]
wormulon            up    2:00:00      4  idle  wormulon-[1-4]

For a more detailed view that lists individual nodes with their states and CPU counts, you can use the node-oriented output with a custom format string:

$ sinfo -N -o '%16P %12N %.6t %.4c %10l'
PARTITION        NODELIST     STATE CPUS TIMELIMIT
general-cpu*     cibcsm1       idle  192 1-00:00:00
general-cpu*     cibcsm2       idle  192 1-00:00:00
general-cpu*     spartacus-4   idle  128 1-00:00:00
general-gpu      cibcgpu1      idle   32 1-00:00:00
general-gpu      spartacus-1  alloc   64 1-00:00:00
...

As you can see here, there are a number of partitions displayed, each of which is composed of several nodes, and each node has its own current state (e.g. idle, fully or partially allocated, or down/unavailable). Note that you will only see partitions that you have access to, so your results may vary from what is shown here. You can get detailed information about nodes by running scontrol show node <nodename>.
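For example (using cibcgpu1 here purely as an illustration), you can check a node's CPU count, memory, GPUs, state, and features:

$ scontrol show node cibcgpu1

# Or pull out just the lines you usually care about
$ scontrol show node cibcgpu1 | grep -iE 'cputot|realmemory|gres|state|features|partitions'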

To see the list of currently running and pending jobs, use the squeue command:

$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             28682 general-g prad_seg    hamid  R    1:37:39      1 spartacus-12
             28225 general-g     bash arefeen.  R   20:14:32      1 cibcgpu3
             28286 general-g    E-t-r shossein  R   17:32:10      1 spartacus-1
             28350 general-g imagenet    tolga  R   12:58:10      1 spartacus-10
             28588 general-g     4121 zrandhaw PD       0:00      1 (Priority)

Here, you can see there are several jobs running (R) across various nodes, and one job pending (PD) waiting for resources to become available. The REASON column for pending jobs tells you why the job hasn’t started yet—Priority means other higher-priority jobs are ahead of it, while Resources means the requested resources aren’t currently available.

To run a job, use the sbatch command. You’ll need to write a file that is essentially a bash script with some special #SBATCH directives that tell Slurm how to run your job. While Slurm has many features, here is a simple example script that:

  • Prints the computer hostname and start time
  • Waits for 5 seconds
  • Prints the end time
  • Requests 1 CPU and 1 GB of RAM on the wormulon (testing) partition

#!/bin/bash
#SBATCH --job-name=mytest
#SBATCH --account=common
#SBATCH --partition=wormulon
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00
#SBATCH --mem=1G
#SBATCH --output=log_%J.txt
#SBATCH --mail-user=
#SBATCH --mail-type=ALL

srun echo "Running on:" $( hostname )
srun echo "Time start:" $( date )
srun sleep 5
srun echo "Time end  :" $( date )

And here is what submitting and monitoring the job looks like:

$ sbatch myjob.sh
Submitted batch job 28750

$ squeue --me
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             28750  wormulon   mytest    clake  R       0:03      1 wormulon-1

$ cat log_28750.txt
Running on: wormulon-1
Time start: Sun Feb  8 12:05:01 MST 2026
Time end  : Sun Feb  8 12:05:06 MST 2026

As you can see, Slurm was asked to run this job on the wormulon partition, and it selected the available node wormulon-1 to run it (as verified by the output of the hostname command). Note how the --output directive logs the output to a file named log_%J.txt, where %J is replaced by the Job ID assigned by Slurm. Also note the use of squeue --me, which filters the job queue to show only your own jobs.
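While a job is running, you can also follow its log file to watch output as it is produced. For example:

# Follow the job's output live (Ctrl-C stops watching; the job keeps running)
$ tail -f log_28750.txt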

Accounts and Partitions

Slurm uses accounts and partitions together to control who can run jobs where. This is one of the most common sources of confusion for new users, and understanding how they work will save you a lot of troubleshooting time.

What is an account?

A Slurm account is not the same as your login account. It’s an organizational entity within Slurm that groups users and determines which resources they can access. Every SCI user is a member of at least the common account, which provides access to the general-use partitions. Some users may also belong to additional accounts tied to their research group (e.g. tolga-lab, medvic, ceg), which provide access to group-specific partitions.

When you submit a job, Slurm needs to know which account to charge it to. If you only belong to one account, Slurm will use it automatically. If you belong to multiple accounts, you may need to specify which one to use with the --account flag.
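For example, a user who belongs to both common and a group account (ceg is used here purely as an illustration) can pick the account at submission time:

# Explicitly charge this job to the ceg account
$ sbatch --account=ceg myjob.sh

The same thing can be done inside the script with an #SBATCH --account=ceg directive, as shown in the examples below.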

Checking your account and partition access

To see which Slurm accounts you belong to, run:

$ sacctmgr show association where user=$USER format=Account,User,QOS
   Account       User                  QOS
---------- ---------- --------------------
    common    jdoe                  normal
       ceg    jdoe                  normal

In this example, the user jdoe belongs to two accounts: common and ceg.

Which partitions you can actually submit jobs to is determined by a combination of three things, not just your account:

  1. Account – Your Slurm account (e.g. common, ceg) must be permitted by the partition. At SCI, all partitions currently allow all accounts, so this is rarely the issue on its own.
  2. Linux group membership – Research group partitions (e.g. tolga-lab, medvic, CEG) are restricted to specific Linux groups. If you aren’t in the right Linux group, you won’t be able to submit to that partition regardless of your Slurm account.
  3. QOS (Quality of Service) – Some partitions require a specific QOS. For example, the preemptible partitions (preempt-gpu, preempt-cpu) require --qos=preemptible. If your account doesn’t have access to that QOS, the job will be rejected.

The quickest way to see which partitions are available to you is to simply run sinfo—only partitions you can submit to will be listed.
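If you want to check a specific partition's restrictions directly, scontrol will show its allowed accounts, groups, and QOS (the values shown here are illustrative):

$ scontrol show partition preempt-gpu | grep -i allow
   AllowGroups=ALL AllowAccounts=ALL AllowQos=preemptible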

Using the right account in your job scripts

It is good practice to always include the --account directive in your job scripts, especially if you belong to more than one account. For general-use partitions, use:

#SBATCH --account=common
#SBATCH --partition=general-gpu

If you are submitting to a research-group partition, use your group’s account:

#SBATCH --account=ceg
#SBATCH --partition=CEG

Fixing “Invalid account or account/partition combination”

If you see this error:

sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified

It means the account and partition in your job script don’t match up. Here is how to diagnose and fix it:

  1. Check your accounts by running: sacctmgr show association where user=$USER format=Account,User,QOS
  2. Make sure you’re specifying --account in your script. If you belong to multiple accounts and omit it, Slurm may pick the wrong default.
  3. Match the account to the partition. Common pairings:
    • --account=common works with: general-cpu, general-cpu-long, general-gpu, wormulon, preempt-cpu, preempt-gpu
    • Group accounts (e.g. --account=ceg) work with their group’s partition (e.g. CEG) as well as the general and preemptible partitions
  4. Check the QOS. Some partitions require a specific QOS (see Preemptible Partitions below). For example, the preempt partitions require --qos=preemptible.

If your account is missing or you believe you should have access to a partition that you don’t, contact SCI IT by email or on Slack at #slurm-users.

What resources do I have access to?

The compute resources available at SCI change over time, as SCI IT or different research groups purchase or upgrade equipment. Below is a summary of the partitions and what they offer.

General partitions (available to all SCI users)

These partitions are open to everyone with a Slurm account. Use --account=common when submitting to them.

Partition              Purpose                     Max Wall Time  Default Time  Nodes                                   Key Resources
wormulon               Testing and learning Slurm  2 hours        20 min        wormulon-[1-4]                          Minimal CPU/RAM; wormulon-4 has 2x GTX 1080 Ti GPUs
general-cpu (default)  CPU-intensive workloads     1 day          1 hour        cibcsm[1-4], spartacus-[4-7]            Up to 192 CPUs and 3 TB RAM per node
general-cpu-long       Long-running CPU workloads  5 days         1 hour        Same as general-cpu                     Same nodes, longer time limit
general-gpu            GPU-intensive workloads     1 day          1 hour        cibcgpu[1-4], spartacus-[1-2,10,12-13]  Various GPUs from Titan V to H200 (see GPU Resources)

The wormulon partition is meant for learning and testing Slurm only. The nodes are not powerful and are not intended for anything computationally intensive—just a playground to try things out and debug, where resource contention is limited.

Note about general-cpu-long: This partition uses the same physical nodes as general-cpu but allows jobs to run for up to 5 days. To use it, you must specify --partition=general-cpu-long. If your job’s time limit exceeds 1 day, you must use this partition.
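For example, a minimal header for a three-day CPU job might look like this (adjust the resource requests to your actual needs):

#SBATCH --account=common
#SBATCH --partition=general-cpu-long
#SBATCH --time=3-00:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=64G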

A note on fairshare priority: On the general partitions, Slurm uses a fairshare algorithm to determine job priority. If you have been running many jobs recently, your priority will gradually decrease relative to other users, ensuring that everyone gets a fair share of the available resources. Priority recovers over time as your usage declines. This means that if your jobs are pending with reason (Priority), it may be because other users who have used fewer resources recently are being given a turn.
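If you are curious where a pending job stands, the standard sprio and sshare commands show the priority factors and your current fairshare usage (the exact columns depend on how the priority plugin is configured):

# Priority factors (age, fairshare, etc.) for your pending jobs
$ sprio -u $USER

# Your recent usage and fairshare value
$ sshare -u $USER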

Preemptible partitions (available to all, with caveats)

The preemptible partitions give all SCI users access to additional compute resources—specifically the hardware owned by individual research groups—but with an important tradeoff: your job can be stopped and requeued at any time if a member of the owning research group needs those resources.

Partition    Max Wall Time  Nodes                                                                     Key Resources
preempt-gpu  1 day          atlas, chimera, eris, helios, pegasus, spartacus-[3,11], titan, zagreus   Various GPUs including L40S, H200, A6000, Titan RTX, A800
preempt-cpu  1 day          spartacus-[8-9]                                                           128 CPUs and ~1.1 TB RAM per node

To use the preemptible partitions, you must specify the preemptible QOS:

#SBATCH --account=common
#SBATCH --partition=preempt-gpu
#SBATCH --qos=preemptible

How preemption works: The nodes in the preempt partitions are the same physical machines owned by research groups (e.g. the tolga-lab, medvic, and CEG groups). When group members submit jobs to their own partition, those jobs run at a higher priority. If the group’s nodes are fully occupied by preemptible jobs, Slurm will preempt the lower-priority jobs to make room.

Here is what happens when your job is preempted:

  1. Your job receives a SIGTERM signal.
  2. Your job is stopped and automatically requeued.
  3. When resources become free again, your job starts over from the beginning.

This has important implications for how you design your workloads on preemptible partitions:

Checkpointing: If your job takes a long time to run, your code should save its state periodically (e.g. to a file on disk). When the job restarts after preemption, it should detect and load the most recent checkpoint and resume from where it left off. Without checkpointing, you lose all progress every time your job is preempted.
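As a rough sketch (train.py and its checkpoint handling are placeholders for your own code), a checkpoint-aware preemptible job script might look like this:

#!/bin/bash
#SBATCH --job-name=train
#SBATCH --account=common
#SBATCH --partition=preempt-gpu
#SBATCH --qos=preemptible
#SBATCH --gres=gpu:1
#SBATCH --time=1-00:00:00
#SBATCH --requeue                 # make the requeue-on-preemption behavior explicit
#SBATCH --output=log_%J.txt

# train.py should write checkpoints periodically and accept a flag to
# resume from the most recent one.
if [ -f checkpoint.pt ]; then
    srun python train.py --resume checkpoint.pt
else
    srun python train.py
fi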

Job arrays: Instead of one massive job, consider breaking your work into smaller independent tasks using Slurm job arrays. If a preemption occurs, only the currently running task needs to restart—completed tasks are unaffected.
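A minimal job-array sketch might look like the following (process_chunk.py is a placeholder for your own per-task program):

#!/bin/bash
#SBATCH --job-name=chunks
#SBATCH --account=common
#SBATCH --partition=preempt-cpu
#SBATCH --qos=preemptible
#SBATCH --array=0-9               # 10 independent tasks, indices 0 through 9
#SBATCH --time=02:00:00
#SBATCH --output=log_%A_%a.txt    # %A = array job ID, %a = task index

srun python process_chunk.py --chunk ${SLURM_ARRAY_TASK_ID}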

Check queue status first: Before submitting to a preempt partition, run squeue -p preempt-gpu (or the relevant PI partition like tolga-lab) to see how busy those nodes are. If the owning group is actively running many jobs, your preemptible job is more likely to be interrupted.

Preemptible partitions are best suited for workloads that can tolerate interruption—either because they use checkpointing, because they are made up of many small independent tasks, or because restarting from scratch is acceptable.

Research group partitions

Some research groups have purchased dedicated compute hardware. Members of those groups have priority access to that hardware through group-specific partitions. These partitions are restricted to group members via Linux group membership, so they will only appear in your sinfo output if you have access.

Partition  Max Wall Time  Nodes                                                      Access
tolga-lab  7 days         atlas, chimera, eris, helios, pegasus, spartacus-3, titan  tolga-lab group members
medvic     7 days         spartacus-11                                               medvic-lab group members
CEG        1 day          zagreus, spartacus-[8-9]                                   CEG group members

Consult with your research group to learn what additional resources may be available to you and how to get access. If you believe you should be in a group but aren’t, contact SCI IT.

GPU resources

SCI has a variety of GPU hardware across its compute nodes. When requesting GPUs, you use the --gres flag. You can request GPUs generically (e.g. --gres=gpu:1) and Slurm will assign whatever is available in your chosen partition, or you can request a specific GPU type:

# Request any 1 GPU in the partition
#SBATCH --gres=gpu:1

# Request a specific GPU type
#SBATCH --gres=gpu:nvidia_h200:1
#SBATCH --gres=gpu:nvidia_l40s:2

Here is a summary of the GPU hardware available in each partition:

Partition    Node                 GPU Type                 Count         VRAM
general-gpu  cibcgpu[1-3]         Nvidia Titan V           4 per node    12 GB
general-gpu  cibcgpu4             Nvidia A100 SXM4         4             40 GB
general-gpu  spartacus-[1-2]      Nvidia L40S              8 per node    48 GB
general-gpu  spartacus-10         Nvidia H200 NVL          4             141 GB
general-gpu  spartacus-[12-13]    Nvidia H200              8 per node    141 GB
preempt-gpu  spartacus-3          Nvidia L40S              8             48 GB
preempt-gpu  spartacus-11         Nvidia H200 NVL          4             141 GB
preempt-gpu  atlas, eris, helios  Nvidia Titan RTX         2-4 per node  24 GB
preempt-gpu  chimera, pegasus     Nvidia RTX A6000         4 per node    48 GB
preempt-gpu  zagreus              Nvidia A800              4             40 GB
preempt-gpu  titan                Nvidia Titan X (Pascal)  4             12 GB
wormulon     wormulon-4           Nvidia GTX 1080 Ti       2             8 GB

To see exactly what GPUs and features are available on a specific node, run:

$ scontrol show node spartacus-1 | grep -i gres
   Gres=gpu:nvidia_l40s:8(S:0-1)

Feature constraints

In addition to requesting GPUs by type with --gres, Slurm supports feature constraints that let you target specific hardware properties. This is useful when your code requires a particular GPU architecture, CPU instruction set, or when you want to target the newest (or avoid the oldest) hardware available.

To use a feature constraint, add the --constraint flag to your job submission:

#SBATCH --constraint="gpu_hopper"

GPU features

You can request GPUs by their architecture (best for flexibility) or by their specific model (best for reproducibility).

By architecture:

Feature Tag  Architecture  GPU Models
gpu_hopper   Hopper        H200, H200 NVL
gpu_ada      Ada Lovelace  L40S
gpu_ampere   Ampere        A100, A800, RTX A6000
gpu_turing   Turing        Titan RTX
gpu_volta    Volta         Titan V
gpu_pascal   Pascal        Titan X (Pascal), GTX 1080 Ti

By specific model:

Feature Tag     GPU Model                VRAM
h200            Nvidia H200 / H200 NVL   141 GB
l40s            Nvidia L40S              48 GB
a100            Nvidia A100 SXM4         40 GB
a800            Nvidia A800              40 GB
a6000           Nvidia RTX A6000         48 GB
titan_rtx       Nvidia Titan RTX         24 GB
titan_v         Nvidia Titan V           12 GB
titan_x_pascal  Nvidia Titan X (Pascal)  12 GB
gtx1080ti       Nvidia GTX 1080 Ti       8 GB

You can also filter by VRAM size: gpu_141gb, gpu_48gb, gpu_40gb, gpu_24gb, gpu_12gb, gpu_8gb.

CPU features

By manufacturer: cpu_amd, cpu_intel, cpu_epyc (AMD EPYC server), cpu_xeon (Intel Xeon server)

By generation (newest to oldest):

  • AMD: epyc_bergamo, epyc_genoa, epyc_milan
  • Intel: xeon_emerald_rapids, xeon_ice_lake, xeon_cascade_lake, xeon_skylake

Example constraints

# Request any Hopper or Ada Lovelace GPU (the newest generations)
#SBATCH --constraint="gpu_hopper|gpu_ada"

# Request an L40S on an AMD system
#SBATCH --constraint="l40s&cpu_amd"

# Request any GPU node, but exclude Pascal-era hardware
#SBATCH --constraint="gpu&!gpu_pascal"

# Request a node with at least 48 GB of GPU VRAM
#SBATCH --constraint="gpu_48gb|gpu_141gb"

Note the syntax: use | for OR (any of these features), & for AND (all of these features), and ! for NOT (exclude this feature). You can see all features available on a node by running scontrol show node <nodename> and looking at the AvailableFeatures line.
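For example (node name chosen arbitrarily; the feature list shown is abridged and will vary per node):

$ scontrol show node spartacus-10 | grep -i features
   AvailableFeatures=gpu_hopper,h200,gpu_141gb,...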

How do I interact with my running job?

When running commands from a command line, you have direct access to the running process—its output is written to your terminal, you can pause or terminate it, etc. Since Slurm sits as an orchestration layer between the user (i.e. you) and the compute resources, the way you interact with running jobs is a bit different. The stdout of your processes gets written to a log file (see the --output option for sbatch), and you use Slurm commands such as squeue (to show currently running jobs) or scancel <jobid> (to stop a job).
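For example, a few common scancel patterns:

# Cancel one job
$ scancel <jobid>

# Cancel all of your jobs that are still pending
$ scancel -u $USER -t PENDING

# Cancel every job you own (use with care)
$ scancel -u $USER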

Another difference is that direct login access (i.e. via SSH) to compute nodes is restricted. The resources Slurm manages must remain free to be scheduled and assigned to jobs, so to avoid conflicts, users are not allowed to run anything computationally intensive directly on the compute nodes.

There are, however, a few other ways to interact with your jobs that you may find helpful, giving you the same, or at least similar, kinds of access to running processes.

Connecting to a running job

Once a job has been scheduled and is happily chugging along on its assigned resources, you can open an interactive shell on the machine the job is running on using srun with the --overlap argument, which starts a new process sharing the same resource allocation as your running job. For example:

# Submit a long-running job
$ sbatch long_job.sh
Submitted batch job 28800

# Verify it's running and note the node
$ squeue --me
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             28800 general-g long_job    clake  R       2:15      1 spartacus-1

# Open a shell on the job's node
$ srun --jobid=28800 --overlap --pty bash
clake@spartacus-1:~$

If your job is running across multiple nodes and you want to connect to a specific one, you can use sacct to see which nodes your job is using and the -w argument to specify the host:

# Check which nodes the job is using
$ sacct -j 28800 --format=JobID,NodeList
JobID         NodeList
------------ ----------
28800        spartacus-[1-2]
28800.batch  spartacus-1

# Connect to a specific node
$ srun --jobid=28800 --overlap -w spartacus-2 --pty bash
clake@spartacus-2:~$

Interactive jobs

Another way to have direct access to your running code is to create an interactive job, which is basically a job where the running process is a shell. To do so, you still go through the Slurm system, so that the appropriate resources are allocated and dedicated just to your use, but you have a shell to work from, like you would on a non-Slurm machine. For example:

$ srun --account=common --partition=wormulon --time=00:30:00 --ntasks=1 --pty bash
clake@wormulon-1:~$ hostname
wormulon-1
clake@wormulon-1:~$ # Do your work here...
clake@wormulon-1:~$ exit

Here you can see that the srun command requested a reserved time of 30 minutes, and the --pty bash argument says to run bash and hook up the terminal to the one currently in use—the result being a shell that you can use to do whatever you like. From that shell you can start processes by hand, call them using srun to parallelize jobs across your allocated nodes, etc. This may be particularly helpful for shorter runs to debug your code; longer running jobs may be easier to deal with when using tmux or screen.

For a GPU-enabled interactive session:

$ srun --account=common --partition=general-gpu --time=02:00:00 --ntasks=1 --gres=gpu:1 --mem=16G --pty bash
clake@cibcgpu1:~$ nvidia-smi
# ... GPU details shown here ...

Using tmux in an interactive job

When using Slurm’s srun to create an interactive job, tmux can greatly enhance your workflow. First, after connecting to a compute node via srun, start a new tmux session by simply typing tmux new-session -s name_of_session. This will create a persistent session you can detach from and reattach to.

Once inside the tmux session, you can create multiple windows (think of them as separate tabs in a terminal) using Ctrl-b c. You can switch between windows using Ctrl-b n (next) and Ctrl-b p (previous). To further organize your work, you can split windows into panes (multiple terminal views within a single window) using Ctrl-b % (vertical split) and Ctrl-b " (horizontal split). You can then navigate between panes using Ctrl-b <arrow key> or, if mouse support is enabled, by clicking on a pane. This setup allows you to, for example, run your main computation in one pane and monitor its progress in another, all within the same tmux session running inside your interactive Slurm job. To detach from a running session, use Ctrl-b d. To re-attach to your most recent tmux session, run tmux attach to jump back in.
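As a quick reference, the basic workflow described above looks like this (analysis is just an example session name):

$ tmux new-session -s analysis   # start a named session inside your interactive job
# ... work, split panes, run things ...
# press Ctrl-b d to detach; the session keeps running in your allocation
$ tmux attach -t analysis        # re-attach later from the same interactive shell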

If you would like to view additional resources for using tmux, we would recommend tmuxcheatsheet.com. There are a lot of great tips and tricks that allow you to navigate tmux like a pro.

Checking Job History

After a job completes (or fails), you can use sacct to review its history, including how long it ran, how much memory it used, and its exit status. This is especially useful for diagnosing failed jobs.

# Show accounting details for a specific job
$ sacct --format=JobID,JobName,Partition,State,Elapsed,MaxRSS,ExitCode -j 28750
JobID           JobName  Partition      State    Elapsed     MaxRSS ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
28750            mytest   wormulon  COMPLETED   00:00:06                 0:0
28750.batch       batch             COMPLETED   00:00:06      1684K     0:0
28750.0            echo             COMPLETED   00:00:00          0     0:0

Some useful sacct options:

# Show all your jobs from the last 7 days
$ sacct --starttime=$(date -d '7 days ago' +%Y-%m-%d) --format=JobID,JobName,Partition,State,Elapsed,ExitCode

# Show detailed info for a specific job
$ scontrol show job <jobid>

Common job states you may see: COMPLETED (finished successfully), FAILED (exited with a non-zero exit code), TIMEOUT (exceeded its time limit), CANCELLED (was cancelled by the user or an admin), PREEMPTED (was preempted by a higher-priority job), and OUT_OF_MEMORY (exceeded its memory allocation).

Common Errors and Troubleshooting

Here are some of the most common issues users encounter, and how to resolve them.

“Invalid account or account/partition combination specified”

This is the most common error. See Fixing “Invalid account or account/partition combination” above. In short: make sure your --account and --partition match, and that you have access to both. If using preemptible partitions, make sure you’ve included --qos=preemptible.

“Batch job submission failed: Requested time limit is invalid”

Your --time value exceeds the partition’s maximum. For example, general-gpu allows a maximum of 1 day. If you need longer, use general-cpu-long (up to 5 days) for CPU workloads, or talk to SCI IT about options for long-running GPU jobs.

Job is pending with reason “(Priority)” or “(Resources)”

Priority means other jobs submitted before yours (or with higher priority) are ahead in the queue. Resources means the resources you requested (GPUs, CPUs, memory) aren’t currently available, and your job will start as soon as they free up. Both are normal; your job will run when its turn comes.
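If you want a rough idea of when a pending job might start, squeue can show Slurm's current estimate (which shifts as other jobs finish early or new jobs arrive):

# Estimated start times for your pending jobs
$ squeue --me --start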

Job immediately fails with “OUT_OF_MEMORY”

Your job used more memory than you requested. Increase the --mem (total memory per node) or --mem-per-cpu value. Note the default memory allocation varies by partition: 4 GB per CPU on general-cpu/general-gpu, and 1 GB per CPU on wormulon. You can check how much memory a failed job actually used with sacct -j <jobid> --format=JobID,MaxRSS,ReqMem.
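For example, either of the following raises the request (the values are illustrative; use --mem or --mem-per-cpu, not both):

# Request 32 GB total per node for the job
#SBATCH --mem=32G

# Or scale memory with the number of CPUs requested
#SBATCH --mem-per-cpu=8G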

Job runs but doesn’t seem to use the GPU

Make sure you’ve requested a GPU with --gres=gpu:1 (or more). Simply submitting to general-gpu does not automatically allocate a GPU to your job—you must explicitly request it. You can verify your job has GPU access by running nvidia-smi from within the job or interactive session.

“Requested node configuration is not available”

You’re requesting more resources than any single node in the partition can provide. For example, requesting 256 CPUs on general-cpu when the largest nodes have 192 CPUs. Check available resources with sinfo -N -l -p <partition>.

Compute Node Update Schedule

Like all other computers, the compute nodes managed by Slurm need regular software updates, security patches, etc.

How updates work

Throughout the month, individual compute nodes are scheduled for updates by being put into DRAIN mode: any jobs currently running on that node continue to run, but no new jobs will be scheduled onto it. You can still submit jobs that target a draining node; they just won't be scheduled until the node is returned to normal operation. Once all currently running jobs have completed, the node is updated, rebooted, and returned to service. This process generally takes less than 15 minutes, so once the updates start being applied, the node should be accepting jobs again fairly quickly.

The schedule

The update process is spread evenly across all Slurm nodes throughout the month. While the details may vary over time, generally speaking you won’t see more than a few machines going through this update process at any one time. If you see that a node is draining (in sinfo output its state shows as “drng” while jobs are still running, or “drain” once it is empty), you can see the reason it is in that state by running sinfo -R.
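For example (the reason text shown here is illustrative):

$ sinfo -R
REASON               USER      TIMESTAMP           NODELIST
Scheduled updates    root      2026-02-08T09:00:00 cibcsm2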

Additional Resources