Resource Requests
Exercises
- Request specific memory and CPUs for a threaded job
Write a job script that simulates a multithreaded bioinformatics tool: request 1 task, 4 CPUs per task, and 8G of memory. Inside the script, print the values of $SLURM_CPUS_PER_TASK and $SLURM_MEM_PER_NODE to confirm the allocation.
Hint / Solution
cat > resource_test.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=resource_test
#SBATCH --output=resource_test_%j.out
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
echo "CPUs per task: $SLURM_CPUS_PER_TASK"
echo "Memory per node (MB): $SLURM_MEM_PER_NODE"
echo "Number of tasks: $SLURM_NTASKS"
EOF
sbatch resource_test.sh
- Request exclusive node access
Submit a job with --exclusive and --mem=0 to claim an entire node. Inside the job, run nproc to see how many CPUs you received and free -g to check available memory.
Hint / Solution
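One possible solution, following the same pattern as the first exercise (the job name and output filename here are arbitrary choices; time limits and exclusive-access policy vary by cluster):

```shell
# Sketch: claim a whole node with --exclusive, and all of its memory with --mem=0,
# then report what was actually handed to the job.
cat > exclusive_test.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=exclusive_test
#SBATCH --output=exclusive_test_%j.out
#SBATCH --time=00:05:00
#SBATCH --exclusive
#SBATCH --mem=0
echo "CPUs visible on this node:"
nproc
echo "Memory on this node (GB):"
free -g
EOF
sbatch exclusive_test.sh
```

With `--exclusive`, `nproc` should report every CPU on the node rather than a default single-core allocation, and `free -g` shows the node's full memory. Expect a longer queue wait than for a small request, since Slurm must drain an entire node for you.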
- Use --constraint to request a node feature
Check what features are defined on your cluster's nodes using sinfo -o "%n %f". Then submit a job that requires a specific feature using --constraint. If no features are defined, try the exercise conceptually and write the #SBATCH line you would use.
Hint / Solution
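A sketch of the workflow; `avx2` below is a placeholder feature name, since features are defined per-site by the cluster administrators and yours may differ (or may be empty):

```shell
# Step 1: list node names (%n) and their feature tags (%f).
sinfo -o "%n %f"

# Step 2: request a feature you saw in the %f column.
# Replace 'avx2' with a feature that exists on your cluster.
sbatch --time=00:05:00 --constraint=avx2 --wrap="hostname"

# Equivalent #SBATCH line for a job script (the conceptual answer
# if your cluster defines no features):
#   #SBATCH --constraint=avx2
```

If the feature does not exist, `sbatch` rejects the job or it stays pending with a reason like `Constraints` in `squeue`; checking `sinfo` first avoids that.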
- Check actual resource usage after a job completes
Submit a job that allocates 16G of memory but only uses a small amount (e.g., run sleep 30). After it completes, use sacct to compare requested memory (ReqMem) vs. actual peak usage (MaxRSS). Was the job over-provisioned?
Hint / Solution
sbatch --time=00:05:00 --mem=16G --wrap="sleep 30" --job-name=mem_check
# After completion:
sacct -j <jobid> --format=JobID,JobName,ReqMem,MaxRSS,Elapsed,State
# MaxRSS on the .batch step will be much less than 16G,
# indicating over-provisioning. For real workloads, use this
# technique to right-size your future requests.
- Compare --mem vs --mem-per-cpu
Submit two jobs that each request the equivalent of 16G total memory but using different flags: one with --mem=16G and one with --mem-per-cpu=4G --cpus-per-task=4. Use scontrol show job on each to compare how Slurm records the memory request.
Hint / Solution
JOB1=$(sbatch --parsable --time=00:05:00 --mem=16G --cpus-per-task=4 --wrap="sleep 60")
JOB2=$(sbatch --parsable --time=00:05:00 --mem-per-cpu=4G --cpus-per-task=4 --wrap="sleep 60")
scontrol show job $JOB1 | grep MinMemory
scontrol show job $JOB2 | grep MinMemory
# The first shows MinMemoryNode=16G, the second shows MinMemoryCPU=4G
scancel $JOB1 $JOB2