=====Submitting Jobs to the Slurm Workload Manager=====

All Limulus HPC systems use a workflow (user job) scheduler called [[https://slurm.schedmd.com/|Slurm]]. The purpose of the workflow scheduler is to distribute user programs across the cluster based on the amount of resources needed by each job, since the number of user programs may exceed the available resources. For example, if each program needs one core and the cluster has 32 cores, then running more than 32 programs will oversubscribe the cluster resources. To manage a possible over-subscribed situation, the workflow scheduler will place all jobs in a //work queue// and run jobs (empty the queue) with a first-in, first-out scheduling policy. (The scheduling policy can be modified to use priorities, etc.)

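Before submitting any work, the ''sinfo'' command can be used to see the partitions and nodes that Slurm is managing. The listing below is only a sketch; the exact partition name, node count, and node states will depend on how the system is configured.

<code>
$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
normal*      up   infinite      4   idle headnode,n[0-2]
</code>
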
====Job Submission Tutorial====

The following example submission script, ''first-slurm-job.sh'', runs a few standard commands and a small ''hello-world'' program:

<code>
#!/bin/bash
#SBATCH -t 01:30:00   # Run time (hh:mm:ss) - 1.5 hours

# This file is an example Slurm submission script. It resembles a bash script
# and is actually interpreted as such. Thus basic shell commands can be executed
# with the results captured in the output file.

echo "First Slurm Job"
# run system binary files
date
uptime
# run your binary file, with the use of "./" indicating "this directory"
./hello-world
echo -n " from cluster node "
# print the name of the node running the job
hostname
</code>

The ''first-slurm-job.sh'' file is a bash script that tells Slurm how to run the program. The ''#SBATCH'' lines are not comments, but are actually directives telling Slurm how to run the application. The directives are self-explanatory except for the output file name. Since each program submitted to Slurm will produce a unique output that can be viewed at a later time, the output file has a ''%j'' in the name. Slurm will substitute the job number for this variable, creating a unique file. (Slurm assigns every job submitted to the queue a unique job number, starting at 1 and incrementing with each new submission.)

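The output file and a few other commonly requested resources are set with additional ''#SBATCH'' directives like the ones sketched below. The job name and output file name here are only illustrative; ''%j'' in the file name is replaced with the job number at run time.

<code>
#SBATCH -J first-job          # Job name (illustrative)
#SBATCH -o first-job-%j.out   # Output file name; %j becomes the job number
#SBATCH -N 1                  # Number of nodes requested
#SBATCH -n 1                  # Number of tasks (cores) requested
</code>
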
To run the program using Slurm simply enter:
<code>
$ sbatch first-slurm-job.sh
</code>
Slurm responds with a ''Submitted batch job'' message that reports the job number it assigned.

The work queue can be examined using the ''squeue'' command:
<code>
$ squeue
</code>

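On a busy system the queue can be long; to list only your own jobs, a user name can be given with the ''-u'' option:

<code>
$ squeue -u $USER
</code>
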
There are a few points to note about the above process.

First, the ''hostname'' command in the script prints the node name where the job was run. Limulus nodes have the following names: ''headnode'', ''n0'', ''n1'', ''n2''. (The double-wide units go to ''n6''.) In this case, the job was run on the head node because some of its cores are included in the Slurm resources (this setting is adjustable). If six jobs are submitted at the same time then the "overflow" work will be placed on the nodes. For example:

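One way to submit the same script six times is a short shell loop (shown here only as a sketch; the jobs could just as easily be submitted one at a time):

<code>
$ for i in 1 2 3 4 5 6; do sbatch first-slurm-job.sh; done
</code>
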
<code>
Submitted batch job 663
Submitted batch job 664
Submitted batch job 665
Submitted batch job 666
Submitted batch job 667
Submitted batch job 668
</code>

Checking ''squeue'' indicates that the first four jobs were assigned to the headnode and the next two were assigned to node ''n0''.
<code>
$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               663    normal first-jo  testing  R       0:04      1 headnode
               664    normal first-jo  testing  R       0:04      1 headnode
               665    normal first-jo  testing  R       0:04      1 headnode
               666    normal first-jo  testing  R       0:04      1 headnode
               667    normal first-jo  testing  R       0:04      1 n0
               668    normal first-jo  testing  R       0:04      1 n0
</code>

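Each job writes its results to the file named by its output directive. Using the illustrative ''-o first-job-%j.out'' directive from above, the output of job 668 could be viewed with:

<code>
# the file name below assumes the illustrative -o directive shown earlier
$ cat first-job-668.out
</code>
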
If the output file from job 668 is inspected, the line ''Hello, World! from cluster node n0'' indicates that the job was run on ''n0''.

Second, any program errors are sent to the output file. Slurm does not care what you run or whether it works; it will simply run the job when resources are available and report the results.

Finally, jobs that run on Slurm can request multiple cores on a single node or spread them across multiple nodes. (See the [[compiling_and_running_an_mpi_application|Compiling and Running an MPI Application]] section.) Slurm can also be configured so that programs must request an amount of memory per node so that any memory contention issues can be avoided. As indicated above, the amount of run time is requested, after which Slurm will kill your job. Slurm can be configured to set priorities based on the user or the type and amount of resources required.
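As a sketch of what such requests look like (the values below are arbitrary examples, not recommendations), a job asking for eight cores spread over two nodes, with a per-node memory limit and a run time limit, would use directives such as:

<code>
#SBATCH -N 2          # Number of nodes
#SBATCH -n 8          # Total number of tasks (cores)
#SBATCH --mem=4G      # Memory required per node
#SBATCH -t 00:30:00   # Run time limit (hh:mm:ss)
</code>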
| |