Commit 29c1b3dc authored by Lukas Werner

Update README.md

parent 85a2d213
Together with the public key being deposited on the cluster, this will ensure pr…

In this file, we configure options for the SLURM submission on the test cluster.
An example config can be found in `example.gitlab-ci.yml`.
Each job needs to have the tag `testcluster`.
SLURM options can be set either globally in the `variables` section, or on a per-job basis.
The latter will override global variables with the same name.

```
variables:
  SLURM_NODELIST: "phinally"
  SLURM_TIMELIMIT: "30"
...
build:
  script:
    - make
  tags:
    - testcluster
...
benchmark-broadep2:
  variables:
    SLURM_NODELIST: "broadep2" # uses broadep2 instead of phinally for this benchmark
    SLURM_TIMELIMIT: "10" # limit time to 10 instead of 30 minutes
  tags:
    - testcluster
```
This configuration already suffices to have the CI jobs running on the node `phinally`, with `benchmark-broadep2` being executed on `broadep2`.
With "phinally" being the default node and time limit by default at 120, even having no configuration at all would work just fine. With "phinally" being the default node and time limit by default at 120, even having no configuration at all would work just fine.
To pick a node to run your job on, set `SLURM_NODELIST` to the node's hostname.
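
For example, here is a minimal sketch (the job names and scripts are hypothetical, not taken from this repository): one job relies entirely on the defaults, while another pins itself to `broadep2` with a shorter time limit.

```
# Hypothetical jobs illustrating the defaults and per-job node selection.
quick-check:
  script:
    - make test                  # placeholder command
  tags:
    - testcluster                # no SLURM variables set: runs on phinally with the 120 minute default

pinned-benchmark:
  variables:
    SLURM_NODELIST: "broadep2"   # run this job on broadep2 instead of the default node
    SLURM_TIMELIMIT: "15"        # assumed value; time limit in minutes
  script:
    - ./benchmark.sh             # placeholder command
  tags:
    - testcluster
```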