# nf-core/configs: ceci_nic5
CÉCI NIC5 cluster profiles provided by the GIGA Bioinformatics Team.
## NIC5 CÉCI Cluster Configuration
This configuration provides sensible default parameters to run nf-core pipelines (and nf-core compatible pipelines) on the CÉCI NIC5 cluster.

To use it, add `-profile ceci_nic5` when running a pipeline. This will automatically download the `ceci_nic5.config` file.
The configuration sets `slurm` as the default executor and enables `singularity` as the container runtime.
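Once the required modules are loaded (see below), a pipeline can be launched with this profile. For example (`nf-core/rnaseq` and its parameters are used purely as an illustration; substitute your own pipeline and inputs):

```bash
# Launch an nf-core pipeline on NIC5 with the CÉCI NIC5 profile
nextflow run nf-core/rnaseq \
    -profile ceci_nic5 \
    --input samplesheet.csv \
    --outdir results
```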
## Loading the required modules
Before running a pipeline, Nextflow must be loaded using the environment module system. This can be achieved with:
```bash
# Load modules
module load 'Nextflow/21.08.0'
```
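The Nextflow version installed on the cluster may change over time. Assuming the standard environment module tooling available on the CÉCI clusters, the versions currently on offer can be listed with:

```bash
# List the available Nextflow modules
module avail Nextflow
```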
Built with ❤️ by the GIGA Bioinformatics Team
## Config file

```groovy
/*
 * CÉCI NIC5 CLUSTER
 * =================
 * This configuration file provides sensible defaults to run Nextflow pipelines
 * on the CÉCI NIC5 cluster.
 *
 * For more information on the CÉCI NIC5 cluster, refer to this page of the
 * wiki:
 * https://www.ceci-hpc.be/clusters.html#nic5
 */
params {
    config_profile_name        = 'CÉCI'
    config_profile_description = 'CÉCI NIC5 cluster profiles provided by the GIGA Bioinformatics Team.'
    config_profile_contact     = 'Martin Grignard (@MartinGrignard)'
    config_profile_url         = 'https://www.ceci-hpc.be/clusters.html#nic5'
}
/*
 * Resource limits
 * ---------------
 * These resource limits are the maximum values across all nodes of all
 * queues. At least one node in the available queues matches these maximum
 * resources.
 *
 * For more information on the available nodes, use the `sinfo` command.
 */
params {
    max_cpus   = 64
    max_memory = 1.TB
    max_time   = 2.days
}
/*
 * Singularity configuration
 * -------------------------
 * Singularity is used to run containerised tools.
 */
singularity {
    autoMounts  = true                         // automatically mount host paths in the container
    cacheDir    = "${HOME}/.cache/singularity" // cache pulled images across runs
    enabled     = true
    pullTimeout = 3.hours                      // allow enough time to pull large images
}
/*
 * Slurm configuration
 * -------------------
 * Slurm is used as the workload manager. This configuration makes sure the
 * available resources are shared fairly.
 *
 * For more information on how to use Slurm on the CÉCI clusters, refer to this
 * page of the wiki:
 * https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html
 */
executor {
    name         = 'slurm'
    queueSize    = 200  // submit at most 200 jobs to Slurm at any one time
    pollInterval = 10.s // check job status every 10 seconds
}
/*
 * Process configuration
 * ---------------------
 * Several queues are available on the cluster. This configuration selects the
 * most relevant queue based on the memory required by each task.
 *
 * For more information on the available queues, refer to this page of the
 * wiki:
 * https://www.ceci-hpc.be/clusters.html#nic5
 */
process {
    // Tasks requiring more than 256 GB of memory go to the high-memory queue.
    queue = {
        task.memory <= 256.GB ? 'batch' : 'hmem'
    }
    // Cap per-task resource requests at the cluster maxima.
    resourceLimits = [
        cpus  : 64,
        memory: 1.TB,
        time  : 2.days,
    ]
    stageInMode  = 'symlink' // stage input files via symbolic links
    stageOutMode = 'rsync'   // copy output files back with rsync
}
```
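The resource limits above can be checked against the live cluster state. With standard Slurm tooling, the per-partition node resources can be inspected with, for example:

```bash
# Show each partition with its node count, CPUs, memory (MB), and time limit
sinfo -o "%P %D %c %m %l"
```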