The Nextflow config file

The information about the required resources, the scheduler to use and the container options can be stored in the nextflow.config file. In this file we simply include a number of separate configuration files, one per profile, each targeting a different executor.
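As a sketch of how this inclusion works (the profile and file names below are illustrative assumptions, not necessarily the ones used in the workshop repository), the main nextflow.config can pull in one file per profile with includeConfig:

```groovy
// nextflow.config: each profile includes its own configuration file.
// Profile names match the -profile values; file paths are assumed here.
profiles {
    standard {
        includeConfig 'conf/standard.config'
    }
    hpc_sge {
        includeConfig 'conf/sge.config'
    }
    hpc_slurm {
        includeConfig 'conf/slurm.config'
    }
    cloud {
        includeConfig 'conf/awsbatch.config'
    }
}
```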

Here are the single config files that are included by the main one. The one used depends on the Nextflow command-line parameter -profile.

Standard (default)

/*
* This configuration file is the default one used by the pipeline
*/

process {
    // definition of the local executor. Run the pipeline on the current computer.
    executor="local"

    // resources for default process execution
    memory='0.6G'
    cpus='1'
    time='6h'

    // resources for execution of processes / modules with the label "twocpus". These override the defaults.
    withLabel: 'twocpus' {
        memory='0.6G'
        cpus='2'
    }
}
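For reference, a process opts into the "twocpus" settings by declaring the label in the pipeline script. A minimal sketch, assuming a hypothetical process name and tool:

```groovy
// Hypothetical process carrying the 'twocpus' label:
// with the config above it receives 2 cpus instead of the default 1.
process ALIGN_READS {
    label 'twocpus'

    input:
    path reads

    script:
    """
    my_aligner --threads ${task.cpus} ${reads}
    """
}
```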

SGE

You can activate it using this command line:

nextflow run nextflow-io/elixir-workshop-21 -r master -profile hpc_sge -with-docker
/*
* This configuration file is the one used when indicating the Nextflow parameter -profile hpc_sge
*/

process { 
    // definition of the SGE executor. Run the pipeline on a node able to submit jobs to an HPC cluster via qsub
    executor="sge"

    // definition of the default queue name. 
    queue = "smallcpus"

    // resources for default process execution
    memory='1G'
    cpus='1'
    time='6h'
    
        
    // resources for execution of processes / modules with the label "twocpus". These override the defaults.
    withLabel: 'twocpus' {
        queue = "bigcpus"
        memory='4G'
        cpus='2'
    }

} 
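On many SGE clusters, requesting more than one CPU also requires naming a parallel environment; Nextflow exposes this through the penv process directive. A minimal sketch, where the environment name ('smp') is an assumption that depends on your cluster setup:

```groovy
process {
    executor = "sge"

    withLabel: 'twocpus' {
        queue = "bigcpus"
        cpus = '2'
        // parallel environment name is cluster-specific (assumed here)
        penv = 'smp'
    }
}
```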

SLURM

You can activate it using this command line:

nextflow run nextflow-io/elixir-workshop-21 -r master -profile hpc_slurm -with-docker
/*
* This configuration file is the one used when indicating the Nextflow parameter -profile hpc_slurm
*/

process { 
    // definition of the slurm executor. Run the pipeline on a node able to submit jobs to an HPC cluster via sbatch
    executor="slurm"

    // resources for default process execution
    cpus='1'
    time='6h'

    // resources for execution of processes / modules with the label "twocpus". These override the defaults.
    withLabel: 'twocpus' {
        queue = "bigcpus"
        cpus='2'
    }

} 

RETRY EXAMPLE

retry_example.config

You can activate it using this command line:

nextflow run nextflow-io/elixir-workshop-21 -r master -profile retry -with-docker
/*
* This configuration file is an example showing failure retry
*/

process {
    // definition of the local executor. Run the pipeline on the current computer.
    executor="local"

    // resources for default process execution
    memory='0.6G'
    cpus='1'
    time='6h'

    // retry up to three times on failure and then fail
    withLabel: 'twocpus' {
        time = { 20.second * task.attempt }
        errorStrategy = 'retry'
        maxRetries = 3
    }

    // retry up to three times on failure and then ignore the error
    withLabel: 'retry_and_ignore' {
        time = { 20.second * task.attempt }
        errorStrategy = { task.attempt <= 3 ? 'retry' : 'ignore' }
    }
}
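The same pattern works for other resources: any directive can take a closure that is re-evaluated on each attempt, so requests grow with task.attempt on every retry. A minimal sketch, with a hypothetical label:

```groovy
process {
    withLabel: 'bigmem_retry' {   // hypothetical label
        // scale the memory request with the attempt number: 1 GB, 2 GB, 3 GB
        memory = { 1.GB * task.attempt }
        errorStrategy = 'retry'
        maxRetries = 2
    }
}
```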

AWS BATCH

We can run our pipeline in AWS Batch just by changing the following values in the awsbatch.config file:

/*
* This configuration file is the one used when indicating the Nextflow parameter -profile cloud
*/

// Here we define some AWS parameters, like the region and the path to the AWS CLI executable
aws.region = 'eu-west-1'
aws.batch.cliPath = '/home/ec2-user/miniconda/bin/aws'

// Please replace these values with the ones we provided to you for accessing the cloud
aws.accessKey = 'ACCESSKEY'
aws.secretKey = 'SECRETKEY'


// Here we use an S3 bucket as the work directory for intermediate files
workDir = 's3://nf-elixir/scratch'

process {
    // definition of the awsbatch executor. Run the pipeline on AWS compute nodes, submitting jobs via AWS Batch
    executor = 'awsbatch'

    // definition of the default queue name. 
    queue = 'nextflow-ci'

    // resources for default process execution
    memory='1G'
    cpus='1'
    time='6h'

    // resources for execution of processes / modules with the label "twocpus". These override the defaults.
    withLabel: 'twocpus' {
        memory='0.6G'
        cpus='2'
    }
}
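Note that the AWS Batch executor runs every task inside a container, so each process must have a container image defined. A minimal sketch (the image name is an illustrative assumption):

```groovy
process {
    executor = 'awsbatch'
    queue = 'nextflow-ci'
    // AWS Batch requires a container image for every process
    // (the image name below is an assumption)
    container = 'quay.io/biocontainers/bwa:0.7.17--hed695b0_7'
}
```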

Then launch the pipeline from the local repository:

nextflow run main.nf -with-docker -profile cloud -bg > log