Nextflow Configurations
Explanation and examples of helpful Nextflow configurations.
Advanced Pipeline settings can be defined without disabling the Pipeline UI by creating a configuration file in the Pipeline's /pipeline folder. This file must be named nextflow.config in order for it to be used by the Pipeline.
The line sleep(Math.pow(2, task.attempt) * 200 as long) implements an exponential backoff strategy, where sleep pauses execution for the specified number of milliseconds. For example, if a task has already failed twice, the next attempt has task.attempt = 3, so it will sleep for 200 * 2^3 = 1600 ms.
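A sketch of how this sleep call is typically embedded in a dynamic errorStrategy closure (the process name here is hypothetical; maxRetries is set to 5 as an illustration):

```groovy
process example_task {
    // Dynamic error strategy: pause for 2^attempt * 200 ms, then retry
    errorStrategy {
        sleep(Math.pow(2, task.attempt) * 200 as long)
        return 'retry'
    }
    maxRetries 5

    script:
    """
    echo "attempt ${task.attempt}"
    """
}
```

Because the closure is evaluated each time the task fails, the delay grows with every attempt (400 ms, 800 ms, 1600 ms, ...), which spreads out resubmissions during a transient outage.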
process.resourceLabels = ['your-key': 'your-value']

Replace the key and value with a pattern that suits your organization: for example, 'your-key' could be a group's name and 'your-value' could be the name of the Pipeline. This way, the group's Pipeline costs will all appear under the same key in the CUR.
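As a concrete illustration, a nextflow.config following that pattern might look like this (the group and pipeline names are hypothetical):

```groovy
// nextflow.config -- tag all jobs so costs group under the team's key in the CUR
// 'genomics-team' and 'rnaseq-pipeline' are placeholder values
process.resourceLabels = ['genomics-team': 'rnaseq-pipeline']
```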
In the Pipeline UI menu, a retry error strategy can be set, but when retrying submissions due to transient AWS outages, it can be beneficial to add a delay between job submissions.
The cost of Pipeline runs is not currently tracked in the Analytics Dashboard of the Admin Panel, but tags can be added to the nextflow.config file so that Pipeline jobs are tagged and included in the AWS Cost and Usage Report (CUR). To view these tags in the CUR, they first need to be activated as cost allocation tags.
The compute resources selected in the Pipeline UI are encoded in the corresponding process definition in the main.nf file. For example, the Nextflow code for "Capsule A" allocates 1 core and 8 GB of RAM.
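A minimal sketch of what such a process block looks like (the process name and script body are assumptions, not taken from the original):

```groovy
// Hypothetical process corresponding to "Capsule A"
process capsule_a {
    cpus 1        // 1 core, as selected in the Pipeline UI
    memory 8.GB   // 8 GB of RAM

    script:
    """
    run_capsule.sh
    """
}
```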
When the compute resources depend on the size of the data being processed, dynamic resource directives can be used so that the size of the machine scales with the need. This feature requires the main.nf file to be unlocked and edited manually. In the example below, the Capsule will run with 1 core and 8 GB of RAM, but if it fails with an out-of-memory error (exit status between 137 and 140) it will automatically retry with more memory, up to 3 times. For example, if it fails with an out-of-memory error 3 times, the final attempt will request 8.GB * task.attempt = 8.GB * 4 = 32 GB of RAM.
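A sketch of a process implementing this retry-with-more-memory pattern (the process name and script body are assumptions):

```groovy
// Hypothetical process: memory scales with the attempt number
process capsule_a {
    cpus 1
    // 8 GB on the first attempt, 16 GB on the second, and so on
    memory { 8.GB * task.attempt }
    // Retry only on out-of-memory exit codes (137-140); fail otherwise
    errorStrategy { task.exitStatus in 137..140 ? 'retry' : 'terminate' }
    maxRetries 3

    script:
    """
    run_capsule.sh
    """
}
```

With maxRetries 3, the task can run up to 4 times in total, so the largest possible request is 8.GB * 4 = 32 GB.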