3 changes: 2 additions & 1 deletion docs/hello_nf-core/00_orientation.md
@@ -75,9 +75,10 @@ If you run this inside `hello-nf-core`, you should see the following output.
├── core-hello-part2
├── core-hello-part3
├── core-hello-part4
├── core-hello-part5
└── core-hello-start

8 directories, 3 files
9 directories, 3 files
```

!!! note
34 changes: 19 additions & 15 deletions docs/hello_nf-core/01_run_demo.md
@@ -14,7 +14,7 @@ Let's start by locating the nf-core/demo pipeline on the project website at [nf-

### 1.1. Find the pipeline on the website

In your web browser, go to https://nf-co.re/pipelines/ and type `demo` in the search bar.
In your web browser, go to [https://nf-co.re/pipelines/](https://nf-co.re/pipelines/) and type `demo` in the search bar.

![search results](./img/search-results.png)

@@ -105,6 +105,8 @@ tree -L 2 $NXF_HOME/assets/
/workspaces/.nextflow/assets/
└── nf-core
└── demo

2 directories, 0 files
```

!!! note
@@ -130,6 +132,8 @@ tree -L 2 pipelines
pipelines
└── nf-core
└── demo

2 directories, 0 files
```

Now we can more easily peek into the source code as needed.
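For instance, one quick way to peek (using the local copy made above) is to print the top of the pipeline's entry script:

```bash
# Show the first 20 lines of the pipeline's entry script
head -n 20 pipelines/nf-core/demo/main.nf
```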
@@ -297,7 +301,7 @@ Here's the console output from the pipeline:

If your output matches that, congratulations! You've just run your first nf-core pipeline.

You'll notice that there is more a lot more console output than when you run a basic Nextflow pipeline.
You'll notice that there is a lot more console output than when you run a basic Nextflow pipeline.
There's a header that includes a summary of the pipeline's version, inputs and outputs, and a few elements of configuration.

!!! note
@@ -500,21 +504,19 @@ tree -L 3 pipelines/nf-core/demo/modules
pipelines/nf-core/demo/modules
└── nf-core
├── fastqc
   ├── environment.yml
   ├── main.nf
   ├── meta.yml
   └── tests
├── environment.yml
├── main.nf
├── meta.yml
└── tests
├── multiqc
   ├── environment.yml
   ├── main.nf
   ├── meta.yml
   └── tests
├── environment.yml
├── main.nf
├── meta.yml
└── tests
└── seqtk
└── trim
├── environment.yml
├── main.nf
├── meta.yml
└── tests

7 directories, 6 files
```

Here you see that the `fastqc` and `multiqc` modules sit at the top level within the `nf-core` modules, whereas the `trim` module sits under the toolkit that it belongs to, `seqtk`.
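If you want to cross-check that layout against what the pipeline itself records as installed, the nf-core tools package can list a pipeline's modules; this is a hedged aside rather than a step from the training:

```bash
# List the modules recorded in the local copy of the pipeline
cd pipelines/nf-core/demo
nf-core modules list local
cd -
```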
@@ -553,6 +555,8 @@ tree -L 3 pipelines/nf-core/demo/subworkflows
├── main.nf
├── meta.yml
└── tests

9 directories, 7 files
```

As noted above, the `nf-core/demo` pipeline does not include any analysis-specific subworkflows, so all the subworkflows we see here are so-called 'housekeeping' or 'utility' workflows, as denoted by the `utils_` prefix in their names.
@@ -626,4 +630,4 @@ Take a break! That was a lot. When you're feeling refreshed and ready, move on t

!!! tip

If you would like to learn how to compose workflows with subworkflows before moving on to the next part, check out the [Workflows of Workflows](../side_quests/workflows_of_workflows/) Side Quest.
If you would like to learn how to compose workflows with subworkflows before moving on to the next part, check out the [Workflows of Workflows](../side_quests/workflows_of_workflows.md) Side Quest.
50 changes: 42 additions & 8 deletions docs/hello_nf-core/02_rewrite_hello.md
@@ -214,6 +214,8 @@ tree core-hello-results
├── hello_software_versions.yml
├── params_2025-11-21_04-47-18.json
└── pipeline_dag_2025-11-21_04-47-18.html

1 directory, 6 files
```

You can take a peek at the reports to see what was run, and the answer is: nothing at all!
@@ -275,7 +277,7 @@ workflow HELLO {
*/
```

Compared to a basic Nextflow workflow like the one developed in Hello Nextflow, you'll notice a few things that are new here (highlighted lines above):
Compared to a basic Nextflow workflow like the one developed in [Hello Nextflow](../hello_nextflow/index.md), you'll notice a few things that are new here (highlighted lines above):

- The workflow block has a name
- Workflow inputs are declared using the `take:` keyword and the channel construction is moved up to the parent workflow
@@ -286,7 +288,7 @@ These are optional features of Nextflow that make the workflow **composable**, m

!!! note "Composable workflows in depth"

The [Workflows of Workflows](../side_quests/workflows_of_workflows/) Side Quest explores workflow composition in much greater depth, including how to compose multiple workflows together and manage complex data flows between them. We're introducing composability here because it's a fundamental requirement of the nf-core template architecture, which uses nested workflows to organize pipeline initialization, the main analysis workflow, and completion tasks into separate, reusable components.
The [Workflows of Workflows](../side_quests/workflows_of_workflows.md) Side Quest explores workflow composition in much greater depth, including how to compose multiple workflows together and manage complex data flows between them. We're introducing composability here because it's a fundamental requirement of the nf-core template architecture, which uses nested workflows to organize pipeline initialization, the main analysis workflow, and completion tasks into separate, reusable components.

We are going to need to plug the relevant logic from our workflow of interest into that structure.
The first step for that is to make our original workflow composable.
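As a rough sketch of what "composable" means in practice (the names below are illustrative, not the course's actual code), a named workflow declares its inputs with `take:` and its outputs with `emit:`, and the parent workflow is the one that builds the channel and calls it:

```groovy
#!/usr/bin/env nextflow

// Illustrative named workflow: inputs arrive via take:, outputs leave via emit:
workflow GREET {
    take:
    ch_greetings        // channel of greeting strings, supplied by the caller

    main:
    ch_upper = ch_greetings.map { it.toUpperCase() }

    emit:
    upper = ch_upper    // named output that other workflows can consume
}

// The parent (entry) workflow constructs the channel and passes it in
workflow {
    greetings_ch = Channel.of('Hello', 'Bonjour', 'Holà')
    GREET(greetings_ch)
    GREET.out.upper.view()
}
```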
@@ -371,6 +373,8 @@ tree original-hello/
│ ├── cowpy.nf
│ └── sayHello.nf
└── nextflow.config

1 directory, 6 files
```

Feel free to run it to satisfy yourself that it works:
@@ -478,15 +482,15 @@ Now, replace the channel construction with a simple `take` statement declaring e

This leaves the details of how the inputs are provided up to the parent workflow.

While we're at it, we can also comment out the line
While we're at it, we can also comment out the line `params.greeting = 'greetings.csv'`

=== "After"

```groovy title="original-hello/hello.nf" linenums="3" hl_lines="4"
/*
* Pipeline parameters
*/
params.greeting = 'greetings.csv'
//params.greeting = 'greetings.csv'
params.batch = 'test-batch'
params.character = 'turkey'
```
@@ -497,13 +501,11 @@ While we're at it, we can also comment out the line
/*
* Pipeline parameters
*/
// params.greeting = 'greetings.csv'
params.greeting = 'greetings.csv'
params.batch = 'test-batch'
params.character = 'turkey'
```

params.greeting = 'greetings.csv'

!!! note

If you have the Nextflow language server extension installed, the syntax checker will light up your code with red squiggles.
@@ -566,7 +568,7 @@ This is a net new addition to the code compared to the original workflow.

If you've done all the changes as described, your workflow should now look like this:

```groovy title="original-hello/hello.nf" linenums="1" hl_lines="15 17-19 21 37-38"
```groovy title="original-hello/hello.nf" linenums="1" hl_lines="16 18-20 22 36-37"
#!/usr/bin/env nextflow

/*
@@ -802,6 +804,8 @@ tree core-hello/modules
├── convertToUpper.nf
├── cowpy.nf
└── sayHello.nf

1 directory, 4 files
```

Now let's set up the module import statements.
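For orientation, the import block in the workflow file will end up looking roughly like this (the relative paths and the exact set of modules are assumptions based on the `core-hello/modules` layout shown above):

```groovy
// Hypothetical sketch of the include statements in core-hello/workflows/hello.nf
include { sayHello         } from '../modules/sayHello.nf'
include { convertToUpper   } from '../modules/convertToUpper.nf'
include { collectGreetings } from '../modules/collectGreetings.nf'
include { cowpy            } from '../modules/cowpy.nf'
```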
@@ -1301,6 +1305,32 @@ Key points:
- **Absolute paths**: By using `${projectDir}`, we create an absolute path, which is important for test data that ships with the pipeline.
- **Test data location**: nf-core pipelines typically store test data in the `assets/` directory within the pipeline repository for small test files, or reference external test datasets for larger files.

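Put together, a test-profile parameter along those lines might look like the following sketch (the parameter name `input` and the `assets/greetings.csv` file name are assumptions for illustration, not necessarily the pipeline's actual values):

```groovy
// Hypothetical excerpt from the test profile configuration
params {
    // Absolute path built from ${projectDir}, so the test data travels with the pipeline
    input = "${projectDir}/assets/greetings.csv"
}
```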
And while we're at it, let's tighten the default resource limits to ensure this will run on very basic machines (like the minimal VMs in Github Codespaces):

=== "After"

```groovy title="core-hello/config/test.config" linenums="13" hl_lines="3-4"
process {
resourceLimits = [
cpus: 2,
memory: '4.GB',
time: '1.h'
]
}
```

=== "Before"

```groovy title="core-hello/config/test.config" linenums="13" hl_lines="3-4"
process {
resourceLimits = [
cpus: 4,
memory: '15.GB',
time: '1.h'
]
}
```

This completes the code modifications we need to do.

### 4.4. Run the pipeline with the test profile
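If you want to try it right away, the command is along these lines (the profile combination and output directory are assumptions carried over from the earlier runs, so adjust as needed):

```bash
# Run the local core-hello pipeline with the test profile, using Docker for containers
nextflow run ./core-hello -profile docker,test --outdir core-hello-results
```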
@@ -1382,6 +1412,8 @@ tree core-hello-results
├── params_2025-11-21_07-29-41.json
└── pipeline_dag_2025-11-21_04-47-18.html
└── pipeline_dag_2025-11-21_07-29-37.html

1 directory, 12 files
```

You see we got another set of execution reports in addition to the ones we got from the first run, when the workflow was still just a placeholder.
@@ -1416,6 +1448,8 @@ tree results
├── UPPER-Bonjour-output.txt
├── UPPER-Hello-output.txt
└── UPPER-Holà-output.txt

0 directories, 10 files
```

Ah, there they are, mixed in with the outputs of earlier runs of the original Hello pipeline.
13 changes: 7 additions & 6 deletions docs/hello_nf-core/03_use_module.md
@@ -75,7 +75,7 @@ nf-core modules list remote | grep 'cat/cat'
```

```console title="Output"
cat/cat
cat/cat
```

Just keep in mind that the `grep` approach will only pull out results with the search term in their name, which would not work for `cat_cat`.
@@ -156,7 +156,8 @@ cd core-hello
nf-core modules install cat/cat
```

The tool will first prompt you to specify a repository type.
The tool may first prompt you to specify a repository type.
(If not, skip down to "Finally, the tool will proceed to install the module.")

??? example "Output"

@@ -177,7 +178,7 @@ The tool will first prompt you to specify a repository type.
Modules repository
```

Press enter to accept the default response (`Pipeline`) and continue.
If so, press enter to accept the default response (`Pipeline`) and continue.

The tool will then offer to amend the configuration of your project to avoid this prompt in the future.

@@ -675,7 +676,7 @@ Since `cowpy` doesn't accept metadata tuples yet (we'll fix this in the next par

=== "After"

```groovy title="core-hello/workflows/hello.nf" linenums="26" hl_lines="16-17"
```groovy title="core-hello/workflows/hello.nf" linenums="26" hl_lines="16-17 20"
// emit a greeting
sayHello(ch_samplesheet)

@@ -700,7 +701,7 @@ Since `cowpy` doesn't accept metadata tuples yet (we'll fix this in the next par

=== "Before"

```groovy title="core-hello/workflows/hello.nf" linenums="26"
```groovy title="core-hello/workflows/hello.nf" linenums="26" hl_lines="17"
// emit a greeting
sayHello(ch_samplesheet)

@@ -722,7 +723,7 @@ Since `cowpy` doesn't accept metadata tuples yet (we'll fix this in the next par

The `.map{ meta, file -> file }` operation extracts the file from the `[metadata, file]` tuple produced by `CAT_CAT` into a new channel, `ch_for_cowpy`.

Then it's just a matter of passing `ch_for_cowpy` to `cowpy` instead of `collectGreetings.out.outfile`.
Then it's just a matter of passing `ch_for_cowpy` to `cowpy` instead of `collectGreetings.out.outfile` in that last line.
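In other words, the tail end of the workflow body ends up looking something like this minimal sketch (the `file_out` emit name and the `params.character` argument are assumptions based on the surrounding text, not verbatim course code):

```groovy
// Pull just the file out of CAT_CAT's [meta, file] tuple
ch_for_cowpy = CAT_CAT.out.file_out.map { meta, file -> file }

// Generate ASCII art from the concatenated greetings
cowpy(ch_for_cowpy, params.character)
```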

!!! note

4 changes: 2 additions & 2 deletions docs/hello_nf-core/04_make_module.md
@@ -736,7 +736,7 @@ To override the default `publishDir` directive, you can simply add your own dire

For example, you could override the default for a single process using the `withName:` selector, as in this example where we add a custom `publishDir` directive for the 'cowpy' process.

```groovy title="core-hello/conf/modules.config" linenums="13" hl_lines="6-8"
```groovy title="core-hello/conf/modules.config" linenums="13" hl_lines="8-10"
process {
publishDir = [
path: { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" },
@@ -833,7 +833,7 @@ Each file serves a specific purpose:

!!! tip "Learn more about testing"

The generated test file uses nf-test, a testing framework for Nextflow pipelines and modules. To learn how to write and run these tests, see the [nf-test side quest](../../side_quests/nf_test/).
The generated test file uses nf-test, a testing framework for Nextflow pipelines and modules. To learn how to write and run these tests, see the [nf-test side quest](../side_quests/nf_test.md).

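As a rough illustration of how such a generated test could be run (the module path and profile are assumptions, not taken from this page):

```bash
# Run the generated nf-test for the local module, using Docker for the process container
nf-test test modules/local/cowpy/tests/main.nf.test --profile docker
```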
The generated `main.nf` includes all the patterns you just learned, plus some additional features:
