
I want to develop and test scripts and workflows

The instructions in each section include links to the relevant pages in the documentation. Links are tagged as:

  • Tutorials
  • Tools - descriptive
  • Data - descriptive
  • Pre-made workflows
  • Reference lists/tables

Import material to the TRE

You can bring your own software, and data to compare against, into the TRE using either Airlock or containers. Singularity is installed on the HPC; you can use it to work with containers pulled from Docker Hub or Sylabs via our Artifactory re-routing.
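As a rough sketch of the container route, Singularity can pull and run images referenced by `docker://` or `library://` URIs. The image names below are generic illustrations, not specific Genomics England endpoints, and the exact Artifactory routing on the HPC may differ.

```shell
# Illustrative only: image names are examples, not TRE-specific endpoints.

# Pull a container image published on Docker Hub:
singularity pull docker://ubuntu:22.04

# Or pull from the Sylabs container library:
singularity pull library://alpine:latest

# Run a command inside the resulting image file:
singularity exec ubuntu_22.04.sif echo "hello from the container"
```

The `pull` step writes a `.sif` image file into the current directory, which you can then `exec` or `run` from your pipeline scripts.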

Working on the HPC

You will need to work on the HPC for any large-scale analyses. You can learn more about the HPC and how to access it in the documentation.

You will find many common bioinformatics tools installed in the HPC, which you can incorporate into your pipelines.
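On many HPC clusters, pre-installed tools are exposed through an environment-modules system. The commands below are a hedged sketch of that pattern; the tool names are hypothetical, and the exact mechanism on this HPC may differ, so check the documentation for how installed software is made available.

```shell
# Assumes an environment-modules setup; module names are hypothetical.
module avail            # list the tools installed on the cluster
module load samtools    # make a tool available in your session
samtools --version      # confirm the tool is now on your PATH
```

Loading the modules you need at the top of a batch script keeps your pipelines reproducible across sessions.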

There are folders on the HPC for your GECIP domain or Discovery Forum. You should use your relevant folder as your working directory. These folders are also accessible from the desktop.
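A minimal sketch of setting up a working directory inside your folder; the path here is a stand-in (a temporary directory), since the real domain-folder path is specific to your GECIP domain or Discovery Forum.

```shell
# Stand-in for your domain folder: substitute the real path shown
# in your environment.
DOMAIN_FOLDER=$(mktemp -d)

# Create a project subdirectory and work from there:
cd "$DOMAIN_FOLDER"
mkdir -p my_analysis
cd my_analysis
pwd
```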

Create pipelines

You can analyse and combine the available data in any way you choose, using any of the programming languages provided on the HPC. We also provide conda environments with Python and R libraries.
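A typical conda workflow looks like the sketch below. The environment name is hypothetical; list the environments actually provided before activating one.

```shell
# Environment name is hypothetical: run `conda env list` first to see
# which environments are actually provided.
conda env list                 # show the available environments
conda activate py_analysis     # activate a (hypothetical) Python environment
python -c 'import sys; print(sys.version)'
conda deactivate               # return to the base shell when done
```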

A number of coding tools are available, including VS Code, RStudio and Jupyter notebooks.


The only way to export any scripts or pipelines you build is using Airlock. It is your responsibility to ensure your export conforms to the Airlock rules and does not contain any identifying data.