Project Data Management and Run Instructions

Big data space on HPCC

All larger data sets of the course projects will be organized in a big data space under /bigdata/gen242/<user_name>. Within this space, each student will work in a subdirectory named after their project:

  • /bigdata/gen242/<user_name>/<github_user_name>_project

Project GitHub repositories

Students will work on their course projects within GitHub repositories, one for each student. These project repositories are private and have been shared with each student. To populate a course project with an initial project workflow, please follow the instructions given in the following section.

Generate workflow environment with real project data

  1. Log in to the HPCC cluster and set your working directory to your bigdata space (/bigdata/gen242/<user_name>)
  2. Clone the GitHub repository for your project with git clone ... (URLs listed here) and then cd into this directory. As mentioned above, the project GitHub repos follow this naming convention: <github_user_name>_project.
  3. Generate the workflow environment for your project on the HPCC cluster with genWorkenvir from systemPipeRdata (see the code sketch after this list).
  4. Next, cd into the directory of your workflow, delete its default data and results directories, and then substitute them with empty directories outside of your project GitHub repos as follows (<workflow> needs to be replaced with the actual workflow name):
    rm -rf data results
    mkdir ../../<workflow>_data
    mkdir ../../<workflow>_results
    
  5. Within your workflow directory create symbolic links pointing to the new directories created in the previous step. For instance, the projects using the RNA-Seq workflow should create the symbolic links for their data and results directories like this (<user_name> and <workflow> need to be replaced with your user name and workflow name):
    ln -s /bigdata/gen242/<user_name>/<workflow>_data data
    ln -s /bigdata/gen242/<user_name>/<workflow>_results results
    
  6. Add the workflow directory to the GitHub repository of your project with git add -A and then run commit and push as outlined in the GitHub instructions of this course here. After this, check whether the workflow directory and its content show up in your project’s online repo on GitHub. Very important: make sure that the data and results directories are empty at this point. If not, investigate why and fix the problem in the corresponding step above.
  7. Download the FASTQ files of your project with getSRAfastq (see below) to the data directory of your project.
  8. Generate a proper targets file for your project where the first column(s) point(s) to the downloaded FASTQ files. In addition, provide sample names matching the experimental design (columns: SampleName and Factor). More details about the structure of targets files are provided here. Ready-to-use targets files for both the RNA-Seq and ChIP-Seq projects can be downloaded as tab-separated (TSV) files from here. Alternatively, one can download the corresponding Google Sheets with the read_sheet function from the googlesheets4 package (RNA-Seq GSheet and ChIP-Seq GSheet).
  9. Inspect and adjust the .param files you will be using. For instance, make sure the software modules you are loading and the path to the reference genome are correct.
  10. Every time you start working on your project, cd into the directory of the repository and then run git pull to get the latest changes. When you are done, commit and push your changes back to GitHub with git commit -am "some edits"; git push -u origin main.
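
For step 3, the following is a minimal sketch of generating the workflow environment in R (shown for the RNA-Seq workflow; the ChIP-Seq project would use workflow="chipseq" instead):

library(systemPipeRdata)
genWorkenvir(workflow="rnaseq") # creates an rnaseq workflow directory under the current working directory
# genWorkenvir(workflow="chipseq") # ChIP-Seq version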

Download of project data

After logging in to one of the compute nodes via srun, open R from within the GitHub repository of your project and then run the following code sections, but only those that apply to your project.

FASTQ files from SRA

Choose FASTQ data for your project

sraidv <- paste("SRR0388", 45:51, sep="") # 7 SRA IDs (ChIP-Seq project)
sraidv <- paste("SRR4460", 27:44, sep="") # 18 SRA IDs (RNA-Seq project)

Load libraries and modules

library(systemPipeR)                                                                                                                                                                
moduleload("sratoolkit/3.0.0")                                                                                                                                                      
system("vdb-config --prefetch-to-cwd") # sets download default to current directory                                                                                          
# system('prefetch --help') # helps to speed up fastq-dump
# system('vdb-config -i') # allows to change SRA Toolkit configuration; instructions are here: https://bit.ly/3lzfU4P
# system('fastq-dump --help') # below uses this one for backwards compatibility                                                                                                     
# system('fasterq-dump --help') # faster than fastq-dump

Define download function

The following function downloads and extracts the FASTQ files for each project from SRA. Internally, it uses the prefetch and fastq-dump utilities of the SRA Toolkit from NCBI. The faster fasterq-dump alternative (see comment line below) is not used here for historical reasons.

getSRAfastq <- function(sraid, threads=1) {
    system(paste("prefetch", sraid)) # makes download faster
    system(paste("vdb-validate", sraid)) # checks integrity of the downloaded SRA file
    system(paste("fastq-dump --split-files --gzip", sraid)) # gzip option makes it slower but saves storage space
    # system(paste("fasterq-dump --threads", threads, "--split-files --progress", sraid, "--outdir .")) # faster alternative to fastq-dump that makes use of the threads argument
    unlink(x=sraid, recursive=TRUE, force=TRUE) # deletes sra download directory
}

To stop the loop after a failure is detected by vdb-validate, chain the commands with the && operator, e.g. prefetch sraid && vdb-validate sraid && fastq-dump sraid, so that each command only runs if the preceding one succeeded.
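
The following is a minimal sketch of such a chained variant (the function name getSRAfastqChained and the exit-status check are illustrative assumptions, not part of the course code):

getSRAfastqChained <- function(sraid) {
    cmd <- paste("prefetch", sraid, "&&",
                 "vdb-validate", sraid, "&&",
                 "fastq-dump --split-files --gzip", sraid)
    exit_status <- system(cmd) # non-zero if any of the chained commands fails
    if(exit_status != 0) stop("Download or validation failed for: ", sraid) # aborts the loop
    unlink(x=sraid, recursive=TRUE, force=TRUE) # deletes sra download directory
}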

Run download

Note, the following performs the download in serialized mode for the chosen data set and saves the extracted FASTQ files to the data directory of the workflow, which is set as the current working directory below.

mydir <- getwd(); setwd("data")
for(i in sraidv) getSRAfastq(sraid=i)
setwd(mydir)

Alternatively, the download can be performed in parallelized mode with BiocParallel. Please run this version only on one of the compute nodes.

mydir <- getwd(); setwd("data")
# library(BiocParallel) # provides bplapply and MulticoreParam
# bplapply(sraidv, getSRAfastq, BPPARAM = MulticoreParam(workers=4))
setwd(mydir)

Download reference genome and annotation

The following downloadRefs function downloads the Arabidopsis thaliana genome sequence and GFF file from the TAIR FTP site. It also assigns consistent chromosome identifiers so that they are the same in both the genome sequence and the GFF file. This is important for many analysis routines, such as the read counting in the RNA-Seq workflow.

downloadRefs <- function(rerun=FALSE) {
    if(rerun==TRUE) {
        library(Biostrings)
        download.file("https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas.gz", "./data/tair10.fasta.gz")
        R.utils::gunzip("./data/tair10.fasta.gz")
        dna <- readDNAStringSet("./data/tair10.fasta")
        names(dna) <- paste(rep("Chr", 7), c(1:5, "M", "C"), sep="") # Fixes chromosome ids
        writeXStringSet(dna, "./data/tair10.fasta")
        download.file("https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_gff3/TAIR10_GFF3_genes.gff", "./data/tair10.gff")
        download.file("https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_functional_descriptions", "./data/tair10_functional_descriptions")
    }
}

After importing/sourcing the above function, execute it as follows:

downloadRefs(rerun=TRUE) 
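
As an optional sanity check (this snippet is an illustration added here, not part of the course code), one can verify that the chromosome identifiers of the downloaded FASTA and GFF files agree:

library(Biostrings)
dna <- readDNAStringSet("./data/tair10.fasta")
gff <- read.delim("./data/tair10.gff", header=FALSE, comment.char="#")
setdiff(unique(gff$V1), names(dna)) # should return character(0) if the identifiers are consistent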

Workflow Rmd file

To run the actual data analysis workflows, the RNA-Seq project can use the systemPipeRNAseq.Rmd file obtained from the genWorkenvir(workflow='rnaseq') call directly. However, the ChIP-Seq group should use the Rmd linked in the top right corner of the ChIP-Seq tutorial here and then name the downloaded file systemPipeChIPseq.Rmd. To simplify this, the ChIP-Seq group members can run the following download command from the command-line within their chipseq workflow directory.

wget https://raw.githubusercontent.com/tgirke/GEN242/main/static/custom/spWFtemplates/spchipseq.Rmd -O systemPipeChIPseq.Rmd

This will assign the proper file name and overwrite the preloaded version of this file that has the same name.

Recommendations for running workflows

Run instructions

The following provides recommendations and additional options to consider for running and modifying workflows. This also includes parallelization settings for the specific data used by the class projects. Note, additional details can be found in this and other sections of the workflow introduction tutorial here. Importantly, the following should be run from within an srun session.

library(systemPipeR)                                                                                                                                                                
sal <- SPRproject() # when running a WF for first time                                                                                                                                      
sal                                                                                                                                                                                 
sal <- importWF(sal, file_path = "systemPipeRNAseq.Rmd") # populates sal with WF steps defined in Rmd                                                                                                                      
sal
sal <- SPRproject(resume=TRUE) # when restarting a WF, skip above steps and resume WF with this command                                                                                                                                               
getRversion() # should be 4.2.2. Note, R version can be changed with `module load ...`                                                                                                                                                     
system("hostname") # should return number of a compute node; if not close Nvim-R session, log in to a compute node with srun and then restart Nvim-R session                                                                                                                                                                     
# sal <- runWF(sal) # runs WF serialized. Not recommended since this takes much longer than the parallel mode introduced below, which takes advantage of the resource allocation
resources <- list(conffile=".batchtools.conf.R",                                                                                                                                    
                  template="batchtools.slurm.tmpl",                                                                                                                                 
                  Njobs=18, # chipseq should use here number of fastq files (7 or 8)                                                                                                                                                        
                  walltime=180, ## minutes                                                                                                                                          
                  ntasks=1,                                                                                                                                                         
                  ncpus=4,                                                                                                                                                          
                  memory=4096, ## Mb                                                                                                                                                
                  partition = "gen242"                                                                                                                                              
                  )                                                                                                                                                                 
## For RNA-Seq project use:
sal <- addResources(sal, step = c("preprocessing", "trimming", "hisat2_mapping"), resources = resources) # parallelizes time consuming computations assigned to `step` argument                                                                           
## For ChIP-Seq project use:
sal <- addResources(sal, step = c("preprocessing", "bowtie2_alignment"), resources = resources)
sal <- runWF(sal) # runs entire workflow; specific steps can be executed by assigning their corresponding position numbers within the workflow to the `steps` argument (see ?runWF)                                                                                                                                                               
sal <- renderReport(sal) # after workflow has completed, render Rmd to HTML report (default name is SPR_Report.html) and view it in a web browser, which requires a symbolic link in your ~/.html folder
rmarkdown::render("systemPipeRNAseq.Rmd", clean=TRUE, output_format="BiocStyle::html_document") # Alternative approach for rendering report from Rmd file instead of sal object
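
The following is a brief sketch illustrating the steps argument of runWF mentioned above, along with the statusWF and plotWF accessors for inspecting run status (their use here is an addition to the instructions above):

sal <- runWF(sal, steps=c(1:3)) # example: runs only the first three steps of the workflow
statusWF(sal) # returns the run status of each workflow step
plotWF(sal) # plots the workflow topology along with the status of each step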

Modify a workflow

If needed, one can modify existing workflow steps in a pre-populated SYSargsList object, and a potentially already executed WF, with the replaceStep(sal) <- replacement function. The following gives an example where step number 3 in a SYSargsList (sal) object will be updated with modified or new code. Note, this is a generalized example where the user needs to insert the code lines and also adjust the values assigned to the arguments step_name and dependency. Additional details on this topic are available in the corresponding section of systemPipeR’s introductory tutorial here.

replaceStep(sal, step=3) <- LineWise(
    code = {
        << my modified code lines >>
    },
    step_name = << "my_step_name" >>,
    dependency = << "my_dependency" >>)

Since step_names need to be unique, one should avoid using the same step_name as before. If the previous name is used, a default name will be assigned. Rerunning the assignment will then allow assigning the previous name. This behavior is enforced for version tracking. Subsequently, one can view and check the code changes with codeLine(), and then rerun the corresponding step (here 3) as follows:

codeLine(stepsWF(sal)$my_step_name)
runWF(sal, steps=3)

Note, any step in a workflow can only be run in isolation if its expected input exists (see dependency).

Adding steps to a workflow

New steps can be added to the Rmd file of a workflow by inserting new R Markdown code chunks containing the usual appendStep<- syntax and then creating a new SYSargsList instance with importWF that contains the new step(s). To add steps to a pre-populated SYSargsList object, one can use the after argument of the appendStep<- function. The following example will add a new step after position 3 of the corresponding sal object. This can be useful if a longer workflow has already been completed and one only wants to make some refinements without re-running the entire workflow.

appendStep(sal, after=3) <- << my_step_code >>
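
For an R-based step, the assigned value can be a LineWise instance, analogous to the replaceStep example above. The following generalized sketch uses the same placeholder convention, where the placeholders need to be replaced with actual values:

appendStep(sal, after=3) <- LineWise(
    code = {
        << my new code lines >>
    },
    step_name = << "my_new_step_name" >>,
    dependency = << "my_dependency" >>)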

Submit workflow from command-line to cluster

In addition to running workflows within interactive R sessions, after logging in to a compute node with srun, one can execute them entirely from the command-line by including the relevant workflow run instructions in an R script. The R script can then be submitted via a Slurm submission script to the cluster. The following gives an example for the RNA-Seq workflow (the ChIP-Seq version requires only minor adjustments). Additional details on this topic are available in the corresponding section of systemPipeR’s introductory tutorial here.
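
A minimal sketch of what such an R script could contain is given below. Its content mirrors the run instructions above; the file name wf_run_script.R is a hypothetical choice, and the accompanying Slurm submission script (wf_run_script.sh, used in the sbatch call below) is not shown here.

#!/usr/bin/env Rscript
## wf_run_script.R (hypothetical name): runs the workflow non-interactively
library(systemPipeR)
sal <- SPRproject() # use SPRproject(resume=TRUE) when restarting
sal <- importWF(sal, file_path="systemPipeRNAseq.Rmd") # populates sal with WF steps
resources <- list(conffile=".batchtools.conf.R", template="batchtools.slurm.tmpl",
                  Njobs=18, walltime=180, ntasks=1, ncpus=4, memory=4096, partition="gen242")
sal <- addResources(sal, step=c("preprocessing", "trimming", "hisat2_mapping"), resources=resources)
sal <- runWF(sal) # runs entire workflow
sal <- renderReport(sal) # renders HTML report after completion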

To test this out, users can generate in their user account of the cluster a workflow environment populated with the toy data as outlined here. After this, it is best to create within the workflow directory a subdirectory, e.g. called cl_sbatch_run, and then save the two files mentioned above (the R script with the workflow run instructions and the Slurm submission script wf_run_script.sh) to this subdirectory. Next, the parameters in both files need to be adjusted to match the type of workflow and the required computing resources. This includes the name of the Rmd file and scheduler resource settings such as partition, Njobs, walltime, memory, etc. After all relevant settings have been set correctly, one can execute the workflow with sbatch within the cl_sbatch_run directory as follows:

sbatch wf_run_script.sh

After the submission to the cluster, one should usually check its status and progress with squeue -u <username> as well as by monitoring the content of the slurm-<jobid>.out file generated by the scheduler in the same directory. This file contains most of the STDOUT and STDERR generated by a cluster job. Once everything is working on the toy data, users can run the workflow on the real data the same way.
