Generate workflow environment
Load the workflow environment with sample data into your current working directory. The sample data are described here.
```r
library(systemPipeRdata)
genWorkenvir(workflow = "varseq")
setwd("varseq")
```
Alternatively, this can be done from the command-line as follows:
```sh
Rscript -e "systemPipeRdata::genWorkenvir(workflow='varseq')"
```
In the workflow environments generated by `genWorkenvir`, all data inputs are stored in the `data/` directory and all analysis results are written to a separate `results/` directory, while the `systemPipeVARseq.Rmd` script and the `targets` file are expected to be located in the parent directory. The R session is expected to run from this parent directory. Additional parameter files are stored under
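Under this layout, a freshly generated `varseq` environment looks roughly as follows (only the files and directories named above are shown; the exact contents vary):

```
varseq/                    # parent directory; run the R session from here
├── systemPipeVARseq.Rmd   # workflow script
├── targets                # targets file with sample metadata
├── data/                  # all data inputs
└── results/               # all analysis results
```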
To work with real data, users should organize their own data in the same way and replace the test data with their own. To rerun an established workflow on new data, the initial targets file along with the corresponding FASTQ files are usually the only inputs the user needs to provide.
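A targets file is a tab-delimited table with one row per sample, where comment lines start with `#`. The sketch below writes and reads a minimal example; the column names follow common systemPipeR conventions, but the exact columns of the real VAR-Seq targets file may differ.

```r
# Write a minimal illustrative targets file to a temporary location.
targets_path <- tempfile(fileext = ".txt")
writeLines(c(
  "# Project ID: VAR-Seq example (illustrative)",
  "FileName\tSampleName\tFactor",
  "./data/S1_1.fastq.gz\tS1\tM",
  "./data/S2_1.fastq.gz\tS2\tA"
), targets_path)

# Read it back, skipping comment lines; the first non-comment line is the header.
targets <- read.delim(targets_path, comment.char = "#")
targets
```

For a real analysis, the same `read.delim()` call on the project's targets file lets you verify sample names and paths before launching the workflow.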
Now open the R Markdown script `systemPipeVARseq.Rmd` in your R IDE (e.g. Vim-R or RStudio) and run the workflow as outlined below.
Run R session on compute node
After opening the Rmd file of this workflow in Vim and attaching a connected R session via the F2 (or other) key, use the following command sequence to run your R session on a compute node.
```sh
q("no") # closes R session on head node
srun --x11 --partition=short --mem=2gb --cpus-per-task 4 --ntasks 1 --time 2:00:00 --pty bash -l
module load R/3.3.0
R
```
Now check whether your R session is running on a compute node of the cluster and assess your environment.
```r
system("hostname") # should return name of a compute node starting with i or c
getwd()            # checks current working directory of R session
dir()              # returns content of current working directory
```
The systemPipeR package needs to be loaded to perform the analysis steps shown in this report (H Backman et al., 2016).
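The package is loaded in the usual way; the guard below only makes the sketch safe to run on systems where the package has not been installed yet.

```r
# systemPipeR is a Bioconductor package; if it is missing, install it first
# with BiocManager::install("systemPipeR").
spr_available <- requireNamespace("systemPipeR", quietly = TRUE)
if (spr_available) {
  library(systemPipeR)
}
```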
If applicable, users can load custom functions not provided by systemPipeR. Skip this step if this is not the case.
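If such custom functions are kept in a separate script, they can be sourced into the session as sketched below; the file name `custom_fct.R` is hypothetical, not part of the workflow.

```r
# Source project-specific helper functions from a local script, if present.
# Nothing happens when the file is absent, so this step is safe to keep.
custom_script <- "custom_fct.R"
if (file.exists(custom_script)) {
  source(custom_script)
}
```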