Import a Custom Pipeline
Bioinformaticians can import and configure customised pipelines by selecting the My Pipelines tab and clicking the Import Pipeline button in the top-right corner of the dashboard.
In the Select the Type window, click Custom, then Continue, and follow the steps below.
Step 1: General
- Name your pipeline and provide a brief text summary.
- Under About, you can import a markdown file that provides a detailed summary of your pipeline.
- The default Badge is set to Custom.
- Next, enter the Tags for your pipeline, and input at least one Key and Value.
- Finally, select the Category to which your pipeline belongs from the options provided in the drop-down menu.
Step 2: Source
- Enter the name of your Repository (e.g. git).
- The Source defaults to Inline.
- Click Next.
Step 3: Datasets
- Select Add New Dataset. In the drop-down, search for the name of the dataset you'd like to connect to your pipeline (e.g. refdata-1).
- Add more datasets as required and click Next.
Step 4: Parameters
- For each parameter that you'd like to add for your pipeline, Name it, select the Type from the drop-down menu (String, Integer, Boolean or File), and add text under Help to describe what the parameter is meant for.
For example, if your pipeline configuration requires the user to upload an input file, your first parameter may be:
- Name: Input CSV
- Type: File
On selecting File, you will be prompted to choose between two options, Upload and Browse, based on how you'd like the user to provide their file input.
If you select Upload, specify the file extension or Supported File Type from the drop-down menu. If, for example, you select .csv, you can enter one or more Header and Sample values for your pipeline's validation (see the sketch at the end of this step). Next, fill out the Help box by describing what this parameter is for.
If you select Browse, tick the Directory Only checkbox if the user needs to specify a directory for the input file. If not, you can instead select the Dataset that the user may use for their input file. Fill out the Help box by describing what this parameter is for.
- Select Add New Parameter as required.
- For parameters where the Type is String, Boolean, or Integer, you will be prompted to follow the same Browse steps as above to specify the dataset.
- Additionally, select the Field Type for each parameter, based on whether the pipeline's user should Input their own value or select one from a list of Dropdown values that you specify.
For example, if your pipeline requires a pre-attached dataset, such as a reference map, then your Add New Parameter fields might look like this:
- Name: Reference
- Type: File
- Browse or Upload: Browse
- Directory Only: No
- Dataset: Name of your attached dataset (e.g. refdata-1)
- Click Next when all parameters have been added.
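To make the Header and Sample fields concrete, here is a minimal sketch of the kind of header check that CSV validation implies. It is an illustration only, not the platform's actual validation code; the file name input.csv and the columns sample_id, fastq_1, and fastq_2 are hypothetical.

```bash
#!/usr/bin/env bash
# Illustrative only: the kind of header check implied by the Header/Sample
# fields in Step 4. The file name and column names below are hypothetical.
set -euo pipefail

expected_header="sample_id,fastq_1,fastq_2"   # the Header values you entered
input_csv="input.csv"                         # the file the user uploads

actual_header=$(head -n 1 "$input_csv" | tr -d '\r')

if [[ "$actual_header" != "$expected_header" ]]; then
  echo "Validation failed: expected header '$expected_header'," \
       "got '$actual_header'" >&2
  exit 1
fi
echo "Header OK: $input_csv matches the expected columns."
```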
Step 5: Steps
- Specify the Name for each pipeline step, and configure the step's CPU/GPU settings.
- Under Capacity, specify the CPU and Memory capacity required for individual steps.
- Check GPU only if the pipeline step needs to run on a GPU.
- Under Advanced, specify the environment variables (Env) and the arguments (Args) that need to be passed to validate your pipeline's input parameters.
- Enter the Key and Value for each.
- Optional: In the Script box, provide a Bash script that you would like to execute as part of the step (a sketch follows this list).
- Add additional steps as required by clicking Add New Step.
- Click Next when all steps have been added.
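As a hedged illustration of how Env, Args, and the Script box fit together, the sketch below reads one environment variable and one argument. The variable name INPUT_CSV and the record count it computes are hypothetical, chosen to match the Input CSV example from Step 4.

```bash
#!/usr/bin/env bash
# Illustrative step script only. INPUT_CSV stands in for a Key you set
# under Env; "$1" stands in for a value you passed under Args.
set -euo pipefail

echo "Running step with input: ${INPUT_CSV:?Env variable INPUT_CSV not set}"
echo "First argument passed via Args: ${1:-<none>}"

# A step body might, for example, count the records in the input file:
record_count=$(($(wc -l < "$INPUT_CSV") - 1))   # exclude the header row
echo "Input contains $record_count records."
```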
Step 6: Visualization
- Select Add New Visualization App.
- Select the Name of your visualization app from the drop-down menu.
- Enter the Display Name for the visualization app.
Step 7: Permission
- Specify permissions for each service given in the window.
- For example, check whether the user has permission to View, Check/Edit, or Delete each service (S3, Batch, and Logs).
Step 8: Review
- The final window prompts you to Review your pipeline specifications.
- After reviewing, Submit your pipeline.
- Under the My Pipelines tab, you will see your newly imported custom pipeline.
- Proceed to Version and Publish your pipeline.