Writing a Process

Here is the simplest process:

jobs:
    hello_world:
        steps:
            -
                bash: echo Hello World

It is a process with one job, called hello_world, which outputs Hello World.

The shell script can be multi-line:

jobs:
    hello_world:
        steps:
            -
                bash: |
                    echo This is a multi-line
                    echo script. You could do a great deal with
                    echo in a single step.

As well as bash:, there are sh: and python:, and you can add others (see Bash is not the Only Shell below).
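For example, the same Job can be written with sh: instead of bash::

```yaml
jobs:
    hello_world:
        steps:
            -
                sh: echo Hello World
```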

A Job can have many steps - they happen one after the other.

jobs:
    hello_world:
        steps:
            -
                python: |
                    with open('hw.txt', 'w') as f:
                        f.write('Hello World\n')
            -
                bash: cat hw.txt

Each step can be named to clarify your script's progress:

jobs:
    hello_world:
        steps:
            -
                name: Prepare message
                python: |
                    with open('hw.txt', 'w') as f:
                        f.write('Hello World\n')
            -
                name: Display message
                bash: cat hw.txt

Context and Variables

In your Job you have a number of context values you can use, such as user.name.

jobs:
    hello_world:
        steps:
            -
                name: Hello World using bash
                bash: |
                    echo Hello World
                    echo This was run by {user.name}

user.name is enclosed in { and }. Inside the {} is a Python expression. The following values are available:

process.url
The URL of the process being run.
user.name
The name of the user running the process.
user.password
The password to use with user.name for this Job's Role. For example:
svn co --username {user.name} --password "{user.password}" https://svn.svnplace.com/joebloggs_repo/Trunk
repo (.api_id, .urn)
Identifies the repo
repo.owning_user (.api_id, .urn, .username)
Who owns the repo
repo.root_directory (.api_id, .urn)
The repo's root directory
repo.root_directory.owning_user (.api_id, .urn, .username)
Who owns the repo's root directory
repo.root_directory.instance (.api_id, .urn, .owning_user (.api_id, .urn, .username))
The instance hosting the root directory
run.results.id
The api_id of the Blob where the results of the Run will be stored.
run.results.path
The path of the Blob where the results of the Run will be stored.
role.id
The api_id of the Role which gives permissions to the Job.
role.key
The key for the Job's Role.
role.secret
The secret for the Job's Role.
my.name
The name of the Job being run.
my.results.id
The api_id of the Blob where results from this Job will be stored.
my.results.path
The path of the Blob where results from this Job will be stored.
api.url
The URL to use for HTTP calls to the svnplace api. For your processes this will always be https://svnplace.com. During development testing of svnplace it may be set to something else, but on the customer facing website it won't change, and can be ignored.
api.host
The host to set in HTTP calls to the svnplace api. For your processes this will always be svnplace.com. As with api.url, it only varies during development testing of svnplace.
blob.url
The URL to use for HTTP calls to the svnplace blob store. For your processes this will always be https://blob.svnplace.com. As with api.url, it only varies during development testing of svnplace.
blob.host
The host to set in HTTP calls to the svnplace blob store. For your processes this will always be blob.svnplace.com. As with api.url, it only varies during development testing of svnplace.
script
The name of the script file being run for this step.
var
A special value for variables - a directory for storing values between steps. See Variables and {var} below.
job
A special value for results of previous Jobs. See Sending Between Jobs below.
others
Depending on what triggered the process, other values will be available.
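As a sketch, here is a Job that simply prints a few of these context values (it makes no api calls, it just echoes the values):

```yaml
jobs:
    show_context:
        steps:
            -
                name: Show some context values
                bash: |
                    echo Process: {process.url}
                    echo Run by: {user.name}
                    echo Results path: {run.results.path}
                    echo API endpoint: {api.url}
```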

Variables and {var}

You may need to store values from one step to use in a later step. This is done with {var}.

echo 57 > {var}/my_value

{var} is a directory where files and directories can be stored for use later. The contents can be accessed as values, or as files:

echo {var.my_value}
cat {var}/my_value

The values are always strings, so convert them as needed:

echo my_value bigger than 23 is {int(var.my_value) > 23}

You can store whole trees of files in {var}:

mkdir {var}/obj
cc -c *.c
mv *.o {var}/obj
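Putting {var} together - a minimal two-step Job that stores a value in one step and reads it back, both as a value and as a file, in the next (input.txt is a hypothetical file in the working directory):

```yaml
jobs:
    count_lines:
        steps:
            -
                name: Count lines
                bash: wc -l < input.txt > {var}/line_count
            -
                name: Report
                bash: echo input.txt has {var.line_count} lines
```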

Sending Between Jobs

  1. List the earlier Job's results in its results: section.

    jobs:
        configure:
            steps:
                -
                    bash: echo myproduct > {var}/conf_prod
            results:
                product: conf_prod
    
  2. Ensure the later Job starts after the earlier one.

    jobs:
        configure:
            ...
        compile:
            after:
                - configure
            steps:
                ...
    

    Only Jobs which this one is after will have their results available.

  3. Use the earlier Job's values:

    jobs:
        configure:
            ...
        compile:
            ...
            steps:
                -
                    workingdirectory: /home/process/{job.configure.product}
                    ...
    

    {job} works like {var}.
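Putting the three steps together:

```yaml
jobs:
    configure:
        steps:
            -
                bash: echo myproduct > {var}/conf_prod
        results:
            product: conf_prod
    compile:
        after:
            - configure
        steps:
            -
                bash: echo Building {job.configure.product}
```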

Trees of files can be sent:

jobs:
    compile:
        steps:
            -
                bash: |
                    mkdir {var}/obj
                    gcc -c *.c
                    mv *.o {var}/obj
        results:
            objects: obj
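A later Job can then read the whole tree. The path form below, {job}/compile/objects, is an assumption based on "{job} works like {var}" - check that it matches your runner's behaviour:

```yaml
jobs:
    compile:
        ...
    link:
        after:
            - compile
        steps:
            -
                bash: cc {job}/compile/objects/*.o -o myprog
```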

Additional Features of a Step

steps:
    -
        name: Create output directory
        condition: job.configure.build_product
        bash: mkdir -p {var}/obj
        ignoreerror: True
    -
        name: compile
        condition: job.configure.build_product
        workingdirectory: /home/process/{job.configure.product_name}
        timeout: 5*60
                bash: c++ *.cpp -o {var}/obj
condition: <expression>
Whether to run this step:
condition: int(var.earlier_result) < 0
workingdirectory: <string>
Where to run this step.
workingdirectory: /home/process/{job.configure.whattobuild}
bash: gcc -c....
ignoreerror: <expression>
Normally an error (a return code other than 0) from a step will stop the Job, causing it to fail; set ignoreerror to ignore the error and carry on.
bash: exit 1
ignoreerror: True
timeout: <expression>

How long, in seconds, to wait for the step:

timeout: 5*60

If the step takes longer it is stopped, and the Job fails.

Bash is not the Only Shell

jobs:
    my_perl_job:
        shells:
            perl: perl -f {script}
        steps:
            -
                perl: |
                    use strict;
                    use warnings;
                    print("Hello World\n");

The shells table sets the command pattern for each shell; {script} is replaced with the name of the script file to run. At the time of writing the default shells are:

shells:
    bash: bash --noprofile --norc -eo pipefail {script}
    python: python {script}
    sh: sh -e {script}
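Any interpreter available on the machine running the Job can be added the same way. For example, a hypothetical node shell, assuming node is installed there:

```yaml
jobs:
    my_node_job:
        shells:
            node: node {script}
        steps:
            -
                node: |
                    console.log('Hello World');
```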

Tools Available

blobcp

Copy files between the local filesystem & blobs.
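The exact options of blobcp are not documented here; as a purely hypothetical sketch, assuming a cp-style source and destination and that a blob can be named by its path:

```shell
# hypothetical syntax - check blobcp --help for the real options
blobcp build.log {run.results.path}/build.log
```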

badge

Update a badge:

badge --repo {repo.api_id} --name operation --label operation --message working

The badge is then available at:

https://svnplace.com/myname/myrepo/badge/operation
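The badge URL returns an image, so it can be embedded in, for example, a markdown README:

```markdown
![operation](https://svnplace.com/myname/myrepo/badge/operation)
```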