Here is the simplest process:
jobs:
  hello_world:
    steps:
      -
        bash: echo Hello World
It is a process with one job, called hello_world, which outputs Hello World.
The shell script can be multi-line:
jobs:
  hello_world:
    steps:
      -
        bash: |
          echo This is a multi-line
          echo script. You could do a great deal with
          echo in a single step.
As well as bash: there are sh: and python:, and you can add others (see the shells table below).
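For example, a Job using sh: instead of bash: might look like this (a minimal sketch; hello_world_sh is just an illustrative name, and sh: is one of the default shells listed later on this page):

jobs:
  hello_world_sh:
    steps:
      -
        sh: echo Hello World from sh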
A Job can have many steps - they happen one after the other.
jobs:
  hello_world:
    steps:
      -
        python: |
          with open('hw.txt', 'w') as f:
              f.write('Hello World\n')
      -
        bash: cat hw.txt
Each step can be named to clarify your script's progress:
jobs:
  hello_world:
    steps:
      -
        name: Prepare message
        python: |
          with open('hw.txt', 'w') as f:
              f.write('Hello World\n')
      -
        name: Display message
        bash: cat hw.txt
In your Job you have a number of values you can use, such as user.username.
jobs:
  hello_world:
    steps:
      -
        name: Hello World using bash
        bash: |
          echo Hello World
          echo This was run by {user.username}
user.username is enclosed in { and }. Inside the {} is a Python expression. The values include:

user.username - the username for this Job's Role. For example:
  svn co --username {user.username} --password "{user.password}" https://svn.svnpplace.com/joebloggs_repo/Trunk
user.password - the password for this Job's Role.

Other values give you the api_id of the Blob where the results of the Run will be stored, and that Blob itself; the api_id of the Role which gives permissions to the Job; and the api_id of the Blob where results from this Job will be stored, and that Blob itself.

{var}
You may need to store values from one step to use in a later step. This is done with {var}.
echo 57 > {var}/my_value
{var} is a directory where files and directories can be stored for use later. These can be accessed as values, or as files:
echo {var.my_value}
cat {var}/my_value
They are always strings:
echo my_value bigger than 23 is {int(var.my_value) > 23}
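Putting that together, a minimal sketch of a Job that stores a value in one step and reads it back in the next (the Job and step names, and the value 57, are illustrative):

jobs:
  check_size:
    steps:
      -
        name: Store a value
        bash: echo 57 > {var}/my_value
      -
        name: Compare it
        bash: echo my_value bigger than 23 is {int(var.my_value) > 23}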
You can store whole trees of files in {var}:
cc -c *.c -o {var}/obj
Note the earlier Job's results:
jobs:
  configure:
    steps:
      -
        bash: echo myproduct > {var}/conf_prod
    results:
      product: conf_prod
Ensure the later Job starts after the earlier one:
jobs:
  configure:
    ...
  compile:
    after:
      - configure
    steps:
      ...
Only Jobs which this one is after will have their results available.
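Since after: is a list, a Job can presumably wait for several earlier Jobs. A sketch (the test and package Job names are illustrative):

jobs:
  configure:
    ...
  test:
    ...
  package:
    after:
      - configure
      - test
    steps:
      ...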
Use the earlier Job's values:
jobs:
  configure:
    ...
  compile:
    ...
    steps:
      -
        workingdirectory: /home/process/{job.configure.product}
        ...
{job} works like {var}.
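Putting the three parts together, a minimal sketch of the whole two-Job process (the echo in the compile step is just an illustrative command, and /home/process/myproduct is assumed to exist):

jobs:
  configure:
    steps:
      -
        bash: echo myproduct > {var}/conf_prod
    results:
      product: conf_prod
  compile:
    after:
      - configure
    steps:
      -
        # {job.configure.product} expands to myproduct here
        workingdirectory: /home/process/{job.configure.product}
        bash: echo Compiling {job.configure.product}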
Trees of files can be sent:
jobs:
  compile:
    steps:
      -
        bash: gcc -c *.c -o {var}/obj
    results:
      objects: obj
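A later Job that is after compile can then read the tree back. Assuming {job.compile.objects} behaves like a directory, just as {var}/obj does, a sketch (the link Job name is illustrative):

jobs:
  compile:
    ...
  link:
    after:
      - compile
    steps:
      -
        bash: ls {job.compile.objects}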
Each step can take further settings alongside the shell script:

steps:
  -
    name: Create output directory
    condition: job.configure.build_product
    bash: mkdir -p {var}/obj
    ignoreerror: True
  -
    name: compile
    condition: job.configure.build_product
    workingdirectory: /home/process/{job.configure.product_name}
    timeout: 5*60
    bash: cc *.cpp -o {var}/obj
The settings are:

condition: <expression>
  condition: int(var.earlier_result) < 0

workingdirectory: <string>
  workingdirectory: /home/process/{job.configure.whattobuild}
  bash: gcc -c....

ignoreerror: <expression>
  bash: exit 1
  ignoreerror: True

timeout: <expression>
How long, in seconds, to wait for the step:
  timeout: 5*60
If the step takes longer it is stopped, and the Job fails.
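For instance, a sketch combining these settings, where one step records a number under {var} and a later step only runs when it is negative (the step names, the file name earlier_result, and the value -3 are all illustrative):

steps:
  -
    name: Record remaining work
    bash: echo -3 > {var}/earlier_result
  -
    name: Clean up
    condition: int(var.earlier_result) < 0
    ignoreerror: True
    timeout: 5*60
    bash: echo Nothing left to do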
You can add shells of your own:
jobs:
  my_perl_job:
    shells:
      perl: perl -f {script}
    steps:
      -
        perl: |
          use strict;
          use warnings;
          print("Hello World\n");
The shells table sets the pattern for each shell; {script} is the name of the script file to run. At time of writing the default shells are:
shells:
  bash: bash --noprofile --norc -eo pipefail {script}
  python: python {script}
  sh: sh -e {script}
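Following the same pattern, you could add other interpreters. For example, a hypothetical node shell (assuming node is installed on the machine that runs the Job):

jobs:
  my_node_job:
    shells:
      node: node {script}
    steps:
      -
        node: |
          // Print a greeting from Node.js
          console.log('Hello World');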
You can also copy files between local files and Blobs.
Update a badge:
badge --repo {repo.api_id} --name operation --label operation --message working
Badge URL:
https://svnplace.com/myname/myrepo/badge/operation
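For example, a sketch of a Job whose final step updates that badge (assuming the badge command is available where the Job runs, and that myname/myrepo is your repository):

jobs:
  report:
    steps:
      -
        name: Update the status badge
        bash: badge --repo {repo.api_id} --name operation --label operation --message working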