Starting a Pipeline

Automatic Triggers

1. Cron Jobs

Cron jobs are the most common way to trigger a pipeline. They are automatic, repeating triggers that run a pipeline on a defined time schedule.

Examples

  1. A cron job is configured to run the pipeline every 5 minutes, but it only fetches data with a write-date later than the last time the pipeline was triggered (see the sketch after this list).

  2. A cron job is configured to run the pipeline every month, migrating all data whose state is needs migrating from an external system and changing the state to synced.
    Hint: this can be more reliable than write/create times for external data
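
A minimal sketch of how example 1 could look inside a Python worker; the model name, the data key last_run, and how the last run time is persisted are assumptions, not something the cron job provides by itself:

# Hypothetical incremental fetch: only records changed since the last run.
last_run = data.get('last_run')
domain = [('write_date', '>', last_run)] if last_run else []
data['record_ids'] = env['product.product'].search(domain).ids
data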

Configure it

Open your pipeline and navigate to the Cron Jobs tab.

Create a new cron job and configure it as needed.

Besides a title and a list of pipelines to trigger, you can add several intervals on which to trigger them and specify the next execution time.

Advanced

Cron jobs can also be managed directly here, as opposed to only in the pipeline which they trigger. Click a cron job’s config link to view all of the job’s intervals and configure them individually at a more granular level.

Click any row to edit.

Cron jobs can also be paused or run manually here.

2. Method Triggers

Method triggers run a pipeline whenever a specific method is run on an Odoo model.

This type of trigger setup is used across several Zebroo modules, for example zSYNC, Odoo Flows and Data-Police.

Examples

A method trigger starts the pipeline:

  1. to sync new data across systems when a product is created (method: create), as illustrated after this list.

  2. to email users when someone writes an update to their own user information (method: write).

  3. to send an order confirmation when a sales order is confirmed (method: action_confirm).
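
Conceptually, the method trigger from example 1 hooks into any code path that runs the watched method; the model and values below are illustrative assumptions:

# A method trigger on create for product.product would start the pipeline
# with the newly created record as its input.
env['product.product'].create({'name': 'New Product'})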

Configure it

Open your pipeline and navigate to the Method Triggers tab.

Click on the ➕ icon to add filters to the data, if you don’t want to trigger the pipeline for all records of the chosen model.

Advanced

Link Expression:

The purpose of the link expression is to find the way to the record, which is the base for the pipeline.

This can typically be left with the default configuration, object, referencing the chosen model. It is useful to modify, however, when a method on a child record triggers the pipeline but the pipeline should use the whole parent record as its data.

For example:

Model: sale.order.line
Method: write
Link Expression: object.order_id

Here, modifying a sale order line triggers the pipeline as if the change had happened to the sale order itself. Hence object.order_id is passed as the link expression, rather than the default order line.
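
A short illustration of how this link expression resolves; the search and write below are assumptions purely for demonstration:

# Writing to a sale.order.line fires the method trigger.
order = env['sale.order'].search([], limit=1)    # any order, for illustration
line = order.order_line[:1]
line.write({'product_uom_qty': 2})
# object          -> line  (the record whose method ran)
# object.order_id -> order (the record the pipeline actually receives)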

3. Field Triggers

Field triggers work similarly to Method triggers, but are configured to specifically listen to given fields on the specified model.

Examples

  1. A pipeline is configured to listen for changes to the state field of a custom model. When the state changes to unsynced, the pipeline syncs the relevant data and changes the state to synced (sketched below).
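
A minimal sketch of the pipeline body for this example; the model name, the data key, and the external sync step are assumptions:

# Sync the record that fired the field trigger, then flip its state so the
# trigger does not fire again for it.
record = env['x.custom.model'].browse(data['id'])
if record.state == 'unsynced':
    # ... actual sync logic goes here ...
    record.write({'state': 'synced'})
data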

Configure it

Open your pipeline and navigate to the Field Triggers tab.

Click on the ➕ icon to add filters to the data, if you don’t want to trigger the pipeline for all records of the chosen model.

4. HTTP/S triggers

Pipelines can be triggered by HTTP requests, taking data from the request parameters and returning data as required.

Examples

  1. A pipeline is configured to expose a public GET endpoint on my.api/v1/products, and returns a list of existing products as a response, filtered according to any given query parameters.

  2. A pipeline is configured to expose a private, authenticated POST endpoint on my.api/v1/sales_order and triggers the creation of a new sales order according to the given request parameters and body.

Configure it

See Configuring a HTTP API.
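
An endpoint like the one in example 1 could then be called from any HTTP client. A minimal sketch; the host comes from the example, while the query parameter and the lack of authentication are assumptions:

import requests

# Fetch products, filtered by query parameters (parameter name assumed).
resp = requests.get('https://my.api/v1/products', params={'limit': 10}, timeout=30)
resp.raise_for_status()
products = resp.json()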

5. Parent Pipeline: Pipeline trigger workers

Pipeline trigger workers allow fine-grained control over how a sub-pipeline is triggered and how the parent pipeline should behave.

Examples

  1. A parent pipeline System Migration 08.2025 automatically triggers the sub-pipeline Migrate Users.

  2. The sub-pipeline Create Product Variants is automatically triggered from the parent pipeline Create Products, as well as from the parent pipeline Create Bills of Materials.

  3. A sub-pipeline Delete All User Data is automatically triggered from the parent pipeline Close User Account.

Configure it

Open your parent pipeline and add a Pipeline trigger worker.

Open the worker and configure it with the name of the sub-pipeline to be called, the modes, and the data fields to be passed to it.

Mode:

  1. Wait until done: parent pipeline waits until successful completion, but no data is retrieved from the sub-pipeline.

  2. Return data: parent pipeline waits until successful completion and expects the sub-pipeline to return some data. In this case the parent pipeline’s keep input option can be useful to prevent losing other pipeline data.

Call Mode:

  1. Start and execute: sub-pipeline runs synchronously

  2. Start in background: sub-pipeline runs asynchronously

Data field:

In some cases not all data being passed through the parent pipeline should be given to the sub-pipeline. This field specifies which subset of data (if any) should be given.
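
For illustration, with Data field set to order (all names here are hypothetical), the sub-pipeline receives only that subset:

# Parent pipeline data (hypothetical):
parent_data = {'order': {'id': 42, 'lines': [101, 102]}, 'audit': {'user': 7}}
# With Data field = 'order', the sub-pipeline's input is just:
sub_pipeline_input = parent_data['order']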

6. Parent Pipeline: Python workers

Python workers can be a useful way to trigger other pipelines via code.

Examples

  1. Several other pipelines need to be triggered in a concise way. Python triggers allow multiple pipelines to be triggered from a single worker, rather than needing a specific pipeline trigger worker for each.

  2. Other pipelines need to be triggered with a subset of the main pipeline data. This can alternatively be accomplished using a mapper worker and the keep input option.

  3. Start a new pipeline instance if none exists:

    # Look up the pipeline by name and check for a pending instance.
    pipeline = env['zbs.pipeline'].search([('name', '=', '12 Personen Sync')])
    instance = env['zbs.instance'].search([
        ('pipeline_id', '=', pipeline.id),
        ('state', '=', 'pending'),
    ])
    # Only start a new run if none is already waiting.
    if not instance:
        pipeline.start(data)
    data

Configure it

Open your parent pipeline and add a Python Transformer.

Add any additional code that is required and trigger the sub-pipeline by following one of these examples:

# Option 1. Execute a complete run immediately:
env['zbs.pipeline'].start_and_execute("pipelinename", data={...})
# --> any errors are raised immediately

# Option 2. Schedule a background run:
env['zbs.pipeline'].start_in_background("pipelinename", data={...})
# --> job is picked up by a cron job and started

# Option 3. Start a run on an already-resolved pipeline record:
pipeline.start(data)

Manual Triggers

7. UI Buttons

Adding UI buttons to certain form views of a given model can be a convenient way to manually trigger a pipeline from the UI.

Depending on how a button is configured, it can either bring data with it from the click or not. The data becomes visible as input data to the triggered pipeline’s start worker.

Examples

  1. A UI button is added to the products view, to manually trigger a data fetch when necessary, since it is a long-running process.

Configure it

Navigate to the menu item User-interface>Buttons to create a new button trigger.

Select a pipeline to trigger and a model view to add the button to.

Button States:

This refers to the button currently being configured, and allows the button to exist in a draft or hidden state while it is being developed.

  1. draft: invisible in the UI, still being developed

  2. done: invisible in the UI, development finished

  3. open: visible in the UI

Once the button is ready, click Update to update the existing view; the button will then become visible and usable there.

Note: Buttons can also be added to the UI manually.

Advanced

Invisible (v17+)/Attrs invisible (v16):

Allows for customisation of the invisible attribute on the button.

  • zSYNC v17+ expects zSYNC code. Eg. [('state', '=', 'draft')]

  • zSYNC v16 expects an object of the form { "required": False, "invisible": False} with customisable values for both the required and invisible attributes

After success:

This code is executed only when the pre-flight check returns True and a synchronous pipeline has completed without any errors.

Eg. Show a success notification

notification("Success", "info", next=window_close)

Warning for notification triggers:

If the pre-flight check returns a string, it is displayed as a notification, and it is then not possible to return a further notification here, only a window close. As a workaround, change the pre-flight result to simply "True" and provide the messages here.

Pre-flight check:

This field allows you to add a check for whether the pipeline may be started or not, once the button is clicked.

The result should be a boolean value, with False indicating that the pipeline must not be started and True indicating that it should start. A message can be returned alongside False to explain why, as in the example below.

Eg. Check for an existing pipeline instance:

if env['zbs.instance'].search_count([
    ('pipeline_id', '=', 3),
    ('state', 'in', ['pending', 'running']),
]):
    return False, 'Please wait until ... finished. Try again later'
return True

Additional initial data for pipeline:

Here, additional data can be defined and passed to the pipeline worker.
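
For example, a small dict like the following (keys are purely hypothetical) would arrive in the pipeline’s start worker alongside the record data:

{
    'source': 'ui_button',
    'reason': 'manual refresh',
}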

8. File uploads

Triggering a pipeline via a file upload allows data received from customers and co-workers to be processed via a pipeline rather than manually.

Examples

  1. Customer X always sends their CSV data with the wrong column names. Uploading those files triggers a pipeline which renames the columns to standard namings.

  2. Customer Y always sends their files in the wrong language. Uploading those files triggers a pipeline which translates them to a language useable by your team.

Configure it

Navigate to the menu item User-interface>Buttons to create a new file upload trigger.

Select a pipeline to trigger and a parent menu to add the new menu item under.

Here you can configure menu items which should be added, actions which should be triggered, and the required user permissions for both.

Note: Files are ingested as base64 strings
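
A minimal sketch of a first pipeline step that decodes an uploaded CSV, assuming the file arrives in the pipeline data under a file key and that the worker environment allows these standard-library imports:

import base64
import csv
import io

# Decode the base64 upload and parse it as CSV.
raw = base64.b64decode(data['file'])
rows = list(csv.reader(io.StringIO(raw.decode('utf-8'))))
header, body = rows[0], rows[1:]
# e.g. rename Customer X's non-standard columns here, then pass the rows on.
data['rows'] = body
data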

9. Start in Background

The UI button Start in Background starts a queue job with the selected pipeline.

Examples

  1. A pipeline rarely needs to be run, so users trigger it manually as required.

Configure it

There’s no configuration needed for this type of trigger. Open the pipeline and click Start in Background.

10. Test Runs

Starting a pipeline in test mode is helpful for development and debugging. Read how to do it here: Debugging a Pipeline