1. Create an empty pipeline
In your running Odoo instance, navigate to the Zebroo Sync app. By default, the Pipeline list view is opened. Click NEW to create a new zSYNC pipeline.
If the Zebroo Sync app is not available in your Odoo instance, you may need to enable it. See here. If it's enabled but you can't access it, make sure you have the appropriate access rights.
Name your pipeline and you’re ready to go:
2. Add workers
Workers are the functional components which are chained together to create a pipeline. They fetch data, map data, run code, and so on.
In a very simple example we can grab data from Odoo, transform it as needed and then export it into a file, as sketched below.
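To picture the flow before clicking through the UI, here is a minimal, purely illustrative Python sketch of such a chain. The sample data, function names and hard-coded values are assumptions for illustration only, not zSYNC's internal API:

```python
# Illustrative sketch of a grab -> map -> dump chain (not zSYNC's internal API).

def grab():
    """Fetch raw records from a source (here: hard-coded sample data)."""
    return [{"name": "Desk", "lst_price": 120.0},
            {"name": "Chair", "lst_price": 45.5}]

def transform(records):
    """Map source fields to the desired output columns."""
    return [{"productname": r["name"], "price": r["lst_price"]} for r in records]

def dump(rows):
    """Export the transformed rows (here: simply print them)."""
    for row in rows:
        print(row)

dump(transform(grab()))
```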
In your pipeline’s Flow tab (selected by default), click ADD WORKER.
You’ll see that the start worker (technical name: zbs.start) and the end worker (technical name: zbs.end) are added to your pipeline automatically.
Next, search for grabbers. In this example we’ll select the Odoo Grabber. Click OK.
Similarly, add a mapper and a file dumper:
3. Configure workers
Each zSYNC worker needs to be configured individually. For our simple example, however, we won’t modify the configuration of the start and end workers.
Grabber
The grabber needs to know how to authenticate to get the data and which data to fetch. Click on it to manage its configuration.
In this example we want all products that have changed since the last time this pipeline was executed. To authenticate with Odoo, choose Local Odoo, i.e. the instance zSYNC is installed on. Then specify the data model to search, product.product, and a domain (filter) to be applied: [('write_date','>=',last_execution_date)]. Switch the Format-Type to Browsable Data.
Executing this for the first time will fetch all products from the running Odoo instance, since there is no previous execution date to filter against yet.
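For readers familiar with Odoo, the grabber's query is roughly equivalent to the following odoo shell snippet. This is an illustration only: `env` is the environment provided by the shell, and the hard-coded date stands in for the `last_execution_date` value that zSYNC fills in when it evaluates the domain:

```python
# Rough odoo-shell equivalent of what the grabber fetches (illustration only).
last_execution_date = '2024-01-01 00:00:00'  # placeholder for zSYNC's stored value

domain = [('write_date', '>=', last_execution_date)]
products = env['product.product'].search(domain)  # browsable recordset
print(products.mapped('name'))
```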
Mapper
Next, the mapper needs to know which fields you want to export and which field names to use. In this example we’ll take the name of each product and map it to the output column productname, and take the field lst_price from the Odoo data and map it to the output column price.
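Expressed as plain Python purely for illustration (the real mapping is configured in the mapper's form view; the dict and helper function below are assumptions), the mapping amounts to:

```python
# Illustration only: the configured field mapping as a plain dict,
# {source field on product.product: output column name}.
field_map = {
    "name": "productname",
    "lst_price": "price",
}

def map_record(record):
    """Apply the field map to one grabbed record."""
    return {out_col: record[src] for src, out_col in field_map.items()}

print(map_record({"name": "Desk", "lst_price": 120.0}))
# -> {'productname': 'Desk', 'price': 120.0}
```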
Dumper
Finally, the dumper needs to know where to put the data and how to authenticate with the destination. In our example we’re going to dump our data to a local file. There are also dumpers which can write to SQL databases, data warehouses like Snowflake, Odoo instances, HTTP APIs and more.
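Conceptually, a file dumper then writes the mapped rows out to the configured target, much like the following sketch. The file path, CSV format and column order here are assumptions for illustration, not the dumper's actual settings:

```python
# Illustration only: write mapped rows to a local CSV file.
import csv

rows = [{"productname": "Desk", "price": 120.0},
        {"productname": "Chair", "price": 45.5}]

with open("/tmp/products.csv", "w", newline="") as f:  # path is an assumption
    writer = csv.DictWriter(f, fieldnames=["productname", "price"])
    writer.writeheader()
    writer.writerows(rows)
```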
And that’s all for the basic pipeline!
We have no cron job yet, this is a trivial case, and the data didn't need much transformation. In essence, though, this is everything that needs to be done for any interface. Every layer on top exists to satisfy the specifics of external systems or the particularities of a workflow. Any flat import and export of data has these basics in common.
What next?
Dive deeper into Workers to learn more about each pipeline step.