Zeebe Logo

Zeebe is a cloud-native workflow engine for microservices orchestration

  • Define workflows graphically in BPMN 2.0
  • Choose any gRPC-supported programming language
  • Deploy with Docker and Kubernetes (in the cloud or on-premises)
  • Build workflows that react to events from Apache Kafka and other messaging platforms
  • Scale horizontally to handle very high throughput
  • Achieve fault tolerance (no relational database required)
  • Export workflow data for monitoring and analysis
  • Engage with an active community

First Steps

Introduction to Zeebe

This section will help you understand what Zeebe is, how to get started with Zeebe, and how to get in touch with the community and the maintainers to ask questions.

  • What is Zeebe?: This writeup is the best place to start if you're brand new to Zeebe and microservices orchestration.

  • Install: Download a Zeebe distribution or use Docker to run Zeebe.

  • Quickstart: The Quickstart demonstrates the main concepts of Zeebe using only the command line client (no code writing necessary). After the Quickstart:

    • As a Java user, you should have a look at the Java client library. The Get Started Java tutorial guides you through the first steps.

    • As a Go user, you should look into the Go client library. The Get Started Go tutorial guides you through the first steps.

  • Community Contributions: Find a list of contributions from the Zeebe community and learn how to contribute.

  • Get Help & Get Involved: Ask a question via one of Zeebe's public support channels or report an issue.

What is Zeebe?

Zeebe is a workflow engine for microservices orchestration. Zeebe ensures that, once started, flows are always carried out fully, retrying steps in case of failures. Along the way, Zeebe maintains a complete audit log so that the progress of flows can be monitored. Zeebe is fault tolerant and scales seamlessly to handle growing transaction volumes.

Below, we'll provide a brief overview of Zeebe. For more detail, we recommend the "What is Zeebe?" writeup on the main Zeebe site.

What problem does Zeebe solve, and how?

A company’s end-to-end workflows almost always span more than one microservice. In an e-commerce company, for example, a “customer order” workflow might involve a payments microservice, an inventory microservice, a shipping microservice, and more:

order-process

These cross-microservice workflows are mission critical, yet the workflows themselves are rarely modeled and monitored. Often, the flow of events through different microservices is expressed only implicitly in code.

If that’s the case, how can we ensure visibility of workflows and provide status and error monitoring? How do we guarantee that flows always complete, even if single microservices fail?

Zeebe gives you:

  1. Visibility into the state of a company’s end-to-end workflows, including the number of in-flight workflows, average workflow duration, current errors within a workflow, and more.
  2. Workflow orchestration based on the current state of a workflow; Zeebe publishes “commands” as events that can be consumed by one or more microservices, ensuring that workflows progress according to their definition.
  3. Monitoring for timeouts or other workflow errors with the ability to configure error-handling paths such as stateful retries or escalation to teams that can resolve an issue manually.

Zeebe was designed to operate at very large scale, and to achieve this, it provides:

  • Horizontal scalability and no dependence on an external database; Zeebe writes data directly to the filesystem on the same servers where it’s deployed. Zeebe makes it simple to distribute processing across a cluster of machines to deliver high throughput.
  • Fault tolerance via an easy-to-configure replication mechanism, ensuring that Zeebe can recover from machine or software failure with no data loss and minimal downtime. This ensures that the system as a whole remains available without requiring manual action.
  • A message-driven architecture where all workflow-relevant events are written to an append-only log, providing an audit trail and a history of the state of a workflow.
  • A publish-subscribe interaction model, which enables microservices that connect to Zeebe to maintain a high degree of control and autonomy, including control over processing rates. These properties make Zeebe resilient, scalable, and reactive.
  • Visual workflows modeled in ISO-standard BPMN 2.0 so that technical and non-technical stakeholders can collaborate on workflow design in a widely-used modeling language.
  • A language-agnostic client model, making it possible to build a Zeebe client in just about any programming language that an organization uses to build microservices.
  • Operational ease-of-use as a self-contained and self-sufficient system. Zeebe does not require a cluster coordinator such as ZooKeeper. Because all nodes in a Zeebe cluster are equal, it's relatively easy to scale, and it plays nicely with modern resource managers and container orchestrators such as Docker, Kubernetes, and DC/OS. Zeebe's CLI (Command Line Interface) allows you to script and automate management and operations tasks.

You can learn more about these technical concepts in the "Basics" section of the documentation.

Zeebe is simple and lightweight

Most existing workflow engines offer more features than Zeebe. While having access to lots of features is generally a good thing, it can come at a cost of increased complexity and degraded performance.

Zeebe is 100% focused on providing a compact, robust, and scalable solution for orchestration of workflows. Rather than supporting a broad spectrum of features, its goal is to excel within this scope.

In addition, Zeebe works well with other systems. For example, Zeebe provides a simple event stream API that makes it easy to stream all internal data into another system such as Elastic Search for indexing and querying.

Deciding if Zeebe is right for you

Note that Zeebe is currently in "developer preview", meaning that it's not yet ready for production and is under heavy development. See the roadmap for more details.

Your applications might not need the scalability and performance features provided by Zeebe. Or, you might need a mature set of features around BPM (Business Process Management), which Zeebe does not yet offer. In such scenarios, a workflow automation platform such as Camunda BPM could be a better fit.

Install

This page guides you through the initial installation of a Zeebe broker for development purposes. In case you are looking for more detailed information on how to set up and operate Zeebe, make sure to check out the Operations Guide as well.

There are different ways to install Zeebe:

Using Docker

The easiest way to try Zeebe is with Docker, which provides a consistent environment; we recommend it for development.

Prerequisites

  • Operating System:
    • Linux
    • Windows/MacOS (development only, not supported for production)
  • Docker

Docker configurations for docker-compose

The easiest way to try Zeebe is with the official docker-compose repository. It lets you start complex configurations with a single command and inspect the details of how they are configured when you are ready to delve into that level.

Docker configurations for starting a single Zeebe broker using docker-compose, optionally with Operate and Simple Monitor, are available in the zeebe-docker-compose repository. Further instructions for using these configurations are in the README.md in that repository.

Using Docker without docker-compose

You can run Zeebe with Docker:

docker run --name zeebe -p 26500:26500 camunda/zeebe:latest

Exposed Ports

  • 26500: Gateway API
  • 26501: Command API (gateway-to-broker)
  • 26502: Internal API (broker-to-broker)

Volumes

The default data volume is under /usr/local/zeebe/data. It contains all data that should be persisted.

Configuration

The Zeebe configuration is located at /usr/local/zeebe/conf/zeebe.cfg.toml. The logging configuration is located at /usr/local/zeebe/conf/log4j2.xml.

The configuration of the docker image can also be changed by using environment variables.

Available environment variables:

  • ZEEBE_LOG_LEVEL: Sets the log level of the Zeebe Logger (default: info).
  • ZEEBE_HOST: Sets the host address to bind to instead of the IP of the container.
  • BOOTSTRAP: Sets the replication factor of the internal-system partition.
  • ZEEBE_CONTACT_POINTS: Sets the contact points of other brokers in a cluster setup.
  • DEPLOY_ON_KUBERNETES: If set to true, it applies some configuration changes in order to run Zeebe in a Kubernetes environment.

Mac and Windows users

Note: On systems which use a VM to run Docker containers, such as Mac and Windows, the VM needs at least 4GB of memory; otherwise, Zeebe might fail to start with an error similar to:

Exception in thread "actor-runner-service-container" java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:694)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
        at io.zeebe.util.allocation.DirectBufferAllocator.allocate(DirectBufferAllocator.java:28)
        at io.zeebe.util.allocation.BufferAllocators.allocateDirect(BufferAllocators.java:26)
        at io.zeebe.dispatcher.DispatcherBuilder.initAllocatedBuffer(DispatcherBuilder.java:266)
        at io.zeebe.dispatcher.DispatcherBuilder.build(DispatcherBuilder.java:198)
        at io.zeebe.broker.services.DispatcherService.start(DispatcherService.java:61)
        at io.zeebe.servicecontainer.impl.ServiceController$InvokeStartState.doWork(ServiceController.java:269)
        at io.zeebe.servicecontainer.impl.ServiceController.doWork(ServiceController.java:138)
        at io.zeebe.servicecontainer.impl.ServiceContainerImpl.doWork(ServiceContainerImpl.java:110)
        at io.zeebe.util.actor.ActorRunner.tryRunActor(ActorRunner.java:165)
        at io.zeebe.util.actor.ActorRunner.runActor(ActorRunner.java:145)
        at io.zeebe.util.actor.ActorRunner.doWork(ActorRunner.java:114)
        at io.zeebe.util.actor.ActorRunner.run(ActorRunner.java:71)
        at java.lang.Thread.run(Thread.java:748)

If you are using Docker with the default Moby VM, you can adjust the amount of memory available to the VM through the Docker preferences. Right-click on the Docker icon in the System Tray to access preferences.

If you use a Docker setup with docker-machine and your default VM does not have 4GB of memory, you can create a new one with the following command:

docker-machine create --driver virtualbox --virtualbox-memory 4000 zeebe

Verify that the Docker Machine is running correctly:

docker-machine ls
NAME        ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
zeebe     *        virtualbox   Running   tcp://192.168.99.100:2376           v17.03.1-ce

Configure your terminal:

eval $(docker-machine env zeebe)

Then run Zeebe:

docker run --rm -p 26500:26500 camunda/zeebe:latest

To get the IP address of Zeebe:

docker-machine ip zeebe
192.168.99.100

Verify that you can connect to Zeebe:

telnet 192.168.99.100 26500

Download a distribution

You can always download the latest Zeebe release from the Github release page.

Prerequisites

  • Operating System:
    • Linux
    • Windows/MacOS (development only, not supported for production)
  • Java Virtual Machine:
    • Oracle Hotspot v1.8
    • Open JDK v1.8

Once you have downloaded a distribution, extract it into a folder of your choice. To extract the Zeebe distribution and start the broker, Linux users can type:

tar -xzf zeebe-distribution-X.Y.Z.tar.gz
cd zeebe-broker-X.Y.Z/
./bin/broker

Windows users can download the .zip package and extract it using their favorite unzip tool. They can then open the extracted folder, navigate to the bin folder, and start the broker by double-clicking the broker.bat file.

Once the Zeebe broker has started, it should produce the following output:

23:39:13.167 [] [main] INFO  io.zeebe.util.config - Reading configuration for class class io.zeebe.broker.system.configuration.BrokerCfg from file conf/zeebe.cfg.toml
23:39:13.246 [] [main] INFO  io.zeebe.broker.system - Scheduler configuration: Threads{cpu-bound: 2, io-bound: 2}.
23:39:13.270 [] [main] INFO  io.zeebe.broker.system - Version: X.Y.Z
23:39:13.273 [] [main] INFO  io.zeebe.broker.system - Starting broker with configuration {

Quickstart

This tutorial helps you get to know the main concepts of Zeebe without writing a single line of code.

  1. Download the Zeebe distribution
  2. Start the Zeebe broker
  3. Deploy a workflow
  4. Create a workflow instance
  5. Complete a workflow instance
  6. Next steps

Note: Some command examples might not work on Windows if you use cmd or Powershell. For Windows users we recommend using a bash-like shell, e.g. Git Bash, Cygwin, or MinGW, for this guide.

Step 1: Download the Zeebe distribution

You can download the latest distribution from the Zeebe release page.

Extract the archive and enter the Zeebe directory.

tar -xzvf zeebe-distribution-X.Y.Z.tar.gz
cd zeebe-broker-X.Y.Z/

Inside the Zeebe directory you will find multiple directories.

tree -d
.
├── bin     - Binaries and start scripts of the distribution
├── conf    - Zeebe and logging configuration
└── lib     - Shared java libraries

Step 2: Start the Zeebe broker

To start a Zeebe broker use the broker or broker.bat file located in the bin/ folder.

./bin/broker
23:39:13.167 [] [main] INFO  io.zeebe.util.config - Reading configuration for class class io.zeebe.broker.system.configuration.BrokerCfg from file conf/zeebe.cfg.toml
23:39:13.246 [] [main] INFO  io.zeebe.broker.system - Scheduler configuration: Threads{cpu-bound: 2, io-bound: 2}.
23:39:13.270 [] [main] INFO  io.zeebe.broker.system - Version: X.Y.Z
23:39:13.273 [] [main] INFO  io.zeebe.broker.system - Starting broker with configuration {

You will see some output which contains the version of the broker and configuration parameters like directory locations and API socket addresses.

To continue this guide, open another terminal to execute commands using the Zeebe CLI, zbctl.

We can now check the status of the Zeebe broker.

./bin/zbctl status
Cluster size: 1
Partitions count: 1
Replication factor: 1
Brokers:
  Broker 0 - 0.0.0.0:26501
    Partition 1 : Leader

Step 3: Deploy a workflow

A workflow is used to orchestrate loosely coupled job workers and the flow of data between them.

In this guide we will use an example process order-process.bpmn. You can download it with the following link: order-process.bpmn.

order-process

The process describes a sequential flow of three tasks: Collect Money, Fetch Items, and Ship Parcel. If you open the order-process.bpmn file in a text editor, you will see that every task has a type attribute defined in the XML, which is later used as the job type.

<!-- [...] -->
<bpmn:serviceTask id="collect-money" name="Collect Money">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="payment-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->
<bpmn:serviceTask id="fetch-items" name="Fetch Items">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="inventory-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->
<bpmn:serviceTask id="ship-parcel" name="Ship Parcel">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="shipment-service" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
<!-- [...] -->

To complete an instance of this workflow we would need to activate and complete one job for each of the types payment-service, inventory-service and shipment-service.

But first let's deploy the workflow to the Zeebe broker.

./bin/zbctl deploy order-process.bpmn
{
  "key": 2251799813685250,
  "workflows": [
    {
      "bpmnProcessId": "order-process",
      "version": 1,
      "workflowKey": 2251799813685249,
      "resourceName": "order-process.bpmn"
    }
  ]
}

Step 4: Create a workflow instance

After the workflow is deployed, we can create new instances of it. Every instance of a workflow is a single execution of the workflow. To create a new instance, we have to specify the process ID from the BPMN file; in our case the ID is order-process, as defined in order-process.bpmn:

<bpmn:process id="order-process" isExecutable="true">

Every instance of a workflow normally processes some kind of data. We can specify the initial data of the instance as variables when we start the instance.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the variables differently.

  • cmd: "{\"orderId\": 1234}"
  • Powershell: '{"\"orderId"\": 1234}'

./bin/zbctl create instance order-process --variables '{"orderId": 1234}'
{
  "workflowKey": 2251799813685249,
  "bpmnProcessId": "order-process",
  "version": 1,
  "workflowInstanceKey": 2251799813685251
}

Step 5: Complete a workflow instance

To complete the instance, all three tasks have to be executed. In Zeebe, a job is created for every task which is reached during workflow instance execution. To finish a job, and thereby the corresponding task, it has to be activated and completed by a job worker. A job worker is a long-living process which repeatedly tries to activate jobs for a given job type and completes them after executing its business logic. zbctl also provides a command to spawn simple job workers using an external command or script. For every job, the job worker receives the workflow instance variables as a JSON object on stdin and, if it handled the job successfully, has to return its result as a JSON object on stdout.

In this example we use the Unix command cat, which simply echoes whatever it receives on stdin. To complete a workflow instance, we now have to create a job worker for each of the three task types from the workflow definition: payment-service, inventory-service, and shipment-service.

Note: For Windows users, this command does not work with cmd as the cat command does not exist. We recommend using Powershell or a bash-like shell to execute this command.

./bin/zbctl create worker payment-service --handler cat &
./bin/zbctl create worker inventory-service --handler cat &
./bin/zbctl create worker shipment-service --handler cat &
2019/06/06 20:54:36 Handler completed job 2251799813685257 with variables
{"orderId":1234}
2019/06/06 20:54:36 Activated job 2251799813685264 with variables
{"orderId":1234}
2019/06/06 20:54:36 Handler completed job 2251799813685264 with variables
{"orderId":1234}
2019/06/06 20:54:36 Activated job 2251799813685271 with variables
{"orderId":1234}
2019/06/06 20:54:36 Handler completed job 2251799813685271 with variables
{"orderId":1234}

With the job workers running in the background, we can create more instances of our workflow and observe how the workers complete them.

./bin/zbctl create instance order-process --variables '{"orderId": 12345}'

To close all job workers, use the kill command to stop the background processes.

kill %1 %2 %3

If you want to visualize the state of the workflow instances you can start the Zeebe simple monitor.

Next steps

To continue working with Zeebe, we recommend becoming more familiar with Zeebe's basic concepts; see the Basics chapter of the documentation.

In the BPMN Workflows chapter you can find an introduction to creating workflows with BPMN, and the BPMN Modeler chapter shows you how to model them yourself.

The documentation also provides getting started guides for implementing job workers using Java or Go.

Community Contributions

Zeebe welcomes extensions and contributions from the community.

We use Awesome Zeebe as a place to keep track of Zeebe ecosystem contributions, such as...

  • Clients
  • Workers
  • Exporters
  • Applications

...along with other integrations such as Spring-Zeebe and the Apache Kafka connector.

If you built something for the Zeebe ecosystem, we encourage you to add it to Awesome Zeebe via pull request.

If you're interested in contributing to the main Zeebe repository (vs. creating an extension that lives in its own repository), be sure to start with the "Contributing to Zeebe" guide on GitHub.

If you have questions about contributing, please let us know.

Go to Awesome Zeebe to see community extensions.

Get Help and Get Involved

We provide a few different public-facing Zeebe support and feedback channels so that users can ask questions, report problems, and make contributions.

Zeebe User Forum

The best place to ask questions about Zeebe and to troubleshoot issues is the Zeebe user forum.

The Zeebe team monitors the forum closely, and we do our best to respond to all questions in a timely manner.

Go to the Zeebe user forum

Public Slack Group

There's a public Zeebe Slack group where you can ask one-off questions, share community contributions, and connect with other Zeebe users.

Join the Zeebe Slack group

Create An Issue in GitHub

Did you find a problem in Zeebe? Or do you have a suggestion for an improvement?

You can create an issue in the Zeebe GitHub project to let us know.

Go to Issues in the Zeebe GitHub repository

Community Contributions

We cover community contributions in a dedicated section of the docs.

Read the Zeebe docs entry about community contributions

Overview

This section provides an overview of Zeebe's core concepts. Understanding them helps to successfully build workflow applications.

Architecture

There are four main components in Zeebe's architecture: the client, the gateway, the broker, and the exporter.

zeebe-architecture

Client

Clients are libraries that you embed in an application (e.g. a microservice that executes your business logic) to connect to a Zeebe cluster. Clients have two primary uses:

  • Carrying out business logic (starting workflow instances, publishing messages, working on tasks)
  • Handling operational issues (updating workflow instance variables, resolving incidents)

More about Zeebe clients:

  • Clients connect to the Zeebe gateway via gRPC, which uses http/2-based transport. To learn more about gRPC in Zeebe, check out the gRPC section of the docs.
  • The Zeebe project includes officially-supported Java and Go clients, and gRPC makes it possible to generate clients in a range of different programming languages. Community clients have been created in other languages, including C#, Ruby, and JavaScript.
  • Client applications can be scaled up and down completely separately from Zeebe; the Zeebe brokers do not execute any business logic.

Gateway

The gateway, which proxies requests to brokers, serves as a single entry point to a Zeebe cluster.

The gateway is stateless and sessionless, and gateways can be added as necessary for load balancing and high availability.

Broker

The Zeebe broker is the distributed workflow engine that keeps state of active workflow instances.

Brokers can be partitioned for horizontal scalability and replicated for fault tolerance. A Zeebe deployment will often consist of more than one broker.

It's important to note that no application business logic lives in the broker. Its only responsibilities are:

  1. Storing and managing the state of active workflow instances

  2. Distributing work items to clients

Brokers form a peer-to-peer network in which there is no single point of failure. This is possible because all brokers perform the same kind of tasks and the responsibilities of an unavailable broker are transparently reassigned in the network.

Exporter

The exporter system provides an event stream of state changes within Zeebe. This data has many potential uses, including but not limited to:

  • Monitoring the current state of running workflow instances

  • Analysis of historic workflow data for auditing, business intelligence, etc

  • Tracking incidents created by Zeebe

The exporter includes a simple API that you can use to stream data into a storage system of your choice. Zeebe includes an out-of-the-box Elasticsearch exporter, and other community-contributed exporters are also available.

Workflows

Workflows are flowchart-like blueprints that define the orchestration of tasks. Every task represents a piece of business logic such that the ordered execution produces a meaningful result.

A job worker is your implementation of the business logic required to complete a task. A job worker must embed a Zeebe client library to communicate with the broker, but otherwise, there are no restrictions on its implementation. You can choose to write a worker as a microservice, but also as part of a classical three-tier application, as a (lambda) function, via command line tools, etc.

Running a workflow then requires two steps: submitting the workflow to Zeebe and creating job workers that can request jobs from Zeebe and complete them.

Sequences

The simplest kind of workflow is an ordered sequence of tasks. Whenever workflow execution reaches a task, Zeebe creates a job that can be requested and completed by a job worker.

workflow-sequence

You can think of Zeebe's workflow orchestration as a state machine. A workflow instance reaches a task, and Zeebe creates a job that can be requested by a worker. Zeebe then waits for the worker to request a job and complete the work. Once the work is completed, the flow continues to the next step. If the worker fails to complete the work, the workflow remains at the current step, and the job could be retried until it's successfully completed.
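The step-by-step behavior above can be sketched as a tiny state machine. This is illustrative Python only; the class and method names are invented for the example and are not part of any Zeebe API.

```python
# Illustrative sketch of Zeebe's task-by-task progression (not a Zeebe API).
class WorkflowInstance:
    def __init__(self, tasks):
        self.tasks = tasks      # ordered sequence of task types
        self.position = 0       # index of the current task

    @property
    def current_task(self):
        return self.tasks[self.position] if self.position < len(self.tasks) else None

    def complete_current(self):
        # A worker completed the job; the flow moves to the next step.
        self.position += 1

    def fail_current(self):
        # The job failed; the instance stays at the current step so the
        # job can be retried until it eventually succeeds.
        pass

instance = WorkflowInstance(["collect-money", "fetch-items", "ship-parcel"])
instance.complete_current()   # payment succeeded
instance.fail_current()       # fetching items failed: position unchanged
print(instance.current_task)  # fetch-items
```

Note how a failure leaves the instance where it is: the job stays open for retry, which is exactly the "workflow remains at the current step" behavior described above.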

Data Flow

As Zeebe progresses from one task to the next in a workflow, it can move custom data in the form of variables. Variables are key-value-pairs and part of the workflow instance.

data-flow

Every job worker can read the variables and modify them when completing a job so that data can be shared between different tasks in a workflow.
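As an illustration of this read-modify-merge cycle, here is a minimal Python sketch. The worker function and merge helper are invented names for the example, not a Zeebe API.

```python
# Illustrative sketch of variable flow between job workers (not a Zeebe API).
def payment_worker(variables):
    # Reads orderId and adds a result for downstream tasks to consume.
    return {"paymentId": "pay-%s" % variables["orderId"]}

def merge(instance_variables, worker_result):
    # On job completion, the worker's result is merged into the
    # workflow instance variables.
    merged = dict(instance_variables)
    merged.update(worker_result)
    return merged

variables = {"orderId": 1234}
variables = merge(variables, payment_worker(variables))
print(variables)  # {'orderId': 1234, 'paymentId': 'pay-1234'}
```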

Data-based Conditions

Some workflows do not always execute the same tasks but need to choose different tasks based on variables and conditions:

data-conditions

The diamond shape with the "X" in the middle is an element indicating that the workflow decides to take one of many paths.

Events

Events represent things that happen. A workflow can react to events (catching event) and can emit events (throwing event). For example:

workflow

There are different types of events such as message or timer.

Fork / Join Concurrency

In many cases, it is also useful to perform multiple tasks in parallel. This can be achieved with Fork / Join concurrency:

data-conditions

The diamond shape with the "+" marker means that all outgoing paths are activated and all incoming paths are merged.

BPMN 2.0

Zeebe uses BPMN 2.0 for representing workflows. BPMN is an industry standard which is widely supported by different vendors and implementations. Using BPMN ensures that workflows can be interchanged between Zeebe and other workflow systems.

YAML Workflows

In addition to BPMN 2.0, Zeebe supports a YAML workflow format. It can be used to quickly write simple workflows in text. Unlike BPMN, it has no visual representation and is not standardized. Zeebe transforms YAML to BPMN on submission.
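As a rough sketch of the format (the field names below are assumptions; verify them against the YAML Workflows reference before use), the order process from the Quickstart might look like:

```yaml
# Hypothetical YAML version of the order process; check the
# YAML Workflows reference for the authoritative field names.
name: order-process

tasks:
  - id: collect-money
    type: payment-service

  - id: fetch-items
    type: inventory-service

  - id: ship-parcel
    type: shipment-service
```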

BPMN Modeler

Zeebe provides a free and open-source BPMN modeling tool to create BPMN diagrams and configure their technical properties. The modeler is a desktop application based on the bpmn.io open source project.

Zeebe Modeler can be downloaded from GitHub.

Job Workers

A job worker is a component capable of performing a particular step in a workflow.

What is a Job?

A job is a work item in a workflow. For example:

  • Processing a payment
  • Generating a PDF document
  • Updating customer data in a backend system

A job has the following properties:

  • Type: Describes the work item and is defined in each task in the workflow. The type is referenced by workers to request the jobs they are able to perform.
  • Variables: The contextual/business data of the workflow instance that is required by the worker to do its work.
  • Custom Headers: Additional static metadata defined in the workflow. Mostly used to configure a worker which is used for more than one workflow step.

Requesting Jobs from the Broker

Job workers request jobs of a certain type from the broker on a regular interval (i.e. polling). This interval and the number of jobs requested are configurable in the Zeebe client.

If one or more jobs of the requested type are available, the broker will stream jobs to the worker. Upon receiving jobs, a worker performs them and sends back a complete or fail message for each job depending on whether the job could be completed successfully or not.

For example, the following workflow might generate three different types of jobs: process-payment, fetch-items, and ship-parcel:

order-workflow-model

Three different job workers, one for each job type, could request jobs from Zeebe:

zeebe-job-workers-requesting-jobs

Many workers can request the same job type in order to scale up processing. In this scenario, the broker ensures that each job is sent to only one of the workers.

On requesting jobs, the following properties can be set:

  • Worker: The identifier of the worker. Used for auditing purposes.
  • Timeout: The time a job is assigned to the worker. If a job is not completed within this time then it can be requested again from a worker.
  • MaxJobsToActivate: The maximum number of jobs which should be activated by this request.
  • FetchVariables: A list of variable names that are required. If the list is empty, all variables of the workflow instance are requested.
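To illustrate how Timeout and MaxJobsToActivate interact, here is a conceptual Python sketch of a job queue with lease-style activation. This is not how the Zeebe broker is implemented; all names are invented for the example.

```python
# Conceptual sketch of job activation with a lease timeout (not Zeebe internals).
class JobQueue:
    def __init__(self):
        self.jobs = {}  # job key -> deadline of the current lease (None if free)

    def add(self, key):
        self.jobs[key] = None

    def activate(self, max_jobs, timeout, now):
        # Hand out at most max_jobs jobs whose lease is free or expired.
        activated = []
        for key, deadline in self.jobs.items():
            if len(activated) == max_jobs:
                break
            if deadline is None or deadline <= now:
                self.jobs[key] = now + timeout  # start a new lease
                activated.append(key)
        return activated

    def complete(self, key):
        del self.jobs[key]

queue = JobQueue()
for key in (1, 2, 3):
    queue.add(key)

print(queue.activate(max_jobs=2, timeout=30, now=0))   # [1, 2]
queue.complete(2)
print(queue.activate(max_jobs=2, timeout=30, now=5))   # [3]  (1 is still leased)
print(queue.activate(max_jobs=2, timeout=30, now=31))  # [1]  (lease on 1 expired)
```

The last call shows the documented behavior: a job that was not completed within its timeout becomes available to be requested again.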

Job Queueing

Zeebe decouples creation of jobs from performing the work on them. It is always possible to create jobs at the highest possible rate, regardless of whether or not there's a worker available to work on them. This is possible because Zeebe queues jobs until workers request them. If no job worker is currently requesting jobs, jobs remain queued. Because workers request jobs from the broker, the workers have control over the rate at which they take on new jobs.

This allows the broker to handle bursts of traffic and effectively act as a buffer in front of the job workers.
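A minimal sketch of this buffering behavior, with a plain in-memory queue standing in for the broker:

```python
from collections import deque

# Illustrative sketch of pull-based backpressure (not a Zeebe API).
queue = deque()

# Burst: jobs are created much faster than the worker consumes them.
for job in range(100):
    queue.append(job)

# The worker controls its own rate: it requests at most 10 jobs at a time,
# and the rest simply remain queued until the next request.
batch = [queue.popleft() for _ in range(min(10, len(queue)))]
print(len(batch), len(queue))  # 10 90
```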

Partitions

Note: If you have worked with Apache Kafka before, the concepts presented on this page will sound very familiar to you.

In Zeebe, all data is organized into partitions. A partition is a persistent stream of workflow-related events. In a cluster of brokers, partitions are distributed among the nodes, so a partition can be thought of as a shard. When you bootstrap a Zeebe broker, you can configure how many partitions you need.

Usage Examples

Whenever you deploy a workflow, you deploy it to the first partition. The workflow is then distributed to all partitions. On all partitions, this workflow receives the same key and version such that it can be consistently identified.

When you start an instance of a workflow, the client library will then route the request to one partition in which the workflow instance will be published. All subsequent processing of the workflow instance will happen in that partition.
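Conceptually, instance creation behaves like the following sketch. Round-robin assignment is an assumption made for illustration; the actual routing is handled internally by the gateway and client library.

```python
import itertools

# Illustrative sketch of routing new workflow instances to partitions.
# Round-robin is assumed here for simplicity; the real routing is an
# internal detail of the Zeebe gateway/client.
partition_count = 5
next_partition = itertools.cycle(range(partition_count))

def create_instance(bpmn_process_id):
    partition = next(next_partition)
    # All subsequent processing of this instance stays on this partition.
    return {"bpmnProcessId": bpmn_process_id, "partition": partition}

instances = [create_instance("order-process") for _ in range(7)]
print([i["partition"] for i in instances])  # [0, 1, 2, 3, 4, 0, 1]
```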

Scalability

Use partitions to scale your workflow processing. Partitions are dynamically distributed in a Zeebe cluster, and for each partition there is one leading broker at a time. This leader accepts requests and performs event processing for the partition. Let us assume you want to distribute workflow processing load over five machines: you can achieve that by bootstrapping five partitions.

Partition Data Layout

A partition is a persistent append-only event stream. Initially, a partition is empty. The first entry to be inserted takes the first position, the second entry takes the second position, and so forth. Each entry has a position in the partition which uniquely identifies it.

partition

Replication

For fault tolerance, data in a partition is replicated from the leader of the partition to its followers. Followers are other Zeebe Broker nodes that maintain a copy of the partition without performing event processing.

Recommendations

Choosing the number of partitions depends on the use case, workload and cluster setup. Here are some rules of thumb:

  • For testing and early development, start with a single partition. Note that Zeebe's workflow processing is highly optimized for efficiency, so a single partition can already handle high event loads.
  • With a single Zeebe Broker, a single partition is usually enough. However, if the node has many cores and the broker is configured to use them, more partitions can increase the total throughput (roughly 2 threads per partition).
  • Base your decisions on data. Simulate the expected workload, measure and compare the performance of different partition setups.

Protocols

Zeebe clients connect to brokers via a stateless gateway. Communication between client and gateway uses gRPC. The protocol is defined using Protocol Buffers v3 (proto3), and you can find it in the Zeebe repository.

What is gRPC?

gRPC was first developed by Google and is now an open-source project and part of the Cloud Native Computing Foundation. If you’re new to gRPC, the “What is gRPC” page on the project website provides a good introduction to it.

Why gRPC?

gRPC has many nice features that make it a good fit for Zeebe. It:

  • supports bi-directional streaming for opening a persistent connection and sending or receiving a stream of messages between client and server
  • uses the common HTTP/2 protocol by default
  • uses Protocol Buffers as an interface definition and data serialization mechanism–specifically, Zeebe uses proto3, which supports easy client generation in ten different programming languages

Supported clients

At the moment, Zeebe officially supports two gRPC clients: one in Java and one in Go.

If Zeebe does not provide an officially-supported client in your target language, you can read the official Quick Start page to find out how to create a very basic one.

You can find a list of existing clients in the Awesome Zeebe repository. Additionally, a blog post was published with a short tutorial on how to write a new client from scratch in Python.

Internal Processing

Internally, Zeebe is implemented as a collection of stream processors working on record streams (partitions). The stream processing model is used because it provides a unified approach to:

  • Command Protocol (Request-Response),
  • Record Export (Streaming),
  • Workflow Evaluation (Asynchronous Background Tasks)

Record export solves the history problem: The stream provides exactly the kind of exhaustive audit log that a workflow engine needs to produce.

State Machines

Zeebe manages stateful entities: Jobs, Workflows, etc. Internally, these entities are implemented as State Machines managed by a stream processor.

The concept of the state machine pattern is simple: An instance of a state machine is always in one of several logical states. From each state, a set of transitions defines the next possible states. Transitioning into a new state may produce outputs/side effects.

Let's look at the state machine for jobs. Simplified, it looks as follows:

partition

Every oval is a state. Every arrow is a state transition. Note how each state transition is only applicable in a specific state. For example, it is not possible to complete a job when it is in state CREATED.
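The diagram's idea can be sketched as a transition table. The state and command names below are illustrative, not Zeebe's exact job lifecycle:

```python
# Each key is a (current state, command) pair; a missing key means the
# command is not applicable in that state.
TRANSITIONS = {
    ("CREATED", "activate"): "ACTIVATED",
    ("ACTIVATED", "complete"): "COMPLETED",
    ("ACTIVATED", "fail"): "FAILED",
    ("FAILED", "retry"): "ACTIVATED",
}

def apply(state, command):
    next_state = TRANSITIONS.get((state, command))
    if next_state is None:
        raise ValueError(f"cannot {command} a job in state {state}")
    return next_state

state = apply("CREATED", "activate")
state = apply(state, "complete")
```

Trying to complete a job straight from CREATED raises an error, which mirrors the rule that each transition is only applicable in a specific state.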

Events and Commands

Every state change in a state machine is called an event. Zeebe publishes every event as a record on the stream.

State changes can be requested by submitting a command. A Zeebe broker receives commands from two sources:

  1. Clients send commands remotely. Examples: Deploying workflows, starting workflow instances, creating and completing jobs, etc.
  2. The broker itself generates commands. Examples: Locking a job for exclusive processing by a worker, etc.

Once received, a command is published as a record on the addressed stream.

Stateful Stream Processing

A stream processor reads the record stream sequentially and interprets the commands with respect to the addressed entity's lifecycle. More specifically, a stream processor repeatedly performs the following steps:

  1. Consume the next command from the stream.
  2. Determine whether the command is applicable based on the state lifecycle and the entity's current state.
  3. If the command is applicable: Apply it to the state machine. If the command was sent by a client, send a reply/response.
  4. If the command is not applicable: Reject it. If it was sent by a client, send an error reply/response.
  5. Publish an event reporting the entity's new state.

For example, processing the Create Job command produces the event Job Created.
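The five steps above can be sketched for a hypothetical job lifecycle (the lifecycle table and record shapes are illustrative, not Zeebe's internals):

```python
# Applicable commands per entity state (sketch).
LIFECYCLE = {
    ("NONE", "CreateJob"): "CREATED",
    ("CREATED", "ActivateJob"): "ACTIVATED",
    ("ACTIVATED", "CompleteJob"): "COMPLETED",
}

def process_command(entity_state, command, event_stream):
    next_state = LIFECYCLE.get((entity_state, command))
    if next_state is None:
        # step 4: not applicable -> reject, send error reply
        return entity_state, {"rejected": command}
    # steps 2-3: applicable -> apply to the state machine, send reply
    # step 5: publish an event reporting the new state
    event_stream.append(f"{command} -> {next_state}")
    return next_state, {"ok": command}

events = []
state = "NONE"
state, reply = process_command(state, "CreateJob", events)      # applied
state, reply = process_command(state, "CompleteJob", events)    # rejected
```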

Command Triggers

A state change which occurred in one entity can automatically trigger a command for another entity. Example: When a job is completed, the corresponding workflow instance shall continue with the next step. Thus, the Event Job Completed triggers the command Complete Activity.

Exporters

As Zeebe processes jobs and workflows, or performs internal maintenance (e.g. raft failover), it will generate an ordered stream of records:

record-stream

While the clients provide no way to inspect this stream directly, Zeebe can load and configure user code that can process each and every one of those records, in the form of an exporter.

An exporter provides a single entry point to process every record that is written on a stream.

With it, you can:

  • Persist historical data by pushing it to an external data warehouse
  • Export records to a visualization tool (e.g. zeebe-simple-monitor)
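The single-entry-point idea can be sketched as follows. The method and field names here are made up for illustration, not Zeebe's actual exporter SPI:

```python
class InMemoryExporter:
    """Sketch of an exporter: user code that sees every record
    written to the stream, one at a time."""

    def __init__(self):
        self.exported = []
        self.last_position = -1

    def export(self, record):
        # e.g. push the record to a data warehouse or monitoring tool
        self.exported.append(record)
        # acknowledging a position tells the broker it may safely
        # delete everything up to that point
        self.last_position = record["position"]

exporter = InMemoryExporter()
for pos, value in enumerate(["deployed", "created", "completed"]):
    exporter.export({"position": pos, "value": value})
```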

Zeebe will only load exporters which are configured through the main Zeebe TOML configuration file.

Once an exporter is configured, the next time Zeebe is started, the exporter will start receiving records. Note that it is only guaranteed to see records produced from that point on.

For more information, you can read the reference information page, and you can find a reference implementation in the form of the Zeebe-maintained ElasticSearch exporter.

Considerations

The main impact exporters have on a Zeebe cluster is that they remove the burden of persisting data indefinitely.

Once data is not needed by Zeebe itself anymore, it will query its exporters to know if it can be safely deleted, and if so, will permanently erase it, thereby reducing disk usage.

Note: if no exporters are configured at all, Zeebe will automatically erase data once it is no longer needed. If you need historical data, you must configure an exporter to stream records into your external data warehouse.

Performance

Zeebe is designed for performance, applying the following design principles:

  • Batching of I/O operations
  • Linear read/write data access patterns
  • Compact, cache-optimized data structures
  • Lock-free algorithms and actor concurrency (green threads model)
  • Broker is garbage-free in the hot/data path

In addition, using partitions as a scaling mechanism, Zeebe scales horizontally to achieve high throughput (without a relational database as a potential bottleneck).

Clustering

Zeebe can operate as a cluster of brokers, forming a peer-to-peer network. In this network, all brokers have the same responsibilities and there is no single point of failure.

cluster

Gossip Membership Protocol

Zeebe implements the Gossip protocol to know which brokers are currently part of the cluster.

The cluster is bootstrapped using a set of well-known bootstrap brokers, to which the other brokers can connect. To achieve this, each broker must have at least one bootstrap broker as its initial contact point in its configuration:

[network.gossip]
initialContactPoints = [ "node1.mycluster.loc:26502" ]

When a broker is connected to the cluster for the first time, it fetches the topology from the initial contact points and then starts gossiping with the other brokers. Brokers keep cluster topology locally across restarts.

Raft Consensus and Replication Protocol

To ensure fault tolerance, Zeebe replicates data across machines using the Raft protocol.

Data is divided into partitions (shards). Each partition has a number of replicas. Among the replica set, a leader is determined by the raft protocol which takes in requests and performs all the processing. All other brokers are passive followers. When the leader becomes unavailable, the followers transparently select a new leader.

Each broker in the cluster may be both leader and follower at the same time for different partitions. This way, client traffic is distributed evenly across all brokers.

cluster

Commit

Before a new record on a partition can be processed, it must be replicated to a quorum (typically majority) of followers. This procedure is called commit. Committing ensures that a record is durable even in case of complete data loss on an individual broker. The exact semantics of committing are defined by the raft protocol.
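The quorum rule itself can be written down in a few lines (a sketch of the majority check, not the raft implementation):

```python
def is_committed(replicated_on, cluster_size):
    """A record counts as committed once a quorum (a majority of the
    replicas, including the leader) has persisted it."""
    quorum = cluster_size // 2 + 1
    return replicated_on >= quorum

# With replication factor 3, the leader plus one follower is enough:
committed = is_committed(replicated_on=2, cluster_size=3)
```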

cluster

Zeebe Getting Started Tutorial

Welcome to the Zeebe Getting Started Tutorial.

We'll walk you through an end-to-end Zeebe example: building and configuring a workflow model in Zeebe Modeler, deploying the model, creating and working on instances using the Zeebe command-line interface, and then seeing what's going on in a tool called Operate.

  1. Tutorial Setup
  2. Create a Workflow
  3. Deploy a Workflow
  4. Create and Complete Instances
  5. Next Steps and Resources

If you have questions about Zeebe, we encourage you to visit the Zeebe user forum.

Go To Tutorial Setup >>

Tutorial Setup

Welcome to the Getting Started tutorial for Zeebe and Operate. In this tutorial, we'll walk you through how to...

  • Model a workflow using Zeebe Modeler
  • Deploy the workflow to Zeebe
  • Create workflow instances
  • Use workers to complete jobs created by those workflow instances
  • Correlate messages to workflow instances
  • Monitor what's happening and get detail about running workflow instances in Operate

If this is your first time working with Zeebe, we expect this entire guide to take you 30-45 minutes to complete.

If you're looking for a very fast (but less comprehensive) "first contact" experience, you might prefer the Quickstart.

The tutorial assumes you have some basic knowledge of what Zeebe is and what it's used for. If you're completely new to Zeebe, you might find it helpful to read through the "What is Zeebe?" docs article first.

Below are the components you'll use in the tutorial. The easiest way to run them is to download the Zeebe Modeler and use the operate docker-compose profile in the zeebe-docker-compose repository. Further instructions for using Zeebe with Docker can be found in the README.md file in that repository.

You can also download the full distributions for these components, instead of running them with Docker.

  1. Zeebe Modeler: A desktop modeling tool that we'll use to create and configure our workflow before we deploy it to Zeebe.
  2. Zeebe Distribution: The Zeebe distribution contains the workflow engine where we'll deploy our workflow model; the engine is also responsible for managing the state of active workflow instances. Included in the distro is the Zeebe CLI, which we'll use throughout the tutorial. Please use Zeebe 0.20.0.
  3. Camunda Operate: An operations tool for monitoring and troubleshooting live workflow instances in Zeebe. Operate is currently available for free and unrestricted non-production use.
  4. Elasticsearch 6.8.0: An open-source distributed datastore that can connect to Zeebe to store workflow data for auditing, visualization, analysis, etc. Camunda Operate uses Elasticsearch as its underlying datastore, which is why you need to download Elasticsearch to complete this tutorial. Operate and Zeebe are compatible with Elasticsearch 6.8.0.

In case you're already familiar with BPMN and how to create a BPMN model in Zeebe Modeler, you can find the finished model that we create during the tutorial here: Zeebe Getting Started Tutorial Workflow Model.

If you're using the finished model we provide rather than building your own, you can also move ahead to section 3.3: Deploy a Workflow.

And if you have questions or feedback about the tutorial, we encourage you to visit the Zeebe user forum and ask a question.

There's a "Getting Started" category for topics that you can use when you ask your question or give feedback.

Next Page: Create a Workflow >>

Create a Workflow in Zeebe Modeler

New to BPMN and want to learn more before moving forward? This blog post helps to explain the standard and why it's a good fit for microservices orchestration.

In case you're already familiar with BPMN and how to create a BPMN model in Zeebe Modeler, you can find the finished model that we create during the tutorial here: Zeebe Getting Started Tutorial Workflow Model.

If you're using the finished model we provide rather than building your own, you can also move ahead to section 3.3: Deploy a Workflow.

Zeebe Modeler is a desktop modeling tool that allows you to build and configure workflow models using BPMN 2.0. In this section, we'll create a workflow model and get it ready to be deployed to Zeebe.

We'll create an e-commerce order process as our example, and we'll model a workflow that consists of:

  • Initiating a payment for an order
  • Receiving a payment confirmation message from an external system
  • Shipping the items in the order with or without insurance depending on order value

This is what your workflow model will look like when we're finished:

Getting Started Workflow Model

The payment task and shipping tasks are carried out by worker services that we'll connect to the workflow engine. The "Payment Received" message will be published to Zeebe by an external system, and Zeebe will then correlate the message to a workflow instance.

To get started

  • Open the Zeebe Modeler and create a new BPMN diagram.
  • Save the model as order-process.bpmn in the top level of the Zeebe broker directory that you just downloaded. As a reminder, this directory is called zeebe-broker-0.20.0

The first element in your model will be a Start Event, which should already be on the canvas when you open the Modeler.

It's a BPMN best practice to label all elements in our model, so:

  • Double-click on the Start Event
  • Label it "Order Placed" to signify that our process will be initiated whenever a customer places an order

Next, we need to add a Service Task:

  • Click on the Start Event and select the Service Task icon
  • Label the Service Task "Initiate Payment"

Next, we'll configure the "Initiate Payment" Service Task so that an external microservice can work on it:

  • Click on the "Initiate Payment" task
  • Expand the Properties panel on the right side of the screen if it's not already visible
  • In the Type field in the Properties panel, enter initiate-payment

This is what you should see in your Modeler now.

Initiate Payment Service Task

This Type field represents the job type in Zeebe. A couple of concepts that are important to understand at this point:

  • A job is simply a work item in a workflow that needs to be completed before a workflow instance can proceed to the next step. (See: Job Workers)
  • A workflow instance is one running instance of a workflow model--in our case, an individual order to be fulfilled. (See: Workflows)

For every workflow instance that arrives at the "Initiate Payment" Service Task, Zeebe will create a job with type initiate-payment. The external worker service responsible for payment processing--the so-called job worker--will poll Zeebe intermittently to ask if any jobs of type initiate-payment are available.

If a job is available for a given workflow instance, the worker will activate it, complete it, and notify Zeebe. Zeebe will then advance that workflow instance to the next step in the workflow.

Next, we'll add a Message Event to the workflow:

  • Click on the "Initiate Payment" task on the Modeler
  • Select the circular icon with an envelope in the middle
  • Double-click on the message event and label it "Payment Received"

Message Event

We use message catch events in Zeebe when the workflow engine needs to receive a message from an external system before the workflow instance can advance. (See: Message Events)

In the scenario we're modeling, we initiate a payment with our Service Task, but we need to wait for some other external system to actually confirm that the payment was received. This confirmation comes in the form of a message that will be sent to Zeebe - asynchronously - by an external service.

Messages received by Zeebe need to be correlated to specific workflow instances. To make this possible, we have some more configuring to do:

  • Select the Message Event and make sure you're on the "General" tab of the Properties panel on the right side of the screen
  • In the Properties panel, click the + icon to create a new message. You'll now see two fields in the Modeler that we'll use to correlate a message to a specific workflow instance: Message Name and Subscription Correlation Key.
  • Let's give this message a self-explanatory name: payment-received.

Add Message Name

When Zeebe receives a message, this name field lets us know which message event in the workflow model the message is referring to.

But how do we know which specific workflow instance--that is, which customer order--a message refers to? That's where Subscription Correlation Key comes in. The Subscription Correlation Key is a unique ID present in both the workflow instance payload and the message sent to Zeebe.

We'll use orderId for our correlation key.

Go ahead and add orderId to the Subscription Correlation Key field.

When we create a workflow instance, we need to be sure to include orderId as a variable, and we also need to provide orderId as a correlation key when we send a message.
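Conceptually, correlation matches a message against waiting instances on both the message name and the correlation key value. A sketch of that matching (the data shapes here are made up for illustration):

```python
def correlate(message, waiting_instances):
    """Return the waiting instance that matches the message by name
    and correlation key value, or None if no instance matches."""
    for instance in waiting_instances:
        # orderId is the correlation variable used in this tutorial
        if (instance["message_name"] == message["name"]
                and instance["variables"].get("orderId") == message["correlation_key"]):
            return instance
    return None

waiting = [
    {"instance_key": 8, "message_name": "payment-received",
     "variables": {"orderId": "1234", "orderValue": 99}},
]
match = correlate({"name": "payment-received", "correlation_key": "1234"}, waiting)
```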

Here's what you should see in the Modeler:

Message Correlation Key

Next, we'll add an Exclusive (XOR) Gateway to our workflow model. The Exclusive Gateway is used to make a data-based decision about which path a workflow instance should follow. In this case, we want to ship items with insurance if the total order value is greater than or equal to $100, and to ship without insurance otherwise.

That means that when we create a workflow instance, we'll need to include order value as an instance variable. But we'll come to that later.

First, let's take the necessary steps to configure our workflow model to make this decision. To add the gateway:

  • Click on the Message Event you just created
  • Select the Gateway (diamond-shaped) symbol - the Exclusive Gateway is the default when you add a new gateway to a model
  • Double-click on the gateway and add a label "Order Value?" so that it's clear what we're using as our decision criteria

Add Exclusive Gateway to Model

Label Exclusive Gateway in Model

We'll add two outgoing Sequence Flows from this Exclusive Gateway that lead to two different Service Tasks. Each Sequence Flow will have a data-based condition that's evaluated in the context of the workflow instance payload.

Next, we need to:

  • Select the gateway and add a new Service Task to the model.
  • Label the task "Ship Without Insurance"
  • Set the Type to ship-without-insurance

Add No Insurance Service Task

Whenever we use an Exclusive Gateway, we want to be sure to set a default flow, which in this case will be shipping without insurance:

  • Select the Sequence Flow you just created from the gateway to the "Ship Without Insurance" Service Task
  • Click on the wrench icon
  • Choose "Default Flow"

Add No Insurance Service Task

Now we're ready to add a second outgoing Sequence Flow and Service Task from the gateway:

  • Select the gateway again
  • Add another Service Task to the model
  • Label it "Ship With Insurance"
  • Set the type to ship-with-insurance

Next, we'll set a condition expression in the Sequence Flow leading to this "Ship With Insurance" Service Task:

  • Click on the sequence flow and open the Properties panel
  • Input orderValue >= 100 in the "Condition expression" field in the Properties panel
  • Double-click on the sequence flow to add a label ">$100"

Condition Expression

We're almost finished! To wrap things up, we'll:

  • Select the "Ship Without Insurance" task
  • Add another Exclusive Gateway to the model to merge the branches together again (a BPMN best practice in a model like this one).
  • Select the "Ship With Insurance" task
  • Add an outgoing sequence flow that connects to the second Exclusive Gateway you just created

The last BPMN element we need to add is an End Event:

  • Click on the second Exclusive Gateway
  • Add an End Event
  • Double-click on it to label it "Order Fulfilled"

Condition Expression

Lastly, we'll change the process ID to something more descriptive than the default Process_1 that you'll see in the Modeler:

  • Click onto a blank part of the canvas
  • Open the Properties panel
  • Change the Id to order-process

Here's what you should see in the Modeler after these last few updates:

Update Process ID

That's all for our modeling step. Remember to save the file one more time to prepare to deploy the workflow to Zeebe, create workflow instances, and complete them.

Next Page: Deploy a Workflow >>

<< Previous Page: Tutorial Setup

Deploy a Workflow to Zeebe

In this section, we're going to start up the Zeebe broker as well as Camunda Operate, a tool that gives you visibility into deployed workflows and running workflow instances and contains tooling for fixing problems in those workflow instances.

We offer Operate free of charge for unrestricted non-production use because we think it's a great tool for getting familiar with Zeebe and building initial proofs-of-concept. And at this time, Operate is available for non-production use only. In the future, we'll offer an Operate enterprise license that allows for production use, too.

Before we run the Zeebe broker, we need to configure an Elasticsearch exporter in the Zeebe configuration file. Which leads to the question: what's an exporter, and why is Elasticsearch a part of this tutorial?

The answer is that Zeebe itself doesn't store historic data related to your workflow instances. If you want to keep this data for auditing or analysis, you need to export it to another storage system. Zeebe provides an easy-to-use exporter interface, and it also offers an Elasticsearch exporter out of the box. (See: Exporters)

Elasticsearch is also what Camunda Operate uses to store data, so to run Operate, you need to enable the Elasticsearch exporter in Zeebe and run an instance of Elasticsearch. In this section and the next section of the tutorial, we'll use Operate to visualize what's going on in Zeebe with each step we take.

If you are using Docker and zeebe-docker-compose then follow the instructions in the README file in the operate directory of that repository to start Zeebe and Operate. Once you have done that, skip the following section, and continue from "Check the status of the broker".

If you are using individual components, then you will need to manually configure and start components.

Manually configure and start Zeebe and Operate

These instructions are for using separate components, and are not necessary when using Docker.

First, open the zeebe.cfg.toml file (in the conf directory of the Zeebe broker) and enable the Zeebe Elasticsearch exporter.

Note that you need to un-comment only these three lines to enable the exporter:

Zeebe Configuration File

Note: Some command examples might not work on Windows if you use cmd or Powershell. For Windows users, we recommend using a bash-like shell such as Git Bash, Cygwin, or MinGW for this guide.

Next, open Terminal or another command line tool and start up Elasticsearch.

cd elasticsearch-6.8.0

Linux / Mac

bin/elasticsearch

Windows

bin\elasticsearch.bat

You'll know that startup was successful when you see something like:

[2019-04-05T10:26:22,288][INFO ][o.e.n.Node ] [oy0juRR] started

Then start the Zeebe broker in another Terminal window.

cd zeebe-broker-0.20.0
./bin/broker

And finally, start Operate in yet another Terminal window. Note that you'll need port 8080 in order to run Operate and access the UI, so be sure to check that it's available.

cd camunda-operate-distro-1.0.0-RC2
bin/operate

To confirm that Operate was started, go to http://localhost:8080. You should see the following:

Zeebe Configuration File

You can leave this tab open as we'll be returning to it shortly.

Check the status of the broker

You can use the Zeebe CLI to check the status of your broker. Open a new Terminal window to run it.

If you are using Docker, change into the zeebe-docker-compose directory.

If you are using separate components, then change into the Zeebe broker directory.

Run the following:

Linux

./bin/zbctl status

Mac

./bin/zbctl.darwin status

Windows

./bin/zbctl.exe status

You should see a response like this one:

Cluster size: 1
Partitions count: 1
Replication factor: 1
Brokers:
  Broker 0 - 0.0.0.0:26501
    Partition 0 : Leader

For all Zeebe-related operations moving forward, we'll be using Zeebe's command-line interface (CLI). In a real-world deployment, you likely wouldn't rely on the CLI to send messages or create job workers. Rather, you would embed Zeebe clients in worker microservices that connect to the Zeebe engine.

But for the sake of keeping this guide simple (and language agnostic), we're going to use the CLI.

Next, we'll deploy our workflow model via the CLI. We'll deploy the workflow model we created in the previous section.

Linux

./bin/zbctl deploy order-process.bpmn

Mac

./bin/zbctl.darwin deploy order-process.bpmn

Windows

./bin/zbctl.exe deploy order-process.bpmn

You should see a response like this one:

{
  "key": 2,
  "workflows": [
    {
      "bpmnProcessId": "order-process",
      "version": 1,
      "workflowKey": 1,
      "resourceName": "order-process.bpmn"
    }
  ]
}

Now we'll take a look at the Operate user interface:

  • Go to http://localhost:8080 and use the credentials demo / demo to access Operate
  • Click on the "Running Instances" option in the navigation bar at the top of the interface
  • Select the order-process workflow from the "Workflows" selector on the left side of the screen

You should see the workflow model we just deployed – the same model we built in the previous section. You won't see any workflow instances because we haven't created them yet, and that's exactly what we'll do in the next section.

Zeebe Configuration File

Next Page: Create and Complete Instances >>

<< Previous Page: Create a Workflow

Create and Complete Workflow Instances

We're going to create two workflow instances for this tutorial: one with an order value less than $100 and one with an order value greater than or equal to $100, so that we can see our XOR Gateway in action.

Go back to the Terminal window where you deployed the workflow model and execute the following command.

Note: Windows users who want to execute this command using cmd or Powershell have to escape the variables differently.

  • cmd: "{\"orderId\": 1234}"
  • Powershell: '{"\"orderId"\": 1234}'

Linux

./bin/zbctl create instance order-process --variables '{"orderId": "1234", "orderValue":99}'

Mac

./bin/zbctl.darwin create instance order-process --variables '{"orderId": "1234", "orderValue":99}'

Windows (Powershell)

./bin/zbctl.exe create instance order-process --variables '{\"orderId\": \"1234\", \"orderValue\":99}'

You'll see a response like:

{
  "workflowKey": 1,
  "bpmnProcessId": "order-process",
  "version": 1,
  "workflowInstanceKey": 8
}

This first workflow instance we just created represents a single customer order with orderId 1234 and orderValue 99 (or, $99).

In the same Terminal window, run the command:

Linux

./bin/zbctl create instance order-process --variables '{"orderId": "2345", "orderValue":100}'

Mac

./bin/zbctl.darwin create instance order-process --variables '{"orderId": "2345", "orderValue":100}'

Windows (Powershell)

./bin/zbctl.exe create instance order-process --variables '{\"orderId\": \"2345\", \"orderValue\":100}'

This second workflow instance we just created represents a single customer order with orderId 2345 and orderValue 100 (or, $100).

If you go back to the Operate UI and refresh the page, you should now see two workflow instances (the green badge) waiting at the Initiate Payment task.

Workflow Instances in Operate

Note that the workflow instance can't move past this first task until we create a job worker to complete initiate-payment jobs. So that's exactly what we'll do next.

To make this point again: in a real-world use case, you probably won't manually create workflow instances using the Zeebe CLI. Rather, a workflow instance would be created programmatically in response to some business event, such as a message sent to Zeebe after a customer places an order. And instances might be created at very large scale if, for example, many customers were placing orders at the same time due to a sale. We're using the CLI here just for simplicity's sake.

We have two instances currently waiting at our "Initiate Payment" task, which means that Zeebe has created two jobs with type initiate-payment.

zbctl provides a command to spawn simple job workers using an external command or script. The job worker will receive the payload for every job as a JSON object on stdin and must return its result as a JSON object on stdout if it handled the job successfully.

In this example, we'll use the Unix command cat, which simply outputs whatever it receives on stdin.
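For illustration, here is what a cat-equivalent handler would look like in Python: it reads the job payload as JSON and echoes it back unchanged. This is a sketch of the stdin/stdout JSON contract described above, not something the tutorial requires:

```python
import io
import json

def handle(stdin, stdout):
    """cat-equivalent handler: echo the job payload JSON unchanged.
    A real handler would do its work between load and dump."""
    payload = json.load(stdin)
    json.dump(payload, stdout)

# simulate zbctl piping a job payload through the handler
out = io.StringIO()
handle(io.StringIO('{"orderId": "1234", "orderValue": 99}'), out)
```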

Open a new Terminal tab or window, change into the Zeebe broker directory, and use the following command to create a job worker that will work on the initiate-payment job.

Note: For Windows users, this command does not work with cmd, as the cat command does not exist there. We recommend using Powershell or a bash-like shell to execute this command.

Linux

./bin/zbctl create worker initiate-payment --handler cat

Mac

./bin/zbctl.darwin create worker initiate-payment --handler cat

Windows

./bin/zbctl.exe create worker initiate-payment --handler "findstr .*"

You should see a response along the lines of:

Activated job 12 with payload {"orderId":"2345","orderValue":100}
Activated job 7 with payload {"orderId":"1234","orderValue":99}
Handler completed job 12 with payload {"orderId":"2345","orderValue":100}
Handler completed job 7 with payload {"orderId":"1234","orderValue":99}

We can see that the job worker activated then completed the two available initiate-payment jobs. You can shut down the job worker if you'd like--you won't need it in the rest of the tutorial.

Now go to the browser tab where you're running Operate. You should see that the workflow instances have advanced to the Intermediate Message Catch Event and are waiting there.

Waiting at Message Event

The workflow instances will wait at the Intermediate Message Catch Event until a message is received by Zeebe and correlated to the instances. Messages can be published using Zeebe clients, and it's also possible for Zeebe to connect to a message queue such as Apache Kafka and correlate messages published there to workflow instances.

zbctl also supports message publishing, so we'll continue to use it in our demo. Below is the command we'll use to publish and correlate a message. You'll see that we provide the message "Name" that we assigned to this message event in the Zeebe Modeler as well as the orderId that we included in the payload of the instance when we created it.

Remember, orderId is the correlation key we set in the Modeler when configuring the message event. Zeebe requires both of these fields to be able to correlate a message to a workflow instance. Because we have two workflow instances with two distinct orderId, we'll need to publish two messages. Run these two commands one after the other:

Linux

./bin/zbctl publish message "payment-received" --correlationKey="1234"
./bin/zbctl publish message "payment-received" --correlationKey="2345"

Mac

./bin/zbctl.darwin publish message "payment-received" --correlationKey="1234"
./bin/zbctl.darwin publish message "payment-received" --correlationKey="2345"

Windows

./bin/zbctl.exe publish message "payment-received" --correlationKey="1234"
./bin/zbctl.exe publish message "payment-received" --correlationKey="2345"

You won't see a response in your Terminal window, but if you refresh Operate, you should see that the messages were correlated successfully and that one workflow instance has advanced to the "Ship With Insurance" task and the other has advanced to the "Ship Without Insurance" task.

Waiting at Shipping Service Tasks

The good news is that this visualization confirms that our decision logic worked as expected: our workflow instance with an orderValue of $100 will ship with insurance, and our workflow instance with an orderValue of $99 will ship without insurance.

You probably know what you need to do next. Go ahead and open a Terminal window and create a job worker for the ship-without-insurance job type.

Linux

./bin/zbctl create worker ship-without-insurance --handler cat

Mac

./bin/zbctl.darwin create worker ship-without-insurance --handler cat

Windows

./bin/zbctl.exe create worker ship-without-insurance --handler "findstr .*"

You should see a response along the lines of:

Activated job 529 with payload {"orderId":"1234","orderValue":99}
Handler completed job 529 with payload {"orderId":"1234","orderValue":99}

You can shut down this worker now.

Select the "Finished Instances" checkbox in the bottom left of Operate, refresh the page, and voila! You'll see your first completed Zeebe workflow instance.

First Workflow Instance Complete

Because the "Ship With Insurance" task has a different job type, we need to create a second worker that can take on this job.

Linux

./bin/zbctl create worker ship-with-insurance --handler cat

Mac

./bin/zbctl.darwin create worker ship-with-insurance --handler cat

Windows

./bin/zbctl.exe create worker ship-with-insurance --handler "findstr .*"

You should see a response along the lines of:

Activated job 535 with payload {"orderId":"2345","orderValue":100}
Handler completed job 535 with payload {"orderId":"2345","orderValue":100}

You can shut down this worker, too.

Let's take one more look in Operate to confirm that both workflow instances have been completed.

Both Workflow Instances Complete

Hooray! You've completed the tutorial! Congratulations.

In the next and final section, we'll point you to resources we think you'll find helpful as you continue working with Zeebe.

Next Page: Next Steps & Resources >>

<< Previous Page: Deploy a Workflow

Next Steps & Resources

Zeebe's Java and Go clients each have Getting Started guides of their own, showing in much greater detail how you can use the clients in the worker services you orchestrate with Zeebe.

Beyond Java and Go, it's possible to create clients for Zeebe in a range of other programming languages, including JavaScript and C#, via community-supported libraries. The Awesome Zeebe page includes community-contributed clients in other languages, and this blog post walks through how to generate a new client stub for Zeebe using gRPC.

The Zeebe docs (where this tutorial is located) contain resources to help you move your Zeebe project forward.

If you have questions, you can get in touch with us via one of Zeebe's public support channels.

Please reach out if we can help you! We're here to offer support.

Lastly, we do a lot of writing about project news along with an occasional deep dive into the product in the Zeebe blog. And we usually make product announcements via Twitter and our email mailing list, which you can sign up for at the bottom of the homepage.

Thanks so much for working through this tutorial with us. We're really glad you're here, and we're happy to welcome you to the Zeebe community!

<< Previous Page: Create and Complete Instances

BPMN Workflows

Zeebe uses visual workflows based on the industry standard BPMN 2.0.

workflow

Read more about:

BPMN Primer

Business Process Model and Notation (BPMN) 2.0 is an industry standard for workflow modeling and execution. A BPMN workflow is an XML document that has a visual representation. For example, here is a BPMN workflow:

workflow

This is the corresponding XML: Click here.

This duality makes BPMN very powerful. The XML document contains all the necessary information to be interpreted by workflow engines such as Zeebe and by modeling tools. At the same time, the visual representation contains just enough information to be quickly understood by humans, even non-technical people. The BPMN model is source code and documentation in one artifact.

The following is an introduction to BPMN 2.0, its elements and their execution semantics. It tries to briefly provide an intuitive understanding of BPMN's power but does not cover the entire feature set. For more exhaustive BPMN resources, see the reference links at the end of this section.

Modeling BPMN Diagrams

The best tool for modeling BPMN diagrams for Zeebe is Zeebe Modeler.

BPMN Elements

Sequence Flow: Controlling the Flow of Execution

A core concept of BPMN is a sequence flow that defines the order in which steps in the workflow happen. In BPMN's visual representation, a sequence flow is an arrow connecting two elements. The direction of the arrow indicates their order of execution.

workflow

You can think of workflow execution as tokens running through the workflow model. When a workflow is started, a token is spawned at the beginning of the model. It advances with every completed step. When the token reaches the end of the workflow, it is consumed and the workflow instance ends. Zeebe's task is to drive the token and to make sure that the job workers are invoked whenever necessary.
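
The token lifecycle described above can be illustrated with a tiny simulation. All names here are illustrative; this is a conceptual sketch of the idea, not the Zeebe API:

```python
# Minimal illustration of the token concept: a token is spawned at the
# start of the model, advances with every completed step, and is
# consumed at the end, ending the workflow instance.

def run_instance(steps, handlers):
    """Drive one token through a linear workflow model."""
    token_position = 0
    while token_position < len(steps):
        step = steps[token_position]
        handlers[step]()           # the engine invokes the worker for this step
        token_position += 1        # the token advances on completion
    return "completed"             # token consumed, instance ends

log = []
steps = ["collect-money", "fetch-items", "ship-parcel"]
handlers = {s: (lambda s=s: log.append(s)) for s in steps}
print(run_instance(steps, handlers))  # → completed
print(log)  # → ['collect-money', 'fetch-items', 'ship-parcel']
```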

Tasks: Units of Work

The basic elements of BPMN workflows are tasks: atomic units of work that are composed to create a meaningful result. Whenever a token reaches a task, the token stops and Zeebe creates a job and notifies a registered worker to perform work. When that worker signals completion, the token continues on the outgoing sequence flow.

Choosing the granularity of a task is up to the person modeling the workflow. For example, the activity of processing an order can be modeled as a single Process Order task or as three individual tasks: Collect Money, Fetch Items, and Ship Parcel. If you use Zeebe to orchestrate microservices, one task can represent one microservice invocation.

See the Tasks section on which types of tasks are currently supported and how to use them.

Gateways: Steering Flow

Gateways are elements that route tokens in more complex patterns than plain sequence flow.

BPMN's exclusive gateway chooses one sequence flow out of many based on data:

BPMN's parallel gateway generates new tokens by activating multiple sequence flows in parallel:

See the Gateways section on which types of gateways are currently supported and how to use them.

Events: Waiting for Something to Happen

Events in BPMN represent things that happen. A workflow can react to events (catching event) as well as emit events (throwing event). For example:

The circle with the envelope symbol is a catching message event. It makes the token continue as soon as a message is received. The XML representation of the workflow contains the criteria for which kind of message triggers continuation.

Events can be added to the workflow in various ways. Not only can they be used to make a token wait at a certain point, but also to interrupt a token's progress.

See the Events section on which types of events are currently supported and how to use them.

Sub Processes: Grouping Elements

Sub Processes are element containers that allow defining common functionality. For example, we can attach an event to a sub process's border:

payload

When the event is triggered, the sub process is interrupted regardless of which of its elements is currently active.

See the Sub Processes section on which types of sub processes are currently supported and how to use them.

Additional Resources

BPMN Coverage

Elements marked in orange are currently implemented by Zeebe.

Participants

Pool
Lane

Subprocesses

Subprocess
Call Activity
Event Subprocess
Transaction

Tasks

Service Task
User Task
Script Task
Business Rule Task
Manual Task
Receive Task
Undefined Task
Send Task
Receive Task (instantiated)

Data

Data Object
Data Store

Artifacts

Text Annotation
Group

Events

The original docs show a coverage matrix of event types against the positions in which each can appear: as a start event (normal, event sub-process, event sub-process non-interrupting), as an intermediate event (catch, boundary, boundary non-interrupting, throw), or as an end event. The event types listed are:

None
Message
Timer
Conditional
Link
Signal
Error
Escalation
Termination
Compensation
Cancel
Multiple
Multiple Parallel

Data Flow

Every BPMN workflow instance can have one or more variables. Variables are key-value pairs and hold the contextual data of the workflow instance that job workers require to do their work. They can be provided when a workflow instance is created, when a job is completed, and when a message is correlated.

data-flow

Job Workers

By default, a job worker gets all variables of a workflow instance. It can limit the data by providing a list of required variables as fetchVariables.

The worker uses the variables to do its work. When the work is done, it completes the job. If the result of the work is needed by follow-up tasks, the worker sets the variables while completing the job. These variables are merged into the workflow instance.

job-worker

If the job worker expects the variables in a different format or under different names, the variables can be transformed by defining input mappings in the workflow. Output mappings can be used to transform the job variables before merging them into the workflow instance.
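
The variable flow described above can be sketched as follows. The function names and the fetch_variables parameter mirror the concepts in the text but are illustrative, not actual Zeebe client calls:

```python
# Conceptual sketch of the job-worker variable flow: a worker receives a
# (possibly filtered) view of the instance variables, and the variables
# it returns on completion are merged back into the workflow instance.

def activate_job(instance_vars, fetch_variables=None):
    """Hand the worker all variables, or only the requested ones."""
    if fetch_variables is None:
        return dict(instance_vars)
    return {k: v for k, v in instance_vars.items() if k in fetch_variables}

def complete_job(instance_vars, result_vars, output_mappings=None):
    """Merge the worker's result into the instance, applying optional
    output mappings (source -> target renames) first."""
    if output_mappings:
        result_vars = {target: result_vars[source]
                       for source, target in output_mappings.items()
                       if source in result_vars}
    merged = dict(instance_vars)
    merged.update(result_vars)
    return merged

instance = {"orderId": "1234", "orderValue": 99}
job_vars = activate_job(instance, fetch_variables=["orderId"])
print(job_vars)  # → {'orderId': '1234'}
instance = complete_job(instance, {"method": "VISA"},
                        output_mappings={"method": "paymentMethod"})
print(instance)  # → {'orderId': '1234', 'orderValue': 99, 'paymentMethod': 'VISA'}
```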

Variable Scopes vs. Token-Based Data

A workflow can have concurrent paths, for example, when using a parallel gateway. When execution reaches the parallel gateway, new tokens are spawned that execute the following paths concurrently.

Since the variables are part of the workflow instance and not of the token, they can be read globally from any token. If a token adds a variable or modifies the value of a variable, the changes are also visible to concurrent tokens.

variable-scopes

The visibility of variables is defined by the variable scopes of the workflow.

Additional Resources

Tasks

Currently supported elements:

Service Tasks

workflow

A service task represents a work item in the workflow with a specific type. When a workflow instance arrives at a service task, a corresponding job is created. The token flow stops at this point.

A worker can subscribe to these jobs and complete them when the work is done. When a job is completed, the token flow continues.

XML representation:

<bpmn:serviceTask id="collect-money" name="Collect Money">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="payment-service" />
    <zeebe:taskHeaders>
      <zeebe:header key="method" value="VISA" />
    </zeebe:taskHeaders>
  </bpmn:extensionElements>
</bpmn:serviceTask>

BPMN Modeler: Click Here

Task Definition

Each service task must have a task definition. It specifies the type of job which workers can subscribe to.

Optionally, a task definition can specify the number of times the job is retried when a worker signals failure (default = 3).

<zeebe:taskDefinition type="payment-service" retries="5" />

Task Headers

A service task can define an arbitrary number of task headers. Task headers are metadata that are handed to workers along with the job. They can be used as configuration parameters for the worker.

<zeebe:taskHeaders>
  <zeebe:header key="method" value="VISA" />
</zeebe:taskHeaders>

Variable Mappings

By default, all job variables are merged into the workflow instance. This behavior can be customized by defining an output mapping at the service task. Input mappings can be used to transform the variables into a format that is accepted by the job worker.

XML representation:

<serviceTask id="collectMoney">
  <extensionElements>
    <zeebe:ioMapping>
      <zeebe:input source="price" target="total"/>
      <zeebe:output source="method" target="paymentMethod"/>
     </zeebe:ioMapping>
  </extensionElements>
</serviceTask>

Additional Resources

Receive Tasks

Receive tasks are tasks that reference a message. They can be used to wait until a matching message is received.

Messages

A message can be referenced by one or more receive tasks. It holds the information that is used for message correlation. The required attributes are:

  • the name of the message
  • the correlation key

The correlation key is specified as a variable expression. It is evaluated when the receive task is entered and extracts the value from the workflow instance variables. The variable value must be either a string or a number. If the correlation key can't be resolved, or it is neither a string nor a number, an incident is created.
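
The resolution rule above can be sketched like this. This is illustrative only; treating booleans as invalid is an assumption of the sketch, since they are neither strings nor plain numbers:

```python
# Sketch of correlation-key resolution: the key expression is read from
# the instance variables; only strings and numbers are valid, anything
# else raises an incident. Conceptual model, not the Zeebe engine.

class Incident(Exception):
    pass

def resolve_correlation_key(variables, key_expression):
    if key_expression not in variables:
        raise Incident(f"correlation key '{key_expression}' cannot be resolved")
    value = variables[key_expression]
    # bool is a subclass of int in Python, so exclude it explicitly
    if isinstance(value, bool) or not isinstance(value, (str, int, float)):
        raise Incident("correlation key must be a string or a number")
    return value

print(resolve_correlation_key({"orderId": "1234"}, "orderId"))  # → 1234
try:
    resolve_correlation_key({"orderId": {"id": 1}}, "orderId")
except Incident as e:
    print(e)  # → correlation key must be a string or a number
```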

XML representation:

<bpmn:message id="Message_1iz5qtq" name="Money collected">
   <bpmn:extensionElements>
     <zeebe:subscription correlationKey="orderId" />
   </bpmn:extensionElements>
</bpmn:message>

Receive Tasks

When a token arrives at the receive task, it waits there until a matching message is correlated. Correlation is based on the name of the message and the correlation key. The task is left when a message is correlated.

XML representation:

<bpmn:receiveTask id="money-collected" name="Money collected" messageRef="Message_1iz5qtq">
</bpmn:receiveTask>

BPMN Modeler: Click Here

Message intermediate catch events are an alternative to receive tasks which can be used on an event-based gateway.

Variable Mappings

By default, all message variables are merged into the workflow instance. This behavior can be customized by defining an output mapping at the receive task.

XML representation:

<bpmn:receiveTask id="money-collected" name="Money collected" messageRef="Message_1iz5qtq">
    <bpmn:extensionElements>
    <zeebe:ioMapping>
      <zeebe:output source="price" target="totalPrice"/>
     </zeebe:ioMapping>
  </bpmn:extensionElements>
</bpmn:receiveTask>

Additional Resources

Gateways

Currently supported elements:

Exclusive Gateway (XOR)

workflow

An exclusive gateway chooses one of its outgoing sequence flows for continuation. Each sequence flow has a condition that is evaluated in the context of the current workflow instance. The workflow instance takes the first sequence flow whose condition is fulfilled.

If no condition is fulfilled, it takes the default flow, which has no condition. If the gateway has no default flow (not recommended), execution stops and an incident is created.

XML Representation

<bpmn:exclusiveGateway id="exclusiveGateway" default="else" />

<bpmn:sequenceFlow id="priceGreaterThan100" name="totalPrice &#62; 100" sourceRef="exclusiveGateway" targetRef="shipParcelWithInsurance">
  <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">
    <![CDATA[ totalPrice > 100 ]]>
  </bpmn:conditionExpression>
</bpmn:sequenceFlow>

<bpmn:sequenceFlow id="else" name="else" sourceRef="exclusiveGateway" targetRef="shipParcel" />
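
The selection logic described above can be sketched as follows. This is a conceptual model, not the engine's implementation:

```python
# Sketch of exclusive-gateway selection: the first outgoing flow whose
# condition holds wins; otherwise the default flow is taken; with no
# default and no fulfilled condition, an incident is created.

def choose_flow(variables, conditional_flows, default_flow=None):
    """conditional_flows: list of (target, predicate) in model order."""
    for target, predicate in conditional_flows:
        if predicate(variables):
            return target
    if default_flow is not None:
        return default_flow
    raise RuntimeError("incident: no condition fulfilled and no default flow")

flows = [("shipParcelWithInsurance", lambda v: v["totalPrice"] > 100)]
print(choose_flow({"totalPrice": 120}, flows, "shipParcel"))  # → shipParcelWithInsurance
print(choose_flow({"totalPrice": 99}, flows, "shipParcel"))   # → shipParcel
```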

BPMN Modeler: Click Here

Additional Resources

Parallel Gateway (AND)

workflow

A parallel gateway is only activated when a token has arrived on each of its incoming sequence flows. Once activated, all of the outgoing sequence flows are taken, so in the case of multiple outgoing sequence flows the branches are executed concurrently. Execution progresses independently until a synchronizing element is reached, for example, a merging parallel gateway.

BPMN Modeler: Click Here

Event-Based Gateway

workflow

An event-based gateway allows you to make a decision based on events. Each outgoing sequence flow of the gateway needs to be connected to an intermediate catch event.

When a token reaches an event-based gateway, it waits there until the first of its events is triggered. It then takes the outgoing sequence flow of that event and continues. No other event of the gateway can be triggered afterward.

Constraints

  • the gateway has at least two outgoing sequence flows
  • the gateway has only outgoing sequence flows to intermediate catch events of type:

XML Representation

<bpmn:eventBasedGateway id="gateway" />

<bpmn:sequenceFlow id="s1" sourceRef="gateway" targetRef="payment-details-updated" />

<bpmn:intermediateCatchEvent id="payment-details-updated" name="Payment Details Updated">
  <bpmn:messageEventDefinition messageRef="message-payment-details-updated" />
</bpmn:intermediateCatchEvent>

<bpmn:sequenceFlow id="s2" sourceRef="gateway" targetRef="wait-one-hour" />

<bpmn:intermediateCatchEvent id="wait-one-hour" name="1 hour">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>PT1H</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>
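
The first-event-wins behavior can be sketched as follows (conceptual only; the event names follow the XML above):

```python
# Sketch of event-based gateway semantics: whichever of the gateway's
# events triggers first determines the path taken; the other events of
# the gateway can no longer fire afterward.

def event_based_gateway(triggered_events, gateway_events):
    """gateway_events: the gateway's outgoing events; triggered_events:
    events in the order they actually occur."""
    for event in triggered_events:
        if event in gateway_events:
            return event   # first matching event wins, the rest are dropped
    return None            # still waiting

events = ["payment-details-updated", "wait-one-hour"]
print(event_based_gateway(["wait-one-hour", "payment-details-updated"], events))
# → wait-one-hour  (the timer fired first, so the message path is dropped)
```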

BPMN Modeler: Click Here

Events

Events in BPMN can be thrown (i.e. sent), or caught (i.e. received), respectively referred to as throw or catch events, e.g. message throw event, timer catch event.

Additionally, a distinction is made between start, intermediate, and end events:

  • Start events (catch events, as they can only react to something) are used to denote the beginning of a process or sub-process.
  • End events (throw events, as they indicate something has happened) are used to denote the end of a particular sequence flow.
  • Intermediate events can be used to indicate that something has happened (i.e. intermediate throw events), or to wait and react to certain events (i.e. intermediate catch events).

Intermediate events

Intermediate catch events can be inserted into your process in two different contexts: in normal flow, or attached to the boundary of an activity, in which case they are called boundary events.

Intermediate Events

In normal flow, an intermediate throw event will execute its event (e.g. send a message) once the token has reached it, and once done the token will continue to all outgoing sequence flows.

An intermediate catch event, however, will stop the token and wait until the event it is waiting for happens, at which point execution resumes and the token moves on.

Boundary events

Boundary events provide a way to model what should happen if an event occurs while an activity is active. For example, if a process is waiting on a user task that is taking too long, an intermediate timer catch event can be attached to the task, with an outgoing sequence flow to a notification task, allowing the modeler to automate sending a reminder email to the user.

A boundary event must be an intermediate catch event, and can be either interrupting or non-interrupting. Interrupting in this case means that once triggered, before taking any outgoing sequence flow, the activity the event is attached to will be terminated. This allows modeling timeouts where we want to prune certain execution paths if something happens, e.g. the process takes too long.

Supported events

At the moment, Zeebe only supports catching events, and only the following:

None Events

None events are unspecified events, also called ‘blank’ events.

None Start Events

workflow

A workflow must have exactly one none start event. The event is triggered when the workflow is started via the API, and as a consequence a token is spawned at the event.

XML representation:

<bpmn:startEvent id="order-placed" name="Order Placed" />

BPMN Modeler: Click Here

None End Events

workflow

A workflow can have one or more none end events. When a token arrives at an end event, it is consumed. If it is the last token within a scope (top-level workflow or sub process), the scope instance ends.

XML representation:

<bpmn:endEvent id="order-delivered" name="Order Delivered" />

BPMN Modeler: Click Here

Note that an activity without outgoing sequence flow has the same semantics as a none end event. After the task is completed, the token is consumed and the workflow instance may end.

Message Events

Message events are events which reference a message. They can be used to wait until a proper message is received.

Currently, messages can be published only externally using one of the Zeebe clients.

Messages

A message can be referenced by one or more message events. It holds the information that is used for message correlation. The required attributes are:

  • the name of the message
  • the correlation key

The correlation key is specified as a variable expression. It is evaluated when the message event is entered and extracts the value from the workflow instance variables. The value must be either a string or a number. If the correlation key can't be resolved, or it is neither a string nor a number, an incident is created.

XML representation:

<bpmn:message id="Message_1iz5qtq" name="Money collected">
   <bpmn:extensionElements>
     <zeebe:subscription correlationKey="orderId" />
   </bpmn:extensionElements>
</bpmn:message>

Intermediate Message Catch Events

workflow

When a token arrives at the message intermediate catch event, it waits there until a matching message is correlated. Correlation is based on the name of the message and the correlation key. The event is left when a message is correlated.

XML representation:

<bpmn:intermediateCatchEvent id="money-collected">
  <bpmn:messageEventDefinition messageRef="Message_1iz5qtq" />
</bpmn:intermediateCatchEvent>

BPMN Modeler: Click Here

Receive tasks are an alternative to message intermediate catch events to which boundary events can be attached.

Message Start Events

A message start event allows creating a workflow instance by publishing a named message. A workflow can have more than one message start event, each with a unique message name. We can choose the right start event from a set of start events using the message name.

When deploying a workflow, the following conditions apply:

  • The message name must be unique across all start events in the workflow definition.
  • When a new version of the workflow is deployed, subscriptions to the message start events of the old version will be canceled. Thus instances of the old version cannot be created by publishing messages. This is true even if the new version has different start events.
  • Currently, a workflow that has message start events cannot have a none start event.

The following behavior applies to published messages:

  • A message is correlated to a message start event if the message name matches. The correlation key of the message is ignored.
  • A message is not correlated to a message start event if it was published before the subscription was created, i.e. before the workflow was deployed. This is because the message could have been already correlated to the previous version of the workflow.

XML representation

<bpmn:message id="newOrder" name="New order">
</bpmn:message>

<bpmn:startEvent id="messageStart">
  <bpmn:messageEventDefinition messageRef="newOrder" />
</bpmn:startEvent>

Message Boundary Events

When attached to the boundary of an activity, a message catch event behaves in two ways:

If it is non-interrupting, it will spawn a new token which will take the outgoing sequence flow. It will not terminate the activity that it's attached to. If it is interrupting, it will terminate the activity before spawning the token.

Variable Mappings

By default, all message variables are merged into the workflow instance. This behavior can be customized by defining an output mapping at the message catch event.

XML representation:

<bpmn:intermediateCatchEvent id="money-collected">
  <bpmn:extensionElements>
    <zeebe:ioMapping>
      <zeebe:output source="price" target="totalPrice"/>
     </zeebe:ioMapping>
  </bpmn:extensionElements>
</bpmn:intermediateCatchEvent>

Additional Resources

Timer Events

Timer events are events which are triggered by a defined timer.

Timer Intermediate Catch Events

workflow

A timer intermediate event acts as a stopwatch. When a token arrives at the timer intermediate catch event, a timer is started. The timer fires after the specified duration has elapsed, and the event is left.

The duration must be defined in the ISO 8601 format. For example:

  • PT15S - 15 seconds
  • PT1H30M - 1 hour and 30 minutes
  • P14D - 14 days
  • P1Y6M - 1 year, 6 months

Durations can also be zero or negative; a negative timer will fire immediately and can be expressed as -PT5S, or minus 5 seconds.
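
A rough parser for such duration strings can be sketched as follows. Python's standard library has no ISO 8601 duration parser, and approximating years and months as 365 and 30 days is an assumption of this sketch; a real engine resolves calendar units differently:

```python
import re
from datetime import timedelta

# Sketch: parse the ISO 8601 durations listed above into a timedelta.
_DURATION = re.compile(
    r"(?P<sign>-)?P(?:(?P<y>\d+)Y)?(?:(?P<mo>\d+)M)?(?:(?P<d>\d+)D)?"
    r"(?:T(?:(?P<h>\d+)H)?(?:(?P<mi>\d+)M)?(?:(?P<s>\d+)S)?)?"
)

def parse_duration(text):
    m = _DURATION.fullmatch(text)
    if not m:
        raise ValueError(f"invalid ISO 8601 duration: {text}")
    g = {k: int(v or 0) for k, v in m.groupdict().items() if k != "sign"}
    # years/months approximated as fixed day counts (sketch assumption)
    delta = timedelta(days=g["y"] * 365 + g["mo"] * 30 + g["d"],
                      hours=g["h"], minutes=g["mi"], seconds=g["s"])
    return -delta if m.group("sign") else delta

print(parse_duration("PT15S"))    # → 0:00:15
print(parse_duration("PT1H30M"))  # → 1:30:00
print(parse_duration("P14D"))     # → 14 days, 0:00:00
print(parse_duration("-PT5S"))    # → -1 day, 23:59:55 (i.e. minus 5 seconds)
```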

XML representation:

<bpmn:intermediateCatchEvent id="wait-for-coffee" name="4 minutes">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>PT4M</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:intermediateCatchEvent>

Timer Boundary Events

workflow

As boundary events, timer catch events can be marked as non-interrupting; as a simple duration, however, a non-interrupting timer event isn't particularly useful. As such, it is possible to define repeating timers, that is, timers that are fired every X amount of time, where X is a duration as specified above.

The notation to express the timer is changed slightly from the above to denote how often a timer should be repeated:

  • R/PT5S - every 5 seconds, infinitely.
  • R5/PT1S - every second, up to 5 times.

On the other hand, in the case of an interrupting boundary event, a cycle is not particularly useful, and here it makes sense to use a simple duration; this allows you, for example, to model timeout logic associated to a task.
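
Splitting the repetition prefix from the duration can be sketched as follows (illustrative only; None stands for "infinitely"):

```python
# Sketch: parse the "R[n]/<duration>" cycle notation described above
# into (repetitions, duration text). A missing n means infinite repeats.

def parse_cycle(text):
    repeat, _, duration = text.partition("/")
    if not repeat.startswith("R") or not duration:
        raise ValueError(f"invalid timer cycle: {text}")
    count = repeat[1:]
    return (None if count == "" else int(count)), duration

print(parse_cycle("R/PT5S"))   # → (None, 'PT5S')  every 5 seconds, infinitely
print(parse_cycle("R5/PT1S"))  # → (5, 'PT1S')     every second, up to 5 times
```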

XML representation:

<bpmn:serviceTask id="brew-coffee">
  <bpmn:incoming>incoming</bpmn:incoming>
</bpmn:serviceTask>
<bpmn:boundaryEvent id="send-reminder" cancelActivity="true" attachedToRef="brew-coffee">
  <bpmn:timerEventDefinition>
    <bpmn:timeDuration>PT4M</bpmn:timeDuration>
  </bpmn:timerEventDefinition>
</bpmn:boundaryEvent>

BPMN Modeler: Click Here

Timer Start Events

workflow

A timer start event can be used to periodically create an instance of a workflow. A workflow can have multiple timer start events along with other types of start events, with the sole exception of the none start event, which can only be used once and by itself. The interval is expressed in the same way as in timer boundary events:

  • R/PT5S - every 5 seconds, infinitely.
  • R5/PT1H - every hour, 5 times.

Note that subprocesses cannot have timer start events.

XML representation:

 <bpmn:startEvent id="timer-start">
  <bpmn:timerEventDefinition>
    <bpmn:timeCycle>R3/PT10H</bpmn:timeCycle>
  </bpmn:timerEventDefinition>
</bpmn:startEvent>

Sub Processes

Currently supported elements:

embedded-subprocess

Embedded Sub Process

An embedded sub process can be used to group workflow elements. It must have a single none start event. When activated, execution starts at that start event. The sub process only completes when all contained paths of execution have ended.

XML representation:

<bpmn:subProcess id="shipping" name="Shipping">
  <bpmn:startEvent id="shipping-start" />
  ... more contained elements ...
</bpmn:subProcess>

BPMN Modeler: Click Here

Variable Mappings

Input mappings can be used to create new variables in the scope of the sub process. These variables are only visible within the sub process.

By default, the variables of the sub process are not propagated (i.e. they are removed with the scope). This behavior can be customized by defining output mappings at the sub process. The output mappings are applied when the sub process is completed.

<bpmn:subProcess id="shipping" name="Shipping">
  <bpmn:extensionElements>
    <zeebe:ioMapping>
      <zeebe:input source="order.id" target="trackingId"/>
    </zeebe:ioMapping>
  </bpmn:extensionElements>
</bpmn:subProcess>

Additional Resources

BPMN Modeler

Zeebe Modeler is a tool to create BPMN 2.0 models that run on Zeebe.

overview

Learn how to create models:

Introduction

The modeler's basic elements are:

  • Element/Tool Palette
  • Context Menu
  • Properties Panel

Palette

The palette can be used to add new elements to the diagram. It also provides tools for navigating the diagram.

intro-palette

Context Menu

The context menu of a model element allows you to quickly delete the element as well as add new follow-up elements.

intro-context-menu

Properties Panel

The properties panel can be used to configure element properties required for execution. Select an element to inspect and modify its properties.

intro-properties-panel

Tasks

Service Task

Create a service task and configure the job type.

service-task

Optionally, add task headers.

task-headers

Optionally, add input/output variable mappings.

variable-mappings

Receive Task

Create a receive task and a message.

receive-task

Gateways

Exclusive Gateway

Create an exclusive gateway with two outgoing sequence flows. One sequence flow has a condition and the other is the default flow.

exclusive-gateway

Parallel Gateway

Create a parallel gateway with two outgoing sequence flows.

parallel-gateway

Event-Based Gateway

Create an event-based gateway with two events: one message event and one timer event.

parallel-gateway

Events

None Start Event

Create a none start event.

none-events

None End Event

Create a none end event.

none-events

Intermediate Message Catch Event

Create an intermediate message catch event and a message.

message-event

Boundary Timer Event

Create an interrupting timer boundary event.

timer-event

Add a non-interrupting timer boundary event.

timer-event

Sub Processes

Embedded Sub Process

Create an embedded sub process with two service tasks.

embedded-subprocess

YAML Workflows

In addition to BPMN, Zeebe provides a YAML format to define workflows. A YAML workflow can be created with a regular text editor and does not require a graphical modeling tool. The format is inspired by imperative programming concepts and aims to be easily understandable by programmers. Internally, Zeebe transforms a deployed YAML file to BPMN.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

Read more about:

Tasks

A workflow can contain multiple tasks, where each represents a step in the workflow.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service
      retries: 5

    - id: ship-parcel
      type: shipment-service
      headers:
            method: "express"
            withInsurance: false

Each task has the following properties:

  • id (required): the unique identifier of the task.
  • type (required): the name to which job workers can subscribe.
  • retries: the number of times the job is retried in case of failure (default = 3).
  • headers: a list of metadata in the form of key-value pairs that can be accessed by a worker.

When Zeebe executes a task, it creates a job that is handed to a job worker. The worker performs the business logic and eventually completes the job, which triggers continuation of the workflow.

Related resources:

Control Flow

Control flow is about the order in which tasks are executed. The YAML format provides tools to decide which task is executed when.

Sequences

In a sequence, a task is executed after the previous one is completed. By default, tasks are executed top-down as they are declared in the YAML file.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

In the example above, the workflow starts with collect-money, followed by fetch-items and ends with ship-parcel.

We can use the goto and end attributes to define a different order:

name: order-process

tasks:
    - id: collect-money
      type: payment-service
      goto: ship-parcel

    - id: fetch-items
      type: inventory-service
      end: true

    - id: ship-parcel
      type: shipment-service
      goto: fetch-items

In the above example, we have reversed the order of fetch-items and ship-parcel. Note that the end attribute is required so that workflow execution stops after fetch-items.
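The goto/end traversal above can be sketched as a small, self-contained simulation (the GotoWalk and Task names are illustrative, not part of Zeebe): start at the first declared task, follow each task's goto target, fall through to the next task when none is given, and stop after a task marked end.

```java
import java.util.*;

public class GotoWalk {
    // A task as declared in the YAML file: optional goto target and end flag.
    record Task(String id, String gotoId, boolean end) {}

    // Returns the execution order implied by the goto/end attributes.
    static List<String> order(List<Task> tasks) {
        Map<String, Integer> indexById = new HashMap<>();
        for (int i = 0; i < tasks.size(); i++) indexById.put(tasks.get(i).id(), i);

        List<String> executed = new ArrayList<>();
        int i = 0;
        while (i < tasks.size()) {
            Task task = tasks.get(i);
            executed.add(task.id());
            if (task.end()) break;                                       // end stops the workflow
            if (task.gotoId() != null) i = indexById.get(task.gotoId()); // explicit jump
            else i++;                                                    // default: top-down order
        }
        return executed;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
            new Task("collect-money", "ship-parcel", false),
            new Task("fetch-items", null, true),
            new Task("ship-parcel", "fetch-items", false));
        System.out.println(order(tasks)); // [collect-money, ship-parcel, fetch-items]
    }
}
```

Running the sketch on the reordered workflow above yields collect-money, ship-parcel, fetch-items.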

Data-based Conditions

Some workflows do not always execute the same tasks but need to pick and choose different tasks, based on variables of the workflow instance.

We can use the switch attribute and conditions to decide on the next task.

name: order-process

tasks:
    - id: collect-money
      type: payment-service

    - id: fetch-items
      type: inventory-service
      switch:
          - case: totalPrice > 100
            goto: ship-parcel-with-insurance

          - default: ship-parcel

    - id: ship-parcel-with-insurance
      type: shipment-service-premium
      end: true

    - id: ship-parcel
      type: shipment-service

In the above example, the order-process starts with collect-money, followed by fetch-items. If the variable totalPrice is greater than 100, then it continues with ship-parcel-with-insurance. Otherwise, ship-parcel is chosen. In either case, the workflow instance ends after that.

In the switch element, there is one case element per alternative to choose from. If none of the conditions evaluates to true, the default element determines the next task. While default is not required, it is best practice to include one to avoid errors at workflow runtime. Should such an error occur (i.e. no case is fulfilled and there is no default), workflow execution stops and an incident is raised.
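The case/default selection can be sketched as follows (SwitchSelect and Case are illustrative names; conditions are modelled here as plain predicates over the instance variables):

```java
import java.util.*;
import java.util.function.Predicate;

public class SwitchSelect {
    // One case entry: a condition over the instance variables plus a goto target.
    record Case(Predicate<Map<String, Object>> condition, String gotoId) {}

    // Returns the id of the next task; throws to mimic Zeebe raising an
    // incident when no case is fulfilled and there is no default.
    static String next(List<Case> cases, String defaultGoto, Map<String, Object> variables) {
        for (Case c : cases) {
            if (c.condition().test(variables)) return c.gotoId();
        }
        if (defaultGoto != null) return defaultGoto;
        throw new IllegalStateException("no case fulfilled and no default: incident raised");
    }

    public static void main(String[] args) {
        List<Case> cases = List.of(
            new Case(v -> ((Number) v.get("totalPrice")).doubleValue() > 100,
                     "ship-parcel-with-insurance"));
        System.out.println(next(cases, "ship-parcel",
            Map.<String, Object>of("totalPrice", 150))); // ship-parcel-with-insurance
        System.out.println(next(cases, "ship-parcel",
            Map.<String, Object>of("totalPrice", 30)));  // ship-parcel
    }
}
```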


Data Flow

Zeebe carries custom data from task to task in the form of variables. Variables are key-value pairs and part of the workflow instance.

By default, all job variables are merged into the workflow instance. This behavior can be customized by defining an output mapping at the task. Input mappings can be used to transform the variables into a format that is accepted by the job worker.

name: order-process

tasks:
    - id: collect-money
      type: payment-service
      inputs:
          - source: totalPrice
            target: price
      outputs:
          - source: success
            target: paymentSuccess

    - id: fetch-items
      type: inventory-service

    - id: ship-parcel
      type: shipment-service

Every mapping element has a source and a target element, each of which must be a variable expression.


Reference

This section gives in-depth explanations of Zeebe usage concepts.

Workflow Lifecycles

In Zeebe, workflow execution is represented internally by events of type WorkflowInstance. The events are written to the log stream and can be observed by an exporter.

Each event is one step in a workflow instance. All events of one workflow instance have the same workflowInstanceKey.

Events which belong to the same element instance (e.g. a task) have the same key. Element instances have different lifecycles depending on the type of element.

(Sub-)Process/Activity/Gateway Lifecycle

activity lifecycle

Event Lifecycle

event lifecycle

Sequence Flow Lifecycle

sequence flow lifecycle

Example

order process

Intent                Element Id          Element Type
ELEMENT_ACTIVATING    order-process       process
ELEMENT_ACTIVATED     order-process       process
ELEMENT_ACTIVATING    order-placed        start event
ELEMENT_ACTIVATED     order-placed        start event
ELEMENT_COMPLETING    order-placed        start event
ELEMENT_COMPLETED     order-placed        start event
SEQUENCE_FLOW_TAKEN   to-collect-money    sequence flow
ELEMENT_ACTIVATING    collect-money       task
ELEMENT_ACTIVATED     collect-money       task
ELEMENT_COMPLETING    collect-money       task
ELEMENT_COMPLETED     collect-money       task
SEQUENCE_FLOW_TAKEN   to-fetch-items      sequence flow
...                   ...                 ...
SEQUENCE_FLOW_TAKEN   to-order-delivered  sequence flow
EVENT_ACTIVATING      order-delivered     end event
EVENT_ACTIVATED       order-delivered     end event
ELEMENT_COMPLETING    order-delivered     end event
ELEMENT_COMPLETED     order-delivered     end event
ELEMENT_COMPLETING    order-process       process
ELEMENT_COMPLETED     order-process       process

Variables

Variables are part of a workflow instance and represent the data of the instance. A variable has a name and a JSON value. The visibility of a variable is defined by its variable scope.

Variable Values

The value of a variable is stored as a JSON value. It must have one of the following types:

  • String
  • Number
  • Boolean
  • Array
  • Document/Object
  • Null

Access Variables

Variables can be accessed within the workflow instance, for example, on input/output mappings or conditions. In the expression, the variable is accessed by its name. If the variable is a document then the nested properties can be accessed via dot notation.

Examples:

Variables                                  Expression    Value
totalPrice: 25.0                           totalPrice    25.0
order: {"id": "order-123",                 order         {"id": "order-123",
  "items": ["item-1", "item-2"]}                           "items": ["item-1", "item-2"]}
order: {"id": "order-123"}                 order.id      "order-123"
order: {"items": ["item-1", "item-2"]}     order.items   ["item-1", "item-2"]

Variable Scopes

Variable scopes define the visibility of variables. The root scope is the workflow instance itself. Variables in this scope are visible everywhere in the workflow.

When the workflow instance enters a sub process or an activity, a new scope is created. Activities in this scope can see all variables of this scope and of higher scopes (i.e. parent scopes), but activities outside of this scope cannot see the variables defined in it.

If a variable has the same name as a variable in a higher scope, it covers that variable. Activities in this scope see only the value of the local variable, not the one from the higher scope.

The scope of a variable is defined when the variable is created. By default, variables are created in the root scope.

Example:

variable-scopes

This workflow instance has the following variables:

  • a and b are defined on the root scope and can be seen by Task A, Task B, and Task C.
  • c is defined in the sub process scope and can be seen by Task A and Task B.
  • b is defined again on the activity scope of Task A and can be seen only by Task A. It covers the variable b from the root scope.
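The scope rules above can be illustrated with a minimal sketch (a hypothetical VariableScope class, not Zeebe's internal implementation): resolving a variable walks from the current scope up through its parents, so the nearest definition wins.

```java
import java.util.*;

public class VariableScope {
    final VariableScope parent;                      // null for the root scope
    final Map<String, Object> variables = new HashMap<>();

    VariableScope(VariableScope parent) { this.parent = parent; }

    // Resolve a variable: the nearest scope wins, so a local variable
    // covers a variable with the same name from a higher scope.
    Object resolve(String name) {
        for (VariableScope scope = this; scope != null; scope = scope.parent) {
            if (scope.variables.containsKey(name)) return scope.variables.get(name);
        }
        return null; // unset variables resolve to null
    }

    public static void main(String[] args) {
        VariableScope root = new VariableScope(null);
        root.variables.put("a", 1);
        root.variables.put("b", 2);
        VariableScope subProcess = new VariableScope(root);
        subProcess.variables.put("c", 3);
        VariableScope taskA = new VariableScope(subProcess);
        taskA.variables.put("b", 99); // covers b from the root scope

        System.out.println(taskA.resolve("b")); // 99
        System.out.println(taskA.resolve("a")); // 1
        System.out.println(root.resolve("c"));  // null: not visible outside the sub process
    }
}
```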

Variable Propagation

When variables are merged into a workflow instance (e.g. on job completion, on message correlation) then each variable is propagated from the scope of the activity to its higher scopes.

The propagation ends when a scope contains a variable with the same name. In this case, the variable value is updated.

If no scope contains this variable then it is created as a new variable in the root scope.

Example:

variable-propagation

The job of Task B is completed with the variables b, c, and d. The variables b and c are already defined in higher scopes and are updated with the new values. Variable d doesn't exist before and is created in the root scope.
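A sketch of this propagation rule (illustrative names, assuming scopes are ordered from root to activity): each variable is updated in the nearest enclosing scope that already contains it; otherwise it is created in the root scope.

```java
import java.util.*;

public class VariablePropagation {
    // Merge one variable upward from the activity's scope: the nearest scope
    // that already holds the name is updated; otherwise the root scope creates it.
    static void propagate(List<Map<String, Object>> scopes, // index 0 = root, last = activity
                          String name, Object value) {
        for (int i = scopes.size() - 1; i >= 0; i--) {
            if (scopes.get(i).containsKey(name)) {
                scopes.get(i).put(name, value); // update the existing variable
                return;
            }
        }
        scopes.get(0).put(name, value); // no scope has it: create it in the root scope
    }

    public static void main(String[] args) {
        Map<String, Object> root = new HashMap<>(Map.of("b", 1));
        Map<String, Object> subProcess = new HashMap<>(Map.of("c", 2));
        List<Map<String, Object>> scopes = List.of(root, subProcess);

        // the job of Task B completes with the variables b, c and d
        propagate(scopes, "b", 10); // b exists in the root scope: updated there
        propagate(scopes, "c", 20); // c exists in the sub process scope: updated there
        propagate(scopes, "d", 30); // d exists nowhere: created in the root scope

        System.out.println(root.get("b") + " " + subProcess.get("c") + " " + root.get("d")); // 10 20 30
    }
}
```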

Local Variables

In some cases, variables should be set in a given scope, even if they don't already exist in that scope.

In order to deactivate variable propagation, the variables are set as local variables. That means the variables are created or updated in the given scope, whether or not they existed there before.

Input/Output Variable Mappings

Input/output variable mappings can be used to create new variables or customize how variables are merged into the workflow instance.

Variable mappings are defined in the workflow as extension elements under ioMapping. Every variable mapping has a source and a target expression. The source expression defines where the value is copied from. The target expression defines where the value is copied to. The expressions reference a variable by its name or a nested property of a variable.

If a variable or a nested property of a target expression doesn't exist then it is created. But if a variable or a nested property of a source expression doesn't exist then an incident is created.

Example:

variable-mappings

XML representation:

<serviceTask id="collectMoney" name="Collect Money">
    <extensionElements>
      <zeebe:ioMapping>
        <zeebe:input source="customer.name" target="sender"/>
        <zeebe:input source="customer.iban" target="iban"/>
        <zeebe:input source="totalPrice" target="price"/>
        <zeebe:input source="reference" target="orderId"/>
        <zeebe:output source="status" target="paymentStatus"/>
       </zeebe:ioMapping>
    </extensionElements>
</serviceTask>

Input Mappings

Input mappings can be used to create new variables. They can be defined on service tasks and sub processes.

When an input mapping is applied then it creates a new variable in the scope where the mapping is defined.

Examples:

Workflow Instance Variables              Input Mappings               New Variables
orderId: "order-123"                     source: orderId              reference: "order-123"
                                         target: reference
customer: {"name": "John"}               source: customer.name        sender: "John"
                                         target: sender
customer: "John"                         source: customer             sender: {"name": "John",
iban: "DE456"                            target: sender.name            "iban": "DE456"}
                                         source: iban
                                         target: sender.iban

Output Mappings

Output mappings can be used to customize how job/message variables are merged into the workflow instance. They can be defined on service tasks, receive tasks, message catch events and sub processes.

If one or more output mappings are defined then the job/message variables are set as local variables in the scope where the mapping is defined. Then, the output mappings are applied to the variables and create new variables in this scope. The new variables are merged into the parent scope. If there is no mapping for a job/message variable then the variable is not merged.

If no output mappings are defined then all job/message variables are merged into the workflow instance.

In case of a sub process, the behavior is different. There are no job/message variables to be merged. But output mappings can be used to propagate local variables of the sub process to higher scopes. By default, all local variables are removed when the scope is left.

Examples:

Job/Message Variables                    Output Mappings                Workflow Instance Variables
status: "Ok"                             source: status                 paymentStatus: "Ok"
                                         target: paymentStatus
result: {"status": "Ok",                 source: result.status          paymentStatus: "Ok"
  "transactionId": "t-789"}              target: paymentStatus          transactionId: "t-789"
                                         source: result.transactionId
                                         target: transactionId
status: "Ok"                             source: transactionId          order: {"transactionId": "t-789"}
transactionId: "t-789"                   target: order.transactionId

Conditions

Conditions can be used for conditional flows to determine the following task.

A condition is a Boolean expression with a JavaScript-like syntax. It allows comparing variables of a workflow instance with other variables or with literals (e.g. numbers, strings).

Variables of a workflow instance are accessed by name. If a variable has a document value, its nested properties can be accessed via dot notation. See the Variables section for details.

Examples:

totalPrice > 100

owner == "Paul"

order.count >= 5 && order.count < 15

Literals

Literal      Examples
Variable     totalPrice, order.id
Number       25, 4.5, -3, -5.5
String       "Paul", 'Jonny'
Boolean      true, false
Null-Value   null

A Null-Value can be used to check if a variable or nested property is set (e.g., owner == null). If a variable or nested property doesn't exist, then it is resolved to null and can be compared as such.

Comparison Operators

Operator   Description                Example
==         equal to                   owner == "Paul"
!=         not equal to               owner != "Paul"
<          less than                  totalPrice < 25
<=         less than or equal to      totalPrice <= 25
>          greater than               totalPrice > 25
>=         greater than or equal to   totalPrice >= 25

The operators <, <=, > and >= can only be used for numbers.

If the operands of a comparison have different types, the evaluation fails. Comparing null or a missing property with a number counts as comparing different types.
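The type rule can be sketched like this (CompareSketch is an illustrative stand-in; Zeebe's real evaluator reports a failed evaluation, which raises an incident, rather than throwing a Java exception):

```java
public class CompareSketch {
    // Mimics the comparison rule: ordering operators apply only to numbers,
    // and comparing different types (including null) fails the evaluation.
    static boolean lessThan(Object left, Object right) {
        if (left instanceof Number l && right instanceof Number r) {
            return l.doubleValue() < r.doubleValue();
        }
        throw new IllegalStateException("cannot compare values of different types");
    }

    public static void main(String[] args) {
        System.out.println(lessThan(20, 25)); // true
        try {
            lessThan(null, 25); // a missing variable resolves to null: different types
        } catch (IllegalStateException e) {
            System.out.println("evaluation failed: " + e.getMessage());
        }
    }
}
```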

Logical Operators

Operator   Description   Example
&&         and           orderCount >= 5 && orderCount < 15
||         or            orderCount > 15 || totalPrice > 50

It's also possible to use parentheses to change operator precedence (e.g., (owner == "Paul" || owner == "Jonny") && totalPrice > 25).

Message Correlation

Message correlation describes how a message is correlated to a workflow instance (i.e. to a message catch event).

In Zeebe, a message is not sent to a workflow instance directly. Instead, it is published with correlation information. When a workflow instance is waiting at a message catch event which specifies the same correlation information then the message is correlated to the workflow instance.

The correlation information contains the name of the message and the correlation key.

Message Correlation

For example:

An instance of the order workflow is created and waits at the message catch event until the money is collected. The message catch event specifies the message name Money collected and the correlation key orderId. The key is resolved with the workflow instance variable orderId to order-123.

A message is published by the payment service using one of the Zeebe clients. It has the name Money collected and the correlation key order-123. Since the correlation information matches, the message is correlated to the workflow instance: its variables are merged into the workflow instance and the message catch event is left.

Note that a message can be correlated to multiple workflow instances if they share the same correlation information. But it can be correlated only once per workflow instance.

Message Buffering

In Zeebe, messages can be buffered for a given time. Buffering is useful when it is not guaranteed that the message catch event is entered before the message is published.

A message has a time-to-live (TTL) which specifies how long it is buffered. Within this time, the message can be correlated to a workflow instance.

When a workflow instance enters a message catch event, it polls the buffer for a matching message. If one exists, it is correlated to the workflow instance. If multiple messages match the correlation information, the first published message is correlated. In this regard, the buffer behaves like a queue.

Buffering is disabled when a message's TTL is set to zero. If such a message can't be correlated to a workflow instance immediately, it is discarded.

Message Uniqueness

A message can contain a unique id to ensure that it is published only once (i.e. idempotent). The id can be any string, for example, a request id, a tracking number or the offset/position in a message queue.

When a message is published, the broker checks whether a message with the same name, correlation key and id exists in the buffer. If so, the message is rejected and not correlated.

Note that the uniqueness check only looks into the buffer. If a message is discarded from the buffer then a message with the same name, correlation key and id can be published afterward.
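The uniqueness check can be sketched as a buffer keyed by name, correlation key and id (MessageBuffer is illustrative and ignores TTL-based expiry, which in the real broker is what allows republishing after a message is discarded):

```java
import java.util.*;

public class MessageBuffer {
    // Buffered messages keyed by (name, correlationKey, messageId).
    private final Set<String> buffered = new HashSet<>();

    // Returns true if the message was accepted, false if rejected as a duplicate.
    boolean publish(String name, String correlationKey, String messageId) {
        if (messageId == null || messageId.isEmpty()) return true; // no uniqueness check
        return buffered.add(name + "|" + correlationKey + "|" + messageId);
    }

    public static void main(String[] args) {
        MessageBuffer buffer = new MessageBuffer();
        System.out.println(buffer.publish("Money collected", "order-123", "tx-1")); // true
        System.out.println(buffer.publish("Money collected", "order-123", "tx-1")); // false: duplicate
        System.out.println(buffer.publish("Money collected", "order-456", "tx-1")); // true: different key
    }
}
```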

Incidents

In Zeebe, an incident represents a problem in a workflow execution. That means a workflow instance is stuck at some point and needs user interaction to resolve the problem.

Incidents are created in different situations, for example, when

  • a job has failed and has no retries left
  • an input or output variable mapping can't be applied
  • a condition can't be evaluated

Note that incidents are not created when an unexpected exception happens at the broker (e.g. NullPointerException, OutOfMemoryError, etc.).

Resolving

In order to resolve an incident, the user must identify and resolve the problem first. Then, the user marks the incident as resolved and the broker tries to continue the workflow execution. If the problem still exists then a new incident is created.

Resolving a Job-related Incident

If a job fails and has no retries left, an incident is created. A job can fail for different reasons, for example, because the variables are not in the expected format or because a service (e.g. a database) is not available.

If the failure is caused by the variables, the user needs to update the variables of the workflow instance first. Then, the user needs to increase the remaining retries of the job and mark the incident as resolved.

Using the Java client, this could look like:

// fix the variables that caused the job to fail
client.newSetVariablesCommand(incident.getElementInstanceKey())
    .variables(NEW_PAYLOAD)
    .send()
    .join();

// give the job new retries so that it can be activated again
client.newUpdateRetriesCommand(incident.getJobKey())
    .retries(3)
    .send()
    .join();

// mark the incident as resolved
client.newResolveIncidentCommand(incident.getKey())
    .send()
    .join();

When the incident is resolved then the job can be activated by a worker again.

Resolving a Workflow Instance-related Incident

If an incident is created during workflow execution and it is not related to a job, it is usually related to the variables of the workflow instance, for example, when an input or output variable mapping can't be applied.

To resolve the incident, the user needs to update the variables first and then mark the incident as resolved.

Using the Java client, this could look like:

// update the variables that caused the incident
client.newSetVariablesCommand(incident.getElementInstanceKey())
    .variables(NEW_VARIABLES)
    .send()
    .join();

// mark the incident as resolved
client.newResolveIncidentCommand(incident.getKey())
    .send()
    .join();

When the incident is resolved then the workflow instance continues.

gRPC API Reference

Error handling

The gRPC API for Zeebe is exposed through the gateway, which acts as a proxy for the broker. Generally, this means that the client executes a remote call on the gateway, which is then translated to the binary protocol that the gateway uses to communicate with the broker.

As a result of this proxying, any errors which occur between the gateway and the broker for which the client is not at fault (e.g. the gateway cannot deserialize the broker response, the broker is unavailable, etc.) are reported to the client as internal errors using the GRPC_STATUS_INTERNAL code. One exception to this is if the gateway itself is in an invalid state (e.g. out of memory), at which point it will return GRPC_STATUS_UNAVAILABLE.

This behavior applies to every RPC; in these cases, retrying may succeed, but it is recommended to do so with an appropriate retry policy (e.g. exponential backoff with jitter, wrapped in a circuit breaker).
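A minimal sketch of such a retry policy (an illustrative helper, not part of the Zeebe client): the delay grows exponentially per attempt, is capped, and is randomized so that many clients do not retry in lockstep.

```java
import java.util.Random;

public class RetryBackoff {
    // Delay before the given retry attempt (0-based): exponential growth,
    // capped at maxMillis, with random jitter to spread out retries.
    static long delayMillis(int attempt, long baseMillis, long maxMillis, Random random) {
        long exponential = baseMillis * (1L << Math.min(attempt, 20)); // cap shift to avoid overflow
        long capped = Math.min(exponential, maxMillis);
        // "equal jitter": half fixed, half random, so the delay is in [capped/2, capped)
        return capped / 2 + (long) (random.nextDouble() * (capped / 2));
    }

    public static void main(String[] args) {
        Random random = new Random();
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.println("attempt " + attempt + ": wait "
                + delayMillis(attempt, 100, 10_000, random) + " ms before retrying");
        }
    }
}
```

A circuit breaker would sit on top of this, refusing further calls once too many consecutive attempts have failed.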

In the documentation below, the documented errors are business logic errors, meaning errors which are a result of request processing logic, and not serialization, network, or other more general errors.

As the gRPC server/client is based on generated code, keep in mind that any call made to the server can return errors as described by the spec here.

Gateway service

The Zeebe gRPC API is exposed through a single gateway service.

ActivateJobs RPC

Iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to the client as they are activated.

Input: ActivateJobsRequest

message ActivateJobsRequest {
  // the job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition
  // type="payment-service" />)
  string type = 1;
  // the name of the worker activating the jobs, mostly used for logging purposes
  string worker = 2;
  // a job returned after this call will not be activated by another call until the
  // timeout has been reached
  int64 timeout = 3;
  // the maximum jobs to activate by this request
  int32 maxJobsToActivate = 4;
  // a list of variables to fetch as the job variables; if empty, all visible variables at
  // the time of activation for the scope of the job will be returned
  repeated string fetchVariable = 5;
}

Output: ActivateJobsResponse

message ActivateJobsResponse {
  // list of activated jobs
  repeated ActivatedJob jobs = 1;
}

message ActivatedJob {
  // the key, a unique identifier for the job
  int64 key = 1;
  // the type of the job (should match what was requested)
  string type = 2;
  // the job's workflow instance key
  int64 workflowInstanceKey = 3;
  // the bpmn process ID of the job workflow definition
  string bpmnProcessId = 4;
  // the version of the job workflow definition
  int32 workflowDefinitionVersion = 5;
  // the key of the job workflow definition
  int64 workflowKey = 6;
  // the associated task element ID
  string elementId = 7;
  // the unique key identifying the associated task, unique within the scope of the
  // workflow instance
  int64 elementInstanceKey = 8;
  // a set of custom headers defined during modelling; returned as a serialized
  // JSON document
  string customHeaders = 9;
  // the name of the worker which activated this job
  string worker = 10;
  // the amount of retries left to this job (should always be positive)
  int32 retries = 11;
  // when the job can be activated again, sent as a UNIX epoch timestamp
  int64 deadline = 12;
  // JSON document, computed at activation time, consisting of all visible variables to
  // the task scope
  string variables = 13;
}

Errors

GRPC_STATUS_INVALID_ARGUMENT

Returned if:

  • type is blank (empty string, null)
  • worker is blank (empty string, null)
  • timeout is less than 1
  • maxJobsToActivate is less than 1

CancelWorkflowInstance RPC

Cancels a running workflow instance

Input: CancelWorkflowInstanceRequest

message CancelWorkflowInstanceRequest {
  // the workflow instance key (as, for example, obtained from
  // CreateWorkflowInstanceResponse)
  int64 workflowInstanceKey = 1;
}

Output: CancelWorkflowInstanceResponse

message CancelWorkflowInstanceResponse {
}

Errors

GRPC_STATUS_NOT_FOUND

Returned if:

  • no workflow instance exists with the given key. Note that since workflow instances are removed once they are finished, it could mean the instance did exist at some point.

CompleteJob RPC

Completes a job with the given variables, which allows completing the associated service task.

Input: CompleteJobRequest

message CompleteJobRequest {
  // the unique job identifier, as obtained from ActivateJobsResponse
  int64 jobKey = 1;
  // a JSON document representing the variables in the current task scope
  string variables = 2;
}

Output: CompleteJobResponse

message CompleteJobResponse {
}

Errors

GRPC_STATUS_NOT_FOUND

Returned if:

  • no job exists with the given job key. Note that since jobs are removed once completed, it could be that this job did exist at some point.
GRPC_STATUS_FAILED_PRECONDITION

Returned if:

  • the job was marked as failed. In that case, the related incident must be resolved before the job can be activated again and completed.

CreateWorkflowInstance RPC

Creates and starts an instance of the specified workflow. The workflow definition to use to create the instance can be specified either using its unique key (as returned by DeployWorkflow), or using the BPMN process ID and a version. Pass -1 as the version to use the latest deployed version.

Note that only workflows with none start events can be started through this command.

Input: CreateWorkflowInstanceRequest

message CreateWorkflowInstanceRequest {
  // the unique key identifying the workflow definition (e.g. returned from a workflow
  // in the DeployWorkflowResponse message)
  int64 workflowKey = 1;
  // the BPMN process ID of the workflow definition
  string bpmnProcessId = 2;
  // the version of the process; set to -1 to use the latest version
  int32 version = 3;
  // JSON document that will instantiate the variables for the root variable scope of the
  // workflow instance; it must be a JSON object, as variables will be mapped in a
  // key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and
  // "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a
  // valid argument, as the root of the JSON document is an array and not an object.
  string variables = 4;
}

Output: CreateWorkflowInstanceResponse

message CreateWorkflowInstanceResponse {
  // the key of the workflow definition which was used to create the workflow instance
  int64 workflowKey = 1;
  // the BPMN process ID of the workflow definition which was used to create the workflow
  // instance
  string bpmnProcessId = 2;
  // the version of the workflow definition which was used to create the workflow instance
  int32 version = 3;
  // the unique identifier of the created workflow instance; to be used wherever a request
  // needs a workflow instance key (e.g. CancelWorkflowInstanceRequest)
  int64 workflowInstanceKey = 4;
}

Errors

GRPC_STATUS_NOT_FOUND

Returned if:

  • no workflow with the given key exists (if workflowKey was given)
  • no workflow with the given process ID exists (if bpmnProcessId was given but version was -1)
  • no workflow with the given process ID and version exists (if both bpmnProcessId and version were given)
GRPC_STATUS_FAILED_PRECONDITION

Returned if:

  • the workflow definition does not contain a none start event; only workflows with none start event can be started manually.
GRPC_STATUS_INVALID_ARGUMENT

Returned if:

  • the given variables argument is not a valid JSON document; it is expected to be a valid JSON document where the root node is an object.

DeployWorkflow RPC

Deploys one or more workflows to Zeebe. Note that this is an atomic call, i.e. either all workflows are deployed, or none of them are.

Input: DeployWorkflowRequest

message DeployWorkflowRequest {
  // List of workflow resources to deploy
  repeated WorkflowRequestObject workflows = 1;
}

message WorkflowRequestObject {
  enum ResourceType {
    // FILE type means the gateway will try to detect the resource type
    // using the file extension of the name field
    FILE = 0;
    BPMN = 1; // extension 'bpmn'
    YAML = 2; // extension 'yaml'
  }

  // the resource basename, e.g. myProcess.bpmn
  string name = 1;
  // the resource type; if set to BPMN or YAML then the file extension
  // is ignored
  ResourceType type = 2;
  // the process definition as a UTF8-encoded string
  bytes definition = 3;
}

Output: DeployWorkflowResponse

message DeployWorkflowResponse {
  // the unique key identifying the deployment
  int64 key = 1;
  // a list of deployed workflows
  repeated WorkflowMetadata workflows = 2;
}

message WorkflowMetadata {
  // the bpmn process ID, as parsed during deployment; together with the version forms a
  // unique identifier for a specific workflow definition
  string bpmnProcessId = 1;
  // the assigned process version
  int32 version = 2;
  // the assigned key, which acts as a unique identifier for this workflow
  int64 workflowKey = 3;
  // the resource name (see: WorkflowRequestObject.name) from which this workflow was
  // parsed
  string resourceName = 4;
}

Errors

GRPC_STATUS_INVALID_ARGUMENT

Returned if:

  • no resources given
  • at least one resource is invalid. A resource is considered invalid if:
    • it is not a BPMN or YAML file (currently detected through the file extension)
    • the resource data is not deserializable (e.g. detected as BPMN, but it's broken XML)
    • the workflow is invalid (e.g. an event-based gateway has an outgoing sequence flow to a task)

FailJob RPC

Marks the job as failed; if the retries argument is positive, then the job will be immediately activatable again, and a worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the job will not be activatable until the incident is resolved.

Input: FailJobRequest

message FailJobRequest {
  // the unique job identifier, as obtained when activating the job
  int64 jobKey = 1;
  // the amount of retries the job should have left
  int32 retries = 2;
  // an optional message describing why the job failed
  // this is particularly useful if a job runs out of retries and an incident is raised,
  // as this message can help explain why an incident was raised
  string errorMessage = 3;
}

Output: FailJobResponse

message FailJobResponse {
}

Errors

GRPC_STATUS_NOT_FOUND

Returned if:

  • no job was found with the given key
GRPC_STATUS_FAILED_PRECONDITION

Returned if:

  • the job was not activated
  • the job is already in a failed state, i.e. ran out of retries

PublishMessage RPC

Publishes a single message. Messages are published to specific partitions computed from their correlation keys.

Input: Request

message PublishMessageRequest {
  // the name of the message
  string name = 1;
  // the correlation key of the message
  string correlationKey = 2;
  // how long the message should be buffered on the broker, in milliseconds
  int64 timeToLive = 3;
  // the unique ID of the message; can be omitted. only useful to ensure only one message
  // with the given ID will ever be published (during its lifetime)
  string messageId = 4;
  // the message variables as a JSON document; to be valid, the root of the document must be an
  // object, e.g. { "a": "foo" }. [ "foo" ] would not be valid.
  string variables = 5;
}

Output: Response

message PublishMessageResponse {
}

Errors

GRPC_STATUS_ALREADY_EXISTS

Returned if:

  • a message with the same ID was previously published (and is still alive)

ResolveIncident RPC

Resolves a given incident. This simply marks the incident as resolved; most likely a call to UpdateJobRetries or SetVariables will be necessary to actually resolve the problem, followed by this call.

Input: Request

message ResolveIncidentRequest {
  // the unique ID of the incident to resolve
  int64 incidentKey = 1;
}

Output: Response

message ResolveIncidentResponse {
}

Errors

GRPC_STATUS_NOT_FOUND

Returned if:

  • no incident with the given key exists

SetVariables RPC

Updates all the variables of a particular scope (e.g. workflow instance, flow element instance) from the given JSON document.

Input: Request

message SetVariablesRequest {
  // the unique identifier of a particular element; can be the workflow instance key (as
  // obtained during instance creation), or a given element, such as a service task (see
  // elementInstanceKey on the job message)
  int64 elementInstanceKey = 1;
  // a JSON serialized document describing variables as key value pairs; the root of the document
  // must be an object
  string variables = 2;
  // if true, the variables will be merged strictly into the local scope (as indicated by
  // elementInstanceKey); this means the variables are not propagated to upper scopes.
  // for example, let's say we have two scopes, '1' and '2', with each having effective variables as:
  // 1 => `{ "foo" : 2 }`, and 2 => `{ "bar" : 1 }`. if we send an update request with
  // elementInstanceKey = 2, variables `{ "foo" : 5 }`, and local is true, then scope 1 will
  // be unchanged, and scope 2 will now be `{ "bar" : 1, "foo" : 5 }`. if local was false, however,
  // then scope 1 would be `{ "foo" : 5 }`, and scope 2 would remain `{ "bar" : 1 }`.
  bool local = 3;
}

Output: Response

message SetVariablesResponse {
}

Errors

GRPC_STATUS_NOT_FOUND

Returned if:

  • no element with the given elementInstanceKey exists
GRPC_STATUS_INVALID_ARGUMENT

Returned if:

  • the given variables document is not a valid JSON document; it is expected to be a valid JSON document where the root node is an object.

Topology RPC

Obtains the current topology of the cluster the gateway is part of.

Input: TopologyRequest

message TopologyRequest {
}

Output: TopologyResponse

message TopologyResponse {
  // list of brokers part of this cluster
  repeated BrokerInfo brokers = 1;
  // how many nodes are in the cluster
  int32 clusterSize = 2;
  // how many partitions are spread across the cluster
  int32 partitionsCount = 3;
  // configured replication factor for this cluster
  int32 replicationFactor = 4;
}

message BrokerInfo {
  // unique (within a cluster) node ID for the broker
  int32 nodeId = 1;
  // hostname of the broker
  string host = 2;
  // port for the broker
  int32 port = 3;
  // list of partitions managed or replicated on this broker
  repeated Partition partitions = 4;
}

message Partition {
  // Describes the Raft role of the broker for a given partition
  enum PartitionBrokerRole {
    LEADER = 0;
    FOLLOWER = 1;
  }

  // the unique ID of this partition
  int32 partitionId = 1;
  // the role of the broker for this partition
  PartitionBrokerRole role = 2;
}

Errors

No specific errors
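With the Java client introduced later in this document, the topology can be requested via client.newTopologyRequest().send().join(). The sketch below mirrors the response messages with simplified, made-up plain-Java types (not the client's own classes) to show how a consumer might summarize the result:

```java
import java.util.ArrayList;
import java.util.List;

public class TopologySummary {
    // simplified stand-ins for the proto messages above (illustrative only)
    enum Role { LEADER, FOLLOWER }

    static class Partition {
        final int partitionId;
        final Role role;

        Partition(final int partitionId, final Role role) {
            this.partitionId = partitionId;
            this.role = role;
        }
    }

    static class BrokerInfo {
        final int nodeId;
        final List<Partition> partitions = new ArrayList<>();

        BrokerInfo(final int nodeId) {
            this.nodeId = nodeId;
        }
    }

    // counts how many partitions the given broker currently leads
    static long leaderCount(final BrokerInfo broker) {
        return broker.partitions.stream().filter(p -> p.role == Role.LEADER).count();
    }

    public static void main(final String[] args) {
        final BrokerInfo broker = new BrokerInfo(0);
        broker.partitions.add(new Partition(1, Role.LEADER));
        broker.partitions.add(new Partition(2, Role.FOLLOWER));
        System.out.println("Broker " + broker.nodeId + " leads " + leaderCount(broker) + " partition(s)");
        // prints: Broker 0 leads 1 partition(s)
    }
}
```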

UpdateJobRetries RPC

Updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries: once the underlying problem is solved, updating the retries makes the job available again.

Input: UpdateJobRetriesRequest

message UpdateJobRetriesRequest {
  // the unique job identifier, as obtained through ActivateJobs
  int64 jobKey = 1;
  // the new amount of retries for the job; must be positive
  int32 retries = 2;
}

Output: UpdateJobRetriesResponse

message UpdateJobRetriesResponse {
}

Errors

GRPC_STATUS_NOT_FOUND

Returned if:

  • no job exists with the given key
GRPC_STATUS_INVALID_ARGUMENT

Returned if:

  • retries is not greater than 0

Exporters

Regardless of how an exporter is loaded (whether through an external JAR or not), all exporters interact with the broker in the same way, as defined by the Exporter interface.
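As an illustration of that contract, here is an approximate, self-contained sketch of the interface together with a trivial in-memory implementation. The real Exporter interface (with its Context, Controller, and Record types) ships with Zeebe; the signatures and names below are simplified stand-ins, not the actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Approximate shape of the contract (simplified; the real interface and its
// Context/Controller/Record types ship with Zeebe)
interface Exporter {
    default void configure(Object configuration) throws Exception {} // validate arguments here
    default void open(Object controller) {}                          // acquire resources
    default void close() {}                                          // release resources
    void export(Object record);                                      // invoked once per record
}

// A trivial exporter that collects records in memory
public class InMemoryExporter implements Exporter {
    final List<Object> exported = new ArrayList<>();

    @Override
    public void export(final Object record) {
        exported.add(record);
    }

    public static void main(final String[] args) throws Exception {
        final InMemoryExporter exporter = new InMemoryExporter();
        exporter.configure(null); // called by the broker during the loading phase
        exporter.open(null);
        exporter.export("record-1");
        exporter.export("record-2");
        exporter.close();
        System.out.println(exporter.exported.size()); // prints: 2
    }
}
```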

Loading

Once configured, exporters are loaded as part of the broker startup phase, before any processing is done.

During the loading phase, the configuration for each exporter is validated, such that the broker will not start if:

  • An exporter ID is not unique
  • An exporter points to a non-existent/non-accessible JAR
  • An exporter points to a non-existent/non-instantiable class
  • An exporter instance throws an exception in its Exporter#configure method.

The last point allows individual exporters to perform lightweight validation of their configuration (e.g. fail on missing arguments).

One caveat of this validation is that an instance of the exporter is created and immediately thrown away; therefore, exporters should not perform any computationally heavy work during instantiation/configuration.

Note: Zeebe will create an isolated class loader for every JAR referenced by exporter configurations - that is, only once per JAR; if the same JAR is reused to define different exporters, then these will share the same class loader.

This has some nice properties, primarily that different exporters can depend on the same third-party libraries without having to worry about versions or class name collisions.

Additionally, exporters use the system class loader for system classes, or classes packaged as part of the Zeebe JAR.

Exporter-specific configuration is handled through the exporter's [exporters.args] nested map. This provides a simple Map<String, Object> which is passed directly in the form of a Configuration object when the broker calls the Exporter#configure(Configuration) method.

Configuration occurs at two different phases: during the broker startup phase, and once every time a leader is elected for a partition.

Processing

At any given point, there is exactly one leader node for a given partition. Whenever a node becomes the leader for a partition, one of the things it will do is run an instance of an exporter stream processor.

This stream processor will create exactly one instance of each configured exporter, and forward every record written on the stream to each of these in turn.

Note: this implies that there will be exactly one instance of every exporter for every partition: if you have 4 partitions, and at least 4 threads for processing, then there are potentially 4 instances of your exporter exporting simultaneously.

Note that Zeebe only guarantees at-least-once semantics: a record will be seen at least once by an exporter, and possibly more than once. Cases where this may happen include:

  • During reprocessing after raft failover (i.e. new leader election)
  • On error if the position has not been updated yet

To reduce the number of duplicate records an exporter will process, the stream processor keeps track of the position of the last successfully exported record for every single exporter; the position is sufficient since a stream is an ordered sequence of records whose position is monotonically increasing. This position is set by the exporter itself once it can guarantee a record has been successfully exported.

Note: although Zeebe tries to reduce the number of duplicate records an exporter has to handle, duplicates are still likely to occur; therefore, it is necessary that export operations be idempotent.

Idempotency can be implemented in the exporter itself; however, if the exporter targets an external system, it is recommended to perform deduplication there to reduce the load on Zeebe. Refer to the exporter-specific documentation for how this is meant to be achieved.
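One simple exporter-side strategy is sketched below with plain Java (a made-up class, not Zeebe's API): remember the last successfully exported position and skip anything at or below it, relying on the monotonically increasing record positions described above.

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of position-based deduplication: skip any record whose position
// is at or below the last successfully exported one
public class DedupByPosition {
    private long lastExportedPosition = -1;
    final List<Long> exported = new ArrayList<>();

    void onRecord(final long position) {
        if (position <= lastExportedPosition) {
            return; // duplicate replayed during reprocessing; already handled
        }
        exported.add(position);          // export to the external system here
        lastExportedPosition = position; // acknowledge only after success
    }

    public static void main(final String[] args) {
        final DedupByPosition exporter = new DedupByPosition();
        // positions 2 and 3 are replayed, e.g. after a raft failover
        for (final long position : new long[] {1, 2, 3, 2, 3, 4}) {
            exporter.onRecord(position);
        }
        System.out.println(exporter.exported); // prints: [1, 2, 3, 4]
    }
}
```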

Error handling

If an error occurs during the Exporter#open(Context) phase, the stream processor will fail and be restarted, potentially fixing the error; worst case scenario, this means no exporter is running at all until these errors stop.

If an error occurs during the Exporter#close phase, it will be logged, but will still allow other exporters to gracefully finish their work.

If an error occurs during processing, the same record will be retried indefinitely until no error is produced. In the worst case, a failing exporter could bring all exporters to a halt. Currently, exporter implementations are expected to implement their own retry/error-handling strategies, though this may change in the future.
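A minimal example of such an exporter-side retry strategy (a sketch with hypothetical names, not part of the Zeebe API): bound the number of attempts and back off between them, rethrowing once exhausted so the failure surfaces to the caller.

```java
public class RetryingExport {
    // retries the action up to maxAttempts times with doubling backoff,
    // rethrowing once exhausted so the failure surfaces to the caller
    static void exportWithRetry(final Runnable action, final int maxAttempts, final long initialBackoffMs) {
        long backoff = initialBackoffMs;
        for (int attempt = 1; ; attempt++) {
            try {
                action.run();
                return;
            } catch (final RuntimeException e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up after maxAttempts
                }
                try {
                    Thread.sleep(backoff);
                } catch (final InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
                backoff *= 2;
            }
        }
    }

    public static void main(final String[] args) {
        final int[] calls = {0};
        exportWithRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new RuntimeException("transient failure");
            }
        }, 5, 1);
        System.out.println("Succeeded after " + calls[0] + " attempts"); // prints: Succeeded after 3 attempts
    }
}
```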

Performance impact

Zeebe naturally incurs a performance impact for each loaded exporter. A slow exporter will slow down all other exporters for a given partition, and, in the worst case, could completely block a thread.

It's therefore recommended to keep exporters as simple as possible, and perform any data enrichment or transformation through the external system.

Zeebe Java Client

Setting up the Zeebe Java Client

Prerequisites

  • Java 8

Usage in a Maven project

To use the Java client library, declare the following Maven dependency in your project:

<dependency>
  <groupId>io.zeebe</groupId>
  <artifactId>zeebe-client-java</artifactId>
  <version>${zeebe.version}</version>
</dependency>

The version of the client should always match the broker's version.

Bootstrapping

In Java code, instantiate the client as follows:

ZeebeClient client = ZeebeClient.newClientBuilder()
  .brokerContactPoint("127.0.0.1:26500")
  .build();

See the class io.zeebe.client.ZeebeClientBuilder for a description of all available configuration properties.

Get Started with the Java client

In this tutorial, you will learn to use the Java client in a Java application to interact with Zeebe.

You will be guided through the following steps:

You can find the complete source code, including the BPMN diagrams, on GitHub.

You can watch a video walk-through of this guide on the Zeebe YouTube channel here.

Prerequisites

One of the following:

(Using Docker)

(Not using Docker)

Start the broker

Before you begin to set up your project, please start the broker.

If you are using Docker with zeebe-docker-compose, then change into the simple-monitor subdirectory, and run docker-compose up.

If you are not using Docker, run the start up script bin/broker or bin/broker.bat in the distribution.

By default, the broker binds to localhost:26500, which is used as contact point in this guide.

Set up a project

First, we need a Maven project. Create a new project using your IDE, or run the Maven command:

mvn archetype:generate \
    -DgroupId=io.zeebe \
    -DartifactId=zeebe-get-started-java-client \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DinteractiveMode=false

Add the Zeebe client library as dependency to the project's pom.xml:

<dependency>
  <groupId>io.zeebe</groupId>
  <artifactId>zeebe-client-java</artifactId>
  <version>${zeebe.version}</version>
</dependency>

Create a main class and add the following lines to bootstrap the Zeebe client:

package io.zeebe;

import io.zeebe.client.ZeebeClient;

public class App
{
    public static void main(String[] args)
    {
        final ZeebeClient client = ZeebeClient.newClientBuilder()
            // change the contact point if needed
            .brokerContactPoint("127.0.0.1:26500")
            .build();

        System.out.println("Connected.");

        // ...

        client.close();
        System.out.println("Closed.");
    }
}

Run the program:

  • If you use an IDE, you can just execute the main class from the IDE.
  • Otherwise, you must build an executable JAR file with Maven and execute it.

Interlude: Build an executable JAR file

Add the Maven Shade plugin to your pom.xml:

<!-- Maven Shade Plugin -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <!-- Run shade goal on package phase -->
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- add Main-Class to manifest file -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>io.zeebe.App</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

Now run mvn package, and it will generate a JAR file in the target subdirectory. You can run this with java -jar target/${JAR file}.

Output of executing program

You should see the output:

Connected.

Closed.

Model a workflow

Now, we need a first workflow which can then be deployed. Later, we will extend the workflow with more functionality.

Open the Zeebe Modeler and create a new BPMN diagram. Add a start event and an end event to the diagram and connect the events.

model-workflow-step-1

Set the id (the BPMN process id), and mark the diagram as executable.

Save the diagram as src/main/resources/order-process.bpmn under the project's folder.

Deploy a workflow

Next, we want to deploy the modeled workflow to the broker.

The broker stores the workflow under its BPMN process id and assigns a version.

Add the following deploy command to the main class:

package io.zeebe;

import io.zeebe.client.api.response.DeploymentEvent;

public class App
{
    public static void main(String[] args)
    {
        // after the client is connected

        final DeploymentEvent deployment = client.newDeployCommand()
            .addResourceFromClasspath("order-process.bpmn")
            .send()
            .join();

        final int version = deployment.getWorkflows().get(0).getVersion();
        System.out.println("Workflow deployed. Version: " + version);

        // ...
    }
}

Run the program and verify that the workflow is deployed successfully. You should see the output:

Workflow deployed. Version: 1

Create a workflow instance

Finally, we are ready to create a first instance of the deployed workflow. A workflow instance is created from a specific version of the workflow; the version can be set on creation.

Add the following create command to the main class:

package io.zeebe;

import io.zeebe.client.api.response.WorkflowInstanceEvent;

public class App
{
    public static void main(String[] args)
    {
        // after the workflow is deployed

        final WorkflowInstanceEvent wfInstance = client.newCreateInstanceCommand()
            .bpmnProcessId("order-process")
            .latestVersion()
            .send()
            .join();

        final long workflowInstanceKey = wfInstance.getWorkflowInstanceKey();

        System.out.println("Workflow instance created. Key: " + workflowInstanceKey);

        // ...
    }
}

Run the program and verify that the workflow instance is created. You should see the output:

Workflow instance created. Key: 2113425532

You did it! Want to see how the workflow instance is executed?

If you are running with Docker, just open http://localhost:8082 in your browser.

If you are running without Docker:

  • Start the Zeebe Monitor using java -jar zeebe-simple-monitor-app-*.jar.
  • Open a web browser and go to http://localhost:8080/.

In the Simple Monitor interface, you see the current state of the workflow instance. zeebe-monitor-step-1

Work on a job

Now we want to do some work within the workflow. First, add a few service tasks to the BPMN diagram and set the required attributes. Then extend the main class and create a job worker to process jobs which are created when the workflow instance reaches a service task.

Open the BPMN diagram in the Zeebe Modeler. Insert a few service tasks between the start and the end event.

model-workflow-step-2

You need to set the type of each task, which identifies the nature of the work to be performed. Set the type of the first task to 'payment-service'.

Set the type of the second task to 'fetcher-service'.

Set the type of the third task to 'shipping-service'.

Save the BPMN diagram and switch back to the main class.

Add the following lines to create a job worker for the first job type:

package io.zeebe;

import io.zeebe.client.api.worker.JobWorker;

public class App
{
    public static void main(String[] args)
    {
        // after the workflow instance is created

        final JobWorker jobWorker = client.newWorker()
            .jobType("payment-service")
            .handler((jobClient, job) ->
            {
                System.out.println("Collect money");

                // ...

                jobClient.newCompleteCommand(job.getKey())
                    .send()
                    .join();
            })
            .open();

        // waiting for the jobs

        // Don't close, we need to keep polling to get work
        // jobWorker.close();

        // ...
    }
}

Run the program and verify that the job is processed. You should see the output:

Collect money

In the Zeebe Monitor, you can see that the workflow instance moved from the first service task to the next one:

zeebe-monitor-step-2

Work with data

Usually, a workflow is more than just tasks; there is also a data flow. A worker gets data from the workflow instance to do its work and sends the result back to the workflow instance.

In Zeebe, data is stored as key-value pairs in the form of variables. Variables can be set when the workflow instance is created. Within the workflow, variables can be read and modified by workers.

In our example, we want to create a workflow instance with the following variables:

"orderId": 31243
"orderItems": [435, 182, 376]

The first task should read orderId as input and return totalPrice as result.

Modify the workflow instance create command and pass the data as variables. Also, modify the job worker to read the job variables and complete the job with a result.

package io.zeebe;

import io.zeebe.client.api.response.WorkflowInstanceEvent;
import io.zeebe.client.api.worker.JobWorker;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class App
{
    public static void main(String[] args)
    {
        // after the workflow is deployed

        final Map<String, Object> data = new HashMap<>();
        data.put("orderId", 31243);
        data.put("orderItems", Arrays.asList(435, 182, 376));

        final WorkflowInstanceEvent wfInstance = client.newCreateInstanceCommand()
            .bpmnProcessId("order-process")
            .latestVersion()
            .variables(data)
            .send()
            .join();

        // ...

        final JobWorker jobWorker = client.newWorker()
            .jobType("payment-service")
            .handler((jobClient, job) ->
            {
                final Map<String, Object> variables = job.getVariablesAsMap();

                System.out.println("Process order: " + variables.get("orderId"));
                double price = 46.50;
                System.out.println("Collect money: $" + price);

                // ...

                final Map<String, Object> result = new HashMap<>();
                result.put("totalPrice", price);

                jobClient.newCompleteCommand(job.getKey())
                    .variables(result)
                    .send()
                    .join();
            })
            .fetchVariables("orderId")
            .open();

        // ...
    }
}

Run the program and verify that the variable is read. You should see the output:

Process order: 31243
Collect money: $46.50

In the Zeebe Monitor, you can see that the variable totalPrice is set:

zeebe-monitor-step-3

What's next?

Hurray! You finished this tutorial and learned the basic usage of the Java client.

Next steps:

Logging

The client uses SLF4J for logging. It logs useful things, such as exception stack traces when a job handler fails execution. Using the SLF4J API, any SLF4J implementation can be plugged in. The following example uses Log4J 2.

Maven dependencies

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>2.8.1</version>
</dependency>

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.8.1</version>
</dependency>

Configuration

Add a file called log4j2.xml to the classpath of your application. Add the following content:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" strict="true"
    xmlns="http://logging.apache.org/log4j/2.0/config"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://logging.apache.org/log4j/2.0/config https://raw.githubusercontent.com/apache/logging-log4j2/log4j-2.8.1/log4j-core/src/main/resources/Log4j-config.xsd">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level Java Client: %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

This will log every log message to the console.

Writing Tests

You can use the zeebe-test module to write JUnit tests for your job worker and BPMN workflow. It provides a JUnit rule to bootstrap the broker and some basic assertions.

Usage in a Maven project

Add zeebe-test as Maven test dependency to your project:

<dependency>
  <groupId>io.zeebe</groupId>
  <artifactId>zeebe-test</artifactId>
  <scope>test</scope>
</dependency>

Bootstrap the Broker

Use the ZeebeTestRule in your test case to start an embedded broker. It contains a client which can be used to deploy a BPMN workflow or create an instance.

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.api.response.WorkflowInstanceEvent;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;

public class MyTest {

  @Rule public final ZeebeTestRule testRule = new ZeebeTestRule();

  private ZeebeClient client;

  @Test
  public void test() {
    client = testRule.getClient();

    client
        .newDeployCommand()
        .addResourceFromClasspath("process.bpmn")
        .send()
        .join();

    final WorkflowInstanceEvent workflowInstance =
        client
            .newCreateInstanceCommand()
            .bpmnProcessId("process")
            .latestVersion()
            .send()
            .join();
  }
}

Verify the Result

The ZeebeTestRule also provides some basic assertions in AssertJ style. The entry point of the assertions is ZeebeTestRule.assertThat(...).

final WorkflowInstanceEvent workflowInstance = ...

ZeebeTestRule.assertThat(workflowInstance)
    .isEnded()
    .hasPassed("start", "task", "end")
    .hasVariable("result", 21.0);

Example Code using the Zeebe Java Client

These examples are accessible in the zeebe-io github repository at commit 08beeecee1242d63a483708306324ed5727a6f6a. Link to browse code on github.

Instructions to access code locally:

git clone https://github.com/zeebe-io/zeebe.git
git checkout 08beeecee1242d63a483708306324ed5727a6f6a
cd zeebe/samples

Import the Maven project in the samples directory into your IDE to start hacking.

Workflow

Job

Data

Cluster

Deploy a Workflow

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)

WorkflowDeployer.java

Source on github

/*
 * Copyright Camunda Services GmbH and/or licensed to Camunda Services GmbH under
 * one or more contributor license agreements. See the NOTICE file distributed
 * with this work for additional information regarding copyright ownership.
 * Licensed under the Zeebe Community License 1.0. You may not use this file
 * except in compliance with the Zeebe Community License 1.0.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.response.DeploymentEvent;

public class WorkflowDeployer {

  public static void main(final String[] args) {
    final String broker = "localhost:26500";

    final ZeebeClientBuilder clientBuilder =
        ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = clientBuilder.build()) {

      final DeploymentEvent deploymentEvent =
          client.newDeployCommand().addResourceFromClasspath("demoProcess.bpmn").send().join();

      System.out.println("Deployment created with key: " + deploymentEvent.getKey());
    }
  }
}

demoProcess.bpmn

Source on github

Download the XML and save it in the Java classpath before running the example. Open the file with Zeebe Modeler for a graphical representation.

<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:camunda="http://camunda.org/schema/1.0/bpmn" xmlns:zeebe="http://camunda.org/schema/zeebe/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" id="Definitions_1" targetNamespace="http://bpmn.io/schema/bpmn" exporter="Camunda Modeler" exporterVersion="1.5.0-nightly">
  <bpmn:process id="demoProcess" isExecutable="true">
    <bpmn:startEvent id="start" name="start">
      <bpmn:outgoing>SequenceFlow_1sz6737</bpmn:outgoing>
    </bpmn:startEvent>
    <bpmn:sequenceFlow id="SequenceFlow_1sz6737" sourceRef="start" targetRef="taskA" />
    <bpmn:sequenceFlow id="SequenceFlow_06ytcxw" sourceRef="taskA" targetRef="taskB" />
    <bpmn:sequenceFlow id="SequenceFlow_1oh45y7" sourceRef="taskB" targetRef="taskC" />
    <bpmn:endEvent id="end" name="end">
      <bpmn:incoming>SequenceFlow_148rk2p</bpmn:incoming>
    </bpmn:endEvent>
    <bpmn:sequenceFlow id="SequenceFlow_148rk2p" sourceRef="taskC" targetRef="end" />
    <bpmn:serviceTask id="taskA" name="task A">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="foo" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_1sz6737</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_06ytcxw</bpmn:outgoing>
    </bpmn:serviceTask>
    <bpmn:serviceTask id="taskB" name="task B">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="bar" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_06ytcxw</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_1oh45y7</bpmn:outgoing>
    </bpmn:serviceTask>
    <bpmn:serviceTask id="taskC" name="task C">
      <bpmn:extensionElements>
        <zeebe:taskDefinition type="foo" />
      </bpmn:extensionElements>
      <bpmn:incoming>SequenceFlow_1oh45y7</bpmn:incoming>
      <bpmn:outgoing>SequenceFlow_148rk2p</bpmn:outgoing>
    </bpmn:serviceTask>
  </bpmn:process>
  <bpmndi:BPMNDiagram id="BPMNDiagram_1">
    <bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="demoProcess">
      <bpmndi:BPMNShape id="_BPMNShape_StartEvent_2" bpmnElement="start">
        <dc:Bounds x="173" y="102" width="36" height="36" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="180" y="138" width="22" height="12" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge id="SequenceFlow_1sz6737_di" bpmnElement="SequenceFlow_1sz6737">
        <di:waypoint xsi:type="dc:Point" x="209" y="120" />
        <di:waypoint xsi:type="dc:Point" x="310" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="260" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge id="SequenceFlow_06ytcxw_di" bpmnElement="SequenceFlow_06ytcxw">
        <di:waypoint xsi:type="dc:Point" x="410" y="120" />
        <di:waypoint xsi:type="dc:Point" x="502" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="456" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge id="SequenceFlow_1oh45y7_di" bpmnElement="SequenceFlow_1oh45y7">
        <di:waypoint xsi:type="dc:Point" x="602" y="120" />
        <di:waypoint xsi:type="dc:Point" x="694" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="648" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNShape id="EndEvent_0gbv3sc_di" bpmnElement="end">
        <dc:Bounds x="867" y="102" width="36" height="36" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="876" y="138" width="18" height="12" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge id="SequenceFlow_148rk2p_di" bpmnElement="SequenceFlow_148rk2p">
        <di:waypoint xsi:type="dc:Point" x="794" y="120" />
        <di:waypoint xsi:type="dc:Point" x="867" y="120" />
        <bpmndi:BPMNLabel>
          <dc:Bounds x="831" y="105" width="0" height="0" />
        </bpmndi:BPMNLabel>
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNShape id="ServiceTask_09m0goq_di" bpmnElement="taskA">
        <dc:Bounds x="310" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape id="ServiceTask_0sryj72_di" bpmnElement="taskB">
        <dc:Bounds x="502" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape id="ServiceTask_1xu4l3g_di" bpmnElement="taskC">
        <dc:Bounds x="694" y="80" width="100" height="80" />
      </bpmndi:BPMNShape>
    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>
</bpmn:definitions>

Create a Workflow Instance

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example

WorkflowInstanceCreator.java

Source on github

/*
 * Copyright Camunda Services GmbH and/or licensed to Camunda Services GmbH under
 * one or more contributor license agreements. See the NOTICE file distributed
 * with this work for additional information regarding copyright ownership.
 * Licensed under the Zeebe Community License 1.0. You may not use this file
 * except in compliance with the Zeebe Community License 1.0.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.response.WorkflowInstanceEvent;

public class WorkflowInstanceCreator {

  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final String bpmnProcessId = "demoProcess";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {

      System.out.println("Creating workflow instance");

      final WorkflowInstanceEvent workflowInstanceEvent =
          client
              .newCreateInstanceCommand()
              .bpmnProcessId(bpmnProcessId)
              .latestVersion()
              .send()
              .join();

      System.out.println(
          "Workflow instance created with key: " + workflowInstanceEvent.getWorkflowInstanceKey());
    }
  }
}

Create Workflow Instances Non-Blocking

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example

NonBlockingWorkflowInstanceCreator.java

Source on github

/*
 * Copyright Camunda Services GmbH and/or licensed to Camunda Services GmbH under
 * one or more contributor license agreements. See the NOTICE file distributed
 * with this work for additional information regarding copyright ownership.
 * Licensed under the Zeebe Community License 1.0. You may not use this file
 * except in compliance with the Zeebe Community License 1.0.
 */
package io.zeebe.example.workflow;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.ZeebeFuture;
import io.zeebe.client.api.response.WorkflowInstanceEvent;

public class NonBlockingWorkflowInstanceCreator {
  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";
    final int numberOfInstances = 100_000;
    final String bpmnProcessId = "demoProcess";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      System.out.println("Creating " + numberOfInstances + " workflow instances");

      final long startTime = System.currentTimeMillis();

      long instancesCreating = 0;

      while (instancesCreating < numberOfInstances) {
        // this is non-blocking/async => returns a future
        final ZeebeFuture<WorkflowInstanceEvent> future =
            client.newCreateInstanceCommand().bpmnProcessId(bpmnProcessId).latestVersion().send();

        // could put the future somewhere and eventually wait for its completion

        instancesCreating++;
      }

      // creating one more instance; joining on this future ensures
      // that all the other create commands were handled
      client.newCreateInstanceCommand().bpmnProcessId(bpmnProcessId).latestVersion().send().join();

      System.out.println("Took: " + (System.currentTimeMillis() - startTime));
    }
  }
}

Open a Job Worker

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example
  3. Run the Create a Workflow Instance example a couple of times

JobWorkerCreator.java

Source on github

/*
 * Copyright Camunda Services GmbH and/or licensed to Camunda Services GmbH under
 * one or more contributor license agreements. See the NOTICE file distributed
 * with this work for additional information regarding copyright ownership.
 * Licensed under the Zeebe Community License 1.0. You may not use this file
 * except in compliance with the Zeebe Community License 1.0.
 */
package io.zeebe.example.job;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.response.ActivatedJob;
import io.zeebe.client.api.worker.JobClient;
import io.zeebe.client.api.worker.JobHandler;
import io.zeebe.client.api.worker.JobWorker;
import java.time.Duration;
import java.util.Scanner;

public class JobWorkerCreator {
  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final String jobType = "foo";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {

      System.out.println("Opening job worker.");

      final JobWorker workerRegistration =
          client
              .newWorker()
              .jobType(jobType)
              .handler(new ExampleJobHandler())
              .timeout(Duration.ofSeconds(10))
              .open();

      System.out.println("Job worker opened and receiving jobs.");

      // call workerRegistration.close() to close it

      // run until System.in receives exit command
      waitUntilSystemInput("exit");
    }
  }

  private static class ExampleJobHandler implements JobHandler {
    @Override
    public void handle(final JobClient client, final ActivatedJob job) {
      // here: business logic that is executed with every job
      System.out.println(job);
      client.newCompleteCommand(job.getKey()).send().join();
    }
  }

  private static void waitUntilSystemInput(final String exitCode) {
    try (Scanner scanner = new Scanner(System.in)) {
      while (scanner.hasNextLine()) {
        final String nextLine = scanner.nextLine();
        if (nextLine.contains(exitCode)) {
          return;
        }
      }
    }
  }
}

Handle variables as POJO

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)
  2. Run the Deploy a Workflow example

HandleVariablesAsPojo.java

Source on github

/*
 * Copyright Camunda Services GmbH and/or licensed to Camunda Services GmbH under
 * one or more contributor license agreements. See the NOTICE file distributed
 * with this work for additional information regarding copyright ownership.
 * Licensed under the Zeebe Community License 1.0. You may not use this file
 * except in compliance with the Zeebe Community License 1.0.
 */
package io.zeebe.example.data;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.response.ActivatedJob;
import io.zeebe.client.api.worker.JobClient;
import io.zeebe.client.api.worker.JobHandler;
import java.util.Scanner;

public class HandleVariablesAsPojo {
  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      final Order order = new Order();
      order.setOrderId(31243);

      client
          .newCreateInstanceCommand()
          .bpmnProcessId("demoProcess")
          .latestVersion()
          .variables(order)
          .send()
          .join();

      client.newWorker().jobType("foo").handler(new DemoJobHandler()).open();

      // run until System.in receives exit command
      waitUntilSystemInput("exit");
    }
  }

  private static class DemoJobHandler implements JobHandler {
    @Override
    public void handle(final JobClient client, final ActivatedJob job) {
      // read the variables of the job
      final Order order = job.getVariablesAsType(Order.class);
      System.out.println("new job with orderId: " + order.getOrderId());

      // update the variables and complete the job
      order.setTotalPrice(46.50);

      client.newCompleteCommand(job.getKey()).variables(order).send();
    }
  }

  public static class Order {
    private long orderId;
    private double totalPrice;

    public long getOrderId() {
      return orderId;
    }

    public void setOrderId(final long orderId) {
      this.orderId = orderId;
    }

    public double getTotalPrice() {
      return totalPrice;
    }

    public void setTotalPrice(final double totalPrice) {
      this.totalPrice = totalPrice;
    }
  }

  private static void waitUntilSystemInput(final String exitCode) {
    try (Scanner scanner = new Scanner(System.in)) {
      while (scanner.hasNextLine()) {
        final String nextLine = scanner.nextLine();
        if (nextLine.contains(exitCode)) {
          return;
        }
      }
    }
  }
}

Request Cluster Topology

Shows which broker is leader and follower for which partition. Particularly useful when you run a cluster with multiple Zeebe brokers.

Related Resources

Prerequisites

  1. Running Zeebe broker with endpoint localhost:26500 (default)

TopologyViewer.java

Source on github

/*
 * Copyright Camunda Services GmbH and/or licensed to Camunda Services GmbH under
 * one or more contributor license agreements. See the NOTICE file distributed
 * with this work for additional information regarding copyright ownership.
 * Licensed under the Zeebe Community License 1.0. You may not use this file
 * except in compliance with the Zeebe Community License 1.0.
 */
package io.zeebe.example.cluster;

import io.zeebe.client.ZeebeClient;
import io.zeebe.client.ZeebeClientBuilder;
import io.zeebe.client.api.response.Topology;

public class TopologyViewer {

  public static void main(final String[] args) {
    final String broker = "127.0.0.1:26500";

    final ZeebeClientBuilder builder = ZeebeClient.newClientBuilder().brokerContactPoint(broker);

    try (ZeebeClient client = builder.build()) {
      System.out.println("Requesting topology with initial contact point " + broker);

      final Topology topology = client.newTopologyRequest().send().join();

      System.out.println("Topology:");
      topology
          .getBrokers()
          .forEach(
              b -> {
                System.out.println("    " + b.getAddress());
                b.getPartitions()
                    .forEach(
                        p ->
                            System.out.println(
                                "      " + p.getPartitionId() + " - " + p.getRole()));
              });

      System.out.println("Done.");
    }
  }
}

Zeebe Go Client

Get Started with the Go client

In this tutorial, you will learn to use the Go client in a Go application to interact with Zeebe.

You will be guided through the following steps:

  • Set up a project
  • Model a workflow
  • Deploy a workflow
  • Create a workflow instance
  • Work on a task

You can find the complete source code on GitHub.

Prerequisites

Before you begin to set up your project, please start the broker, e.g. by running the start-up script bin/broker or bin/broker.bat in the distribution. By default, the broker binds to the address localhost:26500, which is used as the contact point in this guide. In case your broker is available under another address, adjust the broker contact point when building the client.

Set up a project

First, we need a new Go project. Create a new project using your IDE, or create a new Go module with:

mkdir -p $GOPATH/src/github.com/{{your username}}/zb-example
cd $GOPATH/src/github.com/{{your username}}/zb-example

Install the Zeebe Go client library:

go get -u github.com/zeebe-io/zeebe/clients/go/...

Create a main.go file inside the module and add the following lines to bootstrap the Zeebe client:

package main

import (
    "fmt"
    "github.com/zeebe-io/zeebe/clients/go/zbc"
    "github.com/zeebe-io/zeebe/clients/go/pb"
)

const BrokerAddr = "0.0.0.0:26500"

func main() {
    zbClient, err := zbc.NewZBClient(BrokerAddr)
    if err != nil {
        panic(err)
    }

    topology, err := zbClient.NewTopologyCommand().Send()
    if err != nil {
        panic(err)
    }

    for _, broker := range topology.Brokers {
        fmt.Println("Broker", broker.Host, ":", broker.Port)
        for _, partition := range broker.Partitions {
            fmt.Println("  Partition", partition.PartitionId, ":", roleToString(partition.Role))
        }
    }
}

func roleToString(role pb.Partition_PartitionBrokerRole) string {
    switch role {
    case pb.Partition_LEADER:
        return "Leader"
    case pb.Partition_FOLLOWER:
        return "Follower"
    default:
        return "Unknown"
    }
}

Run the program.

go run main.go

You should see output similar to:

Broker 0.0.0.0 : 26501
  Partition 1 : Leader

Model a workflow

Now, we need a first workflow that we can deploy. Later, we will extend the workflow with more functionality.

Open the Zeebe Modeler and create a new BPMN diagram. Add a start event and an end event to the diagram and connect the events.

model-workflow-step-1

Set the id to order-process (i.e., the BPMN process id) and mark the diagram as executable. Save the diagram in the project's source folder.

Deploy a workflow

Next, we want to deploy the modeled workflow to the broker. The broker stores the workflow under its BPMN process id and assigns a version (i.e., the revision).

package main

import (
    "fmt"
    "github.com/zeebe-io/zeebe/clients/go/zbc"
)

const brokerAddr = "0.0.0.0:26500"

func main() {
    zbClient, err := zbc.NewZBClient(brokerAddr)
    if err != nil {
        panic(err)
    }

    response, err := zbClient.NewDeployWorkflowCommand().AddResourceFile("order-process.bpmn").Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(response.String())
}

Run the program and verify that the workflow is deployed successfully. You should see output similar to:

key:2251799813686743 workflows:<bpmnProcessId:"order-process" version:3 workflowKey:2251799813686742 resourceName:"order-process.bpmn" >

Create a workflow instance

Finally, we are ready to create a first instance of the deployed workflow. A workflow instance is created from a specific version of the workflow, which can be set on creation.

package main

import (
    "fmt"

    "github.com/zeebe-io/zeebe/clients/go/zbc"
)

const brokerAddr = "0.0.0.0:26500"

func main() {
    client, err := zbc.NewZBClient(brokerAddr)
    if err != nil {
        panic(err)
    }

    // After the workflow is deployed.
    variables := make(map[string]interface{})
    variables["orderId"] = "31243"

    request, err := client.NewCreateInstanceCommand().BPMNProcessId("order-process").LatestVersion().VariablesFromMap(variables)
    if err != nil {
        panic(err)
    }

    msg, err := request.Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(msg.String())
}

Run the program and verify that the workflow instance is created. You should see output similar to:

workflowKey:2251799813686742 bpmnProcessId:"order-process" version:3 workflowInstanceKey:2251799813686744

You did it! Do you want to see how the workflow instance is executed?

Start the Zeebe Monitor using java -jar zeebe-simple-monitor-app-*.jar.

Open a web browser and go to http://localhost:8080/.

Here, you see the current state of the workflow instance.

zeebe-monitor-step-1

Work on a task

Now we want to do some work within the workflow. First, add a few service tasks to the BPMN diagram and set the required attributes. Then extend your main.go file and activate a job that is created when the workflow instance reaches a service task.

Open the BPMN diagram in the Zeebe Modeler. Insert a few service tasks between the start and the end event.

model-workflow-step-2

You need to set the type of each task, which identifies the nature of the work to be performed. Set the type of the first task to payment-service.

Add the following lines to redeploy the modified process, then activate and complete a job of the first task type:

package main

import (
    "fmt"
    "log"

    "github.com/zeebe-io/zeebe/clients/go/entities"
    "github.com/zeebe-io/zeebe/clients/go/worker"
    "github.com/zeebe-io/zeebe/clients/go/zbc"
)

const brokerAddr = "0.0.0.0:26500"

func main() {
    client, err := zbc.NewZBClient(brokerAddr)
    if err != nil {
        panic(err)
    }

    // deploy workflow
    response, err := client.NewDeployWorkflowCommand().AddResourceFile("order-process.bpmn").Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(response.String())

    // create a new workflow instance
    variables := make(map[string]interface{})
    variables["orderId"] = "31243"

    request, err := client.NewCreateInstanceCommand().BPMNProcessId("order-process").LatestVersion().VariablesFromMap(variables)
    if err != nil {
        panic(err)
    }

    result, err := request.Send()
    if err != nil {
        panic(err)
    }

    fmt.Println(result.String())

    jobWorker := client.NewJobWorker().JobType("payment-service").Handler(handleJob).Open()
    defer jobWorker.Close()

    jobWorker.AwaitClose()
}

func handleJob(client worker.JobClient, job entities.Job) {
    jobKey := job.GetKey()

    headers, err := job.GetCustomHeadersAsMap()
    if err != nil {
        // failed to handle job as we require the custom job headers
        failJob(client, job)
        return
    }

    variables, err := job.GetVariablesAsMap()
    if err != nil {
        // failed to handle job as we require the variables
        failJob(client, job)
        return
    }

    variables["totalPrice"] = 46.50
    request, err := client.NewCompleteJobCommand().JobKey(jobKey).VariablesFromMap(variables)
    if err != nil {
        // failed to set the updated variables
        failJob(client, job)
        return
    }

    log.Println("Complete job", jobKey, "of type", job.Type)
    log.Println("Processing order:", variables["orderId"])
    log.Println("Collect money using payment method:", headers["method"])

    // send the complete-job command; in production code the returned error should be checked
    if _, err := request.Send(); err != nil {
        log.Println("Failed to complete job", jobKey, ":", err)
    }
}

func failJob(client worker.JobClient, job entities.Job) {
    log.Println("Failed to complete job", job.GetKey())
    client.NewFailJobCommand().JobKey(job.GetKey()).Retries(job.Retries - 1).Send()
}

In this example we open a job worker for jobs of type payment-service. The job worker repeatedly polls the broker for new jobs of this type and activates them. Each activated job is then passed to the job handler, which implements the business logic of the worker. The handler completes the job with its result, or fails the job if it encounters a problem while processing it.

If you have a look at the Zeebe Monitor, you can see that the workflow instance moved from the first service task to the next one:

zeebe-monitor-step-2

When you run the above example, you should see output similar to:

key:2251799813686751 workflows:<bpmnProcessId:"order-process" version:4 workflowKey:2251799813686750 resourceName:"order-process.bpmn" >
workflowKey:2251799813686750 bpmnProcessId:"order-process" version:4 workflowInstanceKey:2251799813686752
2019/06/06 20:59:50 Complete job 2251799813686760 of type payment-service
2019/06/06 20:59:50 Processing order: 31243
2019/06/06 20:59:50 Collect money using payment method: VISA

What's next?

Yay! You finished this tutorial and learned the basic usage of the Go client.

Next steps:

Operations

Development

We recommend using Docker during development. This gives you a consistent, repeatable development environment.

Production

In production, we recommend using Kubernetes and container images. This provides you with predictable and consistent configuration, and the ability to manage deployments using automation tools.

Tools For Monitoring And Managing Workflows

Operate is a tool that was built for monitoring and managing Zeebe workflows. We walk through how to install Operate in the "Getting Started" tutorial.

The current Operate release is a developer preview and is available for non-production use only. You can find the Operate preview license here.

We plan to release Operate under an enterprise license for production use in the future.

Alternatively:

  • There's a community project called Simple Monitor that can also be used to inspect deployed workflows and workflow instances. Simple Monitor is not intended for production use.
  • It's possible to combine Kibana with Zeebe's Elasticsearch exporter to create a dashboard for monitoring the state of Zeebe.

The zeebe.cfg.toml file

The following snippet shows the default Zeebe configuration, which is shipped with the distribution. It can be found inside the config folder (config/zeebe.cfg.toml) and can be used to adjust Zeebe to your needs.

Source on github

# Zeebe broker configuration file

# Overview -------------------------------------------

# This file contains a complete list of available configuration options.

# Default values:
#
# When the default value is used for a configuration option, the option is
# commented out. You can learn the default value from this file

# Conventions:
#
# Byte sizes
# Byte sizes, e.g. for buffers, must be specified as strings in the following
# format: "10U", where U (unit) must be replaced with K = Kilobytes, M = Megabytes or G = Gigabytes.
# If the unit is omitted, the value is interpreted as plain bytes.
# Example:
# sendBufferSize = "16M" (creates a buffer of 16 Megabytes)
#
# Time units
# Timeouts, intervals, and the like must be specified as strings in the following
# format: "VU", where:
#   - V is a numerical value (e.g. 1, 1.2, 3.56, etc.)
#   - U is the unit, one of: ms = Millis, s = Seconds, m = Minutes, or h = Hours
#
# Paths:
# Relative paths are resolved relative to the installation directory of the
# broker.

# ----------------------------------------------------


[gateway]
# Enable the embedded gateway to start on broker startup.
# This setting can also be overridden using the environment variable ZEEBE_EMBED_GATEWAY.
# enable = true

[gateway.network]
# Sets the host the embedded gateway binds to.
# This setting can be specified using the following precedence:
# 1. setting the environment variable ZEEBE_GATEWAY_HOST
# 2. setting gateway.network.host property in this file
# 3. setting the environment variable ZEEBE_HOST
# 4. setting network.host property in this file
# host = "0.0.0.0"

# Sets the port the embedded gateway binds to.
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_PORT.
# port = 26500

[gateway.cluster]
# Sets the broker the gateway should initially contact.
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_CONTACT_POINT.
# contactPoint = "127.0.0.1:26501"

# Sets the size of the transport buffer used to send and receive messages between gateway and broker cluster.
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_TRANSPORT_BUFFER.
# transportBuffer = "128M"

# Sets the timeout of requests sent to the broker cluster.
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_REQUEST_TIMEOUT.
# requestTimeout = "15s"

[gateway.threads]
# Sets the number of threads the gateway will use to communicate with the broker cluster
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_MANAGEMENT_THREADS.
# managementThreads = 1

[gateway.monitoring]
# Enables the metrics collection in the gateway
# enabled = false

[gateway.security]
# Enables TLS authentication between clients and the gateway
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_SECURITY_ENABLED. 
# enabled = false
#
# Sets the path to the certificate chain file
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_CERTIFICATE_PATH.
# certificateChainPath = ""
#
# Sets the path to the private key file location
# This setting can also be overridden using the environment variable ZEEBE_GATEWAY_PRIVATE_KEY_PATH. 
# privateKeyPath = ""

[network]

# This section contains the network configuration. In particular, it allows
# configuring the hosts and ports the broker should bind to. The broker exposes three sockets:
# 1. command: the socket which is used for gateway-to-broker communication
# 2. internal: the socket which is used for broker-to-broker communication
# 3. monitoring: the socket which is used to monitor the broker

# Controls the default host the broker should bind to. Can be overwritten on a
# per binding basis for client, management and replication
#
# This setting can also be overridden using the environment variable ZEEBE_HOST.
# host = "0.0.0.0"

# If a port offset is set, it will be added to all ports specified in the config
# or the default values. This is a shortcut to avoid specifying every port
# individually.
#
# The offset is applied to the second-to-last digit of the port, as Zeebe
# requires multiple consecutive ports. For example, a portOffset of 5 will
# increment all ports by 50, i.e. 26500 will become 26550 and so on.
#
# This setting can also be overridden using the environment variable ZEEBE_PORT_OFFSET.
# portOffset = 0

[network.commandApi]
# Overrides the host used for gateway-to-broker communication
# host = "localhost"

# Sets the port used for gateway-to-broker communication
# port = 26501

# Sets the size of the buffer used for buffering outgoing messages
# sendBufferSize = "16M"

[network.internalApi]
# Overrides the host used for internal broker-to-broker communication
# host = "localhost"

# Sets the port used for internal broker-to-broker communication
# port = 26502

[network.monitoringApi]
# Overrides the host used for exposing monitoring information
# host = "localhost"

# Sets the port used for exposing monitoring information
# port = 9600


[data]

# This section allows configuring Zeebe's data storage. Data is stored in
# "partition folders". A partition folder has the following structure:
#
# partition-0                       (root partition folder)
# ├── partition.json                (metadata about the partition)
# ├── segments                      (the actual data as segment files)
# │   ├── 00.data
# │   └── 01.data
# └── state                         (stream processor state and snapshots)
#     └── stream-processor
#         ├── runtime
#         └── snapshots

# Specify a list of directories in which data is stored. Using multiple
# directories makes sense when the machine running Zeebe has multiple disks
# used in a JBOD (just a bunch of disks) manner. This allows greater
# throughput in combination with a higher io thread count, since writes to
# different disks can potentially be done in parallel.
#
# This setting can also be overridden using the environment variable ZEEBE_DIRECTORIES.
# directories = [ "data" ]

# The size of data log segment files.
# logSegmentSize = "512M"

# How often we take snapshots of streams (time unit)
# snapshotPeriod = "15m"

# The maximum number of snapshots kept (must be a positive integer). When this 
# limit is passed the oldest snapshot is deleted.
# maxSnapshots = "3"
#
# How often follower partitions will check for new snapshots to replicate from
# the leader partitions. Snapshot replication enables faster failover by
# reducing how many log entries must be reprocessed in case of leader change.
# snapshotReplicationPeriod = "5m"


[cluster]

# This section contains all cluster-related configuration for setting up a Zeebe cluster

# Specifies the unique id of this broker node in a cluster.
# The id should be between 0 and number of nodes in the cluster (exclusive).
#
# This setting can also be overridden using the environment variable ZEEBE_NODE_ID.
# nodeId = 0

# Controls the number of partitions that should exist in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_PARTITIONS_COUNT.
# partitionsCount = 1

# Controls the replication factor, which defines the count of replicas per partition.
# The replication factor cannot be greater than the number of nodes in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_REPLICATION_FACTOR.
# replicationFactor = 1

# Specifies the Zeebe cluster size. This value is used to determine which broker
# is responsible for which partition.
#
# This can also be overridden using the environment variable ZEEBE_CLUSTER_SIZE.
# clusterSize = 1

# Allows specifying a list of other known nodes to connect to on startup.
# The contact points of the internal network configuration must be specified.
# The format is [HOST:PORT]
# Example:
# initialContactPoints = [ "192.168.1.22:26502", "192.168.1.32:26502" ]
#
# This setting can also be overridden using the environment variable ZEEBE_CONTACT_POINTS
# specifying a comma-separated list of contact points.
#
# To guarantee the cluster can survive network partitions, all nodes must be specified
# as initial contact points.
#
# Default is empty list:
# initialContactPoints = []

# Allows specifying a name for the cluster.
# This setting can also be overridden using the environment variable ZEEBE_CLUSTER_NAME
# Example:
# clusterName = "zeebe-cluster"

[threads]

# Controls the number of non-blocking CPU threads to be used. WARNING: You
# should never specify a value that is larger than the number of physical cores
# available. Good practice is to leave 1-2 cores for ioThreads and the operating
# system (it has to run somewhere). For example, when running Zeebe on a machine
# which has 4 cores, a good value would be 2.
#
# The default value is 2.
#cpuThreadCount = 2

# Controls the number of io threads to be used. These threads are used for
# workloads that write data to disk. While writing, these threads are blocked
# which means that they yield the CPU.
#
# The default value is 2.
#ioThreadCount = 2

# Configure exporters below; note that configuration parsing conventions do not apply to exporter
# arguments, which will be parsed as normal TOML.
#
# Each exporter should be configured following this template:
#
# id:
#   property should be unique in this configuration file, as it will serve as the exporter
#   ID for loading/unloading.
# jarPath:
#   path to the JAR file containing the exporter class. JARs are only loaded once, so you can define
#   two exporters that point to the same JAR, with the same class or a different one, and use args
#   to parametrize its instantiation.
# className:
#   entry point of the exporter, a class which *must* extend the io.zeebe.exporter.Exporter
#   interface.
#
# A nested table as [exporters.args] will allow you to inject arbitrary arguments into your
# class through the use of annotations.
#
# Enable the following debug exporter to log the exported records to console
# This exporter can also be enabled using the environment variable ZEEBE_DEBUG, the pretty print
# option will be enabled if the variable is set to "pretty".
#
# [[exporters]]
# id = "debug-log"
# className = "io.zeebe.broker.exporter.debug.DebugLogExporter"
# [exporters.args]
#   logLevel = "debug"
#   prettyPrint = false
#
# Enable the following debug exporter to start a http server to inspect the exported records
#
# [[exporters]]
# id = "debug-http"
# className = "io.zeebe.broker.exporter.debug.DebugHttpExporter"
# [exporters.args]
#   port = 8000
#   limit = 1024
#
#
# An example configuration for the elasticsearch exporter:
#
#[[exporters]]
#id = "elasticsearch"
#className = "io.zeebe.exporter.ElasticsearchExporter"
#
#  [exporters.args]
#  url = "http://localhost:9200"
#
#  [exporters.args.bulk]
#  delay = 5
#  size = 1_000
#
#  [exporters.args.authentication]
#  username = "elastic"
#  password = "changeme"
#
#  [exporters.args.index]
#  prefix = "zeebe-record"
#  createTemplate = true
#
#  command = false
#  event = true
#  rejection = false
#
#  deployment = true
#  error = true
#  incident = true
#  job = true
#  jobBatch = false
#  message = false
#  messageSubscription = false
#  variable = true
#  variableDocument = false
#  workflowInstance = true
#  workflowInstanceCreation = false
#  workflowInstanceSubscription = false

Setting up a Zeebe Cluster

To set up a cluster, you need to adjust the cluster section in the Zeebe configuration file. Below is a snippet of the default Zeebe configuration file; it should be self-explanatory.

[cluster]

# This section contains all cluster-related configuration for setting up a Zeebe cluster

# Specifies the unique id of this broker node in a cluster.
# The id should be between 0 and the number of nodes in the cluster (exclusive).
#
# This setting can also be overridden using the environment variable ZEEBE_NODE_ID.
# nodeId = 0

# Controls the number of partitions that should exist in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_PARTITIONS_COUNT.
# partitionsCount = 1

# Controls the replication factor, which defines the count of replicas per partition.
# The replication factor cannot be greater than the number of nodes in the cluster.
#
# This can also be overridden using the environment variable ZEEBE_REPLICATION_FACTOR.
# replicationFactor = 1

# Specifies the Zeebe cluster size. This value is used to determine which broker
# is responsible for which partition.
#
# This can also be overridden using the environment variable ZEEBE_CLUSTER_SIZE.
# clusterSize = 1

# Allows specifying a list of other known nodes to connect to on startup.
# The contact points of the management api must be specified.
# The format is [HOST:PORT]
# Example:
# initialContactPoints = [ "192.168.1.22:26502", "192.168.1.32:26502" ]
#
# This setting can also be overridden using the environment variable ZEEBE_CONTACT_POINTS
# specifying a comma-separated list of contact points.
#
# To guarantee the cluster can survive network partitions, all nodes must be specified
# as initial contact points.
#
# Default is empty list:
# initialContactPoints = []

Example

In this example, we will set up a Zeebe cluster with five brokers. Each broker needs a unique node id. To distribute the load well, we will bootstrap five partitions with a replication factor of three. For more information, please take a look at the Clustering section.

The clustering setup will look like this:

cluster

Configuration

The configuration of the first broker could look like this:

[cluster]
nodeId = 0
partitionsCount = 5
replicationFactor = 3
clusterSize = 5
initialContactPoints = [
  ADDRESS_AND_PORT_OF_NODE_0,
  ADDRESS_AND_PORT_OF_NODE_1,
  ADDRESS_AND_PORT_OF_NODE_2,
  ADDRESS_AND_PORT_OF_NODE_3,
  ADDRESS_AND_PORT_OF_NODE_4
]

For the other brokers, the configuration changes only slightly.

[cluster]
nodeId = NODE_ID
partitionsCount = 5
replicationFactor = 3
clusterSize = 5
initialContactPoints = [
  ADDRESS_AND_PORT_OF_NODE_0,
  ADDRESS_AND_PORT_OF_NODE_1,
  ADDRESS_AND_PORT_OF_NODE_2,
  ADDRESS_AND_PORT_OF_NODE_3,
  ADDRESS_AND_PORT_OF_NODE_4
]

Each broker needs a unique node id. The ids should be in the range of zero to clusterSize - 1. You need to replace the NODE_ID placeholder with an appropriate value. Furthermore, the brokers need an initial contact point to start their gossip conversation. Make sure that you use the address and management port of another broker. You need to replace the ADDRESS_AND_PORT_OF_NODE_0 placeholder.

To guarantee that a cluster can properly recover from network partitions, it is currently required that all nodes be specified as initial contact points. It is not necessary for a broker to list itself as initial contact point, but it is safe to do so, and probably simpler to maintain.
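These constraints can be checked mechanically before starting a broker. The following Go sketch is a hypothetical helper (not part of the Zeebe distribution) that validates a [cluster] section against the rules described in this section:

```go
package main

import "fmt"

// clusterConfig mirrors the fields of the [cluster] section shown above.
type clusterConfig struct {
	NodeID               int
	PartitionsCount      int
	ReplicationFactor    int
	ClusterSize          int
	InitialContactPoints []string
}

// validate checks the constraints from this section: the node id lies in
// [0, clusterSize), the replication factor cannot exceed the cluster size,
// and (to survive network partitions) every node should be listed as an
// initial contact point.
func (c clusterConfig) validate() []string {
	var problems []string
	if c.NodeID < 0 || c.NodeID >= c.ClusterSize {
		problems = append(problems, fmt.Sprintf("nodeId %d outside [0, %d)", c.NodeID, c.ClusterSize))
	}
	if c.ReplicationFactor > c.ClusterSize {
		problems = append(problems, "replicationFactor greater than clusterSize")
	}
	if len(c.InitialContactPoints) < c.ClusterSize {
		problems = append(problems, "not all nodes listed as initial contact points")
	}
	return problems
}

func main() {
	// example addresses are placeholders, like ADDRESS_AND_PORT_OF_NODE_0 above
	cfg := clusterConfig{
		NodeID:            0,
		PartitionsCount:   5,
		ReplicationFactor: 3,
		ClusterSize:       5,
		InitialContactPoints: []string{
			"192.168.1.20:26502", "192.168.1.21:26502", "192.168.1.22:26502",
			"192.168.1.23:26502", "192.168.1.24:26502",
		},
	}
	fmt.Println("problems:", cfg.validate())
}
```

A misconfiguration such as replicationFactor = 6 with clusterSize = 5 would be reported before the broker is even started.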

Partitions bootstrapping

On bootstrap, each node will create a partition matrix.

This matrix depends on the partition count, the replication factor and the cluster size. If you configured everything correctly and used the same values for partitionsCount, replicationFactor and clusterSize on each node, all nodes will generate the same partition matrix.

For the current example the matrix will look like the following:

              Node 0     Node 1     Node 2     Node 3     Node 4
Partition 0   Leader     Follower   Follower   -          -
Partition 1   -          Leader     Follower   Follower   -
Partition 2   -          -          Leader     Follower   Follower
Partition 3   Follower   -          -          Leader     Follower
Partition 4   Follower   Follower   -          -          Leader

The matrix ensures that the partitions are well distributed between the different nodes. Furthermore, it guarantees that each node knows exactly which partitions it has to bootstrap and for which it will initially become the leader (this could change later, for example if the node needs to step down).
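The distribution in the table follows a simple round-robin pattern. The Go sketch below reproduces the matrix for this example under that assumption; Zeebe's internal algorithm may differ in detail, so treat this as an illustration only:

```go
package main

import "fmt"

// partitionNodes returns, for each partition, the nodes hosting a replica.
// The first entry is the initial leader; the remaining entries are followers.
// Round-robin sketch: partition p is led by node p modulo clusterSize, and
// the following replicationFactor-1 nodes (wrapping around) are followers.
func partitionNodes(partitionsCount, replicationFactor, clusterSize int) [][]int {
	matrix := make([][]int, partitionsCount)
	for p := 0; p < partitionsCount; p++ {
		replicas := make([]int, replicationFactor)
		for r := 0; r < replicationFactor; r++ {
			replicas[r] = (p + r) % clusterSize
		}
		matrix[p] = replicas
	}
	return matrix
}

func main() {
	// five partitions, replication factor three, five brokers (as above)
	for p, nodes := range partitionNodes(5, 3, 5) {
		fmt.Printf("Partition %d: leader=node %d, followers=nodes %v\n", p, nodes[0], nodes[1:])
	}
}
```

Note how partitions 3 and 4 wrap around to nodes 0 and 1, matching the table.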

The Metrics

When operating a distributed system like Zeebe, it is important to put proper monitoring in place. To facilitate this, Zeebe exposes an extensive set of metrics.

Zeebe exposes metrics over an embedded HTTP server.

Types of metrics

  • Counters: a time series that records a growing count of some unit. Examples: number of bytes transmitted over the network, number of workflow instances started, ...
  • Gauges: a time series that records the current size of some unit. Examples: number of currently open client connections, current number of partitions, ...
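To make the distinction concrete, here is a minimal, illustrative Go sketch of the two metric types; it is not Zeebe's implementation, just the semantics described above:

```go
package main

import "fmt"

// Counter only ever grows; the monitoring system derives rates from it.
type Counter struct{ total float64 }

func (c *Counter) Inc(delta float64) { c.total += delta }

// Gauge records the current size of something and may go up or down.
type Gauge struct{ value float64 }

func (g *Gauge) Set(v float64) { g.value = v }

func main() {
	var started Counter // e.g. number of workflow instances started
	started.Inc(1)
	started.Inc(1)

	var connections Gauge // e.g. currently open client connections
	connections.Set(3)
	connections.Set(2) // a connection closed: gauges can decrease

	fmt.Println("started:", started.total, "connections:", connections.value)
}
```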

Metrics Format

Zeebe exposes metrics directly in the Prometheus text format. The details of the format can be found in the Prometheus documentation.

Example:

# HELP zeebe_stream_processor_events_total Number of events processed by stream processor
# TYPE zeebe_stream_processor_events_total counter
zeebe_stream_processor_events_total{action="written",partition="1",} 20320.0
zeebe_stream_processor_events_total{action="processed",partition="1",} 20320.0
zeebe_stream_processor_events_total{action="skipped",partition="1",} 2153.0
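For illustration, the following Go sketch parses one sample line of this text format into its metric name, labels and value. It is a simplified, hypothetical parser; real scrapers should rely on Prometheus itself or an existing Prometheus client library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSample splits one sample line of the Prometheus text format,
// e.g. zeebe_stream_processor_events_total{action="written",partition="1",} 20320.0,
// into the metric name, its labels and the sample value.
func parseSample(line string) (name string, labels map[string]string, value float64, err error) {
	labels = map[string]string{}
	open := strings.Index(line, "{")
	end := strings.LastIndex(line, "}")
	if open < 0 || end < open {
		return "", nil, 0, fmt.Errorf("no label block in %q", line)
	}
	name = line[:open]
	for _, pair := range strings.Split(line[open+1:end], ",") {
		kv := strings.SplitN(strings.TrimSpace(pair), "=", 2)
		if len(kv) != 2 {
			continue // skip the empty entry caused by the trailing comma
		}
		labels[kv[0]] = strings.Trim(kv[1], `"`)
	}
	value, err = strconv.ParseFloat(strings.TrimSpace(line[end+1:]), 64)
	return name, labels, value, err
}

func main() {
	name, labels, value, err := parseSample(`zeebe_stream_processor_events_total{action="written",partition="1",} 20320.0`)
	if err != nil {
		panic(err)
	}
	fmt.Println(name, labels["action"], labels["partition"], value)
}
```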

Configuring Metrics

The HTTP server to export the metrics can be configured in the configuration file.

Connecting Prometheus

As explained above, Zeebe exposes the metrics over an HTTP server. The default port is 9600.

Add the following entry to your prometheus.yml:

- job_name: zeebe
  scrape_interval: 15s
  metrics_path: /metrics
  scheme: http
  static_configs:
  - targets:
    - localhost:9600

Available Metrics

All Zeebe-related metrics are prefixed with zeebe_.

Most metrics have the following common label:

  • partition: cluster-unique id of the partition

The following metrics are collected:

  • zeebe_stream_processor_events_total: The number of events processed by the stream processor. The action label separates processed, skipped and written events.
  • zeebe_exporter_events_total: The number of events processed by the exporter processor. The action label separates exported and skipped events.
  • zeebe_element_instance_events_total: The number of workflow element instance events that occurred. The action label separates the number of activated, completed and terminated elements. The type label separates different BPMN element types.
  • zeebe_running_workflow_instances_total: The number of currently running workflow instances, i.e. not completed or terminated.
  • zeebe_job_events_total: The number of job events. The action label separates the number of created, activated, timed out, completed, failed and canceled jobs.
  • zeebe_pending_jobs_total: The number of currently pending jobs, i.e. not completed or terminated.
  • zeebe_incident_events_total: The number of incident events. The action label separates the number of created and resolved incident events.
  • zeebe_pending_incidents_total: The number of currently pending incidents, i.e. not resolved.
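In Prometheus, the counters above are most useful when queried as rates. For example (standard PromQL, using metric names from the list above):

```
# Events processed per second, per partition, over the last 5 minutes
rate(zeebe_stream_processor_events_total{action="processed"}[5m])

# Currently running workflow instances, summed across all partitions
sum(zeebe_running_workflow_instances_total)
```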

Operate User Guide

Operate is a tool for monitoring and troubleshooting workflow instances running in Zeebe.

In addition to providing visibility into active and completed workflow instances, Operate also makes it possible to carry out key operations such as resolving incidents and updating workflow instance variables.

In the Getting Started tutorial, we walk through how to install and run Operate and how to use it to monitor workflow instances. In this Operate User Guide, we’ll cover some of Operate’s more advanced features.

Currently, Operate is available under a developer license for free, non-production use only. You can find the license here.

There are no restrictions under this license when it comes to the length of the evaluation period or the feature set–as long as you use Operate in non-production environments only.

In the future, we plan to make Operate available for production deployments under an enterprise license. If this sounds interesting to you, please contact us, and we’ll notify you when an enterprise version of Operate is available.

Even after Operate is available for production use as an enterprise product, we will continue to offer Operate for free, non-production use because we have found it to be a helpful tool for becoming familiar with Zeebe and building an initial proof-of-concept.

Install & Start Operate

Running via Docker (Local Development)

The easiest way to run Operate in development is with Docker. This gives you a consistent, reproducible environment and an out-of-the-box integrated experience for experimenting with Zeebe and Operate.

To do this, you need Docker Desktop installed on your development machine.

The zeebe-docker-compose repository contains an operate profile that starts a single Zeebe broker with Operate and all its dependencies. See the README file in the repository for instructions to start Zeebe and Operate using Docker.

If you are using Docker, once you follow the instructions in the repository, skip ahead to the section "Access the Operate Web Interface".

Running with Kubernetes (Production)

We will update this section after Operate is available for production use.

Running Operate with Kubernetes will be recommended for production deployments.

Manual Configuration (Local Development)

Here, we’ll walk you through how to download and run an Operate distribution manually, without using Docker.

Note that the Operate web UI is available by default at http://localhost:8080, so please be sure this port is available.

Download Operate and a compatible version of Zeebe.

Operate and Zeebe distributions are available for download on the same release page.

Note that each version of Operate is compatible with a specific version of Zeebe. For example: Zeebe 0.18.0 and Operate-1.0.0-alpha11 are compatible.

On the Zeebe release page, compatible versions of Zeebe and Operate are grouped together. Please be sure to download and use compatible versions. This is handled for you if you use the Docker profile from our repository.

Download Elasticsearch

Operate uses open-source Elasticsearch as its underlying data store, and so to run Operate, you need to download and run Elasticsearch.

Operate is currently compatible with Elasticsearch 6.8.1. You can download Elasticsearch here.

Run Elasticsearch

To run Elasticsearch, execute the following commands in Terminal or another command line tool of your choice:

cd elasticsearch-*
bin/elasticsearch

You’ll know Elasticsearch has started successfully when you see a message similar to:

[INFO ][o.e.l.LicenseService     ] [-IbqP-o] license [72038058-e8ae-4c71-81a1-e9727f2b81c7] mode [basic] - valid

Run Zeebe

To run Zeebe, execute the following commands:

cd zeebe-broker-*
./bin/broker

You’ll know Zeebe has started successfully when you see a message similar to:

[partition-0] [0.0.0.0:26501-zb-actors-0] INFO  io.zeebe.raft - Joined raft in term 0
[exporter] [0.0.0.0:26501-zb-actors-1] INFO  io.zeebe.broker.exporter.elasticsearch - Exporter opened

Run Operate

To run Operate, execute the following commands:

cd camunda-operate-distro-1.0.0-*
bin/operate

You’ll know Operate has started successfully when you see messages similar to:

DEBUG 1416 --- [       Thread-6] o.c.o.e.w.BatchOperationWriter           : 0 operations locked
DEBUG 1416 --- [       Thread-4] o.c.o.z.ZeebeESImporter                  : Latest loaded position for alias [zeebe-record-deployment] and partitionId [0]: 0
INFO 1416 --- [       Thread-4] o.c.o.z.ZeebeESImporter                  : Elasticsearch index for ValueType DEPLOYMENT was not found, alias zeebe-record-deployment. Skipping.

Access the Operate Web Interface

The Operate web interface is available at http://localhost:8080.

The first screen you'll see is a sign-in page. Use the credentials demo / demo to sign in.

After you sign in, you'll see an empty dashboard if you haven't yet deployed any workflows:

operate-dash-no-workflows

If you have deployed workflows or created workflow instances, you'll see those on your dashboard:

operate-dash-with-workflows

Getting Familiar With Operate

The sections "Getting Familiar With Operate" and "Incidents and Payloads" assume that you’ve deployed a workflow to Zeebe and have created at least one workflow instance.

If you’re not sure how to deploy workflows or create instances, we recommend going through the Getting Started tutorial.

In the following sections, we’ll use the same order-process.bpmn workflow model from the Getting Started guide.

View A Deployed Workflow

In the “Instances by Workflow” panel in your dashboard, you should see a list of your deployed workflows and running instances.

operate-dash-with-workflows

When you click on the name of a deployed workflow in the “Instances by Workflow” panel, you’ll navigate to a view of that workflow model along with all running instances.

operate-view-workflow

From this “Running Instances” view, you have the ability to cancel a single running workflow instance.

operate-cancel-workflow-instance

Inspect A Workflow Instance

Running workflow instances appear in the “Instances” section below the workflow model. To inspect a specific instance, you can click on the instance ID.

operate-inspect-instance

There, you’ll be able to see detail about the workflow instance, including the instance history and the variables attached to the instance.

operate-view-instance-detail

Update Variables & Resolve Incidents

Every workflow instance created for the workflow model used in the Getting Started tutorial requires an orderValue so that the XOR gateway evaluation will happen properly.

Let’s look at a case where orderValue is present but was set as a string, while our order-process.bpmn model requires an integer in order to properly evaluate orderValue and route the instance.

Linux

./bin/zbctl create instance order-process --variables '{"orderId": "1234", "orderValue":"99"}'

Mac

./bin/zbctl.darwin create instance order-process --variables '{"orderId": "1234", "orderValue":"99"}'

Windows (Powershell)

./bin/zbctl.exe create instance order-process --variables '{\"orderId\": \"1234\", \"orderValue\": \"99\"}'
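The type mismatch is visible in the JSON payload itself. A quick illustration in Python (this mirrors the kind of comparison the gateway condition performs, not Zeebe's actual evaluation code):

```python
import json

# "99" (a string) is what the commands above send; 99 (an integer) is
# what the condition orderValue >= 100 needs in order to evaluate.
as_string = json.loads('{"orderId": "1234", "orderValue": "99"}')
as_integer = json.loads('{"orderId": "1234", "orderValue": 99}')

print(type(as_string["orderValue"]).__name__)   # str
print(type(as_integer["orderValue"]).__name__)  # int

# Comparing a string with an integer fails, analogous to the incident
# we are about to see in Operate:
try:
    as_string["orderValue"] >= 100
except TypeError as e:
    print("incident:", e)
```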

To advance the instance to our XOR gateway, we’ll quickly create a job worker to complete the “Initiate Payment” task:

Linux

./bin/zbctl create worker initiate-payment --handler cat

Mac

./bin/zbctl.darwin create worker initiate-payment --handler cat

Windows (Powershell)

./bin/zbctl.exe create worker initiate-payment --handler "findstr .*"

And we’ll publish a message that will be correlated with the instance so we can advance past the “Payment Received” Intermediate Message Catch Event:

Linux

./bin/zbctl publish message "payment-received" --correlationKey="1234"

Mac

./bin/zbctl.darwin publish message "payment-received" --correlationKey="1234"

Windows (Powershell)

./bin/zbctl.exe publish message "payment-received" --correlationKey="1234"

In the Operate interface, you should now see that the workflow instance has an “Incident”, which means there’s a problem with workflow execution that needs to be fixed before the workflow instance can progress to the next step.

operate-incident-workflow-view

Operate provides tools for diagnosing and resolving incidents. Let’s go through incident diagnosis and resolution step-by-step.

When we inspect the workflow instance, we can see exactly what our incident is: Expected to evaluate condition 'orderValue>=100' successfully, but failed because: Cannot compare values of different types: STRING and INTEGER

operate-incident-instance-view

We have enough information to know that to resolve this incident, we need to edit the orderValue variable so that it’s an integer. To do so, first click on the edit icon next to the variable you’d like to edit.

operate-incident-edit-variable

Next, edit the variable by removing the quotation marks from the orderValue value. Then click on the checkmark icon to save the change.

We were able to solve this particular problem by editing a variable, but it’s worth noting that you can also add a variable if a variable is missing from a workflow instance altogether.

operate-incident-save-variable-edit

There’s one last step we need to take: initiating a “retry” of the workflow instance. There are two places on the workflow instance page where you can initiate a retry:

operate-retry-instance

You should now see that the incident has been resolved, and the workflow instance has progressed to the next step. Well done!

operate-incident-resolved-instance-view

If you’d like to complete the workflow instance, you can create a worker for the “Ship Without Insurance” task:

Linux

./bin/zbctl create worker ship-without-insurance --handler cat

Mac

./bin/zbctl.darwin create worker ship-without-insurance --handler cat

Windows (Powershell)

./bin/zbctl.exe create worker ship-without-insurance --handler "findstr .*"

Selections & Batch Operations

In some cases, you’ll need to retry or cancel many workflow instances at once. Operate also supports this type of batch operation.

Imagine a case where many workflow instances have an incident caused by the same issue. At some point, the underlying problem will have been resolved (for example, maybe a microservice was down for an extended period of time then was brought back up).

But even though the underlying problem was resolved, the affected workflow instances are stuck until they’re “retried”.

operate-batch-retry

Let's create a selection in Operate. A selection is simply a set of workflow instances on which you can carry out a batch retry or batch cancellation. To create a selection, check the box next to the workflow instances you'd like to include, then click on the blue “Create Selection” button.

operate-batch-retry

Your selection will appear in the right-side Selections panel.

operate-batch-retry

You can retry or cancel the workflow instances in the selection immediately, or you can come back to the selection later–your selection will remain saved. And you can add more workflow instances to the selection after it was initially created via the blue “Add To Selection” button.

operate-batch-retry

When you’re ready to carry out an operation, you can simply return to the selection and retry or cancel the workflow instances as a batch.

operate-batch-retry

If the operation was successful, the state of the workflow instances will be updated to active and without incident.

operate-batch-retry

Giving Feedback And Asking Questions

If you have questions or feedback about Operate, the Zeebe user forum is the best way to get in touch with us. Please use the “Operate” tag when you create a post.

If you’re interested in an Operate enterprise license so that you can use Operate in production, please contact us so that we can notify you when an enterprise edition of Operate becomes available.