Agentic Tasks in BPMN Diagrams

The Problem: BPMN Wasn’t Built for AI Agents

BPMN (Business Process Model and Notation) is the standard for modeling business processes. It handles human tasks, service calls, decision gateways, and message flows. But it was designed for a world where every participant is either a person or a deterministic system.

AI agents are neither. They’re non-deterministic, they need reflection loops, they produce outputs with varying confidence, and they collaborate through patterns that don’t map to traditional BPMN constructs. When you put an AI agent into a swim lane, the diagram tells you nothing about how that agent works, how reliable it is, or how multiple agents should coordinate.

This gap matters. As organizations move from AI-assisted to AI-agentic workflows, the process diagrams that document, communicate, and ultimately drive execution need to capture what agents actually do.

The Research Behind the Extension

A research team at the University of Luxembourg and UOC (Universitat Oberta de Catalunya) addressed this directly in their 2024 paper “Towards Modeling Human-Agentic Collaborative Workflows: A BPMN Extension” (Ait, Cánovas Izquierdo, Cabot). They identified five fundamental gaps:

1. Agent Profiling — Standard lanes and pools can’t express whether a participant is an AI agent, what its role is (manager vs. worker), or how reliable it is. You need metadata beyond a name.

2. Reflection — LLMs don’t always get it right on the first try. Agents need formal reflection loops — self-check, cross-agent review, or human approval — and BPMN has no way to model this without cluttering the diagram with extra tasks and gateways.

3. Confidence — When an agent produces output, there’s an implicit confidence level. Process decisions should be able to depend on this. Standard BPMN conditions operate on data objects, not on the quality of non-deterministic outputs.

4. Multi-Agent Collaboration — When multiple agents work in parallel, they need coordination patterns: voting, debate, role-based delegation, competition. BPMN groups and text annotations are too informal to capture this.

5. Result Merging — When parallel agent branches converge, the merging strategy matters. Majority vote? Leader decides? Combine all outputs? Pick the fastest? Standard BPMN gateways just synchronize — they don’t specify how results are selected or combined.

The paper proposes extending BPMN 2.0 with custom attributes on existing elements — tasks, lanes, participants, and gateways — rather than inventing new element types. This keeps diagrams valid BPMN while carrying the agentic metadata.

From Paper to Practice: Our Implementation

We took the paper’s proposals and built them into a working BPMN editor and process runtime. The key design decision: use the moddle extends pattern from the bpmn.io toolchain (the same mechanism bpmn.io uses for diagram colors) to attach custom attributes directly to standard elements through BPMN 2.0’s extensibility rules. No new shapes, no new XML elements — just an agentic: namespace with attributes.
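As a sketch, the approach looks like this in the XML. The namespace URI and prefix below are illustrative placeholders, not values fixed by the paper:

```xml
<!-- Sketch: agentic metadata rides on standard BPMN 2.0 elements
     as namespaced attributes. The agentic namespace URI here is an
     illustrative placeholder. -->
<bpmn:definitions
    xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
    xmlns:agentic="http://example.org/schema/agentic"
    id="Definitions_1">
  <bpmn:process id="Process_1">
    <!-- A standard task, marked as agentic via an attribute only -->
    <bpmn:task id="Task_Review" name="Review document"
               agentic:taskType="agent" />
  </bpmn:process>
</bpmn:definitions>
```

Because nothing structural changes, any BPMN 2.0 parser that tolerates foreign attributes can still read the file.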

Agent Tasks

Any BPMN task can become an agent task by setting agentic:taskType="agent". Once marked, additional properties become available:

  • Agent ID: References a specific agent definition — connects the diagram to your agent library
  • LLM Model: Overrides the default model for this specific task (e.g., use a larger model for complex reasoning)
  • Reflection Mode: self (agent reviews its own output), cross (another agent reviews), or human (human-in-the-loop approval)
  • Confidence Threshold: Minimum acceptable confidence (0-100%) — the agent keeps reflecting until this is met
  • Max Iterations: Hard cap on reflection loops to prevent infinite retries
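Put together, a fully configured agent task might look like this sketch. Attribute spellings beyond agentic:taskType (agentId, llmModel, reflectionMode, confidenceThreshold, maxIterations) are our assumed names for the properties above, and the values are made up for illustration:

```xml
<!-- Sketch: one agent task carrying all five properties.
     Attribute names beyond agentic:taskType are assumed spellings. -->
<bpmn:task id="Task_FactCheck" name="Fact-check draft"
           agentic:taskType="agent"
           agentic:agentId="fact-checker-v2"
           agentic:llmModel="large-reasoning-model"
           agentic:reflectionMode="cross"
           agentic:confidenceThreshold="85"
           agentic:maxIterations="3" />
```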

In the editor, agent tasks get a distinct visual marker — a purple robot icon — so you can immediately see which tasks are human and which are agentic.

Agent Lanes

Lanes represent participants. With the extension, each lane can specify:

  • Agent Role: manager (makes decisions, resolves conflicts) or worker (executes tasks, reports results)
  • Trust Score: A reliability rating (0-100%) that informs how much weight this agent’s output carries in collaborative decisions

This captures the reality that not all agents are equal. A well-tested, specialized agent might have a trust score of 95%, while an experimental general-purpose agent might sit at 60%. The process can use these scores for routing decisions.
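A lane carrying this metadata might be sketched as follows; the agentRole and trustScore attribute spellings are illustrative assumptions:

```xml
<!-- Sketch: lanes carry role and trust metadata.
     Attribute spellings are illustrative assumptions. -->
<bpmn:laneSet id="LaneSet_Review">
  <bpmn:lane id="Lane_Manager" name="Review Manager"
             agentic:agentRole="manager"
             agentic:trustScore="95" />
  <bpmn:lane id="Lane_Tone" name="Tone Reviewer"
             agentic:agentRole="worker"
             agentic:trustScore="60" />
</bpmn:laneSet>
```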

Agent Pools (Systems)

At the pool level — representing entire organizational units or systems — you can define:

  • System Type: Is this pool a human team, an agentic system, or a hybrid?
  • System Description: The purpose and scope of the agentic system

This gives stakeholders an immediate high-level view: which parts of the process are fully automated, which are human-driven, and which combine both.
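At the XML level, the pool metadata would sit on the participant element, roughly like this (systemType and systemDescription are assumed attribute names):

```xml
<!-- Sketch: system-level metadata on a pool (participant).
     Attribute spellings are illustrative assumptions. -->
<bpmn:participant id="Participant_Review" name="Content Review System"
                  processRef="Process_Review"
                  agentic:systemType="hybrid"
                  agentic:systemDescription="Automated review with human final approval" />
```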

Collaboration Gateways

Standard BPMN gateways split and join flows. With the extension, diverging gateways specify how agents collaborate:

  • Voting: All agents process the task, results are decided by vote
  • Role-based: Manager agent delegates to workers based on expertise
  • Debate: Agents argue positions until consensus
  • Competition: Multiple agents race; best result wins

Converging gateways specify how results are merged:

  • Majority vote: Most common answer wins
  • Leader-driven: Manager agent picks the best result
  • Composed: All outputs are combined into a richer result
  • Fastest: First response wins (for time-sensitive operations)
  • Most complete: The most thorough response is selected
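A matched pair of gateways carrying these settings might look like the sketch below; collaborationMode and mergingStrategy follow the names quoted later in this article, while the specific values are illustrative:

```xml
<!-- Sketch: collaboration mode on the diverging gateway,
     merging strategy on the converging one. Values are illustrative. -->
<bpmn:parallelGateway id="Gateway_Split"
                      agentic:collaborationMode="voting" />
<bpmn:parallelGateway id="Gateway_Join"
                      agentic:mergingStrategy="majority" />
```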

What This Looks Like in Practice

Consider a content review workflow. A document arrives and needs to be checked for accuracy, tone, and compliance.

Traditional BPMN: Three parallel human review tasks, a synchronization gateway, and a manual merge step. The diagram shows what happens but not how reviewers coordinate.

Agentic BPMN: Three agent tasks in parallel lanes (accuracy checker, tone reviewer, compliance scanner), each with its own model, confidence threshold, and reflection mode. A diverging gateway with collaborationMode="role" delegates based on expertise. A converging gateway with mergingStrategy="composed" combines all findings into a unified review report. A manager agent in a separate lane has final approval authority with agentRole="manager".

The diagram now captures the full collaboration pattern. Anyone reading it understands not just the flow, but the decision-making structure.
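A condensed sketch of that review process in XML could read as follows. Sequence flows, lanes, and diagram interchange are omitted for brevity, and attribute spellings beyond those quoted in the text (collaborationMode="role", mergingStrategy="composed", agentic:taskType="agent") are assumptions:

```xml
<!-- Condensed sketch of the content review workflow described above.
     Flows and lanes omitted; attribute spellings beyond those quoted
     in the text are assumptions. -->
<bpmn:process id="Process_ContentReview">
  <bpmn:parallelGateway id="Split" agentic:collaborationMode="role" />
  <bpmn:task id="Task_Accuracy" name="Check accuracy"
             agentic:taskType="agent" agentic:reflectionMode="self"
             agentic:confidenceThreshold="90" />
  <bpmn:task id="Task_Tone" name="Review tone"
             agentic:taskType="agent" agentic:reflectionMode="cross" />
  <bpmn:task id="Task_Compliance" name="Scan compliance"
             agentic:taskType="agent" agentic:reflectionMode="human" />
  <bpmn:parallelGateway id="Join" agentic:mergingStrategy="composed" />
</bpmn:process>
```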

Building on Our Platform

Our platform provides the tooling to make this practical:

Browser-based BPMN Editor — A full bpmn-js modeler with custom agent task rendering, a drag-and-drop palette for agent tasks, and a properties panel for editing all agentic attributes. Token simulation lets you visually trace process execution.

Standard BPMN 2.0 Output — Diagrams save as valid BPMN XML with the agentic namespace. Import them into any BPMN tool, share them with stakeholders who use different software, or version them in git.

Process Runtime Engine — The agentic attributes aren’t just documentation. They map directly to runtime behavior: which agent handles the task, what model it uses, how it validates output, and how multi-agent results are merged.

Human-Agent Hybrid Support — Not every task needs an agent. The same diagram can contain human tasks, automated service tasks, and agent tasks. Lanes can hold humans, agents, or both. The process captures the real-world collaboration.

The Bigger Picture

The shift to agentic workflows isn’t about replacing humans with AI. It’s about making the collaboration explicit, visible, and manageable. When an AI agent sits in a BPMN lane alongside human participants, the process diagram becomes the single source of truth for how work gets done — regardless of who (or what) does it.

The academic research gives us the conceptual framework. The BPMN 2.0 extension mechanism gives us standards compliance. And a working implementation gives us the ability to actually design, simulate, and execute these workflows today.

If you’re exploring how AI agents fit into your business processes, start with the diagram. Make the collaboration visible. The rest follows.


The agentic BPMN extension is based on “Towards Modeling Human-Agentic Collaborative Workflows: A BPMN Extension” by Ait, Cánovas Izquierdo, and Cabot (2024). The open-source reference implementation is available at github.com/BESSER-PEARL/agentic-bpmn.

Frequently Asked Questions

Do I need a dedicated tool to work with these diagrams?
No. The extension uses standard BPMN 2.0 extensibility mechanisms, so diagrams remain valid BPMN. Our platform includes a browser-based editor with built-in support, but any BPMN-compliant tool can read the files.

What if some tasks should stay human?
That's exactly the point. Standard BPMN tasks remain human tasks. Only tasks explicitly marked as agentic get AI behavior. Lanes can hold humans, agents, or both — the diagram captures the real collaboration.

What happens when an agent's output isn't good enough?
The reflection loop handles this. You set a confidence threshold and max iterations. The agent retries with self-reflection, cross-agent review, or human-in-the-loop escalation until the threshold is met or the iteration limit is reached.

Is this based on published research?
Yes. The extension is based on the paper “Towards Modeling Human-Agentic Collaborative Workflows” by Ait, Cánovas Izquierdo, and Cabot (University of Luxembourg / UOC, 2024). We extended their approach with practical runtime attributes.