May, 2025

Boomi chief warns on agentic AI chaos as firms undercook governance risk, process oversight collapses

What you need to know:

  • At its global user conference in Dallas, Boomi unveiled new products and partnerships designed to position it as the orchestration backbone for agentic AI. It says it wants to help usher in an era of autonomous software agents capable of executing tasks across enterprise systems.
  • However, with the agentic hype cycle burning red hot, industry critics like UC Berkeley’s Stephen Klein say agentic AI is still fiction, with no credible real-world deployments. Current systems, he argues, are prompt-reactive, not truly autonomous, and heavily scaffolded by humans.
  • At the other extreme of the debate, for those arguing we are on the cusp of a world of dramatically more efficient business processes, there’s a growing risk of chaos as enterprises rush into agentic AI without the foundational orchestration, semantic layers, and control frameworks required to manage millions of digital agents.
  • Boomi CEO Steve Lucas says AI will democratise software development. Industry experts, not coders, will soon build powerful, no-code agents tailored to their verticals – from logistics to accounting – culling the steps required to program business processes by orders of magnitude.
  • And get ready for some serious business model biffo. While supporting the emerging Model Context Protocol (MCP) approach for real-time agent execution, Lucas warns it’s a “ruse” backed by major model vendors to control user interfaces and data pipelines, while one of his key execs cautions it comes with security and bloat risks.
  • There’s also a compression shock incoming as workflow acceleration driven by agentic AI collapses oversight layers. Boomi’s Mike Bachman warns of rising “spaghetti factory” complexity and urges organisations to clean up data, understand processes, and build governance first.
  • Agentic AI needs a semantic backbone: Boomi says it’s investing early in a shared meaning infrastructure that will enable agents to interpret metadata across systems and act coherently. It sees this as key if brands are to scale from 10 to 10,000 AI-powered agents per enterprise – or higher.

To be agentic an AI must do more than complete tasks. It must generate and pursue its own goals, adapt plans over time without human prompts, operate with coherence and persistence, and exhibit autonomy in dynamic, real-world environments.

Stephen Klein, CEO, Curiouser.AI

The hype is hard to miss. Walk the halls of any AI vendor conference in 2025 and you’ll hear the same promises: billions of autonomous AI agents transforming everything from marketing to procurement. The noise is only going to get louder and the pitch more urgent.

But before marketers go too deeply down the agentic AI rabbit hole, there’s a foundational question they should consider asking themselves, says Mike Bachman, Boomi Innovation Group’s Head of Architecture and AI Strategy who spoke with Mi3 at his company’s global user conference in Dallas last week.

Where’s the agency?

“In every speaking session I do I ask people to define an agent for me. And invariably, I get a different answer from everyone,” says Bachman. “So we first have to talk about agency and help people understand what an agent is.”

He is not alone in this view, although Bachman, unsurprisingly, is positive about the agentic state of play and bullish about Boomi’s role in the emerging agentic world.

Not so Stephen Klein, a lecturer in AI ethics at the University of California, Berkeley, and co-founder and CEO of Curiouser.AI.

With the tech sector hype cycle at full throttle, Klein is a contrarian. He is scathing about the kind of messaging coming from the technology sector about agentic AI.

On LinkedIn this week, he wrote, “At this moment, there is no credible, peer-reviewed evidence that agentic AI exists in any real-world deployment. Agentic AI is absolutely a fabricated fiction designed to confuse you and scare you into spending money.”

He went further, describing it as a term being used recklessly by marketers, consultants, and startups alike.

However, he did provide the kind of specificity around what it means to be agentic that Bachman believes executives need.

Per Klein: To be agentic, “An AI must do more than complete tasks. It must generate and pursue its own goals, adapt plans over time without human prompts, operate with coherence and persistence, and exhibit autonomy in dynamic, real-world environments.”

“Today’s best-known systems (GPT-4, Claude, Gemini, open-source agents, etc.) are prompt-reactive, not self-directed, tool-augmented, not autonomous, and heavily scaffolded by humans to function.”

“They are essentially statistical parrots with plugins,” he said, noting that a Stanford & UC Berkeley study about foundational model task performance found current models fail at sustained reasoning, planning, and adaptive autonomy.

I’ll just say this: MCP is a ruse. It’s backed by Anthropic, OpenAI, and Google, and anytime Anthropic, OpenAI and Google agree on anything you should totally not trust that thing.

Steve Lucas, CEO, Boomi

Klein’s views are in the minority and largely drowned out by the agentic chorus rising up from the tech sector, which takes a less purist view of the definition, and which, in fairness, is racking up impressive efficiency gains with this latest generation of algorithms.

But even assuming the tech sector is correct about a huge incoming transformation, the problem is that enterprises are about to unleash vast digital workforces based on software agents often without the basic semantic scaffolding, orchestration layers or governance frameworks to control the explosion.

The result, as Bachman colourfully describes it, could be a “spaghetti factory” of chaos.

The impact of agentic AI was a core theme at the Boomi World conference in Dallas last week, where the company announced a series of product innovations and new industry partnerships to help its clients transition to a software world it believes is emerging at an extraordinary pace.

Boomi sees its role as the agentic AI integration backbone. That’s not a million miles removed from its traditional role, doing things like seamlessly connecting apps such as a NetSuite ERP to a Shopify ecommerce platform.

But just like your marketing stack relies on systems talking to each other (CRM, email, analytics), agentic AI relies on real-time, seamless connections between dozens of apps and data sources. It needs an infrastructure that lets AI agents tap into these systems, understand the context, and take intelligent action.

Boomi also provides the orchestration layer, akin to a stage manager behind the scenes, ensuring the actors know where to go, what script to read, and how to interact with others on stage (like ERP, supply chain, or CX systems). This orchestration is critical to avoid chaos and ensure agents make decisions aligned with your business rules.

According to Boomi CEO Steve Lucas – familiar to many Australian CMOs from his time as the chief executive of Marketo and then as a senior Adobe executive – the agentic AI era is calling time on traditional deterministic software.

In his keynote, Lucas said traditional software development is pivoting away from the logic that has underpinned business automation for decades, to the new agentic model.

Historically, he said, “Deterministic processes rely on ‘if-then-else’ – if this happens, do that.”

“But software with agency has the ability to change its mind. It’s not deterministic. An agentic process can create new possibilities. You can have many agents working together in a single canvas.”

What does that mean in practical terms?

“We will go from a process with 1,000 steps to a process with 10.”

Under this model, a user describes the intent, selects from templates, and builds an agent without code. Each agent can be customised with specific tasks, governance layers, and contextual controls: “You can add guardrails, you can add cost controls. It has long-term memory out of the box.”

For example, says Lucas, “When the process breaks – like if UPS can’t ship an order – I can have an AI agent that finds a new shipper for me. Not a human, not a broken process. The agent takes the order data and customer requirements, including the receiving times for their warehouse, looks at all the different shipping options, creates a new one, and the process continues without fail.”
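The contrast Lucas draws can be sketched in a few lines of code. The following is a purely illustrative example – the carrier names, function names, and scoring rule are all invented for this article – showing a hard-coded deterministic fallback next to an agent-style process that weighs live options against the order’s constraints.

```python
# Hypothetical sketch of deterministic vs agent-style shipping fallback.
# All carrier names and selection rules are invented for illustration.

def deterministic_fallback(carrier_up: bool) -> str:
    # Classic "if this happens, do that" logic: one hard-coded branch.
    if carrier_up:
        return "UPS"
    return "escalate_to_human"

def agent_style_fallback(order: dict, options: list) -> str:
    # An agent-style process evaluates live options against the order's
    # constraints (e.g. the warehouse receiving window) and picks a fit.
    viable = [o for o in options if o["delivery_days"] <= order["max_days"]]
    if not viable:
        return "escalate_to_human"
    return min(viable, key=lambda o: o["cost"])["name"]

order = {"max_days": 3}
options = [
    {"name": "FedEx", "delivery_days": 2, "cost": 18.0},
    {"name": "DHL", "delivery_days": 3, "cost": 12.5},
    {"name": "SlowPost", "delivery_days": 7, "cost": 4.0},
]
print(deterministic_fallback(carrier_up=False))  # escalate_to_human
print(agent_style_fallback(order, options))      # DHL
```

The deterministic version breaks the moment reality steps outside its single branch; the agent-style version absorbs new options without a rewrite, which is the efficiency gain Lucas is pointing at.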

Before agentic AI, such a problem would’ve demanded exhaustive programming or manual escalation. And it’s amidst this huge knot of complex, tangled business rules, that Lucas says agentic AI offers massive and occasionally extraordinary efficiency gains.

Use cases are already emerging. Last week, for instance, we noted an example from Forrester VP and Principal Analyst Sam Higgins who told Mi3, “I was just onsite with a global professional services business who built a vertical AI solution to identify specific offerings from across a plethora of service lines – from classic consulting to audit – given a handful of client account documents as input. The result was a 99 per cent reduction in the effort taken to analyse a client’s needs, match their services, and produce a custom proposal.”

Agent builders, not coders

There is another, perhaps more profound implication that runs contrary to the current narrative of a knowledge worker employment slaughterhouse, and that is how agent creation will quickly extend beyond technologists.

Per Lucas, “Instead of us sitting around writing code, we will create an entirely new class of jobs where people who have expertise in accounting can create ridiculously powerful agents without code.”

“Whether it’s accounting, logistics, manufacturing — the people that have industry expertise are now empowered to create the most powerful agents integrated into business processes.”

It’s a future where entire enterprise operations are stitched together by collaborating, learning AI entities created by subject matter experts, no longer reliant upon brittle rule sets that break apart at the first sign of an unexpected customer edge case.

And all of this also assumes the creation of a fully functioning, integrated, managed, monitored, orchestrated, and secure ecosystem of perhaps millions of agents inside any one company, and billions more across the wider economy, all reliant on standards, protocols, and control mechanisms that don’t exist today, or are still very immature.

It’s a big ask, but there are signs it is starting to emerge.

For instance, to mitigate governance risks, Boomi integrates its AI ecosystem into what it calls its Control Tower.

“So I can watch every agent, I can monitor, I can understand if they experience things like drift or hallucination,” says Lucas. “I can determine if agents are accessing APIs they should not be accessing, or databases or applications. I can apply policies. I can revoke privileges. I can turn on an agent. I can turn off an agent. This is the end-to-end solution.”
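Boomi has not published the internals of Control Tower, but the kind of policy enforcement Lucas describes – a registry that authorises, restricts, and switches off agents – can be sketched generically. This is an illustrative toy, not a reflection of Boomi’s actual implementation; every class and API name here is invented.

```python
# Generic sketch of agent policy enforcement. Illustrative only; this does
# not describe Boomi Control Tower's real architecture or API.

class AgentRegistry:
    def __init__(self):
        self.policies = {}   # agent name -> set of allowed API names
        self.enabled = {}    # agent name -> on/off switch

    def register(self, agent: str, allowed_apis: set) -> None:
        self.policies[agent] = set(allowed_apis)
        self.enabled[agent] = True

    def authorize(self, agent: str, api: str) -> bool:
        # Deny calls from disabled agents or to APIs outside the policy.
        return self.enabled.get(agent, False) and api in self.policies.get(agent, set())

    def revoke(self, agent: str, api: str) -> None:
        # Strip a single privilege without disabling the whole agent.
        self.policies.get(agent, set()).discard(api)

    def turn_off(self, agent: str) -> None:
        self.enabled[agent] = False

reg = AgentRegistry()
reg.register("shipping-agent", {"orders.read", "carriers.query"})
print(reg.authorize("shipping-agent", "carriers.query"))  # True
reg.turn_off("shipping-agent")
print(reg.authorize("shipping-agent", "carriers.query"))  # False
```

The point of the sketch is the shape, not the code: every agent call passes through one choke point where policy can be checked, logged, and revoked – the “hub” Bachman returns to later.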

But even the Boomi chief accepts there is a lot more to be done across the wider ecosystem.

“I’m confident there will be 100 different protocols within a year, all over the place. Our job will be to figure out how many of those we can support.”

And it will raise some uncomfortable questions for vendors like Boomi.

That’s because in this fluid, messy, and chaotic moment of industry disruption, vendors are also laser-focused on business model considerations.

Take the comments Lucas made to Mi3 about the Model Context Protocol (MCP), which Boomi supports and which featured as part of the key announcement set at its global user conference last week.

Think of MCP as a new way for AI agents to get things done. Instead of relying on older methods where AI pulls in background information from documents (like what the techies call RAG – Retrieval Augmented Generation), MCP lets AI go straight to the source such as your systems, apps, or APIs to take action or retrieve live data in real-time. It’s faster, more flexible, and in theory opens the door to truly interactive AI experiences.
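Under the hood, MCP is built on JSON-RPC 2.0: the AI application sends structured requests to a server that fronts a system or data source. A minimal sketch of what a tool invocation message looks like follows – the tool name and arguments are invented for this example, and real deployments layer transport, capability negotiation, and authentication on top.

```python
import json

# Sketch of an MCP-style tool-call request. MCP uses JSON-RPC 2.0 framing;
# the tool name "lookup_order" and its arguments are invented here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_order",                 # hypothetical server-exposed tool
        "arguments": {"order_id": "SO-1042"},   # live parameters, not retrieved text
    },
}
payload = json.dumps(request)
print(payload)
```

Contrast that with RAG, where the model is handed chunks of retrieved text: here the agent is issuing a structured call against a live system, which is what makes real-time action possible – and what makes the security and control questions below so pointed.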

Lucas told Mi3 Australia, “As for the MCP thing, I spared everyone my rant. I’ll just say this: MCP is a ruse. It’s backed by Anthropic, OpenAI, and Google, and anytime Anthropic, OpenAI, and Google agree on anything, you should totally not trust that thing.”

“Every single one of these model vendors wants to script the UI, take the eyeballs away from enterprise software, and bring them into their own application. That’s absolutely happening.”

“So, does it need to exist? Yes, it does. Is there an ulterior motive? A thousand percent,” he said.

That first multi-million-dollar-a-month bill when you were expecting $10,000, that will make headlines

Mike Bachman, Boomi Innovation Group’s Head of Architecture and AI Strategy

Move fast, break

The current MCP debate is emblematic of another issue: the compromises the technology sector is willing to make to innovate quickly. Even companies like Boomi that support MCP are open about its potential limitations.

On that point, Boomi’s Bachman said, “There are two big issues with MCP that I see personally, and some of my colleagues may think there are more. One is security. The second is bloat.”

“So on the security side, anytime you put authentication with the ability to make calls to different subsystems in the same place without being able to separate those in a composable way, you’re introducing the security threat. So architecturally, it’s important to be able to separate all of that.”

He’s also worried about bloat leading to complexity and cost. “The more tools, the more data, the longer it’s going to take that agent to be able to traverse the answers… That’s going to take your runtime to a greater level.”

“That bloat leads to expense… These tokens aren’t free, or the GPUs that you’re going to need to run your models on aren’t free.”
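Bachman’s bloat warning is easy to quantify with rough arithmetic. The figures below are assumptions invented for illustration – not vendor pricing or Boomi numbers – but they show how stuffing more tools and data into every agent call multiplies the token bill.

```python
# Illustrative back-of-envelope for tool bloat. All figures are assumptions.
PRICE_PER_1K_TOKENS = 0.01  # assumed blended price in dollars per 1,000 tokens

def monthly_cost(calls_per_day: int, tokens_per_call: int) -> float:
    # 30 days of calls, priced per thousand tokens consumed.
    return calls_per_day * 30 * tokens_per_call / 1000 * PRICE_PER_1K_TOKENS

# Same call volume; the only change is context size per call.
lean = monthly_cost(calls_per_day=50_000, tokens_per_call=2_000)      # few tools
bloated = monthly_cost(calls_per_day=50_000, tokens_per_call=40_000)  # many tools loaded
print(f"lean: ${lean:,.0f}/month, bloated: ${bloated:,.0f}/month")
```

Under these assumed numbers, a 20x increase in per-call context turns a roughly $30,000 monthly bill into roughly $600,000 – the same workload, just a fatter prompt, which is exactly the bill-shock trajectory Bachman describes.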

Enterprises already burned by over-engineered CDPs could also be about to experience bill shock on steroids.

“That first multi-million-dollar-a-month bill when you were expecting $10,000, that will make headlines. And once that happens, then it’s going to slow progress down for a minute. But still, you can’t uneat the apple. Entropy flows in one direction.”

Compression shock

Another overlooked consequence of agentic AI is workflow compression, a phenomenon Bachman sees accelerating across software engineering and enterprise operations.

Unfettered and uncontrolled change wrought by agentic AI only adds to the risk.

“Coder productivity has increased quite a bit [for instance]. So you’re able to ship code a lot faster when you use tools like Copilot or Cursor, anything like that.”

But that speed hides a structural risk.

“Anecdotally, I can say that one thing we have to watch out for is a loss of critical thinking,” he says. “One can get lazy and start to trust what’s coming back. And I think we’re all guilty of that because we’re human, and we’ve got a lot of pressure to produce [results].”

The concern isn’t academic. Agentic systems don’t just speed up execution. They remove the friction points where oversight traditionally occurred. With marketing workflows collapsing from hours to minutes, review layers and control checks will also likely disappear.

Preparing for change

Bachman argues that three foundations must be laid before organisations go big on agents:

  • “Understand your data”: “Data is a mess in a lot of different enterprises, so get your data house in order now.”
  • “Understand your processes”: “If you understand your processes… you’re going to find that determinism is still an okay approach.”
  • “Prioritise”: “See what you know, see what needs improving, and see what needs to stay the same. Start easy.”

At Boomi, the orchestration layer is everything, says Bachman. “We didn’t want to just stop there. This is why it’s appropriate to manage agents, govern agents, and be able to essentially observe what these agents are actually [doing].”

“If you don’t have a hub, and governance and control in the middle, that is not going to be spaghetti on your plate, that is going to be a spaghetti factory that you’ve never witnessed the likes of before.”

Shared meaning

Ultimately, the agentic ecosystem can’t scale without shared meaning. “In general, the semantic layer is going to be important,” Bachman says. “So the ability for agents to be able to understand what is in the metadata of every other data system that’s around.”
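What a minimal semantic layer might look like in practice is a shared mapping that lets any agent resolve differently named fields across systems. The sketch below is a toy – the system names, field names, and records are all invented – but it captures the idea of agents interpreting another system’s metadata through shared meaning rather than hard-coded translations.

```python
# Toy semantic layer: map each system's local field names onto shared
# concepts so agents read metadata consistently. All names are invented.
SEMANTIC_MAP = {
    "customer_id": {"erp": "CUST_NO", "crm": "contactId", "shop": "buyer_ref"},
    "order_total": {"erp": "AMT_NET", "crm": "dealValue", "shop": "grand_total"},
}

def resolve(concept: str, system: str) -> str:
    # Translate a shared concept into the field name a given system uses.
    return SEMANTIC_MAP[concept][system]

def read_field(record: dict, concept: str, system: str):
    # Let an agent read any system's record via the shared concept.
    return record[resolve(concept, system)]

shop_record = {"buyer_ref": "C-881", "grand_total": 249.90}
print(read_field(shop_record, "customer_id", "shop"))  # C-881
print(read_field(shop_record, "order_total", "shop"))  # 249.9
```

With one shared map, every new agent inherits the same understanding of every connected system; without it, each of those 10,000 agents would need its own ad-hoc translations – the spaghetti factory again.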

Boomi is betting early, investing in orchestration and governance while others focus on building agents. “While everybody’s building out agents, we’re building for that… we’re building a management and governance framework, network and platform for all of these agents that are going to be built.”

That means preparing for a world where “a digital twin of yourself… 10 of them, 1,000 or more of them doing the tasks that you would do today if you could just clone yourself.”

But to get there, enterprises need a language, a map, and a serious rethink of their stacks.