August 2025

Lendi Group’s Project Aurora bets big on an agentic AI-led mortgage market; CEO says “agents managing humans” is coming, but as coaches, not bosses

What You Need to Know

  • By 2026, Lendi Group plans to have every mortgage processed, monitored and managed by agentic AI, with humans only handling relationship-driven, complex cases.

  • After building Australia’s first fully online mortgage application in 2016 and acquiring Aussie Home Loans in 2020, Lendi Group is shifting from automating parts of the process to making AI the organising principle across the business.

  • CEO David Hyman sees agentic AI as a productivity amplifier. Voice AI “Max” now handles early-stage buyer calls and admin tasks, freeing brokers to focus on negotiation, empathy and complex structuring. It’s not about job cuts, but rather redeployment to higher-value work.

  • With ASIC yet to formalise AI rules, Lendi Group is proactively embedding transparency, audit trails and human override points into all AI workflows to ensure regulatory readiness and commercial resilience.

  • Its Aurora platform is designed to thrive amid shifting standards, using abstraction layers so tools and LLMs can be swapped without breaking the system.

  • The firm is pursuing security-first design: Aurora operates under strict least-privilege principles, avoids storing secrets in agents, and enforces scoped tokens and per-tool allowlists, anticipating both potential disasters and future regulatory demands.

  • Meanwhile, observability and control are baked into the platform, and CTO Devesh Maheshwari says future-proofing is achieved through modularity.

  • Work is measured in “units” rather than headcount, allowing tasks to be dynamically allocated to AI or humans based on complexity, scaling capacity up or down without mass hiring.

  • The firm’s multi-agent mortgage system uses abstraction layers to swap out LLMs and orchestration frameworks as standards evolve, avoiding lock-in to today’s protocols.

  • It’s a world where AI assistants monitor loans 24/7, flag refinancing opportunities, track market changes, and proactively intervene to improve customer outcomes and loyalty.

  • Lendi Group’s plans have sector-wide implications. If successful, the company could set a blueprint for embedding AI at the operational core of financial services, but faces risks from regulation, trust, and rapid tech change.

We’re certainly not getting into the world of building out LLMs. Where we’re focusing is fine-tuning and reinforcement learning based on unique data assets we have.

David Hyman, CEO, Lendi Group

Australia’s $2.26 trillion mortgage market runs on human brokers and creaky processes. Lendi Group, a Sydney-based lender, aims to change that. The business was formed through a 2020 merger with Aussie Home Loans, and since then Lendi Group has grown the combined mortgage portfolio from $75 billion to more than $100 billion. Now it is betting that agentic AI will drive the next leap in growth, in a project it has dubbed “Aurora”. By June 2026 the firm wants every mortgage processed, monitored and managed by software agents, with humans stepping in only for the messy, relationship-heavy work.

In 2013, getting a mortgage in Australia was as much a test of patience as of creditworthiness. Applicants shuttled between in-person meetings, faxes and paper forms, all to secure a loan that might take weeks to approve. That year, David Hyman co-founded Lendi Group to drag the process into the digital age. A decade on, he wants to do it again, this time by replacing much of the human process with a platoon of tireless, compliant, and conversational AI agents.

The ambition is stark: by June 2026, Lendi Group will be “AI Native.” That means the company’s goal is to make agentic AI the standard approach in every workflow, decision and customer interaction, Hyman told staff earlier this year. He described the coming shift as an opportunity for the company to surge ahead of competitors. The pledge is not a rebrand of existing automation. It is a structural shift from AI as an add-on to AI as the organising principle of the business.

Lendi Group’s first act was to digitise the mortgage process. By 2016, it had built Australia’s first fully online application, integrating directly with banks’ credit engines and delivering digital ID and consent long before incumbents stirred. Its second act began in 2020 with the acquisition of 55 per cent of Aussie Home Loans from CBA, which retained 42 per cent.

That deal also created an unusual operating model: “One kitchen, many dining rooms,” as Hyman puts it, a single leadership team, unified data and technology, but two brands. Lendi became the Formula One garage, free to innovate, while Aussie served as the high-street retail channel. The arrangement allowed Hyman’s team to pilot high-risk ideas and, if successful, ship them to Aussie’s larger base.

AI as a workforce multiplier, not a job killer

Hyman unpacked one of the most politically charged aspects of AI adoption: its impact on jobs. “We decided early on to be upfront with the whole organisation: we’re going deep into AI, and it’s coming whether we like it or not. We’ve even said that if we don’t do this, we might not have a business in five or ten years.”

But Hyman’s faith in agents goes far beyond the typical industry assessments. He told Mi3 that some staff will eventually report to agents.

“We’ve gone through the change, and it’s a change management process we’ll need to keep working through. Our perspective is that we’re here to help you. We’re building a world-class team that’s discovering this as it develops, because unless you work for OpenAI or one of the big hyperscalers, there’s really no such thing as an AI expert.”

The modern AI landscape is a recent creation; two years ago it scarcely existed, at least in general business consciousness. Yet the firm is rapidly cultivating expertise, a process that Hyman says bolstered staff confidence.

“We now have several in-market, production-grade AI products; Voxi is one of them.”

Voxi, Lendi Group’s AI sales coach, is no demo bot. Built as a production-grade agent, it parses broker performance, customer exchanges and sales data in real time, then delivers coaching, prompts and performance feedback straight to sales leaders and their teams. Today it serves human managers, surfacing opportunities, suggesting next best actions and flagging weak spots. Tomorrow, says Hyman, the hierarchy may invert and agents like Voxi could oversee aspects of human work, acting as “dotted-line” managers. “In the future, we could even leapfrog to AI agents managing humans,” he says, though he suggests this would be as coaches, not bosses.

“Using Voxi as an example, if Voxi is a sales coach, it might work directly with Brad, Dave, and Andrew, the sales leaders, giving them direction they’d typically receive from a senior sales leader.”

Early experiences appear to bear out Hyman’s view about AI and employment. Just ask Max.

Digital broker

Max is Lendi Group’s first production-grade generative-AI “digital broker.” It shoulders the drudgery of the mortgage trade, fielding enquiries, assessing needs, matching borrowers to lenders and readying applications, without human hand-holding, yet always within strict compliance and service bounds. Equally at ease with chit-chat and checklists, it collects documents, pre-fills forms and runs eligibility checks in real time, plugging results directly into lenders’ systems.

In practice, Max works in tandem with human brokers, who handle tricky cases or relationship-building, while the bot dispatches the routine and keeps both clients and colleagues informed.

The manager who once supervised 20 call-centre staff now oversees Max’s “output controls and config”. Hyman says the people themselves were redeployed to higher-intent conversations where empathy, negotiation and financial problem-solving make the difference between a lead and a settlement.

Rather than removing humans, the AI widened the aperture for them to do higher-value work. The busywork of logging calls, chasing missing forms and triaging basic questions is now the domain of agents like Max. Humans focus on relationship-building, trust, and navigating the more labyrinthine elements of a client’s borrowing journey.

“Max works with our customers and helps them. This is at the top of the funnel…this might be people who have played with some of our tools, and they’re in the buying process. Max has a conversation with them [for instance] about any properties they’ve found that they like. Can we help them find a property? Is there anything about their finances or pre-approval we can help with? And [Max] has that conversation with them on the phone.”

Its purpose is straightforward: when a customer signals a strong interest in speaking to a broker, it ensures a seamless exchange and promptly transfers the call, putting both parties in direct contact.

In many ways, it’s like the early days of cloud. The promise was clear, but the scaffolding (interoperability standards, security frameworks, observability) took years to mature.

Lendi Group's Chief Technology Officer, Devesh Maheshwari

Compliance by design

Mortgage broking operates under tight regulation; introducing AI agents into the chain risks introducing new failure points. ASIC has begun to whisper its expectations, from disclosure requirements to ensuring customers know when they are speaking with a machine. But the formal framework is incomplete.

Lendi Group’s solution is “compliance by design”, building codified controls, audit trails and human override points into the software from the outset. “Governance and compliance are not new concepts,” Hyman says. “When new technology comes out, I think where people fall afoul is they ignore it.”

Rather than waiting for ASIC to finalise AI guidelines, the company applies the logic of existing rules to the new workflows, drawing on the palimpsest of past compliance regimes. Customers are told when they’re talking to an AI. Every decision is logged. Every agent has escalation protocols baked in.
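
Although Lendi Group has not published Aurora’s internals, the controls described above map onto a fairly simple pattern. The sketch below is illustrative only, with invented names (AuditTrail, handle_enquiry) and an arbitrary confidence threshold; it shows how disclosure, decision logging and a human override point can be built into an agent workflow from the outset.

```python
# Hypothetical sketch of "compliance by design" controls. None of these
# names are Lendi Group's; they illustrate disclosure, audit logging and
# a human override point wired into an agent workflow.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditTrail:
    """Append-only record of every agent decision."""
    entries: list = field(default_factory=list)

    def log(self, agent: str, action: str, detail: dict) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "detail": detail,
        })


def handle_enquiry(message: str, audit: AuditTrail, confidence: float) -> str:
    """Answer a customer enquiry with disclosure, logging and escalation."""
    # Disclosure: the customer is always told they are talking to an AI.
    disclosure = "You are speaking with an AI assistant."

    # Human override point: low-confidence answers are escalated, not sent.
    if confidence < 0.7:  # threshold is illustrative
        audit.log("digital_broker", "escalate_to_human", {"message": message})
        return f"{disclosure} I'm handing this to one of our brokers."

    audit.log("digital_broker", "answer", {"message": message, "confidence": confidence})
    return f"{disclosure} Here's what I found..."


audit = AuditTrail()
print(handle_enquiry("Can I borrow $800k on my income?", audit, confidence=0.4))
```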

This proactive approach is partly self-interest: AI-led processes without compliance credibility are commercially brittle. It is also a hedge against being forced to retrofit guardrails later, at greater cost and disruption.

Elastic work units

One of the more disruptive shifts is a change in measurement. Instead of counting “seats”, the number of human operators, the company counts “work units”. Each unit is a discrete task: reviewing a payslip, issuing a follow-up request, or checking a valuation.

In this model, work is routed to whoever, or whatever, can execute it most efficiently. Routine document checks? An AI agent can process them in seconds. Complex loan structuring? That goes to a seasoned broker.

This elasticity allows the business to surge capacity during peak demand without hiring in waves, and to scale down when the load eases. It also changes management: leaders coach agents, software ones, in much the same way they would a human team, adjusting parameters and refining performance.
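
As a rough illustration of the “work unit” idea, the hypothetical sketch below routes discrete tasks to an AI agent or a human broker based on task type and complexity. The task names and thresholds are assumptions, not Lendi Group’s actual taxonomy.

```python
# Hypothetical "work unit" routing: discrete tasks go to an AI agent or a
# human queue based on complexity, not headcount.
from dataclasses import dataclass


@dataclass
class WorkUnit:
    task: str          # e.g. "review_payslip", "structure_complex_loan"
    complexity: int    # 1 (routine) to 5 (judgement-heavy)


def route(unit: WorkUnit) -> str:
    """Return which pool should execute this unit."""
    routine = {"review_payslip", "issue_follow_up", "check_valuation"}
    if unit.task in routine and unit.complexity <= 2:
        return "ai_agent"        # processed in seconds by software
    return "human_broker"        # relationship- or judgement-heavy work


# Scaling capacity becomes a matter of adding agent instances, not hiring in waves.
print(route(WorkUnit("review_payslip", 1)))          # -> ai_agent
print(route(WorkUnit("structure_complex_loan", 5)))  # -> human_broker
```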

What scaffolding?

The agentic world is still a standards wild west. Competing protocols govern how agents talk to each other, invoke tools, and secure data.

Today’s bet might be tomorrow’s orphaned standard. For instance, the industry very rapidly coalesced around Anthropic’s Model Context Protocol (MCP), which enables AI models to plug into outside tools and data in a safe, structured way. That’s essential to help build the kind of multi-agent world that agentic AI promises. On the flip side, Google’s A2A (agent-to-agent) protocol, which lets agents talk to each other, has not achieved the same traction. In such a world, brands do not want to bet on the wrong horse.

Aurora’s architecture is built to avoid that trap. It uses abstraction layers so that components, from the large language models to the orchestration frameworks, can be swapped without tearing out the core systems. Frameworks like LangChain can be replaced; LLM providers can be changed with minimal disruption.
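
A minimal sketch of that kind of abstraction layer, assuming hypothetical provider classes rather than Aurora’s real components: the business logic depends only on a small interface, so swapping the model vendor is a change at the composition root, not a rewrite.

```python
# Provider-agnostic LLM interface (illustrative names only). Application
# code depends on the Protocol, never on a specific vendor SDK.
from typing import Protocol


class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor SDK; stubbed here.
        return f"[openai] {prompt}"


class AnthropicProvider:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"


def summarise_application(llm: LLMProvider, application_text: str) -> str:
    """Business logic codes against the interface, never the vendor."""
    return llm.complete(f"Summarise this mortgage application: {application_text}")


# Swapping providers is a one-line change where the system is assembled:
print(summarise_application(OpenAIProvider(), "PAYG applicant, two incomes"))
print(summarise_application(AnthropicProvider(), "PAYG applicant, two incomes"))
```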

Lendi Group’s Chief Technology Officer, Devesh Maheshwari, acknowledges that the agentic AI space is still early in its infrastructure cycle.

“In many ways, it’s like the early days of cloud. The promise was clear, but the scaffolding (interoperability standards, security frameworks, observability) took years to mature.”

But he says, “With Aurora we’ve deliberately designed for that reality. Our approach isn’t to wait for perfect conditions, but to build in a way that’s modular, adaptable, and standards-ready.”

That will see the online lender investing heavily in areas where even much of the technology industry is still playing in the sandbox.

Its architecture begins with a monkish devotion to security: agents and workflows operate under least-privilege access, every action is auditable, and the system slots neatly into whatever cybersecurity orthodoxy emerges next. And security proceeds on the assumption that disasters are inevitable: “Security that assumes the worst. Capabilities are brokered, not embedded. Secrets never sit inside the agent. We use scoped tokens, short-lived authentication, and per-tool allowlists. It’s designed for human as an overflow.”
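
The brokered-capability pattern Maheshwari describes can be sketched roughly as below. The broker, token lifetime and allowlist are invented for illustration; the point is that the agent never holds a long-lived secret and can only reach tools it is explicitly allowed to call.

```python
# Illustrative broker for short-lived, scoped tool tokens with a per-tool
# allowlist. Names, lifetimes and tools are assumptions, not Aurora's.
import secrets
import time

TOKEN_TTL_SECONDS = 300  # short-lived credentials (illustrative value)

AGENT_TOOL_ALLOWLIST = {
    "digital_broker": {"credit_check", "document_upload"},
    "sales_coach": {"crm_read"},
}

_issued: dict[str, tuple[str, str, float]] = {}  # token -> (agent, tool, expiry)


def broker_token(agent: str, tool: str) -> str:
    """Issue a scoped token only if the tool is on the agent's allowlist."""
    if tool not in AGENT_TOOL_ALLOWLIST.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    token = secrets.token_urlsafe(16)
    _issued[token] = (agent, tool, time.time() + TOKEN_TTL_SECONDS)
    return token


def call_tool(token: str, tool: str) -> str:
    """The tool gateway checks scope and expiry before doing anything."""
    agent, scoped_tool, expiry = _issued.get(token, (None, None, 0.0))
    if scoped_tool != tool or time.time() > expiry:
        raise PermissionError("invalid, mis-scoped or expired token")
    return f"{agent} invoked {tool}"


print(call_tool(broker_token("digital_broker", "credit_check"), "credit_check"))
```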

Constant surveillance, of the software rather than the staff, is also built in, allowing operators to monitor, tweak and discipline AI agents in real time.

Maheshwari told Mi3 that when it comes to future-proofing for regulation and interoperability, “We’re building against open interfaces where possible, so Aurora can connect into the broader ecosystem as it matures, without costly re-platforming.”

Lendi Group’s strategy is to adhere to standards without being shackled by them, using open schemas and common interfaces to treat interoperability as a feature rather than an endless integration project.

“The way we see it, the absence of mature scaffolding is actually an opportunity. It forces you to design for resilience from day one, and it creates room for innovators to set the bar for how Agentic AI should operate securely, transparently, and at enterprise scale,” he said.

“With Aurora, we’ve built it as a thinking system with a control plane, i.e. plan, verify, act, with security, observability and governance built in, so it works now and stays ready as standards mature.”
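
A hedged, minimal reading of that “plan, verify, act” loop might look like the following, with invented governance rules and a stubbed planner standing in for whatever Aurora actually uses: each planned step is checked against policy before execution, and every outcome is logged.

```python
# Hypothetical "plan, verify, act" control plane with governance checks
# and observability hooks. Rules and step names are invented.
from typing import Callable

GOVERNANCE_RULES: list[Callable[[str], bool]] = [
    lambda step: "delete_customer_data" not in step,     # hard stop
    lambda step: not step.startswith("external_transfer"),
]


def plan(goal: str) -> list[str]:
    # A real planner would call an LLM; here the plan is stubbed.
    return ["collect_documents", "run_eligibility_check", "prepare_application"]


def verify(step: str) -> bool:
    """Every step must pass all governance rules before it runs."""
    return all(rule(step) for rule in GOVERNANCE_RULES)


def act(step: str) -> str:
    return f"done: {step}"


def control_plane(goal: str) -> None:
    for step in plan(goal):
        if not verify(step):
            print(f"[observability] blocked: {step}")  # escalate to a human
            continue
        print(f"[observability] {act(step)}")          # every action is logged


control_plane("process refinance enquiry")
```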

Madness, in moderation

A little crazy goes a long way, but Hyman draws the line at insane: there are limits, he says, to what Lendi Group will do.

“We’re certainly not getting into the world of building out LLMs,” Hyman says. “Where we’re focusing is fine-tuning and reinforcement learning based on unique data assets we have.” This modularity is not just an engineering choice; it’s a commercial one, preserving the ability to adapt as the ecosystem settles.

It also positions Lendi Group as a potential reference model for the industry, an enterprise-grade, multi-agent mortgage ecosystem that can absorb change without stalling.

The customer-facing vision is simple but potent: an AI assistant that is always on, always up to date, and always working in the borrower’s interest.

“They’re going to be there with you by your side at every minute of every day,” Hyman says. That means answering questions at 2am, monitoring market rates, flagging when property values trigger a better loan-to-value ratio, or nudging when a lender has failed to pass on an RBA cut.

By shifting from reactive service to proactive intervention, the AI becomes not just a helpdesk but a steward, one capable of sustained attention no human could match. In a market where customer inertia is the norm, this could be a competitive wedge, increasing refinancing activity and deepening loyalty.

Bigger picture

If Lendi Group succeeds, it will have demonstrated how AI can be wired into the operational core of a financial services business, not just bolted on. It will also test how far brokers can be repositioned as advisers rather than processors.

The risks are clear: regulatory overreach, customer distrust, and the ever-present possibility that today’s architecture becomes tomorrow’s liability. The mitigations are equally clear: modularity, compliance-by-design, and an unblinking focus on commercial value.

Hyman’s rhetoric is measured, but the intent is not. Yes, AI is a huge accelerant; it adds capacity and makes things cheaper and faster, but that misses the point, he says. “Where AI is today, it’s enabling completely different work and economic value than ever before.”