What would it take for your finance team to trust an AI agent enough to let it act autonomously inside a live workflow?
That’s the question sitting underneath every AI conversation in finance right now. Not whether the technology works. Not whether the business case exists. But whether the organization is actually ready to hand over the wheel, even partially, and what needs to be true before that feels safe.
Basware and the Executive Leaders Network brought together Olav Maas, Head of Product Management at Basware, and Simo Uusijoki, Head of Finance Strategy & AI Offering at Deloitte Finland, to get into the detail.
Basware’s latest AI to ROI report, produced in partnership with FT Longitude, drew on responses from over 200 senior finance leaders across large global organizations. Nearly half of them say they feel direct pressure from leadership to do something with AI. The problem? Many don’t yet know what that something should be.
The result is a wave of experimentation that rarely delivers. MIT research suggests only 5% of AI pilots achieve rapid revenue acceleration. The other 95% never meaningfully move the needle.
The challenge isn’t the technology. It’s knowing where and how to apply it.
One of the most useful reframes from the discussion: stop thinking about agentic AI as a technology project and start treating it like a new hire.
When you bring on a junior colleague, you don’t hand them your most critical decisions on day one. You define their role carefully, start them on lower-risk work, give them clear guardrails, and let trust grow incrementally. The same logic applies here.
This isn’t a metaphor for the sake of it. It’s a practical governance model. Define what the agent is allowed to do. Define what it isn’t. Decide exactly where a human needs to remain in the loop, not because the technology can’t proceed, but because judgment, accountability, and auditability demand it.
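To make the governance model concrete, here is a minimal sketch of what "define what the agent is allowed to do, and where a human stays in the loop" could look like in code. The `AgentPolicy` class, the action names, and the thresholds are all illustrative assumptions, not part of any real product.

```python
from dataclasses import dataclass, field

# Hypothetical guardrail policy for a finance agent. Everything here
# (class name, actions, thresholds) is an illustrative assumption.

@dataclass
class AgentPolicy:
    allowed_actions: set                      # the agent's defined role
    approval_required: dict = field(default_factory=dict)  # action -> value above which a human must approve
    audit_log: list = field(default_factory=list)          # every decision is recorded for review

    def decide(self, action: str, value: float) -> str:
        """Return 'execute', 'escalate', or 'reject', and record the decision."""
        if action not in self.allowed_actions:
            outcome = "reject"                # outside the agent's defined role
        elif value >= self.approval_required.get(action, float("inf")):
            outcome = "escalate"              # human stays in the loop above the threshold
        else:
            outcome = "execute"               # low-risk work the agent handles alone
        self.audit_log.append({"action": action, "value": value, "outcome": outcome})
        return outcome

policy = AgentPolicy(
    allowed_actions={"code_invoice", "route_for_approval"},
    approval_required={"code_invoice": 10_000.0},  # invoices above 10k go to a human
)

print(policy.decide("code_invoice", 250.0))       # low-value: agent proceeds
print(policy.decide("code_invoice", 50_000.0))    # high-value: escalated to a human
print(policy.decide("pay_invoice", 100.0))        # not in its role: rejected
```

The point of the sketch is the shape, not the details: permissions are explicit, escalation points are explicit, and every decision leaves an audit trail a reviewer can inspect.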
Finance is a high-trust environment. An agent operating as a black box, however accurate, will not survive first contact with a CFO review.
Pilots fail to scale for a predictable reason: organizations launch them without a clear picture of where they’re ultimately heading. If the pilot succeeds but there’s no roadmap waiting for it, no vision of the target state, no defined next step, no shared understanding across finance, IT, procurement, and the business, then success simply runs out of road.
The organizations making the most progress have done something that sounds simple but is surprisingly rare: they’ve thought end-to-end before they acted. They’ve redesigned the workflow, not just automated the one they had.
The practical path forward starts with a deliberate decision about what to build and what to buy.
For finance teams weighing whether to build their own AI capabilities or work with a platform partner, the answer isn’t ideological. It’s strategic. Standard processes that every AP team runs, such as invoice ingestion, data validation, coding, and approval routing, don’t need to be built from scratch. They’re solved problems. The value of buying an embedded solution here is speed, governance, and the benefit of a network that has been learning from real invoice data at scale.
Where building makes sense is when the capability in question is genuinely differentiating, when it drives competitive advantage that no off-the-shelf product will ever replicate.
Most organizations should start by asking what their existing enterprise architecture already offers. Vendors are investing heavily in AI capabilities within their own platforms. The answer may already be closer than it looks.
Perhaps the most memorable idea from the conversation was the concept of a ladder of trust in agentic AI adoption: agents earn autonomy rung by rung, starting with supervised, lower-risk work and moving only gradually toward acting on their own.
Most finance teams are at the first or second rung. That’s not a failure. That’s good governance. The goal isn’t to rush to full autonomy. It’s to move up the ladder deliberately, with explainability and audit trails at every step.
One real-world example shared in the session made this vivid: a finance team that fully automated their forecasting process without building in transparency or human checkpoints. The numbers couldn’t be trusted. The process had to be rolled back. The lesson wasn’t that automation failed. It was that trust has to be earned in sequence.
The window for deliberate, well-governed AI adoption is open, but it won’t stay open indefinitely. The gap between organizations that are scaling agentic AI and those still running pilots is already widening.
The takeaway from both speakers was consistent: go for it. But go for it prepared. Know your target state. Define your governance before you deploy. Start with a use case small enough to control but meaningful enough to prove the value. And bring your people along, because the most sophisticated agent in the world won’t succeed inside an organization that doesn’t understand what it’s doing or why.
The technology is ready. The question is whether the organization is.

Watch the full conversation on demand. Olav Maas and Simo Uusijoki cover the research findings, the leader vs. follower divide, practical steps to ROI, and how to think about building trust in agentic AI inside finance.