Adoption is shaped less by technical ambition and more by governance, risk tolerance, and formal decision rights.
Agentic AI is moving from pilot to early, bounded production across financial services. The shift from tool to teammate adds continuous decision-making to regulated workflows, reshaping accountability and oversight. In Europe, the EU AI Act, DORA, and national supervisory rules formalize AI governance as a board-level obligation, reinforced by EBA, ECB, and Basel expectations on operational resilience.
As autonomous capabilities begin to interact across suppliers, data platforms, and infrastructure, the speed and safety of adoption are increasingly shaped by how technology is sourced and governed externally. At this point, execution risk shifts away from models and into operating design. Procurement plays a critical role at this boundary by embedding regulatory and risk requirements into supplier and commercial arrangements.
This article discusses:
- How agent-like systems are changing the operating reality for financial institutions
- Why adoption remains constrained despite technical progress
- How sourcing and supplier design influence the practical limits of autonomy
- What leadership teams should decide before scaling further
Why agentic AI matters now for procurement
Financial institutions are pushing past the limits of traditional automation, but they are doing so cautiously. BCG’s 2025 global survey on agentic AI adoption across industries found that 35% of companies already use agentic AI, and another 44% plan to adopt it soon. In financial services, however, this momentum translates into supervised, human-in-the-loop deployments with clearly bounded decision authority, rather than fully autonomous systems in production.
This pattern is visible across core functions, where institutions are testing different forms of bounded autonomy under strict controls:
- Front office: portfolio-rebalancing agents monitoring client goals and market shifts, proposing and executing trades once approved
- Middle office: claims-settlement agents validating data, requesting missing information, and escalating low-confidence cases (sketched in code after this list)
- Back office: optimization agents detecting performance issues and adjusting systems automatically
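The middle-office pattern can be made concrete. The following is a minimal, illustrative sketch, not a real product API: the confidence floor, amount limit, and field names are assumptions chosen to show how bounded decision authority and mandatory escalation might be encoded.

```python
from dataclasses import dataclass

# Illustrative only: a hypothetical claims-settlement agent whose decision
# authority is bounded by a confidence threshold and an amount limit.
# All thresholds and field names are assumptions, not a real product API.

CONFIDENCE_FLOOR = 0.90    # below this, the agent must escalate to a human
AUTO_SETTLE_LIMIT = 5_000  # claims above this amount always need approval

@dataclass
class ClaimAssessment:
    claim_id: str
    amount: float
    data_complete: bool
    model_confidence: float  # the agent's self-reported decision confidence

def route_claim(assessment: ClaimAssessment) -> str:
    """Return the next action, keeping every decision traceable to a rule."""
    if not assessment.data_complete:
        return "request_missing_information"  # agent acts, but only to gather data
    if assessment.model_confidence < CONFIDENCE_FLOOR:
        return "escalate_to_adjuster"         # low confidence: a human decides
    if assessment.amount > AUTO_SETTLE_LIMIT:
        return "queue_for_human_approval"     # high value: approval is mandatory
    return "settle_automatically"             # bounded autonomy applies

# Example: a complete, high-confidence, low-value claim settles automatically.
print(route_claim(ClaimAssessment("CLM-001", 1_200.0, True, 0.97)))
```

The point of the sketch is that every outcome traces to an explicit, reviewable rule, which is what makes the agent's decisions explainable and auditable.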
Even at this stage, the shift is material. Once systems begin initiating actions rather than simply executing instructions, accountability moves from a technical concern to an operating one. Decisions must remain explainable, traceable, and reversible under supervisory expectations, which pushes AI governance into the core of how financial institutions determine what can be deployed and scaled in practice.
Procurement: The enabler of scalable autonomy
Procurement does not govern autonomy on its own. Model risk management, compliance, and operational risk functions define the internal control requirements for agentic systems. Procurement’s role is to translate those requirements into enforceable supplier and commercial constraints that shape how autonomous capabilities are supported, adapted, and held accountable over time.
This role becomes critical as delegation increasingly sits with external providers. When autonomy is embedded in supplier technologies rather than internal systems, control is exercised less through code and more through contracts, incentives, and accountability mechanisms. In this context, procurement supports the autonomy control framework by ensuring that supplier relationships remain auditable, aligned with risk appetite, and compatible with supervisory expectations across the AI lifecycle.
In effect, sourcing becomes a control surface, exercised through three models:
- Build, when control and knowledge retention are paramount, accepting longer timelines and higher capability demands
- Partner, when speed and differentiation require shared accountability and aligned incentives
- Buy, when reliability and regulatory readiness outweigh customization
Most institutions blend these approaches across their AI portfolios.

Across these sourcing choices, three design elements determine whether autonomy can scale without eroding control:
- Control architecture, defining where systems may act independently and where human approval and escalation remain mandatory (a policy sketch follows this list)
- Capital focus, directing investment toward measurable outcomes such as error reduction, cycle-time improvement, and compliance performance rather than technical capability alone
- Ecosystem design, structuring supplier relationships so accountability, auditability, and supervisory access are preserved as systems evolve
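One practical implication of the control architecture is that it can be expressed as reviewable data rather than buried in application logic. The sketch below is illustrative only; the action names, autonomy levels, and fail-closed default are assumptions for the example, not a standard or a supervisory requirement.

```python
# Illustrative only: expressing a control architecture as data, so risk and
# compliance functions can review and version it like any other control.
# Action names and autonomy levels are assumptions made for this sketch.

CONTROL_POLICY = {
    "rebalance_within_mandate": "autonomous",       # agent may act alone
    "execute_trade_above_limit": "human_approval",  # proposal only; human signs off
    "change_risk_parameters":    "human_only",      # agent may never initiate
}

def is_permitted(action: str, human_approved: bool = False) -> bool:
    """Enforce the policy; unknown actions fail closed and must escalate."""
    level = CONTROL_POLICY.get(action, "human_only")  # default: most restrictive
    if level == "autonomous":
        return True
    if level == "human_approval":
        return human_approved
    return False  # "human_only": the system never executes this action

assert is_permitted("rebalance_within_mandate")
assert not is_permitted("execute_trade_above_limit")            # needs sign-off
assert is_permitted("execute_trade_above_limit", human_approved=True)
assert not is_permitted("undefined_action")                     # fails closed
```

Keeping the policy as data means an action the policy does not name can never execute autonomously, and every change to the boundary of autonomy leaves an audit trail.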

Early evidence: institutions applying progressive sourcing and control mechanisms report reductions in average time-to-scale of approximately 30–40 percent while maintaining audit readiness.
Questions for leadership teams
- Which use cases introduce decisions that go beyond fixed rules or scripted workflows, and where does accountability sit once systems initiate actions?
- For those use cases, what decision authority is explicitly granted to systems, what remains with humans, and how are override and escalation mechanisms enforced in practice?
- As pilots move toward broader deployment, are there staged investment and review points that tie proof of concept, MVP, and rollout to verified outcomes, resilience testing, and risk acceptance rather than technical success alone?
- Are success measures defined around business and control outcomes, such as process stability, accuracy, recovery time, and manual effort reduction, rather than usage or model performance in isolation?
- Do sourcing models and contracts translate supervisory expectations under the EU AI Act and DORA into operational reality, including how accountability, auditability, incident handling, and ICT outsourcing constraints are applied when systems adapt or fail?
Preparing for the next stage
In financial services, trust is currency and, under EU supervision, a licensed asset. Scaling agentic AI safely requires that autonomy be designed not only into systems, but into the external relationships that sustain them.
The upcoming whitepaper will build on this perspective, outlining sourcing models, contract structures, and governance templates that help financial institutions operationalize agentic AI within regulatory and risk constraints.
This perspective is written in the context of the European financial regulatory environment (EU AI Act, DORA, BaFin, EIOPA, ECB).