When Machines Begin To Act

Agentic AI in financial services puts tech sourcing at the center of governance

 

A European bank’s fraud detection AI agent flags a transaction, applies its risk model, and freezes the account. The decision takes 340 milliseconds. The customer calls to complain. The compliance officer asks the vendor for the decision log. The vendor contract does not include audit access. That contract was signed by procurement eight months ago.

This is not an isolated scenario. Across fraud detection, credit decisioning, regulatory reporting, and operations, institutions are accelerating experimentation ahead of governance. The result is a structural gap: early-stage agents are being piloted under contractual frameworks designed for a different class of technology.

Financial institutions are not debating whether agentic capabilities will play a role; pilots are already exploring what controlled autonomy can look like in practice. What remains unresolved is structural: the scope of action available to an AI agent inside a regulated institution is ultimately defined in the vendor agreements governing its operational boundaries. Under DORA (Digital Operational Resilience Act) and the EU AI Act, those agreements must now carry provisions that many existing contracts simply do not contain.

What Changes When an AI System Acts

The distinction between a system that recommends and one that executes carries significant commercial and legal weight. A system that surfaces a recommendation operates within familiar liability territory, but one that executes a transaction, modifies a risk assessment, or triggers a workflow change becomes an operational actor inside a regulated institution, and that shift changes the accountability question entirely.

Three answers must now exist in contractually enforceable terms: which entity bears accountability for the action, what record exists of the decision logic, and who holds the right to override the outcome. DORA obliges institutions to demonstrate that critical functions remain manageable when third-party AI systems behave unexpectedly, while the EU AI Act requires technical documentation sufficient to explain any output from systems used in credit, insurance, and employment decisions. Both frameworks place those obligations on the institution, not the vendor, and both must be reflected in contracts before a system reaches deployment.
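To make those three answers concrete, consider what a contractually mandated decision record might look like in practice. The sketch below is a minimal illustration, not a prescribed format: the class, field names, and the `override` method are assumptions of our own, chosen to show how accountability, decision logic, and override rights could become auditable artifacts rather than contract language alone.

```python
# A minimal sketch, not a regulatory standard: one shape an auditable
# agent decision record could take. All names here are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentDecisionRecord:
    agent_id: str                  # which system acted
    vendor: str                    # which entity supplied the model
    model_version: str             # exact version that produced the output
    action: str                    # what the agent did, e.g. "freeze_account"
    inputs_digest: str             # hash of the input data, for reproducibility
    rationale: dict                # features and thresholds behind the decision
    accountable_party: str         # who answers for the action contractually
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    overridden_by: Optional[str] = None
    override_reason: Optional[str] = None

    def override(self, officer: str, reason: str) -> None:
        """Record a human override -- the 'right to override' made operational."""
        self.overridden_by = officer
        self.override_reason = reason

# Usage: the 340-millisecond freeze from the opening scenario, reconstructable.
record = AgentDecisionRecord(
    agent_id="fraud-agent-01", vendor="ExampleAIVendor",
    model_version="2.4.1", action="freeze_account",
    inputs_digest="sha256:<digest>",
    rationale={"risk_score": 0.97, "threshold": 0.90},
    accountable_party="institution",
)
record.override(officer="compliance-officer-7", reason="false positive confirmed")
```

Whether the vendor must produce such records, in what format, and within what timeframe is exactly the kind of provision that has to exist in the contract before deployment.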

Most current AI vendor agreements were not written with that in mind. They cover availability, uptime, and data handling, while accountability for actions taken within defined parameters is either absent or buried in indemnity language that would not survive a supervisory review. The gap was not created by the technology; it was created when procurement entered the process after the technology decision had already been made.

When companies structure AI sourcing strategically, they enable controlled innovation. When they treat it as a transactional task, they create hidden risks.

Tech Sourcing as the Architecture of Autonomy

When sourcing is treated as a transactional function, it becomes a hidden vulnerability. Understanding why that gap matters requires stepping back from the contract itself and looking at how agentic AI systems are built.

These systems are rarely constructed entirely in-house. They depend on foundation models, cloud providers, specialized AI vendors, data platforms, integration partners, and managed services, and each of those relationships defines a portion of the system’s behavior and its risk profile. Autonomy is not only coded into algorithms but also embedded in the contracts surrounding them.

The scope of decision-making authority delegated to an AI agent, the transparency of its training data, access to logs, audit rights, update mechanisms, liability clauses, cybersecurity provisions: all of these are negotiated through tech sourcing processes, which means tech sourcing determines the boundaries within which machine autonomy operates.

Relying on a single AI vendor without exit options or data portability creates dependency risks that regulators will actively examine.

 

Four Tensions Every Financial Services Leader Recognizes

Agentic AI does not create new problems so much as it accelerates existing ones and raises the cost of leaving them unresolved. Four tensions that procurement leaders have always managed are now arriving faster, with higher stakes and considerably less room for ambiguity:

  • Speed against documentation.

     

    Business units want deployment in weeks. Regulators require documentation and compliance demonstrations that operate on a different schedule entirely. Institutions that manage both are doing so through modular contract frameworks with pre-negotiated governance templates, so compliance groundwork does not have to be rebuilt from scratch with every new AI vendor engagement.

  • Innovation against dependency.

     

    Advanced AI capabilities sit with a small number of global providers, and accessing them is often the fastest path to deployment. Third-party spending at European financial institutions already represents 45 to 55 percent of operating expenses. The practical sourcing question is not which vendor to select, but what contractual options exist if that vendor’s model changes materially or becomes unavailable.

  • Execution against accountability.

     

    When an AI agent causes financial or reputational harm within its defined parameters, general indemnity clauses do not hold up under supervisory scrutiny. The specificity required can only be negotiated before the technology decision is made, which is precisely when sourcing is most frequently absent.

  • Complexity against transparency.

     

    Agentic systems move data across foundation model providers, cloud infrastructure, API partners, and managed service providers simultaneously. Without a contractual map of those flows, the data lineage regulators will ask for does not exist precisely where they will look first (a minimal sketch of such a map follows this list).
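As referenced in the fourth tension above, a contractual map of data flows can be a simple, queryable artifact. The sketch below is an assumed illustration: the provider names, data categories, and clause references are invented, but the structure shows how each hop a datum takes across providers can be tied to the clause that authorizes it, so a lineage question resolves to a lookup rather than an investigation.

```python
# A minimal sketch, with invented names, of a contractual data-flow map:
# each hop across providers, tied to the clause that governs it.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    source: str           # where the data leaves
    destination: str      # which provider receives it
    data_category: str    # e.g. "transaction_history"
    contract_clause: str  # clause granting audit/log access for this hop

FLOWS = [
    DataFlow("core_banking", "FoundationModelCo", "transaction_history", "MSA §7.2"),
    DataFlow("FoundationModelCo", "CloudInfraCo", "model_inputs", "DPA Annex 3"),
    DataFlow("CloudInfraCo", "ManagedServiceCo", "inference_logs", "SOW §4.1"),
]

def lineage_for(category: str) -> list[DataFlow]:
    """Return every contracted hop touching a data category --
    the answer a supervisor would look for first."""
    return [f for f in FLOWS if f.data_category == category]

for hop in lineage_for("transaction_history"):
    print(f"{hop.source} -> {hop.destination} under {hop.contract_clause}")
```

The point is not the code but the discipline it implies: if a flow cannot be entered into the map because no clause covers it, that flow is operating outside the contract.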

Institutions scaling agentic AI on a single vendor relationship, without exit provisions or portability standards, are building concentration risk into regulated operations that supervisors can examine directly.

 

What Tech Sourcing Needs to Do Differently

The thread running through all four tensions is the same: sourcing is being asked to govern outcomes it was not present to shape. Build-vs-buy decisions, ecosystem architecture choices, and long-term vendor roadmap commitments are made upstream, and the contracts that follow them reflect those decisions rather than the governance requirements that surround them. The function that arrives after those choices have been finalized inherits terms it did not negotiate, along with the regulatory exposure embedded in them.

Changing that requires more than earlier involvement. It requires technical fluency to know what AI-specific provisions are achievable, not just commercially desirable: understanding model versioning, behavioral drift, and the difference between an explainability clause that satisfies a regulator and one that satisfies a vendor’s legal team.
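Behavioral drift, in particular, can be made contractually concrete. The sketch below is our own illustration, not a standard: the function, the sample scores, and the 0.2 threshold (a common rule of thumb for the population stability index, not a regulatory value) are all assumptions. What it shows is the kind of measurable definition of drift that a sourcing team needs enough fluency to attach vendor obligations to.

```python
# An assumed illustration of "behavioral drift": a population stability
# index (PSI) over an agent's decision scores. Higher PSI = more drift.
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Compare two score distributions bucket by bucket."""
    lo = min(expected + observed)
    hi = max(expected + observed)
    width = (hi - lo) / bins or 1.0
    def bucket_shares(scores: list[float]) -> list[float]:
        counts = [0] * bins
        for s in scores:
            counts[min(int((s - lo) / width), bins - 1)] += 1
        # Floor each share to avoid log(0) on empty buckets
        return [max(c / len(scores), 1e-6) for c in counts]
    e, o = bucket_shares(expected), bucket_shares(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# If drift exceeds the agreed threshold, the contract -- not goodwill --
# should define what the vendor must disclose and how quickly.
baseline_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.9]
current_scores  = [0.4, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9]
if psi(baseline_scores, current_scores) > 0.2:
    print("Behavioral drift detected: trigger contractual vendor notification")
```

An explainability or drift clause that cannot be tested this way is the kind that satisfies a vendor’s legal team rather than a regulator.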

It also requires a formal governance mandate that connects risk, legal, compliance, and IT into a single set of requirements that sourcing can translate into contractual language. Without that mandate, sourcing will continue to execute the commercial terms of decisions made elsewhere, and the gap between what the institution agreed to and what its regulators require will continue to widen with each new deployment.

Conclusion: Engineering Trust in an Autonomous Age

Agentic AI will scale in financial services. The efficiency case is clear, the competitive pressure is real, and the technology is mature enough for enterprise deployment. The constraint on scaling is whether vendor relationships have been structured to govern autonomous behavior within regulated environments. That structure is built in the contracts the sourcing team produces.

 
