White Paper
Artificial Intelligence in Treasury and Payments
From Fragmentation to Intelligence: Unlocking Value through Data, Governance, and Control
By Chryssi Chorafa (Director of TerraEvra) and Professor William Scott-Jackson (Oxford Centre for Impact Research)
Abstract
Over the past year, there has been growing discussion about how Artificial Intelligence (AI) will change the way businesses operate. Treasury and payments are no exception. AI could reduce manual effort, improve forecasting, strengthen risk analysis, and support faster and more informed decisions. Yet an important question remains. Has AI truly transformed treasury and payments so far, or are many organisations still in the early stages of adoption?
There is increasing recognition that treasury and payments contain many repetitive, administrative and data-heavy processes that AI could improve by reducing cost, rework and operational risk. At the same time, however, the value of AI is not yet fully realised. In many cases, the challenge is not the absence of tools, but the weakness of the foundations beneath them, particularly fragmented data, inconsistent standards, weak governance and limited explainability.
In this paper, I examine how AI is beginning to shape treasury and payments, where it is creating value, what is holding organisations back, and what will be required if AI is to move from experimentation to meaningful and controlled impact.
1. Executive Summary
AI is rapidly reshaping treasury and payments, transitioning from experimental tooling to a more foundational capability. According to the Bank of England (BoE) and Financial Conduct Authority (FCA), 75% of UK financial services firms are already using AI, with a further 10% planning adoption within the next three years (Bank of England and FCA, 2024). This points to clear momentum across the sector.
However, the question is how financial services use AI and for what purpose. Evidence from industry research, practitioner interviews and poll findings suggests that AI could deliver benefits in forecasting, automation and risk analytics, but its wider impact is constrained by fragmented data, governance gaps and lack of explainability. At the same time, risks are increasing alongside capability, particularly around model opacity, third-party dependency, concentration risk and accountability.
The BoE’s 2025 financial stability assessment reinforces this dual message. AI has the potential to improve productivity, support better decision-making, tailor products and services more effectively, and enhance innovation. However, if adoption is not supported by strong controls, it can also introduce new operational and systemic financial stability risks (Bank of England, 2025a).
This paper argues that success in AI-driven treasury and payments will not be determined primarily by the sophistication of models. It will depend more fundamentally on the strength of data foundations, governance structures, control frameworks and operating models.
2. AI Adoption in Financial Services: Momentum with Constraints
AI is already being applied across a broad range of use cases, including optimisation of internal processes, fraud detection, customer support and cybersecurity (Bank of England and FCA, 2024). This shows that adoption is no longer limited to experimentation. It is already entering day-to-day operations across multiple business areas.
Yet the same research highlights important structural concerns. Forty-six per cent of firms reported having only a partial understanding of the AI technologies they use, and around one third of AI use cases rely on third-party implementations (Bank of England and FCA, 2024). In addition, 55% of use cases involve some degree of automated decision-making, although only a small proportion are fully autonomous (Bank of England and FCA, 2024).
This reveals a critical tension. AI adoption is scaling faster than many organisations’ ability to fully understand, govern and control the systems they are deploying. The issue is no longer whether firms are using AI. It is whether they are using it with sufficient clarity, ownership, accountability and resilience.
3. Where AI is Creating Value in Treasury and Payments
3.1 Cashflow forecasting and data intelligence
In December 2025, I conducted three polls to understand where AI is expected to add the greatest value in treasury over the next one to three years. The strongest response, with 60% of votes, was cashflow forecasting, followed by data pooling and cleansing, and then risk analytics and scenarios.
This aligns with wider industry research. KPMG notes that AI can improve forecast accuracy by up to 30%, particularly by enabling more dynamic scenario modelling, faster updates and the integration of internal and external data sources (KPMG, 2025). In treasury, the value is not only in speed, but in the shift from static and lagging visibility to more responsive and forward-looking liquidity insight.
The same theme came through strongly in interviews. One interviewee from a large consultancy described cashflow forecasting as the most compelling use case because the underlying data sits across multiple teams and systems, including CAPEX, payments, payables and receivables. The complexity is therefore not only the forecasting logic itself, but the fragmentation of the data landscape that feeds it.
Practitioner insight also brought an important note of caution. AI can support treasury forecasting, but it should not replace judgment. EuroFinance makes this point persuasively by showing that AI is most useful when it accelerates the work that precedes the decision, not when it attempts to replace the decision-maker (EuroFinance, 2025b). In this sense, AI is better understood as a second opinion than as a final authority in forecasting.
One interviewee from a niche consultancy also described a practical use case in which AI could bring together information from banking fees, short-term cash positions, FX trades and money market activity to provide a more holistic picture of treasury activity and support stronger decision-making. This is important because it suggests that the opportunity is not limited to a single model or workflow. It lies in connecting fragmented signals and making them more usable.
Overall, the evidence suggests that AI is already enhancing prediction and visibility in treasury, but that human judgment remains central to interpretation and decision-making.
3.2 Automation and operational efficiency
Automation is another area where AI is beginning to show measurable value. Crisil Coalition Greenwich highlights that many treasury teams are first deploying AI in operational and automation-focused use cases, while KPMG points to reconciliation, reporting, payment processing and documentation as areas where AI can reduce manual effort and improve efficiency (Crisil Coalition Greenwich, 2026; KPMG, 2025).
This is consistent with the BoE and FCA survey, which found that optimisation of internal processes is currently the most common use case across financial services (Bank of England and FCA, 2024).
However, automation should not replace people; it should enable them. When repetitive and low-value tasks are reduced, treasury teams can shift more time and attention towards strategic analysis, oversight and decision support. In a function that is increasingly expected to support wider business resilience, this matters more than simple process acceleration.
3.3 Risk management and predictive analytics
AI is also beginning to strengthen risk management and scenario analysis. In my own poll findings, risk analytics and scenarios emerged as a meaningful, though less dominant, use case compared with cashflow forecasting and data pooling.
Interview insights support this direction. One interviewee explained that AI can help treasury teams describe, code and evaluate scenarios more quickly, allowing them to model multiple outcomes and respond faster to shifts in market conditions. In practice, this could support analysis across FX, rates, liquidity and short-term funding exposures.
The BoE’s 2025 financial stability paper reinforces this point more broadly. It notes that AI’s ability to process large volumes of data could significantly enhance firms’ analytical capabilities and support increasingly important financial decisions. At the same time, it warns that model complexity, opacity, data weaknesses and concentration in commonly used tools could create new risks at both firm and system level (Bank of England, 2025a).
This creates a familiar pattern. The very characteristics that make AI powerful in risk analytics also make governance and control more important.
3.4 Payments and the emergence of agentic systems
Another major area of development is payments. Here, AI is evolving from fraud detection and transaction monitoring towards more intelligent orchestration and, potentially, agentic decision-making.
Potential use cases include routing payments across rails, optimising timing and cost, and improving customer journeys. Yet this shift also introduces more complex risks.
The BoE and FCA survey found that 55% of AI use cases already involve some degree of automated decision-making (Bank of England and FCA, 2024). This matters in payments because the more that workflows become automated, dynamic and interconnected, the harder it becomes to explain outcomes, assign accountability and control systemic interactions.
This was also reflected in discussions at the FCA AI Supercharged Sandbox Showcase. The most striking theme was the possibility that payment ecosystems may evolve into intelligent “systems of systems”, in which autonomous agents act, interact and trigger actions with consequences across interconnected infrastructures. Questions emerged around permissions, limits, tolerances, identity, governance and operational adaptation where customers themselves may increasingly become agents.
This introduces fundamental questions for the future of payments. Who is accountable for AI-driven payments? How should agentic systems be governed? How do firms control second-order effects when multiple intelligent systems begin interacting with one another?
These are no longer theoretical questions. They are becoming practical design and governance issues.
4. The Core Barrier: Data Fragmentation and Lack of Standardisation
Across the polls, interviews and industry sources, one message emerged with striking consistency: the biggest barrier to AI adoption in treasury and finance is data.
In my December 2025 poll, 60% of respondents identified data and legacy systems as the primary barrier to adopting AI in treasury and finance functions, followed by organisational readiness at 40%.
Interview findings strongly reinforce this. One interviewee from a large consultancy pointed to the difficulty of implementing AI when firms do not have the right or clean data. Legacy systems often contain data in different formats, structures, syntaxes and architectures. In some cases, the issue is quality. In others, it is fragmented location, poor capture, inconsistent naming or lack of qualification. In practice, these weaknesses combine.
Crisil Coalition Greenwich makes the same point even more directly. It argues that the fundamental gap many companies face is investing in AI solutions before they have built the data infrastructure needed to support them effectively (Crisil Coalition Greenwich, 2026).
In short, AI initiatives often underperform not because the technology is incapable, but because the surrounding data environment is inconsistent, fragmented and poorly governed.
4.1 Structural complexity in treasury and payments
This challenge is particularly acute in treasury and payments because the underlying data landscape is inherently complex. Data is spread across ERP systems, treasury management systems, banking platforms and spreadsheets. Payment data flows across multiple rails and standards, including SWIFT, CHAPS, BACS, cards, open banking, ISO 20022 MX formats and legacy MT formats.
The result is inconsistent field definitions, limited tagging, weak comparability and poor interoperability. These are not just technical inconveniences. They directly constrain the performance, scalability and trustworthiness of AI models.
4.2 The need for a common data denominator
If AI is to scale meaningfully in treasury and payments, organisations need a common data denominator. This means more than cleaning files. It requires standardised fields and formats, aligned classifications, consistent tagging frameworks, clearer data lineage and unified data layers that can support analysis across systems and workflows.
EuroFinance captures this particularly well in its observation that AI is becoming increasingly useful in modelling scenarios and forecasting outcomes, but that its outputs are only as good as the data that informs them, and only as wise as the humans who frame the problem (EuroFinance, 2025a).
James Benford’s speech (Bank of England, 2025b) on data governance points in the same direction. Rather than focusing only on model capability, he frames effective AI through the principles of being targeted, reliable, understood, secure, tested, ethical and durable. The underlying implication is clear: trustworthy AI depends on trustworthy data and disciplined governance (Bank of England, 2025b).
These foundations lead directly to the next issue: governance, explainability and accountability.
5. Governance, Explainability and Accountability
If AI is to be deployed safely and effectively in treasury and payments, the surrounding governance framework matters as much as the model itself.
This is not simply because the models are complex. It is because treasury and payments are functions in which outputs can affect liquidity, risk, customer outcomes, resilience and regulatory exposure. In that context, organisations need more than technical performance. They need confidence that AI operates in a way that is understandable, controlled, auditable and accountable.
5.1 Explainability and transparency
There is broad agreement across industry and regulatory sources that AI models can be complex, dynamic and difficult to explain. The BoE’s 2025 financial stability work notes that advanced AI models may evolve over time, can be less predictable than traditional approaches, and create new challenges in explainability, transparency and data integrity (Bank of England, 2025a).
The 2024 BoE and FCA survey supports this concern from another angle. It found that the risks expected to increase most over the next three years are third-party dependencies, model complexity and embedded or hidden models (Bank of England and FCA, 2024).
In treasury, this creates practical consequences. If model outputs cannot be explained, they are harder to trust, harder to challenge and harder to defend. That affects auditability, internal governance, regulatory confidence and, ultimately, decision quality.
AI in treasury therefore needs transparent data lineage, traceable logic and explainability proportionate to the materiality of the use case.
5.2 Accountability and control
As AI becomes more embedded in business processes, accountability can become blurred. This is particularly important where automated or semi-automated decisions are involved.
The BoE’s 2025 report notes that reliance on AI in key decisions may create conduct-related and legal risks if liability becomes unclear or if decisions are later challenged (Bank of England, 2025a). At the same time, the current regulatory direction in the UK continues to emphasise individual accountability and effective governance.
This means organisations need to define clearly who owns the data, who owns the model, who approves the decision and how exceptions are escalated. Without that, firms risk creating automation without a defensible accountability chain.
5.3 Systemic and third-party risks
Governance also needs to extend beyond internal systems. One of the most important messages from the BoE and FCA work is the growing significance of third-party and concentration risk.
Around one third of AI use cases rely on third-party implementations, and the top providers hold substantial shares across cloud, model and data services (Bank of England and FCA, 2024). The 2025 financial stability paper develops this risk further, warning that a severe disruption related to external AI service providers could affect the delivery of vital services, including time-critical payments (Bank of England, 2025a).
This aligns closely with the “systems of systems” concern raised in payments discussions. As interdependence increases, governance needs to extend across orchestration layers and resilience planning, not just individual models.
5.4 Continuous monitoring and model governance
AI governance is not a one-off implementation task. It requires continuous monitoring, validation and oversight.
Andrew Ellis makes this point clearly in Finextra by arguing that effective governance depends not only on rules and policies, but on people, training, shared understanding and organisational culture (Ellis, 2026). That matters greatly in treasury, where users need to interpret, challenge and contextualise AI outputs rather than consume them passively.
The BoE’s 2025 work also notes that AI models may evolve dynamically, which means governance must be continuous rather than static (Bank of England, 2025a). In practice, this means model validation, ongoing monitoring and governance oversight should be embedded into the operating model, not bolted on afterwards.
6. Human-Centric AI: Augmentation, Not Replacement
One of the strongest messages across interviews, practitioner articles and event discussions was that AI should augment treasury professionals, not replace them.
This is both practical and important. Treasury decisions often require contextual understanding, judgment under uncertainty, stakeholder awareness and awareness of broader business implications. AI can accelerate analysis and surface patterns, but it does not remove the need for human interpretation and accountability.
EuroFinance expresses this theme particularly well. In treasury, AI may handle much of the numerical analysis and repetitive work, but treasury still needs to understand the “why” behind the decision (EuroFinance, 2025a; EuroFinance, 2025b).
The BoE and FCA survey supports the same principle more generally. Human oversight remains an important feature of many automated use cases, and this is likely to remain essential as materiality and autonomy increase (Bank of England and FCA, 2024).
6.1 The evolving role of treasury
AI may shift treasury further away from reporting and repetitive processing towards insight generation and strategic decision support. This is consistent with the wider transformation of treasury itself. Crisil Coalition Greenwich notes that CFOs and other senior executives increasingly look to treasury for strategic insight, not just operational execution (Crisil Coalition Greenwich, 2026).
In that sense, AI could strengthen treasury’s strategic role, but only if teams are equipped to question outputs, interpret ambiguity and apply sound judgment.
6.2 Inclusion and accessibility
AI can also support broader inclusion and better customer outcomes when it is designed thoughtfully. At Pay360 in 2026, examples from Project Nemo and AnnaMoney showed how AI-enabled features can help people with vulnerabilities and disabilities manage transactions and banking more effectively.
This is an important reminder that AI is not only about speed and automation. It can also improve accessibility, customer journeys and consumer protection when developed with care. The real test is whether systems are designed around human needs rather than around technical novelty.
7. Recommendations: Building an AI-Ready Treasury and Payments Operating Model
The evidence in this paper suggests that AI success in treasury and payments depends less on model sophistication and more on disciplined operating foundations. To move from fragmented experimentation to scalable value, organisations need an operating model that connects data, governance, controls and decision-making.
7.1 Establish a data orchestration layer
The first priority is to move away from fragmented data environments towards a unified orchestration layer. This layer should connect systems, standardise inputs and support consistent downstream use across treasury, payments and risk. In practice, this means defining a common data model, harmonising formats across ISO 20022, legacy MT structures and internal ERP or TMS formats, and implementing lineage tracking from source to output.
The outcome should be a single trusted data foundation on which AI can operate more consistently and at scale.
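To make the idea of a common data model concrete, the sketch below shows how payment records from two different sources might be mapped into one canonical structure that carries lineage back to the source system. This is a minimal illustration, not a standard: the canonical field names, the ERP column names and the `source_system` labels are all hypothetical, and the ISO 20022 elements shown (IntrBkSttlmAmt, Cdtr and so on) are heavily simplified from the real pacs.008 message structure.

```python
from dataclasses import dataclass
import uuid

@dataclass
class CanonicalPayment:
    """Unified payment record; field names are illustrative, not a standard."""
    payment_id: str
    amount_minor: int    # amount in minor units to avoid floating-point drift
    currency: str        # ISO 4217 code
    value_date: str      # ISO 8601 date
    counterparty: str
    source_system: str   # lineage: which system the record came from
    source_ref: str      # lineage: native identifier in that system

def from_iso20022_like(msg: dict) -> CanonicalPayment:
    # Map a simplified ISO 20022 pacs.008-style structure to the canonical model.
    return CanonicalPayment(
        payment_id=str(uuid.uuid4()),
        amount_minor=int(round(float(msg["IntrBkSttlmAmt"]) * 100)),
        currency=msg["Ccy"],
        value_date=msg["IntrBkSttlmDt"],
        counterparty=msg["Cdtr"],
        source_system="swift_mx",
        source_ref=msg["MsgId"],
    )

def from_erp_export(row: dict) -> CanonicalPayment:
    # Map an ERP export row (hypothetical column names) to the same model.
    return CanonicalPayment(
        payment_id=str(uuid.uuid4()),
        amount_minor=int(round(float(row["amount"]) * 100)),
        currency=row["curr"].upper(),
        value_date=row["due_date"],
        counterparty=row["vendor_name"],
        source_system="erp",
        source_ref=row["doc_no"],
    )
```

The design point is that every downstream model consumes one shape of record, while `source_system` and `source_ref` preserve the trail back to the originating message or document.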
7.2 Define ownership, accountability and governance clearly
AI introduces blurred accountability unless ownership is defined explicitly. Organisations should assign named owners across data domains such as liquidity, payments, bank accounts and risk, while also defining accountability for model outputs and business decisions. Governance structures should cover data quality, lineage, model usage and decision ownership.
A practical RACI framework can help clarify who creates, validates, consumes and signs off information across the chain.
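One way to make such a RACI framework operational rather than a slide is to encode it as data that workflow tooling can query. The sketch below assumes hypothetical role titles and activity names purely for illustration.

```python
# Illustrative RACI matrix for an AI-supported forecasting workflow.
# Role names and activities are examples, not a prescribed standard.
RACI = {
    "source_data_quality": {"R": "Data Steward",     "A": "Head of Treasury Ops",
                            "C": "IT",               "I": "Model Owner"},
    "model_validation":    {"R": "Model Owner",      "A": "Head of Risk",
                            "C": "Internal Audit",   "I": "Treasurer"},
    "forecast_sign_off":   {"R": "Treasury Analyst", "A": "Treasurer",
                            "C": "Model Owner",      "I": "CFO"},
}

def accountable_for(activity: str) -> str:
    """Return the single named accountable owner for an activity."""
    return RACI[activity]["A"]
```

Encoding the matrix this way makes the accountability chain checkable: an approval workflow can refuse to proceed if an activity has no accountable owner assigned.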
7.3 Implement rule-based standardisation and controls
AI should not operate on uncontrolled inputs. Before organisations scale models, they should define rule-based standards covering validation logic, completeness thresholds, reconciliation tolerances, classification rules, routing constraints and escalation triggers.
These rules create the auditable boundaries within which AI can operate safely and consistently.
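A minimal sketch of such a rule layer is shown below. It checks completeness, an escalation threshold and a classification rule before a record is allowed to reach any model; the field names, threshold value and currency list are assumptions for illustration only.

```python
def validate_payment(rec: dict, rules: dict) -> list[str]:
    """Apply rule-based checks before a record reaches any AI model.
    Returns a list of rule violations; an empty list means the record passes."""
    issues = []
    # Completeness: required fields must be present and non-empty.
    for f in rules["required_fields"]:
        if not rec.get(f):
            issues.append(f"missing:{f}")
    # Tolerance: amounts above the escalation threshold need manual review.
    if rec.get("amount", 0) > rules["escalation_amount"]:
        issues.append("escalate:amount_above_threshold")
    # Classification: currency must be on the approved list.
    if rec.get("currency") not in rules["allowed_currencies"]:
        issues.append("invalid:currency")
    return issues

# Example rule set; every value here is illustrative.
RULES = {
    "required_fields": ["payment_id", "amount", "currency", "value_date"],
    "escalation_amount": 1_000_000,
    "allowed_currencies": {"GBP", "EUR", "USD"},
}
```

Because the rules live outside the model, they can be reviewed, versioned and audited independently of whatever AI consumes the validated records.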
7.4 Introduce diagnostics and threshold-based monitoring
AI requires continuous monitoring rather than periodic review. Organisations should introduce diagnostics across data quality, model performance and operational outcomes, with thresholds that trigger alerts, root-cause analysis and ownership for remediation.
This creates a closed-loop feedback system in which data informs the model, outputs are monitored, and issues are corrected through disciplined governance.
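The monitoring step of that loop can be sketched very simply: compare model forecasts against realised actuals and raise an alert when an error metric breaches a threshold. The metric (MAPE) and the 10% threshold below are illustrative choices, not recommendations.

```python
def check_forecast_drift(actuals, forecasts, mape_threshold=0.10):
    """Compare forecasts to realised actuals; flag an alert when mean
    absolute percentage error (MAPE) breaches the threshold.
    The default threshold is illustrative."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts) if a != 0]
    mape = sum(errors) / len(errors)
    return {"mape": round(mape, 4), "alert": mape > mape_threshold}
```

In practice an alert like this would trigger root-cause analysis and be routed to the named owner for remediation, closing the loop the section describes.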
7.5 Embed explainability and auditability by design
Explainability should not be treated as an afterthought. Organisations should build traceability into the operating model from the start, including input lineage, transformation tracking, model versioning, decision logs and retained evidence. The more material the use case, the greater the need for transparent and defensible outputs.
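As a sketch of what "auditability by design" can mean in practice, the snippet below builds a decision-log entry that ties a model output to a fingerprint of its inputs, the model version and a named human approver. The log schema and field names are assumptions for illustration.

```python
import datetime
import hashlib
import json

def log_decision(inputs: dict, output: dict, model_version: str,
                 decided_by: str) -> dict:
    """Create an audit-log entry linking a model output to its inputs,
    model version and human approver. Field names are illustrative."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_hash": hashlib.sha256(payload).hexdigest(),  # fingerprint of inputs
        "model_version": model_version,   # which model produced the output
        "output": output,                 # retained evidence of the decision
        "decided_by": decided_by,         # accountability: named approver
    }
```

Hashing the serialised inputs gives a cheap form of lineage: the same inputs always produce the same fingerprint, so a challenged decision can later be matched to exactly the data the model saw.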
7.6 Strengthen third-party and systemic risk controls
Given the increasing reliance on external vendors, cloud providers and AI platforms, organisations should map dependencies across their broader ecosystem and extend governance beyond internal systems. This should include concentration risk assessments, fallback arrangements, contingency planning and resilience design.
As AI becomes more embedded in interconnected infrastructures, firms need to manage the broader “system of systems”, not only the internal model.
7.7 Keep AI aligned with treasury decision-making
Finally, organisations should define clearly where AI supports decisions and where human approval remains necessary. AI can be positioned as a decision-support tool, scenario generator and anomaly detector, but material funding decisions, risk interpretation and exception handling should remain under human oversight.
That boundary needs to be designed consciously. It should not be left implicit.
8. Conclusion: High Value Requires High Discipline
AI presents a significant opportunity for treasury and payments. It can improve forecasting, strengthen predictive risk management, support more intelligent payments and reduce operational effort. There is no doubt that it can create real value.
But that value will not be realised through technology alone.
Across the research, interviews and practitioner evidence, a consistent message emerges. The main barrier is not technology, but weak data foundations, limited standardisation, insufficient governance, explainability gaps and blurred accountability. The more organisations try to scale AI without fixing those foundations, the more likely they are to create complexity without control.
The BoE’s 2025 financial stability perspective reinforces this conclusion at system level. AI introduces both important benefits and new forms of operational and systemic risk that must be actively managed (Bank of England, 2025a).
The competitive advantage will therefore not come from adopting AI the fastest. It will come from adopting it with greater discipline: by focusing on the right use cases for the business, structuring data properly, governing AI effectively, embedding accountability and aligning technology with real business outcomes.
In the end, AI does not remove the need for judgment. It increases the importance of it.
That may be one of the most important insights for treasury and payments: high-value AI requires high discipline.
References
Bank of England (2025a) Financial Stability in Focus: Artificial Intelligence in the Financial System.
Bank of England (2025b) Data governance to set us free, speech by James Benford.
Bank of England and Financial Conduct Authority (2024) Artificial Intelligence in UK Financial Services.
Crisil Coalition Greenwich (2026) AI in corporate treasury: What causes slow adoption, preventing full potential?
Ellis, A. (2026) ‘The next AI breakthrough will come from governance’, Finextra.
EuroFinance (2025a) ‘AI handles the numbers but treasury handle the why’.
EuroFinance (2025b) ‘AI is not replacing us: why treasurers are embracing AI on their own terms’.
KPMG (2025) AI is transforming corporate treasury.
Primary research
Interviews with treasury practitioners and consultants (2025–2026).
LinkedIn practitioner polls conducted by the author (December 2025).


