Most asset owners will tell you they’re serious about AI. But beyond using large language models to summarize or write board reports, few can say they have actually integrated AI into their investment decision-making.

In my work with pension funds and other pools of beneficial assets, the same gap keeps surfacing between what AI can deliver and what organizations have put into practice. The gap is more consequential than most are willing to admit. The real obstacle is not the technology. It is the institutional structure of asset owners themselves.

Earlier this year, Monte Tarbox, interim chief investment officer of the New York City Bureau of Asset Management, invited me to share my views on AI adoption with the trustees of the city's five public pension systems. It was a conversation worth having — and worth expanding on here. What follows is my honest assessment of what AI can deliver for asset owners (the promise), what's stopping adoption (the barriers), and how asset owners can move forward responsibly (the path forward).

The Promise

The case for asset owners' use of AI is compelling across investment performance, risk management, and operational efficiency.

On the investment side, AI could enable asset owners to process alternative data at a scale no human team could match, identify factor exposures across thousands of positions in real time, and detect market regime changes earlier than traditional econometric models. For manager selection — one of the most expensive decisions an asset owner makes — AI offers the ability to conduct rigorous style drift analysis and to distinguish genuine alpha generation from luck with far greater statistical precision than conventional approaches.
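To illustrate the "alpha versus luck" point, here is a minimal sketch of the conventional baseline that AI-driven approaches aim to improve on: a simple t-test on a manager's excess returns over a benchmark. The sample size and return figures are hypothetical, chosen only to show the mechanics.

```python
import numpy as np
from scipy import stats

def alpha_t_test(manager_returns, benchmark_returns):
    """Test whether a manager's mean excess return is distinguishable from zero.

    A t-statistic near or below ~2 means the track record could plausibly be
    luck; AI-based approaches aim to sharpen this judgment by adding style,
    factor, and peer context that a simple test ignores.
    """
    excess = np.asarray(manager_returns) - np.asarray(benchmark_returns)
    t_stat, p_value = stats.ttest_1samp(excess, popmean=0.0)
    return t_stat, p_value

# Illustrative example: 36 months of hypothetical monthly returns.
rng = np.random.default_rng(seed=42)
benchmark = rng.normal(0.006, 0.04, size=36)            # ~0.6% mean monthly return
manager = benchmark + rng.normal(0.002, 0.01, size=36)  # +0.2% average "alpha"

t_stat, p_value = alpha_t_test(manager, benchmark)
print(f"t-statistic: {t_stat:.2f}, p-value: {p_value:.3f}")
```

Three years of monthly data rarely produces a decisive answer from a test like this, which is precisely why richer, machine-assisted analysis of style and factor context is attractive for manager selection.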

Risk management stands to benefit equally. Continuous portfolio monitoring across all asset classes, simultaneous stress testing of thousands of scenarios, and dynamic adjustments to protect funded status are capabilities that AI can deliver where human bandwidth falls short. 
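As a minimal sketch of what "thousands of scenarios at once" means in practice, the following uses purely hypothetical portfolio weights, shocks, and funded-status figures:

```python
import numpy as np

# Hypothetical portfolio weights across four asset classes.
weights = np.array([0.45, 0.30, 0.15, 0.10])   # equities, bonds, real assets, cash
assets = 1_000_000_000                          # $1B in assets
liabilities = 950_000_000                       # $950M in liabilities

# 10,000 simulated scenarios: each row is a vector of asset-class return shocks.
rng = np.random.default_rng(seed=7)
shocks = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0, 0.0],
    cov=np.diag([0.18, 0.06, 0.12, 0.01]) ** 2,
    size=10_000,
)

# Portfolio return and resulting funded status in every scenario at once.
portfolio_returns = shocks @ weights
funded_status = assets * (1 + portfolio_returns) / liabilities

print(f"5th percentile funded status: {np.percentile(funded_status, 5):.1%}")
print(f"Scenarios below 90% funded:  {(funded_status < 0.90).mean():.1%}")
```

A production system would model correlated asset and liability shocks rather than independent ones; the point here is only that the scenario dimension is cheap to scale once the data is in order.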

Operationally, early evidence suggests AI can reduce routine task time by 20 percent to 40 percent, freeing investment staff to focus on higher-value decision-making while cutting costs through better trade execution and reduced reliance on external consultants.

The governance upside is also significant. AI can generate customized reporting for staff, boards, and regulators, improve transparency and visualization of complex portfolio data, and provide real-time answers to ad-hoc questions, reducing the informational asymmetry that has historically favored asset managers and consultants over the asset owners who hire them.

The Barriers

The promise is real, but the path is not straightforward. And the barriers asset owners face are unlike those confronting most other institutional adopters, starting well before a single line of code is written.

The first decision — to build or buy AI competencies — presents a fundamental challenge. Building internal capabilities demands both capital and talent: meaningful AI initiatives can carry price tags of $5 million or more, and the competition for qualified data science and engineering talent is fierce, with technology, healthcare, and manufacturing companies offering compensation and culture that most asset owners simply cannot match. 

Buying solutions isn't much easier. The AI vendor landscape is noisy and difficult to navigate — claims are hard to verify, track records in institutional investment settings are sparse, and many of these firms are startups, which adds a form of supply chain risk that most asset owners aren't accustomed to pricing into a technology decision. 

The deeper problem cuts across both options: most asset owners are being asked to evaluate and commit to AI projects and systems they don't fully understand, which is a difficult position from which to make a sound fiduciary decision.

Then there are the technical realities. Most asset owners are working with legacy data infrastructure and siloed systems that were never designed to talk to each other. Additionally, the data needed to train, fine-tune, and operationalize AI systems exists, but not in a form AI can readily use. The result is the classic “garbage in, garbage out” problem — and for AI applications, bad data isn't just an inconvenience, it's a fatal flaw.

For many asset owners, the real barrier to AI adoption is not the technology. It is the unglamorous, expensive, and time-consuming work of getting data into a usable format. 

But if I had to pick the most stubborn barriers, I'd point to strategy and culture, and I'd argue they're harder to fix than any technical problem.

Start with ROI. Measuring the long-term return on an AI investment against high upfront costs and timelines of two to five years is genuinely difficult. In an industry that already leans toward caution, that uncertainty tends to carry the day. Most asset owners would rather wait and watch a peer succeed first, and given that enterprise AI solutions carry an estimated 95 percent failure rate, that instinct isn't entirely irrational. But waiting has its own costs, and the standoff it creates is real.

Add to that the pressure for near-term results, and you start to understand why internal champions are so hard to find. Nobody wants to stake their professional reputation (especially those nearing retirement) on an initiative with long odds and a timeline that outlasts most performance review cycles. And without someone willing to advocate for it when the results are slow and uncertain, and the skeptics are loud, even well-conceived AI projects tend to quietly lose momentum before they ever have a chance to prove their worth.

Of course, there's the governance problem, which, for pension funds specifically, may be the most consequential barrier of all.

Most trustees simply lack the technical background to evaluate a sophisticated AI proposal. That's not a criticism; it's a structural reality. But it creates a credibility gap that makes accountability genuinely murky. Who owns the decision? Who answers for the outcome? When nobody can fully explain how the model works, those questions don't have clean answers.

The fiduciary dimension makes it sharper. The most powerful AI models, such as deep learning and reinforcement learning systems, are inherently opaque. They produce outputs that even their designers can't always explain. That's a problem when you are obliged to make decisions you can justify.

Fiduciary duty doesn't bend for a black box.

And, per usual, the regulatory environment hasn't caught up with innovation. The SEC and DOL have offered little meaningful guidance on what "prudent" AI use looks like in an institutional investment context. Liability arising from AI-driven recommendations that lead to underperformance hasn't been tested in court.

The standards that CIOs and boards need to navigate this responsibly have yet to be codified.

None of this is hypothetical. These are the questions sitting on the desks of investment officers and trustees right now.

The Pathway

Despite these challenges, a responsible path forward exists, and it begins with discipline rather than ambition.

Before committing capital or organizational bandwidth to any AI initiative, asset owners need a structured decision-making framework for evaluating whether a proposed AI application makes sense and, if so, how to pursue it responsibly.

It starts with defining the research problem. This sounds obvious, yet it's where many AI initiatives go awry. The question on the table shouldn't be "how can we use AI?" It should be "what specific problem are we trying to solve, and is AI actually the right tool for it?" That distinction matters. An AI initiative built around a vague aspiration (“we want to be more innovative” or “we want to leverage data better”) makes for expensive failures. The objective needs to be clearly articulated, documented, and agreed upon before anything else happens.

Next comes feasibility. Even with a well-defined problem, asset owners need to honestly assess data quality, system readiness, budget requirements, and the talent needed to execute before making any decision on whether building or buying is the optimal choice.

The third step, and the one most distinctive to asset owners, is fiduciary review. Any proposed AI application needs to be evaluated for model explainability and transparency, data security and privacy, compliance with applicable regulatory standards, and a clear answer to the ownership question of who is responsible for this initiative, and who is accountable when something goes wrong. These are not afterthoughts. They are prerequisites.

After addressing these three issues, asset owners can adopt what I call a crawl-walk-run adoption process.

In the first year, asset owners should consider launching a low-risk AI pilot with measurable KPIs, such as automating specific back-office processes like reporting, reconciliation, and data consolidation, then documenting what works and what doesn't and using those learnings to inform the decision of whether and how to scale. These applications require modest investment and limited internal resources (or are available through third-party vendors); they build credibility with boards, generate measurable efficiency gains, and establish the data and security infrastructure that more sophisticated AI applications will require.

In the second year, the focus shifts from efficiency to insight. Specific use cases with clear success metrics (e.g., automated manager monitoring, decision-support tools for manager selection, real-time factor detection) allow institutions to learn what works and what doesn't, and to document both rigorously. Human judgment remains in the loop throughout. By year three and beyond, the groundwork laid in earlier phases makes it possible to deploy AI where the stakes are higher and the potential payoff is greatest: portfolio construction, asset allocation, and risk assessment.
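To make the year-two use cases concrete, here is a minimal sketch of rolling factor-exposure estimation, the basic building block behind automated manager monitoring and style drift detection. The factor names, window length, and data are illustrative assumptions, not a prescription for any particular fund's setup.

```python
import numpy as np
import pandas as pd

def rolling_factor_exposures(manager_returns: pd.Series,
                             factor_returns: pd.DataFrame,
                             window: int = 24) -> pd.DataFrame:
    """Estimate rolling factor betas via least squares over a trailing window.

    A sustained shift in the betas flags potential style drift worth a
    conversation with the manager; in practice this would run across every
    mandate, every month, without added staff time.
    """
    betas = []
    for end in range(window, len(manager_returns) + 1):
        y = manager_returns.iloc[end - window:end].to_numpy()
        X = factor_returns.iloc[end - window:end].to_numpy()
        X = np.column_stack([np.ones(window), X])   # first column = intercept (alpha)
        coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas.append(coefs[1:])                     # keep the factor betas only
    index = manager_returns.index[window - 1:]
    return pd.DataFrame(betas, index=index, columns=factor_returns.columns)

# Illustrative example with hypothetical monthly data.
rng = np.random.default_rng(seed=1)
idx = pd.period_range("2020-01", periods=60, freq="M")
factors = pd.DataFrame(rng.normal(0, 0.03, size=(60, 2)),
                       index=idx, columns=["value", "momentum"])
manager = (0.002
           + factors @ pd.Series({"value": 0.6, "momentum": 0.2})
           + rng.normal(0, 0.01, size=60))

print(rolling_factor_exposures(manager, factors, window=24).tail())
```

The technique itself is decades old; what changes with AI and modern data infrastructure is the ability to run this kind of monitoring continuously, across every manager, and to surface the exceptions for human review.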

Governance must precede technology at every stage. This means establishing a cross-functional AI steering committee charged with board and staff education, defining acceptable-use standards with explicit human override requirements, drafting policies and procedures to safeguard sensitive data, and building audit and compliance documentation that withstands fiduciary scrutiny. The question of who owns AI-driven decisions is not a bureaucratic detail — it is a fiduciary obligation.

A Call to Action

The cost of waiting is real. Asset owners who delay building the organizational capabilities, data infrastructure, and governance framework required for AI adoption will find it increasingly difficult to close the gap with early movers. But the cost of moving without discipline is equally real.

I'd suggest that asset owners take a structured, deliberate approach that begins with educating boards and staff, evaluating data infrastructure, and identifying a single high-value, low-risk pilot. AI is a journey, not a destination, and the asset owners who navigate it best will be those who treat it with the same rigor they bring to every other fiduciary decision. Within five years, the burden of proof will shift. The question will no longer be "Why adopt AI?" It will be "Why haven't you?"


Angelo Calvello, PhD is the founder of C/79 Consulting LLC and writes extensively on the impact of AI on institutional investing. All views expressed herein are solely those of the author and not those of any entity with which the author is affiliated.