In July of last year, I published an op-ed on asset managers' growing use of real-time alternative data to improve decision-making, optimize portfolios, enhance due diligence, and boost returns. Yet in their hunt for an information edge, managers faced a problem of data authenticity: the exponential growth of misleading digital content (disinformation, deepfakes, fake websites, and AI slop) was polluting digital ecosystems, and because no detection system could reliably verify the authenticity of that content, managers could not be sure of the fidelity of their data.
I concluded, "Unfortunately, the situation is likely to get even worse."
I was right. More and more managers are mining the web and social media, driven in large part by AI-powered tools that have made scrapers dramatically more capable: parsing complex data, extracting signals across diverse contexts, and defeating anti-scraping defenses like CAPTCHAs, bot detection, and IP blocking.
Managers' increased appetite for digital data has been matched by malicious actors' adoption of new AI tools that let them produce even more convincing digital propaganda cheaply, quickly, and at scale. Compounding the problem, the sheer volume and velocity of data make human verification impossible. AI-based detection systems still lag, platforms still reward engagement over accuracy, and content moderation has deteriorated. Every structural condition that made bad data dangerous has worsened.
This deterioration, however, is not simply a matter of degree. As David Gilbert recently observed in Wired, we are now facing an "imminent step-change in how disinformation campaigns will be conducted."
The mechanism behind that step-change is what researchers are calling malicious AI swarms.
A paper published in Science and co-authored by 22 researchers from around the world describes these swarms as "a set of AI-controlled agents that (i) maintains persistent identities and memory; (ii) coordinates toward shared objectives while varying tone and content; (iii) adapts in real time to engagement, platform cues, and human responses; (iv) operates with minimal human oversight; and (v) can deploy across platforms."
What makes this qualitatively different from previous influence operations is the fusion of large language model reasoning with autonomous agents. By combining techniques designed to refine AI reasoning, such as chain-of-thought prompting, agentic memory, planning, and inter-agent communication, bad actors can now deploy thousands of collaborative, malicious AI agents capable of influence campaigns at what the paper calls "unprecedented scale and precision." According to the paper, these swarms "can expand propaganda output without sacrificing credibility and inexpensively create falsehoods that are rated as more human-like than those written by humans." More troubling, they can infiltrate online communities and sustain coherent narratives across agents, producing a "synthetic consensus" that "creates a mirage of bipartisan grassroots consensus with enhanced speed and persuasiveness." The result is deeply embedded manipulation that lets operators nudge public discourse almost invisibly over time.
Let me continue to quote from the paper:
"Classic coordinated inauthentic behavior amplifies the spread of information by inflating content frequency and engagement to trigger algorithmic visibility through repetition, manual scheduling, and rigid scripts. Swarms differ by fusing scale, heterogeneity, and real-time adaptation: they can generate organic-looking, context-aware content, sustain coherent narratives across agents, and evolve with feedback. This synthesis, enabled by model-driven generation, memory, and planning, could achieve effects that conventional, human-intensive operations cannot match in speed or cost."
As Jonas Kunst, a professor of communication at BI Norwegian Business School and one of the paper's authors, told Wired, "We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated."
The authors' conclusion is bleak: "By adaptively mimicking human social dynamics, they threaten democracy."
If AI swarms pose this level of threat to the broader information environment, institutional investors that rely on digital data as an investment input are directly in the line of fire.
I recommend that investors read the clearly written, five-page paper to better understand AI swarms and their potential adverse impacts.
I write "potential" not to soften the argument but to be precise: no system currently exists to detect whether AI swarms have been deployed to generate inauthentic behavior. The threat is real and, for now, invisible. It is an undetectable attack vector operating inside platforms structurally incentivized to amplify it, in a regulatory environment with neither the tools nor the will to respond. For asset managers using digital content in their models, that is not a future risk. It is a present one.
The practical implications are twofold. If you are a manager using digital information as an input to your investment process, you need to reassess not just the quality of your data but the integrity of the environment from which it is drawn. And if you are an allocator, you have both the standing and the responsibility to demand that your managers explain in detail and in writing how they ensure the fidelity of their alternative data.
The original threat was serious. The evolved version is more so. And as I wrote a year ago, things are only going to get worse.
Angelo Calvello, PhD is the founder of C/79 Consulting LLC and writes extensively on the impact of AI on institutional investing. All views expressed herein are solely those of the author and not those of any entity with which the author is affiliated.