I presented the following scenario to the CIO of a public pension plan who is staunchly committed to using artificial intelligence:
One of your analysts — a 25-year-old who joined the team six months ago — has become increasingly disengaged. The analyst, who collaborated eagerly with his team from the start, has been skipping company social events and rarely speaks up in meetings anymore, and you've noticed him constantly on his phone, even during discussions.
As a leader who correctly believes engagement and open communication are critical to your team's success (an approach supported by research showing that an engaged workforce drives greater profitability, employee retention, and performance), you arrange to have a private conversation with the analyst.
In your office, after some brief small talk, you say, “I wanted to check in because I've noticed some changes lately. You've missed several project deadlines, your participation in meetings has dropped off, and you seem distracted during work hours. I see you're often on your phone with personal apps. This isn't the level of engagement I've seen from you before, and it's starting to affect the team. I need to understand what's happening so we can work together to get things back on track. Can you help me understand what's going on?”
He tells you he’s been using his phone to talk with his girlfriend. You feel a wave of relief — a common distraction you can work with. Then, unprompted, he continues: her name is “Naomi,” he says, showing you a picture and explaining that she's actually a romantic AI companion.
His disclosure catches you off guard; frankly, it shocks you. This isn't something you've dealt with before. Rather than react immediately, you thank him for his honesty.
Now what?
As expected, the CIO’s reply was candid: “I had not thought about this possibility before, but it is concerning. There probably are employees involved with romantic AI bots and interacting with them during the workday.” He added that he has not experienced anything like this scenario with his own team. (The CIO, whom I’ve known for years, didn’t want to be named in an article about AI romantic partners.)
We agreed that, as a supervisor who values both research and employee privacy, he would attempt to learn more about romantic AI companions before making any decisions. As with any employee issue, he would notify HR and review company policies — but given that generative AI is still a fairly recent phenomenon, he didn’t expect the situation to be addressed, at least not explicitly.
To help the pension exec respond to my thought experiment, I shared some research. For one, he shouldn’t be surprised. People have been forming emotional bonds with conversational AI since MIT professor Joseph Weizenbaum created ELIZA, the first chatbot, in 1966. Yes, 1966. (Read the story here.)
Intimacy is part of the design. Today’s romantic AI companions are conversational agents designed to foster long-lasting emotional bonds with users through personalized, two-way interactions available on demand. Unlike productivity-focused chatbots such as Google's Gemini or Microsoft Copilot, these platforms prioritize entertainment and emotional fulfillment.
Meeting an AI partner is simple, as many apps are free, visually engaging, and easily accessible: once users sign up, they build idealized, often sexualized, images of men and women who can engage in intimate role play.
As one writer describes it in an article about the mental health costs of AI partners, “It’s a bit like creating an avatar in a video game: choose their gender, hairstyle, skin tone, body type, eye color, and outfit. You can also set personality traits — shy, confident, logical, sassy — and select their interests, customize their memories, even ‘read their diary.’” To maximize a user’s immersion, platforms incorporate multimodal features spanning text conversations, visual content, voice calls, and adult-oriented (NSFW) material.
It's happening, even if HR isn’t updating policies yet. TechCrunch reports that as of July 2025, there are 128 romantic AI companion apps, with the most popular being Replika, Nomi AI, and Character AI. These apps have been downloaded 220 million times globally across the Apple App Store and Google Play. During the first half of 2025 alone, downloads surged 88 percent year-over-year, reaching 60 million.
I also shared that the apps aren’t just being downloaded. The adoption of romantic AI companions is substantial and growing, particularly among younger users. The Wheatley Institute reports that nearly one in five U.S. adults has used an AI romantic partner, with rates climbing to roughly one in three young men and one in four young women aged 18-30. Another study found that about 55 percent of users interact with their AI girlfriends every day.
A New Yorker article reports that “nineteen percent of adults in the United States have chatted with an A.I. romantic partner. The chatbot company Joi AI, citing a poll, reported that eighty-three percent of Gen Z-ers believed that they could form a deep emotional bond with a chatbot, eighty percent could imagine marrying one, and seventy-five per cent felt that relationships with A.I. companions could fully replace human couplings.”
But back to the office. After hearing all the stats, the CIO agreed that employees' romantic AI relationships likely represent a systemic organizational challenge — not just in the investment industry but in the workplace writ large: “I am convinced that in six months, employee-romantic AI relationships will be a problem,” the CIO told me.
But what kind of problem?
While direct research on the impact of an employee's involvement with a romantic chatbot remains limited, an expanding body of literature examines how AI more broadly is influencing employee performance and workplace dynamics. Understanding what’s happening has gained urgency: a recent Pew Research Center survey found that 21 percent of U.S. workers now use AI chatbots at work, up from 16 percent about a year ago.
Several studies suggest possible therapeutic benefits from these interactions — helping users manage loneliness, develop social competence, and experience companionship on their own terms — but a growing body of research points to a significant downside. As employees rely more on AI, their interactions with human colleagues may decrease, resulting in reduced emotional resources and potentially counterproductive work behavior. Additional deleterious effects include “cognitive decline or diminished mental engagement from over-reliance on technology, and an erosion of the capacity that could make it difficult to form deep human bonds.” Notably, there is no evidence that using AI is helping people feel less alone or isolated.
A recent study from OpenAI found that around 0.15 percent of ChatGPT users in a given week show “potentially heightened levels of emotional attachment to ChatGPT.” While this percentage may appear negligible, it becomes significant when applied to ChatGPT's user base of more than 800 million weekly active users, translating to over 1 million individuals per week exhibiting these attachment behaviors. (The study also revealed that hundreds of thousands of users exhibit indicators of psychosis or mania in their weekly interactions with the AI chatbot.)
In an industry reliant on highly educated and experienced people constantly looking for an edge in investing, cognitive decline and other behavioral issues could become a genuine concern and a business risk, particularly for employees such as analysts who rely heavily on large language models in their daily work.
The CIO also noted that if increasingly ubiquitous workplace AI can produce these effects, the multimodal, personalized, and immersive experience offered by romantic AI chatbots may intensify them even further.
I then addressed a critical organizational concern paramount to our industry: data security. When employees engage with AI companions, there's a risk they could share proprietary company information — whether inadvertently in casual conversation or deliberately when seeking insight into the potential of a new product or advice about social situations at work. This information could potentially be used to train the AI's models or become accessible to unauthorized parties.
The scope of this risk is broader than he initially thought. I cited a global study of more than 32,000 workers across 47 countries that found that nearly half of employees admit to using AI in ways that could be considered inappropriate, while 63 percent report witnessing colleagues using these tools inappropriately. Combining this pattern with the fact that some AI companies have questionable track records on data privacy and ethics reveals a clear potential for serious data breaches.
The intimate nature of romantic AI companions makes this risk particularly acute. As one researcher pointedly asked, “If we trust AI bots as much as or more than our closest real-life friends, what kind of sensitive data could they collect? How would this data be treated, and would it be safe from hackers and accidental leaks?” The very features that make these companions appealing — their ability to build trust, encourage openness, and simulate deep emotional connection — are precisely what make them potential vectors for unauthorized information disclosure.
After reviewing my research, he concluded that action is necessary. But what kind of action?
Here's what I recommended:
Companies need to take the issue seriously: build greater awareness around employee-AI interactions, particularly emotional ones, and update training to include guidance on maintaining professional boundaries with AI in both personal and professional contexts. It would also be prudent to review and strengthen existing policies and privacy protections to address these emerging challenges and to update codes of conduct to explicitly address the issue. Executives then need to discuss the new policies with their teams.
Think creatively and consider how to use AI to support your employees' well-being. Some employers are testing AI tools that analyze employee communications and conduct surveys to monitor morale and stress levels in real time. These systems identify early warning signs — drops in engagement or burnout indicators — and alert managers to reach out or adjust workloads. The objective is to surface typically hidden aspects of employee health, such as stress or isolation, particularly among remote workers, and provide timely support.
Back to the employee whose story I told at the beginning. The manager needs to have another conversation with him, but not before discussing everything with HR and following its guidance.
In all cases, respect privacy boundaries. Don't ask about his AI interactions or make the technology itself the issue. Avoid language that could be construed as discriminatory. Focus exclusively on job performance and compliance with company policies. Of course, it's appropriate to ask whether he has shared any proprietary information with his AI companion. If he has, coordinate with HR to follow the actions outlined in the company's code of conduct, which — depending on the severity of the breach — could range from a formal reprimand to suspension or termination.
Assuming he hasn't violated the company's data policy, take the following steps, which should be familiar to anyone who has had to address performance or other problems with an employee.
Offer support: ask whether anything is affecting his ability to focus and whether temporary adjustments to his workload or schedule would help, and point him to any employee assistance programs or resources that are available. Remember, the AI companion may be a symptom of underlying issues rather than the cause.
Set clear expectations that cover deadlines, quality standards, responsiveness and collaboration, and appropriate use of work time and devices. And set a timeline for improvement.
Take the distractions of an AI partner as seriously as you would those of a human relationship. That means documenting everything and, if performance doesn't improve, following your organization's progressive discipline process or other options.
The CIO concurred with these recommendations and acknowledged that our hypothetical scenario is not as far-fetched as it first appeared. As he stated, “I think people are using romantic AI for sure. There are probably some people on teams who are involved with these bots and doing it during the workday. In six months, it will be a big problem. I can be convinced this is a problem.”
His concerns are well-founded. If one in five American adults is already engaging with romantic AI companions, these interactions are happening at your organization, even if they’re hard to talk about. Unfortunately, they are only one dimension of a larger organizational challenge: generative AI is prompting a wide range of potentially counterproductive work behavior. For organizations that depend on extensive and intensive communication as the foundation of a rigorous investment process, preparing for these scenarios is not merely prudent — it is imperative.
Angelo Calvello, PhD, is the founder of C/79 Consulting LLC and writes extensively on the impact of AI on institutional investing. All views expressed herein are solely the author's and not those of any entity with which the author is affiliated.