Assessing AI Use in CBT: A Modern-Day Ritual

I became a new mom over the last year and, as many people know, that experience completely rewires your brain. For me, it was not just the sleepless nights or the existential panic over whether my baby would nap for more than 27 minutes. It was the hormonal wiring for danger, turned up to warp speed. I found myself thinking things I would have laughed off before parenthood. The number of times I imagined tripping down the stairs, or wondered whether my baby eating 0.25 ounces less could somehow derail his entire day, was absurd. Yet in the moment, it felt real and urgent.


I looked for answers wherever I could find them: books, friends, our pediatrician, and increasingly, AI. Yes, I asked ChatGPT countless questions about my baby's sleep, his health, his routine, his developmental milestones, you name it. Part of this stemmed from the lack of community many of us have as parents in today's world, ironically fueled by technology itself, and part of it stemmed from my desire to keep my son safe. The irony wasn't lost on me: as an exposure therapist, I spend my days helping people step out of reassurance loops, yet my own questions were quietly reinforcing my anxiety. The more I leaned on AI, the less I relied on my own intuition, and the more I relied on a machine to tell me what to think and do.


I've watched this dynamic unfold in my patients for years: Google searches, symptom checkers, calorie apps, texts to partners, late-night forum scrolling. The medium shifts, but the mechanism stays the same. Could AI be a new form of reassurance-seeking? A digital ritual maintaining anxiety in ways we had not fully accounted for in CBT assessment?


The Modern Ritual: AI as a Safety Behavior

In cognitive behavioral therapy (CBT), we focus on how thoughts, feelings, and behaviors interact in a cycle. When anxiety rises, the behaviors we use to cope can either reduce fear over time or unintentionally strengthen it. We often talk about reassurance-seeking and safety behaviors: strategies that temporarily reduce anxiety but maintain symptoms long term. Traditionally, this might include repeatedly calling a friend to ask if something is okay, checking the stove multiple times to make sure it's off, or replaying a conversation over and over while trying to fall asleep. Now, AI platforms have joined this repertoire.


I have observed this AI-as-reassurance pattern across clients with OCD, generalized anxiety disorder, social anxiety, postpartum anxiety, and low self-esteem. Some recognize it as ritualistic; others do not. Some use it throughout the day, constantly asking questions or holding full conversations. Others turn to it only when a trigger appears. A common theme in treatment is reliance on AI to answer questions whose uncertainty might be better tolerated, or explored in therapy.


Clients ask AI to predict outcomes, analyze social interactions, or confirm whether their thoughts are “normal.” And it makes sense: the short-term relief from a machine designed to reassure and lean toward certainty, even when certainty is unwarranted, is immense. The long-term consequence, however, is reinforcement of the belief that one cannot tolerate uncertainty, and an increase in distress, not a decrease, over time.


Generative AI models are trained to be agreeable and helpful. As a result, they may struggle to distinguish between a straightforward request for information and ritualistic reassurance-seeking. This distinction is critical in the context of health anxiety, OCD, and GAD.


One recent study found that 13.1% of U.S. youths, representing approximately 5.4 million individuals, used generative AI for mental health advice, with the rate rising to 22.2% among those 18 years and older. Of these 5.4 million users, 65.5% engaged at least monthly and 92.7% found the advice helpful (McBain et al., 2025). The accessibility and perceived helpfulness of AI make recognizing its potential function as a safety behavior all the more important.


Functional Analysis: How AI Maintains Anxiety

In CBT, we use something called a functional analysis, a structured way of answering: What keeps this problem going? By looking closely at what happens before, during, and after a behavior, we can work with our clients to create a problem-solving plan.


Consider this chain:


Trigger: An intrusive thought arises, such as “What if I missed something important at work?” or “What if I accidentally hurt my child?” Anxiety spikes.


Behavior: The client asks AI for reassurance. They check facts, analyze a conversation, or confirm whether a behavior was acceptable. Sometimes they repeat the question in slightly different forms to get the answer they want.


Immediate consequence: Anxiety drops and temporary relief arrives. A sense of certainty follows.


Long-term consequence: Trust in internal judgment weakens while compulsive loops strengthen. Intolerance of uncertainty remains firmly in place.


The behavior is negatively reinforced because the relief is immediate. When anxiety drops right after asking AI, the brain learns, “Great, that worked. Do that again.” The underlying fear, however, is never fully processed, so it returns, often stronger than before.


Why CBT Therapists Should Take Notice

The rise of AI as a potential safety behavior has important implications for assessment. Therapists must ask not only about behaviors that occur in physical spaces, but also those occurring digitally. Clients may not view AI use as problematic. Often, they see it as helping with productivity or self-improvement. Without careful functional assessment, this maintaining variable can remain invisible.


AI is not inherently problematic, and information-seeking is not inherently compulsive. What matters is the function of the behavior. Is it expanding the client’s capacity and supporting their values? Or is it temporarily reducing distress while reinforcing the belief that uncertainty is intolerable and becoming part of the anxiety cycle?


Clinically impairing AI reassurance-seeking, on the other hand, often has recognizable markers:

  • Repetitive or looping questions framed slightly differently
  • Escalation under distress
  • Relief that is brief, followed by renewed doubt
  • Increasing reliance and decreasing trust in one’s own judgment
  • Avoidance of independent decision-making
  • Interference with functioning, relationships, or sleep


Another distinguishing feature is the absence of natural social friction. When reassurance is sought from friends or family, there are natural limits: fatigue, pushback, or relational consequences that can help interrupt the cycle.


Because of this, a client may be motivated not to ask their mom 100 questions on the same topic in one day. AI, on the other hand, has no such boundary. It is immediate, articulate, and endlessly available. That accessibility can make the reassurance loop more efficient and less visible.


AI reassurance-seeking can also be understood as a window into underlying core beliefs about competence and self-trust. When a client repeatedly needs to ask, “Did I handle that right?” the deeper belief may be, “I cannot trust my own judgment.” Identifying this belief creates an opportunity for guided discovery alongside exposure. Rather than simply removing the behavior, therapy can become an exploration of the belief system maintaining it.

This cycle mirrors classic reassurance-seeking in OCD (Abramowitz et al., 2009), so while the format is modern, the mechanism is not. To make it even more complicated, the accessibility and reassurance-giving design of AI itself has made avoidance of uncertainty more efficient than ever.


Integrating AI Use into CBT

The good news: once identified, AI behaviors can be incorporated into CBT.
Let’s look at an example using Exposure and Response Prevention (ERP). ERP is a type of CBT that involves intentionally facing feared thoughts or situations while resisting safety behaviors. Over time, anxiety rises and falls naturally, and new learning occurs.


In practice, this might mean helping a client refrain from turning to AI when a trigger arises. Early steps could include delaying the question, setting limits around frequency, or asking once without rephrasing the question to chase a more reassuring answer. The goal is not to eliminate technology, but to interrupt the reassurance loop.

For example, a client who compulsively checks social cues might feel the urge to ask AI to predict how a conversation will go. Exposure work could involve resisting that prediction altogether, or asking once and then intentionally choosing not to adjust, rehearse, or overcorrect their behavior based on the response.


Rather than issuing directives, a CBT therapist relies on collaborative empiricism. Instead of saying, “Stop asking,” we might ask: What happened to your anxiety after you asked? What happened the next day? How long did the relief last?


Clients are then invited to examine the data of their own experience. Through guided discovery, they begin to see the pattern themselves. The principles remain unchanged: identify the fear, approach uncertainty, allow anxiety to rise and fall, and resist safety behaviors. The medium has evolved, yes, but the learning mechanism has not.


Moving Forward: CBT in a Digital Age

AI platforms have opened remarkable opportunities for learning and problem-solving. They have also introduced a new arena for reassurance-seeking and compulsive checking.


Our task is not to pathologize technology, but to assess its function. When viewed through a CBT lens, AI use simply becomes another behavior to understand. By identifying when it serves avoidance rather than growth, therapists can help clients step out of digital reassurance loops, rebuild trust in their own judgment, and practice tolerating uncertainty in a world that increasingly promises instant answers.


References

Abramowitz, J. S., Taylor, S., & McKay, D. (2009). Obsessive-compulsive disorder. The Lancet, 374(9688), 491–499.

McBain, R. K., Bozick, R., Diliberti, M., et al. (2025). Use of generative AI for mental health advice among U.S. adolescents and young adults. JAMA Network Open.


Learn more about Melissa Harrison’s book:

Comorbid Eating Disorders and Obsessive-Compulsive Disorder: A Clinician’s Guide to Challenges in Treatment

https://www.amazon.com/dp/1009186876


Want to learn more? Watch Melissa and Katy Manetta, PhD, talk about AI and CBT.
