New research from Waseda University introduces a surprising lens on human-AI interaction: attachment theory. Traditionally used to understand human relationships, the theory is now being applied to explore how users emotionally respond to AI — not just in terms of trust and utility, but in ways that mirror attachment anxiety and avoidance.
For marketers and media strategists, the findings open up a fresh way of thinking about AI-powered customer experiences.
Guidance for AI design
The researchers have devised a novel self-report scale and highlighted the concepts of attachment anxiety and avoidance toward AI. Their work is expected to serve as a guideline to further explore human-AI relationships and incorporate ethical considerations in AI design.
Artificial intelligence (AI) is now ubiquitous, and human-AI interactions are becoming more frequent and complex, a trend that is only expected to accelerate. Scientists have therefore made considerable efforts to understand human-AI relationships in terms of trust and companionship. However, these interactions may also be understood through attachment-related functions and experiences, concepts traditionally used to explain human interpersonal bonds.
In an innovative work comprising two pilot studies and one formal study, a group of researchers from Waseda University, Japan, including research associate Fan Yang and professor Atsushi Oshio from the Faculty of Letters, Arts and Sciences, applied attachment theory to examine human-AI relationships. Their findings were published online in the journal Current Psychology on 9 May 2025.
Yang explains the motivation behind their research. “As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security. These characteristics resemble what attachment theory describes as the basis for forming secure relationships. As people begin to interact with AI not just for problem-solving or learning, but also for emotional support and companionship, their emotional connection or security experience with AI demands attention. This research is our attempt to explore that possibility.”
Notably, the team developed a new self-report scale called the Experiences in Human-AI Relationships Scale, or EHARS, to measure attachment-related tendencies toward AI. They found that some individuals seek emotional support and guidance from AI, similar to how they interact with people. Nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.
Anxiety and avoidance
This study differentiated two dimensions of human attachment to AI: anxiety and avoidance. An individual with high attachment anxiety toward AI needs emotional reassurance and harbours a fear of receiving inadequate responses from AI. In contrast, high attachment avoidance toward AI is characterised by discomfort with closeness and a preference for emotional distance from AI.
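To make the two dimensions concrete, here is a minimal Python sketch of how an EHARS-style questionnaire might be scored, with each dimension taken as the mean of its items. The item numbers, the item-to-dimension groupings, and the 7-point Likert range are illustrative assumptions for demonstration only; they are not the published EHARS items or scoring rules.

```python
from statistics import mean

# Hypothetical mapping of questionnaire items to the two dimensions.
# These item indices are assumptions, not the published EHARS items.
ANXIETY_ITEMS = [1, 3, 5, 7, 9]     # e.g. "I worry the AI's responses won't meet my needs"
AVOIDANCE_ITEMS = [2, 4, 6, 8, 10]  # e.g. "I prefer not to rely on AI for emotional matters"

def score_ehars(responses: dict[int, int]) -> dict[str, float]:
    """Average 1-7 Likert responses within each hypothetical subscale."""
    for item, value in responses.items():
        if not 1 <= value <= 7:
            raise ValueError(f"Item {item}: response {value} outside 1-7 Likert range")
    return {
        "attachment_anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "attachment_avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

# Example: a respondent high in anxiety, low in avoidance.
responses = {1: 6, 3: 7, 5: 6, 7: 5, 9: 6, 2: 2, 4: 1, 6: 2, 8: 3, 10: 2}
print(score_ehars(responses))  # {'attachment_anxiety': 6.0, 'attachment_avoidance': 2.0}
```

Treating the two dimensions as continuous scores, rather than a single attachment type, is what lets the same instrument describe users who are high on one dimension, both, or neither.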
However, these findings do not mean that humans are currently forming genuine emotional attachments to AI. Rather, the study demonstrates that psychological frameworks used for human relationships may also apply to human-AI interactions. The present results can inform the ethical design of AI companions and mental health support tools. For instance, AI chatbots used in loneliness interventions or therapy apps could be tailored to different users’ emotional needs, providing more empathetic responses for users with high attachment anxiety or maintaining respectful distance for users with avoidant tendencies. The results also suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.
Furthermore, the proposed EHARS could be used by developers or psychologists to assess how people relate to AI emotionally and adjust AI interaction strategies accordingly.
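Building on that idea, the sketch below shows one way a developer might route users to different response styles based on EHARS-style scores. The midpoint cutoff and the style labels are invented for illustration; the study does not prescribe thresholds or specific design responses.

```python
# Routing chatbot response styles by attachment profile.
# The 4.0 cutoff (midpoint of a 1-7 Likert scale) and the style
# names are assumptions made for this sketch.
MIDPOINT = 4.0

def response_style(anxiety: float, avoidance: float) -> str:
    if anxiety > MIDPOINT and avoidance > MIDPOINT:
        # High on both: empathetic but non-intrusive.
        return "gentle_low_pressure"
    if anxiety > MIDPOINT:
        # High anxiety: more frequent reassurance, empathetic framing.
        return "warm_reassuring"
    if avoidance > MIDPOINT:
        # High avoidance: task-focused, respectful distance.
        return "neutral_task_focused"
    return "default"

print(response_style(anxiety=6.0, avoidance=2.0))  # warm_reassuring
```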
“As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. Our research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI. Lastly, it promotes a better understanding of how humans connect with technology on a societal level, helping to guide policy and design practices that prioritize psychological well-being,” concludes Yang.