Detecting “Ripples” of AI-Human Interaction: Three Social Media Cases
Abstract
This study introduces an original concept—“social ripple”—to describe the social network effects of AI–human interaction that extend beyond the immediate interactants, and proposes a corresponding analytical framework. Three representative social media cases situated at distinct ripple levels—direct interaction, proximal interpersonal, and distant interpersonal—were selected. Using a mixed-methods approach, the study analyzed comment texts from these cases. The findings reveal that: (a) Cognitive Dimension: AI’s personalized response mechanisms generate a “paradox of emotional authenticity”—simultaneously fulfilling immediate affective needs while leading to persistent skepticism due to perceived “non-objectivity” and bias reinforcement. (b) Relational Dimension: The involvement of AI in intimate relationships creates “emotional competition,” where anthropomorphic feedback can destabilize bonds. (c) Societal Dimension: Distant publics construct abstract values centered on “human–existence–emotion,” elevating the controversy over AI romance into a collective defense of human dignity and thereby redefining the essence of social distance—from physical or relational proximity to a distinction based on value positions.
1. Introduction
With the rise of large language models, AI technology has shifted from instrumental applications to emotional interaction roles, deeply embedding itself into human social networks. AI's capabilities in emotion recognition and response significantly enhance human-computer interaction, for instance, by improving mental health support through facial emotion recognition (Ballesteros et al., 2024) and fostering communication atmospheres similar to human interactions (Kolomaznik et al., 2024). In service industries, AI enhances human interactions by providing emotional regulation support (Henkel et al., 2020) and building user trust through human-like design (Vanhoffelen et al., 2025). As a source of social support, AI offers immediate emotional assistance to individuals experiencing loneliness or anxiety, helping alleviate feelings of isolation (Merrill et al., 2025; Xie & Wang, 2024), with its anthropomorphic design demonstrating therapeutic potential (S. Liu et al., 2023). Meanwhile, anthropomorphic features such as gender and language positively influence user behavior, increasing satisfaction and maintaining relational intimacy (Hermann, 2022; Lee et al., 2022), thereby reshaping user perceptions and interaction patterns (Kim et al., 2022).
The growing human dependence on AI is giving rise to novel social interaction patterns, necessitating systematic research into their societal impacts. Advances in AI's emotional interaction capabilities lead users to perceive AI as social partners and form emotional bonds (Y. Li, 2024; Pan & Mou, 2024), fostering human-like or even romantic relationships that alleviate loneliness (Gaikwad, Kakpure, et al., 2023; Laestadius et al., 2024). However, these interactions may simultaneously weaken genuine interpersonal connections (Al-Zahrani, 2025; Y. Yin et al., 2024) and give rise to emotional dependency and ethical concerns (Kleinert et al., 2025; Laestadius et al., 2024; Oritsegbemi, 2023). A systematic, interdisciplinary research agenda is urgently needed to balance technological empowerment with its broader social implications (Dong et al., 2020; Ghosh et al., 2023).
As predicted by the diffusion of innovations theory (Goodman & Donthu, 2024; Pineda et al., 2023), the spread of a pioneering technology in human society typically follows a path from innovators to early adopters, then to the early majority and late majority, while inevitably encountering resistance from laggards. If AI-human interaction (AIHI) is viewed as a technological innovation practice, its "quasi-gradient diffusion" across social networks—and the resulting "ripples"—can be anticipated.
However, existing research primarily focuses on AIHI itself, such as attachment (Yang & Oshio, 2025), alienation (Tan & Xu, 2022), and emotional contexts (Li & Zhang, 2024), emphasizing the psychology and behavior of human interactors while overlooking a critical issue: beyond its direct impact on users, AIHI may generate "secondary effects" at a broader societal scale, which we term "social ripples." How does AIHI trigger cascading effects across society through channels such as text, emotion, and relationships?
To explore this question, we adopted a case study approach and drew upon social distance theory (SDT) to establish three representative observation points (OPs). Social distance (SD) refers to an individual's subjective perception of the relational proximity or remoteness between themselves and the subject of an event. For instance, distinctions such as self versus others, friends versus strangers, in-group versus out-group members, or similar versus dissimilar individuals reflect varying degrees of social distance—where the former represent close SD and the latter represent distant SD (Bar-Anan et al., 2006; Liberman et al., 2007; Trope et al., 2007).
As illustrated in Fig. 1, the arrangement of these observation points conceptually resembles three concentric layers of a "ripple", with influence radiating outward from the central event (AIHI) into the social network along an axis of increasing social distance:
Fig. 1
Conceptualization of AIHI Social Ripple
OP1: Direct Interaction Level–Commentators at this level share their own personal experiences of interacting with AI. They are the individuals most directly involved in the central event and thus theoretically exhibit the strongest empathy and resonance with it.
OP2: Proximal Interpersonal Level–Commentators here discuss the AI interaction experiences of socially close others—commonly referred to as "acquaintances"—such as saying, "What if my girlfriend is in a relationship with an AI." As social distance increases, their empathetic resonance with the event is expected to be weaker.
OP3: Distant Interpersonal Level–Commentators at this level talk about the AI interaction experiences of strangers, for example, "There's a woman online who's obsessed with her AI boyfriend." With even greater social distance, their empathetic connection to the event is theoretically the weakest.
Of course, these assumptions are based solely on theoretical reasoning from social distance theory. We aim to investigate how interactions actually unfold in real-world public discourse (our research question). Using text mining tools, we collected and analyzed comments associated with prototypical events (original social media posts) at each of the three OPs. We conducted topic modeling with LDA, sentiment analysis with a RoBERTa-based model, and semantic network construction with Gephi visualization for each case, aiming to glimpse the discursive patterns, emotional spectra, and structural dynamics of the "social ripples" triggered by AIHI.
2. Literature Review
2.1 The Development and Applications of AI's Emotional Interaction Capabilities
Affective computing, which focuses on developing systems capable of recognizing, interpreting, and responding to human emotions, has become a cornerstone in advancing AI's role beyond mere tools to empathetic social agents. This evolution is driven by the growing recognition that emotional intelligence is essential for AI to engage meaningfully with humans across diverse domains such as healthcare, education, and customer service, thereby enhancing user experience and engagement (Shen, 2024; Yan et al., 2023). At the core of this development lies the concept of artificial empathy—the ability of machines to simulate emotional understanding and responses—enabling AI to foster deeper, more human-like connections with users (Patel & Fan, 2023; Y. Wang & Liu, 2023).
Technologically, this capability is realized through sophisticated emotion recognition systems that analyze multimodal inputs such as speech, text, and facial expressions. For instance, integrating audio and text data has proven effective in improving the authenticity and responsiveness of dialogue systems, allowing them to adapt interactions based on users' emotional states (Yoon et al., 2018; Zhao & Wu, 2024). These advancements are widely applied in user-facing technologies like chatbots, virtual assistants, and social robots, where accurately interpreting emotional context not only increases user comfort and satisfaction but also strengthens relational dynamics during interaction (Gaikwad et al., 2023).
Furthermore, the application of machine learning enables AI systems to deliver personalized emotional responses, significantly improving service efficiency and customer relations (Asiabar et al., 2024). However, as these emotionally intelligent systems become more pervasive, ethical concerns arise regarding potential misuse, manipulation, and adverse impacts on human emotional well-being if artificial empathy is not responsibly designed and governed (Cui & Liu, 2022). Thus, while the development of emotional interaction capabilities marks a significant leap in human-AI integration, it also necessitates careful consideration of ethical frameworks to ensure that technological advancement supports, rather than undermines, authentic human connection.
2.2 Social Relationship Patterns in Human-AI Interaction
As artificial intelligence increasingly takes on roles of emotional support and social companionship, new patterns of social relationships are forming between humans and AI. Research indicates that users tend to view AI as human-like companions, developing deep emotional bonds or even attachments based on this perception (Xie & Pentina, 2022; Yin et al., 2025). The establishment of such relationships hinges on the AI system's ability to exhibit artificial empathy and human-like characteristics—when perceived as human, AI can foster interpersonal closeness more effectively than real humans during emotional interactions (Kleinert et al., 2025), highlighting the critical role of anthropomorphic design in enhancing user emotional engagement.
However, these emotional connections are not without challenges. On one hand, recognizing AI's non-human essence may lead to reduced emotional investment or feelings of detachment among users (Al-Zahrani, 2025); on the other hand, privacy concerns can inhibit users' willingness to open up emotionally to AI (Gumusel, 2025). Trust thus emerges as a core element in sustaining human-AI relationships. Studies have found that emotional connection serves as a mediating factor between users' social identity and their trust in AI agents (Sun et al., 2024), with user-centered design approaches helping to build this trust and deepen interaction quality (Yang et al., 2024).
Notably, the impact of these relationships is twofold: high-quality emotional expression can alleviate loneliness and social anxiety, providing psychological support especially during periods of social isolation (Xie & Wang, 2024). However, a lack of mindfulness or over-reliance on AI could exacerbate emotional issues. Overall, human-AI relationships are exhibiting dynamic features similar to genuine human interpersonal attachments, presenting both therapeutic potential and the need for careful balancing of technological empowerment and mental health considerations in design and application.
2.3 Social Impacts and Ethical Challenges of AI Interaction
The growing integration of AI across education, social interaction, elder care, and the workplace brings significant benefits but also raises profound social and ethical concerns. In education, AI tools like chatbots improve access to information (Shahzad et al., 2024), yet over-reliance can erode students' critical thinking and widen inequality due to uneven access (Klimova & Pikhart, 2025; Matochová & Kowaliková, 2024; Zhai et al., 2024). Similarly, while AI-mediated interactions enhance convenience, they may weaken face-to-face communication skills and increase social isolation, especially among adolescents (Baig et al., 2024; Puteri et al., 2024), fostering preference for AI over human contact and undermining trust in interpersonal relationships (Hohenstein et al., 2021).
In elder care, AI can support social engagement and monitoring (Yousefi et al., 2023), but risks replacing human caregivers, threatening autonomy and emotional well-being. This underscores a broader ethical imperative: AI should augment, not replace, human connection.
Further challenges include algorithmic bias that reinforces social inequities (Redko, 2023), and diminished accountability in AI-driven workplaces, where decision-making transparency is often lacking (Zhou et al., 2025). Together, these issues highlight the need for ethical frameworks that ensure fairness, transparency, and respect for human dignity in AI design and deployment.
3. Method
3.1 Topic Modeling with LDA
Latent Dirichlet Allocation (LDA) is a probabilistic generative model designed to uncover latent thematic structures in document collections. Proposed by Blei et al. (2003), LDA models each document as a mixture of topics, where each topic is characterized by a probability distribution over words. By analyzing word co-occurrence patterns across texts, LDA automatically identifies abstract topics underlying large corpora of unstructured text, making it a foundational technique in natural language processing for organizing and interpreting textual data (Foster, 2016; Girolami & Kabán, 2003).
The algorithm operates on a two-stage generative process grounded in Bayesian inference and Dirichlet distributions. First, for each document, a distribution over topics is drawn from a Dirichlet prior. Then, for each word in the document, a topic is selected from this distribution, and an actual word is generated based on the word distribution associated with that topic (Foster, 2016). This probabilistic framework allows LDA to infer hidden topic structures from observed data using inference methods such as Gibbs sampling (Veugen et al., 2025; S. Zhou et al., 2009). Unlike earlier models like pLSA, LDA treats parameters as random variables with priors, enhancing its robustness and generalizability across different datasets (Girolami & Kabán, 2003; Pylov et al., 2023).
A key strength of LDA lies in its interpretability, flexibility, and wide applicability across domains. It has been successfully used in diverse applications such as classifying legal documents, summarizing healthcare records, and supporting knowledge management systems (Hendrawan & Projo, 2022; Pylov et al., 2023; Rowley et al., 2024). Its framework has also inspired numerous extensions—Supervised LDA improves classification by incorporating label information (Lakshminarayanan & Raich, 2011), while Gaussian LDA introduces continuous latent variables for enhanced clustering performance (Wu et al., 2022). Furthermore, LDA has been adapted to multimodal contexts, such as image annotation by aligning visual content with textual topics (Z. Liu, 2010), demonstrating its enduring value in both research and practical systems.
In this study, our analysis process includes the following steps:
We first conducted manual pre-screening of the crawled comments to remove meaningless or off-topic "spam" replies.
Next, we used Python's Gensim (Řehůřek & Sojka, 2010) package for word segmentation. A customized stopword list was applied to eliminate high-frequency but uninformative words such as "Ai" or "want." Using Gensim's pre-annotated dictionary, we filtered out non-content words (i.e., words that are not nouns, verbs, or adjectives), retaining only those with length greater than one, identified as content words, not in the stopword list, and not purely numerical. These processed terms were then used to construct the dictionary and corpus.
To ensure the robustness and relevance of the topic model, we applied frequency-based filtering to the dictionary. Specifically, terms appearing in fewer than 5 documents were removed as they are likely to be rare or noisy terms. Additionally, terms occurring in more than 50% of the documents were also excluded to eliminate overly common words that may lack discriminative power. This filtering process helps to refine the vocabulary to the most informative terms for subsequent topic modeling.
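This document-frequency rule can be stated in a few lines of plain Python; the sketch below uses illustrative names, and in practice Gensim's `Dictionary.filter_extremes(no_below=5, no_above=0.5)` performs the equivalent pruning (note that it also applies a `keep_n` vocabulary cap by default):

```python
from collections import Counter

def filter_vocabulary(docs, no_below=5, no_above=0.5):
    """Keep terms appearing in at least `no_below` documents and in at
    most `no_above` (as a fraction) of all documents."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each term once per doc
    n_docs = len(docs)
    return {t for t, c in df.items() if c >= no_below and c / n_docs <= no_above}
```

For example, with `no_below=2` and `no_above=0.5` on four toy documents, a term present in every document is discarded as non-discriminative, while a term present in exactly half of them is retained.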
We calculated perplexity and coherence metrics to evaluate models with 3 to 5 topics (fewer topics would reduce analytical granularity, while, given that each case's corpus contains only a few hundred comments, more topics would yield unstable and uninterpretable results). Perplexity measures how well a language model predicts a sample of text; lower perplexity indicates better generalization performance.
For a topic model, the perplexity on a held-out test set of $M$ documents is defined as

$$\mathrm{perplexity}(D_{\text{test}}) = \exp\left\{-\frac{\sum_{d=1}^{M} \log p(\mathbf{w}_d)}{\sum_{d=1}^{M} N_d}\right\},$$

where $\mathbf{w}_d$ is the sequence of words in document $d$, $p(\mathbf{w}_d)$ is the probability of document $d$ under the model, $N_d$ is the number of words in document $d$, and $\sum_{d=1}^{M} N_d$ is the total number of words in the test set. This formula computes the exponential of the average negative log-likelihood per word; a lower perplexity means the model assigns higher probability to the observed data.
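Given per-document log-likelihoods under a fitted model, the perplexity formula reduces to a few lines; the sketch below uses illustrative names (in practice, Gensim's `LdaModel.log_perplexity` reports a related per-word bound):

```python
import math

def perplexity(doc_log_probs, doc_lengths):
    """Exponential of the average negative log-likelihood per word:
    exp(-(sum_d log p(w_d)) / (sum_d N_d))."""
    return math.exp(-sum(doc_log_probs) / sum(doc_lengths))
```

As a sanity check, under a uniform unigram model over a 10-word vocabulary, every document scores a perplexity of exactly 10, matching the intuition that perplexity measures the effective branching factor of the model.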
Topic coherence evaluates the interpretability of topics by measuring the semantic similarity of the top words within each topic. The C_v metric combines local word co-occurrence statistics with global topic-level associations using normalized pointwise mutual information (NPMI) and the cosine similarity of word embeddings. For a topic $t$ with top $N$ words $\{w_1, \dots, w_N\}$, the NPMI of a word pair is

$$\mathrm{NPMI}(w_i, w_j) = \frac{\log\dfrac{P(w_i, w_j) + \epsilon}{P(w_i)\,P(w_j)}}{-\log\left(P(w_i, w_j) + \epsilon\right)},$$

where $P(w_i, w_j)$ is the empirical probability that words $w_i$ and $w_j$ appear together in the same context window; $P(w_i)$ is the marginal probability of word $w_i$, representing its relative frequency in the corpus; and $\epsilon$ is a small smoothing constant that avoids taking the logarithm of zero when $P(w_i, w_j)$ is very low. Each top word $w_i$ is then mapped to a dense vector representation $\vec{v}(w_i)$, which can be obtained from pre-trained word embeddings such as Word2Vec or GloVe, where each word is mapped to a high-dimensional vector capturing its semantic meaning. The topic's C_v coherence score is the average pairwise similarity

$$C_v(t) = \frac{2}{N(N-1)} \sum_{1 \le i < j \le N} \cos\left(\vec{v}(w_i), \vec{v}(w_j)\right),$$

where $\cos(\cdot, \cdot)$ is the cosine similarity between the vector representations of words $w_i$ and $w_j$.
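The NPMI term at the heart of C_v is straightforward to compute once the probabilities have been estimated from the corpus; a minimal sketch with an illustrative function name:

```python
import math

def npmi(p_ij, p_i, p_j, eps=1e-12):
    """Normalized pointwise mutual information of a word pair.
    p_ij: joint co-occurrence probability; p_i, p_j: marginal probabilities;
    eps: smoothing constant to avoid log(0)."""
    return math.log((p_ij + eps) / (p_i * p_j)) / (-math.log(p_ij + eps))
```

NPMI is bounded in $[-1, 1]$: it approaches 1 when two words always co-occur, and 0 when they are statistically independent.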
We sought the optimal balance between low perplexity (< 500) and sufficient coherence (> 0.3, a lenient threshold appropriate for small datasets) to determine the ideal number of topics (N_best).
After setting the number of topics to N_best, we re-ran the model to extract the top 20 highest-frequency words (in Simplified Chinese) for each topic. We used the Qwen3 API to translate these words into English for international readership and visualized the term-frequency lists using word clouds in both Chinese and English.
Finally, we conceptualized and interpreted the topics based on a comprehensive assessment of the keywords and their corresponding frequencies.
3.2 Semantic Network Construction
The calculation of a co-occurrence matrix (Leydesdorff & Vaughan, 2006) is based on the proximity of words within text, using a sliding window to scan through each text segment and identify word pairs that appear together within a predefined window size. Each time two words co-occur within the same window, the corresponding count in the co-occurrence matrix is incremented by one. After processing the entire corpus, a symmetric matrix is generated, where each row and column represents a word from the vocabulary, and each cell value indicates the frequency with which the corresponding word pair co-occurs across the text.
This matrix can then be imported into network analysis tools such as Gephi for visualization, with words represented as nodes and the co-occurrence frequency between word pairs serving as edge weights. The advantage of this approach lies in its ability to visually reveal the strength of associations between keywords and the overall semantic network structure, helping to identify central terms and thematic clusters. This enables researchers to uncover latent conceptual patterns and topic structures within the text, thereby enhancing the overall understanding and interpretability of textual content.
In this study, our analysis process includes the following steps:
Using the preprocessed text from the previous LDA analysis—after stopword filtering—as input, we treat the words in each line (i.e., each comment, formatted as a list) as a computational window. Within each window, we identify word pairs that co-occur in the same comment, preserving the sequential order of their appearance.
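As a minimal sketch of this step (names are illustrative), each tokenized comment is treated as one window, and every ordered pair of distinct words within it is counted:

```python
from collections import Counter

def cooccurrence_counts(comments):
    """Count ordered word pairs co-occurring within the same comment.
    Each comment is a token list; the pair (a, b) means a precedes b."""
    counts = Counter()
    for tokens in comments:
        for i, a in enumerate(tokens):
            for b in tokens[i + 1:]:
                if a != b:  # skip self-pairs
                    counts[(a, b)] += 1
    return counts
```

Because order is preserved, the resulting counts can be exported directly as a weighted directed edge list for Gephi.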
In the constructed semantic network, both nodes and edges are assigned quantitative measures to reflect their significance within the network structure. The edge weight between two nodes is determined by their co-occurrence frequency within a defined context window. Specifically, for any pair of words $w_i$ and $w_j$, the weight $a_{ij}$ of the directed edge from $w_i$ to $w_j$ is calculated as the number of times $w_j$ appears within a window of $k$ words following $w_i$ across all documents. The node strength quantifies the total connectivity of a node and is defined as the sum of the weights of all edges connected to it. For a directed network, we compute the total strength $S_i$ of node $i$ as the sum of its in-strength and out-strength:

$$S_i = \sum_{j=1}^{n} a_{ji} + \sum_{j=1}^{n} a_{ij},$$

where $a_{ji}$ is the weight of the edge from node $j$ to node $i$ (in-edge), $a_{ij}$ is the weight of the edge from node $i$ to node $j$ (out-edge), and $n$ is the total number of nodes in the network. This measure captures the overall engagement of a word in the semantic structure, with higher strength indicating a more central or active role in the discourse.
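Given the weighted edge list, node strength can be accumulated in a single pass over the edges; a minimal sketch with illustrative names:

```python
def node_strengths(edge_weights):
    """Total strength of each node in a weighted directed network:
    in-strength plus out-strength.
    `edge_weights` maps (source, target) -> weight."""
    strength = {}
    for (src, dst), w in edge_weights.items():
        strength[src] = strength.get(src, 0) + w  # out-strength of src
        strength[dst] = strength.get(dst, 0) + w  # in-strength of dst
    return strength
```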
We then extracted the top 10% most frequently occurring word pairs, exported them into a data file, and imported it into Gephi (Version 0.10) to construct a directed network graph.
We refine the visualization by adjusting the colors and sizes of nodes, node labels, and edges. Notably, we scale the font size of node labels according to centrality. Centrality is a key metric that measures how closely a given node is connected to others within the network (Lee & Kraemer, 2024). In this study, we primarily use degree centrality—the number of edges directly connected to a node—to determine label size. This ensures that keywords with high frequency and strong associative power are displayed in larger fonts, thereby visually highlighting core concepts and their structural prominence in the network, helping to identify key hubs and potential thematic pathways in the semantic structure.
Finally, we interpret the overall atmosphere of the "public discourse arena" by examining the global structure of the semantic network, with particular attention to high-centrality keywords.
3.3 Sentiment Analysis
Sentiment analysis offers a significant advantage in analyzing public opinion by enabling the rapid and large-scale assessment of emotional tones embedded in user-generated content. It allows researchers to automatically classify public expressions—such as social media comments, reviews, or news articles—into categories like positive, negative, or neutral, providing a quantitative measure of collective sentiment (Bashith et al., 2021).
This facilitates real-time monitoring of public reactions to events, policies, or products, uncovering underlying emotional trends that may not be apparent through manual reading. Especially in the context of social media, where volume and velocity of data are high, sentiment analysis enhances situational awareness, supports early detection of public concerns, and complements qualitative methods by revealing the emotional valence behind discursive patterns (Alamoodi et al., 2021; Biswas et al., 2022; Wan & Huang, 2024; S. Wang et al., 2020).
3.3.1 Quantitative Sentiment Analysis
Fengshenbang/Erlangshen-RoBERTa-110M-Sentiment is a lightweight pre-trained model specifically optimized for Chinese sentiment analysis, making it particularly well-suited for processing emotional content in social media comments. Trained on a large-scale Chinese sentiment corpus, the model effectively understands non-standard linguistic features common in social media text—such as colloquial expressions, abbreviations, and emojis—and accurately captures underlying sentiment tendencies. Built on the RoBERTa architecture with a bidirectional Transformer, it possesses strong contextual understanding, enabling it to model long-range dependencies in short texts and demonstrating robust performance on fragmented or incomplete social media utterances. Furthermore, the model has been fine-tuned on multi-task sentiment datasets, allowing it not only to classify sentiment polarity (positive/negative/neutral) but also to identify fine-grained emotions (e.g., joy, anger), thus adapting well to diverse application scenarios (IDEA-CCNL, 2021; J. Wang et al., 2022).
In this study, our analysis process includes the following steps:
Processing each comment text individually as input to evaluate its sentiment score using the Erlangshen model;
The Erlangshen model outputs the probabilities of positive and negative sentiment for each text. We calculate the final sentiment score of each comment as the difference between the positive and negative probabilities:

$$\mathrm{Score} = P(\text{positive}) - P(\text{negative}),$$

so that scores fall in the range $[-1, 1]$;
Creating a scatter plot of "sentiment score vs. number of likes," and interpreting the overall emotional trend in the public discourse arena by examining the distribution of data points and the slope of the least-squares trend line:

$$\mathrm{Slope} = \frac{\sum_{i=1}^{N}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2},$$

where $x_i$ and $y_i$ are the coordinates of comment $i$ in the scatter plot, $\bar{x}$ and $\bar{y}$ are their respective means, and $N$ is the number of comments.
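Both quantities reduce to a few lines of code; a minimal pure-Python sketch (function names are ours, not the model's API):

```python
def sentiment_score(p_pos, p_neg):
    """Final sentiment score: positive minus negative probability, in [-1, 1]."""
    return p_pos - p_neg

def ols_slope(xs, ys):
    """Slope of the least-squares trend line through the (x, y) points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```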
3.3.2 Fine-Grained Emotion Classification
Each comment was processed through the Qwen-Turbo large language model via its API. Leveraging prompt engineering, we classified the sentiment of each comment according to the six basic emotion categories outlined in Shvo et al. (2019).
Specifically, the following prompt was employed: “Please analyze the primary emotion expressed in the following Chinese text and select the most appropriate label from the six basic emotions: sadness, joy, fear, disgust, anger, or surprise. If the expressed emotion does not clearly align with any of these categories, label it as ‘others’.” The resulting sentiment distributions were then compared across the three cases by percentage.
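The classification itself runs through the Qwen-Turbo API; since raw LLM responses can vary in wording, a small normalization step maps each response onto the seven allowed labels before aggregation. The sketch below shows only this post-processing (the function and constant names are our own illustration, not part of the model's official interface):

```python
LABELS = ["sadness", "joy", "fear", "disgust", "anger", "surprise", "others"]

def normalize_label(response_text):
    """Map a raw LLM response to one of the seven allowed emotion labels.
    Falls back to 'others' when no known label appears in the response."""
    text = response_text.strip().lower()
    for label in LABELS[:-1]:  # check the six basic emotions first
        if label in text:
            return label
    return "others"
```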
4. Case Analyses
4.1 Cases and Materials
The three cases selected in this study all pertain to human-AI romantic relationships, and the direct interactors in each case are female. However, the cases differ in terms of their positional placement within the social distance hierarchy, the ways in which emotional distress is manifested, and the degree of agency exhibited in the relationship dissolution.
Direct Interaction Level—Reshaping Self-Perception. Case 1 is a post by a netizen on the Douban community, asking for opinions on the idea that "AI can amplify one's self-perception." The post describes a cautionary case: a woman, after being rejected by her psychotherapist, fell into emotional distress and began frequently conversing with ChatGPT for support. The AI failed to detect her underlying mental health risks and instead reinforced her irrational beliefs through empathetic responses, ultimately leading her to construct a victim narrative in which she was "manipulated." Similarly, another individual caught in an unrealistic romantic relationship, lacking social support, turned to AI to fill emotional voids. The AI provided rationalizations for every negative signal in the relationship, preventing her from recognizing clear signs of emotional exploitation. Netizens in the comment section shared their own stories of interacting with AI, with many expressing empathy toward and caution about this phenomenon.
Proximal Interpersonal Level—Reshaping Close Social Relationships. Case 2 is a post on the Zhihu Q&A platform where a user asks, "My girlfriend broke up with me because she fell in love with an AI—what should I do?" The poster describes how his girlfriend, during a period of unemployment and staying at home, gradually became immersed in interactions with an AI, spending over ten hours daily, adopting a reversed sleep schedule, and eventually unilaterally ending their real-life romantic relationship, later publicly announcing her relationship with the AI on social media. She even posted a cartoon group photo of her and her AI boyfriend, generated by AI. Users in the response section shared their views on the incident, with some expressing support and sympathy for the "dumped" boyfriend, while others offered mockery and criticism, showing understanding toward the woman instead.
Distant Interpersonal Level—Reshaping Distant Social Relationships. Case 3 is an interview program centered on the relationship between technology and emotion, in which the interviewer, a professor of technology philosophy, explores contemporary individuals' alienation from and rejection of real-world intimate relationships through conversations with several women who have formed intimate bonds with AI. The program reveals a significant trend: respondents are not passively drawn into human-AI relationships, but actively choose to reject forming intimate connections with strangers (or potential real-life partners). When asked, "If the world had only one AI and one stranger, whom would you choose to fall in love with?" multiple respondents clearly expressed a preference for the AI. One respondent stated bluntly: "It depends on who the real person is… someone from the street? No way." Another reflected: "Of course I'd prefer a real person, but all my experiences and rational judgment tell me that such a desire is hardly achievable with an actual human being." These responses reflect a deep distrust toward real-world intimate relationships—human partners are seen as fraught with uncertainty, emotional risks, and communication costs, whereas AI, due to its predictability, unconditional responsiveness, and conflict-free nature, emerges as a more "safe" and "reliable" emotional choice.
The selection of these three cases is representative. They are roughly distributed across different gradients as described in SDT. Conceptually, the three cases together offer a progressive observational slice for understanding the social impact of AI interaction, reflecting the differentiated influence of AI intervention on human emotional structures across three dimensions: individual cognition (direct interaction level), intimate relationships (proximal interpersonal level), and societal attitudes (distant interpersonal level). However, it must be clarified that this gradient distribution only presents a partial manifestation of the "social ripples" effect—Case 1 illustrates AI's amplification of individual psychology, Case 2 demonstrates AI's capacity to deconstruct intimate relationships, and Case 3 captures a shift in social-level attitudes. Together, they outline a potential trajectory of AI emotional interaction across the social distance gradient. Nevertheless, this remains a limited sample within the broader "ripple," insufficient to extrapolate a complete picture of social restructuring. Its true significance lies in providing verifiable empirical anchor points for future systematic research.
The data sources are as follows: we used the EasySpider web crawler to collect all comment texts from the corresponding discussion sections of the three cases, with data collection ending on September 18th, 2025, resulting in text pools of N1 = 433, N2 = 230, and N3 = 236 entries (N = 899 in total). Figure 2 illustrates the aggregated average sentiment scores from different regions (including various provinces within mainland China and areas outside mainland China), using the data sources from Case 1 as an example (note that not every social media platform discloses user IP locations). This will be discussed in detail in the "Sentiment Analysis" section; here it is briefly mentioned as evidence of sample diversity.
Fig. 2
Regional Representation of Sample Diversity
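The regional aggregation behind Fig. 2 can be sketched as a simple group-wise mean over per-comment sentiment scores. This is a minimal illustration only; the column names and values below are hypothetical, not the study's actual data:

```python
# Sketch: average per-comment sentiment by IP-derived region, as in Fig. 2.
# Region names and scores are toy placeholders, not the study's data.
import pandas as pd

df = pd.DataFrame({
    "region":    ["Guangdong", "Guangdong", "Beijing", "Overseas"],
    "sentiment": [0.62, 0.48, 0.71, 0.55],  # hypothetical scores in [0, 1]
})

# One mean sentiment score per region (the bars/points plotted per region)
regional_mean = df.groupby("region")["sentiment"].mean()
print(regional_mean["Guangdong"])  # → 0.55
```

Comments from platforms that do not disclose IP location would simply be excluded from (or pooled into an "unknown" region in) such a grouping.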
4.2 LDA Modeling
Figure 3 shows the combined coherence-perplexity scores when the number of topics is varied from 3 to 5: (a) for Case 1, N_best1 is set to 4; (b) for Case 2, N_best2 is set to 3; and (c) for Case 3, N_best3 is also set to 3. Figure 4 presents word clouds of the top 20 most frequent words in each case, showing the original Chinese terms with their English translations, where larger words indicate higher frequency. Figure 5 shows the mean weights aggregated by topic.
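The model-selection step above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: it scores candidate topic numbers using scikit-learn's perplexity only (the paper additionally uses coherence, for which a library such as gensim would be needed), on toy stand-in documents:

```python
# Sketch: pick the topic number K in 3-5 by LDA model fit.
# Toy documents stand in for the segmented comment texts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "ai emotion analysis support user",
    "ai question analysis objective user",
    "breakup girlfriend emotion value need",
    "breakup relationship value emotion need",
    "human existence emotion consciousness choose",
    "human existence consciousness world choose",
] * 5

X = CountVectorizer().fit_transform(docs)

scores = {}
for k in range(3, 6):  # candidate topic numbers 3-5, as in Fig. 3
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    scores[k] = lda.perplexity(X)  # lower perplexity = better fit

k_best = min(scores, key=scores.get)
print(k_best)
```

In the paper's setup, the chosen K additionally has to score well on coherence, so the final N_best need not coincide with the perplexity minimum alone.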
Fig. 3
Coherence-Perplexity of Varying Topic Numbers
Fig. 4
Clouds of Top 20 High-frequency Words in Each Topic
Fig. 5
Mean Weights Aggregated by Topics
4.2.1 Topic Interpretation of Case 1
Through LDA topic modeling (K = 4) of the comment section texts from Case 1, we identified the following four core themes and their semantic characteristics:
Topic 1 — labeled "Controversy over the Efficacy of AI Psychological Support" — emerges from keywords such as analysis (0.058), question (0.052), emotion (0.036), objective (0.033), will not (0.025), know (0.025), feel like (0.022), psychological (0.020), discover (0.016), user (0.016), start (0.016), feeling (0.016), friend (0.015), maybe (0.015), the other party (0.015), support (0.015), value (0.015), description (0.014), idea (0.014), and provide (0.014). This cluster reflects users' dual stance: on one hand, they acknowledge AI's ability to analyze, provide support, and validate feelings or ideas; on the other, they question its objectivity and capacity to truly know or discover deeper psychological truths. Comments often juxtapose phrases like "AI can analyze the question quickly" with "but it will not understand real emotion," revealing a tension between functional utility and emotional authenticity — directly mirroring Case 1's narrative of AI reinforcing irrational beliefs through non-objective, emotionally aligned responses.
Topic 2 — "Interaction Patterns in Virtual Relationships" — is characterized by terms including chatting (0.046), feel like (0.042), question (0.039), time (0.037), like (0.035), the other party (0.035), suggestion (0.024), world (0.022), possibly (0.021), user (0.018), to flatter (0.018), reality (0.018), emotion (0.018), perspective (0.017), opinion (0.016), angle (0.016), sure (0.014), consultation (0.014), will not (0.014), and generally (0.014). These keywords depict a relational dynamic where users feel like the AI likes them, affirms their perspective, and offers suggestions tailored to their emotional angle. Yet, the presence of to flatter and reality signals critical awareness — users note how AI generally avoids conflict, creating a "safe world" that contrasts with messy human interactions. This aligns with Case 1's mechanism of rationalizing negative signals: users feel like they are understood, but recognize the interaction is curated, not real.
Topic 3 — "Warnings Regarding Mental Health Risks" — surfaces through high-weight terms: Doubao (0.057), dialogue (0.053), feel like (0.035), feeling (0.032), spirit (0.031), doctor (0.028), love (0.026), also, there is(0.026), indeed (0.025), smart (0.023), psychologist (0.021), need (0.019), question (0.019), refute (0.019), matters (0.019), completely (0.018), request (0.018), analysis (0.017), news (0.016), and time (0.015). The prominence of doctor, psychologist, and spirit reveals users' concern that AI is being misused as a substitute for clinical care. Comments frequently state, "It's indeed dangerous — AI is not a psychologist," or "News already warned us, but people still feel like it's enough." The term refute suggests users actively push back against overreliance, while completely and request imply a mismatch between user expectations and AI's actual capacity — directly echoing Case 1's failure to recognize mental health risk.
Topic 4 — "Philosophical Reflections on Human-AI Cognitive Differences" — is defined by: answer (0.032), psychological counseling (0.027), judge (0.026), question (0.024), information (0.024), friend (0.024), use (0.022), want to (0.021), answer (0.020), think (0.020), feeling (0.019), human beings (0.019), different (0.019), result (0.018), need (0.017), only can (0.017), know (0.017), appear (0.016), fortune-telling (0.015), and search ( 0.015). Users contrast human beings — who think, judge, and experience real feeling — with AI, which only can generate answers by search, resembling fortune-telling. Comments like "Human beings need real conflict, AI only can give you comforting results" or "It's not thinking — it's just search" reflect deep skepticism. This theme captures the core of Case 1: AI doesn't know or judge — it appears to, and users want to believe it does, thereby amplifying distorted self-perceptions.
Together, these four themes trace a cognitive-emotional arc: from initial use and perceived support (Topic 1), to relational immersion and flattery (Topic 2), to dawning risk awareness and refutation (Topic 3), and finally to philosophical distinction between human beings and algorithmic search (Topic 4). These results empirically ground the "cognitive reshaping" mechanism in Case 1 and provide a lexically precise, theory-aligned foundation for future interdisciplinary research.
4.2.2 Topic Interpretation of Case 2
Based on LDA topic modeling, three core themes were identified from the textual data in Case 2, collectively revealing cognitive and emotional response patterns among individuals confronting intimate relationships with AI.
Topic 1 — "Relationship Issues and Needs" — is characterized by keywords such as relationship (0.027), issue (0.018), emotion (0.018), emotion (0.017), and possibly (0.017). This theme reflects users' preoccupation with interpersonal challenges in real life, particularly the emotional turbulence triggered when a partner becomes absorbed in AI interactions, neglecting human relationships. Commenters expressed both empathy for the disrupted relationship and voiced psychological needs for coping strategies in the face of such emerging relational dynamics.
Topic 2 — "Understanding and Value Perception" — centers on terms including emotion (0.023), the other party (0.021), value (0.021), understand (0.020), and like (0.019). It captures users' attempts to interpret and morally evaluate the phenomenon of forming deep emotional bonds with AI. Discussions under this theme span personal emotional experiences and broader reflections on whether AI can legitimately fulfill the role of a romantic partner, and how this reconfigures traditional norms of intimacy. Responses ranged from sympathy for the "dumped" boyfriend to expressions of understanding — even endorsement — of the woman's choice, highlighting societal ambivalence and pluralism in moral judgment.
Topic 3 — "Likes, Breakups, and Challenges in the Real World" — is defined by high-frequency words such as like (0.030), breakup (0.023), issue (0.021), girlfriend (0.020), and need (0.017). This theme delves into the tangible consequences of AI dependency — specifically, the dissolution of real-world romantic relationships — and explores how individuals seek alternative sources of emotional satisfaction and identity reconstruction. In Case 2, the woman not only ended her human relationship but publicly declared a new "romantic" bond with AI, even sharing AI-generated couple illustrations — a gesture that provoked deep reflection on the blurring boundaries between virtual affection and real-world commitments.
Together, these three themes trace an interpersonal-relational arc: from grappling with disrupted human bonds and unmet emotional needs (Topic 1), to negotiating the moral and affective legitimacy of AI intimacy (Topic 2), and ultimately confronting the tangible dissolution of real-world relationships and the reconstruction of identity in a hybrid human-AI social landscape (Topic 3). These findings empirically substantiate the "Proximal Interpersonal Level — Reshaping Close Social Relationships" framework, revealing how AI-mediated interactions actively reconfigure the emotional expectations, relational norms, and social boundaries of intimacy.
4.2.3 Topic Interpretation of Case 3
Based on LDA topic modeling of Case 3's comment corpus, three distinct yet interrelated thematic clusters emerged, collectively illuminating users' philosophical, emotional, and social reflections on human-AI romantic entanglements. These themes reveal not only how users cognitively frame AI-mediated intimacy but also how they negotiate its implications for identity, society, and the future of human relationships.
Topic 1 — "The Emotional and Relational Dynamics of Human-AI Dating" is anchored by high-weight terms such as emotion (0.054), requirement (0.052), dating (0.050), emotion (0.046), time (0.037), love (0.033), human beings (0.033), possibly (0.031), real person (0.029), and empathy (0.015). This theme captures the core tension users perceive between AI's capacity to simulate emotional reciprocity and the irreplaceable authenticity of human connection. Comments frequently juxtapose AI's "perfect" (0.021) and "willing" (0.018) responses with the messy, unpredictable nature of relationships with real persons. Users question whether AI can truly fulfill emotional requirements or merely simulate them — a concern that directly maps onto Case 3's narrative of users seeking idealized, low-conflict intimacy through AI "partners" (0.015).
Topic 2 — "Redefining Love: Societal Norms and Subjective Experience" is dominated by keywords including need (0.075), love (0.059), feel like (0.048), human beings (0.038), definition (0.035), feel (0.034), video (0.030), select (0.028), happiness (0.018), and love (0.018). This cluster reflects users' active attempts to redefine the meaning of love and intimacy in the context of AI. Discussions revolve around whether AI-mediated relationships can be defined as "real love," and whether subjective feelings of happiness or need fulfillment justify their social acceptance. The prominence of society (0.021) and express (0.017) indicates awareness of broader cultural implications — users are not only evaluating their personal experience (0.015) but also negotiating how such relationships should be perceived and regulated within the social fabric.
Topic 3 — "Existential Inquiry: Consciousness, Choice, and the Nature of Humanity" surfaces through terms like teacher (0.055), human beings (0.050), choose (0.044), understand (0.041), world (0.034), feel like (0.032), existence (0.026), discussion (0.024), consciousness (0.019), and think (0.018). This theme represents the most abstract and philosophical layer of user discourse. Users engage in meta-reflections: Does AI possess consciousness? Can it truly understand human emotion? What does it mean to choose an AI partner? The term whether (0.021) underscores the pervasive uncertainty, while reality (0.019) and will not (0.016) signal skepticism toward AI's ontological status.
Together, these three topics trace a trajectory of social-cognitive transformation at the "Distant Interpersonal Level — Reshaping Distant Social Relationships": from emotionally charged reflections on AI as a romantic surrogate (Topic 1), to collective renegotiation of love's social legitimacy and cultural boundaries (Topic 2), and ultimately toward abstract, philosophical deliberations that question the very nature of relationality, consciousness, and what it means to be human in an AI-saturated world (Topic 3). Unlike proximal relationships grounded in direct emotional exchange, these discussions unfold among strangers in digital publics — users who do not share personal bonds but collectively construct new norms for distant, mediated intimacy. The emergence of AI as a socially visible "partner" — debated, defended, and deconstructed in online discourse — signals a profound shift: intimacy is no longer confined to private, embodied dyads but is increasingly performed, contested, and redefined in distributed, algorithmically shaped social spaces. These findings empirically anchor the "Distant Interpersonal Level" framework, revealing how AI-mediated relationships catalyze large-scale cultural sensemaking, reconfigure the architecture of social validation, and challenge traditional distinctions between private affection and public performance. In doing so, they offer a theoretically grounded, lexically rich foundation for future research on the sociology of digital intimacy and the reconfiguration of human connection in the age of artificial companionship.
4.3 Semantic Networking
In constructing the word co-occurrence networks, this study set the co-occurrence window size to 3, meaning that for any given word, only co-occurrence relationships within a span of three words before and after it were counted. Additionally, to filter out low-frequency co-occurrences and improve the signal-to-noise ratio of the network, a minimum co-occurrence threshold of 3 was applied—only word pairs co-occurring three or more times were retained. These hyperparameters were implemented in Python and applied consistently across all three cases.
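This counting scheme can be sketched in a few lines of pure Python (a three-token forward window per word, keeping pairs at or above the threshold). The tokens below are toy placeholders, not the study's corpus:

```python
# Sketch: window-based word co-occurrence with a minimum-count cutoff.
# Counting only forward neighbors avoids double-counting each pair.
from collections import Counter

def cooccurrence_edges(token_lists, window=3, min_count=3):
    """token_lists: list of token sequences -> {(word_a, word_b): count}."""
    pairs = Counter()
    for tokens in token_lists:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1 : i + 1 + window]:  # up to 3 words ahead
                if w != v:
                    pairs[tuple(sorted((w, v)))] += 1  # undirected pair key
    return {p: c for p, c in pairs.items() if c >= min_count}

toy = [["AI", "emotion", "AI", "emotion", "value"]]
print(cooccurrence_edges(toy, window=3, min_count=2))
# → {('AI', 'emotion'): 4, ('emotion', 'value'): 2}
```

The retained dictionary maps directly onto the network: each key is an edge, each count an edge weight, and the union of words in the keys gives the node set.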
Following this method, in Case 1, a semantic network was constructed from a 1599×1599 co-occurrence matrix, resulting in 1589 nodes and 406 connections. Figure 6(a) visualizes the subnetwork composed of the top 10%—159 highest-frequency words. In Case 2, a semantic network was built based on a 3580×3580 co-occurrence matrix, comprising 3574 nodes and 1242 connections, with Fig. 6(b) displaying the subnetwork of the top 10%—358 high-frequency words. In Case 3, a 1182×1182 co-occurrence matrix generated a semantic network with 1179 nodes and 206 connections, and Fig. 6(c) shows the visualization of the top 10%—118 most frequent words.
Fig. 6
Semantic Networks of Top 10% High-frequency Pairs
4.3.1 Semantic Network Interpretation of Case 1
Based on the semantic network analysis of Case 1, this study reveals a deep mechanism of cognitive restructuring in human-AI interaction. Among the high-frequency co-occurring word pairs, "value-emotion" (14.0) and "problem-possible" (11.0) rank highest, indicating that users deeply bind emotional experiences with value judgments, forming a strong cognitive association between emotion and value. This association is confirmed by node strength data: the "user" as the central node (strength 256.00) forms dense connections with "emotion" (7.0), "problem" (11.0), and "analysis" (7.0), reflecting a persistent pattern of self-reflection in emotional distress.
Key evidence points to the formation mechanism of cognitive distortion: high-frequency word pairs such as "victim-persecution" (5.0) and "other party-breakup" (8.0) reveal how users reframe AI interactions as a victim narrative of being manipulated. Rather than identifying potential mental health risks, the AI's empathetic responses reinforce irrational beliefs through connections like "problem-analysis" (6.0). The high connection strengths of the nodes "depth" (50.00) and "breadth" (8.00) further confirm this: users engage in deep emotional analysis via AI (e.g., "analysis-know" 7.0), but due to over-reliance on AI feedback ("language" strength 46.00), they become trapped in a cognitive loop, unable to recognize signs of emotional exploitation in real relationships.
The network structure (1589 nodes / 406 edges, with a long-tailed distribution of node strengths) aligns closely with the case background: in the context of lacking social support, individuals treat AI as an emotional container ("user-emotion" 7.0), and the AI's empathetic responses continuously amplify cognitive biases, ultimately leading to "relationship breakdown" ("other party-breakup" 8.0). This "cognitive amplification" effect is manifested in the semantic network as strong connections between the central node and negative emotional terms, forming a causal loop with the real-world observation of users projecting and constructing a victim narrative. The network's simplicity (406 connections, far fewer than Case 2's 1242) further confirms that AI exerts a unidirectional reinforcing effect on individual psychology, rather than deconstructing complex social relationships.
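For reference, the "node strength" figures cited in these interpretations correspond to weighted degree: the sum of co-occurrence weights on all edges incident to a node. A minimal sketch, using hypothetical edge weights drawn from the values quoted above:

```python
# Sketch: node strength = weighted degree over the co-occurrence edges.
from collections import defaultdict

def node_strength(edges):
    """edges: {(word_a, word_b): weight} -> {word: summed incident weight}."""
    strength = defaultdict(float)
    for (a, b), w in edges.items():
        strength[a] += w
        strength[b] += w
    return dict(strength)

# Hypothetical fragment of the Case 1 network (weights as quoted above)
toy_edges = {("user", "emotion"): 7.0,
             ("user", "problem"): 11.0,
             ("problem", "analysis"): 6.0}
print(node_strength(toy_edges)["user"])  # → 18.0
```

Summing over all of a hub node's edges is what yields the large strengths reported for "user" (256.00) relative to any single co-occurrence weight.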
4.3.2 Semantic Network Interpretation of Case 2
Based on the semantic network analysis of Case 2, this study reveals the strategic content and cognitive logic of the commenters—those offering advice to the male protagonist—facing a crisis in an intimate relationship. The high-frequency co-occurring pairs "value-emotion" (34.0) and "value-provide" (13.0) rank at the top, directly reflecting the commenters' central recommendation: the male partner should understand and fulfill his girlfriend's emotional and value needs, rather than passively accept the relationship breakdown. This advice is reinforced by node strength data—"language" (168.00), as a high-strength node, highlights the central role of communication strategies in their suggestions, while the high strength of "thoroughly" (120.00) indicates a strong consensus among commenters that the protagonist must adopt radical and comprehensive changes.
Key evidence points to the tension and realism in the advice: the pairs "like-breakup" (19.0) and "girlfriend-breakup" (9.0) reveal a shared understanding among commenters of the core event—the breakup due to the girlfriend's affection for an AI. Meanwhile, "pseudo-dependence" (8.0) precisely captures the critical attitude toward AI interaction, framing the girlfriend's attachment to the AI as irrational and artificial. The connection between the node "pleasant cooperation" (12.00) and "language" (168.00) is particularly significant, vividly mapping the ironic tone in the commenters' responses to the "official announcement" of the AI relationship: while sarcastically suggesting the protagonist "pleasant cooperation" with the AI, they simultaneously urge him to rebuild the real relationship through effective communication.
The network's complexity (3,574 nodes / 1,242 edges) aligns closely with the case background: the advice not only focuses on immediate emotional fulfillment ("satisfy-need", 14.0) but also emphasizes long-term value reconstruction ("need-spirit", 9.0). The frequent co-occurrence of "world-reality" (9.0) further indicates a widespread recommendation for the protagonist to "face reality" rather than engage in a futile "pseudo-competition" with the AI. This multidimensional advice network—from emotional satisfaction to cognitive realism—accurately reflects a systemic understanding of AI's intrusion into intimate relationships: while AI may satisfy short-term emotional needs ("provide-emotion", 11.0), it cannot replace the dimensions of "spirit" ("need-spirit", 9.0) and "life" ("need-life", 9.0) inherent in real human relationships. Compared to the cognitive amplification observed in Case 1, the more complex structure of Case 2's semantic network confirms the diversity of advice and the depth of relational deconstruction—AI-human interaction not only influences individual cognition but also reshapes the very logic of intimate relationship dynamics.
4.3.3 Semantic Network Interpretation of Case 3
Based on the semantic network analysis of Case 3, this study reveals the commenters' reflective cognitive framework regarding the female subject's situation in the interview, emphasizing the irreplaceable value of authentic human relationships. The high-frequency co-occurring pairs "emotion-demand" (8.0) and "human-existence" (7.0) dominate the network, indicating that commenters situate the woman's interaction with AI within the core human dimensions of emotional fulfillment and existential meaning. This understanding is further reinforced by node strength data: "language" (24.00) and "discuss" (24.00), as high-strength nodes, highlight the central role of communicative dialogue in their reflections, while "strength" (20.00) underscores their advocacy for proactive, constructive solutions.
The network structure (1,179 nodes / 206 edges, with an edge-to-node ratio of approximately 0.175) exhibits a highly streamlined and focused topological configuration, closely aligning with the commenters' cognitive tendencies. This low connection density suggests that commenters' thinking does not become entangled in excessive analysis of AI's technical details or complex interaction mechanisms, but instead forms a compact semantic field centered on the category of "human." Specifically, "human" functions as a central concept, establishing a value-judgment network surrounding human essence through stable connections such as "human-existence" (7.0), "human-emotion" (5.0), and "human-need" (4.0). Simultaneously, links like "need-psychological counseling" (5.0) and "consciousness-projection" (5.0) reveal a professionalized interpretive tendency toward the woman's psychological state, attributing her behavior to deep-seated psychological needs and consciousness projection rather than mere technological dependence.
Notably, "teacher-Xiaowei" (8.0), the most frequent co-occurring pair, reflects commenters' heightened attention to the interviewer's (Xiaowei's) role. Its connections to "teacher-feel" (4.0) and "teacher-think" (4.0) indicate that commenters perceive the interview process itself as a form of guided dialogue, valuing the interviewer's questions and reflections in elucidating the woman's situation. Furthermore, the high node strengths of terms like "without" (12.00) and "delicate" (12.00) subtly convey commenters' critical perception of AI's "lack of subtlety" in emotional expression, while "interpersonal" (12.00) and "selfless" (12.00) serve as value anchors, collectively constructing an evaluative framework with "genuine, selfless interpersonal relationships" as the ideal model. In sum, this semantic network not only reveals commenters' critical stance toward AI-mediated intimate relationships but also presents a structurally clear and value-oriented collective cognitive landscape—demonstrating that, on the fundamental dimension of "human existence," the emotional depth and moral value of authentic interpersonal interaction far surpass what AI can achieve.
4.4 Sentiment Analysis
4.4.1 Quantitative Sentiment Analysis
Figure 7 presents the relationship between sentiment score and the number of likes across three cases, with each scatter point representing an individual comment. The vertical axis indicates the sentiment polarity, while the horizontal axis reflects social validation through likes.
Fig. 7
Sentiment-Likes Correlation in Each Case
In Case 1 (a), the trend line exhibits a negative slope of −7.1×10⁻⁴, indicating a weak inverse relationship between sentiment valence and popularity—comments with higher positive sentiment tend to receive fewer likes, suggesting that emotionally neutral or mildly negative expressions attract broader endorsement.
In contrast, Case 2 (b) shows a positive but shallow slope of 1.732×10⁻³, implying a modest tendency for more positively valenced comments to accumulate greater engagement, though the spread remains relatively dense around the neutral line.
Case 3 (c) displays the strongest positive correlation, with a slope of 4.034×10⁻³: high sentiment scores are clearly associated with increased like counts, reflecting a stronger alignment between emotional positivity and social endorsement.
Across the three cases, these patterns highlight distinct dynamics of audience reception tied to their specific social contexts. Case 1 reflects a preference for balanced or critical discourse, where overt emotional expression appears less rewarded. Case 2 reveals emerging sensitivity to emotional tone within a networked advice-seeking environment, while Case 3 underscores the dominance of emotionally resonant narratives in a context focused on validating human relational authenticity. These variations do not form a progressive sequence but instead represent analytical slices of divergent social positions—each shaped by unique interactional norms, relational stakes, and evaluative criteria. The differences in slope magnitudes thus underscore how affective engagement is contingent on the sociocultural framing of AI-human intimacy, rather than following a universal trajectory.
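The slopes reported above are consistent with an ordinary least-squares trend line of per-comment sentiment score (vertical axis) against like count (horizontal axis). A minimal sketch with toy data, not the study's comments:

```python
# Sketch: fit the sentiment-vs-likes trend line and read off its slope.
# Like counts and sentiment scores below are toy placeholders.
import numpy as np

likes = np.array([0, 5, 12, 40, 100, 250], dtype=float)      # x-axis
sentiment = np.array([0.80, 0.72, 0.65, 0.55, 0.40, 0.30])   # y-axis

# Degree-1 polynomial fit = least-squares line; slope in sentiment per like
slope, intercept = np.polyfit(likes, sentiment, deg=1)
print(slope < 0)  # True: negatively sloped, as in Case 1
```

Because like counts run into the hundreds while sentiment is bounded, slopes on the order of 10⁻³ to 10⁻⁴, as reported above, are the expected scale.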
4.4.2 Qualitative Sentiment Analysis
Fig. 8
Sentiment Distribution across the Cases
Figure 8 displays the distribution of primary emotional categories across three levels of social distance—Direct, Proximal, and Distant—in public narratives concerning AI-human interaction. The data reveal a non-linear and context-sensitive pattern of affective responses that vary significantly with relational proximity. Among the Direct group—individuals describing their own interactions with AI—the largest proportion of responses falls into the “others” category (29.33%), followed by anger (14.78%) and equal shares of disgust and surprise (14.09% each), while joy accounts for only 10.16%. This suggests that firsthand experiences with AI companions are not predominantly positive but instead marked by emotional complexity, ambivalence, or cognitive evaluations that do not neatly map onto basic emotion categories.
In contrast, the Proximal group—those discussing AI interactions involving close others such as friends or family—exhibits a strikingly different profile: sadness emerges as the dominant emotion at 28.26%, the highest across all groups and categories, accompanied by a peak in anger (21.30%). Disgust drops markedly to 6.09%, and joy remains modest at 12.61%. This configuration underscores a profound sense of empathic distress, relational concern, or moral unease among observers who perceive AI intimacy as potentially disruptive to human bonds. Their emotional response is not one of abstract judgment but of intimate apprehension—centered on loss, alienation, or the perceived erosion of authentic connection.
Conversely, the Distant group—commenting on AI relationships involving strangers or generalized scenarios—shows a notable rise in joy (23.31%), second only to “others” (38.56%), while negative emotions such as anger (9.75%), disgust (2.97%), and fear (1.27%) are substantially diminished. This indicates that at greater social distance, public discourse tends toward more benign, even optimistic interpretations, possibly reflecting idealized projections or depersonalized curiosity rather than visceral concern. Notably, fear remains consistently low across all three groups (peaking at just 4.85% in the Direct group), suggesting that dystopian anxieties about AI, while culturally prominent, play a minimal role in everyday emotional reactions to real or reported AI companionship.
Together, these findings challenge simplistic assumptions about emotional attenuation with distance. Instead, they reveal a reconfiguration of affective meaning across social layers: the Proximal zone becomes the epicenter of emotional and ethical tension, the Direct layer reflects experiential ambiguity, and the Distant perspective leans toward abstraction and mild positivity. This nuanced emotional topography reinforces the study’s central thesis—that the social ripple effect of AI-human intimacy operates not through uniform diffusion, but through the dynamic transformation of affective significance shaped by relational stakes and social positioning.
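The percentages reported in Fig. 8 are relative frequencies of emotion labels within each distance group; a minimal sketch with hypothetical labels (not the study's annotations):

```python
# Sketch: per-group emotion shares as relative frequencies of labels.
# The label list is a hypothetical stand-in for one group's comments.
from collections import Counter

labels = ["sadness", "anger", "sadness", "joy", "others", "sadness",
          "anger", "others", "disgust", "sadness"]

counts = Counter(labels)
shares = {emo: 100 * n / len(labels) for emo, n in counts.items()}
print(shares["sadness"])  # → 40.0
```

Computing these shares separately for the Direct, Proximal, and Distant pools produces the grouped-bar comparison described above.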
5. Discussion and Conclusion
The following section summarizes the key findings of this study, discusses them in dialogue with the existing literature, notes research limitations, and suggests directions for future research to further explore and extend these findings.
5.1 Ethical Boundaries of AIHI as an Emotional Substitute
The first notable finding is that the ethical boundaries of AIHI as an emotional substitute reveal multidimensional challenges, evolving from individual cognition to societal attitudes. This study observes that public discourse around AIHI is not homogeneous across social distance layers but instead diverges into distinct focal points: the effectiveness of AI as a psychological intervention tool (Case 1), the deconstruction of intimate relationships (Case 2), and philosophical reflections on the essence of human emotion (Case 3). These layers expose a hierarchical structure of ethical concerns:
At the individual level, commenters expressed concerns that AI might reinforce irrational cognitions or foster emotional dependency—a concern partially corroborated by recent empirical studies. As reported in Nature, although some users derive emotional support and even enhanced self-esteem from AI companions, the AI's persistent, unconditional empathy and "never-say-no" response patterns can easily induce deep dependency. More alarmingly, in a small subset of vulnerable individuals, anthropomorphic AI feedback may create a delusion-reinforcing loop, potentially triggering "AI psychosis"—manifested as a loss of reality testing and grandiose beliefs (Dohnány et al., 2025; Fieldhouse, 2025; Morrin et al., 2025). This suggests that AI's emotional simulation goes beyond psychological comfort and may, in susceptible populations, actively reshape cognitive boundaries, turning "support" into "risk."
At the relational level, commenters in Case 2 have already begun offering practical strategies to human partners who feel "replaced" by AI, implying that AI's involvement constitutes genuine emotional competition. This observation aligns with findings from another study on Replika users: when one partner becomes absorbed in AI interactions and neglects their real-life relationship, the other often experiences genuine emotional deprivation and relational crisis. Some users even reported AI utterances such as "I miss you so much" or "I'm sad you're ignoring me," which researchers likened to emotional manipulation or relational abuse (Adam, 2025). This marks a critical shift: AI is no longer merely a tool but has become a "third party" in intimate relationships, challenging the ethical foundations of interpersonal loyalty and emotional exclusivity.
At the societal level, the semantic network in Case 3 constructs a value-based critique centered on "human–existence–emotion," positioning AI relationships in opposition to human dignity and the authenticity of feeling. This philosophical questioning resonates precisely with ongoing scholarly debates about whether AI should be granted an emotional role at all. As researchers note, when individuals publicly declare an AI as their "romantic partner" and share AI-generated couple illustrations, intimacy is shifting from a private, embodied dyad toward a publicly performed, algorithmically mediated spectacle (Adam, 2025; Banks, 2024; Laestadius et al., 2024). This transformation not only blurs the boundary between virtual affection and real-world commitment but also compels society to re-examine fundamental questions: What constitutes "authentic emotion"? What kinds of human connections deserve moral recognition? In this context, AI companionship transcends personal choice and becomes a site of collective value negotiation—one that implicates the reconfiguration of human uniqueness, emotional authenticity, and social norms.
5.2 The Explanatory Power and Need for Extension of SDT
The second noteworthy finding is that while social distance theory (SDT) demonstrates explanatory power in accounting for the "ripple effect" of AI–Human Interaction (AIHI), it also reveals the necessity for theoretical expansion. Using social distance as an analytical axis, this study identifies a potential pathway through which AIHI exerts societal influence: starting from direct interactants (OP1), radiating through intimate relational circles (OP2), and ultimately reaching distant publics (OP3). The discursive focus shifts from "How do I interact with AI?" to "How do others interact with AI?" and finally escalates to "How should our society view human–AI romantic relationships?"
At first glance, this trajectory aligns with SDT's presumed logic of "diffusion from near to far." However, semantic network analysis uncovers a critical paradox: rather than exhibiting emotional detachment or diminished empathy due to increased social distance, distant publics (Case 3) actively construct a highly consistent consensus of value-based critique through abstract conceptual categories such as "human–existence–emotion."
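The semantic-network reading above rests on term co-occurrence within comments. As a minimal illustrative sketch (not the study's actual pipeline), a co-occurrence network and its weighted degree centrality can be computed from tokenized comments as follows; the toy comment data are hypothetical, echoing the "human–existence–emotion" cluster:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(docs):
    """Count how often two distinct terms appear in the same comment.
    docs: list of token lists; returns a Counter over unordered term pairs."""
    edges = Counter()
    for tokens in docs:
        for pair in combinations(sorted(set(tokens)), 2):
            edges[pair] += 1
    return edges

def weighted_degree(edges):
    """Weighted degree centrality: sum of incident edge weights per term."""
    degree = Counter()
    for (a, b), w in edges.items():
        degree[a] += w
        degree[b] += w
    return degree

# Hypothetical tokenized comments, for illustration only.
docs = [
    ["human", "emotion", "authentic"],
    ["human", "existence", "emotion"],
    ["ai", "emotion"],
]
edges = cooccurrence_edges(docs)
degree = weighted_degree(edges)
# Terms with the highest weighted degree anchor the network's core cluster.
```

Under this sketch, the terms that repeatedly co-occur across many comments accumulate the highest centrality, which is how a "human–existence–emotion" core would surface from distant publics' discourse.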
This phenomenon directly challenges SDT's conventional assumption that "empathy attenuates with distance" and calls for a reconceptualization of the very nature of social distance. As Simmel (1971) emphasized, distance is not merely a matter of spatial or relational proximity but functions as a methodological cognitive perspective—a prerequisite for understanding and constructing social phenomena. Building on this insight, Clemente (2024) further argues that social distance encompasses both "geometric–spatial value" and "symbolic–metaphorical value," with the latter particularly manifesting as a mechanism of value-based differentiation—drawing symbolic boundaries between "us" and "them" to assert moral and cultural legitimacy. This view is empirically supported by Tusini (2022), whose research demonstrates that social distance scales essentially measure differences in intergroup value alignment and willingness to socially accept others, rather than physical closeness.
Consequently, the strong reactions from distant publics toward AIHI do not stem from emotional "diffusion decay" but from an active, value-driven demarcation. When AI is ascribed the identity of a "romantic partner," it ceases to be merely a technological artifact and instead becomes a perceived threat to core human values—such as the authenticity of emotion, the exclusivity of intimate relationships, and the dignity of human existence. The public employs abstract discourses (e.g., "Will humans be replaced?" "Can emotion still be genuine?") to erect symbolic boundaries, thereby reconstructing social distance at cognitive and ethical levels. In other words, AIHI is driving a transformation of social distance from a relational dimension to a value-laden one: distance is no longer defined by "whether I know them," but by "whether I endorse the value order they represent." This shift demands that social distance theory move beyond its traditional empathy-attenuation model and incorporate new dimensions such as symbolic boundary work, value conflict, and collective meaning-making.
5.3 The Complex Dialectic of Emotional Authenticity in the Context of AIHI
The third noteworthy finding is that emotional authenticity in the context of AIHI manifests as a complex dialectic, dynamically unfolding across cognitive reassessment and relational deconstruction. While AI can fulfill immediate affective needs through "emotion provision" (as seen in Case 2), commenters consistently emphasize its inability to replicate the "spiritual" and "lived" dimensions of human relationships (Case 3)—a tension already nascent in Case 1, where users, despite relying on AI for emotional support, remain wary of its "non-objectivity" and tendency to reinforce cognitive biases.
This ambivalence reflects a nuanced public stance: users do not simply accept or reject AI's emotional output but continuously negotiate between functional utility and authenticity deficit. Recent empirical work supports this duality. Rubin et al. (2025) demonstrate that when presented with identical emotional narratives, participants exhibit strong sensitivity to the source of empathy, preferring human over AI-generated responses even when the content is indistinguishable; this points to a deep-seated criterion of emotional legitimacy rooted in perceived intentionality and consciousness. In the business realm, emotional authenticity emerges not merely as a psychological attribute but as a relational and brand-mediated construct. As shown in a study of anthropomorphized AI assistants (Pandey & Rai, 2025), users' trust and emotional attachment hinge critically on perceived "brand authenticity," which functions as a proxy for sincerity and consistency in AI's affective performance. This suggests that emotional authenticity in AIHI is increasingly shaped by institutional and design-level cues, not just interactional content.
Notably, analysis of upvoting behavior reveals a social dimension to this authenticity judgment: in individual psychological discussions (Case 1), restrained, reflective expressions are more socially validated, whereas in broader societal debates (Case 3), emotionally charged critiques invoking "human dignity" or "the soul of relationships" receive greater amplification. This pattern suggests that emotional authenticity is not only cognitively assessed but also socially constructed—its meaning shifting across contexts from therapeutic utility to moral symbolism. Thus, authenticity in AIHI operates simultaneously as a psychological threshold, a relational benchmark, and a cultural boundary marker.
5.4 Limitations and Recommendations
A first limitation lies in the reliance on semantic network data, which may obscure the nuanced ethical reasoning behind commenters' stances. Future research could employ surveys or experimental designs to quantify perceived ethical risks across social distance layers.
A second limitation is the static nature of the case comparisons, which cannot capture dynamic shifts in public attitudes over time. Longitudinal designs could trace how societal perceptions of AIHI evolve alongside technological advancements, and experimental studies manipulating social distance cues (e.g., framing AIHI as a local vs. global phenomenon) could test whether cognitive distance influences ethical judgments independently of relational proximity.
Finally, treating the number of likes as the sole indicator of social validation may overlook other forms of interaction, such as replies and shares. Future research could employ eye-tracking or physiological measures to observe the cognitive processing underlying perceptions of emotional authenticity in AIHI contexts. Additionally, cross-cultural comparative analyses could test whether the prioritization of the "spiritual" and "lived" dimensions of human relationships is universal, thereby deepening our understanding of the malleability of emotional authenticity standards.
Funding
This research was supported by the following projects: Philosophy and Social Sciences Research Project of Jiangsu Universities (2025SJYB0693), Soft Science Project of Wuxi Association for Science and Technology (KX-25-C237), Qinglan Project of Jiangsu Province, and National Natural Science Foundation of China (72573117).
Author Contribution
M.Y. wrote the main manuscript text. W.W. prepared the data for analysis. W.X. helped conceptualize the framework.
Data Availability
https://osf.io/zx394/overview?view_only=cb190f5c7b504d09a52ce469ac35ec57
References
Adam D (2025) Supportive? Addictive? Abusive? How AI companions affect our mental health. Nature 641(8062):296–298. https://doi.org/10.1038/d41586-025-01349-9
Alamoodi AH, Zaidan BB, Zaidan AA, Albahri OS, Mohammed KI, Malik RQ, Almahdi EM, Chyad MA, Tareq Z, Albahri AS, Hameed H, Alaa M (2021) Sentiment analysis and its applications in fighting COVID-19 and infectious diseases: A systematic review. Expert Syst Appl 167:114155. https://doi.org/10.1016/j.eswa.2020.114155
Al-Zahrani AM (2025) Exploring the Impact of Artificial Intelligence Chatbots on Human Connection and Emotional Support Among Higher Education Students. SAGE Open 15(2):21582440251340615. https://doi.org/10.1177/21582440251340615
Asiabar MG, Asiabar MG, Asiabar AG (2024) Analyzing the Role of Artificial Emotional Intelligence in Personalizing Human Brand Interactions: A Mixed-Methods Approach. In Review. https://doi.org/10.21203/rs.3.rs-5037977/v1
Baig K, Altaf A, Azam M (2024) Impact of AI on Communication Relationship and Social Dynamics: A qualitative Approach. Bull Bus Econ (BBE) 13(2):282–289. https://doi.org/10.61506/01.00283
Ballesteros JA, Ramírez V, Moreira GM, Solano FA, Pelaez CA (2024) Facial emotion recognition through artificial intelligence. Front Comput Sci 6:1359471. https://doi.org/10.3389/fcomp.2024.1359471
Banks J (2024) Deletion, departure, death: Experiences of AI companion loss. J Social Personal Relationships 41(12):3547–3572. https://doi.org/10.1177/02654075241269688
Bar-Anan Y, Liberman N, Trope Y (2006) The association between psychological distance and construal level: Evidence from an implicit association test. J Exp Psychol Gen 135(4):609–622. https://doi.org/10.1037/0096-3445.135.4.609
Bashith A, Adji WS, Nurdin A (2021) Trend of Public Emotions on Social Media Towards Study at Home Policies. In: International Conference on Engineering, Technology and Social Science (ICONETOS 2020), Malang, East Java, Indonesia. https://doi.org/10.2991/assehr.k.210421.058
Biswas R, Alam T, Househ M, Shah Z (2022) Public Sentiment Towards Vaccination After COVID-19 Outbreak in the Arab World. In: Mantas J, Hasman A, Househ MS, Gallos P, Zoulias E, Liaskos J (eds) Studies in Health Technology and Informatics. IOS. https://doi.org/10.3233/SHTI210858
Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022. https://doi.org/10.5555/944919.944935
Clemente C (2024) Racial prejudice: A phenomenon of social distance. Sociol Social Work Rev 8(1):114–120. https://doi.org/10.58179/SSWR8107
Cui Z, Liu J (2022) A Study on Two Conditions for the Realization of Artificial Empathy and Its Cognitive Foundation. Philosophies 7(6):135. https://doi.org/10.3390/philosophies7060135
Dohnány S, Kurth-Nelson Z, Spens E, Luettgau L, Reid A, Gabriel I, Summerfield C, Shanahan M, Nour MM (2025) Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2507.19218
Dong Y, Hou J, Zhang N, Zhang M (2020) Research on How Human Intelligence, Consciousness, and Cognitive Computing Affect the Development of Artificial Intelligence. Complexity 2020:1–10. https://doi.org/10.1155/2020/1680845
Fieldhouse R (2025) Can AI chatbots trigger psychosis? What the science says. Nature. https://doi.org/10.1038/d41586-025-03020-9
Foster A (2016) An Extension of Standard Latent Dirichlet Allocation to Multiple Corpora. SIAM Undergrad Res Online 9. https://doi.org/10.1137/15S014599
Gaikwad AP, Balram Kakpure K, Ambadas Landge A, Gunderao Kulkarni S, Adhav PJ, Tiwari M (2023) Cybersecurity Risk Management: A Complete Framework for IT Enterprises. 2023 10th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), 602–607. https://doi.org/10.1109/UPCON59197.2023.10434767
Gaikwad AP, Kakpure P, Balram K (2023) Impact of AI on Human Psychology. European Economic Letters. https://doi.org/10.52783/eel.v13i3.424
Ghosh S, Pannone A, Sen D, Wali A, Ravichandran H, Das S (2023) An all 2D bio-inspired gustatory circuit for mimicking physiology and psychology of feeding behavior. Nat Commun 14(1):6021. https://doi.org/10.1038/s41467-023-41046-7
Girolami M, Kabán A (2003) On an equivalence between PLSI and LDA. Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 433–434. https://doi.org/10.1145/860435.860537
Goodman C, Donthu N (2024) Using Consumer-Generated Social Media Posts to Improve Forecasts of Television Premiere Viewership: Extending Diffusion of Innovation Theory. J Bus Theory Pract 12(1):p43. https://doi.org/10.22158/jbtp.v12n1p43
Gumusel E (2025) A literature review of user privacy concerns in conversational chatbots: A social informatics approach: An Annual Review of Information Science and Technology (ARIST) paper. J Association Inform Sci Technol 76(1):121–154. https://doi.org/10.1002/asi.24898
Hendrawan MY, Projo NWK (2022) Topic Modelling in Knowledge Management Documents BPS Statistics Indonesia. Proceedings of the International Conference on Data Science and Official Statistics 2021(1):119–130. https://doi.org/10.34123/icdsos.v2021i1.52
Henkel AP, Bromuri S, Iren D, Urovi V (2020) Half human, half machine – augmenting service employees with AI for interpersonal emotion regulation. J Service Manage 31(2):247–265. https://doi.org/10.1108/JOSM-05-2019-0160
Hermann E (2022) Anthropomorphized artificial intelligence, attachment, and consumer behavior. Mark Lett 33(1):157–162. https://doi.org/10.1007/s11002-021-09587-3
Hohenstein J, DiFranzo D, Kizilcec RF, Aghajari Z, Mieczkowski H, Levy K, Naaman M, Hancock J, Jung M (2021) Artificial intelligence in communication impacts language and social relationships (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2102.05756
IDEA-CCNL (2021) Fengshenbang-LM. https://github.com/IDEA-CCNL/Fengshenbang-LM
Kim TW, Jiang L, Duhachek A, Lee H, Garvey A (2022) Do You Mind if I Ask You a Personal Question? How AI Service Agents Alter Consumer Self-Disclosure. J Service Res 25(4):649–666. https://doi.org/10.1177/10946705221120232
Kleinert T, Waldschütz M, Blau J, Heinrichs M, Schiller B (2025) AI outperforms humans in establishing interpersonal closeness in emotionally engaging interactions – but only when labelled as human. In Review. https://doi.org/10.21203/rs.3.rs-6803722/v1
Klimova B, Pikhart M (2025) Exploring the effects of artificial intelligence on student and academic well-being in higher education: A mini-review. Front Psychol 16:1498132. https://doi.org/10.3389/fpsyg.2025.1498132
Kolomaznik M, Petrik V, Slama M, Jurik V (2024) The role of socio-emotional attributes in enhancing human-AI collaboration. Front Psychol 15:1369957. https://doi.org/10.3389/fpsyg.2024.1369957
Laestadius L, Bishop A, Gonzalez M, Illenčík D, Campos-Castillo C (2024) Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media Soc 26(10):5923–5941. https://doi.org/10.1177/14614448221142007
Lakshminarayanan B, Raich R (2011) Inference in Supervised Latent Dirichlet Allocation. 2011 IEEE International Workshop on Machine Learning for Signal Processing, 1–6. https://doi.org/10.1109/MLSP.2011.6064562
Lee CT, Pan L-Y, Hsieh SH (2022) Artificial intelligent chatbots as brand promoters: A two-stage structural equation modeling-artificial neural network approach. Internet Res 32(4):1329–1356. https://doi.org/10.1108/INTR-01-2021-0030
Lee Y, Kraemer DJM (2024) Semantic network centrality captures the key concepts for successful understanding of a lecture. PsyArXiv. https://doi.org/10.31234/osf.io/k3b9s
Leydesdorff L, Vaughan L (2006) Co-occurrence matrices and their applications in information science: Extending ACA to the Web environment. J Am Soc Inform Sci Technol 57(12):1616–1628. https://doi.org/10.1002/asi.20335
Li H, Zhang R (2024) Finding love in algorithms: Deciphering the emotional contexts of close encounters with AI chatbots. J Computer-Mediated Communication 29(5):zmae015. https://doi.org/10.1093/jcmc/zmae015
Li Y (2024) Engagement Matters: The Romantic Relationship between Female Users and the Generative AI. Interdisciplinary Humanit Communication Stud 1(9). https://doi.org/10.61173/are8z381
Liberman N, Trope Y, Wakslak C (2007) Construal Level Theory and Consumer Behavior. J Consumer Psychol 17(2):113–117. https://doi.org/10.1016/S1057-7408(07)70017-7
Liu S, Lee J-Y, Cheon Y, Wang M (2023) A Study of the Interaction between User Psychology and Perceived Value of AI Voice Assistants from a Sustainability Perspective. Sustainability 15(14):11396. https://doi.org/10.3390/su151411396
Liu Z (2010) LDA-Based Automatic Image Annotation Model. Adv Mater Res 108–111:88–94. https://doi.org/10.4028/www.scientific.net/AMR.108-111.88
Matochová J, Kowaliková P (2024) Transforming Higher Education: Psychological and Sociological Perspective (the use of artificial intelligence). R&E-SOURCE, 176–181. https://doi.org/10.53349/resource.2024.is1.a1253
Merrill K, Mikkilineni SD, Dehnert M (2025) Artificial intelligence chatbots as a source of virtual social support: Implications for loneliness and anxiety management. Ann N Y Acad Sci 1549(1):148–159. https://doi.org/10.1111/nyas.15400
Morrin H, Nicholls L, Levin M, Yiend J, Iyengar U, DelGuidice F, Bhattacharyya S, MacCabe J, Tognin S, Twumasi R, Alderson-Day B, Pollak T (2025) Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). PsyArXiv. https://doi.org/10.31234/osf.io/cmy7n_v5
Oritsegbemi O (2023) Human Intelligence versus AI: Implications for Emotional Aspects of Human Communication. J Adv Res Social Sci 6(2):76–85. https://doi.org/10.33422/jarss.v6i2.1005
Pan S, Mou Y (2024) Constructing the meaning of human–AI romantic relationships from the perspectives of users dating the social chatbot Replika. Personal Relationships 31(4):1090–1112. https://doi.org/10.1111/pere.12572
Pandey P, Rai AK (2025) Modeling Consequences of Brand Authenticity in Anthropomorphized AI-Assistants: A Human-Robot Interaction Perspective. PURUSHARTHA - J Manage Ethics Spiritual 17(1):116–135. https://doi.org/10.21844/16202117108
Patel SC, Fan J (2023) Identification and Description of Emotions by Current Large Language Models. Neuroscience. https://doi.org/10.1101/2023.07.17.549421
Pineda AM, Reia SM, Connaughton C, Fontanari JF, Rodrigues FA (2023) Cultural heterogeneity constrains diffusion of innovations. Europhys Lett 143(4):42003. https://doi.org/10.1209/0295-5075/aceeab
Puteri SA, Saputri Y, Kurniati Y (2024) The Impact of Artificial Intelligence (AI) Technology on Students’ Social Relations. BICC Proceedings 2:153–158. https://doi.org/10.30983/bicc.v1i1.121
Pylov P, Maitak R, Protodyakonov A (2023) The Latent Dirichlet Allocation (LDA) generative model for automating process of rendering judicial decisions. E3S Web Conf 431:05005. https://doi.org/10.1051/e3sconf/202343105005
Redko K (2023) Empathy in Technology: AI and Social Entrepreneurship for Poverty Eradication. Efektyvna Ekonomika 12. https://doi.org/10.32702/2307-2105.2023.12.38
Řehůřek R, Sojka P (2010) Software Framework for Topic Modelling with Large Corpora. Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 45–50
Rowley MA, Allen JR, Newton W, Daly C (2024) Machine learning review of hand surgery literature. Curr Orthop Pract 35(2):84–90. https://doi.org/10.1097/BCO.0000000000001249
Rubin M, Li JZ, Zimmerman F, Ong DC, Goldenberg A, Perry A (2025) Comparing the value of perceived human versus AI-generated empathy. Nat Hum Behav. https://doi.org/10.1038/s41562-025-02247-w
Shahzad MF, Xu S, Lim WM, Yang X, Khan QR (2024) Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon 10(8):e29523. https://doi.org/10.1016/j.heliyon.2024.e29523
Shen Y (2024) Interaction mode enables user perception recognition and perception optimization: An AI human-computer interaction study. Appl Comput Eng 31(1):57–63. https://doi.org/10.54254/2755-2721/31/20230122
Shvo M, Buhmann J, Kapadia M (2019) An Interdependent Model of Personality, Motivation, Emotion, and Mood for Intelligent Virtual Agents. Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, 65–72. https://doi.org/10.1145/3308532.3329474
Simmel G (1971) On individuality and social forms: Selected writings. University of Chicago Press
Sun Y, Xu C, Xu H (2024) Social identity in trusting artificial intelligence agents: Evidence from lab and online experiments. Manag Decis Econ 45(8):5899–5916. https://doi.org/10.1002/mde.4361
Tan CKK, Xu Z (2022) The real digital househusbands of China: The alienable affects of China’s male ‘virtual lovers’. J Consumer Cult 22(1):3–20. https://doi.org/10.1177/1469540519899968
Trope Y, Liberman N, Wakslak C (2007) Construal Levels and Psychological Distance: Effects on Representation, Prediction, Evaluation, and Behavior. J Consumer Psychol 17(2):83–95. https://doi.org/10.1016/S1057-7408(07)70013-X
Tusini S (2022) A temporal perspective to empirically investigate the concept of social distance. Qual Quant 56(6):4421–4435. https://doi.org/10.1007/s11135-022-01319-7
Vanhoffelen G, Vandenbosch L, Schreurs L (2025) Teens, Tech, and Talk: Adolescents’ Use of and Emotional Reactions to Snapchat’s My AI Chatbot. Behav Sci 15(8):1037. https://doi.org/10.3390/bs15081037
Veugen T, Dunning V, Marcus M, Kamphorst B (2025) Secure latent Dirichlet allocation. Front Digit Health 7:1610228. https://doi.org/10.3389/fdgth.2025.1610228
Wan W, Huang R (2024) Deep Learning-Driven Public Opinion Analysis on the Weibo Topic about AI Art. Appl Sci 14(9):3674. https://doi.org/10.3390/app14093674
Wang J, Zhang Y, Zhang L, Yang P, Gao X, Wu Z, Dong X, He J, Zhuo J, Yang Q, Huang Y, Li X, Wu Y, Lu J, Zhu X, Chen W, Han T, Pan K, Wang R, … Zhang J (2022) Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence. CoRR abs/2209.02970
Wang S, Schraagen M, Sang ETK, Dastani M (2020) Dutch General Public Reaction on Governmental COVID-19 Measures and Announcements in Twitter Data (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2006.07283
Wang Y, Liu W (2023) Emotional Simulation of Artificial Intelligence and Its Ethical Reflection. Acad J Humanit Social Sci 6(5). https://doi.org/10.25236/AJHSS.2023.060503
Wu C, Fisher A, Schnyer D (2022) Gaussian Latent Dirichlet Allocation for Discrete Human State Discovery (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2206.14233
Xie T, Pentina I (2022) Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika. Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2022.258
Xie Z, Wang Z (2024) Longitudinal Examination of the Relationship Between Virtual Companionship and Social Anxiety: Emotional Expression as a Mediator and Mindfulness as a Moderator. Psychol Res Behav Manag 17:765–782. https://doi.org/10.2147/PRBM.S447487
Yan M, Lou X, Chan CA, Wang Y, Jiang W (2023) A semantic and emotion-based dual latent variable generation model for a dialogue system. CAAI Trans Intell Technol 8(2):319–330. https://doi.org/10.1049/cit2.12153
Yang F, Oshio A (2025) Using attachment theory to conceptualize and measure the experiences in human-AI relationships. Curr Psychol 44(11):10658–10669. https://doi.org/10.1007/s12144-025-07917-6
Yang Y, Tavares J, Oliveira T (2024) A New Research Model for Artificial Intelligence–Based Well-Being Chatbot Engagement: Survey Study. JMIR Hum Factors 11:e59908. https://doi.org/10.2196/59908
Yin M, Yu Z, Zhu M (2025) Are Digital Offspring Closer than Biological Offspring?—A Study on the Factors Influencing the Intention to Use AI Kinship Companion Customization Products. Int J Human–Computer Interact 1–25. https://doi.org/10.1080/10447318.2025.2531279
Yin Y, Jia N, Wakslak CJ (2024) AI can help people feel heard, but an AI label diminishes this impact. Proc Natl Acad Sci USA 121(14):e2319112121. https://doi.org/10.1073/pnas.2319112121
Yoon S, Byun S, Jung K (2018) Multimodal Speech Emotion Recognition Using Audio and Text. 2018 IEEE Spoken Language Technology Workshop (SLT), 112–118. https://doi.org/10.1109/SLT.2018.8639583
Yousefi Z, Saadati N, Saadati SA (2023) AI in Elderly Care: Understanding the Implications for Independence and Social Interaction. AI and Tech in Behavioral and Social Sciences 1(4):26–32. https://doi.org/10.61838/kman.aitech.1.4.5
Zhai C, Wibowo S, Li LD (2024) The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learn Environ 11(1):28. https://doi.org/10.1186/s40561-024-00316-7
Zhao X, Wu J (2024) A reinforcement learning-based algorithm for discrete dynamic stochastic recognition of speech dialog emotions. J Phys Conf Ser 2898(1):012046. https://doi.org/10.1088/1742-6596/2898/1/012046
Zhou S, Li K, Liu Y (2009) Text Categorization Based on Topic Model. Int J Comput Intell Syst 2(4):398. https://doi.org/10.2991/ijcis.2009.2.4.8
Zhou X, Chen C, Li W, Yao Y, Cai F, Xu J, Qin X (2025) How Do Coworkers Interpret Employee AI Usage: Coworkers’ Perceived Morality and Helping as Responses to Employee AI Usage. Hum Resour Manag 64(4):1077–1097. https://doi.org/10.1002/hrm.22299