References
Agrawal, A. (2024). Fairness in AI-Driven Oncology: Investigating Racial and Gender Biases in Large Language Models. Cureus. https://doi.org/10.7759/cureus.69541
Alahmed, Y., Abadla, R., & Ansari, M. J. A. (2024). Exploring the Potential Implications of AI-generated Content in Social Engineering Attacks. 2024 International Conference on Multimedia Computing, Networking and Applications (MCNA), 64–73. https://doi.org/10.1109/MCNA63144.2024.10703950
Alawida, M., Abu Shawar, B., Abiodun, O. I., Mehmood, A., Omolara, A. E., & Al Hwaitat, A. K. (2024). Unveiling the Dark Side of ChatGPT: Exploring Cyberattacks and Enhancing User Awareness. Information, 15(1), 27. https://doi.org/10.3390/info15010027
Appignani, T., & Sanchez, J. (2024). AI and racism: Tone policing by the Bing AI chatbot. Discourse Studies, 26(5), 591–605. https://doi.org/10.1177/14614456241235075
Asiksoy, G. (2025). Nurses’ assessment of artificial intelligence chatbots for health literacy education. Journal of Education and Health Promotion, 14(1). https://doi.org/10.4103/jehp.jehp_1195_24
Atkins, C., Zhao, B. Z. H., Asghar, H. J., Wood, I., & Kaafar, M. A. (2023). Those Aren’t Your Memories, They’re Somebody Else’s: Seeding Misinformation in Chat Bot Memories. In M. Tibouchi & X. Wang (Eds.), Applied Cryptography and Network Security (Vol. 13905, pp. 284–308). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-33488-7_11
Ba, Z., Zhong, J., Lei, J., Cheng, P., Wang, Q., Qin, Z., Wang, Z., & Ren, K. (2024). SurrogatePrompt: Bypassing the Safety Filter of Text-to-Image Models via Substitution. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 1166–1180. https://doi.org/10.1145/3658644.3690346
Bai, H., Voelkel, J. G., Muldowney, S., Eichstaedt, J. C., & Willer, R. (2025). LLM-generated messages can persuade humans on policy issues. Nature Communications, 16(1), 6037. https://doi.org/10.1038/s41467-025-61345-5
Bakir, V., Laffer, A., McStay, A., Miranda, D., & Urquhart, L. (2024). On manipulation by emotional AI: UK adults’ views and governance implications. Frontiers in Sociology, 9, 1339834. https://doi.org/10.3389/fsoc.2024.1339834
Battista, D., & Camargo Molano, J. (2023). How AI Bots Have Reinforced Gender Bias in Hate Speech. Ex aequo, (48), 53–68. https://doi.org/10.22355/exaequo.2023.48.05
Beckerich, M., Plein, L., & Coronado, S. (2023). RatGPT: Turning online LLMs into Proxies for Malware Attacks (arXiv:2308.09183). arXiv. https://doi.org/10.48550/arXiv.2308.09183
Boucher, N., Pajola, L., Shumailov, I., Anderson, R., & Conti, M. (2023). Boosting Big Brother: Attacking Search Engines with Encodings. Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, 700–713. https://doi.org/10.1145/3607199.3607220
Brendel, A. B., Hildebrandt, F., Dennis, A. R., & Riquel, J. (2023). The Paradoxical Role of Humanness in Aggression Toward Conversational Agents. Journal of Management Information Systems, 40(3), 883–913. https://doi.org/10.1080/07421222.2023.2229127
Carroll, M., Chan, A., Ashton, H., & Krueger, D. (2023). Characterizing manipulation from AI systems. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–13.
Cercas Curry, A., Abercrombie, G., & Rieser, V. (2021). ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Detection in Conversational AI. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 7388–7403. https://doi.org/10.18653/v1/2021.emnlp-main.587
Chan, S., Pataranutaporn, P., Suri, A., Zulfikar, W., Maes, P., & Loftus, E. F. (2024). Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews (arXiv:2408.04681). arXiv. https://doi.org/10.48550/arXiv.2408.04681
Chang, C. T., Srivathsa, N., Bou-Khalil, C., Swaminathan, A., Lunn, M. R., Mishra, K., Koyejo, S., & Daneshjou, R. (2024). Evaluating anti-LGBTQIA+ medical bias in large language models. medRxiv. https://doi.org/10.1101/2024.08.22.24312464
Chen, B., Ivanov, N., Wang, G., & Yan, Q. (2024). Multi-Turn Hidden Backdoor in Large Language Model-powered Chatbot Models. Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 1316–1330. https://doi.org/10.1145/3634737.3656289
Chen, B., Wang, G., Guo, H., Wang, Y., & Yan, Q. (2023). Understanding Multi-Turn Toxic Behaviors in Open-Domain Chatbots. Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, 282–296. https://doi.org/10.1145/3607199.3607237
Choi, R., Kim, T., Park, S., Kim, J. G., & Lee, S.-J. (2025). Private Yet Social: How LLM Chatbots Support and Challenge Eating Disorder Recovery. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3706598.3713485
Choi, S. (2025). The Manner Is the Matter: How the Chatbot Communication Style and Consumers’ Regulatory Focus Shape Purchase Intention. Journal of Consumer Behaviour, 24(4), 1950–1966. https://doi.org/10.1002/cb.2505
Contro, J., Deol, S., He, Y., & Brandão, M. (2025). ChatbotManip: A Dataset to Facilitate Evaluation and Oversight of Manipulative Chatbot Behaviour (arXiv:2506.12090). arXiv. https://doi.org/10.48550/arXiv.2506.12090
Cork, A., Smith, L. G., Ellis, D. A., Stanton Fraser, D., & Joinson, A. (2022). Rethinking Online Harm: A Psychological Model of Contextual Vulnerability. PsyArXiv. https://doi.org/10.31234/osf.io/z7re2
Cuadra, A., Wang, M., Stein, L. A., Jung, M. F., Dell, N., Estrin, D., & Landay, J. A. (2024). The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3642336
Danry, V., Pataranutaporn, P., Groh, M., & Epstein, Z. (2025). Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–31. https://doi.org/10.1145/3706598.3713408
Davar, N. F., Dewan, M. A. A., & Zhang, X. (2025). AI Chatbots in Education: Challenges and Opportunities. Information, 16(3), 235. https://doi.org/10.3390/info16030235
De Cicco, R. (2024). Exploring the dark corners of human-chatbot interactions: A literature review on conversational agent abuse. In International Workshop on Chatbot Research and Design (pp. 185–203). Springer, Cham.
Doshi, J., Novacic, I., Fletcher, C., Borges, M., Zhong, E., Marino, M. C., Gan, J., Mager, S., Sprague, D., & Xia, M. (2024). Sleeper Social Bots: A new generation of AI disinformation bots are already a political threat (arXiv:2408.12603). arXiv. https://doi.org/10.48550/arXiv.2408.12603
Durántez-Stolle, P., Martínez-Sanz, R., Piñeiro-Otero, T., & Gómez-García, S. (2023). Feminism as a polarizing axis of the political conversation on Twitter: The case of #IreneMonteroDimision. El Profesional de la Información, 32(6), e320607. https://doi.org/10.3145/epi.2023.nov.07
Edu, J., Mulligan, C., Pierazzi, F., Polakis, J., Suarez-Tangil, G., & Such, J. (2022). Exploring the security and privacy risks of chatbots in messaging services. Proceedings of the 22nd ACM Internet Measurement Conference, 581–588. https://doi.org/10.1145/3517745.3561433
Fatimah, R., Mumtaz, A., Fahrezi, F. M., & Zakaria, D. (2024). AI-generated misinformation: A literature review. Indonesian Journal of Artificial Intelligence and Data Mining (IJAIDM), 7(2), 241–254.
Gabriel, S., Lyu, L., Siderius, J., Ghassemi, M., Andreas, J., & Ozdaglar, A. (2024). MisinfoEval: Generative AI in the Era of “Alternative Facts” (arXiv:2410.09949). arXiv. https://doi.org/10.48550/arXiv.2410.09949
Gehman, S., Gururangan, S., Sap, M., Choi, Y., & Smith, N. A. (2020). RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models (arXiv:2009.11462). arXiv. https://doi.org/10.48550/arXiv.2009.11462
Gendi, M., & Munteanu, C. (2021). Towards a chatbot for evidence gathering on the dark web. CUI 2021 – 3rd Conference on Conversational User Interfaces, 1–3. https://doi.org/10.1145/3469595.3469598
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. IEEE Access, 11, 80218–80245. https://doi.org/10.1109/ACCESS.2023.3300381
Hajli, N., Saeed, U., Tajvidi, M., & Shirazi, F. (2022). Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence. British Journal of Management, 33(3), 1238–1253. https://doi.org/10.1111/1467-8551.12554
Han, C., Seering, J., Kumar, D., Hancock, J. T., & Durumeric, Z. (2023). Hate Raids on Twitch: Echoes of the Past, New Modalities, and Implications for Platform Governance. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1–28. https://doi.org/10.1145/3579609
Ienca, M. (2023). On artificial intelligence and manipulation. Topoi, 42(3), 833–842.
Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). Co-Writing with Opinionated Language Models Affects Users’ Views. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3544548.3581196
Keijsers, M., Bartneck, C., & Eyssel, F. (2021). What’s to bullying a bot?: Correlates between chatbot humanlikeness and abuse. Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, 22(1), 55–80. https://doi.org/10.1075/is.20002.kei
Kim, W. B., & Hur, H. J. (2023). What Makes People Feel Empathy for AI Chatbots? Assessing the Role of Competence and Warmth. International Journal of Human–Computer Interaction, 40(17), 4674–4687. https://doi.org/10.1080/10447318.2023.2219961
Klyueva, A. (2021). Trolls, Bots, and Whatnots: Deceptive Content, Deception Detection, and Deception Suppression. In I. R. Management Association (Ed.), Research Anthology on Fake News, Political Warfare, and Combatting the Spread of Misinformation (pp. 316–330). IGI Global. https://doi.org/10.4018/978-1-7998-7291-7.ch018
Köbis, N., Bonnefon, J.-F., & Rahwan, I. (2021). Bad machines corrupt good morals. Nature Human Behaviour, 5(6), 679–685. https://doi.org/10.1038/s41562-021-01128-2
Krauß, V., McGill, M., Kosch, T., Thiel, Y. M., Schön, D., & Gugenheimer, J. (2025). “Create a Fear of Missing Out”—ChatGPT Implements Unsolicited Deceptive Designs in Generated Websites Without Warning. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3706598.3713083
Krook, J. (2025). Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors (arXiv:2503.18387). arXiv. https://doi.org/10.48550/arXiv.2503.18387
Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13(1), 4569. https://doi.org/10.1038/s41598-023-31341-0
Kurniawan, M. H., Handiyani, H., Nuraini, T., Hariyati, R. T. S., & Sutrisno, S. (2024). A systematic review of artificial intelligence-powered (AI-powered) chatbot intervention for managing chronic illness. Annals of Medicine, 56(1). https://doi.org/10.1080/07853890.2024.2302980
Lan, Q., Kaul, A., & Jones, S. (2025). Prompt Injection Detection in LLM Integrated Applications. International Journal of Network Dynamics and Intelligence, 100013. https://doi.org/10.53941/ijndi.2025.100013
Leib, M., Köbis, N. C., Rilke, R. M., Hagens, M., & Irlenbusch, B. (2021). The corruptive force of AI-generated advice (arXiv:2102.07536). arXiv. https://doi.org/10.48550/arXiv.2102.07536
Li, H., Guo, D., Fan, W., Xu, M., Huang, J., Meng, F., & Song, Y. (2023). Multi-step Jailbreaking Privacy Attacks on ChatGPT. Findings of the Association for Computational Linguistics: EMNLP 2023, 4138–4153. https://doi.org/10.18653/v1/2023.findings-emnlp.272
Li, J. (2023). Security Implications of AI Chatbots in Health Care. Journal of Medical Internet Research, 25, e47551. https://doi.org/10.2196/47551
Li, L., Peng, W., & Rheu, M. M. J. (2023). Factors Predicting Intentions of Adoption and Continued Use of Artificial Intelligence Chatbots for Mental Health: Examining the Role of UTAUT Model, Stigma, Privacy Concerns, and Artificial Intelligence Hesitancy. Telemedicine and E-Health, 30(3), 722–730. https://doi.org/10.1089/tmj.2023.0313
Lin, W., Gerchanovsky, A., Akgul, O., Bauer, L., Fredrikson, M., & Wang, Z. (2025). LLM Whisperer: An Inconspicuous Attack to Bias LLM Responses. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–24. https://doi.org/10.1145/3706598.3714025
Lin, Z., Wang, Z., Tong, Y., Wang, Y., Guo, Y., Wang, Y., & Shang, J. (2023). ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation. Findings of the Association for Computational Linguistics: EMNLP 2023, 4694–4702. https://doi.org/10.18653/v1/2023.findings-emnlp.311
Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., & Wang, K. (2024). A Hitchhiker’s Guide to Jailbreaking ChatGPT via Prompt Engineering. Proceedings of the 4th International Workshop on Software Engineering and AI for Data Quality in Cyber-Physical Systems/Internet of Things, 12–21. https://doi.org/10.1145/3663530.3665021
Makhortykh, M., Sydorova, M., Baghumyan, A., Vziatysheva, V., & Kuznetsova, E. (2024). Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-154
Manoli, A., Pauketat, J. V. T., & Anthis, J. R. (2025). The AI Double Standard: Humans Judge All AIs for the Actions of One. Proceedings of the ACM on Human-Computer Interaction, 9(2), 1–24. https://doi.org/10.1145/3711083
McGuire, J., De Cremer, D., Hesselbarth, Y., De Schutter, L., Mai, K. M., & Van Hiel, A. (2023). The reputational and ethical consequences of deceptive chatbot use. Scientific Reports, 13(1), 16246. https://doi.org/10.1038/s41598-023-41692-3
Menz, B. D., Kuderer, N. M., Bacchi, S., Modi, N. D., Chin-Yee, B., Hu, T., Rickard, C., Haseloff, M., Vitry, A., McKinnon, R. A., Kichenadasse, G., Rowland, A., Sorich, M. J., & Hopkins, A. M. (2024). Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: Repeated cross sectional analysis. BMJ, e078538. https://doi.org/10.1136/bmj-2023-078538
Moy, W. R., & Gradon, K. T. (2023). A double-edged sword. In Artificial Intelligence and International Conflict in Cyberspace. Routledge.
Namvarpour, M., & Razi, A. (2024). Uncovering Contradictions in Human-AI Interactions: Lessons Learned from User Reviews of Replika. Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 579–586. https://doi.org/10.1145/3678884.3681909
Paluszek, O., & Loeb, S. (2025). Artificial intelligence and patient education. Current Opinion in Urology, 35(3), 219–223. https://doi.org/10.1097/mou.0000000000001267
Parray, I. (2021). Humour in the Age of Contagion: Coronavirus, ‘Janata Curfew’ Meme and India’s Digital Cultures of Virality. In S. Mpofu (Ed.), Digital Humour in the Covid-19 Pandemic (pp. 279–293). Springer International Publishing. https://doi.org/10.1007/978-3-030-79279-4_13
Pataranutaporn, P., Archiwaranguprok, C., Chan, S. W. T., Loftus, E., & Maes, P. (2025). Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation. Proceedings of the 30th International Conference on Intelligent User Interfaces, 1297–1313. https://doi.org/10.1145/3708359.3712112
Piggott, B., Patil, S., Feng, G., Odat, I., Mukherjee, R., Dharmalingam, B., & Liu, A. (2023). Net-GPT: A LLM-Empowered Man-in-the-Middle Chatbot for Unmanned Aerial Vehicle. Proceedings of the Eighth ACM/IEEE Symposium on Edge Computing, 287–293. https://doi.org/10.1145/3583740.3626809
Polyportis, A., & Pahos, N. (2024). Navigating the perils of artificial intelligence: a focused review on ChatGPT and responsible research and innovation. Humanities and Social Sciences Communications, 11(1), 1–10.
Porna, S. B., Ahmad, M., Vallejo, R. G., Shahzadi, I., & Rahman, M. A. (2025). Exploring Ethical Dimensions of AI Assistants and Chatbots. In Responsible Implementations of Generative AI for Multidisciplinary Use (pp. 291–316). IGI Global.
Prakash, A. V., Joshi, A., Nim, S., & Das, S. (2023). Determinants and consequences of trust in AI-based customer service chatbots. The Service Industries Journal, 43(9–10), 642–675. https://doi.org/10.1080/02642069.2023.2166493
Rafiq, F., Adil, M., Wu, J.-Z., & Dogra, N. (2022). Examining Consumer’s Intention to Adopt AI-Chatbots in Tourism Using Partial Least Squares Structural Equation Modeling Method. Mathematics, 10(13), 2190. https://doi.org/10.3390/math10132190
Rodríguez, J. I., Durán, S. R., Díaz-López, D., Pastor-Galindo, J., & Mármol, F. G. (2020). C3-Sex: A Conversational Agent to Detect Online Sex Offenders. Electronics, 9(11), 1779. https://doi.org/10.3390/electronics9111779
Roy, S. S., Naragam, K. V., & Nilizadeh, S. (2023). Generating Phishing Attacks using ChatGPT (arXiv:2305.05133). arXiv. https://doi.org/10.48550/arXiv.2305.05133
Schiller Hansen, S., & Søgaard, A. (2025). Captivation Lures and Social Robots. In J. Seibt, P. Fazekas, & O. S. Quick (Eds.), Frontiers in Artificial Intelligence and Applications. IOS Press. https://doi.org/10.3233/FAIA241534
Shibli, A. M., Pritom, M. M. A., & Gupta, M. (2024). AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns. 2024 12th International Symposium on Digital Forensics and Security (ISDFS), 1–6. https://doi.org/10.1109/ISDFS60797.2024.10527300
Si, W. M., Backes, M., Blackburn, J., De Cristofaro, E., Stringhini, G., Zannettou, S., & Zhang, Y. (2022). Why So Toxic?: Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2659–2673. https://doi.org/10.1145/3548606.3560599
Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2024). ChatGPT: More than a “weapon of mass deception”: Ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective. International Journal of Human–Computer Interaction, 40(17), 4853–4872.
Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3 (dis)informs us better than humans. Science Advances, 9(26), eadh1850. https://doi.org/10.1126/sciadv.adh1850
Szmurlo, H., & Akhtar, Z. (2024). Digital Sentinels and Antagonists: The Dual Nature of Chatbots in Cybersecurity. Information, 15(8), 443. https://doi.org/10.3390/info15080443
Urman, A., & Makhortykh, M. (2025). The silence of the LLMs: Cross-lingual analysis of guardrail-related political bias and false information prevalence in ChatGPT, Google Bard (Gemini), and Bing Chat. Telematics and Informatics, 96, 102211. https://doi.org/10.1016/j.tele.2024.102211
Usman, Y., Upadhyay, A., Gyawali, P., & Chataut, R. (2024). Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks (arXiv:2408.12806). arXiv. https://doi.org/10.48550/arXiv.2408.12806
Veisi, O., Kazemian, K., Gerami, F., Mirzaee Kharghani, M., Amirkhani, S., Du, D. K., Stevens, G., & Boden, A. (2025). User Narrative Study for Dealing with Deceptive Chatbot Scams Aiming to Online Fraud. Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–7. https://doi.org/10.1145/3706599.3720152
Vidgen, B., Scherrer, N., Kirk, H. R., Qian, R., Kannappan, A., Hale, S. A., & Röttger, P. (2024). SimpleSafetyTests: A Test Suite for Identifying Critical Safety Risks in Large Language Models (arXiv:2311.08370). arXiv. https://doi.org/10.48550/arXiv.2311.08370
Vorsino, Z. (2021). Chatbots, Gender, and Race on Web 2.0 Platforms: Tay.AI as Monstrous Femininity and Abject Whiteness. Signs: Journal of Women in Culture and Society, 47(1), 105–127. https://doi.org/10.1086/715227
Wang, J., Hu, X., Hou, W., Chen, H., Zheng, R., Wang, Y., Yang, L., Huang, H., Ye, W., Geng, X., Jiao, B., Zhang, Y., & Xie, X. (2023). On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective (arXiv:2302.12095). arXiv. https://doi.org/10.48550/arXiv.2302.12095
Wang, R., Ma, X., Zhou, H., Ji, C., Ye, G., & Jiang, Y.-G. (2024). White-box Multimodal Jailbreaks Against Large Vision-Language Models. Proceedings of the 32nd ACM International Conference on Multimedia, 6920–6928. https://doi.org/10.1145/3664647.3681092
Weeks, C., Cheruvu, A., Abdullah, S. M., Kanchi, S., Yao, D., & Viswanath, B. (2023). A First Look at Toxicity Injection Attacks on Open-domain Chatbots. Annual Computer Security Applications Conference, 521–534. https://doi.org/10.1145/3627106.3627122
Yang, K.-C., & Menczer, F. (2023). Anatomy of an AI-powered malicious social botnet (arXiv:2307.16336). arXiv. https://doi.org/10.48550/arXiv.2307.16336
Yu, J., Lin, X., Yu, Z., & Xing, X. (2024). GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts (arXiv:2309.10253). arXiv. https://doi.org/10.48550/arXiv.2309.10253
Zellagui, W., Imine, A., & Tadjeddine, Y. (2025). Cryptocurrency Frauds for Dummies: How ChatGPT introduces us to fraud? Digital Government: Research and Practice, 6(1), 1–16. https://doi.org/10.1145/3673764
Zhang, R., Li, H., Meng, H., Zhan, J., Gan, H., & Lee, Y.-C. (2025). The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3706598.3713429
Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & De Choudhury, M. (2023). Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3544548.3581318