References
Afroogh S, Akbari A, Malone E, Kargar M, Alambeigi H. Trust in AI: progress, challenges, and future directions. Humanit Soc Sci Commun. 2024;11:1568. https://doi.org/10.1057/s41599-024-04044-8.
Alarcon GM, Lyons JB, Christensen JC. The effect of propensity to trust and familiarity on perceptions of trustworthiness over time. Personal Individ Differ. 2016;94:309–15. https://doi.org/10.1016/j.paid.2016.01.031.
Alshakhsi S, Almourad MB, Babkir A, Al-Thani D, Yankouskaya A, Montag C, Ali R. Designing AI to foster acceptance: do freedom to choose and social proof impact AI attitudes among British and Arab populations? Behav Inf Technol. 2025;1–19. https://doi.org/10.1080/0144929X.2025.2477053.
Andreassen R, Bråten I. Teachers’ source evaluation self-efficacy predicts their use of relevant source features when evaluating the trustworthiness of web sources on special education. Br J Educ Technol. 2013;44:821–36. https://doi.org/10.1111/j.1467-8535.2012.01366.x.
Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall; 1986.
Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol Rev. 1977;84:191–215. https://doi.org/10.1037/0033-295X.84.2.191.
Belsley DA, Kuh E, Welsch RE. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. Wiley; 2005. https://doi.org/10.1002/0471725153.
Bo D, Ma’rof AA. Understanding User Attitude towards AI Agents: The Roles of Perceived Competence, Trust in Technology, and Social Influence. Int J Acad Res Bus Soc Sci. 2024;14:527–38. https://doi.org/10.6007/IJARBSS/v14-i12/24001.
Cannon WB. The Wisdom of the Body. New York, NY: W. W. Norton & Company, Inc.; 1932.
Cappelleri JC, Lundy JJ, Hays RD. Overview of Classical Test Theory and Item Response Theory for the Quantitative Assessment of Items in Developing Patient-Reported Outcomes Measures. Clin Ther. 2014;36:648–62. https://doi.org/10.1016/j.clinthera.2014.04.006.
Caprara GV, Vecchione M, Alessandri G, Gerbino M, Barbaranelli C. The contribution of personality traits and self-efficacy beliefs to academic achievement: A longitudinal study: Personality traits, self-efficacy beliefs and academic achievement. Br J Educ Psychol. 2011;81:78–96. https://doi.org/10.1348/2044-8279.002004.
Chaiken S, Ledgerwood A. A Theory of Heuristic and Systematic Information Processing. In: Handbook of Theories of Social Psychology: Volume 1. London: SAGE Publications Ltd; 2012. pp. 246–66. https://doi.org/10.4135/9781446249215.n13.
Chamorro-Koc M, Peake J, Meek A, Manimont G. Self-efficacy and trust in consumers’ use of health-technologies devices for sports. Heliyon. 2021;7:e07794. https://doi.org/10.1016/j.heliyon.2021.e07794.
Chen FF. Sensitivity of Goodness of Fit Indexes to Lack of Measurement Invariance. Struct Equ Model Multidiscip J. 2007;14:464–504. https://doi.org/10.1080/10705510701301834.
Coleman C, Neuman WR, Dasdan A, Ali S, Shah M. The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach. 2025. https://doi.org/10.48550/ARXIV.2504.19255.
Colquitt JA, Conlon DE, Wesson MJ, Porter COLH, Ng KY. Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. J Appl Psychol. 2001;86:425–45. https://doi.org/10.1037/0021-9010.86.3.425.
Colquitt JA, Scott BA, LePine JA. Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. J Appl Psychol. 2007;92:909–27. https://doi.org/10.1037/0021-9010.92.4.909.
Colquitt JA, Scott BA, Rodell JB, Long DM, Zapata CP, Conlon DE, Wesson MJ. Justice at the millennium, a decade later: A meta-analytic test of social exchange and affect-based perspectives. J Appl Psychol. 2013;98:199–236. https://doi.org/10.1037/a0031757.
Costello AB, Osborne J. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10:1–9. https://doi.org/10.7275/JYJ1-4868.
Culnan MJ, Armstrong PK. Information Privacy Concerns, Procedural Fairness, and Impersonal Trust: An Empirical Investigation. Organ Sci. 1999;10:104–15. https://doi.org/10.1287/orsc.10.1.104.
Cyrus-Lai W, Tierney W, Du Plessis C, Nguyen M, Schaerer M, Giulia Clemente E, Uhlmann EL. Avoiding Bias in the Search for Implicit Bias. Psychol Inq. 2022;33:203–12. https://doi.org/10.1080/1047840X.2022.2106762.
Daly SJ, Wiewiora A, Hearn G. Shifting attitudes and trust in AI: Influences on organizational AI adoption. Technol Forecast Soc Change. 2025;215:124108. https://doi.org/10.1016/j.techfore.2025.124108.
Dang Q, Li G. Unveiling trust in AI: the interplay of antecedents, consequences, and cultural dynamics. AI Soc. 2025. https://doi.org/10.1007/s00146-025-02477-6.
Daronnat S, Azzopardi L, Halvey M, Dubiel M. Inferring Trust From Users’ Behaviours; Agents’ Predictability Positively Affects Trust, Task Performance and Cognitive Load in Human-Agent Real-Time Collaboration. Front Robot AI. 2021;8:642201. https://doi.org/10.3389/frobt.2021.642201.
David S, Hareli S, Hess U. The influence on perceptions of truthfulness of the emotional expressions shown when talking about failure. Eur J Psychol. 2015;11:125–38. https://doi.org/10.5964/ejop.v11i1.877.
De Duro ES, Veltri GA, Golino H, Stella M. Measuring and identifying factors of individuals’ trust in Large Language Models. 2025. https://doi.org/10.48550/ARXIV.2502.21028.
De Fine Licht K, Brülde B. On Defining Reliance and Trust: Purposes, Conditions of Adequacy, and New Definitions. Philosophia. 2021;49:1981–2001. https://doi.org/10.1007/s11406-021-00339-1.
DeCastellarnau A. A classification of response scale characteristics that affect data quality: a literature review. Qual Quant. 2018;52:1523–59. https://doi.org/10.1007/s11135-017-0533-4.
Di W, Nie Y, Chua BL, Chye S, Teo T. Developing a Single-Item General Self-Efficacy Scale: An Initial Study. J Psychoeduc Assess. 2023;41:583–98. https://doi.org/10.1177/07342829231161884.
Dietvorst BJ, Simmons JP, Massey C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J Exp Psychol Gen. 2015;144:114–26. https://doi.org/10.1037/xge0000033.
Durán JM, Pozzi G. Trust and Trustworthiness in AI. Philos Technol. 2025;38:16. https://doi.org/10.1007/s13347-025-00843-2.
Earle T, Siegrist M. Trust, Confidence and Cooperation model: a framework for understanding the relation between trust and Risk Perception. Int J Glob Environ Issues. 2008;8:17. https://doi.org/10.1504/IJGENVI.2008.017257.
Edwards MC. An Introduction to Item Response Theory Using the Need for Cognition Scale. Soc Personal Psychol Compass. 2009;3:507–29. https://doi.org/10.1111/j.1751-9004.2009.00194.x.
Ehrhart MG, Ehrhart KH, Roesch SC, Chung-Herrera BG, Nadler K, Bradshaw K. Testing the latent factor structure and construct validity of the Ten-Item Personality Inventory. Personal Individ Differ. 2009;47:900–5. https://doi.org/10.1016/j.paid.2009.07.012.
Evans AM, Revelle W. Survey and behavioral measurements of interpersonal trust. J Res Personal. 2008;42:1585–93. https://doi.org/10.1016/j.jrp.2008.07.011.
Forscher PS, Lai CK, Axt JR, Ebersole CR, Herman M, Devine PG, Nosek BA. A meta-analysis of procedures to change implicit measures. J Pers Soc Psychol. 2019;117:522–59. https://doi.org/10.1037/pspa0000160.
Glickman M, Sharot T. How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat Hum Behav. 2024;9:345–59. https://doi.org/10.1038/s41562-024-02077-2.
Gosling SD, Rentfrow PJ, Swann WB. A very brief measure of the Big-Five personality domains. J Res Personal. 2003;37:504–28. https://doi.org/10.1016/S0092-6566(03)00046-1.
Graziano WG, Eisenberg N. Agreeableness. Handbook of Personality Psychology. Elsevier; 1997. pp. 795–824. https://doi.org/10.1016/B978-012134645-4/50031-7.
Grgic-Hlaca N, Redmiles EM, Gummadi KP, Weller A. Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction. In: Proceedings of the 2018 World Wide Web Conference (WWW ’18). Lyon: ACM Press; 2018. pp. 903–12. https://doi.org/10.1145/3178876.3186138.
Groskurth K, Bluemke M, Lechner CM. Why we need to abandon fixed cutoffs for goodness-of-fit indices: An extensive simulation and possible solutions. Behav Res Methods. 2023;56:3891–914. https://doi.org/10.3758/s13428-023-02193-3.
Gustafsson J-E, Åberg-Bengtsson L. Unidimensionality and interpretability of psychological instruments. In: Embretson SE, editor. Measuring Psychological Constructs: Advances in Model-Based Approaches. Washington: American Psychological Association; 2010. pp. 97–121. https://doi.org/10.1037/12074-005.
Hancock PA, Kessler TT, Kaplan AD, Stowers K, Brill JC, Billings DR, Schaefer KE, Szalma JL. How and why humans trust: A meta-analysis and elaborated model. Front Psychol. 2023;14:1081086. https://doi.org/10.3389/fpsyg.2023.1081086.
Harvey K, Laurie G. Proxies of Trustworthiness: A Novel Framework to Support the Performance of Trust in Human Health Research. J Bioethical Inq. 2024;21:625–45. https://doi.org/10.1007/s11673-024-10335-1.
Hassan MU, Iqbal Z, Nazeer W. Technology trust and online purchase behaviour: a multidimensional research model. Int J Bus Forecast Mark Intell. 2019;5:464. https://doi.org/10.1504/IJBFMI.2019.105342.
Hochman G. Beyond the Surface: A New Perspective on Dual-System Theories in Decision-Making. Behav Sci. 2024;14:1028. https://doi.org/10.3390/bs14111028.
Hoff KA, Bashir M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum Factors J Hum Factors Ergon Soc. 2015;57:407–34. https://doi.org/10.1177/0018720814547570.
Holland C, Perry G, Neyedli HF. Calibrating Trust, Reliance and Dependence in Variable-Reliability Automation. Proc Hum Factors Ergon Soc Annu Meet. 2024;68:604–10. https://doi.org/10.1177/10711813241277531.
Holthausen BE, Wintersberger P, Walker BN, Riener A. Situational Trust Scale for Automated Driving (STS-AD): Development and Initial Validation. In: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’20). Virtual Event: ACM; 2020. pp. 40–7. https://doi.org/10.1145/3409120.3410637.
Huang Y, Sun L, Wang H, Wu S, Zhang Q, Li Y, Gao C, et al. TrustLLM: Trustworthiness in Large Language Models. 2024. https://doi.org/10.48550/ARXIV.2401.05561.
Huo W, Zheng G, Yan J, Sun L, Han L. Interacting with medical artificial intelligence: Integrating self-responsibility attribution, human–computer trust, and personality. Comput Hum Behav. 2022;132:107253. https://doi.org/10.1016/j.chb.2022.107253.
Ingrams A, Kaufmann W, Jacobs D. In AI we trust? Citizen perceptions of AI in government decision making. Policy Internet. 2022;14:390–409. https://doi.org/10.1002/poi3.276.
Jacovi A, Marasović A, Miller T, Goldberg Y. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Virtual Event: ACM; 2021. pp. 624–35. https://doi.org/10.1145/3442188.3445923.
Jebb AT, Ng V, Tay L. A Review of Key Likert Scale Development Advances: 1995–2019. Front Psychol. 2021;12:637547. https://doi.org/10.3389/fpsyg.2021.637547.
Ji J, Chen Y, Jin M, Xu W, Hua W, Zhang Y. MoralBench: Moral Evaluation of LLMs. 2024. https://doi.org/10.48550/ARXIV.2406.04428.
Jian J-Y, Bisantz AM, Drury CG. Foundations for an Empirically Determined Scale of Trust in Automated Systems. Int J Cogn Ergon. 2000;4:53–71. https://doi.org/10.1207/S15327566IJCE0401_04.
Kattnig M, Angerschmid A, Reichel T, Kern R. Assessing trustworthy AI: Technical and legal perspectives of fairness in AI. Comput Law Secur Rev. 2024;55:106053. https://doi.org/10.1016/j.clsr.2024.106053.
Kelly S, Kaye S-A, Oviedo-Trespalacios O. What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat Inf. 2023;77:101925. https://doi.org/10.1016/j.tele.2022.101925.
Kenny DA, Kaniskan B, McCoach DB. The Performance of RMSEA in Models With Small Degrees of Freedom. Sociol Methods Res. 2015;44:486–507. https://doi.org/10.1177/0049124114543236.
Kleizen B, Van Dooren W, Verhoest K, Tan E. Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government. Gov Inf Q. 2023;40:101834. https://doi.org/10.1016/j.giq.2023.101834.
Kostick-Quenet KM, Gerke S. AI in the hands of imperfect users. Npj Digit Med. 2022;5:197. https://doi.org/10.1038/s41746-022-00737-z.
Kumar V, Ashraf AR, Nadeem W. AI-powered marketing: What, where, and how? Int J Inf Manag. 2024;77:102783. https://doi.org/10.1016/j.ijinfomgt.2024.102783.
Larzelere RE, Huston TL. The Dyadic Trust Scale: Toward Understanding Interpersonal Trust in Close Relationships. J Marriage Fam. 1980;42:595. https://doi.org/10.2307/351903.
Lee JD, See KA. Trust in Automation: Designing for Appropriate Reliance. Hum Factors J Hum Factors Ergon Soc. 2004;46:50–80. https://doi.org/10.1518/hfes.46.1.50_30392.
Lee MK. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 2018;5:2053951718756684. https://doi.org/10.1177/2053951718756684.
Lee Y, Li JQ. The role of communication transparency and organizational trust in publics’ perceptions, attitudes and social distancing behaviour: A case study of the COVID-19 outbreak. J Contingencies Crisis Manag. 2021;29:368–84. https://doi.org/10.1111/1468-5973.12354.
Levine TR. Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection. J Lang Soc Psychol. 2014;33:378–92. https://doi.org/10.1177/0261927X14535916.
Li Y, Wu B, Huang Y, Luan S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol. 2024;15:1382693. https://doi.org/10.3389/fpsyg.2024.1382693.
Liu D, Lemmens J, Hong X, Li B, Hao J, Yue Y. A network analysis of internet gaming disorder symptoms. Psychiatry Res. 2022;311:114507. https://doi.org/10.1016/j.psychres.2022.114507.
Malhotra NK, Kim SS, Agarwal J. Internet Users’ Information Privacy Concerns (IUIPC): The Construct, the Scale, and a Causal Model. Inf Syst Res. 2004;15:336–55. https://doi.org/10.1287/isre.1040.0032.
Marsh HW, Wen Z, Hau K-T. Structural Equation Models of Latent Interactions: Evaluation of Alternative Estimation Strategies and Indicator Construction. Psychol Methods. 2004;9:275–300. https://doi.org/10.1037/1082-989X.9.3.275.
Mayer RC, Davis JH, Schoorman FD. An Integrative Model of Organizational Trust. Acad Manage Rev. 1995;20:709. https://doi.org/10.2307/258792.
McCrae RR, Costa PT. Personality trait structure as a human universal. Am Psychol. 1997;52:509–16. https://doi.org/10.1037/0003-066X.52.5.509.
McGrath MJ, Lack O, Tisch J, Duenser A. Measuring trust in artificial intelligence: validation of an established scale and its short form. Front Artif Intell. 2025;8:1582880. https://doi.org/10.3389/frai.2025.1582880.
Merritt SM, Ilgen DR. Not All Trust Is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions. Hum Factors J Hum Factors Ergon Soc. 2008;50:194–210. https://doi.org/10.1518/001872008X288574.
Mitchell T. Trust and Transparency in Artificial Intelligence. Philos Technol. 2025;38:87. https://doi.org/10.1007/s13347-025-00916-2.
Murray SL, Holmes JG, Griffin DW. The self-fulfilling nature of positive illusions in romantic relationships: Love is not blind, but prescient. J Pers Soc Psychol. 1996;71:1155–80. https://doi.org/10.1037/0022-3514.71.6.1155.
Myers S, Everett JAC. People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Cognition. 2025;256:106028. https://doi.org/10.1016/j.cognition.2024.106028.
National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). 2023.
Neuliep JW. Anxiety/Uncertainty Management (AUM) Theory. In: Kim YY, editor. The International Encyclopedia of Intercultural Communication. Wiley; 2017. pp. 1–9. https://doi.org/10.1002/9781118783665.ieicc0007.
O’Brien RM. A Caution Regarding Rules of Thumb for Variance Inflation Factors. Qual Quant. 2007;41:673–90. https://doi.org/10.1007/s11135-006-9018-6.
OECD. The Impact of Artificial Intelligence on Productivity, Distribution and Growth: Key Mechanisms, Initial Evidence and Policy Challenges. 2024.
Official Journal of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council. 2024.
Vatcheva KP, Lee M, McCormick JB, Rahbar MH. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies. Epidemiol Open Access. 2016;6:227. https://doi.org/10.4172/2161-1165.1000227.
Parasuraman R, Riley V. Humans and Automation: Use, Misuse, Disuse, Abuse. Hum Factors J Hum Factors Ergon Soc. 1997;39:230–53. https://doi.org/10.1518/001872097778543886.
Paunonen SV. Big Five factors of personality and replicated predictions of behavior. J Pers Soc Psychol. 2003;84:411–24.
Rahman MM, Babiker A, Ali R. Motivation, Concerns, and Attitudes Towards AI: Differences by Gender, Age, and Culture. In: Barhamgi M, Wang H, Wang X, editors. Web Information Systems Engineering – WISE 2024. Lecture Notes in Computer Science. Singapore: Springer Nature Singapore. pp. 375–91. https://doi.org/10.1007/978-981-96-0573-6_28.
Reinhardt K. Trust and trustworthiness in AI ethics. AI Ethics. 2023;3:735–44. https://doi.org/10.1007/s43681-022-00200-5.
Rempel JK, Holmes JG, Zanna MP. Trust in close relationships. J Pers Soc Psychol. 1985;49:95–112. https://doi.org/10.1037/0022-3514.49.1.95.
Revilla MA, Saris WE, Krosnick JA. Choosing the Number of Categories in Agree–Disagree Scales. Sociol Methods Res. 2014;43:73–97. https://doi.org/10.1177/0049124113509605.
Robins RW, Hendin HM, Trzesniewski KH. Measuring Global Self-Esteem: Construct Validation of a Single-Item Measure and the Rosenberg Self-Esteem Scale. Pers Soc Psychol Bull. 2001;27:151–61. https://doi.org/10.1177/0146167201272002.
Robinson MD, Irvin RL, Asad MR, Fereidouni H. Neuroticism’s link to threat sensitivity: Evidence from a dynamic affect reactivity task. Emotion. 2025;25:884–95. https://doi.org/10.1037/emo0001462.
Roesler E, Vollmann M, Manzey D, Onnasch L. The dynamics of human–robot trust attitude and behavior — Exploring the effects of anthropomorphism and type of failure. Comput Hum Behav. 2024;150:108008. https://doi.org/10.1016/j.chb.2023.108008.
Rosenberg M. Rosenberg Self-Esteem Scale. 2011. https://doi.org/10.1037/t01038-000.
Roski J, Maier EJ, Vigilante K, Kane EA, Matheny ME. Enhancing trust in AI through industry self-governance. J Am Med Inf Assoc. 2021;28:1582–90. https://doi.org/10.1093/jamia/ocab065.
Sarker IH. LLM potentiality and awareness: a position paper from the perspective of trustworthy and responsible AI modeling. Discov Artif Intell. 2024;4:40. https://doi.org/10.1007/s44163-024-00129-0.
Schäfer A, Esterbauer R, Kubicek B. Trusting robots: a relational trust definition based on human intentionality. Humanit Soc Sci Commun. 2024;11:1412. https://doi.org/10.1057/s41599-024-03897-3.
Scharowski N, Benk M, Kühne SJ, Wettstein L, Brühlmann F. Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23). Chicago, IL: ACM; 2023. pp. 248–60. https://doi.org/10.1145/3593013.3593994.
Scharowski N, Perrig SAC, Aeschbach LF, von Felten N, Opwis K, Wintersberger P, Brühlmann F. To Trust or Distrust Trust Measures: Validating Questionnaires for Trust in AI. 2024. https://doi.org/10.48550/ARXIV.2403.00582.
Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures. Methods Psychol Res Online. 2003;8:23–74.
Schlicker N, Baum K, Uhde A, Sterz S, Hirsch MC, Langer M. How do we assess the trustworthiness of AI? Introducing the trustworthiness assessment model (TrAM). Comput Hum Behav. 2025;170:108671. https://doi.org/10.1016/j.chb.2025.108671.
Schwerter F, Zimmermann F. Determinants of trust: The role of personal experiences. Games Econ Behav. 2020;122:413–25. https://doi.org/10.1016/j.geb.2020.05.002.
Sorin V, Brin D, Barash Y, Konen E, Charney A, Nadkarni G, Klang E. Large Language Models and Empathy: Systematic Review. J Med Internet Res. 2024;26:e52597. https://doi.org/10.2196/52597.
Stanley DJ, Meyer JP, Topolnytsky L. Employee Cynicism and Resistance to Organizational Change. J Bus Psychol. 2005;19:429–59. https://doi.org/10.1007/s10869-005-4518-2.
Syropoulos S, Leidner B, Mercado E, Li M, Cros S, Gómez A, Baka A, Chekroun P, Rottman J. How safe are we? Introducing the multidimensional model of perceived personal safety. Personal Individ Differ. 2024;224:112640. https://doi.org/10.1016/j.paid.2024.112640.
Tabachnick BG, Fidell LS, Ullman JB. Using Multivariate Statistics. 7th ed. New York, NY: Pearson; 2019.
Taillandier P, Zucker JD, Grignard A, Gaudou B, Huynh NQ, Drogoul A. Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges. 2025. https://doi.org/10.48550/ARXIV.2507.19364.
Tao Y, Viberg O, Baker RS, Kizilcec RF. Cultural bias and cultural alignment of large language models. PNAS Nexus. 2024;3:pgae346. https://doi.org/10.1093/pnasnexus/pgae346.
Tetlock PE. Social functionalist frameworks for judgment and choice: Intuitive politicians, theologians, and prosecutors. Psychol Rev. 2002;109:451–71. https://doi.org/10.1037/0033-295X.109.3.451.
Thielmann I, Hilbig BE. Trust: An Integrative Review from a Person–Situation Perspective. Rev Gen Psychol. 2015;19:249–77. https://doi.org/10.1037/gpr0000046.
Van Der Biest M, Verschooren S, Verbruggen F, Brass M. Perceptual judgments are resistant to the advisor’s perceived level of trustworthiness: A deep fake approach. PLoS ONE. 2025;20:e0319039. https://doi.org/10.1371/journal.pone.0319039.
Wang L, Song M, Rezapour R, Kwon BC, Huh-Yoo J. People’s Perceptions Toward Bias and Related Concepts in Large Language Models: A Systematic Review. 2023. https://doi.org/10.48550/ARXIV.2309.14504.
Weiner B. An attributional theory of achievement motivation and emotion. Psychol Rev. 1985;92:548–73.
Wester J, De Jong S, Pohl H, Van Berkel N. Exploring people’s perceptions of LLM-generated advice. Comput Hum Behav Artif Hum. 2024;2:100072. https://doi.org/10.1016/j.chbah.2024.100072.
Wheeless LR, Grotz J. The Measurement of Trust and Its Relationship to Self-Disclosure. Hum Commun Res. 1977;3:250–7. https://doi.org/10.1111/j.1468-2958.1977.tb00523.x.
Wojton HM, Porter D, Lane ST, Bieber C, Madhavan P. Initial validation of the trust of automated systems test (TOAST). J Soc Psychol. 2020;160:735–50. https://doi.org/10.1080/00224545.2020.1749020.
Xie Y, Zhou R, Chan AHS, Jin M, Qu M. Motivation to interaction media: The impact of automation trust and self-determination theory on intention to use the new interaction technology in autonomous vehicles. Front Psychol. 2023;14:1078438. https://doi.org/10.3389/fpsyg.2023.1078438.
Xu H, Teo H-H, Tan BCY, Agarwal R. The Role of Push-Pull Technology in Privacy Calculus: The Case of Location-Based Services. J Manag Inf Syst. 2009;26:135–74. https://doi.org/10.2753/MIS0742-1222260305.
Xu S, Khan KI, Shahzad MF. Examining the influence of technological self-efficacy, perceived trust, security, and electronic word of mouth on ICT usage in the education sector. Sci Rep. 2024;14:16196. https://doi.org/10.1038/s41598-024-66689-4.
Yang Q, Van Den Bos K, Li Y. Intolerance of uncertainty, future time perspective, and self-control. Personal Individ Differ. 2021;177:110810. https://doi.org/10.1016/j.paid.2021.110810.
Yoganathan V, Osburg V-S, Janakiraman N. Lending Legitimacy to Corporate Digital Responsibility: Trust in Firm Versus Government Regulation of Artificial Intelligence Services. J Serv Res. 2025;10946705251345097. https://doi.org/10.1177/10946705251345097.
Zhang B, Wang A, Ye Y, Liu J, Lin L. The Relationship between Meaning in Life and Mental Health in Chinese Undergraduates: The Mediating Roles of Self-Esteem and Interpersonal Trust. Behav Sci. 2024;14:720. https://doi.org/10.3390/bs14080720.
Zhang M. Assessing Two Dimensions of Interpersonal Trust: Other-Focused Trust and Propensity to Trust. Front Psychol. 2021;12:654735. https://doi.org/10.3389/fpsyg.2021.654735.
Zhang X, Lyu X, Du Z, Chen Q, Zhang D, Hu H, Tan C, Zhao T, Wang Y, Zhang B, Lu H, Zhou Y, Qiu X. IntrinsicVoice: Empowering LLMs with Intrinsic Real-time Voice Interaction Abilities. 2024. https://doi.org/10.48550/ARXIV.2410.08035.
Zhou J, Hu M, Li J, Zhang X, Wu X, King I, Meng H. Rethinking Machine Ethics – Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? In: Findings of the Association for Computational Linguistics: NAACL 2024. Mexico City: Association for Computational Linguistics; 2024. pp. 2227–42. https://doi.org/10.18653/v1/2024.findings-naacl.144.