Voice to algorithmic leaders instead of human leaders: the mediating role of fairness perception and psychological safety
Shiqi Wang1, Xiaoling Sun2, Suhang Ni3, Mingzheng Wu1 and Kexin Hu4
1 Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China, 310058
2 Department of Psychology, Hangzhou Normal University, Hangzhou, China, 311121
3 Hangzhou Jianlan Middle School, Hangzhou, China, 310002
4 School of Management and E-Business, Zhejiang Gongshang University, Hangzhou, China, 310018
Author Note
Shiqi Wang ORCID: https://orcid.org/0009-0005-3241-8895
Xiaoling Sun ORCID: https://orcid.org/0000-0002-9472-4346
Suhang Ni ORCID: https://orcid.org/0009-0005-6360-0534
Mingzheng Wu ORCID: https://orcid.org/0000-0003-4604-2292
Kexin Hu ORCID: https://orcid.org/0009-0009-4118-1606
Correspondence regarding this paper should be addressed to Mingzheng Wu – Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China, 310058. Email: psywu@zju.edu.cn.
Abstract
Background
The development of AI technology has enabled algorithms to gradually take over management tasks previously performed by human managers in organizations, such as task allocation, performance review, and employee promotion. When the algorithm becomes the manager, a new form of human-machine interaction emerges: algorithmic leaders managing human subordinates. It is therefore worth exploring how people respond to algorithmic leaders and how those responses differ from responses to human leaders. This study investigates voice behavior, a key form of upward communication and pro-organizational behavior in employee-leader interactions. It explores differences in employees' voice behavior when led by algorithmic versus human leaders and examines the underlying mechanisms that drive these differences.
Methods
We conducted three experimental studies, all using scenario-based materials. Study 1 examined the differences in voice behavior of human employees toward different types of leaders (algorithmic leader vs. human leader). Study 2 explored the moderating role of task type, investigating how employees' voice behavior differed across cognitive and emotional tasks when interacting with different types of leaders. Study 3 focused on the chain mediation mechanism by measuring participants' fairness perception and psychological safety in the experiment.
Results
The study found that employees are more likely to voice to algorithmic leaders than to human leaders. Task type (cognitive vs. emotional) moderated this difference: employees were more likely to voice to algorithmic leaders than to human leaders on cognitive tasks, whereas the effect was absent on emotional tasks. The study also found that individuals perceive algorithmic leaders as fairer than human leaders, which leads to higher psychological safety and, in turn, increases their voice behavior.
Conclusions
This study reveals a preference for voicing to algorithmic leaders that holds for cognitive tasks but not emotional tasks, and identifies a sequential mediation pathway through fairness perception and psychological safety. By examining the impact of algorithmic leadership on human subordinates and highlighting its positive effects, the study contributes to the literature on human-machine interaction and collaboration. The findings also offer practical insights for designing and deploying algorithmic systems in organizational settings.
Keywords:
algorithmic leader
voice behavior
fairness perception
psychological safety
Voice to algorithmic leaders instead of human leaders: the mediating role of fairness perception and psychological safety
Introduction
With the development of AI technology, the use of algorithms to perform management functions has become increasingly widespread in organizational contexts. For example, Amazon uses an algorithmic management system in place of front-line managers to track the work efficiency of delivery personnel and dismiss underperforming employees (Soper, 2021). Uber manages drivers through an automated system that handles dispatch, monitors service quality, and motivates drivers to work, or not work, on specific days (Lee, M. et al., 2015). Many companies also use AI systems for tasks such as resume screening, interview evaluation, and salary determination (Cascio & Montealegre, 2016). When algorithms take on management responsibilities, a new form of leadership emerges: algorithmic leadership (Harms & Han, 2019). In recent years, researchers have begun to study algorithms acting as leaders in this new type of human-computer interaction (Jung & Hinds, 2018; Larson & DeChurch, 2020; Wesche & Sonderegger, 2019). Wesche and Sonderegger (2019) proposed a human-computer interaction model of "computer leadership—human subordinates" (CH leadership), in which computer agents exert purposeful influence on humans by performing leadership functions such as setting goals, planning resources, and managing employees, thereby promoting organizational activities and relationships. When algorithms assume the role of a leader, individuals are placed within a different hierarchical relationship than when they were merely users of algorithmic systems: humans become subordinates of the algorithm. To date, this novel algorithmic leader-human subordinate relationship remains largely unexplored.
Previous studies on algorithmic leadership and management have often focused on the impact of algorithmic management on people's psychology and behavior. For example, algorithmic management can reduce employees' well-being (Wood et al., 2019) and sense of control (Cheng & Foley, 2019), and, when well implemented, can improve workers' performance (Idug et al., 2023). Other research has explored how humans interact with algorithmic leaders or managers, for example, employees' trust in automated leaders (Höddinghaus et al., 2021), retaliation against robot supervisors (Yam et al., 2022), and adoption of unethical advice from AI supervisors (Lanz et al., 2024). In an organizational context, positive interaction between leaders and subordinates is an important foundation for organizational development. Therefore, this study focuses on an active and positive behavior of subordinates toward leaders, voice behavior, and explores the differences between employees' voice behavior toward algorithmic and human leaders.
Voice behavior is an interpersonal communication behavior in which individuals actively express constructive suggestions to the organization in order to improve the current state of work or the organization, and it plays a pivotal role in organizations (Barry & Wilkinson, 2016; Van Dyne & LePine, 1998). Voice behavior can innovate workflows (Gambarotto & Cammozzo, 2010), optimize organizational decision-making (Nemeth, 1997), prevent crises (Morrison & Milliken, 2000), and reduce unethical or illegal behavior (Tangirala & Ramanujam, 2008), all of which matter for organizations competing and innovating in diverse contexts. Voice behavior is a typical upward communication behavior (Glauser, 1984): subordinates usually direct voice at their superiors, and the leader is considered a key antecedent of voice behavior (Chamberlin et al., 2017). Voice behavior therefore offers an effective lens for understanding the relationship between algorithmic leaders and human subordinates. Previous studies have mostly investigated voice behavior in interactions between human leaders and human subordinates. With the emergence of algorithmic leaders in organizations, are people willing to give advice to algorithmic leaders? Do people voice differently to algorithmic and human leaders? Do people show algorithm aversion or appreciation in the domain of voice? To answer these questions, the present study explores the impact of the two leader types on individual voice, analyzes people's psychological and behavioral responses to the two leader types, and explores ways to promote employees' voice behavior.
Theoretical background and hypothesis development
Algorithmic leadership
We define algorithmic leadership as an algorithmic system that autonomously makes management decisions based on statistical models or decision rules, without explicit human intervention, and performs leadership functions in an organization such as task assignment, performance evaluation, and reward and punishment decisions (Duggan et al., 2020; Tsai et al., 2022; Wesche & Sonderegger, 2019). Previous research has discussed several kinds of technical agents acting as leaders within organizations. For instance, Wesche and Sonderegger (2019) proposed that "computer-human leadership" (CH leadership) is a process in which "purposeful influence is exerted by a computer agent over human agents to guide, structure, and facilitate activities and relationships in a group or organization" (Wesche & Sonderegger, 2019, p. 200). Given the extensive application of AI technology and intelligent robots in organizations, Tsai et al. (2022) proposed that robots with a high degree of autonomy could play the role of leader, performing functions such as formulating tasks, reviewing performance, and providing feedback. Based on the practical applications of algorithms in the gig economy, Duggan et al. (2020) proposed the concept of algorithmic management, regarding it as a control system that makes decisions in an organizational context through self-learning algorithms and supervises human employees. For algorithm-driven agents such as computer systems, algorithms, or robots to fulfil leadership roles, they must meet at least two essential conditions: the ability to make decisions autonomously and, to a certain extent, to undertake leadership functions. In the gig economy, many companies have already deployed algorithmic systems to assist with management decision-making (Duggan et al., 2022; Muldoon & Raekstad, 2023; Wood et al., 2019).
Recent research on algorithmic leadership and management centers on two aspects. The first focuses on the practical application of algorithmic management, which is widely used in the gig economy (Duggan et al., 2020; Newlands, 2021; Lata et al., 2023) and in human resource management (Meijerink et al., 2021; Chowdhury et al., 2023; Langer & König, 2023; Marler, 2024). Research in this area has examined the nature of algorithmic management, including algorithmic control, algorithmic monitoring, and algorithmic matching (Duggan et al., 2023; Parent-Rocheleau et al., 2024; Möhlmann et al., 2021; Wang et al., 2022). Algorithmic management has also been shown to exert significant psychological and behavioral effects on workers: it can improve employee performance (Xu et al., 2023), but it can also reduce employees' well-being (Wood et al., 2019), sense of control (Cheng & Foley, 2019), and proactive service behavior (Wang et al., 2025). The second strand of empirical research compares algorithmic and human leaders, investigating how human subordinates' psychological and behavioral responses differ when interacting with non-human versus human leaders, as well as the mechanisms underlying these differences. For example, Lanz et al. (2024) found that employees adhere less to unethical instructions from an AI supervisor than from a human supervisor. Jago et al. (2024) found that imagining being led by an algorithmic manager lowered participants' perceived social status relative to being led by a human manager. McGuire and De Cremer (2023) reported lower acceptance of moral decisions made by algorithmic leaders than of those made by human leaders.
Within this literature, most studies of employee-leader interaction explore the negative psychological and behavioral consequences of algorithmic leadership for employees, while positive interactions between human employees and algorithmic leaders remain underexplored. The present study examines voice behavior, a positive and proactive form of interaction between human subordinates and algorithmic leaders, and investigates its underlying mechanisms.
Voice behavior
Voice behavior refers to employees' discretionary communication of ideas, suggestions, and concerns in their work, aimed at improving organizational functioning (Morrison, 2011; Svendsen & Joensson, 2016). Employees may voice for various reasons, such as suggesting improvements to work practices (Van Dyne & LePine, 1998), recommending strategies for organizational development (Dutton & Ashford, 1993), addressing problems encountered in the organizational context (Milliken et al., 2003), and expressing opinions that differ from those of others (Premeaux & Bedeian, 2003). In most cases, the target of voice behavior is the employee's leader (Maynes & Podsakoff, 2014; Morrison, 2011), so leader factors play a crucial role in voice behavior. As demonstrated in the extant research, leaders' traits (such as humility and narcissism) can affect employees' voice behavior through employees' organizational identification and perceptions of organizational justice (Li et al., 2018; Zhang et al., 2022). Leadership styles (such as transformational and authentic leadership) can affect voice behavior by influencing employees' psychological capital and the perceived quality of the leader-member exchange relationship (Wang et al., 2018; Yan & Xiao, 2016). Moreover, leader misconduct reduces employees' voice behavior by lowering perceived organizational support and psychological safety (Liu et al., 2023; Li et al., 2009).
With the continuous advancement of algorithmic technologies, organizations may deploy algorithms to assume some leadership roles, which makes it possible for human subordinates to voice to algorithmic leaders. Employees who voice to an algorithmic leader expect the leader to accept their suggestions and feedback so that the decisions made by the algorithmic leader can be improved. The rapid development of generative AI has made such voice technically viable: when interacting with generative AI, people give feedback on the algorithm's output through prompts, thereby continuously refining that output. By the same logic, feedback from human employees may help an algorithmic leader fulfil its leadership functions more effectively. Voice to an algorithmic leader can thus be regarded as a distinctive modality of human-machine collaboration. A substantial body of research has demonstrated that effective human-machine collaboration engenders favourable outcomes. Dramanakis (2022) found that human-machine collaboration could reduce organizational costs and risks. Sowa et al. (2021) demonstrated that collaboration between humans and artificial intelligence increases productivity in management tasks. In medical contexts, human-machine collaboration can optimize hospital medical systems (Wang & Liu, 2025). In organizational human-AI collaboration, employee voice can serve as a valuable source of real-time feedback for algorithmic leaders and, in turn, can facilitate more accurate and context-sensitive decision-making in complex and rapidly evolving situations.
Two competing views bear on human employees' voice toward algorithmic leaders: algorithm aversion and algorithm appreciation. Algorithm aversion holds that even when algorithmic decisions are more accurate than human decisions, people prefer to accept human decisions (Dietvorst et al., 2015). In contrast, algorithm appreciation suggests that people are more inclined to accept algorithmic decisions than human decisions (Logg et al., 2019). At present, the debate between the two views mainly concerns the acceptance of algorithmic versus human decisions; whether humans are willing to actively interact with algorithmic agents remains an open question.
The present study hypothesizes that individuals are more inclined to voice to algorithmic leaders, supporting the view of algorithm appreciation, for the following reasons. First, human leaders attend not only to the content of voice but also to the motives of the employee (Yan & He, 2016). Human leaders may attribute employees' voice to negative motives, such as self-protection or alienation, and thereby form a negative impression of the employee. Because algorithms lack intentions and subjective experiences, algorithmic leaders may focus primarily on the content of employees' voice rather than its presumed motives. Second, compared to voicing to algorithmic leaders, voicing to human leaders may require considering the threat to the leader's face. Existing research shows that leaders may interpret employees' voice as a challenge to their authority (Burris, 2012; Lam et al., 2019), which reduces their adoption of the voiced suggestions. When voicing to algorithmic leaders, there is no need to worry about threatening the leader's face or being labeled a "troublemaker" or "complainer," which reduces concerns about impression evaluation (Pickard et al., 2016). Furthermore, people weigh the costs and benefits of voice before voicing to leaders (Detert & Burris, 2007). Because algorithmic leaders are perceived as making decisions based on fixed rules (Lindebaum & Ashraf, 2024), employees may hold more stable expectations when voicing to them. Consequently, voice toward algorithmic leaders may be perceived as relatively low-risk and controllable, given that employees are less concerned about jeopardizing valued job resources or harming their relationship with the leader (Milliken et al., 2003).
Thus, Hypothesis 1 is proposed: People are more willing to voice to algorithmic leaders than to human leaders.
Task type
Task type is an important factor affecting people's preference for and acceptance of algorithmic decision-making (Castelo et al., 2019; Choi & Kwak, 2015; Lee, H. et al., 2015). A large body of research has found that individuals tend to prefer algorithmic decision-making for tasks they perceive algorithms to be skilled at, whereas they rely more on human decision-making when the task falls outside the algorithm's domain of strength (Lee, 2018; Waytz & Norton, 2014). People generally consider machines to be high in agency and low in experience, and believe that robots are better at cognition-oriented tasks than at emotion-oriented tasks (Waytz & Norton, 2014). Lee (2018) found that people's acceptance of algorithmic and human managers differs across tasks: for tasks requiring mechanical skills (e.g., task allocation and planning), people trust the decisions of algorithmic and human managers equally, whereas for tasks requiring human skills (e.g., recruitment decisions), people trust algorithmic managers less (Lee, 2018).
Therefore, task type may influence people's preference for voicing to algorithmic versus human leaders. Following Waytz and Norton (2014), we categorized tasks into cognitive and emotional tasks. In cognitive tasks, people believe that an algorithmic leader can handle the task at least as well as a human leader, so they are more willing to voice to an algorithmic leader than to a human leader. In emotional tasks, however, people think an algorithmic leader cannot handle the task, so they may feel their voice would be less useful, reducing the tendency to voice to the algorithmic leader. Because people regard human leaders as capable of handling emotional tasks, the tendency to voice to a human leader should remain unchanged. Therefore, in emotional tasks, the influence of leader type on voice behavior is reduced.
Thus, Hypothesis 2 is proposed: Compared to human leaders, people are more willing to voice to algorithmic leaders in cognitive tasks, but this effect will be reduced in emotional tasks.
Fairness perception
Fairness is an important topic in the field of organizational behavior (Bilotta et al., 2022; Colquitt & Zipay, 2015; Dahanayake et al., 2018; McDowall & Fletcher, 2004). Fairness perception is an individual's subjective experience of whether a decision maker is fair (Colquitt & Zipay, 2015). It is regarded as an important influence on behavior in organizations: it is positively correlated with employees' job satisfaction and organizational citizenship behavior (Diekmann et al., 2004; Moorman, 1991) and negatively correlated with organizational misbehavior (De Schrijver et al., 2010).
People may perceive algorithmic leaders as fairer than human leaders. According to the machine heuristic model, people tend to engage in heuristic processing when interacting with machines in ambiguous and uncertain situations, which automatically activates machine-related stereotypes. People regard machines as objective and ideologically unbiased, and therefore as fairer than humans (Araujo et al., 2020; Yang & Sundar, 2024). Previous research has shown that decisions made by AI are perceived to reduce the subjectivity and individual biases inherent in human decision-making, thereby ensuring more objective outcomes, so people report higher fairness perceptions of decisions made by AI (Howard et al., 2020). Voice behavior involves employees giving advice to high-power leaders, so it inherently entails a degree of risk and uncertainty (Detert & Burris, 2007). According to fairness heuristic theory (Lind, 2001), individuals rely on available fairness-related information to form an overall judgment of fairness, especially in ambiguous and uncertain situations. Therefore, compared to a human leader, the perceived objectivity of an algorithmic leader and people's stereotypes about algorithms are more likely to activate a sense of fairness, giving people a higher fairness perception of the algorithmic leader.
Individuals who perceive fairness are more inclined to engage in voice behavior. First, voice behavior is a typical organizational citizenship behavior, and fairness perception is a powerful predictor of organizational citizenship behavior (Lim & Loosemore, 2017; Sun et al., 2013; Tansky, 1993); for instance, employees who perceive greater interpersonal fairness within an organization are more likely to engage in organizational citizenship behavior (Colquitt et al., 2001). Fairness perception of the leader may therefore also promote voice behavior. Furthermore, according to uncertainty management theory (Lind & Van den Bos, 2002), when confronting uncertain situations, people rely most on information about the fairness of the situation. Voice behavior is an interaction between employee and leader that carries a degree of risk and uncertainty (Detert & Burris, 2007). If employees' fairness perception of their leader increases, the perceived risk and uncertainty are reduced, and employees' willingness to voice may increase (Detert & Treviño, 2010). Moreover, Takeuchi et al. (2012) found that fairness perception and voice behavior are positively correlated. Therefore, differences in perceived fairness between algorithmic and human leaders may affect people's tendency to engage in voice behavior.
Thus, Hypothesis 3 is proposed: Fairness perception mediates the relationship between leader type and voice behavior. Compared to human leaders, people perceive algorithmic leaders as fairer and are consequently more willing to voice to algorithmic leaders.
Psychological safety
Psychological safety is the individual's feeling that expressing personal opinions, suggestions, or concerns within the organization will be safe and will not lead to punishment, criticism, or unfair treatment (Duan, 2011; Rong et al., 2022). Higher psychological safety makes employees more likely to engage in voice behavior. According to Morrison's (2011) model of employee voice, perceived safety and perceived efficacy are the two major motivations for voice: perceived safety reflects the individual's judgment of the risks and potential threats associated with voice, while perceived efficacy reflects the judgment of whether voice will be effective. High psychological safety enhances employees' motivation to voice to leaders, whereas low psychological safety makes them more inclined to remain silent (Edmondson & Lei, 2014). Numerous studies have found that psychological safety mediates the relationship between multiple antecedents and voice behavior and is positively correlated with voice behavior (Elsaied, 2019; Jin et al., 2022; Subhakaran & Dyaram, 2018). Therefore, improving psychological safety should increase employees' tendency to voice.
Compared to human leaders, people may feel more psychologically safe with algorithmic leaders. The power imbalance between leaders and subordinates leads subordinates to worry about the risks of voice, which manifests as reduced psychological safety (Nembhard & Edmondson, 2006; Sun et al., 2020). Previous studies have shown that, compared with interacting with people, interacting with algorithms and virtual humans causes less evaluation anxiety (Xu et al., 2025; DeVault et al., 2014; Lucas et al., 2014). When employees engage in voice behavior, they often worry about being evaluated by leaders; facing algorithmic leaders, employees feel less evaluation anxiety. Therefore, employees face less interpersonal risk and hold more positive expectations of psychological safety when interacting with an algorithmic leader than with a human leader.
Thus, Hypothesis 4 is proposed: Psychological safety mediates the relationship between leader type and voice behavior. Compared to human leaders, people experience higher psychological safety with algorithmic leaders and are therefore more willing to voice.
The sequential mediating role of fairness perception and psychological safety
This study proposes that fairness perception and psychological safety play a serial mediating role in the influence of leader type on voice behavior. According to fairness heuristic theory (Lind, 2001), people use organizational fairness information to judge whether it is safe to speak up: the higher the perceived fairness, the more likely people are to feel that voicing is safe. Furthermore, uncertainty management theory suggests that people use information about the fairness of managers to reduce uncertainty, which affects their attitudes and behaviors toward leaders (Takeuchi et al., 2012). Employees who perceive higher fairness from leaders experience lower uncertainty and may therefore have higher psychological safety. Compared with human leaders, employees may perceive algorithmic leaders as fairer, reducing their uncertainty and leading to higher psychological safety toward algorithmic leaders. Consequently, individuals with higher psychological safety toward an algorithmic leader are more willing to voice to that leader.
Thus, Hypothesis 5 is proposed: Fairness perception and psychological safety play a serial mediating role in the relationship between leader type and voice behavior. Compared to human leaders, people have a higher fairness perception of algorithmic leaders, which leads to greater psychological safety, making them more willing to voice.
The present study
We conducted three studies to test these five hypotheses. Study 1 explored the impact of leader type (algorithmic leader vs. human leader) on voice behavior in three organizational fields where algorithmic management systems are currently widely used. Study 2 examined the role of task type in the relationship between leader type and voice behavior. Building on the machine heuristic model, Study 3 explored the mediating mechanism linking leader type to voice behavior and tested whether fairness perception and psychological safety play a serial mediating role. The model of this study is shown in Fig. 1. All materials, data, analyses, and the preregistration are available at https://osf.io/kse7a/?view_only=68b7bacb2cb042e088f57ea028db9ad8
Fig. 1
The model of this research
Study 1
Methods
Participants
We recruited 200 participants through random sampling on the Credamo platform (https://www.credamo.com/), a Chinese research-focused platform with extensive industry coverage and over three million registered users, similar to data collection platforms such as Prolific, Qualtrics, and Amazon Mechanical Turk.
Among the 200 participants, 14 were excluded because they failed attention checks, leaving a final sample of 186 participants (93%). Of these, 59 were male (31.7%) and 127 were female (68.3%), with an average age of 31.1 years (SD = 7.43); 19 participants (10.2%) had an education level below a bachelor's degree, 132 (71.0%) had a bachelor's degree, and 35 (18.8%) had a postgraduate degree or above; 25 participants (13.4%) had less than 1 year of work experience, 60 (32.3%) had 1-3 years, 48 (25.8%) had 4-6 years, 28 (15.1%) had 7-9 years, and 25 (13.4%) had 10 or more years.
Based on a sensitivity power analysis, this sample provided 80% power to detect an effect size of ηp² = .18 or greater in a repeated-measures analysis of variance (ANOVA) with a 5% false-positive rate.
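For readers without access to G*Power, calculations of this kind can be approximated in Python with statsmodels. The sketch below is illustrative rather than the authors' original procedure; it reproduces the logic of the sensitivity analysis here and of the a priori analysis reported in Study 2.

```python
# Illustrative power calculations (not the original G*Power setup).
# FTestAnovaPower handles between-subjects F tests; Cohen's f is the
# effect-size metric, convertible to partial eta squared.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Sensitivity: smallest Cohen's f detectable with N = 186, two groups,
# alpha = .05, power = .80 (cf. Study 1).
f_min = analysis.solve_power(nobs=186, alpha=0.05, power=0.80, k_groups=2)
eta_p2 = f_min**2 / (1 + f_min**2)  # Cohen's f -> partial eta squared
print(f"Minimum detectable f = {f_min:.3f} (eta_p^2 = {eta_p2:.3f})")

# A priori: total N needed for a small-to-medium effect (Cohen's f = 0.25)
# in a four-cell design (cf. Study 2).
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80,
                               k_groups=4)
print(f"Required total N = {n_total:.0f}")
```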
Participants were randomly assigned either to the group voicing to a human leader (the human leader group) or to the group voicing to an algorithmic leader (the algorithmic leader group), with 91 participants in the human leader group and 95 in the algorithmic leader group. All participants took part voluntarily and received a reward of ¥10.
Material and measurement
Scenario Materials Based on the management contexts in which algorithmic leadership has been implemented, and drawing on scenarios from previous studies (Lee, 2018; Hussain et al., 2019; Nie, 2022), we developed three types of scenario materials: platform driver management (driver management), workshop worker management (workshop management), and market sales staff management (marketing management).
Each scenario consisted of three parts, describing the employee's work responsibilities, the type of leader, and the specific task about which the employee might voice. The first part described the employee's work responsibilities: participants were told that they played the role of the employee in the scenario and were shown the employee's responsibilities. For example, in the workshop management scenario, participants were informed that "You are a production worker in an electronic components factory, responsible for operating the production equipment in the workshop." The second part introduced the type of leader. Participants were told that their leader was the algorithmic system Leaoid (algorithmic leader group) or the human team leader Chen Ming (human leader group; Chen Ming is a common Chinese name). We described the leader's functions and powers so that participants understood that their work content and important work resources were determined by the leader. The third part described the task about which to voice: we presented a specific work situation in which participants might voice to their leader, for example, the work process formulated by the leader in the workshop management scenario.
Then, we asked the participants whether they would voice to the leader about the work process.
An example scenario (workshop management) is as follows: You are a production worker in an electronic components factory, responsible for operating the production equipment in the workshop. Your direct superior is the artificial intelligence algorithm system Leaoid. This system has been in operation for three years and possesses high capabilities in cognition, reasoning and analysis, and communication and dialogue (algorithmic leader group) / Your direct superior is the technical team leader Chen Ming, who has three years of working experience (human leader group). Your direct superior has a great deal of decision-making power over the resources that matter to you, such as work tasks and workload, performance appraisal, and salary level. You directly report and communicate about your work, including weekly reports, performance evaluations, and daily problem-solving, to Leaoid/Chen Ming. One day, Leaoid/Chen Ming asked the employees whether they had any dissatisfaction with the leader or the company. You thought that the current work process in the workshop was not clear enough, which affected the workshop's efficiency, and that there was still much room for optimization. Since this work process was formulated by Leaoid/Chen Ming, you are torn between voicing to Leaoid/Chen Ming and keeping it to yourself.
After presenting the scenario, following the manipulation check method from Lee (2018), a multiple-choice question was set as a manipulation check item to measure whether participants correctly perceived the leader type in the scenario: "In this scenario, are you being led by a human leader?" The correct answer for the algorithmic leader group is "No," while the correct answer for the human leader group is "Yes."
Voice Behavior We adopted the 4-item voice behavior scale that Rong et al. (2022) developed on the basis of Liang et al.'s (2012) scale to measure participants' tendency to engage in voice behavior in the scenario. We made minor adjustments by incorporating the target of the voice behavior, the leader, into each item. A sample item is "To what extent are you willing to actively express your opinions on the work process to Leaoid/the leader Chen Ming?" Each item was rated on a 9-point scale from 1 (not willing at all) to 9 (completely willing). We computed the voice behavior score by averaging the four items; higher scores indicate a greater tendency to engage in voice behavior. In this study, the Cronbach's alpha for the voice behavior scale was 0.93.
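As an illustration of the scoring procedure used for all scales in this paper, the composite can be formed by averaging items and internal consistency checked with Cronbach's alpha. The file and column names below are hypothetical placeholders.

```python
# Minimal sketch of scale scoring: average the four voice items and
# compute Cronbach's alpha. Data file and column names are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of sum)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

df = pd.read_csv("study1.csv")  # hypothetical data file
voice_items = df[["voice1", "voice2", "voice3", "voice4"]]
df["voice"] = voice_items.mean(axis=1)  # composite; higher = more voice
print(f"Cronbach's alpha = {cronbach_alpha(voice_items):.2f}")  # paper: .93
```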
Control Variables We set gender, age, educational background, work experience in years, and trust in AI technology as control variables. Previous studies have shown that demographic variables such as gender, age, educational background, and work experience can influence participants' tendency to engage in voice behavior (Guo et al., 2015). Since the research scenario involves voicing to an algorithmic leader, and trust in AI technology can enhance acceptance of AI technology (Choung et al., 2023), we also controlled for trust in AI technology.
Following the research by Young and Monroe (2019), a question was designed to measure people's trust in AI technology: "To what extent do you trust AI technology?" This question was also assessed using a 9-point Likert scale, where 1 = completely distrust and 9 = completely trust. A higher score indicates greater trust in AI technology.
Attention Check An attention check question was randomly inserted among the three scenarios: "Please select the number 7 in this question." Participants who did not select 7 were considered to have failed the attention check.
Design and Procedure
A 2 (Leader Type: Algorithmic Leader / Human Leader) × 3 (Scenario Type: Workshop Management / Driver Management / Marketing Management) mixed design was employed. In this design, leader type is a between-subjects variable, while scenario type is a within-subjects variable. Voice behavior is the dependent variable. Gender, age, educational background, work experience in years and trust in AI technology were included as control variables.
Participants were randomly assigned to the algorithmic leader group or the human leader group. They read the three scenarios, presented in random order, and reported their voice behavior. The manipulation check question was given after the first scenario, and the attention check question was inserted at a random position. Finally, participants completed the measures of the control variables.
Results
Manipulation Check Following the analytical approach of Flynn and Yu (2021), among the 186 participants, 10 gave wrong answers to the question about the leader type, while 176 (94.62%) correctly perceived the leader type, χ²(1) = 148.15, p < .001. The result indicates that the manipulation was effective.
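The manipulation check statistic corresponds to a chi-square test of the association between assigned condition and the answer given. In the sketch below, the marginal split of 176 correct versus 10 incorrect matches the text, but the allocation across cells is a hypothetical illustration.

```python
# Chi-square manipulation check: assigned leader type (rows) versus the
# answer to "are you led by a human?" (columns). Cell counts are
# hypothetical; only the 176/10 correct/incorrect margin is from the text.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[90, 5],    # algorithmic group: answered "No" / "Yes"
                  [5, 86]])   # human group:       answered "No" / "Yes"
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}")
```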
Main Effect Descriptive statistics of voice behavior for the algorithmic leader group and the human leader group in the three scenarios are shown in Table 1.
Table 1
The results of the descriptive analysis for each scenario.

Leader Type          Workshop Management    Driver Management    Marketing Management
                     M       SD             M       SD           M       SD
Algorithmic Leader   7.04    1.17           7.32    1.08         7.20    1.05
Human Leader         6.49    1.56           6.73    1.45         6.69    1.45
A repeated-measures ANOVA was conducted with leader type (algorithmic leader, human leader) as the between-subjects variable, scenario type (workshop management, driver management, marketing management) as the within-subjects variable, gender, age, educational background, work experience in years, and trust in AI technology as covariates, and the tendency of voice behavior as the dependent variable. The main effect of leader type was significant, F(1, 178) = 7.08, p < .01, ηp² = .038: participants' voice behavior toward the algorithmic leader (M = 7.13, SD = 1.10) was significantly higher than toward the human leader (M = 6.70, SD = 1.49), supporting Hypothesis 1. The main effect of scenario type was also significant, F(1.935, 364.4) = 6.55, p < .01, ηp² = .035 (Greenhouse-Geisser sphericity correction).
A
The result indicates that the scenario had a significant impact on participants' voice behavior tendency. Voice behavior tendencies in the driver management and marketing management scenarios were significantly higher than in the workshop management scenario (p = .001; p = .013), with no significant difference between the driver management and marketing management scenarios (p = .264). The interaction between leader type and scenario type was not significant, F(1.935, 364.4) = 0.45, p = .64, ηp² = .003 (Greenhouse-Geisser sphericity correction).
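The repeated-measures ANCOVA above was presumably run in standard statistical software; an approximately equivalent analysis can be sketched as a linear mixed model with a random intercept per participant. Variable names are hypothetical.

```python
# Approximating Study 1's mixed-design analysis as a linear mixed model:
# voice ~ leader type x scenario + covariates, random intercept per
# participant. A sketch, not the authors' original analysis.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("study1_long.csv")  # one row per participant x scenario

model = smf.mixedlm(
    "voice ~ C(leader_type) * C(scenario) + gender + age + education"
    " + work_years + ai_trust",
    data=long_df,
    groups="participant_id",  # random intercept per participant
).fit()
print(model.summary())
```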
Discussion
Study 1 revealed that people were more willing to voice to an algorithmic leader than to a human leader, supporting Hypothesis 1. However, Study 1 did not consider the influence of task type on voice behavior. Previous research (Lee, 2018) has found that people prefer algorithmic decision-making for tasks that require mechanical skills but favor human decision-making for tasks that require human skills. People assume that algorithms lack emotional capabilities and are therefore better suited to cognitive tasks. This implies that people may not trust an algorithmic leader in emotional tasks and may be reluctant to voice because they think the algorithmic leader cannot handle such tasks. Therefore, Study 2 explored the influence of task type on voice behavior to investigate the boundary conditions of the relationship between leader type and voice behavior. In addition, because people may doubt whether an algorithmic system can handle voice at all, Study 2 added perceived leader's ability to handle voice as a control variable.
Study 2
Methods
Participants
We conducted an a priori power analysis for the predicted interaction effect via G*Power (Faul et al., 2007), assuming 80% power and an α level of .05. The analysis showed that at least 179 participants were needed to detect a small-to-medium effect (Cohen's f = 0.25). Using random sampling, 200 participants were recruited on the Credamo platform. Two participants who did not pass the attention check were excluded, leaving a final sample of 198 participants (99%).
Among these participants, 47 were male (23.7%) and 151 were female (76.3%), with an average age of 32.19 years (SD = 8.67); 21 participants (10.6%) had an education level below a bachelor's degree, 148 participants (74.7%) had a bachelor's degree, and 29 participants (14.6%) had a postgraduate degree or above; the average years of work experience of the participants was 7.78 years (SD = 7.22).
Participants were randomly divided into four groups according to task type (cognitive vs. emotional) and leader type (algorithmic vs. human). Under the cognitive task condition, there were 49 participants in the algorithmic leader group and 50 in the human leader group; under the emotional task condition, there were 50 participants in the algorithmic leader group and 49 in the human leader group. All participants took part voluntarily and received a reward of ¥10.
Material and measurement
Scenario Materials In this study, tasks were divided into cognitive and emotional tasks (Waytz & Norton, 2014). Drawing on Lee's (2018) research, we developed one scenario for each task type. Under the cognitive task condition, the leader performed poorly in allocating work tasks, and participants had to consider whether to voice to the leader. Under the emotional task condition, the leader performed poorly in coordinating team cooperation, and participants likewise considered whether to voice. The construction of the scenarios was similar to Study 1; only the specific task about which to voice differed between conditions, as follows:
【Cognitive task】 Recently, the team's customer satisfaction has declined. The customers' dissatisfaction is mainly due to the unreasonable work allocation within the team. For example, this week, some colleagues are managing all six financial products simultaneously and communicating with 10 customers, while some colleagues only need to manage 3 financial products and interact with 5 customers. Due to this situation, the team's efficiency has been affected, and it is unable to complete the work well. Leaoid (algorithmic leader)/Manager (human leader) organized a team meeting. At the meeting, Leaoid/Manager proposed a new task allocation plan and asked everyone for their opinions or suggestions on this plan.
【Emotional task】 Recently, the team's customer satisfaction has declined. The customers' dissatisfaction is mainly due to the disharmony within the team. For example, this week, conflicts broke out among your colleagues, and no one has stepped forward to resolve them. Due to this situation, the team's cooperation has been affected, and it is unable to complete the work well. Leaoid (algorithmic leader)/Manager (human leader) organized a team meeting. At the meeting, Leaoid/Manager put forward a plan to promote team cooperation and asked everyone for their opinions or suggestions on this plan.
After presenting the scenario, the manipulation check was administered, using the same item as in Study 1.
The measure of voice behavior and the attention check question were the same as in Study 1. In this study, the Cronbach's alpha for voice behavior was 0.91.
In addition to gender, age, educational background, work experience in years, and trust in AI technology, we added perceived leader's ability to handle voice as a control variable, because participants might believe that an algorithmic leader is unable to handle voice. To control for the perceived ability of algorithmic and human leaders to handle voice, the item "To what extent do you think the leader in the scenario can effectively handle your voice?" was administered on a 9-point Likert scale (1 = completely disagree, 9 = completely agree); higher scores indicate greater perceived ability to handle voice.
Design and Procedure
A 2 (Leader Type: Algorithmic Leader/Human Leader) × 2 (Task Type: Cognitive Task/Emotional Task) between-subjects design was used. Leader type and task type were the independent variables, voice behavior was the dependent variable, and gender, age, educational background, work experience in years, trust in AI technology, and perceived leader's ability to handle voice were control variables.
Participants were randomly assigned to one of the four experimental groups. The procedure was the same as in Study 1.
Results
Manipulation Check Among the 198 participants, 27 gave wrong answers to the question about the leader type, while 171 (86.4%) correctly perceived the leader type, χ²(1) = 104.73, p < .001. The result indicates that the manipulation was effective.
Main Effect and Moderating Effect With leader type (algorithmic leader, human leader) and task type (cognitive task, emotional task) as independent variables, gender, age, educational background, work experience in years, trust in AI technology, and perceived leader's ability to handle voice as covariates, and voice behavior as the dependent variable, a 2 × 2 between-participants ANOVA was conducted. The main effect of leader type was significant, F(1, 188) = 6.99, p < .01, ηp² = .036: participants' voice behavior toward algorithmic leaders (M = 7.06, SD = 1.37) was significantly higher than toward human leaders (M = 6.44, SD = 1.58). The main effect of task type was not significant, F(1, 188) = 1.78, p = .18, ηp² = .009; voice behavior did not differ between cognitive tasks (M = 6.66, SD = 1.56) and emotional tasks (M = 6.85, SD = 1.38). The interaction between leader type and task type was significant, F(1, 188) = 6.58, p = .011, ηp² = .034, and was probed with a simple effects analysis (Fig. 2). For cognitive tasks, voice behavior toward algorithmic leaders (M = 7.15, SD = 1.39) was significantly higher than toward human leaders (M = 6.26, SD = 1.73), F(1, 91) = 12.92, p < .001, ηp² = .124. For emotional tasks, voice behavior did not differ between algorithmic leaders (M = 6.96, SD = 1.35) and human leaders (M = 6.63, SD = 1.41), F(1, 91) = 0.21, p = .648, ηp² = .002.
Fig. 2
The interaction effect of leader type and task type on voice behavior
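For completeness, Study 2's 2 × 2 ANCOVA and the simple-effects follow-up can be sketched with statsmodels as follows; the column names are hypothetical stand-ins for the measured variables.

```python
# Sketch of Study 2's 2 x 2 between-subjects ANCOVA (Type III sums of
# squares via sum-to-zero coding) plus simple effects of leader type
# within each task type. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("study2.csv")
covs = "gender + age + education + work_years + ai_trust + handle_ability"

full = smf.ols(
    f"voice ~ C(leader_type, Sum) * C(task_type, Sum) + {covs}", data=df
).fit()
print(sm.stats.anova_lm(full, typ=3))  # main effects and interaction

# Simple effect of leader type within each task type.
for task in ("cognitive", "emotional"):
    sub = df[df["task_type"] == task]
    fit = smf.ols(f"voice ~ C(leader_type) + {covs}", data=sub).fit()
    print(task, sm.stats.anova_lm(fit, typ=3), sep="\n")
```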
Discussion
Study 2 explored the role of task type in the relationship between leader type and voice behavior. In cognitive tasks, people were more willing to voice to algorithmic leaders than to human leaders; in emotional tasks, leader type had no significant influence on voice behavior. These results support Hypothesis 2. In emotional tasks, when an algorithmic leader must solve team cooperation problems, the effectiveness of its problem-solving is doubted; when people believe the leader cannot handle the problem, they consider voice unnecessary, which reduces their willingness to voice. Having established this boundary condition, Study 3 further explored the mediating mechanism of the relationship between leader type and voice behavior.
Study 3
Methods
Participants
Assuming a small effect size, an a priori power analysis (f² = 0.03, α = .05, 1 − β = .80) suggested a required sample size of 347 participants (G*Power; Faul et al., 2007). Using random sampling, 458 participants were recruited on the Credamo platform. Five participants who did not pass the attention check were excluded, leaving a final sample of 453 participants (98.9%).
Among these participants, 134 were male (29.6%) and 319 were female (70.4%), with an average age of 31.03 years (SD = 7.38); 48 participants (10.6%) had an education level below a bachelor's degree, 327 participants (72.2%) had a bachelor's degree, and 78 participants (17.2%) had a postgraduate degree or above; the average years of work experience of the participants was 6.70 years (SD = 5.84).
Participants were randomly assigned to the human leader group (n = 227) or the algorithmic leader group (n = 226). All participants took part voluntarily and received a reward of ¥5 to ¥10.
Material and measurement
Scenario Material Based on the findings of Study 2 and McGuire and De Cremer's (2022) research, we adapted a financial management scenario (a cognitive task) for this study. The scenario was constructed in the same way as in Study 1, but the task about which to voice differed. The scenario is as follows:
You are an employee of an investment service team in a wealth management company. You are directly managed by the artificial intelligence algorithm system Leaoid (algorithmic leader group) / the investment manager Li Hua (human leader group). This manager has 3 years of working experience and has good logical analysis and interpersonal communication skills. Your leader has a great deal of decision-making power over the resources you value (daily work arrangements, performance evaluations, salaries, promotions, etc.), and you directly report and communicate your work to Manager Leaoid/Li Hua. Recently, the investment yields of the clients the team is in charge of have generally declined, and a large number of client complaints have been received. In response, Manager Leaoid/Li Hua analyzed the client plans, big data, and other information, and proposed an investment adjustment plan. At the meeting, the manager put forward this new investment plan. However, you find that although the adjusted plan can effectively increase investment yield, the returns will come slowly and take time to materialize, which may not satisfy the clients. You have therefore come up with some ideas for improving the plan. At this moment, Manager Leaoid/Manager Li Hua asks the employees for their opinions on the plan.
Fairness Perception Fairness perception was measured using the three-item overall fairness perception scale by Rodell, Colquitt, and Baer (2017). In this study, the leader in the scenario was set as the referent of fairness perception. A sample item is: "Overall, I think Leaoid (algorithmic leader)/Li Hua (human leader) treats me fairly." The scale used a 9-point Likert scale, where 1 means "strongly disagree" and 9 means "strongly agree." We computed the fairness perception score by averaging the three items; a higher score indicates higher fairness perception. In this study, the Cronbach's alpha for fairness perception was 0.80.
Psychological Safety Psychological safety was measured using the 5-item scale developed by Liang et al. (2012). In the original scale the referent is the team; in this study, the leader in the scenario was set as the referent. A sample item is "I believe I can express my true thoughts about work to Leaoid." The scale used a 9-point Likert scale, where 1 means "strongly disagree" and 9 means "strongly agree." We computed the psychological safety score by averaging the five items; a higher score indicates a higher level of psychological safety. In this study, the Cronbach's alpha for psychological safety was 0.89.
The measures of voice behavior and the control variables, the manipulation check, and the attention check question were the same as in Study 1. In this study, the Cronbach's alpha for voice behavior was 0.80.
Design and Procedure
A one-factor (Leader Type: Algorithmic Leader/Human Leader) between-subjects design was used. Leader type served as the independent variable, while fairness perception, psychological safety, and voice behavior were the dependent variables. Gender, age, educational background, work experience in years, trust in AI technology and perceived leader's ability to handle voices were control variables.
Participants were randomly assigned to the algorithmic leader group or the human leader group. They read the scenario and then completed the manipulation check, the voice behavior scale, the fairness perception scale, and the psychological safety scale. Afterward, participants completed the control variable measures.
Results
Manipulation Check Among the 453 participants, 37 gave wrong answers to the question about the leader type, while 416 (91.83%) correctly perceived the leader type, χ²(1) = 317.09, p < .001. The result indicates that the manipulation was effective.
Main Effect Descriptive statistics for each outcome variable by group are shown in Table 2.
Table 2
Descriptive results for all key variables.

Variable               Leader Type          M      SD
Fairness perception    Algorithmic leader   7.53   1.02
                       Human leader         7.05   1.22
Psychological safety   Algorithmic leader   7.11   1.22
                       Human leader         6.59   1.38
Voice behavior         Algorithmic leader   7.26   0.98
                       Human leader         6.96   1.02
With leader type as the independent variable, and gender, age, educational background, work experience in years, trust in AI technology, and perceived leader's ability to handle voice as covariates, separate one-way analyses of covariance were conducted with fairness perception, psychological safety, and voice behavior as dependent variables. Participants' fairness perception (F(1, 445) = 16.71, p < .001, ηp² = .036), psychological safety (F(1, 445) = 13.57, p < .001, ηp² = .030), and voice behavior (F(1, 445) = 5.56, p = .019, ηp² = .012) toward algorithmic leaders were all significantly higher than toward human leaders.
Serial Mediating Effect We tested whether the effect of leader type (1 = algorithmic leader; 2 = human leader) on voice behavior is serially mediated by fairness perception and psychological safety using the SPSS PROCESS macro (Hayes, 2021; Model 6) with 5,000 bootstrap samples. Gender (1 = male, 2 = female), age, educational background, work experience in years, trust in AI technology, and perceived leader's ability to handle voice were included as covariates. The results are shown in Table 3. The indirect effect of leader type on voice behavior via fairness perception alone was not significant (b < .001, 95% CI [-0.044, 0.020]), so the results did not support Hypothesis 3. The indirect effect via psychological safety was significant (b = -0.109, 95% CI [-0.190, -0.022]), supporting Hypothesis 4. The serial indirect effect via fairness perception and then psychological safety was significant (b = -0.047, 95% CI [-0.088, -0.018]), supporting Hypothesis 5. The mediation model is shown in Fig. 3.
Table 3
Standardized bootstrap results for direct and indirect effects (N = 453)

Pathways                                                                        B        SE      b           95% CI
Direct effects
Leader type -> Fairness perception                                              -0.366   0.089   -0.319***   [-0.543, -0.190]
Leader type -> Psychological safety                                             -0.223   0.085   -0.169***   [-0.392, -0.056]
Leader type -> Voice behavior                                                   -0.006   0.061   -0.006      [-0.126, 0.114]
Fairness perception -> Psychological safety                                     0.264    0.044   0.229***    [0.177, 0.351]
Fairness perception -> Voice behavior                                           0.026    0.033   0.029       [-0.039, 0.090]
Psychological safety -> Voice behavior                                          0.494    0.034   0.647***    [0.428, 0.560]
Indirect effects
Leader type -> Fairness perception -> Voice behavior                            -0.009   0.016   -0.009      [-0.043, 0.021]
Leader type -> Psychological safety -> Voice behavior                           -0.111   0.044   -0.109      [-0.193, -0.022]
Leader type -> Fairness perception -> Psychological safety -> Voice behavior    -0.048   0.018   -0.047      [-0.089, -0.019]

Note. *p < 0.05; **p < 0.01; ***p < 0.001; b = standardized coefficient.
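To make the mediation test concrete, the sketch below reproduces the logic of PROCESS Model 6 with ordinary least squares regressions and a manual percentile bootstrap. It is a minimal illustration on simulated data: the data frame and its columns are placeholders, and the code is not the PROCESS macro itself. The serial indirect effect is the product of three path coefficients: a1 (leader type -> fairness perception), d21 (fairness perception -> psychological safety, controlling for leader type), and b2 (psychological safety -> voice, controlling for leader type and fairness perception).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the study's data; all names and values are
# illustrative placeholders. leader_type: 1 = algorithmic, 2 = human.
rng = np.random.default_rng(42)
n = 453
df = pd.DataFrame({
    "leader_type": rng.integers(1, 3, n),
    "gender": rng.integers(1, 3, n),
    "age": rng.integers(20, 56, n),
    "education": rng.integers(1, 5, n),
    "work_years": rng.integers(0, 31, n),
    "ai_trust": rng.integers(1, 10, n),
    "handle_ability": rng.integers(1, 10, n),
    "fairness": rng.normal(7.3, 1.1, n),
    "psafety": rng.normal(6.9, 1.3, n),
    "voice": rng.normal(7.1, 1.0, n),
})

COVS = "C(gender) + age + education + work_years + ai_trust + handle_ability"

def serial_indirect(d: pd.DataFrame) -> float:
    """a1 * d21 * b2 for X -> M1 -> M2 -> Y (PROCESS Model 6 logic)."""
    a1 = smf.ols(f"fairness ~ leader_type + {COVS}", d).fit().params["leader_type"]
    d21 = smf.ols(f"psafety ~ leader_type + fairness + {COVS}", d).fit().params["fairness"]
    b2 = smf.ols(f"voice ~ leader_type + fairness + psafety + {COVS}", d).fit().params["psafety"]
    return a1 * d21 * b2

# Percentile bootstrap of the serial indirect effect (5,000 resamples;
# slow but straightforward).
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)                       # resample rows with replacement
    boot[i] = serial_indirect(df.iloc[idx].reset_index(drop=True))
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect = {serial_indirect(df):.3f}, 95% CI [{lo_ci:.3f}, {hi_ci:.3f}]")
```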
Fig. 3
The result of serial mediation analysis
Discussion
Study 3 found that people's fairness perception toward algorithmic leaders is higher than toward human leaders. This result is consistent with Helberger et al. (2020), who found that people considered the decisions made by artificial intelligence to be fairer than those made by humans. In the serial mediation model, the mediating role of fairness perception between leader type and voice behavior was not significant, so Hypothesis 3 was not supported. However, when the mediating role of fairness perception was analyzed alone, it was significant (b = -0.057, 95% CI = [-0.113, -0.016]). This pattern indicates that the influence of fairness perception on the relationship between leader type and voice behavior is transmitted entirely through the serial pathway from fairness perception to psychological safety.
In this study, psychological safety mediated the relationship between leader type and voice behavior, supporting Hypothesis 4. Compared with human leaders, people felt safer with algorithmic leaders. This result echoes prior findings in human-computer interaction: people are more accepting of algorithmic systems than of humans tracking their behavior in the workplace (Raveendhran & Fast, 2021), and in interviews involving sensitive questions, people prefer virtual humans as interviewers (Pickard et al., 2016).
The study found that fairness perception and psychological safety played a serial mediating role in the influence of leader type on voice behavior, supporting Hypothesis 5. According to equity theory (Kollmann et al., 2020), any employment relationship is an exchange between employees and leaders or organizations, and employees' perceived fairness strongly shapes their behavior in this exchange. Previous studies have shown that people hold stereotypes about algorithms and tend to think that algorithms are fairer than humans (Helberger et al., 2020). People tended to believe that algorithmic leaders would treat their voice fairly, so they experienced a high level of psychological safety toward algorithmic leaders. In contrast, people perceived human leaders as less fair; they might worry that human leaders would not handle their voice impartially, and might even form negative emotions and impressions of it, resulting in lower psychological safety toward human leaders. Overall, the fairness perception of leaders affected people's experience of psychological safety: people perceived algorithmic leaders as fairer and experienced higher psychological safety, and were thus more willing to voice to algorithmic leaders.
General discussion
This research investigated the influence of leader type on individuals' voice behavior and the underlying psychological mechanisms. Across Studies 1 to 3, we found that participants were more willing to voice to algorithmic leaders than to human leaders. Study 3 further revealed a serial mediation effect, in which fairness perception and psychological safety serially mediated the relationship between leader type and voice behavior. Specifically, compared to human leaders, algorithmic leaders were perceived as fairer, which in turn enhanced individuals' psychological safety, ultimately increasing their willingness to engage in voice behavior. Additionally, Study 2 showed that the preference for voicing to algorithmic (versus human) leaders was moderated by task type: the effect was present in cognitive tasks but not in emotional tasks.
Main findings
The present study found that individuals were more willing to voice to algorithmic leaders than to human leaders. This finding supports the notion of algorithm appreciation, namely individuals' tendency to evaluate algorithmic advice more positively than human advice (Logg et al., 2019). Prior research on advice-taking has consistently demonstrated this phenomenon: individuals were more likely to accept algorithmic advice than human advice in contexts such as economic decision-making, savings choices, and academic performance prediction (Schecter et al., 2023; Gunaratne et al., 2018), and to consider algorithms as possessing greater professional expertise than humans (Hou & Jung, 2021). Extending this literature, the present study shows that individuals are more willing to voice to algorithmic leaders, thereby providing indirect behavioral evidence of algorithm appreciation in proactive human-algorithm interaction.
The findings of the present study diverge from some prior research on human-machine collaboration. Haesevoets et al. (2021) found that, from the perspective of managers, human agents were preferred over machine agents in managerial roles. Moreover, when allocating decision-making weight in human-machine collaborations, participants preferred that a larger proportion of the decision power (70%-80%) be assigned to humans, indicating a generally negative attitude toward machine leadership. Similarly, De Cremer and McGuire (2022) reported that employees were less favorable toward being managed by machines, preferring a 60%-40% human-algorithm partnership. These findings suggest a persistent skepticism toward algorithmic leadership. The present study, however, reveals a contrasting pattern: individuals expressed greater willingness to voice to algorithmic leaders than to human leaders, reflecting a positive attitude toward algorithmic leadership.
One possible explanation for this discrepancy is that, in previous studies, participants may have regarded algorithmic decisions as final and unchangeable, overlooking the possibility of influencing them through interactive behaviors such as voice. Prior research has shown that when individuals are allowed to make slight modifications to algorithmic outputs, their acceptance of algorithmic decisions increases (Dietvorst et al., 2018). This suggests that if employees can influence the decisions of algorithmic leaders through voice behavior, they may be more inclined to collaborate with algorithmic leaders.
This study found that individuals’ preference for voicing to algorithmic leaders is influenced by task type.
Specifically, participants were more likely to voice to algorithmic leaders in cognitive tasks, whereas this tendency was attenuated in emotional tasks. These findings are consistent with previous research indicating that people tend to trust algorithmic decisions in tasks involving cognitive competence (Lee, 2018; Waytz & Norton, 2014; Hertz & Wiese, 2019), but show reduced trust in algorithmic decisions for tasks involving emotional judgment (Castelo et al., 2019; Hertz & Wiese, 2019). This result also aligns with mind perception theory, according to which people perceive minds along two dimensions: agency, the capacity for planning, acting, and exerting self-control, and experience, the ability to feel and sense (Gray et al., 2007). In general, robots and algorithms are perceived as high in agency but low in experience (Gray & Wegner, 2012), which may explain the reduced tendency to voice to algorithmic leaders in emotional tasks. In terms of the decision-making process, algorithmic decisions are often perceived as emotionally detached compared to human decisions (Martínez-Miranda & Aldea, 2005). Furthermore, individuals perceive AI-based decisions as offering lower levels of interpersonal sensitivity and respect (Acikgoz et al., 2020; Schlicker et al., 2021) and as failing to incorporate contextual and environmental nuances (Balasubramanian et al., 2022). These perceptions may contribute to the belief that algorithms are unable to handle emotional tasks, thereby decreasing individuals' willingness to voice to algorithmic leaders.
In terms of underlying psychological mechanisms, the study found that participants had higher fairness perception toward algorithmic leaders than toward human leaders, which in turn fostered greater psychological safety and ultimately enhanced their willingness to voice to algorithmic leaders. This heightened fairness perception supports the machine heuristic model (Yang & Sundar, 2024) and is consistent with previous research on differential stereotypes and responses toward human and machine agents (Araujo et al., 2020; Helberger et al., 2020). Such research has shown that individuals often hold more favorable impressions of machines in contexts where neutrality and procedural justice are salient, thereby influencing their behavioral intentions during human-machine interactions.
The study also found that participants reported higher psychological safety when interacting with algorithmic leaders than with human leaders. Previous research suggests that individuals perceive AI decision processes as grounded in objective historical data and facts and as following consistent algorithms, models, and rules (Lindebaum & Ashraf, 2024). Consequently, AI decisions are often viewed as neutral, devoid of subjective intentions or personal biases, and therefore more impartial than human judgments (Miller & Keiser, 2021). When voicing to different kinds of leaders, employees may perceive algorithmic leaders as more neutral than human leaders and may believe that algorithmic leaders are less likely to respond to suggestions with negative emotions or biases, thereby fostering a greater sense of psychological safety. This finding is consistent with studies on self-disclosure to virtual agents (Lucas et al., 2014) and interactions with AI supervisors (Xu et al., 2025). However, these results stand in contrast to some findings in the literature on algorithmic management, which suggest that AI leaders in organizational settings can reduce employees' psychological safety (Moreira, 2020) and increase perceived threat (Zayid et al., 2024). The current study focuses on voice behavior, a proactive behavior involving interpersonal risk. In such contexts, the stereotypical perception of AI as objective, neutral, and emotionally detached may reassure employees that their voice will not be misinterpreted or met with hostility, thereby enhancing psychological safety. In contrast, previous studies often examined situations in which employees were passive recipients of algorithmic commands, which may undermine their sense of control and safety. In the present study, however, the algorithmic system actively solicited employee input and allowed them to voice, which may have fostered a greater sense of agency and control, leading to enhanced psychological safety.

These findings offer practical implications for the design of algorithmic management systems. To foster psychological safety and encourage employee voice, algorithmic systems should incorporate mechanisms that allow employees to offer input and express suggestions. This may enhance employees' perceived control and trust in the system, ultimately promoting more frequent and constructive voice behaviors.
Theoretical implications
This study extends the existing literature on algorithmic leadership.
Previous research has primarily focused on algorithms as tools or teammates, while empirical investigations of algorithms in leader roles remain relatively limited. Current discussions about the evolving role of algorithms in organizations are largely theoretical in nature, emphasizing conceptual frameworks and normative perspectives (Glikson & Woolley, 2020; Raisch & Krakowski, 2021; Tschang & Almirall, 2021). The empirical studies that do exist have mainly examined the psychological and behavioral effects of algorithmic management on employees (Kadolkar et al., 2024; Zhang et al., 2023; Chang & Xiao, 2024), with comparatively little attention given to the interactive dynamics between algorithmic leaders and human subordinates. Addressing this gap, the present study investigates a form of proactive and constructive subordinate behavior, voice behavior, directed toward algorithmic leaders. The findings shed light on the potential positive outcomes of algorithmic leadership, thereby contributing to a more nuanced understanding of human-algorithm interaction in leadership contexts.
Moreover, this study contributes to the literature on employee voice by extending the target of voice behavior from human to non-human agents. Prior research has predominantly focused on how characteristics of human leaders, such as leadership styles and personal traits, influence employees' willingness to engage in voice behavior (Duggan et al., 2020; Tsai et al., 2022; Wesche & Sonderegger, 2019). However, with the advancement of technology and the increasing deployment of algorithmic management systems, particularly in the gig economy, a new form of leadership has emerged: algorithmic leadership (Duan et al., 2017; Hsiung, 2012; Walumbwa & Schaubroeck, 2009; Xu et al., 2019). The present findings demonstrate that individuals are more inclined to voice to algorithmic leaders than to human leaders, thereby expanding the scope of voice research to include non-human leadership agents. This highlights the importance of re-evaluating traditional voice behavior frameworks in the context of technological transformation and provides new insights into the antecedents of employee voice in human-AI interactions.
Furthermore, this study also provides support for the notion of algorithm appreciation. When interacting with algorithmic leaders, employees reported heightened fairness perception and psychological safety. This finding aligns with previous research on self-disclosure, which suggests that individuals tend to feel less evaluation anxiety and psychological pressure when interacting with algorithms, owing to the perception that algorithms lack intentionality and are more objective (Raveendhran & Fast, 2021; Sundar & Kim, 2019; Pickard et al., 2016; Lucas et al., 2014). The present study confirms this conclusion and extends the concept of algorithm appreciation to organizational management contexts, thereby enriching the literature on how individuals respond to algorithmic agents in workplace settings.
Practical implications
This study provides positive evidence for the deployment of algorithms in organizational management contexts. The findings suggest that employees are more inclined to voice to algorithmic leaders. This implies that organizations could consider developing algorithmic management systems capable of soliciting and incorporating employee suggestions, thereby encouraging greater employee involvement in decision-making processes. Moreover, the study revealed that employees experience greater psychological safety when interacting with algorithmic leaders. In situations where interpersonal risk is salient, such as providing critical feedback, organizations may consider leveraging algorithmic systems to handle such tasks. Doing so could enhance employees' psychological safety and, in turn, contribute positively to organizational development.
Moreover, the findings offer practical suggestions for improving algorithmic management systems. Existing research has shown that human employees may resist algorithmic management systems (Cameron & Rahman, 2022; Grohmann et al., 2022; Sun, 2019), and that perceived algorithmic control is associated with reduced proactive service behavior (Wang et al., 2025). One possible reason for these negative outcomes is that current algorithmic management systems often require employees to passively receive decisions, with limited opportunities for meaningful interaction. In contrast, the present study found that individuals are more willing to voice to algorithmic leaders. This suggests that if algorithmic management systems in real-world organizations were designed to actively solicit employee input, they might elicit more constructive voices, reduce perceptions of algorithmic control, and help mitigate resistance to algorithmic systems.
Furthermore, the findings of this study may encourage organizations to promote a more open attitude among employees toward algorithmic leadership by highlighting its perceived fairness advantage. The results indicate that algorithmic leaders are perceived as fairer than human leaders, which may help reshape employees' cognitive evaluations of algorithmic leadership and foster more open acceptance of its integration into organizational teams. Such openness could further enhance collaboration between employees and algorithmic systems. Previous research has shown that collaboration with algorithms and AI can reduce employees' workload (Martinez-Garcia et al., 2021), improve efficiency in co-innovation processes (Zhang et al., 2025), and even lead to superior decision-making outcomes compared to human-only or AI-only decision-making (Wang et al., 2016). The introduction of algorithmic leadership may therefore facilitate more effective human-algorithm collaboration, ultimately contributing to greater organizational efficiency.
Limitations and future directions
First, the present study focused solely on comparing algorithmic leadership and human leadership, without examining how specific characteristics of algorithmic leaders might influence employees' willingness to engage in voice behavior. Previous research has indicated that features such as system transparency and anthropomorphism can affect individuals' acceptance of algorithmic decisions (Xu et al., 2025; Hu et al., 2024). For instance, Hu et al. (2024) found a curvilinear (inverted U-shaped) relationship between algorithmic transparency and perceived fairness. Xu and colleagues (2025) further demonstrated that higher levels of anthropomorphism in AI supervisors were associated with increased evaluation anxiety. Future research may explore how specific features of algorithmic systems, such as anthropomorphism, shape employees' voice behavior, thereby offering a more nuanced understanding of the psychological mechanisms underlying human-algorithm interaction.
Second, while this study examined differences in voice behavior toward algorithmic versus human leaders, it did not investigate how individual characteristics of employees may influence their voice tendencies. Previous research suggests that individual differences can shape attitudes toward algorithms. For instance, individuals with a prevention focus tend to exhibit lower levels of algorithm aversion compared to those with a promotion focus (Chang & Wang, 2023). In terms of personality traits, extraversion has been positively associated with delegating decision-making to algorithms (Ferraz et al., 2025), whereas neuroticism is negatively associated with trust in algorithms (Sharan & Romano, 2020). Moreover, the extent to which employees' work is tied to their self-identity may also play a role in shaping their attitudes toward algorithmic leadership. Morewedge (2022) found that individuals tend to prefer human over algorithmic decision-making in tasks that are closely linked to their self-concept. Similarly, Leung et al. (2018) demonstrated that consumers with strong identity motives are more likely to resist automated products and form negative attitudes toward them. Accordingly, employees who strongly identify with their work may hold more negative views toward algorithmic leaders, potentially reducing their willingness to voice suggestions to such systems. Future research may explore these individual-level factors, such as regulatory focus, personality traits, and identity-related motivations, to better understand the variability in voice behavior toward algorithmic versus human leaders.
Third, this study employed scenario-based methods to examine participants' voice behavior toward algorithmic leaders and found that individuals tended to form heightened perceptions of fairness through heuristic processing. However, attitudes toward algorithms may evolve over time. As individuals become more familiar with algorithmic systems, the influence of fairness-related heuristics may diminish. This could lead to a decrease in perceived fairness and, consequently, a reduction in voice behavior directed toward algorithmic leaders. This possibility is consistent with the three-stage model proposed by Xie et al. (2023), which posits that human-algorithm interactions may progress through (1) an initial behavioral interaction stage, (2) a quasi-social relationship establishment stage, and (3) an identity formation stage. Future studies should therefore consider designing long-term interaction paradigms between humans and algorithmic leaders or conducting field research in real organizational settings. Such approaches would help to validate and expand upon the current findings, offering a more comprehensive understanding of human responses to algorithmic leadership over time.
Finally, this study is limited in the types of voice tasks examined. According to research on algorithm aversion, individuals' attitudes toward algorithms are influenced by task type. When tasks are perceived as highly important (Castelo et al., 2019), individuals tend to hold more negative attitudes toward algorithmic decision-making. Similarly, negative attitudes also emerge when tasks involve high levels of uncertainty or subjectivity (Dietvorst & Bharti, 2020; Castelo et al., 2019). Under such conditions, individuals may be less inclined to trust algorithmic decisions, which could in turn reduce their willingness to voice suggestions to algorithmic leaders. Moreover, the leadership functions examined here were primarily limited to the coordination and allocation of team-related work assignments. However, leadership functions are commonly categorized into two domains: task accomplishment and relational support (Yammarino et al., 2020). Future research should explore how algorithmic leadership performs in the domain of relational support, for instance, whether algorithmic leaders differ from human leaders in motivating or emotionally supporting employees. Such investigations would further illuminate the boundaries and potential of algorithmic leadership in diverse organizational functions.
Conclusion
Guided by the perspectives of algorithm appreciation, the machine heuristic model, and Morrison's employee voice framework, this study investigated employees' willingness to engage in voice behavior toward two types of leadership agents: algorithmic leaders and human leaders. The findings revealed that individuals were generally more willing to voice to algorithmic leaders than to human leaders. This preference emerged in cognitive tasks but did not hold in emotional tasks. Furthermore, fairness perception and psychological safety were found to sequentially mediate the relationship between leader type and voice behavior. Compared to human leaders, algorithmic leaders were perceived as fairer, which in turn enhanced individuals' psychological safety and ultimately increased their likelihood of engaging in voice behavior. By examining the novel relationship between human subordinates and algorithmic leaders, this study reveals positive effects of algorithmic leadership and contributes to the broader literature on human-machine interaction and collaboration. In addition, the findings offer practical implications for the deployment and design of algorithmic systems in real-world organizational contexts.
Declarations
Ethics approval and consent to participate
This study was approved by the Medical Ethics Committee of the Department of Psychology and Behavioral Sciences, Zhejiang University (approval no. 2020-03-15).
Informed consent was obtained from all participants included in the study.
The study was performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Consent for publication
Not applicable.
Data Availability
The datasets generated and analyzed during the current study are available in the Open Science Framework repository: https://osf.io/kse7a/?view_only=68b7bacb2cb042e088f57ea028db9ad8.
Competing interests
The authors declare that they have no competing interests.
Funding
This work was supported by the Ministry of Education Humanities and Social Science Project [grant number 19YJA190007].
Author Contribution
This paper was a collaborative effort by all authors. SW and MW conceptualized and designed the study. SW, XS, SN and KH conducted the data collection and analysis. SW and MW drafted and revised the manuscript. All authors read and approved the final manuscript.
Acknowledgements
Not applicable.
References
Acikgoz, Y., Davison, K. H., Compagnone, M., & Laske, M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, 28(4), 399–416. https://doi.org/10.1111/ijsa.12306
Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611–623. https://doi.org/10.1007/s00146-019-00931-w
Balasubramanian, N., Ye, Y., & Xu, M. (2022). Substituting human decision-making with machine learning: Implications for organizational learning. Academy of Management Review, 47(3), 448–465. https://doi.org/10.5465/amr.2019.0470
Barry, M., & Wilkinson, A. (2016). Pro-social or pro-management? A critique of the conception of employee voice as a pro-social behaviour within organizational behaviour. British Journal of Industrial Relations, 54(2), 261–284. https://doi.org/10.1111/bjir.12114
Bilotta, I., Dawson, J. F., & King, E. B. (2022). The role of fairness perceptions in patient and employee health: A multilevel, multisource investigation. Journal of Applied Psychology, 107(9), 1441–1458. https://doi.org/10.1037/apl0000736
Bogert, E., Schecter, A., & Watson, R. T. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11(1), Article 8028. https://doi.org/10.1038/s41598-021-87480-9
Burris, E. R. (2012). The risks and rewards of speaking up: Managerial responses to employee voice. Academy of Management Journal, 55(4), 851–875. https://doi.org/10.5465/amj.2010.0562
Cameron, L. D., & Rahman, H. (2022). Expanding the locus of resistance: Understanding the co-constitution of control and resistance in the gig economy. Organization Science, 33, 38–58. https://doi.org/10.1287/orsc.2021.1557
Cascio, W. F., & Montealegre, R. (2016). How technology is changing work and organizations. Annual Review of Organizational Psychology and Organizational Behavior, 3, 349–375. https://doi.org/10.1146/annurev-orgpsych-041015-062352
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
Chamberlin, M., Newton, D. W., & Lepine, J. A. (2017). A meta-analysis of voice and its promotive and prohibitive forms: Identification of key associations, distinctions, and future research directions. Personnel Psychology, 70(1), 11–71. https://doi.org/10.1111/peps.12185
Chang, Q., & Xiao, X. (2024). How perceived algorithmic control affects workplace well-being: a self-determination theory approach. In Academy of Management Proceedings (Vol. 2024, No.1, p. 15914). Valhalla, NY 10595: Academy of Management. https://doi.org/10.5465/AMPROC.2024.15914abstract
Chang, Y., & Wang, R. (2023). Conservatives endorse Fintech? Individual regulatory focus attenuates the algorithm aversion effects in automated wealth management. Computers in Human Behavior, 148, Article 107872. https://doi.org/10.1016/j.chb.2023.107872
Cheng, M., & Foley, C. (2019). Algorithmic management: The case of Airbnb. International Journal of Hospitality Management, 83, 33–36. https://doi.org/10.1016/j.ijhm.2019.04.009
Choi, J. J., & Kwak, S. S. (2015). The effect of robot appearance types and task types on Service evaluation of a robot. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts (pp. 223–224). https://doi.org/10.1145/2701973.2702735
Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human-Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543
Chowdhury, S., Joel-Edgar, S., Dey, P. K., Bhattacharya, S., & Kharlamov, A. (2023). Embedding transparency in artificial intelligence machine learning models: managerial implications on predicting and explaining employee turnover. The International Journal of Human Resource Management, 34(14), 2732–2764. https://doi.org/10.1080/09585192.2022.2066981
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O. L. H., & Ng, K. Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86(3), 425–445. https://doi.org/10.1037/0021-9010.86.3.425
Colquitt, J. A., & Zipay, K. P. (2015). Justice, fairness, and employee reactions. The Annual Review of Organizational Psychology and Organizational Behavior, 2, 75–99. https://doi.org/10.1146/annurev-orgpsych-032414-111457
Dahanayake, P., Rajendran, D., Selvarajah, C., & Ballantyne, G. (2018). Justice and fairness in the workplace: A trajectory for managing diversity. Equality, Diversity and Inclusion: An International Journal, 37(5), 470–490. https://doi.org/10.1108/EDI-11-2016-0105
De Cremer, D., & McGuire, J. (2022). Human–algorithm collaboration works best if humans lead (because it is fair!). Social Justice Research, 35(1), 33–55. https://doi.org/10.1007/s11211-021-00382-z
De Schrijver, A., Delbeke, K., Maesschalck, J., & Pleysier, S. (2010). Fairness perceptions and organizational misbehavior: An empirical study. The American Review of Public Administration, 40(6), 691–703. https://doi.org/10.1177/0275074010363742
Detert, J. R., & Burris, E. R. (2007). Leadership behavior and employee voice: Is the door really open? Academy of Management Journal, 50(4), 869–884. https://doi.org/10.5465/amj.2007.26279183
Detert, J. R., & Treviño, L. K. (2010). Speaking up to higher-ups: How supervisors and skip-level leaders influence employee voice. Organization Science, 21(1), 249–270. https://doi.org/10.1287/orsc.1080.0405
Diekmann, K. A., Barsness, Z. I., & Sondak, H. (2004). Uncertainty, fairness perceptions, and job satisfaction: A field study. Social Justice Research, 17, 237–255. https://doi.org/10.1023/B:SORE.0000041292.38626.2f
Dietvorst, B. J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302–1314. https://doi.org/10.1177/0956797620948841
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
Drydakis, N. (2022). Artificial Intelligence and reduced SMEs’ business risks. A dynamic capabilities analysis during the COVID-19 pandemic. Information Systems Frontiers, 24(4), 1223–1247. https://doi.org/10.1007/s10796-022-10249-6
Duan, J. (2011). The research of employee voice in Chinese context: Construct, formation mechanism and effect. Advances in Psychological Science, 19(2), 185–192. https://doi.org/10.3724/SP.J.1042.2011.00185
Duan, J., Li, C., Xu, Y., & Wu, C. H. (2017). Transformational leadership and employee voice behavior: A Pygmalion mechanism. Journal of Organizational Behavior, 38(5), 650–670. https://doi.org/10.1002/job.2157
Duggan, J., Carbery, R., McDonnell, A., & Sherman, U. (2023). Algorithmic HRM control in the gig economy: The app-worker perspective. Human Resource Management, 62, 883–899. https://doi.org/10.1002/hrm.22168
Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2020). Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal, 30(1), 114–132. https://doi.org/10.1111/1748-8583.12258
Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2022). Boundaryless careers and algorithmic constraints in the gig economy. The International Journal of Human Resource Management, 33(22), 4468–4498. https://doi.org/10.1080/09585192.2021.1953565
Dutton, J. E., & Ashford, S. J. (1993). Selling issues to top management. Academy of Management Review, 18(3), 397–428. https://doi.org/10.5465/amr.1993.9309035145
Edmondson, A. C., & Lei, Z. (2014). Psychological safety: The history, renaissance, and future of an interpersonal construct. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 23–43. https://doi.org/10.1146/annurev-orgpsych-031413-091305
Elsaied, M. M. (2019). Supportive leadership, proactive personality and employee voice behavior: The mediating role of psychological safety. American Journal of Business, 34(1), 2–18. https://doi.org/10.1108/AJB-01-2017-0004
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Ferraz, V., Houf, L., Pitz, T., Schwieren, C., & Sickmann, J. (2025). Trust in the machine: How contextual factors and personality traits shape algorithm aversion and collaboration. Computers in Human Behavior Reports, 17, Article 100578. https://doi.org/10.1016/j.chbr.2024.100578
Gambarotto, F., & Cammozzo, A. (2010). Dreams of silence: Employee voice and innovation in a public sector community of practice. Innovation, 12(2), 166–179. https://doi.org/10.5172/impp.12.2.166
Glauser, M. J. (1984). Upward information flow in organizations: Review and conceptual analysis. Human Relations, 37(8), 613–643. https://doi.org/10.1177/001872678403700804
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315, Article 619. https://doi.org/10.1126/science.1134475
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130. https://doi.org/10.1016/j.cognition.2012.06.007
Grohmann, R., Pereira, G., Guerra, A., Abilio, L. C., Moreschi, B., & Jurno, A. (2022). Platform scams: Brazilian workers’ experiences of dishonest and uncertain algorithmic management. New Media & Society, 24(7), 1611–1631. https://doi.org/10.1177/14614448221099225
Gunaratne, J., Zalmanson, L., & Nov, O. (2018). The persuasive power of algorithmic and crowdsourced advice. Journal of Management Information Systems, 35(4), 1092–1120. https://doi.org/10.1080/07421222.2018.1523534
Guo, W., Yang, J., & Fu, J. (2015). The influence of perceived organizational support and organizational justice on counterproductive work behavior: The mediating effect of organizational cynicism. Chinese Journal of Management, 12(4), 530–537. https://doi.org/10.3969/j.issn.1672-884x.2015.04.008
Haesevoets, T., De Cremer, D., Dierckx, K., & Van Hiel, A. (2021). Human-machine collaboration in managerial decision making. Computers in Human Behavior, 119, Article 106730. https://doi.org/10.1016/j.chb.2021.106730
Harms, P. D., & Han, G. (2019). Algorithmic leadership: The future is now. Journal of Leadership Studies, 12(4), 74–75. https://doi.org/10.1002/jls.21615
Hayes, A. F. (2021). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd ed.). Guilford Press.
Helberger, N., Araujo, T., & de Vreese, C. H. (2020). Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Computer Law & Security Review, 39, Article 105456. https://doi.org/10.1016/j.clsr.2020.105456
Hertz, N., & Wiese, E. (2019). Good advice is beyond all price, but what if it comes from a machine? Journal of Experimental Psychology: Applied, 25(3), 386–395. https://doi.org/10.1037/xap0000205
Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116, Article 106635. https://doi.org/10.1016/j.chb.2020.106635
Howard, F. M., Gao, C. A., & Sankey, C. (2020). Implementation of an automated scheduling tool improves schedule quality and resident satisfaction. Plos One, 15(8), Article e0236952. https://doi.org/10.1371/journal.pone.0236952
Hou, Y. T. Y., & Jung, M. F. (2021). Who is the expert? Reconciling algorithm aversion and algorithm appreciation in AI-supported decision making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–25. https://doi.org/10.1145/3479864
Hsiung, H. H. (2012). Authentic leadership and employee voice behavior: A multi-level psychological process. Journal of Business Ethics, 107(3), 349–361. https://doi.org/10.1007/s10551-011-1043-2
Hu, P., Zeng, Y., Wang, D., & Teng, H. (2024). Too much light blinds: The transparency-resistance paradox in algorithmic management. Computers in Human Behavior, 161, Article 108403. https://doi.org/10.1016/j.chb.2024.108403
Hussain, I., Shu, R., Tangirala, S., & Ekkirala, S. (2019). The voice bystander effect: How information redundancy inhibits employee voice. Academy of Management Journal, 62(3), 828–849. https://doi.org/10.5465/amj.2017.0245
Idug, Y., Niranjan, S., Manuj, I., Gligor, D., & Ogden, J. (2023). Do ride-hailing drivers' psychological behaviors influence operational performance? International Journal of Operations & Production Management, 43(12), 2055–2079. https://doi.org/10.1108/IJOPM-06-2022-0362
Jago, A. S., Raveendhran, R., Fast, N., & Gratch, J. (2024). Algorithmic management diminishes status: An unintended consequence of using machines to perform social roles. Journal of Experimental Social Psychology, 110, Article 104553. https://doi.org/10.1016/j.jesp.2023.104553
Jin, X., Qing, C., & Jin, S. (2022). Ethical leadership and innovative behavior: Mediating role of voice behavior and moderated mediation role of psychological safety. Sustainability, 14, Article 5125. https://doi.org/10.3390/su14095125
Jung, M., & Hinds, P. (2018). Robots in the wild: A time for more robust theories of human-robot interaction. ACM Transactions on Human-Robot Interaction, 7(1), 1–5. https://doi.org/10.1145/3208975
Kadolkar, I., Kepes, S., & Subramony, M. (2024). Algorithmic management in the gig economy: A systematic review and research integration. Journal of Organizational Behavior. https://doi.org/10.1002/job.2831
Kollmann, T., Stöckmann, C., Kensbock, J. M., & Peschl, A. (2020). What satisfies younger versus older employees, and why? An aging perspective on equity theory to explain interactive effects of employee age, monetary rewards, and task contributions on job satisfaction. Human Resource Management, 59(1), 101–115. https://doi.org/10.1002/hrm.21981
Lam, C. F., Lee, C., & Sui, Y. (2019). Say it as it is: Consequences of voice directness, voice politeness, and voicer credibility on voice endorsement. Journal of Applied Psychology, 104(5), 642–658. https://doi.org/10.1037/apl0000358
Langer, M., & König, C. J. (2023). Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. Human Resource Management Review, 33(1), Article 100881. https://doi.org/10.1016/j.hrmr.2021.100881
Lanz, L., Briker, R., & Gerpott, F. H. (2024). Employees adhere more to unethical instructions from human than AI supervisors: Complementing experimental evidence with machine learning. Journal of Business Ethics, 189(3), 625–646. https://doi.org/10.1007/s10551-023-05393-1
Larson, L., & DeChurch, L. A. (2020). Leading teams in the digital age: Four perspectives on technology and what they mean for leading teams. The Leadership Quarterly, 31(1), Article 101377. https://doi.org/10.1016/j.leaqua.2019.101377
Lata, L. N., Burdon, J., & Reddel, T. (2023). New tech, old exploitation: Gig economy, algorithmic control and migrant labour. Sociology Compass, 17(1), Article e13028. https://doi.org/10.1111/soc4.13028
Lee, H., Choi, J. J., & Kwak, S. S. (2015). Can human jobs be taken by robots? The appropriate match between robot types and task types. Archives of Design Research, 28(3), 49–59. https://doi.org/10.15187/adr.2015.08.28.3.49
Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. (2015). Working with machines: The impact of algorithmic and data-driven management on human workers. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 1603–1612). https://doi.org/10.1145/2702123.2702548
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 1–16. https://doi.org/10.1177/2053951718756684
Leung, E., Paolacci, G., & Puntoni, S. (2018). Man versus machine: Resisting automation in identity-based consumer behavior. Journal of Marketing Research, 55(6), 818–831. https://doi.org/10.1177/0022243718818423
Li, J., Liang, Q., Zhang, Z., & Wang, X. (2018). Leader humility and constructive voice behavior in China: a dual process model. International Journal of Manpower, 39(6), 840–854. https://doi.org/10.1108/IJM-06-2017-0137
Li, R., Ling, W., & Liu, S. (2009). The Mechanisms of How Abusive Supervision Impacts on Subordinates’ Voice Behavior. Acta Psychologica Sinica, 41(12), 1189–1202. https://doi.org/10.3724/SP.J.1041.2009.01189
Liang, J., Farh, C. I. C., & Farh, J.-L. (2012). Psychological antecedents of promotive and prohibitive voice: A two-wave examination. Academy of Management Journal, 55(1), 71–92. https://doi.org/10.5465/amj.2010.0176
Lim, B. T., & Loosemore, M. (2017). The effect of inter-organizational justice perceptions on organizational citizenship behaviors in construction projects. International Journal of Project Management, 35(2), 95–106. https://doi.org/10.1016/j.ijproman.2016.10.016
Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions in organizational relations. In J. Greenberg & R. Cropanzano (Eds.) Advances in Organizational Justice (pp. 56–88). Stanford University Press.
Lind, E. A., & Van den Bos, K. (2002). When fairness works: Toward a general theory of uncertainty management. Research in Organizational Behavior, 24, 181–223. https://doi.org/10.1016/S0191-3085(02)24006-X
Lindebaum, D., & Ashraf, M. (2024). The ghost in the machine, or the ghost in organizational theory? A complementary view on the use of machine learning. Academy of Management Review, 49(2), 445–448. https://doi.org/10.5465/amr.2021.0036
Liu, X., Mao, J., Chiang, J. T., Guo, L., & Zhang, S. (2023). When and why does voice sustain or stop? The roles of leader behaviours, power differential perception and psychological safety. Applied Psychology, 71(1), 271–295. https://doi.org/10.1111/apps.12432
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Lucas, G. M., Gratch, J., King, A., & Morency, L.-P. (2014). It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37(1), 94–100. https://doi.org/10.1016/j.chb.2014.04.043
Marler, J. H. (2024). Artificial intelligence, algorithms, and compensation strategy: Challenges and opportunities. Organizational Dynamics, 53(1), Article 101039. https://doi.org/10.1016/j.orgdyn.2024.101039
Martínez-García, M., Zhang, Y., & Gordon, T. (2021). Memory pattern identification for feedback tracking control in human–machine systems. Human factors, 63(2), 210–226. https://doi.org/10.1177/0018720819881008
Martínez-Miranda, J., & Aldea, A. (2005). Emotions in human and artificial intelligence. Computers in Human Behavior, 21(2), 323–341. https://doi.org/10.1016/j.chb.2004.02.010
Maynes, T. D., & Podsakoff, P. M. (2014). Speaking more broadly: an examination of the nature, antecedents, and consequences of an expanded set of employee voice behaviors. Journal of Applied Psychology, 99(1), 87–112. https://doi.org/10.1037/a0034284
McDowall, A., & Fletcher, C. (2004). Employee development: an organizational justice perspective. Personnel Review, 33(1), 8–29. https://doi.org/10.1108/00483480410510606
McGuire, J., & De Cremer, D. (2023). Algorithms, leadership, and morality: Why a mere human effect drives the preference for human over algorithmic leadership. AI and Ethics, 3(2), 601–618. https://doi.org/10.1007/s43681-022-00192-2
Meijerink, J., Boons, M., Keegan, A., & Marler, J. (2021). Algorithmic human resource management: Synthesizing developments and cross-disciplinary insights on digital HRM. The International Journal of Human Resource Management, 32(12), 2545–2562. https://doi.org/10.1080/09585192.2021.1925326
Miller, S. M., & Keiser, L. R. (2021). Representative bureaucracy and attitudes toward automated decision making. Journal of Public Administration Research and Theory, 31(1), 150–165. https://doi.org/10.1093/jopart/muaa019
Milliken, F. J., Morrison, E. W., & Hewlin, P. F. (2003). An exploratory study of employee silence: Issues that employees don’t communicate upward and why. Journal of Management Studies, 40(6), 1453–1476. https://doi.org/10.1111/1467-6486.00387
Möhlmann, M., Zalmanson, L., Henfridsson, O., & Gregory, R. W. (2021). Algorithmic management of work on online labor platforms: When matching meets control. MIS Quarterly, 45(4), 1999–2022. https://doi.org/10.25300/MISQ/2021/15333
Moorman, R. H. (1991). Relationship between organizational justice and organizational citizenship behaviors: Do fairness perceptions influence employee citizenship? Journal of Applied Psychology, 76(6), 845–855. https://doi.org/10.1037/0021-9010.76.6.845
Moreira, P. (2020). Artificial Intelligence leadership: how trust and fairness perceptions impact turnover intentions through psychological safety (Master's thesis, Universidade Catolica Portuguesa (Portugal)). https://www.proquest.com/docview/2925087835?sourcetype=Dissertations%20&%20Theses
Morewedge, C. K. (2022). Preference for human, not algorithm aversion. Trends in Cognitive Sciences, 26(10), 824–826. https://doi.org/10.1016/j.tics.2022.07.007
Morrison, E. W. (2011). Employee voice behavior: Integration and directions for future research. Academy of Management Annals, 5(1), 373–412. https://doi.org/10.5465/19416520.2011.574506
Morrison, E. W., & Milliken, F. J. (2000). Organizational silence: A barrier to change and development in a pluralistic world. Academy of Management Review, 25(4), 706–725. https://doi.org/10.5465/amr.2000.3707697
Muldoon, J., & Raekstad, P. (2023). Algorithmic domination in the gig economy. European Journal of Political Theory, 22(4), 587–607. https://doi.org/10.1177/14748851221082078
Nembhard, I. M., & Edmondson, A. C. (2006). Making it safe: The effects of leader inclusiveness and professional status on psychological safety and improvement efforts in health care teams. Journal of Organizational Behavior, 27(7), 941–966. https://doi.org/10.1002/job.413
Nemeth, C. J. (1997). Managing innovation: When less is more. California Management Review, 40(1), 59–74. https://doi.org/10.2307/41165922
Newlands, G. (2021). Algorithmic surveillance in the gig economy: The organization of work through Lefebvrian conceived space. Organization Studies, 42(5), 719–737. https://doi.org/10.1177/0170840620937900
Nie, S. & Liang, Z. (2022). How Relative Deprivation Affect Employees' Voice Behavior——The Role of Core Self-evaluation and Psychological Contract Violation. Journal of Xihua University (Philosophy & Social Sciences), 41(2), 77–89. https://doi.org/10.12189/j.issn.1672-8505.2022.02.009
Parent-Rocheleau, X., Parker, S. K., Bujold, A., & Gaudet, M. C. (2024). Creation of the algorithmic management questionnaire: A six-phase scale development process. Human Resource Management, 63(1), 25–44. https://doi.org/10.1002/hrm.22185
Pickard, M. D., Roster, C. A., & Chen, Y. (2016). Revealing sensitive information in personal interviews: Is self-disclosure easier with humans or avatars and under what conditions? Computers in Human Behavior, 65(13), 23–30. https://doi.org/10.1016/j.chb.2016.08.004
Premeaux, S. F., & Bedeian, A. G. (2003). Breaking the silence: The moderating effects of self-monitoring in predicting speaking up in the workplace. Journal of Management Studies, 40(6), 1537–1562. https://doi.org/10.1111/1467-6486.00390
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
Raveendhran, R., & Fast, N. J. (2021). Humans judge, algorithms nudge: The psychology of behavior tracking acceptance. Organizational Behavior and Human Decision Processes, 164, 11–26. https://doi.org/10.1016/j.obhdp.2021.01.001
Rodell, J. B., Colquitt, J. A., & Baer, M. D. (2017). Is adhering to justice rules enough? The role of charismatic qualities in perceptions of supervisors’ overall fairness. Organizational Behavior and Human Decision Processes, 140, 14–28. https://doi.org/10.1016/j.obhdp.2017.03.001
Rong, Y., Sui, Y., & Jiang, J. (2022). The effects of leader power and status on employees’ voice behavior: The role of psychological safety. Acta Psychologica Sinica, 54(5), 549–565. https://doi.org/10.3724/SP.J.1041.2022.00549
Schecter, A., Bogert, E., & Lauharatanahirun, N. (2023). Algorithmic appreciation or aversion? The moderating effects of uncertainty on algorithmic decision making. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–8). https://doi.org/10.1145/3544549.3585908
Schlicker, N., Langer, M., Ötting, S. K., Baum, K., König, C. J., & Wallach, D. (2021). What to expect from opening up ‘black boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior, 122, Article 106837. https://doi.org/10.1016/j.chb.2021.106837
Sharan, N. N., & Romano, D. M. (2020). The effects of personality and locus of control on trust in humans versus artificial intelligence. Heliyon, 6(8), Article e04572. https://doi.org/10.1016/j.heliyon.2020.e04572
Son, S. (2019). The role of supervisors on employees’ voice behavior. Leadership & Organization Development Journal, 40(1), 85–96. https://doi.org/10.1108/LODJ-06-2018-0230
Soper, J. T. (2021). Gestational trophoblastic disease: Current evaluation and management. Obstetrics and Gynecology, 137(2), 355–370. https://doi.org/10.1097/AOG.0000000000004240
Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human–AI collaboration in managerial professions. Journal of Business Research, 125, 135–142. https://doi.org/10.1016/j.jbusres.2020.11.038
Subhakaran, S. E., & Dyaram, L. (2018). Interpersonal antecedents to employee upward voice: mediating role of psychological safety. International Journal of Productivity and Performance Management, 67(9), 1510–1525. https://doi.org/10.1108/IJPPM-10-2017-0276
Sun, C., Jin, H., & Xu, H. (2020). “Leader-Employee” Power Distance Orientation and Employee’s Voice: Based on the Mediating Effect Employee’s Psychological Security. In Modern management based on big data I (pp. 12–21). IOS Press. https://doi.org/10.3233/FAIA200636
Sun, L. Y., Chow, I. H. S., Chiu, R. K., & Pan, W. (2013). Outcome favorability in the link between leader-member exchange and organizational citizenship behavior: Procedural fairness climate matters. The Leadership Quarterly, 24(1), 215–226. https://doi.org/10.1016/j.leaqua.2012.10.008
Sun, P. (2019). Your order, their labor: An exploration of algorithms and laboring on food delivery platforms in China. Chinese Journal of Communication, 12, 308–323. https://doi.org/10.1080/17544750.2019.1583676
Svendsen, M., & Joensson, T. S. (2016). Transformational leadership and change related voice behavior. Leadership & Organization Development Journal, 37(3), 357–368. https://doi.org/10.1108/LODJ-07-2014-0124
Tangirala, S., & Ramanujam, R. (2008). Employee silence on critical work issues: The cross level effects of procedural justice climate. Personnel Psychology, 61(1), 37–68. https://doi.org/10.1111/j.1744-6570.2008.00105.x
Tansky, J. W. (1993). Justice and organizational citizenship behavior: What is the relationship? Employee Responsibilities and Rights Journal, 6, 195–207. https://doi.org/10.1007/BF01419444
Takeuchi, R., Chen, Z., & Cheung, S. Y. (2012). Applying uncertainty management theory to employee voice behavior: An integrative investigation. Personnel Psychology, 65(2), 283–323. https://doi.org/10.1111/j.1744-6570.2012.01247.x
Tsai, C.-Y., Marshall, J. D., Choudhury, A., Serban, A., Tsung-Yu Hou, Y., Jung, M. F., Dionne, S. D., & Yammarino, F. J. (2022). Human-robot collaboration: A multilevel and integrated leadership framework. The Leadership Quarterly, 33(1), Article 101594. https://doi.org/10.1016/j.leaqua.2021.101594
Tschang, F. T., & Almirall, E. (2021). Artificial intelligence as augmenting automation: Implications for employment. Academy of Management Perspectives, 35(4), 642–659. https://doi.org/10.5465/amp.2019.0062
Van Dyne, L., & LePine, J. A. (1998). Helping and voice extra-role behaviors: Evidence of construct and predictive validity. Academy of Management Journal, 41(1), 108–119. https://doi.org/10.5465/256902
Walumbwa, F. O., & Schaubroeck, J. (2009). Leader personality traits and employee voice behavior: Mediating roles of ethical leadership and work group psychological safety. Journal of Applied Psychology, 94(5), 1275–1286. https://doi.org/10.1037/a0015848
Wang, C., Chen, J., & Xie, P. (2022). Observation or interaction? Impact mechanisms of gig platform monitoring on gig workers' cognitive work engagement. International Journal of Information Management, 67, Article 102548. https://doi.org/10.1016/j.ijinfomgt.2022.102548
Wang, D., Khosla, A., Gargeya, R., Irshad, H., & Beck, A. H. (2016). Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718. https://doi.org/10.48550/arXiv.1606.05718
Wang, H., Chen, Z., Li, Z., Liu, Z., Liang, C., & Zhao, B. (2025). How to break out of time dilemma: The subjective time boundaries for the effects of algorithmic control on gig workers. Acta Psychologica Sinica, 57(2), 275–297. https://doi.org/10.3724/SP.J.1041.2025.0275
Wang, W., & Liu, L. (2025). Advances in the application of human-machine collaboration in healthcare: insights from China. Frontiers in Public Health, 13, Article 1507142. https://doi.org/10.3389/fpubh.2025.1507142
Wang, Y., Zheng, Y., & Zhu, Y. (2018). How transformational leadership influences employee voice behavior: The roles of psychological capital and organizational identification. Social Behavior and Personality: an international journal, 46(2), 313–321. https://doi.org/10.2224/sbp.6619
Waytz, A., & Norton, M. I. (2014). Botsourcing and outsourcing: Robot, British, Chinese, and German workers are for thinking-not feeling-jobs. Emotion, 14(2), 434–444. https://doi.org/10.1037/a0036054
Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101, 197–209. https://doi.org/10.1016/j.chb.2019.07.027
Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I. (2019). Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1), 56–75. https://doi.org/10.1177/0950017018785616
Xu, L., Zhao, Y., & Yu, F. (2025). Employees adhere less to advice on moral behavior from artificial intelligence supervisors than human. Acta Psychologica Sinica, 57(11), 2060–2082. https://journal.psych.ac.cn/xlxb/CN/10.3724/SP.J.1041.2024.051
Xu, M., Qin, X., Dust, S. B., & DiRenzo, M. S. (2019). Supervisor-subordinate proactive personality congruence and psychological safety: A signaling theory approach to employee voice behavior. The Leadership Quarterly, 30(4), 440–453. https://doi.org/10.1016/j.leaqua.2019.03.001
Xu, Y., Lu, B., Ghose, A., Dai, H., & Zhou, W. (2023). The interplay of earnings, ratings, and penalties on sharing platforms: An empirical investigation. Management Science, 69(10), 6128–6146. https://doi.org/10.1287/mnsc.2023.4761
Yam, K. C., Goh, E. Y., Fehr, R., Lee, R., Soh, H., & Gray, K. (2022). When your boss is a robot: Workers are more spiteful to robot supervisors that seem more human. Journal of Experimental Social Psychology, 102, Article 104360. https://doi.org/10.1016/j.jesp.2022.104360
Yammarino, F. J., Cheong, M., Kim, J., & Tsai, C. Y. (2020). Is leadership more than “I like my boss”? In Research in personnel and human resources management (Vol. 38, pp. 1–55). Emerald Publishing Limited. https://doi.org/10.1108/S0742-730120200000038003
Yan, A., & Xiao, Y. (2016). Servant leadership and employee voice behavior: A cross-level investigation in China. SpringerPlus, 5, 1–11. https://doi.org/10.1186/s40064-016-3264-4
Yan, Y., & He, Y. (2016). The role of leaders' perception of employee voice behavior motives: An attribution theory-based review. Advances in Psychological Science, 24(9), 1457–1466. https://doi.org/10.3724/SP.J.1042.2016.01457
Yang, H., & Sundar, S. S. (2024). Machine heuristic: Concept explication and development of a measurement scale. Journal of Computer-Mediated Communication, 29(6), Article zmae019. https://doi.org/10.1093/jcmc/zmae019
Zayid, H., Alzubi, A., Berberoğlu, A., & Khadem, A. (2024). How do algorithmic management practices affect workforce well-being? A parallel moderated mediation model. Behavioral Sciences, 14(12), Article 1123. https://doi.org/10.3390/bs14121123
Zhang, C., Zheng, W., Li, T., & Wang, X. (2025). Co-creation with humans or AI? The influence of co-creator types on content co-creation intention. Journal of Psychological Science, 48(1), 210–219. https://doi.org/10.16719/j.cnki.1671-6981.20250120
Zhang, G., Chong, L., Kotovsky, K., & Cagan, J. (2023). Trust in an AI versus a human teammate: The effects of teammate identity and performance on human-AI cooperation. Computers in Human Behavior, 139, Article 107536. https://doi.org/10.1016/j.chb.2022.107536
Zhang, L., Lou, M., & Guan, H. (2022). How and when perceived leader narcissism impacts employee voice behavior: A social exchange perspective. Journal of Management & Organization, 28(1), 77–98. https://doi.org/10.1017/jmo.2021.29
Zhang, L., Yang, J., Zhang, Y., & Xu, G. (2023). Gig worker's perceived algorithmic management, stress appraisal, and destructive deviant behavior. PLOS ONE, 18(11), Article e0294074. https://doi.org/10.1371/journal.pone.0294074