Sub-theme 53: The Impact of Algorithmic Decision-Making on Marginalized Worker Identities
Call for Papers
The turn to algorithmic decision-making in people management (PM) operations, processes, and practices has been the subject
of an emergent body of research within work and organization studies. At times articulated through arguments about a
lack of regulatory measures (Ajunwa & Green, 2019) or of “good” employment data (Citron & Pasquale, 2014), what connects this research
is the rise of a problematic yet powerful discourse of blind faith in the use of algorithms in PM. Critical research
has considered the implications of algorithmic decision-making for various aspects of PM, including employee control, surveillance,
ethics, and discrimination (Ajunwa, 2020; Beer, 2017). Research also draws attention to how human biases can be inscribed into
the code of PM algorithms, embedding and maintaining inequalities against marginalized worker identities (based on gender,
race, age, sexual orientation, and ability, among others) while assuming a veneer of objectivity (Raghavan et al., 2020; Vassilopoulou
et al., 2024). Evolving from this body of work, critical scholars have called for further theorizing of the ramifications
of algorithmic decision-making in organizations (Lindebaum et al., 2019), exploring especially the processes through which
PM algorithms may mask inequality and discrimination against marginalized worker identities, replicating social and organizational
inequalities in some instances while amplifying human bias in others (Hmoud & Laszlo, 2019).
The overall purpose of this sub-theme, therefore, is to advance knowledge on the antecedents and consequences of algorithmic
decision-making, with a specific focus on marginalized worker identities in organizations. The first aim of the sub-theme
is to invite research studies that develop new theoretical and empirical insights into how algorithmic decision-making affects
fundamental employment outcomes, such as employment opportunities, wages, and pathways for career development and promotion,
for individuals with marginalized identities. Moreover, the rapid change of organizational forms and boundaries demands an
in-depth exploration of the interrelationship between algorithmic decision-making and current arrangements of employee
mobility, the rise of new business models, such as platform companies, that characterize the gig economy (Vallas & Schor,
2020), and new management models in which algorithms deliver a wide range of managerial tasks, such as recruitment
and performance management. It is such processes, systems, and tasks that have provoked controversy around the appropriate
regulation of algorithmic decision-making, coordination, and control (Healy & Pekarek, 2020).
The second aim of this sub-theme is to explore how such technological developments in algorithmic decision-making create new forms of
social and economic inequality and exclusion for individuals with marginalized identities. Such an exploration not only advances
organizational and management theory and research, enriching insights into work and organizations, but also has practical
implications for employees, managers, organizations, communities, and society as a whole.
Finally, even less is known about the impact of algorithmic decision-making and algorithmic biases on the cognitions,
emotions, and behaviors of employees and relevant stakeholders. An urgent question is whether (or not) and how discrimination,
inequality, and disadvantage prompt employee mobilization, solidarity, advocacy, and resistance, as the explosive rise of the
platform economy and algorithmic management takes worker exploitation and control to new levels (Healy et al., 2017).
Accordingly, the third aim of this sub-theme is to explore and understand the consequences and impact of algorithmic
decision-making on employee sentiments, and whether (or not) and how these sentiments affect how inequality is challenged
(or accepted).
An indicative but not
exhaustive list of questions that could be addressed by papers in this sub-theme includes the following:
- What are the (un)intended consequences of algorithmic decision-making, especially as it benefits certain individuals or social identity groups while restricting opportunities for, and excluding, individuals with marginalized identities inside and outside organizations?
- How does algorithmic decision-making in recruitment and selection, career development, performance management and reward systems, and training and development, among other organizational processes, affect employees’ careers and lived experiences in the workplace, especially for those with marginalized identities?
- How are such processes shaped by algorithmic decision-making in ways that produce different outcomes for different groups of workers, with a negative impact on already marginalized groups?
- How do emergent technologies, such as machine learning, predictive and prescriptive algorithms, and online platforms, influence candidate screening and hiring, the allocation of tasks and jobs to employees, and thereby individual outcomes at work, especially for those with marginalized identities?
- How do new organizational forms, such as online platforms and network-based firms, that are based on emergent AI technologies affect the distribution of power in organizations and labor markets, shaping inequality in the workplace?
- Given the potential or actual negative implications of algorithmic technology, what is its measure of success, and who or what evaluates this?
- How do organizations and decision-makers address and adjust to algorithmic decision-making and the future of work? How do algorithms affect decision-makers and their behavior in organizations, especially in the context of PM decision-making?
- How do the myths associated with globalization, meritocracy, and/or efficiency interplay with algorithmic decision-making to define working practices and deepen systemic inequalities against individuals with marginalized identities in organizations?
- How does algorithmic decision-making change or reshape existing norms and institutions? How does this affect individuals with marginalized identities?
- How does algorithmic decision-making shape or reshape the segmented labour market?
- How is the emergence of algorithmic decision-making experienced by individuals with marginalized identities? How does algorithmic bias manifest itself in mundane, day-to-day employee experiences at work, and how does it affect individuals’ lives outside work?
- How do individuals with marginalized identities navigate algorithmic decision-making?
- How are identities rooted in and outside of the workplace activated to perpetuate or disrupt algorithmic biases in organizations, for instance through solidarity, advocacy, and resistance?
- What strategies have proved successful in disrupting algorithmic biases in organizations? How can algorithmic decision-making help to achieve a more inclusive society? How can we regulate algorithmic decision-making?
References
- Ajunwa, I., & Green, D. (2019): “Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of Work.” In: S.P. Vallas & A. Kovalainen (eds.): Work and Labor in the Digital Age. Research in the Sociology of Work, Vol. 33. Leeds: Emerald Publishing Limited, 61–91.
- Beer, D. (2017): “The social power of algorithms.” Information, Communication & Society, 20 (1), 1–13.
- Citron, D.K., & Pasquale, F.A. (2014): “The scored society: Due process for automated predictions.” Washington Law Review, 89, 1–33.
- Healy, J., & Pekarek, A. (2020): “Work and wages in the gig economy: Can there be a High Road?” In: A. Wilkinson & M. Barry (eds): The Future of Work and Employment. Cheltenham: Edward Elgar Publishing, 156–173.
- Healy, J., Nicholson, D., & Pekarek, A. (2017): “Should we take the gig economy seriously?” Labour & Industry: A Journal of the Social and Economic Relations of Work, 27, 232–248.
- Hmoud, B., & Laszlo, C. (2019): “Will artificial intelligence take over human resources recruitment and selection?” Network Intelligence Studies, 8 (13), 21–30.
- Lamont, M., Beljean, S., & Clair, M. (2014): “What is missing? Cultural processes and causal pathways to inequality.” Socio-Economic Review, 12 (3), 573–608.
- Lindebaum, D., Vesa, M., & den Hond, F. (2019): “Insights from ‘The Machine Stops’ to better understand rational assumptions in algorithmic decision making and its implications for organizations.” Academy of Management Review, 45 (1), 1–38.
- Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020): “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices.” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 469–481.
- Ulbricht, L., & Yeung, K. (2022): “Algorithmic regulation: A maturing concept for investigating regulation of and through algorithms.” Regulation & Governance, 16, 3–22.
- Vallas, S., & Schor, J.B. (2020): “What Do Platforms Do? Understanding the Gig Economy.” Annual Review of Sociology, 46 (1), 273–294.
- Vassilopoulou, J., Kyriakidou, O., Ozbilgin, M., & Groutsis, D. (2024): “Scientism as illusion in HR algorithms: Towards a framework for algorithmic hygiene for bias proofing.” Human Resource Management Journal, 34 (1), 311–325.