Sub-theme 59 (Cancelled): The Interplay between Technology & Human Rights in Business
Call for Papers
The interplay between humans and technology raises thought-provoking questions about technology's impact
on business and human rights. The Fourth Industrial Revolution (also known as Industry 4.0 or I4.0) is characterized by the
inclusion of automation and intelligent digital technology (e.g., the internet of things, cloud connectivity, Artificial Intelligence
(AI), machine learning, etc.) in manufacturing and industrial production (World Economic Forum, 2024). On the one hand, the
integration of I4.0 technologies has been recognized as a “game-changer” for addressing human rights challenges (Berg et al.,
2020). These new technologies can provide instantaneous information regarding production conditions and the flow of materials
throughout a company’s supply chain, allowing businesses and other interested stakeholders new avenues to conduct human rights
due diligence and to monitor supply chains for impacts on stakeholders (Karmaker et al., 2023; Lopes de Sousa Jabbour et al.,
2018; Smit et al., 2021).
Further, there are valuable examples of the potential for I4.0 technologies to
address the risk of human rights abuses in production (Emanuilov & Yordanova, 2022). This includes using blockchain for
supply chain visibility (Rogerson & Parry, 2020) as well as digital platforms for incident reporting and stakeholder outreach
(Al-Billeh et al., 2024; Searcy et al., 2022). Digital platforms have empowered secondary stakeholders such as activists to
connect collectively and to exert pressure on corporations to address environmental harms and other human rights concerns
(Leonel et al., 2024; Leong et al., 2019). AI and big data analytics have been used to anticipate where future human rights
abuses may occur (such as modern slavery in fisheries; see Nakamura et al., 2018), and Internet of Things sensors can detect
pollution and similar environmental harms (Costa et al., 2023).
On the other hand, scholars also highlight the dark side of AI
and digital technology when it comes to social justice and human rights, including issues regarding how such technologies
deal with privacy, surveillance, trust, accuracy of information, access, willingness to pay, inequality, and inclusivity (see,
e.g., McGrath et al., 2021; Trittin-Ulbrich & Martin, 2022; Trittin-Ulbrich et al., 2021). For example, a recent review
of literature on AI across various disciplines finds that without intervention, technologies and AI could create widening
inequality among different parts of society (Lei & Kim, 2024). This is because technologies based on AI can worsen prospects
for workers – in large part by replacing different kinds of skills – leaving particular kinds of workers prone to exploitation
depending on their location or role in the workforce (Lei & Kim, 2024).
One example of the dark side of
technology is large technology companies such as Google and Apple facilitating an illegal online slave market by
providing and approving apps used for buying and selling domestic workers in Kuwait (BBC News Africa, 2019). Moreover, companies
can use technology to surveil workers in ways that can lead to human rights violations, including but not limited to freedom
of association and privacy rights (De Stefano & Taes, 2023). Furthermore, technology can be used by businesses and governments
to monitor and harass activists protesting against perceived business irresponsibility (Storbeck et al., 2025; Zalnieriute,
2025).
Additionally, there appear to be fundamental problems with the advent of organizational interventions
governed by AI and machine-learning solutions in place of human judgment (Lei & Kim, 2024). This is because protecting
human rights involves imagination and human judgment to anticipate risks posed by operations before the harm is caused. Solutions
based on the analysis of data coupled with algorithmic decision-making are thus likely to fall short of the aim of
protecting human rights, especially because algorithms cannot anticipate correct judgments (Moser et al., 2022). Even
with perfect information there is rarely a single course of action that applies in general or ongoing ways, but rather a weighing
up of alternatives based on sensitivity to contextual circumstances (Crane et al., 2019). Therefore, an outcome at one point
of time is not necessarily the universal answer in the future, which is problematic because the expectations for human rights
due diligence include an ongoing and dynamic learning process, informed and adapted based on previous and continuous learning
(Rogerson et al., 2024).
There are yet other possible ways in which the use of technology by business can
potentially lead to violations of human rights. Social media (among other technologies) can allow people to connect with each
other, but it can also facilitate human rights violations in myriad ways that include state oppression of dissidents and other
disfavored groups (Gondwe, 2024; Workneh, 2021) as well as fueling forms of discrimination that include racism (Matamoros-Fernández
& Farkas, 2021), sexism (Rodríguez-Sánchez et al., 2024), and homophobia (Sánchez-Sánchez et al., 2024). The case of social
media suggests that concerns about technology and human rights in business relate not only to direct violations of rights
by businesses, but also to violations of human rights by governments and other social actors that are facilitated by technologies
created and distributed by businesses.
Knowledge about what technology can enable, and what challenges
it brings to the role of business in society, must thus be a key focus for firms seeking to ensure the protection of human rights
as they increasingly digitalize their due diligence mechanisms as well as their wider operations. It is important
to understand that the use of technology is value-laden rather than value-neutral, and its use therefore must be assessed
critically and comprehensively in the light of potential effects on human rights (den Hond & Moser, 2023), especially
because technology introduces new risks for managers who need to be alert to its potential for harm within their operations.
For example, technologies adopted to strengthen rational decision-making may also perpetuate social division and inequality
– an outcome at odds with a firm's efforts to protect human rights (Joyce et al., 2021; Zajko, 2022). There is an urgent
need to reframe theories and practices concerning business and human rights in the context of technology adoption, which this
proposed stream seeks to critically examine.
Relevant research questions for this sub-theme could include
the following:
What is the impact of technological tools on power, trust, and inequality in global supply chains?
What value is created from the use of these technological tools for responsible supply chains and human rights due diligence, and who captures that value?
Can technology allow affected rightsholders to achieve respect for their human rights and remedy for human rights harms?
Transparency: How might tensions be resolved between the public's desire for greater transparency, users' desire for privacy, and businesses' desire to retain information and protect their reputation while demonstrating responsibility?
What is the impact of technologies on increasing transparency about firms’ human rights practices?
In terms of governance, what kinds of initiatives can firms and/or industries take to maximize the value of new technologies and/or address the dark side of technologies with respect to human rights?
In terms of technology governance, what steps have governmental actors (country, international) and/or social activists taken to address these new technologies and their impacts on human rights?
What is the impact of technological tools on stakeholders who are either enacting human rights policies and/or impacted by a firm’s human rights approaches?
Does the use of technology make it harder (or easier) for stakeholders seeking to organize collectively and/or protest against human rights violations brought about by businesses?
How does social media facilitate human rights violations, and what are the responsibilities of businesses to ameliorate and remedy such harms?
What is the (positive, negative, and mixed) impact of technology on the likelihood of human rights violations?
The sub-theme convenors are open to a variety of
theoretical positions, research modalities, and epistemological stances in addressing these and related research questions germane to
the interplay between technology and human rights in business.
References
- Al-Billeh, T., Al-Hammouri, A., Khashashneh, T., Makhmari, M.A., & Al Kalbani, H. (2024): “Digital evidence in human rights violations and international criminal justice.” Journal of Human Rights, Culture and Legal System, 4 (3), 842–871.
- BBC News Africa (2019): “Maids for sale: Silicon Valley’s online slave market.” Documentary, https://www.bbc.co.uk/news/av/technology-50240012.
- Berg, L., Farbenblum, B., & Kintominas, A. (2020): “Addressing exploitation in supply chains: Is technology a game changer for worker voice?” Anti-Trafficking Review, 14, 47–66.
- Costa, F., Frecassetti, S., Rossini, M., & Portioli-Staudacher, A. (2023): “Industry 4.0 digital technologies enhancing sustainability: Applications and barriers from the agricultural industry in an emerging economy.” Journal of Cleaner Production, 408, https://doi.org/10.1016/j.jclepro.2023.137208.
- Clarke, A., & Lynes, J. (2020): “Online education for responsible management.” In: D.C. Moosmayer, O. Laasch, C. Parkes, & H.G. Brown (eds.): The SAGE Handbook of Responsible Management Learning and Education. Thousand Oaks, CA: SAGE Publications, 332–344.
- Crane, A., Matten, D., Glozer, S., & Spence, L.J. (2019): Business Ethics: Managing Corporate Citizenship and Sustainability in the Age of Globalization. Oxford, UK: Oxford University Press.
- den Hond, F., & Moser, C. (2023): “Useful servant or dangerous master? Technology in business and society debates.” Business & Society, 62 (1), 87–116.
- De Stefano, V., & Taes, S. (2023): “Algorithmic management and collective bargaining.” Transfer: European Review of Labour and Research, 29 (1), 21–36.
- Dupuis, M. (2025): “Algorithmic management and control at work in a manufacturing sector: Workplace regime, union power and shopfloor conflict over digitalization.” New Technology, Work and Employment, 40 (1), 81–101.
- Emanuilov, I., & Yordanova, K. (2022): “Business and human rights in Industry 4.0: A blueprint for collaborative human rights due diligence in the factories of the future.” Journal of Responsible Technology, 10, https://doi.org/10.1016/j.jrt.2022.100028.
- Gondwe, G. (2024): “Digital natives, digital activists in non-digital environments: How the youth in Zambia use mundane technology to circumvent government surveillance and censorship.” Technology in Society, 79, https://doi.org/10.1016/j.techsoc.2024.102741.
- Joyce, K., Smith-Doerr, L., Alegria, S., Bell, S., Cruz, T., Hoffman, S.G., Noble, S.U., & Shestakofsky, B. (2021): “Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change.” Socius, 7, https://doi.org/10.1177/2378023121999581.
- Kulkarni, M., Mantere, S., Vaara, E., van den Broek, E., Pachidi, S., Glaser, V.L., Gehman, J., Petriglieri, G., Lindebaum, D., Cameron, L.D., Rahman, H.A., Islam, G., & Greenwood, M. (2024): “The Future of Research in an Artificial Intelligence-Driven World.” Journal of Management Inquiry, 33 (3), 207–229.
- Lei, Y.W., & Kim, R. (2024): “Automation and Augmentation: Artificial Intelligence, Robots, and Work.” Annual Review of Sociology, 50, 251–272.
- Leonel, R., Rehbein, K., Westermann‐Behaylo, M.K., & Perrault, E. (2024): “Firms’ response to slacktivism: When and why are E‐petitions effective?” Journal of Management Studies, 61 (7), 3148–3183.
- Leong, C., Pan, S., Bahri, S., & Fauzi, A. (2019): “Social media empowerment in social movements: Power activation and power accrual in digital activism.” European Journal of Information Systems, 28 (2), 173–204.
- Matamoros-Fernández, A., & Farkas, J. (2021): “Racism, hate speech, and social media: A systematic review and critique.” Television & New Media, 22 (2), 205–224.
- McGrath, P., McCarthy, L., Marshall, D., & Rehme, J. (2021): “Tools and technologies of transparency in sustainable global supply chains.” California Management Review, 64 (1), 67–89.
- Moser, C., den Hond, F., & Lindebaum, D. (2022): “Morality in the age of artificially intelligent algorithms.” Academy of Management Learning & Education, 21 (1), 139–155.
- Nakamura, K., Bishop, L., Ward, T., Pramod, G., Thomson, D.C., Tungpuchayakul, P., & Srakaew, S. (2018): “Seeing slavery in seafood supply chains.” Science Advances, 4 (7), https://www.science.org/doi/10.1126/sciadv.1701833.
- Rodríguez-Sánchez, F., Carrillo-de-Albornoz, J., & Plaza, L. (2024): “Detecting sexism in social media: An empirical analysis of linguistic patterns and strategies.” Applied Intelligence, 54 (21), 10995–11019.
- Rogerson, M., & Parry, G.C. (2020): “Blockchain: Case studies in food supply chain visibility.” Supply Chain Management: An International Journal, 25 (5), 601–614.
- Rogerson, M., Scarpa, F., & Snelson-Powell, A. (2024): “Accounting for human rights: Evidence of due diligence in EU-listed firms’ reporting.” Critical Perspectives on Accounting, 99, https://doi.org/10.1016/j.cpa.2024.102716.
- Sánchez-Sánchez, A.M., Ruiz-Muñoz, D., & Sánchez-Sánchez, F.J. (2024): “Mapping homophobia and transphobia on social media.” Sexuality Research and Social Policy, 21 (1), 210–226.
- Searcy, C., Castka, P., Mohr, J., & Fischer, S. (2022): “Transformational transparency in supply chains: Leveraging technology to drive radical change.” California Management Review, 65 (1), 19–43.
- Smit, L., Holly, G., McCorquodale, R., & Neely, S. (2021): “Human rights due diligence in global supply chains: Evidence of corporate practices to inform a legal standard.” The International Journal of Human Rights, 25 (6), 945–973.
- Storbeck, M., Jacobs, G., Schuilenburg, M., & van den Akker, R. (2025): “Surveillance experiences of extinction rebellion activists and police: Unpacking the technologization of Dutch protest policing.” Big Data & Society, 12 (1), https://doi.org/10.1177/20539517241307892.
- Trittin-Ulbrich, H., & Martin, K. (2022): “Towards a Human-Centered View on Digital Technologies.” Journal of Business Ethics, https://ssrn.com/abstract=4238099.
- Trittin-Ulbrich, H., Scherer, A.G., Munro, I., & Whelan, G. (2021): “Exploring the dark and unexpected sides of digitalization: Toward a critical agenda.” Organization, 28 (1), 8–25.
- Workneh, T.W. (2021): “Social media, protest, & outrage communication in Ethiopia: toward fractured publics or pluralistic polity?” Information, Communication & Society, 24 (3), 309–328.
- World Economic Forum (2024): “Fourth Industrial Revolution: What is ‘Industry 4.0’ and what does it mean for front-line workers?” January 8, 2024, https://www.weforum.org/stories/2024/01/industry-4-fourth-industrial-revolution-workers/.
- Zajko, M. (2022): “Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates.” Sociology Compass, 16 (3), https://doi.org/10.1111/soc4.12962.
- Zalnieriute, M. (2025): “Facial recognition surveillance and public space: Protecting protest movements.” International Review of Law, Computers & Technology, 39 (1), 116–135.

