Sub-theme 73: AI Power, Organization & Subjectivity: How Does AI Affect Our Subjectivities and Identities?

To upload your short paper, please log in to the Member Area.
Convenors:
David Knights
Lancaster University, United Kingdom
Guy Huber
Oxford Brookes University, United Kingdom
Ella W. Hafermalz
Vrije Universiteit Amsterdam, The Netherlands

Call for Papers


AI programs are “more and more pervasive in our day-to-day lives… turning into an integral, seamless” aspect of living (Fowler in Forbes, June 22, 2020). Indeed, over a short period of time, AI has become a daily news item and, regardless of our intentions, it is now impossible to avoid interacting with, or “speaking to”, AI, if only because of its displacement of human-to-human interaction in everyday activities. From corporate customer relations to social media communications, as well as a diverse range of other commercial, work-related and domestic usages, we cannot be other than deeply affected by AI. Social scientists see it as ‘affecting our very existence’ (Bucher, 2016), yet we remain largely unaware of the hidden algorithms doing the heavy lifting on social platforms.
 
The part that technology plays in exercising power has long been critical to organization studies (Beyes et al., 2022; Coombs et al., 1992; Fleming & Spicer, 2014; Orlikowski, 2000). For example, technologies may transform workers into what have been called “cyborgs” (Haraway, 1985), who enact innovative identities in relation to technological power. However, less understood is how our engagement with AI transforms us into beings who secure our “sense of meaning, identity and reality” through participating in discourses and practices invoked by this power and knowledge (Knights, 2006: 708). We wish to promote the interrogation of this problematic by authors drawing on wide-ranging research practices and theory.
 
Within the organization and management literature there is an embryonic critique of the dangers of AI negatively affecting work relations, including its production of forms of discrimination; yet the potentially dangerous implications of AI power for subjectivity have yet to be examined in relation to identity scholarship. Our interactions with AI have important implications for how we think, feel and act, not least because of our propensity to fashion “ourselves and our social life in the image that the technology is creating of us” (Moser et al., 2022: 140). This potentially dangerous impact is often ignored if not denied, perhaps because we have yet to “reframe the concept of technology” as a discursive practice that performatively ‘constrains and enables everyday life’ (Leclercq-Vandelannoitte, 2011: 1247). AI exercises power in ways that affect our subjectivity because we freely identify with – and participate in – the pursuits enabled by its presence. What kinds of effects, affects, emotions and deep unconscious responses might this unleash (Hayles, 2010) through the identities with which, alongside others, we author our own lives?
 
AI is not some great conspiracy designed to control the world and all its inhabitants but more like a force, neither necessarily good nor bad (Foucault, 1997). For example, algorithms often produce responses from workers whose practices, including identity work, are managed at a distance through AI surveillance (Fleming, 2017). At Uber, for example, workers are reliant on an algorithm that disciplines them through the threat of exile (Hafermalz, 2021). Yet, Uber’s drivers do not necessarily experience AI negatively, as the algorithm allows them to decide when they would like to work, helpfully provides directions, and offers positive, self-reinforcing feedback in a way that some may not have experienced before in the workplace.
 
But this might be because of AI’s subtlety in capturing our willing compliance with its demands. For while AI can transform the way we work in positive and negative ways, there is little doubt that it exercises power that affects the subjectivity of all those who interact with it. Yet, as with many other responses to power that seem inevitable, workers’ ambivalence may signal the onset of apathy towards AI rather than a “pessimistic activism” (Foucault, 1991: 343) whereby we might refuse to be subdued when faced with a possibly overwhelming force. As Bucher (2016: 35) notes, “It is not just [that] the categories and classifications that algorithms rely on match our own sense of self, but to what extent we come to see ourselves through the ‘eyes’ of the algorithm” and become deeply attached to what we see. AI is involved in processes of interpellation, often in ways we struggle to comprehend, when it appeals to our attachment to identities.
 
This may be so even though we can question AI’s discursive practice through the availability of discourses that contradict and problematize our thinking. However, if AI exercises a dominant voice in the affairs of people, then this cannot be democratic, or inclusive, in nature. It is instead authoritarian and hegemonic in effect, producing like-minded thinking in those with whom it interacts. Hegemony is a cultural process whereby power is exercised asymmetrically to mobilize and/or manipulate people’s perceptions, feelings and behaviour in specific ways (Gramsci, 1971). Is there not a danger of AI power generating hegemonies while simultaneously silencing alternative discourses that produce the possibility of us thinking and acting differently?
 
Our sub-theme explores the relationship between AI power, organizing and identity formation, as well as the dangers of failing to question its social construction. We will extend the above debates by redressing questions of how AI affects subjectivities in and around work, both intentionally (through the logic of management) and perhaps unintentionally (through the discursive practices that it enacts). We expect papers to “open up” debates within the field of organization studies and management. AI is dangerous in so far as it affects our subjectivity, but whether for “good” or “bad” is open to debate (Foucault, 1997), and maybe the problem lies in the very binaries that moral debates invoke. AI, technology, organizations, discursive practices, subjectivities, and identities are enmeshed and mutually constituting, which suggests there is an array of possible research designs and theories that one might engage to interrogate our concerns (e.g., Barad, 2007; Deleuze, 1996; Foucault, 1984; Mead, 1934).
 
This sub-theme will be particularly attentive to AI power, including possible forms of resistance, as well as the power relations implicit in AI design. We will be open to work based on discursive approaches to AI, including but not limited to posthumanist, sociomaterial, actor-network and performative framings. While we expect all paper submissions to address AI, we will accept papers that do not engage with every major theme of the sub-theme, because we wish to encourage novel and thought-provoking work that stretches our imaginations. Below is a list of suggestive rather than exhaustive themes that might be addressed:

  • How does AI produce performativity through the exercise of power?

  • How might AI potentially endanger our subjectivity and welfare?

  • What ethics should we consider in the development of AI power in and around work?

  • How might AI surveillance affect workplace ethics, including morally orientated resistance? For example, does it potentially constrain ethical practices such as whistle-blowing?

  • What identities and subjectivities does AI “call us” towards?

  • How might AI affect alternative, queer, feminine and/or marginalised subjectivities?

  • How do people (potentially) resist the logic of the algorithm and the language/discourses it enacts?

  • How can we incorporate continental and New World philosophies – for example, phenomenology, genealogy and American pragmatism – into our research analysis?

  • How can we be creative in our research write-ups in terms of our own interactions with AI and its power effects?

 


References


  • Barad, K. (2007): Meeting the Universe Halfway. Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke University Press.
  • Beyes, T., Chun, W.H.K., Clarke, J., Flyverbom, M., & Holt, R. (2022): “Ten theses on technology and organization: Introduction to the special issue.” Organization Studies, 43 (7), 1001–1018.
  • Bucher, T. (2016): “Neither black nor box: Ways of knowing algorithms.” In S. Kubitschko & A. Kaun (eds.): Innovative Methods in Media and Communication Research. Cham: Palgrave Macmillan, 81–98.
  • Coombs, R., Knights, D., & Willmott, H.C. (1992): “Culture, control and competition: Towards a conceptual framework for the study of information technology in organizations.” Organization Studies, 13 (1), 51–72.
  • Deleuze, G. (1996): Deleuze: A Critical Reader, edited by P. Patton. Cambridge, MA: Blackwell Publishers Inc.
  • Fleming, P. (2017): “The human capital hoax: Work, debt and insecurity in the era of Uberization.” Organization Studies, 38 (5), 691–709.
  • Fleming, P., & Spicer, A. (2014): “Power in management and organization science.” Academy of Management Annals, 8 (1), 237–298.
  • Foucault, M. (1984): “Of Other Spaces: Utopias and Heterotopias” (translated by J. Miskowiec). Architecture/Mouvement/Continuité, 1–9.
  • Foucault, M. (1991): The Foucault Reader, edited by P. Rabinow. London: Penguin.
  • Foucault, M. (1997): Ethics: Subjectivity and Truth. The essential Works of Foucault 1954–1984, edited by P. Rabinow. New York: The New Press.
  • Fowler, G. (2020): “AI And Consciousness: Could It Become ‘Human’?” Forbes, https://www.forbes.com/councils/forbesbusinessdevelopmentcouncil/2020/06/22/ai-and-consciousness-could-it-become-human/.
  • Gramsci, A. (1971): Selections from the Prison Notebooks, edited and translated by Q. Hoare & G. Nowell Smith. London: Lawrence & Wishart.
  • Hafermalz, E. (2021): “Out of the Panopticon and into Exile: Visibility and control in distributed new culture organizations.” Organization Studies, 42 (5), 697–717.
  • Haraway, D.J. (1985): “A Cyborg Manifesto: Science, Technology, and Socialist Feminism in the 1980s.” Socialist Review, 80, 65–107.
  • Hayles, N.K. (2010): “How we became posthuman: Ten years on. An interview with N. Katherine Hayles.” Paragraph, 33 (3), 318–330.
  • Knights, D. (2006): “Authority at work: reflections and recollections.” Organization Studies, 27 (5), 699–720.
  • Knights, D., & Huber, G. (2023): “AI: how it hands power to machines to transform the way we view the world.” The Conversation, November 16, 2023; https://theconversation.com/ai-how-it-hands-power-to-machines-to-transform-the-way-we-view-the-world-211632.
  • Leclercq-Vandelannoitte, A. (2011): “Organizations as discursive constructions: A Foucauldian approach.” Organization Studies, 32 (9), 1247–1271.
  • Mead, G.H. (1934/1962): Mind, Self and Society. Chicago: University of Chicago Press.
  • Moser, C., den Hond, F., & Lindebaum, D. (2022): “Morality in the age of artificially intelligent algorithms.” Academy of Management Learning and Education, 21 (1), 139–155.
  • Orlikowski, W.J. (2000): “Using technology and constituting structures: A practice lens for studying technology in organizations.” Organization Science, 11 (4), 404–428.

 

David Knights is Professor Emeritus at Lancaster University, United Kingdom, where he was a Distinguished Professor until November 2020. He was the co-founder and Editor-in-Chief of ‘Gender, Work and Organization’ from 1994–2016. David’s most recent publication is “AI: how it hands power to machines to transform the way we view the world”, in: The Conversation, November 16, 2023.
Guy Huber is a Senior Lecturer at the Oxford Brookes University Business School, United Kingdom. His primary research interests centre on power, subjectivity and identity. Guy has published in international scholarly journals including ‘Human Relations’ and ‘Organization Studies’, among others.
Ella W. Hafermalz is an Associate Professor at Vrije Universiteit Amsterdam, The Netherlands. Her research looks at how work is changing with the introduction of new technologies. Her research is published in international journals including ‘Academy of Management Discoveries’ and ‘Organization Studies’, among others.