Sub-theme 71: The Algorithm and the Decision: Foolishness, Playfulness, Capriciousness
Call for Papers
“Any sufficiently advanced technology is indistinguishable from magic.”
Arthur C. Clarke, 1962
Algorithms are increasingly entangled with processes and routines in organizations, aiding knowledge
workers in making decisions and performing their tasks, and – potentially – making decisions for them (Moser et al., 2022b).
They are fundamental in helping organizations understand complexity and deal with uncertainty (Glaser et al., 2021).
Although algorithms are designed to help organizations adapt to the environment and manage risks associated with uncertain
futures (Hardy & Maguire, 2016), they have also been found to prevent organizations from changing and to produce organizational
inertia (Omidvar et al., 2023). Algorithms may change organizational decision-making in terms of process and/or outcome (Lindebaum
et al., 2020), and there is a question of whether decisions that rely on human judgement can be substituted by data-driven
calculations (Moser et al., 2022a).
Decisions underpin the fabric of organizations. Although there are organizational
roles in which the primary task is to make decisions, e.g., about someone else’s job, strategic orientation, etc., everyone
in the organization, at one point or another, is faced with decision-making. In essence, making a decision is about making
a choice, and in the organizational context, there is an implicit assumption or expectation that decision-making is rational
(Brunsson & Brunsson, 2017). This view of decision-making is rooted in economic decision theory; however, its notion of
rationality is highly problematic. Kahneman, Slovic and Tversky (1982) were among the first to scrutinise conventional assumptions
about rational decision-making and to reveal the heuristics and biases in decisions. Other scholars pointed to many cases where
the probabilities required to make rational or optimised decisions are unavailable or cannot be calculated. There is growing
evidence that decisions are often rationalised after the fact, to lend legitimacy to choices that do not follow the so-called
logic of formal rationality.
Furthermore, formal rationality does not address aspects of power, whereby decision-making
becomes a political process (Bachrach & Baratz, 1962), which can manifest itself in, for example, nondecision-making (Bachrach
& Baratz, 1963) that limits the scope of decision-making to “safe” problems. Prior to these critiques, Herbert Simon had introduced
an alternative – bounded rationality in decision-making, based on satisficing rather than optimising behaviour (Simon,
1957). Yet, combining his research on decision-making with AI, Simon proposed that computation-based decision-making
in complex systems would become possible with a thinking machine (Heyck, 2008). Until recently, this idea remained technologically
unattainable. With recent developments in machine learning and generative AI, however, the dream of decision
computability may seem closer to fulfilment. Still, this computability rests on computable target function satisfaction or
optimization (Vesa & Tienari, 2022), which is different from decision-making as a social ‘act between humans’ (Lindebaum
& Fleming, 2024).
Thus, in this sub-theme, we wish to explore this relationship between the algorithm
and the decision, asking: “How do AI systems become appropriated into the social practices of decision-making, and then what?”
We are inherently sceptical of the rationality often assumed in this process, noting that the materiality of algorithms, the
practices of decision-making, and the ideologization of rationality are all socially constructed. Hence, the ‘magic’ of the
algorithm is subject to the same human condition from which it arose. We need to pay much closer attention to how the calculated
outputs of algorithms (e.g., predictions, solutions, proposals, content, descriptions, analysis, etc.) are consumed and appropriated
within the social practice of decision-making. In particular, we need to better understand what happens to such outputs
when they interlace with basic human foolishness, playfulness, and capriciousness, as well as cunningness, pursuit of interests,
office politics, and manipulation.
We invite scholars to contribute creative ways of engaging with
and studying algorithms in the context of organizational decision-making. We encourage scholars to examine the impact of
introducing algorithms to support decision-making at multiple levels of analysis (individual, organizational, and inter-organizational),
and to consider how algorithms affect organizational processes and practices (Glaser et al., 2021). The theme can be explored
from different angles and with reference to diverse and less-than-conventional methodologies, empirical settings, and topics.
For example, studies at the intersection of simulations and games have been around for the past 40 years (Harviainen &
Stenros, 2021).
In recent years, we have seen the emergence of AI creativity and examples of AI-assisted
development of creative outputs (Amabile, 2020), calling for research into how AI might impact innovation practices, consumers,
organizations, and society as we know it. The implementation of algorithms at work more broadly reshapes the dynamics among
workers and between workers and algorithms. For instance, it can lead to anthropomorphizing of algorithms within communities
of practice (Spanellis & Pyrko, 2021) or the emergence of new learning configurations and partnerships in occupational
communities (Beane & Anthony, 2024). It is often met with resistance, leading to emergent algo-activism (Kellogg et al.,
2020), as well as invoking basic human tendencies, such as foolishness and capriciousness. Examples such as these offer an opportunity
for studying how algorithms reshape organizational decision-making.
We welcome submissions in diverse formats
and traditions, including theory papers, essays, and empirical work. Our take on methodologies and fieldwork is inclusive. Possible
avenues of inquiry include but are not limited to:
- Algorithms and organizational practices and processes, e.g., how organizational practices of decision-making are reshaped by algorithms.
- New and creative learning configurations and partnerships in response to the introduction of algorithms in learning practices.
- Algo-activism, algo-scepticism, and resistance to AI in organizational decision-making.
- Playing with algorithms, e.g., emergent playfulness in organizations using algorithmic technologies, the interplay between human and machine identity, the symbiosis between algorithms and gameful technologies.
- Algorithms and creativity, e.g., how algorithms affect creative and generative organizational processes, or enable experimentation in decision-making.
- AI-driven or influenced gamification of organizations and organizing.
- The interlace between algorithms and basic human foolishness, capriciousness, cunningness, and pursuit of interests.
- Hacking the work mediated by algorithms.
- The politics of algorithmic power, e.g., how algorithms alter existing power relations and become an instrument of office politics and manipulation in decision-making.
- Creative methodologies, i.e., creative approaches to studying algorithms.
References
- Amabile, T. (2020): “Creativity, Artificial Intelligence, and a World of Surprises.” Academy of Management Discoveries, 6 (3), 351–354.
- Bachrach, P., & Baratz, M.S. (1962): “Two Faces of Power.” American Political Science Review, 56 (4), 947–952.
- Bachrach, P., & Baratz, M. S. (1963): “Decisions and Nondecisions: An Analytical Framework.” American Political Science Review, 57 (3), 632–642.
- Beane, M., & Anthony, C. (2024): “Inverted Apprenticeship: How Senior Occupational Members Develop Practical Expertise and Preserve Their Position When New Technologies Arrive.” Organization Science, 35 (2), 405–431.
- Brunsson, K., & Brunsson, N. (2017): Decisions: The Complexities of Individual and Organizational Decision-Making. Cheltenham, UK: Edward Elgar Publishing.
- Glaser, V.L., Pollock, N., & D’Adderio, L. (2021): “The Biography of an Algorithm: Performing Algorithmic Technologies in Organizations.” Organization Theory, 2 (2), https://doi.org/10.1177/26317877211004609.
- Hardy, C., & Maguire, S. (2016): “Organizing Risk: Discourse, Power, and ‘Riskification’.” Academy of Management Review, 41 (1), 80–108.
- Harviainen, J.T., & Stenros, J. (2021): “Central Theories of Games and Play.” In: Vesa, M. (ed.): Organizational Gamification. New York: Routledge. 20–39.
- Heyck, H. (2008): “Defining the Computer: Herbert Simon and the Bureaucratic Mind – Part 1.” IEEE Annals of the History of Computing, 30 (2), 42–51.
- Kahneman, D., Slovic, P., & Tversky, A. (1982): Judgment Under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press.
- Kellogg, K.C., Valentine, M.A., & Christin, A. (2020): “Algorithms at Work: The New Contested Terrain of Control.” Academy of Management Annals, 14 (1), 366–410.
- Lindebaum, D., & Fleming, P. (2024): “ChatGPT Undermines Human Reflexivity, Scientific Responsibility and Responsible Management Research.” British Journal of Management, 35 (2), 566–575.
- Lindebaum, D., Vesa, M., & den Hond, F. (2020): “Insights From ‘The Machine Stops’ to Better Understand Rational Assumptions in Algorithmic Decision Making and Its Implications for Organizations.” Academy of Management Review, 45 (1), 247–263.
- Moser, C., den Hond, F., & Lindebaum, D. (2022a): “Morality in the Age of Artificially Intelligent Algorithms.” Academy of Management Learning & Education, 21 (1), 139–155.
- Moser, C., den Hond, F., & Lindebaum, D. (2022b): “What Humans Lose When We Let AI Decide.” MIT Sloan Management Review, 63 (3), 12–14.
- Omidvar, O., Safavi, M., & Glaser, V.L. (2023): “Algorithmic Routines and Dynamic Inertia: How Organizations Avoid Adapting to Changes in the Environment.” Journal of Management Studies, 60 (2), 313–345.
- Simon, H.A. (1957): Models of Man: Social and Rational; Mathematical Essays on Rational Human Behavior in a Social Setting. New York: John Wiley & Sons.
- Spanellis, A., & Pyrko, I. (2021): “Gamifying Communities of Practice: Blending the Modes of Human–Machine Identification.” In: Vesa, M. (ed.): Organizational Gamification. New York: Routledge, 90–108.
- Vesa, M., & Tienari, J. (2022): “Artificial Intelligence and Rationalized Unaccountability: Ideology of the Elites?” Organization, 29 (6), 1133–1145.