Sub-theme 11: [SWG] Explaining AI in the Context of Organizations

Marleen Huysman
Vrije Universiteit Amsterdam, The Netherlands
Ella W. Hafermalz
Vrije Universiteit Amsterdam, The Netherlands
Reza Mousavi Baygi
Vrije Universiteit Amsterdam, The Netherlands

Call for Papers

“The computer says no” is a statement most are not happy to accept. Yet as algorithmic decision-making, increasingly reliant on machine learning (Faraj, Pachidi, & Sayegh, 2018), becomes a normal component of organizational processes, employees, managers, and clients face a reality in which machines make ‘decisions’ that are implemented without meaningful avenues for questioning and redress.
As users and stakeholders working with such AI systems grow increasingly aware of issues of bias, discrimination, and inaccuracy (AI NOW, 2018), and as public concerns about a black-box society (Pasquale, 2015) and surveillance capitalism (Zuboff, 2019) mount, the need to ensure that systems are fair, transparent, and accountable has rapidly grown. The intense focus on these issues has produced a general consensus that “explainability” is a critical and urgent step towards ensuring that AI systems can be trusted and safely integrated across many sensitive and high-impact domains (HLEG AI, 2019).
However, the discussion and developments around Explainable AI have thus far been confined to relatively siloed communities of technologists, policy makers, and ethicists, exposing a gap that may prevent meaningful and productive progress: the lack of integration between policy makers, technologists, and the contexts of practice aggravates what has been called the “sociotechnical gap” between technical feasibility and social requirements (Ackerman, 2000). This gap has been identified as the main hurdle to ensuring the adoption and economic success of new technologies and media.
This gap arises in part because technologists have mostly developed technical means to “interpret” the internal workings of algorithms and machine learning models (Došilović, Brčić, & Hlupić, 2018; Guidotti et al., 2018). While such techniques are sorely needed, it remains a challenge to bridge the gap between these technical interpretations and the actual social needs of a context of use and organizational practice – for example, with regard to how such tools are embedded in specific knowledge practices (Andrejevic, 2020).
In particular, what is lacking in the current discussions on Explainable AI is empirical evidence from organizations that employ AI tools. As long as we do not incorporate organizational insights about the need for Explainable AI and existing local solutions, many present and future technological solutions and policy guidelines may miss the point. As Introna (2016) points out, hard-to-scrutinize algorithms are “subsumed in the flow of daily practices”, which urges us to study and develop explainability that is situated in the web of different domains of knowledge and subjectivities enacted through governing practices. To understand how explanations around AI play out in practice, and to close the gap between technical solutions and social requirements, we need to expand the view of Explainable AI to include an organizational practice perspective.
An organizational practice perspective on Explainable AI invites us to look at how AI is used and developed in situ (Hafermalz & Huysman, 2021). In particular, we want to stress that the development and use of AI relies on a host of stakeholders: for example, data scientists, domain experts, end-users, brokers, and managers (Waardenburg, Huysman, & Agterberg, 2021; Zhang et al., 2020). “Explanations” may therefore play a role beyond that of aspiring to technical transparency, for example in matters of persuasion, politics, and rhetoric (Alvesson, 2001). Further, explanations that pertain to AI may be constructed in ways that range from mostly social (Runde & de Rond, 2010) to mostly technical, with intentions that range from ethical transparency to influence. Unearthing the specificities of sociomaterial practices of explanation across different organizational contexts will add needed richness and nuance to the broader Explainable AI (XAI) initiative.
An organizational practice perspective is suited to answering the following questions, which have so far been left mostly unexplored:

  • What does Explainable AI mean in the context of organizations, and what are its (unintended) consequences?

  • Who is the actor or actors in need of Explainable AI and do these various actors call for different solutions (Kirsch, 2017)?

  • When do these actors need explanations from AI? Why do we need Explainable AI in the context of organizational practice?

Addressing the above questions will give us better insight into how to design Explainable AI that addresses the sociotechnical gap between design and use (Bailey & Barley, 2020).
Contributions can include but are not limited to:

  • Case studies of organizational uses of AI with attention paid to explainability, accountability, visibility, and transparency (Ananny & Crawford, 2018; Flyverbom et al., 2016).

  • Studies of Explainable AI in terms of knowing practices (Pachidi et al., 2020), knowledge management, the shareability and automation of knowledge (Andrejevic, 2020).

  • Explorations of open AI initiatives (such as OpenAI) and their organizational principles.

  • Conceptual reflections on the limits of transparency and explainability, also in relation to organizational opacities (Burrell, 2016; Geiger, 2017; Roberts, 2009; Tsoukas, 1997).

  • Empirical process studies of how explainability is considered and/or 'built in' during the design and development of AI systems.

  • Methodological reflections on reverse engineering or otherwise opening up the ‘black boxes’ of AI (Kitchin, 2017), archaeologies of the operations of AI (Mackenzie, 2017).

  • Empirically informed ethical reflections on the use of AI in organizational contexts.

  • A stakeholder perspective on Explainable AI in organizations that looks at explanations as a rhetorical device rather than (only) an avenue for achieving transparency.

References
  • Ackerman, M.S. (2000): “The intellectual challenge of CSCW: the gap between social requirements and technical feasibility.” Human–Computer Interaction, 15 (2–3), 179–203.
  • AI NOW (2018): AI Now Report 2018. New York: AI Now Institute.
  • Alvesson, M. (2001): “Knowledge work: Ambiguity, image and identity.” Human Relations, 54 (7), 863–886.
  • Ananny, M., & Crawford, K. (2018): “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society, 20 (3), 973–989.
  • Andrejevic, M. (2020): “Data Civics: A Response to the ‘Ethical Turn’.” Television & New Media, 21 (6), 562–567.
  • Bailey, D.E., & Barley, S.R. (2020): “Beyond design and use: How scholars should study intelligent technologies.” Information and Organization, 30 (2), 100286.
  • Burrell, J. (2016): “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society, 3 (1).
  • Došilović, F.K., Brčić, M., & Hlupić, N. (2018): Explainable artificial intelligence: A survey. Paper presented at the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija (Croatia), May 21–25, 2018.
  • Faraj, S., Pachidi, S. & Sayegh, K. (2018): "Working and organizing in the age of the learning algorithm." Information and Organization, 28 (1), 62–70.
  • Flyverbom, M., Leonardi, P., Stohl, C., & Stohl, M. (2016): “The Management of Visibilities in the Digital Age – Introduction.” International Journal of Communication, 10, 98–109.
  • Geiger, R.S. (2017): “Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture.” Big Data & Society, 4 (2).
  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D. (2018): “A survey of methods for explaining black box models.” ACM Computing Surveys (CSUR), 51 (5), 1–42.
  • Hafermalz, E., & Huysman, M. (2021): “Please Explain: Key Questions for Explainable AI Research from an Organizational Perspective.” Morals + Machines, 1 (2), 10–23.
  • HLEG AI (2019): Ethics Guidelines for Trustworthy AI. Brussels: European Commission.
  • Introna, L.D. (2016): "Algorithms, governance, and governmentality: On governing academic writing." Science, Technology, & Human Values, 41 (1), 17–49.
  • Kirsch, A. (2017): Explain to Whom? Putting the User in the Center of Explainable AI. Paper presented at the First International Workshop on Comprehensibility and Explanation in AI and ML 2017, co-located with the 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017).
  • Kitchin, R. (2017): "Thinking critically about and researching algorithms." Information, Communication & Society, 20 (1), 14–29.
  • Mackenzie, A. (2017): Machine Learners: Archaeology of a Data Practice. Cambridge: MIT Press.
  • Pachidi, S., Berends, H., Faraj, S., & Huysman, M. (2020): “Make Way for the Algorithms: Symbolic Actions and Change in a Regime of Knowing.” Organization Science, 32 (1), 18–41.
  • Pasquale, F. (2015): The Black Box Society. Cambridge: Harvard University Press.
  • Roberts, J. (2009): “No one is perfect: The limits of transparency and an ethic for ‘intelligent’ accountability.” Accounting, Organizations and Society, 34 (8), 957–970.
  • Runde, J., & de Rond, M. (2010): “Evaluating causal explanations of specific events.” Organization Studies, 31 (4), 431–450.
  • Tsoukas, H. (1997): "The tyranny of light: The temptations and the paradoxes of the information society." Futures, 29 (9), 827–843.
  • Waardenburg, L., Huysman, M., & Agterberg, M. (2021): Managing AI Wisely: From Development to Organizational Change in Practice. Cheltenham: Edward Elgar Publishing.
  • Zhang, Z., Nandhakumar, J., Hummel, J., & Waardenburg, L. (2020): “Addressing the key challenges of developing machine learning AI systems for knowledge-intensive work.” MIS Quarterly Executive, 19 (4), Article 5.
  • Zuboff, S. (2019): The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.
Marleen Huysman is Professor of Knowledge and Organization at the School of Business and Economics, Vrije Universiteit Amsterdam, The Netherlands, where she leads the KIN research group and the KIN Center for Digital Innovation. She teaches and publishes on topics related to the practices of developing and using digital technologies – in particular artificial intelligence – and new ways of working. Marleen’s research has been published in various leading journals in the fields of information systems and organization science.
Ella W. Hafermalz is an Associate Professor in the KIN Research Group at the Vrije Universiteit Amsterdam, The Netherlands. She studies how digital technologies play a role in new and old ways of working and organizing. Ella’s research investigates the experience of new ways of working, taking a practice-based perspective to theorise remote work, AI@Work, and the digitalisation of work. She has published her work in, among others, ‘Organization Science’, ‘Organization Studies’, ‘The European Journal of Information Systems’, ‘Information Systems Journal’, and ‘Information and Organization’.
Reza Mousavi Baygi is an Assistant Professor at the KIN Center for Digital Innovation of the School of Business and Economics at the Vrije Universiteit (VU) Amsterdam, The Netherlands. His research interests relate to novel forms of work, organizing, and activism through social media and other multi-sided digital platforms. Reza approaches these topics through practice and process perspectives, with an interest in questions of temporality and performativity in human–technology relations.