Sub-theme 60: (Ir)responsible Uses of Technologies and Human-Centered Future of Work: Individual, Organizational, and Societal Dilemmas and Implications

Convenors:
Aizhan Tursunbayeva
University of Naples Parthenope, Italy
Luigi Moschera
University of Naples Parthenope, Italy
Daniel Samaan
International Labour Organization (ILO), Switzerland

Call for Papers


Advances in technologies such as Artificial Intelligence (AI) and its applications, including intelligent robots, algorithmic decision-making, and large language models, are transforming individuals, organizations, and society. For employees, the adoption of such technologies can have significant implications for their privacy, autonomy, opportunities, incomes, behaviours, and overall well-being (Pereira et al., 2023). The potential for decision biases or discrimination is also a concern, particularly as new flexible and demand-based working arrangements emerge at the same time, and the nature of work and its allocation are being transformed (Alfes et al., 2022). For organizations, the adoption of new technologies can have operational, financial, and reputational implications, as well as trigger a transformation of core institutional values, norms, and rules (Orlikowski & Scott, 2023). At the societal level, the adoption of new technologies can have consequences for environmental and social sustainability, the functioning of the labour market, and regulatory changes, among others (Tursunbayeva et al., 2022).
 
The sensitivity of responsibly deploying AI applications is evident from regulatory initiatives addressing AI in the workplace, such as the EU’s AI Act, and the diffusion of Responsible AI guidelines and principles aimed at mitigating and preventing adverse effects that AI may bring to society (Jobin et al., 2019). Such sensitivity is also a focal point of debate among interdisciplinary researchers studying technologies and work. Concerns have been raised about the complexity of AI and Robotics and their integration into work processes and organizations, the robustness and validity of AI-powered applications for HRM practices (e.g., assessment methods), and the challenges they pose in terms of transforming professions, occupations, and the required skill sets and knowledge (Orlikowski & Scott, 2023). These challenges highlight the need to examine the dynamic manifestations of such phenomena at different levels (Renkema et al., 2017), and within increasingly complex institutional contexts characterized by multiple and often conflicting values, norms, and behaviours (Greenwood et al., 2011).
 
One obstacle to managing these changes is the difficulty stakeholders can face in understanding how these technologies actually work, with algorithms and data flows often being opaque. In addition, many individuals lack the skills needed to fully embrace exponential technologies at/for work. A key question underlying these technologies is how “human-centered” and responsible AI-based technologies are. On the one hand, they are sometimes portrayed as empowering, enabling, and beneficial to employees; on the other hand, they give management more power to quantify, track, incentivize, and discipline their staff (among others). More knowledge and understanding of how these technologies are evolving and being used for organizing and managing (Leonardi, 2021), as well as of their soft (Tursunbayeva & Renkema, 2023) and hard impacts at multiple levels, are therefore needed if the goal of “human-centeredness” in organizations is to be achieved and a decent future of work for all is to be ensured.
 
Given this rapid diffusion of AI applications within organizations, the resulting transformations of organizational processes and practices, and the ethical concerns and risks they introduce for a sustainable and human-centered future of people at work (aligned with the United Nations’ Sustainable Development Goals 3, 5, and 8, and the International Labour Organization’s International Labour Standards), we invite submissions from multi-disciplinary researchers and practitioners that creatively and critically reflect on and analyze ethical and responsible applications of AI in organizations, and their dilemmas and implications for stakeholders, organizations, and society. We welcome conceptual and empirical contributions, reviews, case studies, and experience-in-the-field reports inspired by interdisciplinary, multi-level, multi-stakeholder, multi-method, and culture-sensitive approaches that could address existing and future challenges and uncertainties, define an agenda for future research, and provide good-practice recommendations and instruments for designing and evaluating human-centered, trustworthy, and responsible AI in organizations.
 
Topics of interest include but are not limited to:
 
Responsible AI that is made-to-last:

  • What are Responsible AI applications for work and organizations that will last?

  • Is the “staying power” of lasting AI intrinsic to the technology characteristics, the processes involved in its development (application of Responsibility principles), the personal characteristics of developers, implementers, or users, the defining elements of their social and industry contexts, and/or the subsequent processes of social evaluation?

  • What can we learn from comparing Responsible AI products, projects, approaches to use, or strategies?

  • What is the role of responsibility and ethics in the development, implementation, and use of AI in/for work and organizations?

  • What are the emerging creative forms of work enabled by AI that are here to stay?

  • How is AI re-organizing professions, job categories, organizational roles, processes, and competencies?

 
Responsibility that preserves responsibility:

  • What can we learn from (ir)responsible AI about old (or augmented) risks for Diversity and Inclusion in organizations?

  • What are the similarities and differences between responsible AI for work and organizations that are “made-to-last” and the irresponsible AI that is “made-to-vanish”?

 
Responsibility that transforms (ir)responsibility:

  • What are the actors’ creative workarounds to erase or reverse, as far as possible, the harmful and unforeseen consequences of AI at work and in organizations?

  • What logics, practices, and values are involved in replacing human-centric creativity (e.g., in decision-making) with creativity generated by AI?

  • What are the similarities and differences between organizing for responsible creativity and organizing for creativity propelled by AI?

  • What are the emerging methodologies and theories for studying, conceptualizing, measuring, and managing the Responsible adoption of AI for work and organizations?


Please note: Relevant and outstanding articles will receive an invitation to be submitted for a special issue on “(Ir)responsible uses of technologies and the future of work: Managerial and organizational dilemmas” in the European Management Journal: https://doi.org/10.1016/j.emj.2023.05.007.
 


References


  • Alfes, K., Avgoustaki, A., Beauregard, T.A., Cañibano, A., & Muratbekova-Touron, M. (2022): “New ways of working and the implications for employees: A systematic framework and suggestions for future research.” The International Journal of Human Resource Management, 33 (22), 4361–4385.
  • Greenwood, R., Raynard, M., Kodeih, F., Micelotta, E.R., & Lounsbury, M. (2011): “Institutional Complexity and Organizational Responses.” The Academy of Management Annals, 5 (1), 317–371.
  • Jobin, A., Ienca, M., & Vayena, E. (2019): “The global landscape of AI ethics guidelines.” Nature Machine Intelligence, 1 (9), 389–399.
  • Leonardi, P.M. (2021): “COVID-19 and the New Technologies of Organizing: Digital Exhaust, Digital Footprints, and Artificial Intelligence in the Wake of Remote Work.” Journal of Management Studies, 58 (1), 249–253.
  • Orlikowski, W.J., & Scott, S.V. (2023): “The Digital Undertow and Institutional Displacement: A Sociomaterial Approach.” Organization Theory, 4 (2); https://doi.org/10.1177/26317877231180898.
  • Pereira, V., Hadjielias, E., Christofi, M., & Vrontis, D. (2023): “A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective.” Human Resource Management Review, 33 (1); https://doi.org/10.1016/j.hrmr.2021.100857.
  • Renkema, M., Meijerink, J., & Bondarouk, T. (2017): “Advancing multilevel thinking in human resource management research: Applications and guidelines.” Human Resource Management Review, 27 (3), 397–415.
  • Tursunbayeva, A., Pagliari, C., Di Lauro, S., & Antonelli, G. (2022): “The ethics of people analytics: Risks, opportunities and recommendations.” Personnel Review, 51 (3), 900–921.
  • Tursunbayeva, A., & Renkema, M. (2023): “Artificial intelligence in health‐care: Implications for the job design of healthcare professionals.” Asia Pacific Journal of Human Resources, 61 (4), 845–887.
Aizhan Tursunbayeva is an Assistant Professor at the University of Naples Parthenope, Italy. She teaches Organizational Design, Human Resource Management (HRM), and People Analytics. Her research lies at the intersection of HRM, technology, innovation, and healthcare. The results of Aizhan’s research were published in ‘Personnel Review’, ‘Journal of the American Medical Informatics Association’, ‘Information Technology & People’, ‘Management Learning’, and ‘International Journal of Information Management’, among others.
Luigi Moschera is a Full Professor of Organization Studies at the University of Naples Parthenope, Italy, where he teaches Organization Theory, Inter-firm Network Design, and Human Resource Management. His most recent research focuses on the responsible use of exponential technologies in organizations and relevant human resource management practices, with a particular focus on contingent/alternative employment arrangements and their implications for employee well-being and behaviour. Luigi has authored several international publications on organizational change in the temporary work agency sector.
Daniel Samaan is a Senior Economist and Researcher at the International Labour Organization (ILO), Switzerland, specialized in the analysis of global labour market trends. His expertise covers globalisation, new technologies, AI, sustainable development, and a new work culture. He has authored and contributed to numerous ILO reports and academic articles. Daniel is a regular public speaker on labour market and HR topics and the Future of Work at international conferences and events.