
Algorithmic transparency and algorithmic accountability in organizations

Di Nauta, Primiano; Martinez, Marcello
2025-01-01

Abstract

In this paper, we explore the effects of algorithmic transparency and algorithmic accountability on organizations. Algorithmic transparency refers to the idea that algorithms should be transparent to the people who use, regulate, and are affected by them: their inputs, processes, and uses should be known, even though the algorithms do not necessarily need to be fair. Algorithmic accountability refers to the idea that organizations should be held responsible for decisions made by the algorithms they use, even if they cannot explain how those algorithms produce their results. Considering that algorithms increasingly substitute for many human activities, making decisions in place of humans, organizational actors may not always agree with algorithmic decisions, yet they remain legally accountable for their choices. We posit that organizational actors may face an internal legitimacy challenge regarding rationalization and consensus when their preferences do not align with the opaque decisions made by the algorithm. We discuss the role played by organizational actors, as Centaurs or Cyborgs, depending on whether they fully delegate specific tasks to AI or interact deeply with AI to enhance their own decisions. We maintain that the contemporary emphasis on AI does not include an accurate and comprehensive reflection on its long-term effects on organizational systems.
2025
ISBN: 978-3-032-01395-8
ISBN: 978-3-032-01396-5
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11369/469472
