AI systems are complex sociotechnical systems – that is, they consist of material and social components which, when put into particular kinds of relations, work together in specific ways. Consequently, it is not sufficient to understand AI systems as isolated lines of code – instead, AI systems should be understood as intertwined with data, computational power, storage, market relations, organizational ontologies, societal practices, and epistemic capabilities. In effect, AI systems are not isolated from the various ‘other’ issues often thought of as tangential, such as “those pesky humans that get in the way of AI systems functioning properly.”
In this paper, we first outline some of the limitations of AI systems from a data science perspective. While many of these issues have been discussed before, they provide a fundamental lens for understanding the principles upon which these technologies are built, within the confines of ‘the code itself’. This includes questioning the techniques used for developing AI systems, common confusions around the interpretation or application of data and methods, and concerns about the use of particular parameters for making decisions (i.e. about how to design a system).
Building on this discussion, in the second section we detail how the issues raised by AI systems cannot be contained as merely technical ones; they brim with complexity. The issues of ‘the code itself’ cannot be regarded merely as technical faults with technological fixes – indeed, such an understanding is itself problematic. However, understanding the wider issues only becomes possible with a grasp of the data and computer science fundamentals, both their strengths and limitations, because it is precisely these mechanisms that lead to particular kinds of issues in contexts of different scales and scope. In this section, we explore matters of power, scale, and structure, as well as the value(s) we enact with AI systems.
With this basis, we move on in the third section to raise some questions and offer a series of indications for dealing with AI systems. We seek to draw in wider questions and contemplations to round off our reflection on the implications of complex, sociotechnical AI systems in our world. Understanding AI systems as complex sociotechnical systems opens pathways to address existing issues, and we argue that different ways of thinking will be key to handling the governance challenges posed by AI systems in very particular contexts. In the boxes rounding off this paper, we display a small selection of further concerns which informed our approach and arguments.
To conclude, we briefly touch upon governance – briefly only, for we hope that the thoughts and ideas that we have provided will feed into collaborative efforts to make AI systems (with) care for our world.
Fariba Karimi and Rania Wazir acknowledge the funding from WWTF roadmap grant number RO22-002.