When machines make decisions - who is responsible?
Author
Summary, in English
Robots making decisions on social benefits, driverless cars causing traffic accidents, search engines presenting a selected, narrow picture of the world – the rapid development of AI technology gives rise to machines that make their own decisions, without direct influence from humans. But who is responsible for what a machine does? Can the machine itself be responsible? The aim of this article is to discuss and problematize relations of responsibility when machines make decisions. The overarching question is whether machines can be responsible and, if so, under which circumstances. Drawing on theories of responsibility, machine ethics, robot philosophy, and recent AI development, the article demonstrates how functionalist arguments can lead to the conclusion that machines are responsible for their actions, while approaches building on philosophical understandings of autonomy and agency rule out machine responsibility. Unless the machine is conscious, human actors always need to be responsible for decisions taken by machines. However, as self-improving systems increase machine autonomy and decrease human control, the question arises whether we are witnessing an emerging responsibility gap, or whether this development rather describes a situation of blurred responsibility, in which responsibility needs to be distributed among many different actors – AI developers, programmers, distributors, users, and policy makers.