
How will different distributions of responsibility affect the long-term development of Artificial Intelligence?

Consequences of decisions on AI for safety, democracy and future development 

The project is funded by the Marianne and Marcus Wallenberg Foundation from July 1, 2019 to June 30, 2023.

The purpose is to

  1. better understand how different distributions of forward-looking responsibility for the development of AI today will affect its long-term development with respect to three criteria: safety, democracy, and promotion of AI development; and to
  2. initiate a discussion about which distribution of forward-looking responsibility should be implemented, depending on how we value the development of AI, democracy, and safety.

The project will identify realistic scenarios for distributing responsibility (who should do what, when, and under which circumstances) and compare their projected long-term societal effects against the three criteria: promotion of the development of AI, democracy, and safety.

Researchers in the project