[math-fun] Autonomous Weapons meet the Halting Problem
FYI -- Cory Doctorow has been chiding non-computer people for years about their lack of understanding of the logical limitations of computers -- "why can't I tell a computer to only do good stuff?". I would imagine that this misunderstanding problem will continue until undecidability is part of the required curriculum for all high schools.

----

http://arxiv.org/abs/1411.2842

Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons
Matthias Englert, Sandra Siebert, Martin Ziegler
(Submitted on 11 Nov 2014)

Lethal Autonomous Weapons promise to revolutionize warfare -- and raise a multitude of ethical and legal questions. It has thus been suggested to program values and principles of conduct (such as the Geneva Conventions) into the machines' control, thereby rendering them both physically and morally superior to human combatants. We employ mathematical logic and theoretical computer science to explore fundamental limitations to the moral behaviour of intelligent machines in a series of "Gedankenexperiments": Refining and sharpening variants of the Trolley Problem leads us to construct an (admittedly artificial but) fully deterministic situation where a robot is presented with two choices: one morally clearly preferable over the other -- yet, based on the undecidability of the Halting problem, it provably cannot decide algorithmically which one. Our considerations have surprising implications to the question of responsibility and liability for an autonomous system's actions and lead to specific technical recommendations.

Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI)
ACM classes: K.4.1; I.2.0
Cite as: arXiv:1411.2842 [cs.CY] (or arXiv:1411.2842v1 [cs.CY] for this version)

Submission history
From: Martin Ziegler [view email]
[v1] Tue, 11 Nov 2014 15:05:01 GMT (22kb)
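For readers who haven't seen it spelled out, the undecidability the abstract leans on is the classic diagonal argument: no total procedure halts(program, input) can correctly predict halting for all programs. The Python sketch below is just that textbook argument (not the authors' trolley-style construction); the function names halts and contrarian are illustrative only, and halts is a hypothetical oracle that cannot actually be implemented.

    # Sketch of the standard diagonal argument: assuming a halting decider
    # exists leads to a contradiction, so none can exist.

    def halts(program_source: str, argument: str) -> bool:
        """Hypothetical total decider: True iff program_source halts on argument."""
        raise NotImplementedError("No such total decider can exist.")

    # A program built to do the opposite of whatever the decider predicts
    # about running that program on its own source code.
    CONTRARIAN_SOURCE = '''
    def contrarian(source):
        if halts(source, source):   # decider says "halts on itself"...
            while True:             # ...so loop forever instead
                pass
        else:                       # decider says "loops forever"...
            return                  # ...so halt immediately instead
    '''

    # Feed contrarian its own source: if halts() answers True, contrarian loops;
    # if halts() answers False, contrarian halts. Either answer is wrong, so a
    # correct total halts() cannot exist -- the Halting Problem is undecidable.

The paper's contribution is to embed exactly this kind of self-referential trap inside a deterministic moral-choice scenario, so that the "right" action provably cannot be computed.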
participants (1)
-
Henry Baker