Do you have to know what the machine WILL do to make a moral decision about it? Isn't it enough to know what the machine COULD do? I can easily make the moral decision that putting an Earth-shattering bomb under the control of a robot (or anyone else, for that matter) is a BAD thing.
-----Original Message-----
From: math-fun [mailto:math-fun-bounces@mailman.xmission.com] On Behalf Of Warren D Smith
Sent: Wednesday, November 19, 2014 2:21 PM
To: math-fun@mailman.xmission.com
Subject: [math-fun] undecidability of morality
http://arxiv.org/abs/1411.2842
Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons
Matthias Englert, Sandra Siebert, Martin Ziegler
--so apparently the idea is: if there is some killer machine whose construction and programming are known to you, and you have to make a moral decision about what to do about it, then, since you cannot solve the halting problem, you cannot in general tell what said machine will do; hence deciding which of two courses of action is more moral can be Turing-undecidable. So lethal autonomous weapons will either be unable to function, or will be unable to be moral.
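To make the undecidability claim concrete, here is a minimal Python sketch of the standard reduction (my illustration, not the paper's actual construction): if a total decider acts_immorally() existed that could tell, from a machine's source code, whether it would ever act immorally, it could be used to decide the halting problem. The names acts_immorally, halts, and fire_earth_shattering_bomb are hypothetical, chosen only for illustration.

# Minimal sketch of the standard reduction behind the undecidability claim.
# 'acts_immorally' is a hypothetical total decider: given the source code of
# a machine, it answers whether that machine ever performs an immoral act.
# The reduction shows such a decider would solve the halting problem,
# so it cannot exist.

def acts_immorally(machine_source: str) -> bool:
    """Hypothetical moral oracle -- assumed here, not implementable."""
    raise NotImplementedError("no such total decider can exist")

def halts(program_source: str) -> bool:
    """'Decide' the halting problem via the moral oracle (a contradiction)."""
    # Construct a machine that is harmless forever, except that it commits
    # an immoral act if and only if the given program halts.
    wrapper = (
        "def run():\n"
        f"    exec({program_source!r})  # simulate the given program\n"
        "    fire_earth_shattering_bomb()  # reached only if it halts\n"
    )
    # The wrapper acts immorally exactly when the simulated program halts,
    # so asking the moral oracle about the wrapper answers the halting
    # question.  Since halting is undecidable, acts_immorally cannot exist.
    return acts_immorally(wrapper)

The same argument works for "which of two actions is more moral": wrap each candidate action so that its moral status hinges on whether some arbitrary program halts, and the comparison becomes undecidable too.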
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun