This seems to me a really uninteresting result, generated by a highly unnatural definition of "morality".

Suppose you find yourself confronted with the following highly artificial problem (very similar to the artificial problems considered in this paper): you encounter a system consisting of a box with two buttons on it, connected to a computer, which is in turn connected to two guillotines with innocent people strapped into them. You analyze the program, and it is easy to see that if no button is pressed within the next hour, both people will be killed. You then try to work out what pressing button A or button B would do, since one of them might result in only one person being killed, or no one at all. But some programs are so complex that you cannot tell what the result of pressing the buttons would be, so you do your best to figure things out and then take your best guess as to which button to press. Since you cannot analyze an arbitrary computer program in an hour, you will sometimes press the wrong button, resulting in a death that could have been avoided if you had pressed the other one.

Is this "immoral"? Of course not. Is it a proof that "people cannot be moral"? Only with a definition of "moral" completely at odds with its ordinary usage. It might be immoral not to do the best you can to figure out what pressing the buttons will do; the fact that you are not always smart enough to succeed in that endeavor does not make you immoral.

The claimed proof that a robot cannot be moral is of exactly this sort: sometimes the robot will do something when something else would have had better consequences, because it isn't smart enough to figure out the consequences of its actions. The proof that there is no such thing as an "infinitely smart robot" that never has this problem seems totally unsurprising, and irrelevant to the moral status of robots.

Andy

On Wed, Nov 19, 2014 at 2:20 PM, Warren D Smith <warren.wds@gmail.com> wrote:
http://arxiv.org/abs/1411.2842
Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons
Matthias Englert, Sandra Siebert, Martin Ziegler
--so apparently the idea is: if there is some killer machine whose construction and programming are known to you, and you have to make a moral decision about what to do about it, then since you cannot solve the halting problem, you cannot always tell what the machine will do, and hence deciding which of two courses of action is more moral can be Turing-undecidable. So lethal autonomous weapons will either be unable to function, or will be unable to be moral.
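To spell out the kind of undecidability being invoked, here is a small Python sketch of the standard halting-problem reduction (my own toy construction, not code from the paper; the names make_control_program and better_button are just illustrative). A procedure that could always say which button is the better one to press, for every possible control program, would decide the halting problem.

    # A minimal sketch of the reduction: predicting "which button is the
    # better one to press?" for an arbitrary control program would let us
    # solve the halting problem.

    def make_control_program(prog_src: str):
        """Control program for the two-button box, parameterized by an
        arbitrary piece of Python source `prog_src`.

        Button A: exactly one person dies, always.
        Button B: simulate prog_src; if the simulation ever finishes, both
                  people die; if it runs forever, nobody dies (the
                  guillotines stay disarmed while it runs).

        So A is the better choice iff prog_src halts, and B is the better
        choice iff it does not -- and no algorithm decides that for every
        prog_src.
        """
        def control(button: str) -> str:
            if button == "A":
                return "person 1 dies"
            else:  # button == "B"
                exec(prog_src, {})          # may never return
                return "both people die"    # reached only if prog_src halts
        return control

    def better_button(prog_src: str) -> str:
        """A total procedure returning 'A' or 'B' for every prog_src cannot
        exist: it would compute halts(prog_src), which is undecidable.  All
        the analyst can do is study prog_src as best they can and guess."""
        raise NotImplementedError("would decide the halting problem")

    # For simple programs the answer is obvious ...
    simple = make_control_program("x = 1 + 1")   # clearly halts -> press A
    # ... but for others (e.g. one that halts iff an odd perfect number
    # exists) no amount of analysis is guaranteed to settle it in an hour.

Which is exactly the point above: the result only says that no analysis settles every case within the hour, not that doing your best and then guessing is immoral.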
-- Andy.Latto@pobox.com