> So lethal autonomous weapons will either be unable to function, or will be unable to be moral.
More precisely: weapons, autonomous or otherwise, can guarantee at most one of functionality and morality. That leaves open the possibility of an autonomous weapons system that is always functional and significantly more moral than humans, or one that is perfectly moral but occasionally nonfunctional. (A rough sketch of this trade-off follows the quoted message below.)

Charles Greathouse
Analyst/Programmer
Case Western Reserve University

On Wed, Nov 19, 2014 at 2:20 PM, Warren D Smith <warren.wds@gmail.com> wrote:
http://arxiv.org/abs/1411.2842
Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons
Matthias Englert, Sandra Siebert, Martin Ziegler
--so apparently the idea is: if there is some killer machine whose construction and programming are known to you, and you have to make a moral decision about what to do about it, then since you cannot solve the halting problem, you cannot tell what said machine will do; hence deciding which of two courses of action is more moral can be Turing-undecidable. So lethal autonomous weapons will either be unable to function, or will be unable to be moral.
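
To make that reduction concrete, here is a rough Python sketch of my own -- not code from the paper, and the names will_act_immorally, fire_on, and weapon are invented for illustration. The point it shows: if a total, always-correct "will this machine eventually do harm?" predicate existed, it would decide the halting problem.

# Hypothetical sketch of the reduction.  Assume a total, always-correct
# predicate will_act_immorally(machine, situation): True exactly when the
# given machine, placed in the given situation, eventually commits the
# harmful act.  Such an oracle would decide the halting problem, so no
# correct, always-terminating implementation of it can exist.

def will_act_immorally(machine, situation):
    """Assumed perfect moral oracle (hypothetical stub only)."""
    raise NotImplementedError

def fire_on(situation):
    """Placeholder for the harmful act."""
    pass

def halts(program, data):
    """Halting-problem decider we could build *if* the oracle existed."""
    def weapon(situation):
        program(data)        # simulate an arbitrary program on its input
        fire_on(situation)   # harmful act, reached only if program(data) halts
    # The weapon eventually acts immorally iff program(data) halts, so one
    # oracle query answers an arbitrary halting question -- a contradiction.
    return will_act_immorally(weapon, situation=None)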
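
And to illustrate the trade-off I mentioned above, a second rough sketch (again my own, with invented names moral_analysis, decide, ACT, STAND_DOWN): a controller runs a sound but possibly non-terminating moral analysis under a step budget, and on timeout it is forced to pick a default.

ACT, STAND_DOWN = "act", "stand down"

def situation_is_hard(situation):
    """Placeholder predicate; in the undecidable cases it never becomes False."""
    return False

def moral_analysis(situation):
    """Stand-in for a sound moral-judgment procedure that may never finish:
    a generator that yields None while still thinking and finally yields a
    verdict -- but on some inputs it may yield None forever."""
    while situation_is_hard(situation):
        yield None
    yield ACT

def decide(situation, step_budget=10**6, fail_functional=False):
    """Run the analysis for at most step_budget steps, then fall back.

    STAND_DOWN on timeout keeps the system morally safe but occasionally
    nonfunctional; ACT on timeout keeps it functional but gives up the
    moral guarantee."""
    analysis = moral_analysis(situation)
    for _ in range(step_budget):
        verdict = next(analysis, None)
        if verdict is not None:
            return verdict
    return ACT if fail_functional else STAND_DOWN

Neither choice of default recovers both guarantees, which is the sense in which at most one of functionality and morality can be promised.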