[math-fun] undecidability of morality
http://arxiv.org/abs/1411.2842
Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons
Matthias Englert, Sandra Siebert, Martin Ziegler
--so apparently the idea is: if there is some killer machine whose construction and programming are known to you, and you have to make a moral decision about what to do about it, then, since you cannot solve the halting problem, you cannot tell what said machine will do; hence deciding which of two courses of action is more moral can be Turing-undecidable. So lethal autonomous weapons will either be unable to function, or will be unable to be moral.
And similarly lethal autonomous people will necessarily be unable to be moral.

-- Warren D. Smith
http://RangeVoting.org <-- add your endorsement (by clicking "endorse" as 1st step)
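The reduction Warren describes can be made concrete with a small sketch. The Python below is illustrative only and is not taken from the paper: kills_iff_halts, moral_to_leave_alone, and fire_weapon are hypothetical names invented here. The point is that a perfectly reliable judgement of "is it harmless to leave this machine alone?" would double as a halting-problem decider.

from typing import Callable

def kills_iff_halts(program_source: str) -> str:
    """Source text for a 'killer machine' that first simulates the given
    program and fires its weapon only if that simulation ever finishes."""
    return (
        "def machine():\n"
        f"    exec({program_source!r})  # simulate the arbitrary program\n"
        "    fire_weapon()              # reached only if the simulation halts\n"
    )

def halts(program_source: str,
          moral_to_leave_alone: Callable[[str], bool]) -> bool:
    """If a perfect moral judge existed -- answering True exactly when the
    constructed machine, left alone, harms no one -- it would decide the
    halting problem, which no algorithm can do."""
    return not moral_to_leave_alone(kills_iff_halts(program_source))

The judge is passed in as a parameter precisely because no such function can be written; that contradiction is the content of the claim.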
On Wed, Nov 19, 2014 at 2:20 PM, Warren D Smith <warren.wds@gmail.com> wrote:
> So lethal autonomous weapons will either be unable to function, or will be unable to be moral.

More precisely: weapons, autonomous or otherwise, can guarantee at most one of functionality and morality. The possibility of an autonomous weapons system that is always functional and significantly more moral than humans, or one that is perfectly moral but occasionally nonfunctional, is left open.

Charles Greathouse
Analyst/Programmer
Case Western Reserve University
Morality is to a large extent a personal judgement, so there can be no universal agreement that any particular robot is acting in a moral or immoral manner.

-- Gene
They bypass discussing that issue (!) by a reduction, essentially, to Rice's Theorem.

Charles Greathouse
Analyst/Programmer
Case Western Reserve University
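The Rice-style move can be sketched generically; the names below are hypothetical and this is not the paper's construction. It suggests why the argument does not need to settle what morality is (Gene's point): it only needs "behaves morally" to be a property of a program's behaviour such that a silent, never-halting program counts as moral and at least one program does not.

from typing import Callable

def rice_style_halts(behaves_morally: Callable[[str], bool],
                     immoral_src: str) -> Callable[[str], bool]:
    """Assumes: (1) behaves_morally depends only on a program's behaviour,
    (2) a program that silently runs forever counts as moral, and
    (3) immoral_src is some program that does not behave morally.
    Under those assumptions, any such decider yields a halting decider."""
    def halts(program_source: str) -> bool:
        # Hybrid program: silently simulate program_source; only if that
        # simulation ever finishes, go on to behave exactly like immoral_src.
        #   program halts       -> hybrid eventually behaves immorally
        #   program never halts -> hybrid idles forever, hence morally
        hybrid = f"exec({program_source!r})\nexec({immoral_src!r})\n"
        return not behaves_morally(hybrid)
    return halts

If the silent idler instead counts as immoral, run the same construction on the negated property; that case split is the usual Rice's Theorem bookkeeping.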
Can you say ED-209?
Do you have to know what the machine WILL do to make a moral decision about it? Isn't it enough to know what the machine COULD do? I can easily make the moral decision that putting an Earth-shattering bomb under the control of a robot (or anyone else, for that matter) is a BAD thing.
This seems to me a really uninteresting result, generated by a highly unnatural definition of "morality".

Suppose you find yourself confronted with the following highly artificial problem (very similar to the artificial problems considered in this paper): you encounter a system consisting of a box with two buttons on it, connected to a computer, which is also connected to two guillotines with innocent people strapped into them. You try to analyze the program to see what the result of pressing the buttons would be, and it's very easy to see that if no button is pressed within the next hour, both people will be killed. You try to analyze what the result of pressing button A or button B will be, since it might result in only one person being killed, or even no one at all; but there are some programs so complex that you can't tell what the results of pressing the buttons would be, so you do your best to figure things out and then take your best guess as to which button to press. Since you can't analyze an arbitrary computer program in an hour, you will sometimes press the wrong button, resulting in a death that could have been avoided if you had pressed the other button.

Is this "immoral"? Of course not. Is it a proof that "people cannot be moral"? Only with a definition of "moral" completely at odds with its ordinary usage. It might be immoral not to do the best you can to figure out what pressing the buttons will do. The fact that you're not always smart enough to succeed in this endeavor does not make you immoral.

The claimed proof of immorality of a robot is of exactly this sort of immorality: sometimes it will do something when something else would have had better consequences, because it isn't smart enough to figure out the consequences of its actions. The proof that there is no such thing as an "infinitely smart robot" that never has this problem seems totally unsurprising, and irrelevant to the moral status of robots.

Andy
-- Andy.Latto@pobox.com
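Andy's "do the best you can, then guess" policy amounts to a resource-bounded decision procedure. The toy sketch below is purely illustrative -- the button scenario, the step budget, and every name in it are invented for this example. Each option's consequences are simulated under a step budget, and where the analysis does not settle in time the chooser falls back to a guess, which is exactly where an avoidable death can slip through without anyone acting immorally.

import random
from typing import Generator, Optional

def analyse(option: Generator[None, None, int], step_budget: int) -> Optional[int]:
    """Best-effort analysis: advance the option's simulation at most
    step_budget steps.  Return the predicted death count if the
    simulation settles in time, else None ('could not tell')."""
    for _ in range(step_budget):
        try:
            next(option)
        except StopIteration as done:
            return done.value
    return None

def choose_button(options: dict, step_budget: int = 10_000) -> str:
    """Press the button whose analysed outcome is best; if no analysis
    settled, fall back to a guess -- the best anyone can do."""
    results = {name: analyse(sim(), step_budget) for name, sim in options.items()}
    decided = {name: deaths for name, deaths in results.items() if deaths is not None}
    if decided:
        return min(decided, key=decided.get)   # fewest predicted deaths
    return random.choice(list(results))        # pure guess

# Toy options: button A is easy to analyse, button B is not.
def button_a() -> Generator[None, None, int]:
    yield
    return 1            # analysis settles: one person dies

def button_b() -> Generator[None, None, int]:
    while True:         # analysis never settles within any budget
        yield

print(choose_button({"A": button_a, "B": button_b}))   # prints A

Pressing A here may well be the wrong call (B might have killed no one), which is Andy's point: the failure is one of computational power, not of morality.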
participants (6)
- Andy Latto
- Charles Greathouse
- David Wilson
- Eugene Salamin
- meekerdb
- Warren D Smith