Re: [math-fun] quantum theory foundational issues, my theory of how they should be resolved
So you think we could eventually build a robot that exhibited behavior which we would consider human-like and indicative of consciousness - but the robot wouldn't be conscious. What if we replaced the neurons in your brain, one by one, with artificial components that are functionally identical at the input-output level, so that your behavior was unchanged? Do you think you would gradually lose consciousness?
Possibly. I can't really speculate until this experiment has been performed (on my brain, since consciousness cannot be measured externally). What happens if someone creates an exact copy of you (impossible by Pauli exclusion, but never mind)? Which of the two (the original or the replica) is 'you'?
Whenever anyone writes "obviously" my B.S. meter quivers. I don't think either of your last two paragraphs is right. First, whether a Turing machine is conscious would depend on what program it is running.
If Turing machines can be conscious at all, then (say) a universal machine that dovetails the computations of all Turing machines would necessarily be conscious.
Does it have enough self-reflection to prove Gödel's incompleteness theorem?
Plausibly, yes.
Does it create a narrative memory? I'm not sure exactly what program instantiates consciousness, but I very much doubt it's just a matter of "complexity", however that's measured.
The reason we haven't come to a definite conclusion is that people haven't agreed on a good definition of consciousness.
Sincerely, Adam P. Goucher
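As an aside, the "dovetailing" construction mentioned above is a standard diagonal schedule: at round k you start machine k and advance every machine started so far by one step, so each of infinitely many computations eventually runs for arbitrarily many steps. A minimal Python sketch (the counters here are toy stand-ins for Turing machines, not real ones):

```python
from itertools import count, islice

def dovetail(machines):
    """Interleave infinitely many computations diagonally: at round k,
    start machine k and advance machines 0..k by one step each, so
    every machine eventually gets arbitrarily many steps."""
    started = []                      # computations started so far
    for fresh in machines:
        started.append(fresh)
        for i, m in enumerate(started):
            yield i, next(m)          # one step of machine i

# Toy "machines": machine n just counts upward from n forever.
machines = (count(n) for n in count())
print(list(islice(dovetail(machines), 6)))
# → [(0, 0), (0, 1), (1, 1), (0, 2), (1, 2), (2, 2)]
```

The (machine, value) pairs show the diagonal order: machine 0 runs in every round, machine 1 from round 1 on, and so on, which is why a universal machine can simulate all machines "simultaneously" without starving any of them.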
On 8/5/2013 2:29 AM, Adam P. Goucher wrote:
So you think we could eventually build a robot that exhibited behavior which we would consider human-like and indicative of consciousness - but the robot wouldn't be conscious. What if we replaced the neurons in your brain, one by one, with artificial components that are functionally identical at the input-output level, so that your behavior was unchanged? Do you think you would gradually lose consciousness? Possibly. I can't really speculate until this experiment has been performed (on my brain, since consciousness cannot be measured externally).
A loss of consciousness (which you could not express) would imply some magic, aphysical consciousness stuff that neurons have and artificial neurons don't.
What happens if someone creates an exact copy of you (impossible by Pauli exclusion, but never mind)? Which of the two (the original or the replica) is 'you'?
Both or neither. Supposedly this is what happens in the Everett interpretation of quantum mechanics: you get duplicated, and the apparent randomness is just first-person uncertainty. I don't think you even need quantum-level cloning (which is what is actually impossible). There are good arguments that whatever the brain does, at the information-processing level it is essentially classical; that's what thermochemical analysis says, and it's exactly what you'd expect from evolution.
Whenever anyone writes "obviously" my B.S. meter quivers. I don't think either of your last two paragraphs is right. First, whether a Turing machine is conscious would depend on what program it is running. If Turing machines can be conscious at all, then (say) a universal machine that dovetails the computations of all Turing machines would necessarily be conscious.
Does it have enough self-reflection to prove Gödel's incompleteness theorem? Plausibly, yes.
Does it create a narrative memory? I'm not sure exactly what program instantiates consciousness, but I very much doubt it's just a matter of "complexity", however that's measured. The reason we haven't come to a definite conclusion is that people haven't agreed on a good definition of consciousness.
I don't think consciousness is definable. Among humans it can be 'defined' ostensively: I point to an elephant, you look at it, and I say, "See, that's what it's like to see an elephant." On the other hand, I think intelligent and purposeful action can be observed.

I think what will happen is that these 'mysteries' about consciousness will be overtaken by engineering. When we make Mars rovers much more intelligent, give them learning capability and reasoning, and have them create narrative memories to learn from, they will exhibit intelligence, and we will assume they are conscious because they act conscious. We will know how to program them to be aggressive or docile or humorous or creative, and whether they are conscious will be seen as 'the wrong question'.

Brent
I recommend this book: How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival, by David Kaiser. You can draw parallels, flattering and otherwise, to the current discussion.
Hilarie
Hilarie, thanks! I'd heard of the book but didn't know anyone who'd read it until now. --Dan
On 2013-08-06, at 11:40 AM, Hilarie Orman wrote:
I recommend this book:
How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival by David Kaiser
You can draw parallels, flattering and otherwise, to the current discussion.
Hilarie
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
participants (4):
- Adam P. Goucher
- Dan Asimov
- Hilarie Orman
- meekerdb