So you think we could eventually build a robot that exhibited behavior we would consider human-like and indicative of consciousness, but that the robot nevertheless wouldn't be conscious. What if we replaced the neurons in your brain, one by one, with functionally identical artificial components (same input-output behavior), so that your behavior remained unchanged? Do you think you would gradually lose consciousness?
Possibly. I can't really speculate until this experiment has been performed (on my brain, since consciousness cannot be measured externally). What happens if someone creates an exact copy of you (impossible by Pauli exclusion, but never mind)? Which one of those (the original or the replica) is "you"?
Whenever anyone writes "obviously", my B.S. meter quivers. I don't think either of your last two paragraphs is right. First, whether a Turing machine is conscious would depend on what program it is running.
Assuming that Turing machines can be conscious, a universal machine that simultaneously dovetails all Turing machines (say) would necessarily be conscious.
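(For readers unfamiliar with the term, here is a minimal sketch of what "dovetailing" means: execution of an unbounded family of machines is interleaved so that every machine is eventually started and then receives arbitrarily many further steps. The ToyMachine class and the step-counting below are hypothetical illustrations only, not part of any actual universal machine.)

    # Minimal sketch of dovetailing: in round k, machine k is brought into
    # the schedule and every machine created so far advances by one step,
    # so each machine eventually receives arbitrarily many steps.
    # ToyMachine is a hypothetical stand-in for a Turing-machine simulator.

    class ToyMachine:
        def __init__(self, index):
            self.index = index
            self.steps = 0

        def step(self):
            self.steps += 1  # a real simulator would execute one transition here

    def dovetail(make_machine, rounds):
        machines = []
        for k in range(rounds):
            machines.append(make_machine(k))  # start machine k this round
            for m in machines:
                m.step()                      # advance every machine by one step
        return machines

    print([(m.index, m.steps) for m in dovetail(ToyMachine, 5)])
    # -> [(0, 5), (1, 4), (2, 3), (3, 2), (4, 1)]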
Does it have enough self-reflection to prove Gödel's incompleteness theorem?
Plausibly, yes.
Does it create a narrative memory?

I'm not sure exactly what program instantiates consciousness, but I very much doubt it's just a matter of "complexity", however that's measured.
The reason that we haven't come to a definite conclusion is that people haven't unanimously agreed on a good definition of consciousness.

Sincerely,
Adam P. Goucher