As I understand it, Babbage's "analytical engine" failed because it pushed engineering techniques beyond their contemporary limits. But in any case, I don't regard that as a "true" computer; nor would I conflate that notion with self-reproduction. The essential extra component is the "stored program", a development which had to await the concept of a universal Turing machine.

WFL

On 4/2/16, Henry Baker <hbaker1@pipeline.com> wrote:
This is a serious question, and not just a steampunk (Google it) fantasy.
Perhaps the rest of you have had the same experience of being incredibly impressed by the willingness of Victorian mathematicians to do prodigious calculations by hand. I'm not talking merely about algebraic manipulations, although some of those are legendary, but also about numerical calculations.
Babbage & Lovelace had already shown how to make a computer in the 1840's, and modern reconstructions showed that it worked just as they had predicted. Babbage was intimately involved in code-breaking, so he would have had precisely the same rationale for building a computer as did Turing & the Bletchley-ites.
Now the manufacture of machinery wasn't in the greatest of shape in 1840, although the manufacture of arms and bullets pushed things along dramatically during the American Civil War and the European wars of the latter part of the 19th C.
The auto industry dramatically improved the precision and scale of manufacturing, but we still didn't see computers in 1910.
Boole & Frege made substantial improvements in mathematical logic, and Hilbert basically laid out the program in 1900. But still no computers.
My conclusion:
The inventor of the first true computer died in WWI. No one knows his/her name, but WWI put such a huge notch in the demographics of England and Germany, the two most likely places for the first computer to have been developed, that the engineering talent to build a computer was wiped out and took another generation to rebuild.
The homelands of both England and Germany were more-or-less unscathed by WWI, but, as with Passover or a neutron bomb, the structures remained while the people were gone.
Yes, a mechanical computer would not have been terribly reliable, but neither were the first electronic computers. However, once even an unreliable mechanical computer had been developed, the race would have been on to develop faster & more reliable computers, which would have forced an electronic revolution -- probably 20 years earlier. What would the 20th C. have looked like had Moore's Law started 20 years earlier?
BTW, a similar question can be addressed to the clockmakers of several hundred years earlier. How come they didn't develop a programmable computer?
Ditto with the Greeks and their Antikythera Mechanism.
Ditto with the Romans and their extensive water works. The Romans could have developed *fluidic logic* with water on some of the mountain streams, and used it to do some basic calculations.
Nature figured this out a long time ago:
We now know that minimal universal calculating machines can be extremely simple; the number of gates in the simplest CPU is very small. The question is: why did it take mankind so long to discover this idea?
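To make that "very few gates" point concrete, here is a small sketch of my own (not from Henry's post or the references below): a SUBLEQ machine, whose entire instruction set is one "subtract and branch if the result is non-positive" operation, is nonetheless Turing-complete, and an interpreter for it fits in a dozen lines of Python.

    # A toy SUBLEQ ("SUBtract and branch if Less than or EQual to zero")
    # interpreter.  One instruction suffices for universal computation,
    # which is one concrete way to see how little logic a minimal
    # universal computer actually requires.

    def subleq(mem, pc=0, max_steps=10_000):
        """Run a SUBLEQ program held in `mem` (a flat list of integers).

        Each instruction occupies three cells (a, b, c):
            mem[b] -= mem[a]; if the result is <= 0, jump to c, else pc += 3.
        A jump outside memory (e.g. to -1) halts the machine.
        """
        steps = 0
        while 0 <= pc <= len(mem) - 3 and steps < max_steps:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3
            steps += 1
        return mem

    # Example program: add the value in cell 9 into cell 10, using cell 11
    # as a scratch register Z.  Layout: 3 instructions (cells 0-8), then data.
    prog = [
         9, 11,  3,   # Z -= A            (Z becomes -A)
        11, 10,  6,   # B -= Z            (B becomes B + A)
        11, 11, -1,   # Z -= Z = 0, then jump to -1: halt
         7,  5,  0,   # data: A = 7, B = 5, Z = 0
    ]
    result = subleq(prog)
    print(result[10])   # prints 12

Nothing about the scheme requires electronics, of course; the three-cell subtract-and-branch step could in principle be realized with gears, relays, or water valves just as well.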
Evolution apparently discovered self-replication pretty early on -- perhaps only a few hundred million years after the Earth was formed -- i.e., on about the same time scale as it took the Moon's orbit to settle down. RNA self-replication wasn't terribly reliable, but then evolution came up with DNA, which is a more-or-less "pure" digital storage mechanism. I don't know the exact dates for DNA, but it may go back further than 1.5 billion years. Once a reliable storage (i.e., *tape*) mechanism was available, life was off to the races.
References:
https://en.wikipedia.org/wiki/Mechanical_calculator
https://en.wikipedia.org/wiki/Mathematical_logic