[math-fun] Why no actual Victorian computers?
This is a serious question, and not just a steampunk (Google it) fantasy.

Perhaps the rest of you have had the same experience of being incredibly impressed by the willingness of Victorian mathematicians to do prodigious calculations by hand. I'm not talking about merely algebraic manipulations, although some of those are legendary, but also numerical calculations.

Babbage & Lovelace had already shown how to make a computer in the 1840's, and modern reconstructions showed that it worked just as they had predicted. Babbage was intimately involved in code-breaking, so he would have had precisely the same rationale for building a computer as did Turing & the Bletchley-ites.

Now the manufacture of machinery wasn't in the greatest of shape in 1840, although the manufacture of arms and bullets pushed things along dramatically in the American Civil War and the European wars in the latter part of the 19th C.

The auto industry dramatically improved the precision and scale of manufacturing, but we still didn't see computers in 1910.

Boole & Frege made substantial improvements in mathematical logic, and Hilbert basically laid out the program in 1900. But still no computers.

My conclusion:

The inventor of the first true computer died in WWI. No one knows his/her name, but WWI put such a huge notch in the demographics of England and Germany, the two most likely places for the first computer to have been developed, that the engineering talent needed to build a computer was wiped out, and took another generation to rebuild.

The homelands of both England and Germany were more-or-less unscathed by WWI, but like Passover or a neutron bomb, the structures remained but the people were gone.

Yes, a mechanical computer would not have been terribly reliable, but neither were the first electronic computers. However, once even an unreliable mechanical computer had been developed, the race would have been on to develop faster & more reliable computers, which would have forced an electronic revolution -- probably 20 years earlier. What would the 20th C. have looked like had Moore's Law started 20 years earlier?

BTW, a similar question can be addressed to the clockmakers of several hundred years earlier. How come they didn't develop a programmable computer?

Ditto with the Greeks and their Antikythera Mechanism.

Ditto with the Romans and their extensive water works. The Romans could have developed *fluidic logic* with water on some of the mountain streams, and used it to do some basic calculations.

Nature figured this out a long time ago:

We now know that minimal universal calculating machines can be extremely simple; the number of gates in the simplest CPU is very small. The question is: why did it take mankind so long to discover this idea? (See the short sketch after the references below.)

Evolution apparently discovered self-replication pretty early on -- perhaps only a few hundred million years after the Earth was formed -- i.e., on about the same time scale as it took the Moon's orbit to settle down. RNA self-replication wasn't terribly reliable, but then evolution came up with DNA, which is a more-or-less "pure" digital storage mechanism. I don't know the exact date for DNA, but it may go back more than 1.5 billion years. Once a reliable storage (i.e., *tape*) mechanism was available, life was off to the races.

References:
https://en.wikipedia.org/wiki/Mechanical_calculator
https://en.wikipedia.org/wiki/Mathematical_logic
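A small modern illustration of the "minimal universal calculating machines" point, offered only as a sketch (Rule 110's universality is a much later result, proved by Matthew Cook and published in 2004, not anything the Victorians knew): the entire "CPU" of this Turing-universal machine is an 8-entry lookup table.

# Rule 110, a one-dimensional cellular automaton known to be
# Turing-universal.  The whole update rule is the table below;
# this is an illustration of how little logic universality needs,
# not a claim that such a device was buildable in Victorian brass.

RULE_110 = {  # (left, centre, right) -> next state of centre cell
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Advance one generation (fixed 0 boundary cells)."""
    padded = [0] + cells + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

if __name__ == "__main__":
    row = [0] * 40 + [1]          # single live cell at the right edge
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)

Run it and the familiar triangular Rule 110 pattern grows leftward from the single seed cell; the point is only how few "gates" are involved.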
As I understand it, Babbage's "analytical engine" failed because it pushed engineering techniques beyond their contemporary limits. But in any case, I don't regard that as a "true" computer; nor would I conflate that notion with self-reproduction. The essential extra component is the "stored program", a development which had to await the concept of a universal Turing machine.

WFL
On Sat, Apr 2, 2016 at 9:00 AM, Fred Lunnon <fred.lunnon@gmail.com> wrote:
> As I understand it, Babbage's "analytical engine" failed because it pushed engineering techniques beyond their contemporary limits.
Even the difference engine pushed engineering past its limits; the analytical engine was never even attempted.
> But in any case, I don't regard that as a "true" computer;
It would have been programmable, in much the same sense as an FPGA is.
> The essential extra component is the "stored program", a development which had to await the concept of a universal Turing machine.
I think, had the analytical engine ever been built and employed, that the stored program would have followed shortly thereafter.
--
Mike Stay - metaweta@gmail.com
http://www.cs.auckland.ac.nz/~mike
http://reperiendi.wordpress.com
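To make the "stored program" distinction concrete, here is a toy sketch; the instruction set and memory layout below are invented purely for illustration and are not taken from any historical machine. The program lives in the same memory array as its data, so the machine could in principle read or rewrite its own instructions -- exactly what a card-programmed Analytical Engine could not do.

# Toy stored-program machine: program and data share one memory.
MEM = [0] * 32
# opcodes: 1 = LOAD addr, 2 = ADD addr, 3 = STORE addr, 0 = HALT
MEM[0:8] = [1, 20,   # LOAD  mem[20]
            2, 21,   # ADD   mem[21]
            3, 22,   # STORE result into mem[22]
            0, 0]    # HALT
MEM[20], MEM[21] = 27182, 31415      # the data, in the same store

def run(mem):
    pc, acc = 0, 0
    while True:
        op, arg = mem[pc], mem[pc + 1]
        pc += 2
        if op == 0:                  # HALT
            return acc
        elif op == 1:                # LOAD
            acc = mem[arg]
        elif op == 2:                # ADD
            acc += mem[arg]
        elif op == 3:                # STORE
            mem[arg] = acc

run(MEM)
assert MEM[22] == 58597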
Surely the major issue was the use of decimal rather than binary arithmetic. Decimal requires dramatically higher precision and complexity in the core units.
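A rough gate-counting sketch of that point, assuming a BCD-style digit representation (an illustration only, not a claim about Babbage's actual digit-wheel mechanisms): one binary stage is a handful of gates, while one decimal stage needs a 4-bit adder plus detect-and-correct logic on top.

def full_adder(a, b, cin):
    """One binary digit: 2 XORs, 2 ANDs, 1 OR."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add4(a_bits, b_bits, cin=0):
    """Ripple-carry 4-bit adder built from four full adders (LSB first)."""
    out = []
    for a, b in zip(a_bits, b_bits):
        s, cin = full_adder(a, b, cin)
        out.append(s)
    return out, cin

def bcd_digit_adder(a, b, cin=0):
    """Add two decimal digits (0-9): a 4-bit binary add, a ">9" test,
    and a conditional add of 6.  (The correction is written here in
    plain arithmetic for brevity; in hardware it is a comparator plus
    a second adder, roughly doubling the logic of a plain 4-bit add.)"""
    abits = [(a >> i) & 1 for i in range(4)]
    bbits = [(b >> i) & 1 for i in range(4)]
    sbits, c = add4(abits, bbits, cin)
    value = sum(bit << i for i, bit in enumerate(sbits)) + (c << 4)
    if value > 9:                          # correction step
        value += 6
    return value & 0xF, int(value > 15)    # (sum digit, carry out)

assert bcd_digit_adder(7, 8) == (5, 1)     # 7 + 8 = 15 -> digit 5, carry 1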
I seriously doubt that base 10 was the problem. Victorian crypto people were already using bases 24, 25, and 26; base-60 arithmetic had been in use for at least 3,000 years prior; and in any case, Victorian mathematicians were WAAAAY beyond bases. Dickson & many others were studying finite fields in the 1890's.

I suspect that one could build a mechanical computer whose storage unit was a very large Oriental-style (i.e., base-10) abacus. My first IBM 1401 (~$250k) had 4,000 base-10 "characters" of memory; I believe it was possible to order 1401's with even smaller amounts of memory. I seem to recall that my first 1401 had a multiply instruction (my second one certainly did, and had 16,000 characters of memory), but multiply, too, was an extra-cost option.

A 4,000-position mechanical abacus was well within the capabilities of Victorian technology; they had weaving looms which were nearly as complex. A machine shop circa 1900 was a huge array of spinning shafts & belts, because there were no small electric motors; within 20 years most of those shafts & belts had given way to small electric motors. A moderately handy New England machine shop circa 1900 could have built a 4,000-position abacus, using such shafts & belts to control it. Heck, the Wright Brothers could have built such a device, had they not been sky-gazing! (A toy sketch of the abacus-as-memory idea follows below.)
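A toy software model of the abacus-as-main-memory idea; the field layout and operations below are invented for illustration and are not the actual IBM 1401 instruction set. It gives 4,000 decimal digit positions, with field addition done digit by digit with carry -- the same operation a mechanical digit wheel or an abacus column performs.

class DecimalStore:
    def __init__(self, size=4000):
        self.digits = [0] * size          # one decimal digit per position

    def write_number(self, addr, value, width):
        """Store a non-negative integer as `width` decimal digits,
        least significant digit at the highest address of the field."""
        for i in range(width):
            self.digits[addr + width - 1 - i] = value % 10
            value //= 10

    def read_number(self, addr, width):
        n = 0
        for i in range(width):
            n = n * 10 + self.digits[addr + i]
        return n

    def add_fields(self, src, dst, width):
        """dst += src, digit by digit with carry."""
        carry = 0
        for i in range(width - 1, -1, -1):
            total = self.digits[dst + i] + self.digits[src + i] + carry
            self.digits[dst + i] = total % 10
            carry = total // 10
        return carry                       # overflow out of the field

mem = DecimalStore()
mem.write_number(100, 27182, width=8)
mem.write_number(200, 31415, width=8)
mem.add_fields(100, 200, width=8)
assert mem.read_number(200, 8) == 58597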
> As I understand it, Babbage's "analytical engine" failed because it pushed engineering techniques beyond their contemporary limits.
> But in any case, I don't regard that as a "true" computer;
It's powerful enough that I was able to implement a convolutional neural network on it:

https://cp4space.wordpress.com/2016/02/06/deep-learning-with-the-analytical-...
https://gitlab.com/apgoucher/DLAE

Best wishes,

Adam P. Goucher
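Not the DLAE code itself -- just a generic reminder of why a convolutional layer is a plausible fit for a multiply-and-accumulate machine: a 2D convolution is nothing but nested loops of multiplies and adds, which is what the Analytical Engine's mill was designed to perform.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation) with
    plain Python lists of numbers -- every step is a multiply or add."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for r in range(ih - kh + 1):
        for c in range(iw - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
edge = [[1, -1]]                      # horizontal difference kernel
print(convolve2d(img, edge))          # [[-1, -1], [-1, -1], [-1, -1]]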
participants (5)
- Adam P. Goucher
- Fred Lunnon
- Henry Baker
- Mike Stay
- Tom Knight