[math-fun] cellular automata soup experiments
I'm reporting a couple of soup experiments in Conway's Life. Life is a two-dimensional, two-state cellular automaton. The succession rule is based on counting how many of a cell's 8 nearest neighbors are in the ON (or 1) state. If two neighbors are ON, the cell's state is preserved in the next time step. If three neighbors are ON, the cell is turned ON (or left ON) for the next time step. Otherwise, the cell is turned (or left) OFF. My soup experiments are run in either a 1024*1024 field or an 8192*8192 field. The fields are initialized from a random number generator to be 50% ON. Cells beyond the edge of the field are treated as always 0.

It's known from old experiments that a random field, after a long time, seems to settle down to a mostly stable pattern with about 2.8% of the cells ON. The time to settle is thousands of time steps, perhaps tens of thousands. I have no information on longer-term behavior, including whether or not all the active areas eventually stabilize, or whether there's a permanent long-term low-level percolation.

My first set of experiments looked at what happens if there's a small mutation rate. I began with a random 1024*1024 field and ran it 3000 steps. The initial population was 523295, and by generation 3000 it had dropped to 34240 (3%). A small visual sample of the field appeared static, but there were probably a few active areas (based on past experience). I reran the experiment with the same starting position, but I changed the rule by adding mutation. After each time step, 1024 (.1%) of the cells were randomly flipped. After 3000 steps, the population was 74710 (7%), with large active areas. After 10000 steps, the population was 73545 (7%), still with large active areas. The stable areas had the usual Life objects.

Then I changed the mutation rule. I started over with the same initial position (50%) and used the same sequence of mutated cells, 1024 (.1%) mutations after each time step, but the mutation rule was "force to ON". If a cell was already ON, the mutation was a no-op. After 10000 steps, the population was 90045 (9%) with large active areas. There were a few of the usual Life objects.

My final mutation experiment repeated the starting setup and the mutation sequence, but the mutation rule was "force to OFF". After 10000 steps, the population was 8875 (.9%), and the only ON patterns were blocks. (A block is a Life pattern of 4 ON cells in a 2x2 square. It's the most common stable pattern in regular Life. If one cell is turned OFF, it regenerates on the next step.) It's possible that over the longer term, the blocks in this pattern would slowly evaporate from double hits.

I then ran an experiment in information propagation. The main experiment was with a size 8192*8192 field. I made two random starting positions that matched on the left half but had different (random) right halves. I ran the positions for N generations and compared the results, determining how far the difference propagated into the left half of the field. I measured the position of the leftmost difference (from the midline) for the whole field, and the average position of the leftmost difference (averaged over all rows). The first number represents the maximum leftward penetration of the information from the differing right halves; the second number is the average leftward penetration. (My initial experiments were with 1024*1024 fields, but the 10K-generation case reached the left edge of the field, so I bumped the field size to 8192^2.)
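(A minimal sketch, not the original code, of the rule and the three mutation variants described above. NumPy, the function names, and sampling the mutation sites with replacement are assumptions added for illustration.)

import numpy as np

def life_step(field):
    # One Life step; cells beyond the edge of the field are treated as always 0.
    padded = np.pad(field, 1)
    neighbors = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # ON with exactly 3 ON neighbors; preserved with exactly 2; otherwise OFF.
    return ((neighbors == 3) | ((neighbors == 2) & (field == 1))).astype(np.uint8)

def mutate(field, rng, count=1024, mode="flip"):
    # Apply `count` random single-cell mutations after a time step
    # (sampled with replacement -- an assumption, not necessarily how
    # the original experiments picked cells).
    r = rng.integers(0, field.shape[0], count)
    c = rng.integers(0, field.shape[1], count)
    if mode == "flip":
        field[r, c] ^= 1      # flip the cell
    elif mode == "on":
        field[r, c] = 1       # force to ON (no-op if already ON)
    elif mode == "off":
        field[r, c] = 0       # force to OFF
    return field

rng = np.random.default_rng(0)
field = (rng.random((1024, 1024)) < 0.5).astype(np.uint8)   # 50% ON soup
for _ in range(3000):
    field = mutate(life_step(field), rng, mode="flip")      # ~.1% mutation per step
print("population:", int(field.sum()))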
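(And a sketch of how the leftward-penetration statistics might be computed for two runs `a` and `b` that start identical on the left half. Counting rows with no difference as zero penetration is an assumption about what "averaged over all rows" means.)

import numpy as np

def leftmost_difference(a, b):
    # Per-row position of the leftmost differing cell in the left half,
    # measured as columns leftward from the midline; 0 if the row matches.
    mid = a.shape[1] // 2
    diff = (a != b)[:, :mid]
    first = np.where(diff.any(axis=1), diff.argmax(axis=1), mid)
    penetration = mid - first
    return int(penetration.max()), float(penetration.mean())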
 steps    max    avg
    10     10      4
    20     20      9
    50     46     21
   100     78     36
   200    125     57
   500    200    106
  1000    302    154
  2000    432    219
  5000    501    274
 10000    566    288
 20000    566    290

After 20K steps, the maximum leftward information flow is 566 columns, and the average is about half that. This might be interpreted as evidence that a random field blocks or dilutes information flow to only 566 columns; or that a random change to a random field usually has a finite effect. Or maybe I just haven't run enough steps to see the ultimate effect.

One possible "practical" consequence is that big random Life patterns can be subdivided and run on many parallel machines, with an overlapping margin of perhaps 2000 cells, for many time steps, and only modest synchronization effort.

Rich  rcs@cs.arizona.edu
----- Original Message ----- From: "Richard Schroeppel" <rcs@CS.Arizona.EDU> To: <math-fun@mailman.xmission.com> Sent: Friday, April 11, 2003 10:44 PM Subject: [math-fun] cellular automata soup experiments [snip]
One possible "practical" consequence is that big random Life patterns can be subdivided and run on many parallel machines, with an overlapping margin of perhaps 2000 cells, for many time steps, and only modest synchronization effort.
If you have two adjacent swaths, you can give them identical margins of, say, 2000 cells. After 1000 generations, half of each margin could be bad, so you overwrite the bad halves of both margins. This ensures that no matter how fast effects travel, the whole image will always be correct. The margin correction step might be faster than the neighborhood process itself.

I used to do exactly this on the Binary Image Processor in 1975, but the swaths were 36 bits wide minus an 8-bit margin on each side. I updated margins every fourth step. We were computing, I would guess, about 5 neighborhoods per microsecond, including overhead. Of course that was using one processor, not many, but the principle is the same. We were among the first experimenters with Life. Unfortunately we were supposed to be developing a product, so we couldn't compete with Gosper and MIT at finding new phenomena (ignoring any difference in talent).

Steve Gray
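(A minimal sketch of the margin-refresh scheme as described above, not the original BIP code. It assumes `life_step` is an ordinary Life update that treats cells beyond the array edge as OFF, and that each swath carries a private copy of the same `overlap` columns around the seam; the parameter names are illustrative.)

def run_two_swaths(left, right, steps, life_step, overlap=2000, refresh=1000):
    # left's last `overlap` columns and right's first `overlap` columns start
    # out as identical copies of the cells around the seam.
    # Correctness needs refresh <= overlap // 2.
    for t in range(1, steps + 1):
        left, right = life_step(left), life_step(right)
        if t % refresh == 0:
            # Effects travel at most one column per step, so after `refresh`
            # steps only the outer `refresh` columns of each copy can be bad.
            # Overwrite each bad half from the other swath, where those same
            # columns are still interior and therefore still correct.
            left[:, -refresh:] = right[:, overlap - refresh : overlap]
            right[:, :refresh] = left[:, -overlap : -overlap + refresh]
    return left, right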