Re: [math-fun] 473-gene "artificial bacterium"
We've been running this evolutionary experiment with Unix/Linux/OpenWRT/... for the past 50 years, and the outcome is clear: it is nearly impossible to come up with a mechanism that enforces small DNA/RNA/kernel size. Or, to flip this around, the marginal cost of adding additional "functionality" -- especially functionality that is executed only occasionally (i.e., small *dynamic* cost) -- is almost free.

Nature has apparently come to the same conclusion: functionality that hasn't been extensively used for 1,000 generations still remains more-or-less functional in humans today. Thus, it is likely that bacteria & viruses trapped in the Siberian permafrost for 50,000 to 100,000 years will still find humans reasonably resistant.

Good example: the indigenous Americans were cut off from Europeans for at least 10,000 and possibly as much as 20,000 years. Yes, Columbus brought over diseases that likely wiped out 60-80% of indigenous Americans. Nevertheless, these diseases didn't wipe them *all* out. (I don't recall specifically, but I think the indigenous Americans gave Columbus & the other explorers some diseases that Europeans hadn't seen for quite a while, as well.)

<shields up for incoming flack>

At 10:38 AM 5/12/2016, Allan Wechsler wrote:
A very late reply to this interesting thread. I am curious to know whether any biologists have tried letting evolution do this work: that is, breed some well-understood model organism (almost certainly E. coli K12 or something similar) and select for a small genome. I think genome size can be easily measured by electrophoresis. This ought to be easy to mechanize, and with a generation time of 17 minutes I suspect that evolution would start whittling away at the 4.6 Mbp genome pretty quickly. Leave it running for a few months and see how close it can get to this 531 Kbp goal line. I have a sneaking suspicion that it would surprise us.
2016-03-27 13:02 GMT-04:00 Warren D Smith <warren.wds@gmail.com>:
http://www.livescience.com/54165-artificial-bacterium-has-smallest-genome.ht...
531K base pairs.
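Allan's proposed protocol above is easy to caricature in software. Below is a hypothetical toy simulation (every rate and number in it is invented for illustration, not measured): each genome is just a size in base pairs, mutation occasionally deletes or inserts a chunk, and truncation selection keeps the smaller half of the population each generation, subject to an assumed hard floor of essential sequence at the 531 Kbp mark.

```python
import random

ESSENTIAL_BP = 531_000   # assumed floor: can't shrink below this and stay viable
START_BP = 4_600_000     # roughly the E. coli K12 genome

def mutate(size, rng):
    """Occasionally delete or insert a random chunk (rates are invented)."""
    if rng.random() < 0.10:                # deletion
        size -= rng.randint(1, 50_000)
    if rng.random() < 0.05:                # insertion / duplication
        size += rng.randint(1, 20_000)
    return max(size, ESSENTIAL_BP)         # below the floor = dead; clamping is a shortcut

def evolve(generations=2000, pop=100, seed=1):
    rng = random.Random(seed)
    genomes = [START_BP] * pop
    for _ in range(generations):
        genomes = [mutate(g, rng) for g in genomes]
        genomes.sort()
        genomes = genomes[: pop // 2] * 2  # truncation selection for small size
    return min(genomes)

print(evolve())
```

Real E. coli would of course not cooperate this neatly -- most deletions hit something essential long before the floor -- but the sketch shows why the selection side of the experiment is trivial to mechanize; the hard part is the biology.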
Here's a recent post that claims ~95% of native Americans were killed by disease Columbus brought; discusses why Europeans generally had stronger immune systems; and says it wasn't all one-way: the native Americans gave the Old World syphilis! http://www.todayifoundout.com/index.php/2014/03/native-americans-didnt-wipe-europeans-diseases/ -- Mike
On May 12, 2016, at 2:44 PM, Henry Baker <hbaker1@pipeline.com> wrote:
[…] Good example: the indigenous Americans were cut off from Europeans for at least 10,000 and possibly as much as 20,000 years. Yes, Columbus brought over diseases that likely wiped out 60-80% of indigenous Americans. Nevertheless, these diseases didn't wipe them *all* out. (I don't recall specifically, but I think the indigenous Americans gave Columbus & the other explorers some diseases that Europeans hadn't seen for quite a while, as well.)
<shields up for incoming flack>
Also, from Bill Ruddiman's article in Scientific American, March, 2005:

"An even worse catastrophe followed in the Americas after 1492, when Europeans introduced smallpox and a host of other diseases that killed around 50 million people, or about 90 percent of the pre-Columbian population. The American pandemic coincides with the largest CO2 drop of all, from 1550 to 1800."

[Pre-Columbian natives regularly burnt forests to encourage certain types of wildlife & to allow farming.]

"Global climate would have cooled as a result, until each pandemic passed and rebounding populations began cutting and burning forests anew."

From: https://physics.ucf.edu/~britt/Climate/Reading5-Did%20humans%20alter%20globa...

At 02:41 PM 5/12/2016, Mike Beeler wrote:
Here's a recent post that claims ~95% of native Americans were killed by disease Columbus brought;
discusses why Europeans generally had stronger immune systems; and says it wasn't all one-way:
the native Americans gave the Old World syphilis!
http://www.todayifoundout.com/index.php/2014/03/native-americans-didnt-wipe-europeans-diseases/
-- Mike
On 5/12/2016 11:44 AM, Henry Baker wrote:
We've been running this evolutionary experiment with Unix/Linux/OpenWRT/... for the past 50 years, and the outcome is clear: it is nearly impossible to come up with a mechanism that enforces small DNA/RNA/kernel size.
Or to flip this around, the marginal cost of adding additional "functionality" -- especially functionality that is executed only occasionally (i.e., small *dynamic* cost) -- is almost free.
Which also means that non-functional code is also almost free and may persist indefinitely. And some code may even go from non-functional to functional and vice versa, depending on mutations. Brent
Nature has apparently come to the same conclusion: functionality that hasn't been extensively used for 1,000 generations still remains more-or-less functional in humans today.
Thus, it is likely that bacteria & viruses trapped in the Siberian permafrost for 50,000 to 100,000 years will still find humans reasonably resistant.
Good example: the indigenous Americans were cut off from Europeans for at least 10,000 and possibly as much as 20,000 years. Yes, Columbus brought over diseases that likely wiped out 60-80% of indigenous Americans. Nevertheless, these diseases didn't wipe them *all* out. (I don't recall specifically, but I think the indigenous Americans gave Columbus & the other explorers some diseases that Europeans hadn't seen for quite a while, as well.)
<shields up for incoming flack>
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
Yes. Code that is "commented out" is still left in for documentation and to provide a fall-back should conditions change in a way that requires going back to a previous method.

I suspect that the same thing is going on in the genome, where evolution plays with alternatives, but keeps the older deprecated and disabled versions around (either in this particular individual, or in the pool of genes in the entire population) as fall-backs.

At 03:29 PM 5/12/2016, Brent Meeker wrote:
On 5/12/2016 11:44 AM, Henry Baker wrote:
We've been running this evolutionary experiment with Unix/Linux/OpenWRT/... for the past 50 years, and the outcome is clear: it is nearly impossible to come up with a mechanism that enforces small DNA/RNA/kernel size.
Or to flip this around, the marginal cost of adding additional "functionality" -- especially functionality that is executed only occasionally (i.e., small *dynamic* cost) -- is almost free.
Which also means that non-functional code is also almost free and may persist indefinitely. And some code may even go from non-functional to functional and vice versa, depending on mutations.
What Henry says makes perfect sense, since the conditions under which some old DNA was useful might very well return at some point.

Maybe this helps explain how surprised I often am when an organism seems to mutate into a better-adapted one in a ridiculously short time. The random mutation theory just doesn't seem to explain this, if you ask me.

--Dan
On May 13, 2016, at 8:38 AM, Henry Baker <hbaker1@pipeline.com> wrote:
Yes. Code that is "commented out" is still left in for documentation and to provide a fall-back should conditions change which require going back to a previous method.
I suspect that the same thing is going on in the genome, where evolution plays with alternatives, but keeps the older deprecated and disabled versions around (either in this particular individual, or in the pool of genes in the entire population) as fall-backs.
Don't implicitly assume that all codons in DNA were at some time useful. It appears that many, such as introns, repeated sequences, and viral fragments, are just free-riders on the replication machinery. It's what you would expect from a random mutation process: a lot of junk with no selection pressure against it can accumulate.

Something like 80% of human DNA is active, in the sense that it codes for some other molecule. But those other molecules are not necessarily functional.

http://gbe.oxfordjournals.org/content/early/2013/02/20/gbe.evt028.full.pdf+h...

How much of human DNA is actually functional for the organism is controversial, but comparison with what has been conserved by evolution in related species suggests that only around 15% is.

Brent

"The onion test is a simple reality check for anyone who thinks they can assign a function to every nucleotide in the human genome. Whatever your proposed functions are, ask yourself this question: Why does an onion need a genome that is about five times larger than ours?" --T. Ryan Gregory

On 5/13/2016 3:52 PM, Dan Asimov wrote:
What Henry says makes perfect sense, since the conditions under which some old DNA was useful might very well return at some point.
Maybe this helps explain how surprised I often am when an organism seems to mutate into a better-adapted one in a ridiculously short time. The random mutation theory just doesn't seem to explain this, if you ask me.
—Dan
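The "conserved fraction" comparison Brent mentions can be caricatured in a few lines. Below is a hypothetical toy (the sequences are invented and real comparative genomics uses far more sophisticated alignment and substitution models): line up orthologous sequences from related species and count the fraction of positions that are identical across all of them, as a crude stand-in for "conserved, hence probably functional."

```python
# Toy conservation estimate: the fraction of aligned positions identical across
# all species is taken as a crude proxy for the "functional" fraction.
# The sequences below are invented for illustration.

def conserved_fraction(alignment):
    """alignment: list of equal-length sequences (one per species)."""
    length = len(alignment[0])
    assert all(len(seq) == length for seq in alignment)
    conserved = sum(1 for column in zip(*alignment)
                    if len(set(column)) == 1)   # identical in every species
    return conserved / length

human = "ACGTTAGCACGTACGTTAGC"
chimp = "ACGTTAGCACGAACGTTAGC"
mouse = "ACGTAAGCTCGAACGATTGC"

print(conserved_fraction([human, chimp, mouse]))
```

The real estimates account for the neutral substitution rate -- sites can match by chance, and functional sites can differ -- which is why "around 15%" is an inference, not a direct count like this one.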
The rationale I've heard for plants having extra-large genomes is that they can't run from their enemies, and they have little motion to mix their genes outside of a confined geographical area. So the genome of the individual can't mooch off the pool of the entire species. (Perhaps this should be called the "No AirBnB" theory?)

At 04:56 PM 5/13/2016, Brent Meeker wrote:
Don't implicitly assume that all codons in DNA were at some time useful.
It appears that many, such as introns and repeated sequences and viral fragments are just free-riders on the replication machinery.
It's what you would expect from a random mutation process: a lot of junk with no selection pressure against it could accumulate.
Something like 80% of human DNA is active, in the sense that it codes for some other molecule.
But those other molecules are not necessarily functional.
http://gbe.oxfordjournals.org/content/early/2013/02/20/gbe.evt028.full.pdf+h...
How much of human DNA is actually functional for the organism is controversial, but comparison with what has been conserved by evolution in related species suggests that only around 15% is.
Brent
"The onion test is a simple reality check for anyone who thinks they can assign a function to every nucleotide in the human genome.
Whatever your proposed functions are, ask yourself this question: Why does an onion need a genome that is about five times larger than ours?"
--T. Ryan Gregory
A lot of DNA is part of the control code, such as controlling how it folds and what parts are exposed to activators and repressors.

On 13-May-16 19:56, Brent Meeker wrote:
Don't implicitly assume that all codons in DNA were at some time useful. It appears that many, such as introns, repeated sequences, and viral fragments, are just free-riders on the replication machinery. It's what you would expect from a random mutation process: a lot of junk with no selection pressure against it could accumulate.

Something like 80% of human DNA is active, in the sense that it codes for some other molecule. But those other molecules are not necessarily functional.
http://gbe.oxfordjournals.org/content/early/2013/02/20/gbe.evt028.full.pdf+h...
How much of human DNA is actually functional for the organism is controversial, but comparison with what has been conserved by evolution in related species suggests that only around 15% is.
Brent

"The onion test is a simple reality check for anyone who thinks they can assign a function to every nucleotide in the human genome. Whatever your proposed functions are, ask yourself this question: Why does an onion need a genome that is about five times larger than ours?" --T. Ryan Gregory
I suspect that it is possible to quantify these effects. I posit that there is a "long *fat* tail" of various useful snippets of DNA, where one can place these in order of usefulness, and the probability of the n'th such snippet falls off monotonically with n. Since it is a fat-tailed distribution, the integral to infinity doesn't converge.

https://en.wikipedia.org/wiki/Fat-tailed_distribution

The length of the tail for an individual is the size of the genome; the length of the tail for the species is the tail for the pool of the entire species. It is possible to discover so-called "bottlenecks" for a species in which the length of the tail for the species is only a bit larger than that of an individual -- this happens when the # of individuals becomes quite small -- a few tens or hundreds of individuals.

"A 2005 study from Rutgers University theorized that the native population of the Americas are the descendants of only **70** individuals who crossed the land bridge between Asia and North America."

"a bottleneck of the human population occurred c. 70,000 years ago, proposing that the human population was reduced to perhaps 10,000-30,000 individuals."

https://en.wikipedia.org/wiki/Population_bottleneck

At 03:52 PM 5/13/2016, Dan Asimov wrote:
What Henry says makes perfect sense, since the conditions under which some old DNA was useful might very well return at some point.
Maybe this helps explain how surprised I often am when an organism seems to mutate into a better-adapted one in a ridiculously short time.
The random mutation theory just doesn't seem to explain this, if you ask me.
--Dan
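Henry's "fat tail of usefulness" picture can be made concrete with a toy calculation (the tail shapes below are chosen for illustration, not fitted to anything): if the usefulness of the n'th snippet falls off like 1/n, the partial sums grow without bound, so there is no natural cutoff beyond which the remaining snippets stop mattering -- whereas a thin (geometric) tail converges almost immediately, and truncating it costs essentially nothing.

```python
def partial_sums(weight, n_terms):
    """Running totals of weight(1) + weight(2) + ... + weight(n_terms)."""
    total, out = 0.0, []
    for n in range(1, n_terms + 1):
        total += weight(n)
        out.append(total)
    return out

fat  = partial_sums(lambda n: 1.0 / n, 1_000_000)   # harmonic: diverges like log(n)
thin = partial_sums(lambda n: 0.5 ** n, 1_000_000)  # geometric: converges to 1

# The fat tail keeps growing: going from the first 1,000 snippets to the first
# 1,000,000 still adds about as much total "usefulness" as the first 1,000 did,
# so any fixed genome-size cutoff discards material that would eventually matter.
print(fat[999], fat[999_999])   # ~7.49 vs ~14.39
print(thin[999_999])            # ~1.0
```

On this picture, a population bottleneck is a forced truncation of the species-wide tail down to roughly the individual tail, which is exactly where a divergent tail says real usefulness gets lost.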
participants (5)
- Brent Meeker
- Dan Asimov
- Henry Baker
- Mike Beeler
- Mike Speciner