Re: [math-fun] Game Theory/Chess strategy question
Just as we can characterize a computer chess player as aggressive or defensive, I characterize evolution in the same way. There may be no "thought" behind either the computer or evolution, but we humans tend to see things this way, no matter what.
There is some evidence that evolution becomes more aggressive/attacking as the stress on the species becomes more severe -- e.g., famine, etc. Evolution seems to notice that what was working before isn't working now, and starts broadening its search for new niches.
Returning to the original question: In what types of games is an aggressive/attacking strategy optimal, and in what types of games is a defensive strategy optimal?
There is obviously a cost in building up defenses, when some of those same resources could be used to attack. In chess, you need to pay close attention to both in order to win.
What would be very interesting is a game with a "knob" that you could turn that would force the optimal strategy to smoothly change from defensive to offensive. (A toy example of such a knob is sketched below, after the quoted messages.)
I'm just wondering if there's any game theory literature on what types of games might favor the investment in defenses v. the investment in attacks.
At 01:58 PM 8/15/2014, Michael Kleber wrote:
Evolution certainly does "try", in the sense that it tries everything! But I think Henry's question still could make sense: in a game where the players' strategy is "try everything and do whatever works", do offensive or defensive things tend to work more?
In the case of evolution, the best you can hope for in a "defensive" setting is dominating your niche, i.e. expanding to its carrying capacity. In the long term there's no up side; species that diversify and expand their habitat will of course have more ways to grow over time. But at some point they will count as a different species, so I'm not sure *who* counts as winning when that happens.
Well, I suppose you could compete in the defensive niche by expanding your habitat! And actually beavers do exactly this. But in some sense that's actually an offensive strategy; I'm not sure the two categories are well-defined.
As Dawkins has pointed out eloquently, it makes more sense to think of *genes*, rather than species, when thinking about the evolution game. I don't think I know what offense or defense mean there.
--Michael
On Fri, Aug 15, 2014 at 1:46 PM, W. Edwin Clark <wclark@mail.usf.edu> wrote:
Does evolution "attempt", "try", "sacrifice" or "hope"? That's not my understanding of evolution.
On Fri, Aug 15, 2014 at 3:51 PM, Henry Baker <hbaker1@pipeline.com> wrote:
Good question; however, I'm focusing on the game of evolution itself, not on the strategies of individuals of the species.
So the real question is whether evolution attempts to protect a particular niche, versus attacking new niches. Many evolutionary biologists would claim that evolution tries to attack new niches with some significant effort, hence the willingness of evolution to sacrifice many failed experiments in genetic mixing in the hope of finding some new successful combinations.
At 12:40 PM 8/15/2014, Dan Asimov wrote:
What about rabbits, pill bugs, turtles, porcupines, squirrels?
--Dan
On Aug 15, 2014, at 6:44 AM, Henry Baker <hbaker1@pipeline.com> wrote:
Some evolutionary biologists have claimed that Darwinian evolution optimizes by using an almost purely offensive strategy.
I'm just wondering if there's any game theory literature on what types of games might favor the investment in defenses v. the investment in attacks.
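A minimal sketch of the "knob" Henry asks for, assuming a made-up 2x2 zero-sum game; the payoff matrix, the parameter k, and the function names below are invented, not taken from the thread. Each player either Attacks or Defends, the knob k in [0,1] reweights the two off-diagonal payoffs, and a simple grid search finds the attack probability that maximizes the row player's worst-case (security) value:

# Toy "knob" game: row player's payoffs (row maximizes, zero-sum).
#
#                 opponent Attacks    opponent Defends
#   I Attack            0                  1 - k
#   I Defend            k                    0

def security(p, k):
    """Worst-case expected payoff if I attack with probability p."""
    vs_attack = (1 - p) * k        # I score only when I defend and they attack
    vs_defend = p * (1 - k)        # I score only when I attack and they defend
    return min(vs_attack, vs_defend)

def optimal_attack_prob(k, steps=10001):
    """Grid search for the p that maximizes the security level."""
    grid = [i / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda p: security(p, k))

if __name__ == "__main__":
    for k in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"k = {k:.1f} -> attack with probability about {optimal_attack_prob(k):.2f}")

For this particular made-up matrix the optimum works out to p = k, so turning the knob slides best play smoothly from mostly defending to mostly attacking.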
For games such as chess, the optimum strategy is what it is, there's no choice about it. Attack and defence, aggression and resistance, are human concepts with no basis in reality.
Of course, we don't usually know what the optimum strategy is, so ideas about aggression and defense are useful framing concepts when trying to approach best play.
Lots of games have very different objectives than chess, where other concepts are more appropriate to describe playing styles.
Hex is a game I like a lot (thanks, Martin), in which offense is synonymous with defense -- which shows there may be no clear-cut definitions that can distinguish the two.
--Dan
I wonder if there necessarily *is* only one optimum strategy.
Of course in any given game -- let's say chess -- there just might accidentally be several strategies that are tied for optimum.
Even if not, if we define a tolerance vaguely by saying that even 10 moves ahead the best human players could not say which of 2 different strategies is better . . . there might be plenty of strategies like that.
Or, suppose we could eliminate draws from chess -- so everyone is just trying to win -- would there be a theorem that there is a unique optimal strategy?
That's really too vague to answer, but still.
--Dan
On Aug 15, 2014, at 4:37 PM, Dave Dyer <ddyer@real-me.net> wrote:
For games such as chess, the optimum strategy is what it is, there's no choice about it. Attack and defence, aggression and resistance, are human concepts with no basis in reality.
For chess, each possible move leads to a win, loss, or tie. Since there are usually many more than 3 moves available, the best move is usually not unique. Among the moves in each equivalence class, some might be characterized by us as more or less aggressive.
Game theory doesn't have a category of better or worse winning moves, but human analyzers might have other ideas. Working on the AIs for my robots, I frequently have to tweak them so (for example) they will win a game in as few moves as possible, or avoid making obviously stupid "giveaway" moves in a game they see as inevitably lost.
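Dave's tweak -- preferring the fastest win and, in a position that is lost anyway, the slowest loss -- can be sketched as a tie-break layered on an exhaustive search. Everything below (the subtraction game, in which players remove 1-3 counters and taking the last counter wins, and all the names) is made up for illustration; only the tie-breaking idea is his:

from functools import lru_cache

TAKES = (1, 2, 3)   # legal moves: remove 1, 2, or 3 counters

@lru_cache(maxsize=None)
def solve(pile):
    """Return (value, plies, move) for the player to move.
    value: +1 = win, -1 = loss; plies: game length under this policy."""
    if pile == 0:
        return (-1, 0, None)      # no counters left: the previous player just won
    best_key, best = None, None
    for take in TAKES:
        if take > pile:
            break
        child_value, child_plies, _ = solve(pile - take)
        value, plies = -child_value, child_plies + 1
        # Tie-break: among equal values, prefer fewer plies when winning
        # (fastest win) and more plies when losing (no early giveaways).
        key = (value, -plies) if value > 0 else (value, plies)
        if best_key is None or key > best_key:
            best_key, best = key, (value, plies, take)
    return best

if __name__ == "__main__":
    for pile in (5, 8, 13):
        value, plies, move = solve(pile)
        outcome = "win" if value > 0 else "loss"
        print(f"pile {pile}: {outcome} in {plies} plies, best first move: take {move}")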
How is a strategy defined? There may be a checkmate from any given position on the board (including the initial one), and so playing that set of moves is optimal, but that's not what one usually means by a strategy; that's a solution. Strategy implies some kind of heuristic weighting of finite look-aheads.
So is an aggressive strategy one that assumes your opponent will make a mistake, or won't look as far ahead as you do -- so that you may make a move that is poor against an equal player but good against a player who looks fewer moves ahead or has an inferior weighting function?
Brent Meeker
On 8/15/2014 5:41 PM, Dan Asimov wrote:
I wonder if there necessarily *is* only one optimum strategy.
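Brent's "heuristic weighting of finite look-aheads" can be written down directly as a depth-limited search with an evaluation function at the horizon. The sketch below reuses the same made-up subtraction game as above, with a deliberately crude heuristic; none of it comes from the thread, it only illustrates that the search depth and the horizon weighting are what distinguish his two players:

# Toy game again: remove 1-3 counters from a pile, taking the last counter wins.

def moves(pile):
    return [t for t in (1, 2, 3) if t <= pile]

def heuristic(pile):
    # Deliberately naive horizon guess: "big piles look good for the mover".
    return min(pile, 3) / 3.0

def negamax(pile, depth):
    """Approximate value for the player to move, searching `depth` plies."""
    if pile == 0:
        return -1.0                       # the previous player just took the last counter
    if depth <= 0:
        return heuristic(pile)            # heuristic weighting at the horizon
    return max(-negamax(pile - t, depth - 1) for t in moves(pile))

def choose(pile, depth):
    """Move preferred by a player who looks `depth` plies ahead."""
    return max(moves(pile), key=lambda t: -negamax(pile - t, depth - 1))

def play(pile, depth_first, depth_second):
    """Play out one game; returns 1 or 2, the player who takes the last counter."""
    player, depths = 1, {1: depth_first, 2: depth_second}
    while True:
        pile -= choose(pile, depths[player])
        if pile == 0:
            return player
        player = 3 - player

if __name__ == "__main__":
    # A shallow searcher with a crude horizon guess against a deeper one.
    print("winner:", play(12, depth_first=2, depth_second=6))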
In tic-tac-toe, there are at least two very different optimum strategies for the first player, depending on whether you choose to play in the centre or in the corner.
Sincerely,
Adam P. Goucher
On Saturday, August 16, 2014 at 1:41 AM, Dan Asimov <dasimov@earthlink.net> wrote:
I wonder if there necessarily *is* only one optimum strategy.
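Adam's claim can be checked by brute force; the sketch below is my own (only the claim itself is his). It computes the exact game value of the centre opening and of a corner opening; both come out as draws under best play, which is the best the first player can force in tic-tac-toe, so both openings are optimal even though the follow-up play differs:

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_move):
    """Game value from X's point of view: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(cell is not None for cell in board):
        return 0
    nxt = 'O' if to_move == 'X' else 'X'
    children = [value(board[:i] + (to_move,) + board[i + 1:], nxt)
                for i, cell in enumerate(board) if cell is None]
    return max(children) if to_move == 'X' else min(children)

def opening_value(square):
    board = tuple([None] * 9)
    board = board[:square] + ('X',) + board[square + 1:]
    return value(board, 'O')

if __name__ == "__main__":
    print("centre opening value:", opening_value(4))  # expect 0: a draw with best play
    print("corner opening value:", opening_value(0))  # expect 0: a draw with best play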
Speaking of strategies, consider this game (which arose from my failed attempts, including some conversations with the incredibly smart algebraist George Bergman, to define an infinite version of Hex on the hexagonally tessellated plane):
Given a countable set X -- WLOG let X = Z+ = {1,2,3,...,n,...} -- players alternately remove an element of their choice, always picking from only the remaining elements. The players know each other's plays.
The turns proceed, indexed by countable ordinals. The player who picks the last remaining element is the winner. By the well-ordering principle (equivalent to the Axiom of Choice), the winning player is well-defined.
[Note: There is a natural way to define whose turn it is at any limit ordinal. Namely, put the turn's ordinal index into Cantor normal form (< http://en.wikipedia.org/wiki/Cantor_normal_form#Cantor_normal_form >) and add up the integer coefficients. If the sum is even, it's the First player's turn; if odd, it's the Second's.]
Is there a strategy for the First or Second player? Or not?
(The formal definition of a strategy for a game of perfect information and well-ordered play like this one is: For whichever player P the strategy is for: Each time it's P's turn, the strategy dictates what P's play should be -- so that if the strategy is followed at each of P's turns, then P is guaranteed to win.
So it's a function from the set of all possible sequences of previous plays before any turn of P, to the set of all possible plays P has at that juncture.
It's very easy to show that for any *finite* game of perfect information between two players, one of them has a strategy. (Exercise.))
--Dan
___________________________________________________________________________
P.S. Then there's that proposed set-theory axiom, the Axiom of Determinacy, which says: given any subset A of [0,1], consider the game G_A for two players, who alternately choose either a 0 or a 1 -- knowing each other's plays -- to jointly produce a countably infinite binary sequence, and thus an element of [0,1].
If that element is in A, First wins; otherwise Second.
It's known that this is inconsistent with the Axiom of Choice, since Choice can be used to construct games in which neither player has a winning strategy.
On Aug 16, 2014, at 11:00 AM, Adam P. Goucher <apgoucher@gmx.com> wrote:
In tic-tac-toe, there are at least two very different optimum strategies for the first player, depending on whether you choose to play in the centre or in the corner
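For readers who prefer notation, Dan's turn rule from the Note above can be restated symbolically (nothing new here, just the rule in Cantor-normal-form language):

\[
\alpha \;=\; \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k,
\qquad \beta_1 > \beta_2 > \cdots > \beta_k, \quad c_i \in \mathbb{Z}_{>0},
\]
\[
\text{the move at turn } \alpha \text{ is made by }
\begin{cases}
\text{First}  & \text{if } c_1 + c_2 + \cdots + c_k \text{ is even},\\
\text{Second} & \text{if } c_1 + c_2 + \cdots + c_k \text{ is odd}.
\end{cases}
\]

The empty sum for alpha = 0 gives First the opening move, finite turn indices alternate as usual, and the coefficient sum for omega is 1, so that turn belongs to Second.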
I don't understand. Say the players alternately choose 1, 2, 3, 4, ...; who wins? Or is that not an allowed game?
Jim Propp
You're right -- I described it from memory without thinking it through.
CORRECTION: The winner is the first player to have *no integer left* to play.
According to the Note on how to determine whose turn it is after a limit ordinal, if First goes first originally, and the upcoming turn has index ordinal w (omega) (i.e., the turns so far have the order type of the nonnegative integers), then it is the Second player whose turn it is.
So if the positive integers are exhausted before the first limit ordinal (w), then the Second player is the one whose turn it is, and having no integer to choose, Second wins.
--Dan
I am with those who have said, in different words, that Henry's plausible-seeming query eventually founders on the difficulty of rigorizing the notions of "attack" and "defense". I suspect that these notions are intuitions that come from the *embodiment* or *model* of a given game. But when the game is stripped of its model, leaving only an abstract edge-colored directed graph of positions and legal moves, "attack" and "defense" lose their intuitive force, while the nature of what constitutes a good strategy is unchanged. One can even imagine re-embodying a given abstract game with a new model, in which some originally defensive moves now engage our "attack" intuitions.
Dan's transfinite game eludes me entirely, but it gives me an occasion to quibble with his definition of a strategy. For me, at least, a strategy need not produce an answer for *all possible sequences* of moves to that point, but rather only for *all sequences possible for a player following the strategy.* Perhaps this is what Dan meant and I am reading too strictly. That is, my strategy need not give me a well-defined response even if I am required to let my five-year-old cousin choose my move every once in a while.
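Allan's "abstract directed graph of positions and legal moves" can be made concrete with a small sketch (the graph and the names below are invented, not from the thread). Solving such a game is a mechanical backward induction over the graph, and nothing resembling attack or defense appears anywhere in the computation:

from functools import lru_cache

# A made-up acyclic game graph: position -> positions reachable in one move.
# Normal play convention: a player who cannot move loses.
MOVES = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d", "e"],
    "d": ["e"],
    "e": [],
}

@lru_cache(maxsize=None)
def mover_wins(pos):
    """True iff the player to move from pos has a winning strategy."""
    # Win exactly when some move leads to a position that is lost for the opponent.
    return any(not mover_wins(nxt) for nxt in MOVES[pos])

if __name__ == "__main__":
    for pos in MOVES:
        print(pos, "win for the mover" if mover_wins(pos) else "loss for the mover")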
Yes, I was speaking only about prior sequences of moves that are possible according to the rules of the game -- and yes, that could occur with the lucky player following that strategy. I did not say it, but thank you, Allan, for stating it precisely.
((( Also, I know which player in the integer-choosing game (if any) has a winning strategy, and can prove it. It's kind of interesting. Anyone else up for the challenge? )))
--Dan
First, two remarks. (1) It's a bit weird to have the winner be the first player with no legal move; the usual convention in combinatorial game theory is the reverse. (2) I concede that "parity of sum of coefficients in Cantor normal form" is a *simple* rule but have trouble seeing why I should consider it *natural*.
[Next two paragraphs serve mostly as filler to enable people who don't want to read the answer to avoid doing so.]
Now. It looks at first as if there should be a strategy-stealing argument showing that the game can't be a second-player win: if second player wins then first player plays 0 and then copies second player's strategy but with all numbers increased by 1 and wins, contradiction. But once we reach turn w the players' roles are no longer swapped so this doesn't work.
It also looks at first as if there should be a simple strategy enabling the second player to win with the normal play convention (i.e., reverse of what's stipulated here): always play first player's move xor 1. But again this doesn't work with the CNF rule at limit ordinals because at w the second player has to play without an immediately preceding move to look at.
[OK, now here's the actual answer.]
However, it looks as if the following works with the misere convention Dan stipulated: the second player always plays the smallest available number. Because then at w there are no numbers left. (Proof: For n finite, after n full turns all numbers <n are taken, by induction on n; hence at w all numbers are taken.) And at w it's the second player's turn, boom.
With the normal play convention, the *first* player adopts that strategy instead and wins.
(Lightly unified version of the discussion above: either player can arrange that the game ends at turn w with no numbers left to play; the game cannot end before turn w; therefore, the winner is whoever wins when the game ends at turn w.)
-- g
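The inductive step in Gareth's argument ("after n full turns all numbers < n are taken") is easy to sanity-check on finite prefixes of the game. The sketch below is illustrative only (the names and the random stand-in for First are mine): Second always takes the smallest positive integer not yet chosen, and after each round r the code asserts that {1, ..., r} has been exhausted -- the 1-based version of Gareth's invariant for Dan's X = Z+, and the reason nothing survives to turn w:

import random

def second_move(taken):
    """Smallest positive integer not yet taken."""
    k = 1
    while k in taken:
        k += 1
    return k

def check(rounds=1000, seed=0):
    rng = random.Random(seed)
    taken = set()
    for r in range(1, rounds + 1):
        # First: any not-yet-taken positive integer (an arbitrary opponent).
        while True:
            m = rng.randint(1, 10 * rounds)
            if m not in taken:
                break
        taken.add(m)
        # Second: always the smallest remaining number.
        taken.add(second_move(taken))
        # Invariant from the proof: after r full rounds, {1, ..., r} is exhausted.
        assert all(k in taken for k in range(1, r + 1)), r
    print("invariant held for", rounds, "rounds")

if __name__ == "__main__":
    check()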
For what it's worth, I'm very familiar with a game in which offense/defense decisions are paramount: Chinese checkers. I'm familiar with it because it's my mother's favorite game, which she taught me to play when I was six. Now she's 100 and she's still very good at it. At each turn, a major factor in your choice is between blocking your opponent's path and advancing your own.
Brent Meeker