The fundamental theorem of poker
The fundamental theorem of poker, introduced by David Sklansky, states that every time you play your hand the way you would if you could see your opponent's cards, you gain, and every time your opponent plays his cards differently from the way he would play them if he could see your cards, you gain.[1] This theorem is the foundation for many poker strategy topics. For example, bluffing and slow-playing (explained below) are examples of using deception to induce your opponents to play differently than they would if they could see your cards. There are some exceptions to the fundamental theorem in certain multi-way pot situations, as described in Morton's theorem.
Pot odds, implied odds and poker probabilities
The relationship between pot odds and odds of winning is one of the most important concepts in poker strategy. Pot odds are the ratio of the size of the pot to the size of the bet required to stay in the pot.[1] For example, if a player must call $10 for a chance to win a $40 pot (not including his $10 call), his pot odds are 4-to-1. To have a positive expectation, a player's odds of winning must be better than his pot odds. If the player's odds of winning are also 4-to-1 (20% chance of winning), and if he plays the pot five times, his expected return is to break even (losing four times and winning once).
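These relationships can be sketched in Python; the helper names are illustrative, and the numbers are those of the $40-pot example above:

```python
def pot_odds(pot, call):
    """Ratio of the current pot to the cost of calling; 4.0 means 4-to-1."""
    return pot / call

def break_even_equity(pot, call):
    """Minimum probability of winning needed for a call to break even."""
    return call / (pot + call)

def expected_value(pot, call, p_win):
    """EV of calling: win the pot with probability p_win, lose the call otherwise."""
    return p_win * pot - (1 - p_win) * call

print(pot_odds(40, 10))               # 4.0, i.e. 4-to-1
print(break_even_equity(40, 10))      # 0.2, i.e. a 20% chance of winning
print(expected_value(40, 10, 0.20))   # 0.0: the player exactly breaks even
```

With exactly 20% equity the call neither gains nor loses money over many trials, matching the break-even description in the text.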
Implied odds is a more complicated concept, though related to pot odds. The implied odds on a hand are based not on the money currently in the pot, but on the expected size of the pot at the end of the hand. When facing an even money situation (like described in the previous paragraph) and holding a strong drawing hand (say a four-flush) a skilled player will consider calling a bet or even opening based on their implied odds. This is particularly true in multi-way pots, where it is likely that one or more opponents will call all the way to showdown.
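A minimal sketch of the same break-even calculation adjusted for implied odds; the $20 of expected future winnings is a made-up figure for illustration:

```python
def implied_break_even_equity(pot, call, expected_future_bets):
    """Break-even equity when expected future winnings are counted alongside the current pot."""
    return call / (pot + expected_future_bets + call)

# $10 to call into a $40 pot: 20% equity is needed on pot odds alone,
# but only about 14% if we expect to win an extra $20 on later betting rounds.
print(implied_break_even_equity(40, 10, 0))   # 0.2
print(implied_break_even_equity(40, 10, 20))  # ~0.143
```

This is why a drawing hand that is only break-even on current pot odds can still be a profitable call in a multi-way pot.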
Deception
By employing deception, a poker player hopes to induce his opponent(s) to act differently than they would if they could see his cards. Bluffing is a form of deception to induce opponents to fold superior hands. If opponents observe that a player never bluffs, they won't call his bets unless they have very good hands. Slow-playing is deceptive play in poker that is roughly the opposite of bluffing: betting weakly with a strong holding rather than betting strongly with a weak one. If opponents observe that a player never slow plays, they can pounce at any sign of weakness.
Position
Position refers to the order in which players are seated around the table and the strategic consequences of this. Generally, players in earlier position (who have to act first) need stronger hands to bet or raise than players in later position. For example, if there are five opponents yet to act behind a player, there is a greater chance one of the opponents will have a better hand than if there were only one opponent yet to act. Being in late position is an advantage because a player gets to see how his opponents in earlier position act (which provides the player more information about their hands than they have about his). Position is one of the most vital elements to understand in order to be a long-term winning player. As a player's position improves, so too does the range of cards with which he can profitably enter a hand. Conversely, this commonly held knowledge can be used to an intelligent poker player's advantage. Against observant opponents in tournament-style play (where the amount of chips a player has is finite, which is to say there are no 'rebuys'), a well-timed raise with any two cards can 'steal the blinds' from passive players.
Reasons to raise
Unlike calling, raising has an extra way to win: opponents may fold. An opening bet may be considered a raise from a strategy perspective. David Sklansky gives seven reasons for raising, summarized below.[1]
- To get more money in the pot when a player has the best hand: If a player has the best hand, raising for value enables him to win a bigger pot.
- To drive out opponents when a player has the best hand: If a player has a made hand, raising may protect his hand by driving out opponents with drawing hands who may otherwise improve to a better hand.
- To bluff or semi-bluff: If a player raises with an inferior or drawing hand, the player may induce a better hand to fold. In the case of semi-bluff, if the player is called, he still has a chance to improve to a better hand (and also win a larger pot).
- To get a free card: If a player raises with a drawing hand, his opponent may check to him on the next betting round, giving him a chance to get a free card to improve his hand.
- To gain information: If a player raises with an uncertain hand, he gains information about the strength of his opponent's hand if he is called. Players may use an opening bet on a later betting round (probe or continuation bets) to gain information by being called or raised (or may win the pot immediately).
- To drive out worse hands when a player's own hand may be second best: With cards to come, raising to drive out opponents with worse hands (but who might improve) may increase the expected value of a second-best hand by giving it a higher probability of winning in the event it improves.
- To drive out better hands when a drawing hand bets: If an opponent with an apparent drawing hand has bet before a player, the player may raise so that opponents behind him who may have a better hand fold rather than call both a bet and a raise. This is a form of isolation play.
Reasons to call
There are several reasons for calling a bet or raise, summarized below.
- To see more cards: With a drawing hand, a player may be receiving the correct pot odds with the call to see more cards.
- To limit loss in equity: Calling (rather than raising) may be appropriate when a player has adequate pot odds to call but would lose equity on any additional money contributed to the pot.
- To avoid a re-raise: Only calling (and not raising) denies the original bettor the option of re-raising. However, this is only completely safe in case the player is last to act (i.e. "closing the action").
- To conceal the strength of a player's hand: If a player has a very strong hand, he might smooth call on an early betting round to avoid giving away the strength of his hand on the hope of getting more money into the pot in later betting rounds.
- To manipulate pot odds: By calling (not raising), a player offers any opponents yet to act behind him more favorable pot odds to also call. For example, if a player has a very strong hand, a smooth call may encourage opponents behind him to overcall, building the pot. Particularly in limit games, building the pot in an earlier betting round may induce opponents to call future bets in later betting rounds because of the pot odds they will be receiving.
- To set up a bluff on a later betting round: Sometimes referred to as a long-ball bluff, calling on an earlier betting round can set up a bluff (or semi-bluff) on a later betting round. A recent online term for "long-ball bluffing" is floating.[2]
Gap concept
The gap concept states that a player needs a better hand to play against someone who has already opened (or raised) the betting than he would need to open himself.[3] The gap concept reflects that players prefer to avoid confrontations with another player who has already indicated strength, and that calling only has one way to win (by having the best hand), whereas opening may also win immediately if your opponent(s) fold.
Sandwich effect
Related to the gap effect, the sandwich effect states that a player needs a stronger hand to stay in a pot when there are opponents yet to act behind him.[2] Because the player doesn't know how many opponents will be involved in the pot or whether he will have to call a re-raise, he doesn't know what his effective pot odds actually are. Therefore, a stronger hand is desired as compensation for this uncertainty.
Loose/tight play
Loose players play relatively more hands and tend to continue with weaker hands; hence they don't often fold. Tight players play relatively fewer hands and tend not to continue with weaker hands; hence they often fold. The following concepts are applicable in loose games (and their inverse in tight games):[1]
- Bluffs and semi-bluffs are less effective because loose opponents are less likely to fold.
- Requirements for continuing with made hands may be lower because loose players may also be playing lower value hands.
- Drawing to incomplete hands, like flushes, tends to be more valuable as draws will often get favorable pot odds and a stronger hand (rather than merely one pair) is often required to win in multi-way pots.
Aggressive/passive play
Aggressive play refers to betting and raising. Passive play refers to checking and calling. Unless passive play is being used deceptively as mentioned above, aggressive play is generally considered stronger than passive play because of the bluff value of bets and raises and because it offers more opportunities for your opponents to make mistakes.[1]
See the article on aggressive play for more details.
Hand reading and tells
Hand reading is the process of making educated guesses about the possible cards an opponent may hold based on the sequence of actions in the pot. The term 'hand reading' is something of a misnomer, as a skilled player does not attempt to put an opponent on an exact hand; rather, he attempts to narrow the possibilities down to a range of hands consistent with his opponent's past actions. A tell is a detectable change in an opponent's behavior or demeanor that gives clues about his hand. Educated guesses about an opponent's cards can help a player avoid mistakes in his own play, induce mistakes by his opponent(s), or influence the player to take actions that he would normally not take under the circumstances. For example, a tell might suggest an opponent has missed a draw, so a player seeing it may decide a bluff would be more effective than usual.
Table image and opponent profiling
By observing the tendencies and patterns of one's opponents, one can make more educated guesses about others' potential holdings. For example, if a player has been playing extremely tightly (playing very few hands), then when he/she finally enters a pot, one may surmise that he/she has stronger than average cards. One's table image is the perception by one's opponents of one's own pattern of play. A player can leverage his/her table image by playing out of character and thereby inducing his/her opponents to misjudge his/her hand and make a mistake.
Equity
A player's equity in a pot is his expected share of the pot, expressed either as a percentage (probability of winning) or expected value (amount of pot * probability of winning). Negative equity, or loss in equity, occurs when contributing to a pot with a probability of winning less than 1 / (number of opponents matching the contribution).
- Example
- Alice contributes $12 to a pot and is matched by two other opponents. Alice's $12 contribution "bought" the chance to win $36. If Alice's probability of winning is 50%, her equity in the $36 pot is $18 (a gain in equity because her $12 is now "worth" $18). If her probability of winning is only 10%, Alice loses equity because her $12 is now only "worth" $3.60 (amount of pot * probability of winning).
If there is already money in the pot, the pot odds associated with a particular play may indicate a positive expected value even though it may have negative equity.
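A rough sketch of the equity arithmetic from Alice's example above (the function name is ours):

```python
def equity(pot, p_win):
    """A player's expected share of the pot: pot size times probability of winning."""
    return pot * p_win

# Alice's $12 contribution to a three-way $36 pot:
print(equity(36, 0.50))  # 18.0: a gain, her $12 is now "worth" $18
print(equity(36, 0.10))  # ~3.6: a loss in equity
# Negative-equity threshold with three players matching the contribution:
print(1 / 3)             # below ~33% to win, the $12 buys back less than $12
```

The threshold line restates the 1 / (number of opponents matching the contribution) rule from the definition.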
- Texas hold 'em example
- Alice holds J♦7♠. Bob holds K♥6♠. After the flop, the board is 5♥6♥8♦. If both hands are played to a showdown, Alice has a 45% chance to win, Bob has a 53% chance to win, and there is a 2% chance to split the pot. The pot currently has $51. Alice goes all-in for $45, reasoning that Bob has to call to stay in the game. Alice's implied pot odds for the all-in bet are 32%. Bob's simple pot odds for the call are also 32%. Since both have a probability of winning greater than 32%, both plays (the raise and the call) have a positive expectation. However, since Bob has more equity in the pot than Alice (53% vs. 45%), Alice would have been better off playing the pot as cheaply as possible. When Alice went all-in, she gave up the difference in equity on the money she contributed to the pot.
Also see fold equity.
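The 32% figure in the hold 'em example can be checked directly (the helper name is ours):

```python
def call_pot_odds(pot_before_call, call):
    """Fraction of the final pot a caller must contribute: the break-even win probability."""
    return call / (pot_before_call + call)

# Bob faces a $45 call into the $51 pot plus Alice's $45 all-in:
print(round(call_pot_odds(51 + 45, 45), 2))  # 0.32
# Both 45% and 53% exceed 32%, so the raise and the call are both +EV,
# but Alice (45% vs. Bob's 53%) loses equity on every dollar she puts in.
```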
Short-handed considerations
When playing short-handed (at a table with fewer players than normal), players must loosen up their play (play more hands) for several reasons:[1]
- There is less likelihood of another player having a strong hand because there are fewer players.
- Each player's share of the forced bets increases because there are fewer players contributing to the forced bets, thus waiting for premium hands becomes more expensive.
This type of situation comes up most often in tournament-style play. In a cash game, the adjustments are very similar, but not quite as drastic, because the table can ask for what is known as a 'rake break.' A rake break occurs when the floor-man, who represents the casino, agrees to take a smaller portion than usual for the hand. For example, a casino might normally receive 10% of the pot up to 5 dollars as a 'rake.' In this case the table would only owe 10% up to 3 dollars until there are a sufficient number of players again. In online poker, rake breaks are determined automatically.
Structure considerations
The blinds and antes and limit structure of the game have a significant influence on poker strategy. For example, it is easier to manipulate pot odds in no-limit and pot-limit games than in limit games. In tournaments, as the size of the forced bets relative to the chip stacks grows, pressure is placed on players to play pots to avoid being anted/blinded away.[4]
Representation of games
The games studied in game theory are well-defined mathematical objects. A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
Extensive form
The extensive form can be used to formalize games in which the order of moves is important. Games here are often presented as trees (as pictured to the left). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. Each line out of the vertex represents a possible action for that player. The payoffs are specified at the bottom of the tree.
In the game pictured here, there are two players. Player 1 moves first and chooses either F or U. Player 2 sees Player 1's move and then chooses A or R. If Player 1 chooses U and Player 2 then chooses A, Player 1 gets 8 and Player 2 gets 2.
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e., the players do not know at which point they are), or a closed line is drawn around them.
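Backward induction on such a tree can be sketched in a few lines of Python. Only the (8, 2) outcome after U then A comes from the text; the other payoffs here are invented for illustration:

```python
# Terminal nodes are payoff tuples; internal nodes pair a player with their moves.
tree = ("P1", {"F": (1, 1),
               "U": ("P2", {"A": (8, 2),
                            "R": (0, 0)})})

def backward_induction(node):
    """Return (payoffs, move sequence) assuming each player maximizes their own payoff."""
    if isinstance(node, tuple) and node[0] in ("P1", "P2"):
        player, moves = node
        idx = 0 if player == "P1" else 1
        best = None
        for move, child in moves.items():
            payoffs, seq = backward_induction(child)
            if best is None or payoffs[idx] > best[0][idx]:
                best = (payoffs, [move] + seq)
        return best
    return node, []  # a terminal payoff tuple

print(backward_induction(tree))  # ((8, 2), ['U', 'A'])
```

With these payoffs, Player 2 would choose A after U (2 > 0), so Player 1 chooses U (8 > 1).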
Normal form
|                       | Player 2 chooses Left | Player 2 chooses Right |
|-----------------------|-----------------------|------------------------|
| Player 1 chooses Up   | 4, 3                  | −1, −1                 |
| Player 1 chooses Down | 0, 0                  | 3, 4                   |

Normal form or payoff matrix of a 2-player, 2-strategy game
The normal (or strategic) form game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
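In code, a normal form game is naturally a mapping from strategy profiles to payoff tuples; this sketch uses the 2×2 game from the table above:

```python
# Payoff matrix keyed by (row strategy, column strategy).
payoffs = {("Up", "Left"): (4, 3), ("Up", "Right"): (-1, -1),
           ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4)}

def payoff(row_move, col_move, player):
    """player 0 is the row player (Player 1), player 1 the column player (Player 2)."""
    return payoffs[(row_move, col_move)][player]

print(payoff("Up", "Left", 0))  # 4: Player 1's payoff in the example from the text
print(payoff("Up", "Left", 1))  # 3: Player 2's payoff
```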
Characteristic function form
In cooperative games with transferable utility no individual payoffs are given. Instead, the characteristic function determines the payoff of each coalition. The standard assumption is that the empty coalition obtains a payoff of 0.
The origin of this form is to be found in the seminal book of von Neumann and Morgenstern who, when studying coalitional normal form games, assumed that when a coalition C forms, it plays against the complementary coalition (N ∖ C) as if they were playing a 2-player game. The equilibrium payoff of C in this game is its characteristic value. There are now different models to derive coalitional values from normal form games, but not all games in characteristic function form can be derived from normal form games.
Formally, a characteristic function form game (also known as a TU-game) is given as a pair (N, v), where N denotes the set of players and v : 2^N → R is a characteristic function.
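A TU-game can be represented directly as a mapping from coalitions to worths. The worths below are made-up numbers for illustration; the superadditivity check is a common sanity test on such games:

```python
from itertools import combinations

# A 3-player TU-game (N, v): v maps each coalition to its worth.
N = {1, 2, 3}
v = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
     frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
     frozenset({1, 2, 3}): 6}

def is_superadditive(v, N):
    """Check v(S ∪ T) >= v(S) + v(T) for all disjoint coalitions S and T."""
    coalitions = [frozenset(c) for k in range(len(N) + 1)
                  for c in combinations(N, k)]
    return all(v[s | t] >= v[s] + v[t]
               for s in coalitions for t in coalitions if not s & t)

print(is_superadditive(v, N))  # True for these worths
```

Note the empty coalition carries worth 0, as the standard assumption in the text requires.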
The characteristic function form has been generalised to games without the assumption of transferable utility.
Partition function form
The characteristic function form ignores the possible externalities of coalition formation. In the partition function form the payoff of a coalition depends not only on its members, but also on the way the rest of the players are partitioned (Thrall & Lucas 1963).
Application and challenges
Game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.
Game theoretic analysis was initially used to study animal behavior by Ronald Fisher in the 1930s (although even Charles Darwin makes a few informal game theoretic statements). This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his book Evolution and the Theory of Games.
In addition to being used to predict and explain behavior, game theory has also been used to attempt to develop theories of ethical or normative behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game theoretic arguments of this type can be found as far back as Plato.[1]
Political science
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, positive political theory, and social choice theory. In each of these areas, researchers have developed game theoretic models in which the players are often voters, states, special interest groups, and politicians.
For early examples of game theory applied to political science, see the work of Anthony Downs. In his book An Economic Theory of Democracy (Downs 1957), he applies a Hotelling firm-location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. The theorist shows how the political candidates will converge to the ideology preferred by the median voter. For more recent examples, see the books by Steven Brams, George Tsebelis, Gene M. Grossman and Elhanan Helpman, or David Austen-Smith and Jeffrey S. Banks.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and whether promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a nondemocracy (Levy & Razin 2003).
Economics and business
Economists have long used game theory to analyze a wide array of economic phenomena, including auctions, bargaining, duopolies, fair division, oligopolies, social network formation, and voting systems. This research usually focuses on particular sets of strategies known as equilibria in games. These "solution concepts" are usually based on what is required by norms of rationality. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. So, if all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
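The "best response to the other strategies" condition can be checked by brute force for small games. This sketch enumerates pure-strategy Nash equilibria, using the 2×2 game from the "Normal form" section above:

```python
from itertools import product

def pure_nash_equilibria(payoffs, rows, cols):
    """Strategy pairs where neither player gains by a unilateral deviation."""
    eqs = []
    for r, c in product(rows, cols):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_best and col_best:
            eqs.append((r, c))
    return eqs

g = {("Up", "Left"): (4, 3), ("Up", "Right"): (-1, -1),
     ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4)}
print(pure_nash_equilibria(g, ["Up", "Down"], ["Left", "Right"]))
# [('Up', 'Left'), ('Down', 'Right')]
```

This coordination-style game has two pure equilibria; at either one, each player's strategy is the best reply to the other's.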
The payoffs of the game are generally taken to represent the utility of individual players. Often in modeling situations the payoffs represent money, which presumably corresponds to an individual's utility. This assumption, however, can be faulty.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of some particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Naturally one might wonder to what use should this information be put. Economists and business professors suggest two primary uses: descriptive and prescriptive.
Descriptive
The first known use is to inform us about how actual human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has come under recent criticism. First, it is criticized because the assumptions made by game theorists are often violated. Game theorists may assume players always act in a way to directly maximize their wins (the Homo economicus model), but in practice, human behavior often deviates from this model. Explanations of this phenomenon are many: irrationality, new models of deliberation, or even different motives (such as altruism). Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, additional criticism of this use of game theory has been levied because some experiments have demonstrated that individuals do not play equilibrium strategies. For instance, in the centipede game, the guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments.[2]
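The gap between equilibrium play and observed play is easy to see in the guess 2/3 of the average game: iterated best responses shrink the guesses toward the Nash equilibrium of 0, yet real players rarely go all the way. A minimal sketch (the starting average of 50 is an arbitrary illustration):

```python
def iterate_guesses(initial_average, rounds):
    """Each round, every player best-responds with 2/3 of the previous round's average."""
    avg = initial_average
    history = [avg]
    for _ in range(rounds):
        avg *= 2 / 3
        history.append(avg)
    return history

print(iterate_guesses(50, 10)[-1])  # ~0.87: guesses shrink toward 0, the Nash equilibrium
```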
Alternatively, some authors claim that Nash equilibria do not provide predictions for human populations, but rather provide an explanation for why populations that play Nash equilibria remain in that state. However, the question of how populations reach those points remains open.
Some game theorists have turned to evolutionary game theory in order to resolve these worries. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
Prescriptive or normative analysis
|           | Cooperate | Defect |
|-----------|-----------|--------|
| Cooperate | −1, −1    | −10, 0 |
| Defect    | 0, −10    | −5, −5 |

The Prisoner's Dilemma
On the other hand, some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a Nash equilibrium of a game constitutes one's best response to the actions of the other players, playing a strategy that is part of a Nash equilibrium seems appropriate. However, this use for game theory has also come under criticism. First, in some cases it is appropriate to play a non-equilibrium strategy if one expects others to play non-equilibrium strategies as well. For an example, see Guess 2/3 of the average.
Second, the Prisoner's dilemma presents another potential counterexample. In the Prisoner's Dilemma, each player pursuing his own self-interest leads both players to be worse off than had they not pursued their own self-interests.
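The dilemma can be verified numerically from the payoff matrix above; this sketch checks that defection is each player's dominant strategy even though mutual defection is worse for both than mutual cooperation:

```python
pd = {("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
      ("D", "C"): (0, -10), ("D", "D"): (-5, -5)}

# Defect strictly dominates Cooperate for the row player,
# whatever the column player does:
assert pd[("D", "C")][0] > pd[("C", "C")][0]
assert pd[("D", "D")][0] > pd[("C", "D")][0]

# ...yet mutual defection leaves both players worse off than mutual cooperation:
assert pd[("D", "D")][0] < pd[("C", "C")][0]
assert pd[("D", "D")][1] < pd[("C", "C")][1]
print("self-interested play leads to (-5, -5) instead of (-1, -1)")
```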
Biology
|      | Hawk     | Dove  |
|------|----------|-------|
| Hawk | v−c, v−c | 2v, 0 |
| Dove | 0, 2v    | v, v  |

The hawk-dove game
Unlike economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best known equilibrium in biology is the evolutionarily stable strategy (or ESS), first introduced in (Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
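Using the hawk-dove payoffs from the table above, the mixed ESS can be found as the hawk frequency at which hawks and doves do equally well; it comes out to v/c when the cost of fighting c exceeds the resource value v. A sketch with illustrative values:

```python
def hawk_payoff(p, v, c):
    """Expected payoff to a hawk when a fraction p of the population plays hawk."""
    return p * (v - c) + (1 - p) * 2 * v

def dove_payoff(p, v, c):
    """Expected payoff to a dove against the same population mix."""
    return p * 0 + (1 - p) * v

v, c = 2, 4      # illustrative values with c > v
p_ess = v / c    # at the ESS, hawks occur at frequency v/c
print(hawk_payoff(p_ess, v, c), dove_payoff(p_ess, v, c))  # equal payoffs at the ESS
```

Neither pure strategy can invade at this mix, which is what makes it evolutionarily stable.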
In biology, game theory has been used to understand many different phenomena. It was first used to explain the evolution (and stability) of approximate 1:1 sex ratios. Fisher (1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication (Harper & Maynard Smith 2003). The analysis of signaling games and other communication games has provided some insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization.
Biologists have used the game of chicken to analyze fighting behavior and territoriality.[citation needed]
Maynard Smith, in the preface to Evolution and the Theory of Games, writes, "[p]aradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.[3]
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival.[4] All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary reasoning behind this selection with the inequality c < b × r: the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r.[4] The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things, not just relatives, and we assume the discrepancy between all humans accounts for only approximately 1% of the diversity in the playing field, a coefficient that was ½ in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still and the discrepancies smaller.
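Hamilton's rule is simple enough to check directly; the cost and benefit figures below are arbitrary illustrations:

```python
def altruism_favored(cost, benefit, relatedness):
    """Hamilton's rule: an altruistic act is favored by selection when c < b * r."""
    return cost < benefit * relatedness

# Helping a full sibling (r = 1/2) pays off only if the benefit
# to the sibling is more than twice the cost to the altruist:
print(altruism_favored(cost=1, benefit=3, relatedness=0.5))  # True
print(altruism_favored(cost=2, benefit=3, relatedness=0.5))  # False
```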
Computer science and logic
Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.
Separately, game theory has played a role in online algorithms, in particular the k-server problem, which has in the past been referred to as games with moving costs and request-answer games (Ben David, Borodin & Karp et al. 1994). Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, and especially of online algorithms.
The field of algorithmic game theory combines computer science concepts of complexity and algorithm design with game theory and economic theory. The emergence of the internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets.[5]
Philosophy
|      | Stag | Hare |
|------|------|------|
| Stag | 3, 3 | 0, 2 |
| Hare | 2, 0 | 2, 2 |

Stag hunt
Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This latter suggestion has been pursued by several philosophers since Lewis (Skyrms (1996), Grim, Kokalis, and Alai-Tafti et al. (2004)). Following Lewis's (1969) game-theoretic account of conventions, Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.[6]
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from agents' interactions. Philosophers who have worked in this area include Bicchieri (1989, 1993),[7] Skyrms (1990),[8] and Stalnaker (1999).[9]
In ethics, some authors have attempted to pursue the project, begun by Thomas Hobbes, of deriving morality from self-interest. Since games like the Prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).[10]
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the Prisoner's dilemma, Stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1999)).
Some assumptions used in some parts of game theory have been challenged in philosophy; psychological egoism states that rationality reduces to self-interest, a claim debated among philosophers (see Psychological egoism#Criticism).
Types of games
Cooperative or non-cooperative
A game is cooperative if the players are able to form binding commitments; for instance, the legal system may require them to adhere to their promises. In noncooperative games, such binding agreements are not possible.
It is often assumed that communication among players is allowed in cooperative games but not in noncooperative ones. This classification by two binary criteria has been rejected (Harsanyi 1974).
Of the two types of games, noncooperative games are able to model situations in the finest detail, producing more precise results. Cooperative games focus on the game at large. Considerable efforts have been made to link the two approaches. The so-called Nash programme has already established many of the cooperative solutions as noncooperative equilibria.
Hybrid games contain cooperative and non-cooperative elements. For instance, coalitions of players are formed in a cooperative game, but these play in a non-cooperative fashion.
Symmetric and asymmetric
|   | E    | F    |
| E | 1, 2 | 0, 0 |
| F | 0, 0 | 1, 2 |
An asymmetric game
A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If the identities of the players can be changed without changing the payoff to the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. Some scholars would consider certain asymmetric games as examples of these games as well. However, the most common payoffs for each of these games are symmetric.
The most commonly studied asymmetric games are games where the strategy sets are not identical for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured above is asymmetric despite having identical strategy sets for both players.
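The symmetry test described above can be checked mechanically. The following is a minimal sketch (with hand-entered payoff matrices, not from any library): a two-player game is symmetric exactly when the column player's payoff matrix is the transpose of the row player's, so that swapping player identities leaves each strategy's payoff unchanged.

```python
def is_symmetric(row_payoffs, col_payoffs):
    """Return True if col_payoffs equals the transpose of row_payoffs."""
    n = len(row_payoffs)
    return all(
        row_payoffs[i][j] == col_payoffs[j][i]
        for i in range(n)
        for j in range(n)
    )

# The game pictured above: identical strategy sets {E, F}, but asymmetric,
# because the column player earns 2 where the row player earns 1.
row = [[1, 0], [0, 1]]
col = [[2, 0], [0, 2]]
print(is_symmetric(row, col))   # False

# The standard prisoner's dilemma, by contrast, is symmetric.
pd_row = [[-1, -3], [0, -2]]
pd_col = [[-1, 0], [-3, -2]]
print(is_symmetric(pd_row, pd_col))  # True
```

The check compares each entry of the column player's matrix against the mirrored entry of the row player's, which is the 2×2 (and general n×n) form of the "identities can be swapped" condition.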
Zero-sum and non-zero-sum
|   | A     | B     |
| A | –1, 1 | 3, –3 |
| B | 0, 0  | –2, 2 |
A zero-sum game
Zero-sum games are a special case of constant-sum games, in which choices by players can neither increase nor decrease the available resources. In zero-sum games the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
Many games studied by game theorists (including the famous prisoner's dilemma) are non-zero-sum games, because some outcomes have net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any game into a (possibly asymmetric) zero-sum game by adding an additional dummy player (often called "the board"), whose losses compensate the players' net winnings.
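Both the zero-sum property and the dummy-player ("board") construction described above are easy to verify mechanically. A minimal sketch, with payoff tables entered by hand for the zero-sum game pictured above and for the prisoner's dilemma:

```python
def is_zero_sum(payoffs):
    """payoffs maps each strategy profile to a tuple of player payoffs."""
    return all(sum(p) == 0 for p in payoffs.values())

def add_board_player(payoffs):
    """Append a dummy player whose payoff is minus the others' total."""
    return {profile: p + (-sum(p),) for profile, p in payoffs.items()}

# The zero-sum game pictured above.
zero_sum = {
    ("A", "A"): (-1, 1), ("A", "B"): (3, -3),
    ("B", "A"): (0, 0),  ("B", "B"): (-2, 2),
}
print(is_zero_sum(zero_sum))  # True

# The prisoner's dilemma is not zero-sum, but adding "the board"
# makes the three-player version zero-sum.
pd = {
    ("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
    ("D", "C"): (0, -3),  ("D", "D"): (-2, -2),
}
print(is_zero_sum(pd))                    # False
print(is_zero_sum(add_board_player(pd)))  # True
```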
Simultaneous and sequential
Simultaneous games are games where the players move simultaneously, or, if they do not move simultaneously, the later players are unaware of the earlier players' actions (making the moves effectively simultaneous). Sequential games (or dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while not knowing which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, and extensive form is used to represent sequential ones; although this isn't a strict rule in a technical sense.
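When a sequential game with perfect information is written in extensive form, it can be solved by backward induction: work up from the leaves, letting the player who moves at each node pick the action whose subgame pays them the most. The sketch below uses a made-up tree encoding and a toy entry game for illustration; it is not a standard library API.

```python
# A leaf is a payoff tuple (p0, p1, ...); an internal node is a
# list [player, {action: child}].

def backward_induction(node):
    """Return (payoffs, action path) for the subgame rooted at node."""
    if isinstance(node, tuple):          # leaf: just the payoffs
        return node, []
    player, children = node
    # The player to move picks the action whose subgame pays them most.
    best_action, (best_payoffs, best_path) = max(
        ((a, backward_induction(child)) for a, child in children.items()),
        key=lambda item: item[1][0][player],
    )
    return best_payoffs, [best_action] + best_path

# A toy entry game: player 0 enters or stays out; if 0 enters, player 1
# (who observes that move) chooses to fight or accommodate.
game = [0, {
    "out": (0, 2),
    "enter": [1, {"fight": (-1, -1), "accommodate": (1, 1)}],
}]

payoffs, path = backward_induction(game)
print(path, payoffs)   # ['enter', 'accommodate'] (1, 1)
```

Because player 1 would accommodate rather than fight after entry, player 0 prefers entering (payoff 1) to staying out (payoff 0): the path found is the subgame perfect equilibrium play.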
Perfect information and imperfect information
An important subset of sequential games consists of games of perfect information. A game is one of perfect information if all players know the moves previously made by all other players. Thus, only sequential games can be games of perfect information, since in simultaneous games not every player knows the actions of the others. Most games studied in game theory are imperfect-information games, although there are some interesting examples of perfect-information games, including the ultimatum game and centipede game. Perfect-information games also include chess, go, mancala, and arimaa.
Perfect information is often confused with complete information, which is a similar concept. Complete information requires that every player know the strategies and payoffs of the other players but not necessarily the actions.
Infinitely long games
Games, as studied by economists and real-world game players, are generally finished in a finite number of moves. Pure mathematicians are not so constrained, and set theorists in particular study games that last for an infinite number of moves, with the winner (or other payoff) not known until after all those moves are completed.
The focus of attention is usually not so much on what is the best way to play such a game, but simply on whether one or the other player has a winning strategy. (It can be proven, using the axiom of choice, that there are games—even with perfect information, and where the only outcomes are "win" or "lose"—for which neither player has a winning strategy.) The existence of such strategies, for cleverly designed games, has important consequences in descriptive set theory.
Discrete and continuous games
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
Differential games such as the continuous pursuit and evasion game are continuous games.
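The Cournot competition mentioned above shows how a continuous strategy set is handled in practice. The sketch below uses illustrative demand and cost parameters (inverse demand P = a − (q1 + q2), constant unit cost c, both made up for the example) and locates the equilibrium by iterated best responses:

```python
a, c = 10.0, 1.0   # illustrative demand intercept and unit cost

def best_response(q_other):
    """Profit-maximizing quantity against the rival's output q_other.

    Profit q * (a - q - q_other - c) is maximized at
    q = (a - c - q_other) / 2, clipped at zero.
    """
    return max(0.0, (a - c - q_other) / 2)

# Iterated best responses converge to the Cournot-Nash equilibrium,
# where each firm produces (a - c) / 3.
q1 = q2 = 0.0
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 6), round(q2, 6))   # both approach (a - c) / 3 = 3.0
```

Because the best-response map is a contraction here, alternating responses home in on the fixed point; the closed-form equilibrium (a − c) / 3 can be read off directly by solving q = (a − c − q) / 2.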
One-player and many-player games
Individual decision problems are sometimes considered "one-player games". While these situations are not game theoretical, they are modeled using many of the same tools within the discipline of decision theory. It is only with two or more players that a problem becomes game theoretical. A randomly acting player who makes "chance moves", also known as "moves by nature", is often added (Osborne & Rubinstein 1994). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game. Games with an arbitrary, but finite, number of players are often called n-person games (Luce & Raiffa 1957).
Metagames
These are games whose play consists of developing the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
History
The first known discussion of game theory occurred in a letter written by James Waldegrave in 1713. In this letter, Waldegrave provides a minimax mixed strategy solution to a two-person version of the card game le Her.
James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.[11][12]
It was not until the publication of Antoine Augustin Cournot's Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth) in 1838 that a general game theoretic analysis was pursued. In this work Cournot considers a duopoly and presents a solution that is a restricted version of the Nash equilibrium.
Although Cournot's analysis is more general than Waldegrave's, game theory did not really exist as a unique field until John von Neumann published a series of papers in 1928. While the French mathematician Émile Borel did some earlier work on games, von Neumann can rightfully be credited as the inventor of game theory. Von Neumann was a brilliant mathematician whose work ranged from set theory, to calculations key to the development of the atomic and hydrogen bombs, to the development of computers. His work in game theory culminated in the 1944 book Theory of Games and Economic Behavior, written with Oskar Morgenstern. This profound work contains the method for finding mutually consistent solutions for two-person zero-sum games. During this time period, work on game theory was primarily focused on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.
In 1950, the first discussion of the prisoner's dilemma appeared, and an experiment was undertaken on this game at the RAND corporation. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies, known as Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. This equilibrium is sufficiently general to allow for the analysis of non-cooperative games in addition to cooperative ones.
Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications of Game theory to philosophy and political science occurred during this time.
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium (later he would introduce trembling hand perfection as well). In 1967, John Harsanyi developed the concepts of complete information and Bayesian games. Nash, Selten and Harsanyi became Economics Nobel Laureates in 1994 for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge[13] were introduced and analyzed.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing an equilibrium coarsening, correlated equilibrium, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Roger Myerson, together with Leonid Hurwicz and Eric Maskin, was awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory." Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict (Myerson 1997).
See also
- Combinatorial game theory
- Glossary of game theory
- List of games in game theory
- Quantum game theory
- Self-confirming equilibrium
Notes
- ^ Ross, Don. "Game Theory". The Stanford Encyclopedia of Philosophy (Spring 2008 Edition). Edward N. Zalta (ed.). http://plato.stanford.edu/archives/spr2008/entries/game-theory/. Retrieved 2008-08-21.
- ^ Experimental work in game theory goes by many names; experimental economics, behavioral economics, and behavioural game theory are several. For a recent discussion on this field see Camerer (2003).
- ^ Evolutionary Game Theory (Stanford Encyclopedia of Philosophy)
- ^ a b Biological Altruism (Stanford Encyclopedia of Philosophy)
- ^ Algorithmic Game Theory. http://www.cambridge.org/journals/nisan/downloads/Nisan_Non-printable.pdf.
- ^ E. Ullmann Margalit, The Emergence of Norms, Oxford University Press, 1977. C. Bicchieri, The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, 2006.
- ^ "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge", Erkenntnis 30, 1989: 69–85. See also Rationality and Coordination, Cambridge University Press, 1993.
- ^ The Dynamics of Rational Deliberation, Harvard University Press, 1990.
- ^ "Knowledge, Belief, and Counterfactual Reasoning in Games." In Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, eds., The Logic of Strategy. New York: Oxford University Press, 1999.
- ^ For a more detailed discussion of the use of Game Theory in ethics see the Stanford Encyclopedia of Philosophy's entry game theory and ethics.
- ^ James Madison, Vices of the Political System of the United States, April 1787.
- ^ Jack Rakove, "James Madison and the Constitution", History Now, Issue 13, September 2007.
- ^ Although common knowledge was first discussed by the philosopher David Lewis in his dissertation (and later book) Convention in the late 1960s, it was not widely considered by economists until Robert Aumann's work in the 1970s.
References
Textbooks and general references
- Aumann, Robert J. (1987), "game theory", The New Palgrave: A Dictionary of Economics, 2, pp. 460–82.
- (2008). The New Palgrave Dictionary of Economics, 2nd Edition:
- "game theory" by Robert J. Aumann, Abstract.
- "game theory in economics, origins of," by Robert Leonard. Abstract.
- "behavioural economics and game theory" by Faruk Gul. Abstract.
- Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 978-0-262-04169-0. Suitable for undergraduate and business students.
- Fernandez, L F.; Bierman, H S. (1998), Game theory with economic applications, Addison-Wesley, ISBN 978-0-201-84758-1. Suitable for upper-level undergraduates.
- Fudenberg, Drew; Tirole, Jean (1991), Game theory, MIT Press, ISBN 978-0-262-06141-4. Acclaimed reference text, public description.
- Gibbons, Robert D. (1992), Game theory for applied economists, Princeton University Press, ISBN 978-0-691-00395-5. Suitable for advanced undergraduates.
- Published in Europe as Robert Gibbons (2001), A Primer in Game Theory, London: Harvester Wheatsheaf, ISBN 978-0-7450-1159-2.
- Gintis, Herbert (2000), Game theory evolving: a problem-centered introduction to modeling strategic behavior, Princeton University Press, ISBN 978-0-691-00943-8
- Green, Jerry R.; Mas-Colell, Andreu; Whinston, Michael D. (1995), Microeconomic theory, Oxford University Press, ISBN 978-0-19-507340-9. Presents game theory in formal way suitable for graduate level.
- Hansen, Pelle G.; Hendricks, Vincent F., eds. (2007), Game Theory: 5 Questions, New York, London: Automatic Press / VIP, ISBN 9788799101344. Snippets from interviews.
- Isaacs, Rufus (1999), Differential Games: A Mathematical Theory With Applications to Warfare and Pursuit, Control and Optimization, New York: Dover Publications, ISBN 978-0-486-40682-4
- Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary Introduction, San Rafael, CA: Morgan & Claypool Publishers, ISBN 978-1-598-29593-1, http://www.gtessentials.org. An 88-page mathematical introduction; free online at many universities.
- Miller, James H. (2003), Game theory at work: how to use game theory to outthink and outmaneuver your competition, New York: McGraw-Hill, ISBN 978-0-07-140020-6. Suitable for a general audience.
- Myerson, Roger B. (1991), Game theory: analysis of conflict, Harvard University Press, ISBN 978-0-674-34116-6
- Osborne, Martin J. (2004), An introduction to game theory, Oxford University Press, ISBN 978-0-19-512895-6. Undergraduate textbook.
- Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3. A modern introduction at the graduate level.
- Poundstone, William (1992), Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb, Anchor, ISBN 978-0-385-41580-4. A general history of game theory and game theoreticians.
- Rasmusen, Eric (2006), Games and Information: An Introduction to Game Theory (4th ed.), Wiley-Blackwell, ISBN 978-1-4051-3666-2, http://www.rasmusen.org/GI/index.html
- Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7, http://www.masfoundations.org. A comprehensive reference from a computational perspective; downloadable free online.
- Williams, John Davis (1954) (PDF), The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy, Santa Monica: RAND Corp., ISBN 9780833042224, http://www.rand.org/pubs/commercial_books/2007/RAND_CB113-1.pdf Praised primer and popular introduction for everybody, never out of print.
Historically important texts
- Aumann, R.J. and Shapley, L.S. (1974), Values of Non-Atomic Games, Princeton University Press
- Cournot, A. Augustin (1838), "Recherches sur les principes mathématiques de la théorie des richesses", Libraire des sciences politiques et sociales (Paris: M. Rivière & C.ie)
- Edgeworth, Francis Y. (1881), Mathematical Psychics, London: Kegan Paul
- Fisher, Ronald (1930), The Genetical Theory of Natural Selection, Oxford: Clarendon Press
- reprinted edition: R.A. Fisher ; edited with a foreword and notes by J.H. Bennett. (1999), The Genetical Theory of Natural Selection: A Complete Variorum Edition, Oxford University Press, ISBN 978-0-19-850440-5
- Luce, R. Duncan; Raiffa, Howard (1957), Games and decisions: introduction and critical survey, New York: Wiley
- reprinted edition: R. Duncan Luce ; Howard Raiffa (1989), Games and decisions: introduction and critical survey, New York: Dover Publications, ISBN 978-0-486-65943-5
- Maynard Smith, John (1982), Evolution and the theory of games, Cambridge University Press, ISBN 978-0-521-28884-2
- Smith, John Maynard; Price, George R. (1973), "The logic of animal conflict", Nature 246: 15–18
- Nash, John (1950), "Equilibrium points in n-person games", Proceedings of the National Academy of Sciences of the United States of America 36 (1): 48–49, doi:, http://www.pnas.org/cgi/search?sendit=Search&pubdate_year=&volume=&firstpage=&DOI=&author1=nash&author2=&title=equilibrium&andorexacttitle=and&titleabstract=&andorexacttitleabs=and&fulltext=&andorexactfulltext=and&fmonth=Jan&fyear=1915&tmonth=Feb&tyear=2008&fdatedef=15+January+1915&tdatedef=6+February+2008&tocsectionid=all&RESULTFORMAT=1&hits=10&hitsbrief=25&sortspec=relevance&sortspecbrief=relevance
- Shapley, L.S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H.W. Kuhn and A.W. Tucker (eds.)
- Shapley, L.S. (1953), Stochastic Games, Proceedings of National Academy of Science Vol. 39, pp. 1095–1100.
- von Neumann, John (1928), "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen 100 (1): 295–320, http://www.digizeitschriften.de/home/services/pdfterms/?ID=363311
- von Neumann, John; Morgenstern, Oskar (1944), Theory of games and economic behavior, Princeton University Press
- Zermelo, Ernst (1913), "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels", Proceedings of the Fifth International Congress of Mathematicians 2: 501–4