NAJ Economics
http://www.najecon.org/
Copyright (c) 2011, Creative Commons (http://creativecommons.org/licenses/by-nd/2.0/)
Editor contact: david@dklevine.com
Feed generated 2011-01-27.

Overcoming Ideological Bias in Elections by Vijay Krishna and John Morgan
Reviewed by David K. Levine
http://www.najecon.org/v16.htm
Imagine that voters care about the quality of the candidates but also lean ideologically toward one of the two candidates. With sincere voting the larger party wins, which is inefficient if its members do not care very much and the minority party does. Now take an off-the-rack voter participation model with stochastic participation costs and many strategic voters. Because the party that cares more is more willing to incur participation costs, it has a better chance of winning. So much so that, amazingly, majority voting gets the outcome exactly right.
Posted 2010-09-28.

Competitive Markets without Commitment by Nick Netzer and Florian Scheuer
Reviewed by Arthur Robson
http://www.najecon.org/v15.htm
This paper shows that a problem that seems very awkward for a benevolent planner is at least ameliorated by an unsophisticated invisible hand. Consider a contract between a principal and a risk-averse agent who is also subject to moral hazard, so there is a tension between incentives and risk-sharing. In the competitive-market counterpart, with no commitment on either side, the equilibrium outcome Pareto dominates the single-principal outcome.
Posted 2010-03-22.

Judicial Precedent as a Dynamic Rationale for Axiomatic Bargaining Theory by Marc Fleurbaey and John Roemer
Reviewed by Debraj Ray
http://www.najecon.org/v15.htm
Suppose that an arbitrator must allocate payoffs at every date from a freshly drawn two-person bargaining problem. She pays a penalty if her allocation at some date violates one of the Nash axioms relative to her past behavior. (Penalties are lower for inconsistencies with the more distant past.) Conditions are given for penalty-minimizing behavior to converge to the Nash bargaining solution over time.
This working paper opens (by example) a new line of research that might apply more generally to axiomatic solution concepts. If successful, it would connect judicial precedent to axiomatic reasoning.
Posted 2010-02-01.

Revealed Attention by Yusufcan Masatlioglu, Daisuke Nakajima and Erkut Ozbay
Reviewed by Ran Spiegler
http://www.najecon.org/v15.htm
How can we infer preferences from choices, when the decision maker may be unaware of some of the feasible alternatives? This paper enriches the standard model of rational choice by assuming that the decision maker is characterized by two unobservables: her preferences, and an "attention filter" which reduces every choice set to a "consideration set", to which preferences are applied. The paper addresses the problem of identifying these two components from observed choices.
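To fix ideas, here is a minimal sketch of the two unobservables, with a hypothetical preference and attention filter of my own invention (nothing below is the authors' notation or example): the decision maker applies her preference only to the consideration set, which can produce choice reversals that full-attention rationality cannot.

```python
# Illustrative sketch of choice through an attention filter (my own toy
# example, not the paper's). The decision maker has a fixed preference but
# only considers Gamma(S), a subset of the feasible set S.

def choose(S, preference, attention_filter):
    """Pick the preference-best element of the consideration set Gamma(S)."""
    considered = attention_filter(frozenset(S))
    return min(considered, key=preference.index)  # earlier in the list = better

preference = ["x", "y", "z"]  # x preferred to y preferred to z

def gamma(S):
    # Hypothetical attention filter: x goes unnoticed when all three options
    # are present. (This satisfies the paper's filter property: removing an
    # unconsidered alternative leaves the consideration set unchanged.)
    return S - {"x"} if S == {"x", "y", "z"} else S

print(choose({"x", "y", "z"}, preference, gamma))  # y: x is not considered
print(choose({"x", "y"}, preference, gamma))       # x: considered, so chosen
```

The pair of choices (y from the triple, x from the pair) violates WARP, yet is fully consistent with a stable preference plus limited attention — exactly the kind of data the identification exercise works from.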
Posted 2009-12-01.

Choice by Sequential Procedures by Jose Apesteguia and Miguel Ballester
Reviewed by Ran Spiegler
http://www.najecon.org/v15.htm
Consider a decision maker who employs a sequence of incomplete, acyclic preference relations to eliminate alternatives from the choice set, until she reaches a unique element, which is the one she ends up choosing. The paper axiomatizes this procedure and relates it to other methods of rationalizing choice behavior. It also extends a result due to Manzini and Mariotti (2007) showing that a sequentially rationalizable choice function violates IIA only if the preference relation revealed by choices from pairs contains cycles.
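A small sketch of such a sequential elimination procedure (the rationales below are my own illustrative ones, not an example from the paper): each "rationale" is a set of ordered pairs (a, b) meaning a eliminates b when both survive.

```python
# Toy implementation of choice by sequential elimination (assumed notation,
# not the authors'): rationales are applied in a fixed order, each removing
# the alternatives it dominates within the current survivor set.

def sequential_choice(S, rationales):
    survivors = set(S)
    for R in rationales:
        dominated = {b for (a, b) in R if a in survivors and b in survivors}
        survivors -= dominated
    assert len(survivors) == 1, "procedure should isolate a unique choice"
    return survivors.pop()

R1 = {("x", "z")}              # first rationale: incomplete, x eliminates z
R2 = {("y", "x"), ("z", "y")}  # second rationale finishes the job

print(sequential_choice({"x", "y", "z"}, [R1, R2]))  # y
print(sequential_choice({"y", "z"}, [R1, R2]))       # z
```

Note what the toy data reveal: pairwise choices give y over x, x over z, and z over y — a cycle — and the procedure picks y from the full set but z from {y, z}, an IIA violation, in line with the result the review describes.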
Posted 2009-12-01.

Democratic Peace and Electoral Accountability by Paola Conconi, Nicolas Sahuguet and Maurizio Zanardi
Reviewed by Matthew O. Jackson
http://www.najecon.org/v15.htm
This paper sheds very interesting new light on the so-called ``democratic peace,'' which refers to the fact that it is extremely rare for two democracies to go to war with each other. The authors show that this is largely correlated with whether or not a democratic leader faces reelection, and that in fact democratic leaders facing term limits act like autocrats. Thus, incentives for reelection play a large part in explaining the democratic peace.
Posted 2009-11-21.

Bayesian Persuasion by Emir Kamenica and Matthew Gentzkow
Reviewed by Arthur Robson
http://www.najecon.org/v14.htm
A DA who always wishes to convict structures a case for a judge who wishes to do the right thing. The DA can select the forensic tests to perform -- which must be truthfully reported -- such that the judge will rationally convict a larger fraction of those on trial than are actually guilty. This paper breathes new life into Aumann and Maschler's results for repeated games with incomplete information.
Posted 2009-10-14.

Overconfidence by Jean-Pierre Benoit and Juan Dubra
Reviewed by David K. Levine
http://www.najecon.org/v14.htm
It has been firmly established in the experimental laboratory and in survey data that Garrison Keillor is right and everybody thinks that they are above average. It turns out that in a population of rational people with rational expectations and noisy data, this is exactly what theory predicts.
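A toy numeric version of the logic, with my own invented numbers (not the authors' example): three equally likely driver types with different accident probabilities, where a clean record is good news about skill.

```python
# Rational "overconfidence" sketch: most agents can rationally believe they
# are more likely than not the best type. Illustrative numbers, not the
# paper's calibration.

from fractions import Fraction as F

types = [1, 2, 3]                       # 3 = highest skill; median type is 2
prior = {t: F(1, 3) for t in types}
p_clean = {1: F(1, 5), 2: F(3, 5), 3: F(1, 1)}  # P(no accident | type)

# Fraction of the population with a clean record, and the posterior a
# rational Bayesian with a clean record holds over her own type.
share_clean = sum(prior[t] * p_clean[t] for t in types)
posterior = {t: prior[t] * p_clean[t] / share_clean for t in types}

print(share_clean)    # 3/5 of drivers have a clean record
print(posterior[3])   # each of them assigns probability 5/9 to the top type
```

So 60% of the population each rationally assigns probability 5/9 > 1/2 to being the best type: a majority "thinks it is above average" with no irrationality anywhere.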
Posted 2008-05-05.

Kludged by Jeffrey C. Ely
Reviewed by David K. Levine
http://www.najecon.org/v14.htm
Probably we should try to avoid the editors reviewing each other's papers - but this one I can't resist. Anyone who has ever written computer code realizes that patches accumulate, and as they accumulate it gets harder to write additional code. Eventually programmers tear the thing apart and start over again, sometimes with good results, sometimes (Microsoft Vista) with catastrophic ones. Evolution of biological organisms is limited to patches - evolutionary processes cannot start all over again at the bottom. This paper works out an explicit evolutionary model. Even with large mutations occurring infinitely often, behavior can be perpetually suboptimal. (My own thought: evolution produced computer programmers, who can start over again.)
Posted 2008-04-03.

The Optimal Multi-Stage Contest by Qiang Fu and JingFeng Lu
Reviewed by Jeff Ely
http://www.najecon.org/v14.htm
A Principal wants to maximize productive effort from a group of agents. The principal has a fixed budget to be allocated as prizes in some contest. This paper considers a general set of contests that potentially involve multiple knockout stages and analyzes the optimal multi-stage structure as well as the allocation of prizes over time. The effort-maximizing contest eliminates a single contestant in each period until two remain in the "finale" and reserves all prize money for the winner of the finale.
Posted 2007-08-17.

Equilibrium Degeneracy and Reputation Effects by Eduardo Faingold and Yuliy Sannikov
Reviewed by David K. Levine
http://www.najecon.org/v14.htm
This paper examines reputation in continuous-time models where a noisy signal of the long-run player's behavior follows a diffusion process. Without "KWMR" types the equilibrium is completely degenerate and the long-run player is limited to the static Nash equilibrium payoff. With "KWMR" types the equilibrium is non-degenerate. The key idea is that the length of the effective horizon for the "audience" of short-run player(s) is critical. Without "KWMR" types, in continuous time, reaction to the long-run player must occur continuously, and the diffusion is very noisy over short time intervals. With "KWMR" types longer-term information matters - a reputation is not won in a day - and over a longer period of time the diffusion is much less noisy.
Posted 2007-07-03.

Parental Guidance and Supervised Learning by Alessandro Lizzeri and Marciano Siniscalchi
Reviewed by Matthew O. Jackson
http://www.najecon.org/v13.htm
The authors examine learning in a situation where a parent can guide a child's learning by intervening to eliminate mistakes. The parent faces a tradeoff between improving short-term well-being and slowing learning. While there are large literatures in psychology and game theory on learning, examining such guided learning from a formal perspective provides interesting insights regarding the ability of the child, the discount rate, and the parent's intervention.
Posted 2007-06-02.

Contractually Stable Networks by Jean-Francois Caulier, Ana Mauleon and Vincent Vannetelbosch
Reviewed by Matthew O. Jackson
http://www.najecon.org/v13.htm
The authors provide a model where utility or productive value depends on how players are partitioned into communities as well as how they are connected in a network. The results extend notions of stability and value allocations. While this is still preliminary work, the setting will help our understanding of how individuals maintain relationships of different types at the same time, and how such layered relationships interact in determining social structure.
Posted 2007-06-02.

Private Monitoring with Infinite Histories by Christopher Phelan and Andrzej Skrzypacz
Reviewed by Matthew O. Jackson
http://www.najecon.org/v13.htm
This paper uses a clever formulation of a repeated game with private monitoring to develop new techniques for characterizing equilibria. The authors examine time sequences that are infinite in both directions, so there is no ``starting period.'' This helps in formulating how strategies depend on past history, as it allows for a stationarity not possible in games with a starting period, and allows the authors to examine a class of equilibria playable by finite automata. It also provides new results on how coordination on past histories maps into correlated equilibria.
Posted 2007-06-02.

Mechanism Design with Private Communication by Vianney Dequiedt and David Martimort
Reviewed by Matthew O. Jackson
http://www.najecon.org/v13.htm
This paper considers a twist on the familiar principal-multiple agent mechanism design environment. Agents never see any of the other agents' messages to the principal, other than what the principal tells them, and the principal can lie. While one might think this is just a simple enrichment of the usual setting, it turns out to have some important implications. It limits the ability of the principal to make use of information from one agent that is correlated with the type of another agent, because of incentive compatibility constraints. This restores continuity of mechanism design in the information structure, and small amounts of correlation no longer have drastic effects. The mechanisms also have some intuitive features, and take a simpler form than in settings where all messages are verifiable by all agents.
Posted 2007-06-02.

A Dynamic Theory of Public Spending, Taxation and Debt by Marco Battaglini and Stephen Coate
Reviewed by Jon Levin
http://www.najecon.org/v13.htm
This impressive paper integrates a number of important ideas from public finance, political economy, and macroeconomics. Robert Barro argued nearly thirty years ago that government should use public debt to smooth distortionary taxation over time. Battaglini and Coate depart from Barro's benevolent planner and model the voting process through which taxes, public good provision, pork spending and government borrowing are chosen. They provide a sharp characterization of equilibrium spending, taxation and debt management. Among other results, they show that in times of crisis the government will eschew pork spending and finance public goods with bonds that are paid off slowly as the crisis recedes.
Posted 2006-05-11.

Sequential Innovation, Patents, and Innovation by James Bessen and Eric Maskin
Reviewed by David K. Levine
http://www.najecon.org/v12.htm
Based on standard theory there should be little or no innovation without patents. Surprisingly, there is little if any empirical evidence that there is more innovation when patenting is possible than when it is not. This paper provides a theory of why this may be the case. It starts with the common observation that patents may inhibit innovation by raising the cost of downstream innovations that build on existing ideas. This is captured in an elegant model of sequential innovation that directs our attention to the features of the market and technology that make patent systems more or less desirable. Of particular importance is the interplay between two forces. On the one hand, private information about the value of a patent prevents existing patent holders from engaging in efficient licensing. On the other hand, if little profit is dissipated through competition then patents provide little additional incentive for innovation.
Posted 2006-05-10.

A Theory of Momentum in Sequential Voting by Nageeb Ali and Navin Kartik
Reviewed by Jeff Ely
http://www.najecon.org/v12.htm
What explains the intense competition for small early states like New Hampshire and Iowa in US Presidential primary elections? Presumably, other things equal, winning an early contest increases the chance of winning bigger, later states. This paper explains how such momentum effects can arise in a strict equilibrium of a sequential common-value election. Voters update their beliefs about the value of candidates based on the preceding history. Momentum is similar to an informational cascade.
Posted 2006-05-10.

Experientia Docet: Professionals Play Minimax in Laboratory Experiments by Ignacio Palacios-Huerta and Oscar Volij
Reviewed by Matthew O. Jackson
http://www.najecon.org/v12.htm
This is a provocative experimental paper that contrasts the play of professional soccer players with college students in zero-sum games. The paper is interesting not only because of the finding that the professionals play much closer to equilibrium than the college students in a game that replicates a soccer penalty kick, but also because it shows that the professionals play closer to equilibrium than the college students when they play a zero-sum game that none of the subjects is likely to be familiar with. This not only tells us how important it is to carefully define ``experience'' in experimental settings, but also provides some insight into the transfer of knowledge across strategic settings.
Posted 2006-01-13.

Bayesian Consistent Prior Selection by Christopher Chambers and Takashi Hayashi
Reviewed by Jeff Ely
http://www.najecon.org/v12.htm
The paper is about rules for selecting priors from sets. Suppose you are given some information which implies that the prior belongs to some (compact, convex) set of probability distributions. Assume you have a rule that selects a prior from any such set. (The paper extends to rules that select subsets.)
Consider two thought experiments. First, suppose you are given a subset F and your rule selects a prior p from it.
Next, suppose there is some additional piece of data d, and suppose you are given the subset F* where F* is obtained by updating, prior by prior, the elements of F based on d.
It seems natural to ask that the prior selected from F* is equal to the posterior derived from p based on d. Essentially this would be assuming that your rule for selecting priors is invariant to the order in which you receive information.
The striking main result: there is no rule satisfying this condition.
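The condition is easy to see failing for a natural candidate rule. Below is a small numerical illustration of my own (a finite set of priors for simplicity, whereas the paper works with compact convex sets, and a maximum-entropy selection rule as a stand-in, not anything from the paper): selecting then updating disagrees with updating then selecting.

```python
# Check whether a max-entropy selection rule commutes with Bayesian updating.
# Illustrative numbers only; the rule and the set are my own stand-ins.

from math import log

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

def bayes(p, like):
    """Posterior from prior p and state-by-state likelihoods of datum d."""
    joint = [pi * li for pi, li in zip(p, like)]
    s = sum(joint)
    return tuple(x / s for x in joint)

def select(priors):
    return max(priors, key=entropy)  # candidate rule: pick max-entropy prior

F0 = [(0.25, 0.25, 0.5), (0.4, 0.3, 0.3)]  # candidate priors over 3 states
like = (1.0, 0.5, 0.25)                    # likelihood of datum d per state

route1 = bayes(select(F0), like)           # select first, then update
route2 = select([bayes(p, like) for p in F0])  # update every prior, then select

print(route1 == route2)                    # False: the two routes disagree
```

Here the rule picks (0.4, 0.3, 0.3) from the original set, but after prior-by-prior updating it picks the posterior of the *other* prior, so selection is not invariant to the order in which information arrives.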
Posted 2005-11-02.

Optimal Menu of Menus with Self-Control Preferences by Susanna Esteban and Eiichi Miyagawa
Reviewed by Jeff Ely
http://www.najecon.org/v12.htm
In the standard model of monopoly pricing with incomplete information, the firm offers a menu of price-quantity pairs. On the other hand, many real-world tariff schedules consist of a menu of *non-linear* prices. For example, most cell-phone service plans provide initial minutes at a low marginal price (usually zero) and further quantity at a high price. This paper shows how this is necessarily part of an optimal tariff schedule for consumers who have self-control preferences. By adding a steep price for extra minutes to plans targeted for low-value consumers, the monopoly relaxes the incentive constraint for high-value consumers. This is because high-value consumers foresee that if they select the low plan they will be tempted to use extra minutes and pay the high price.
Posted 2005-09-24.

Strategic Experimentation in Networks by Yann Bramoulle and Rachel Kranton
Reviewed by Matthew O. Jackson
http://www.najecon.org/v11.htm
Bramoulle and Kranton study the play of local public goods games when players are linked by a network. Players derive payoffs from their own and their immediate neighbors' actions, and the authors discuss applications to experimentation where players learn and benefit from the actions of immediate neighbors, but not indirect neighbors. Nevertheless, indirect neighbors' play affects direct neighbors' choices, as actions are strategic substitutes. While tractability is a challenge, the authors are able to deduce some interesting patterns of behavior and specialization as a function of network architecture.
Posted 2005-09-09.

On the Existence of Monotone Pure Strategy Equilibria in Bayesian Games by Philip J. Reny
Reviewed by Matthew O. Jackson
http://www.najecon.org/v11.htm
Phil Reny provides new techniques for proving existence of monotone pure strategy equilibria in Bayesian games with multidimensional types and actions. The paper clarifies the role of single-crossing, and uses a weaker version than previously employed. It leaves open questions, as it relies on a continuity assumption that, while allowing for a wide variety of games with finite strategy sets, does not admit discontinuous games with continuum action spaces, as in many auction models. Nevertheless, the set of games covered is of substantial interest and more general than in previous results, the arguments deepen our understanding of what is needed for existence, and the use of contractibility is clever and looks likely to be useful beyond the current work.
Posted 2005-09-09.

Noise, Information and the Favorite-Longshot Bias by Marco Ottaviani and Peter Sorenson
Reviewed by Jeff Ely
http://www.najecon.org/v11.htm
In pari-mutuel (e.g. horse race) betting, the payout odds on a horse are determined by the fraction of the total betting pool wagered on the horse. There is a well-documented regularity, the favorite-longshot bias: the odds for longshots overstate, and the odds for favorites understate, the true probability of winning. This paper provides a simple and elegant explanation based on a model of privately informed bettors. Because the odds are determined only after betting closes, bettors do not know the payout odds when they bet. After the betting closes the revelation of odds aggregates information, but by then bets cannot be changed. In particular, those who bet on the horse which turned out to be the longshot realize ex post that the probability of winning is lower than they thought, hence the bias.
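A stylized back-of-envelope version of the mechanism, with invented numbers of my own (this is a caricature, not the paper's model): because each bettor acts on a noisy private signal before the odds are known, pool shares are a muted version of the truth.

```python
# Favorite-longshot bias in a two-horse caricature. Horse A truly wins with
# probability 0.8; each bettor sees a private signal that is correct with
# probability 0.7 and bets $1 on the horse the signal favors. Numbers are
# illustrative assumptions, not from the paper.

p_win_A = 0.8
signal_accuracy = 0.7

# Expected pool share on A = P(a bettor's signal points to A).
share_A = p_win_A * signal_accuracy + (1 - p_win_A) * (1 - signal_accuracy)
share_B = 1 - share_A

# In pari-mutuel betting a horse's pool share is its implied win probability.
print(share_A)  # about 0.62 < 0.8: the favorite's implied odds understate
print(share_B)  # about 0.38 > 0.2: the longshot's implied odds overstate
```

Private information never gets aggregated into the bets themselves, so the favorite is underbet and the longshot overbet relative to the true probabilities — the documented bias.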
Posted 2005-09-02.

Revenue Comparisons for Auctions when Bidders Have Arbitrary Types by Yeon-Koo Che and Ian Gale
Reviewed by Jon Levin
http://www.najecon.org/v11.htm
This paper introduces a clever approach to comparing the expected revenue from different auction designs, assuming bidders in each auction use symmetric equilibrium strategies. Roughly, the idea is to replace each bidder with a twin who is risk-neutral and has a single-dimensional value distribution but whose equilibrium bidding behavior would be the same. The expected revenue from the original auction must equal the expected second-highest order statistic of the twins' value distribution. This approach is used to extend the revenue equivalence theorem to discrete value distributions and to greatly generalize the revenue ranking of first and second price auctions with risk-averse bidders.
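For background, the benchmark fact the twin construction generalizes can be checked numerically in the textbook case (a standard example, not the paper's model): with symmetric risk-neutral bidders, expected revenue equals the expected second-highest value.

```python
# Monte Carlo check of revenue equivalence: two bidders, values uniform on
# [0, 1], first-price equilibrium bid v/2 vs. second-price truthful bidding.
# Both should give expected revenue 1/3 = E[second-highest value].

import random

random.seed(0)
n_draws = 200_000
fpa_revenue = 0.0
spa_revenue = 0.0
for _ in range(n_draws):
    v1, v2 = random.random(), random.random()
    fpa_revenue += max(v1, v2) / 2   # first-price: winner pays own bid v/2
    spa_revenue += min(v1, v2)       # second-price: winner pays rival's value

print(fpa_revenue / n_draws)  # both close to 1/3
print(spa_revenue / n_draws)
```

The paper's contribution is precisely to extend this logic to risk-averse bidders and rich multidimensional types, where the classical envelope argument no longer applies directly.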
Posted 2005-04-19.

Axiomatic Justification of Stable Equilibria by Srihari Govindan and Robert Wilson
Reviewed by Jeff Ely
http://www.najecon.org/v11.htm
Refinements usually judge equilibria by the plausibility of the beliefs that support them. This is an extensive form criterion. On the other hand, a traditional viewpoint is that rationality-based theories of behavior should depend only on the strategic form. This paper embraces both views. Invariance, the requirement that solutions depend only on the reduced strategic form, together with a version of backward induction, imply Kohlberg-Mertens stability.
Posted 2005-04-17.

Discounting and altruism to future decision-makers by Maria Saez-Marti and Jorgen W. Weibull
Reviewed by Ted Bergstrom
http://www.najecon.org/v10.htm
Suppose that parents have an altruistic utility function that is a weighted sum of the "selfish" utilities of their own consumption and that of each of their descendants. Each descendant in turn has such an altruistic utility function.
When can a parent's altruistic utility be written as a positively weighted linear combination of her own selfish utility and the altruistic utilities of her descendants?
The authors show that there are interesting cases where this can be done and others where it cannot. They provide remarkably crisp necessary and sufficient conditions for when it can be done.
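A three-generation toy case makes the question concrete (my own algebra on a truncated special case, not the authors' general conditions). If the parent's utility is U0 = u0 + a1*u1 + a2*u2 and the child's is V1 = u1 + a1*u2, then writing U0 = u0 + b1*V1 + b2*u2 forces b1 = a1 and b2 = a2 - a1**2, so a nonnegative weight on the grandchild exists exactly when a2 >= a1**2.

```python
# Truncated three-generation illustration (my own toy numbers): when can
# the parent's direct altruism weights (a1 on child, a2 on grandchild) be
# rewritten as weights on the descendants' own altruistic utilities?

def recursive_weights(a1, a2):
    """Matching coefficients gives b1 = a1 and b2 = a2 - a1**2."""
    return a1, a2 - a1 ** 2

print(recursive_weights(0.5, 0.30))  # b2 > 0: representation exists
print(recursive_weights(0.5, 0.20))  # b2 < 0: no nonnegative representation
```

Geometric discounting (a2 = a1**2) sits exactly on the boundary, where the grandchild gets weight zero and the familiar recursive form U0 = u0 + a1*V1 obtains.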
Posted 2005-04-17.

Contracts, Liability Restrictions and Costly Verification by Francesco Squintani
Reviewed by Ran Spiegler
http://www.najecon.org/v10.htm
This paper is an original attempt to open the black box known as "the court" - specifically, the notion of "verifiability" - in contract theory. Before playing a normal-form game, two players sign a contract that conditions on the outcome of the game. The verifiability constraint is modeled as a partition of the set of outcomes. So far, this description follows Bernheim and Whinston (AER 1998). Squintani takes a step forward and allows for non-product partitions: even when the court can verify a breach of contract, it may be unable to verify who did it. In such cases (assuming individual liability), the players may want to write a "roundabout" contract containing an explicit commitment which the players expect to be violated in equilibrium. Although such a contract is unenforceable, it may dominate all enforceable contracts. Squintani also considers non-partitional verifiability structures and examines conditions for their desirability, in terms of familiar properties of non-partitional information structures.
Posted 2005-04-06.

Simultaneous Search by Hector Chade and Lones Smith
Reviewed by Jon Levin
http://www.najecon.org/v10.htm
The authors study the following "college application" problem. A student who assigns different probabilities to getting into different colleges and faces a per-college cost of application must identify the optimal set of schools to which to submit applications. The paper develops a marginal improvement algorithm to solve for the optimal application portfolio and provides elegant characterization results. The optimal portfolio turns out to be less risky than if applications were made sequentially, but more risky than if the student picked the most individually promising schools in order, ignoring the portfolio aspect of the problem.
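A greedy marginal-improvement rule in the spirit of the algorithm can be sketched as follows (hypothetical school data; the exact algorithm and its optimality conditions are in the paper): repeatedly add the school whose marginal contribution to the portfolio's expected payoff is largest, and stop when no addition is worth the application cost.

```python
# Sketch of greedy portfolio choice for simultaneous search. Schools are
# (admission probability, payoff) pairs; the student attends the best school
# that admits her and pays cost c per application. Numbers are invented.

def portfolio_value(schools):
    """Expected payoff of a portfolio: best admit counts, better schools first."""
    value, p_all_reject = 0.0, 1.0
    for p, u in sorted(schools, reverse=True, key=lambda s: s[1]):
        value += p_all_reject * p * u   # admitted here, rejected everywhere better
        p_all_reject *= 1 - p
    return value

def greedy_portfolio(schools, c):
    chosen, remaining = [], list(schools)
    while remaining:
        gains = [(portfolio_value(chosen + [s]) - portfolio_value(chosen), s)
                 for s in remaining]
        best_gain, best = max(gains)
        if best_gain <= c:              # no school worth another application fee
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

schools = [(0.2, 10.0), (0.5, 6.0), (0.9, 3.0)]   # (admission prob, payoff)
print(greedy_portfolio(schools, c=1.2))
```

With these numbers the rule first adds the (0.5, 6.0) "match" school, then the risky (0.2, 10.0) "reach", and stops before the (0.9, 3.0) "safety" — the portfolio ends up riskier than a naive best-singletons ranking would suggest, consistent with the review's description.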
Posted 2005-03-29.

The Folk Theorem for Games with Private, Almost-Perfect Monitoring by Johannes Horner and Wojciech Olszewski
Reviewed by Jeff Ely
http://www.najecon.org/v10.htm
This completes a step toward the Folk Theorem for repeated games with private monitoring. It establishes the result for all n-player games where monitoring is "almost-perfect" (and the usual dimensionality conditions are satisfied). Previous authors had accomplished as much as possible using limited methods to avoid getting their hands dirty. The quantity of dirt on these authors' hands is impressive.
Posted 2005-03-25.

Art and the Internet: Blessing the Curse? by Patrick Legros
Reviewed by Michele Boldrin
http://www.najecon.org/v10.htm
A brilliant survey of the field that also contains original work. The Boldrin-Levine model is extended to cover markets for art. An artist can embody each original idea in m different works of art y(1), ..., y(m). The creativity of an artist is indexed by n, the number of original ideas. Each artist has a fixed capacity k of making works of art, hence he will make s=k/n embodiments of each of his n ideas. This defines his portfolio. Consumers and artists have access to reproduction technologies. The welfare theorems apply, and are used to derive supporting prices and decentralization mechanisms. Legros argues that incentive provisions for new creative ideas have little to do, at least in principle, with most of the crying about piracy and in support of copyrights.
Posted 2005-03-21.

Information Transmission with Cheap and Almost-Cheap Talk by Navin Kartik
Reviewed by Ted Bergstrom
http://www.najecon.org/v9.htm
Consider a signaling model with many equilibria, some of which are more informative than others. Suppose that truth is free, but lies are costly. Then significant information can be transmitted by talk. So some of the less informative equilibria disappear. Which equilibria remain in the limiting case as the cost of lying approaches zero? The paper shows that under "a standard condition", only the most informative equilibrium of the original model survives.
(This paper was presented at the Southwest Economic Theory Conference in March 2005.)
Posted 2005-03-17.

Revealing Preferences for Fairness in Ultimatum Bargaining by James Andreoni, Marco Castillo and Ragan Petrie
Reviewed by Jeff Ely
http://www.najecon.org/v9.htm
[Fairness and Reciprocity Special Issue] Part of the controversy over the Fehr/Schmidt (and other) calibrations revolves around the fact that existing ultimatum experiments do not generate enough information to pin down preferences. The correct response to this criticism is to get better data. This study does exactly that by allowing the responder to shrink offers as well as to accept and reject them. The underlying preferences appear to satisfy ordinary convexity and regularity assumptions, but are non-monotonic and fairly heterogeneous across individuals. Reviewed by Jeff Ely, Drew Fudenberg, and David K. Levine
Posted 2005-03-16.

The Canonical Type Space for Interdependent Preferences by Faruk Gul and Wolfgang Pesendorfer
Reviewed by Jeff Ely
http://www.najecon.org/v9.htm
[Fairness and Reciprocity Special Issue] An alternative to theories of "fairness" that seem to have a degree of arbitrariness as to what is "fair" is theories of reciprocity, in which people want to be kind or cruel based upon their perception of whether their opponent(s) are kind or cruel. Gul-Pesendorfer provide an axiomatic basis for interpersonal utility that leads to a theory of reciprocity. As an application they show how their model is consistent with data on the ultimatum game and related experiments. Reviewed by Jeff Ely, Drew Fudenberg, and David K. Levine.
Posted 2005-03-16.

Contracts, Fairness and Incentives by Ernst Fehr, Alexander Klein and Klaus M. Schmidt
Reviewed by Jeff Ely
http://www.najecon.org/v9.htm
[Fairness and Reciprocity Special Issue] This is a very recent application of the Fehr-Schmidt methodology. Experimentally and theoretically it is shown that it is better not to rely solely on either trust or incentives when designing contracts; bonus contracts that combine elements of incentives and trust do the best. Reviewed by Jeff Ely, Drew Fudenberg and David K. Levine.
Posted 2005-03-16.

Brief Reply by Avner Shaked
Reviewed by Jeff Ely
http://www.najecon.org/v9.htm
[Fairness and Reciprocity Special Issue Begins in Previous Volume] Shaked's brief reply takes a less pejorative tone and boils the debate down to one serious concern. When a researcher selects parameter values for a theoretical model consistent with data from already existing experiments, to what extent has it been shown that the model "explains" the data? Reviewed by Jeff Ely and David K. Levine
Posted 2005-03-16.

The Rhetoric of Inequity Aversion - A Reply by Ernst Fehr and Klaus Schmidt
Reviewed by Jeff Ely
http://www.najecon.org/v8.htm
[Fairness and Reciprocity Special Issue] This is the reply to the Shaked "pamphlet" by Fehr and Schmidt. It provides substantive answers to the substantive points raised by Shaked. It points out that the questions about the analytic results arise from a typo, not a substantive error, and provides additional insight into why the particular parameter values were chosen for the calibration. Reviewed by Jeff Ely and David K. Levine. [Fairness and Reciprocity Special Issue continued in next volume]
666156000000000619board4@http://www.dklevine.com/The Rhetoric of Inequity Aversion- A Reply by Ernst Fehr and Klaus Schmidt2005-03-16T19:59:43-08:00The Rhetoric of Inequity Aversion by Avner ShakedTed Bergstromhttp://www.najecon.org/v8.htm
Avner Shaked presents a sharply critical discussion of claims that Ernst Fehr and Klaus Schmidt have made for their theory of "inequity aversion" and of the methods that they have used to promote this theory. A vigorous response by Fehr and Schmidt and a brief rejoinder by Shaked can also be found at the above link. While Shaked's criticism is directed at Fehr and Schmidt, it raises important issues about the handling of evidence in many branches of economics.
666156000000000614board4@http://www.dklevine.com/The Rhetoric of Inequity Aversion by Avner Shaked2005-03-15T21:26:52-08:00Wishful Thinking in Strategic Environments by Muhamet YildizRan Spieglerhttp://www.najecon.org/v8.htm
Recently there has been a proliferation of economic models with over-optimistic agents. However, these models have a catch: players have biased beliefs regarding the moves of Nature, yet standard equilibrium analysis does not allow them to hold biased beliefs regarding other players' moves. Here is an interesting attempt to address this problem. The paper analyzes complete-information games with players who are "wishful thinkers": they choose not only how to act but also what to believe regarding the opponent's action. Yildiz constructs an epistemic model, in which "rationality in a state" is replaced with "wishful thinking in a state", and "common knowledge of rationality" is replaced with "common knowledge of wishful thinking". Yildiz shows that only strategies that are played in Nash equilibrium are consistent with common knowledge of wishful thinking. The only kind of biased beliefs that the model essentially leaves room for is optimism about which Nash equilibrium is going to be played.
666156000000000600board4@http://www.dklevine.com/Wishful Thinking in Strategic Environments by Muhamet Yildiz2005-03-11T22:58:29-08:00Robust Mechanism Design by Dirk Bergemann and Stephen MorrisMatthew O. Jacksonhttp://www.najecon.org/v8.htm
The authors investigate mechanism design in a robust sense: requiring that the mechanism result in the desired equilibrium outcomes even when a large type space is considered, so that agents' beliefs, beliefs about beliefs, etc., are incorporated into types and can vary. Anything that is ex post implementable is robustly implementable, and the authors identify settings where the converse holds so that these two concepts are equivalent. The authors also have an interesting companion paper (http://www.econ.yale.edu/%7Esm326/rmd-full.pdf) that looks at the full implementation question (accounting for all equilibria of a mechanism) in the face of such robustness requirements.
666156000000000596board4@http://www.dklevine.com/Robust Mechanism Design by Dirk Bergemann and Stephen Morris2005-03-10T15:20:38-08:00Who's Who in Networks. Wanted: the Key Player by Coralio Ballester, Antoni Calvo-Armengol and Yves ZenouMatthew O. Jacksonhttp://www.najecon.org/v8.htm
The authors provide results linking equilibrium behavior in a game among networked players to social-network measures of centrality, providing an interesting bridge between the economics and sociology literatures. Each player in a network picks a level of some activity in a game where there are negative global externalities (competition) and local positive externalities (learning, cooperation, etc.) that come through the network. This system has feedback effects, and the authors show how equilibrium activity levels can be expressed in terms of a centrality measure from the social networks literature (Bonacich centrality). Besides deriving some comparative statics, the authors show how the centrality index can be used to identify "key" players in terms of their decisions having maximum influence on overall activity.
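The Bonacich index the review mentions solves a simple linear system. A minimal sketch (the four-player network here is hypothetical, and the decay factor is chosen below the inverse of the largest eigenvalue, as the measure requires):

```python
import numpy as np

# Hypothetical 4-player network (symmetric adjacency matrix).
G = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def bonacich_centrality(G, a):
    """Bonacich centrality b = (I - a G)^{-1} 1; well defined (and
    positive) when the decay factor a is below 1 / largest eigenvalue."""
    n = G.shape[0]
    return np.linalg.solve(np.eye(n) - a * G, np.ones(n))

lam_max = max(abs(np.linalg.eigvals(G)))
b = bonacich_centrality(G, 0.9 / lam_max)
# Player 2, the best-connected node, gets the highest centrality.
```

In this toy network player 2, whose choices reach the most other players directly and indirectly, would be the "key" player in the sense of the paper.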
666156000000000590board4@http://www.dklevine.com/Who's Who in Networks. Wanted: the Key Player by Coralio Ballester, Antoni Calvo-Armengol and Yves Zenou2005-03-10T14:14:35-08:00Untitled by Alvaro SandroniJeff Elyhttp://www.najecon.org/v7.htm
This is the latest word in a fascinating literature on testing expert forecasters. A forecaster is making probabilistic predictions about the realizations of a stochastic process. A principal wishes to test these predictions against the observed outcomes to determine whether the forecaster is a true expert or not. Previous literature had considered "calibration tests" and it is known that even a completely ignorant forecaster can pass any such test. Here there are literally no restrictions on the type of test and it is shown that an ignorant forecaster can use a mixed forecasting strategy to pass *any* test that a true expert can pass. The mixed strategy depends on the test, so an open question is whether the principal can improve by randomizing the test and keeping it secret.
Erratum added December 3, 2005: The assertion in (4.1) on p. 7 is not correct, and the main Proposition, Proposition 1, must be viewed as unproven.
666156000000000461board4@http://www.dklevine.com/Untitled by Alvaro Sandroni2005-02-01T20:50:59-08:00The Concept of Income in a General Equilibrium by J Sefton and M WealeDavid K. Levinehttp://www.najecon.org/v7.htm
Theorists are rightfully skeptical of national income accounting, recognizing that the arbitrary methods used have no theoretical basis. This paper shows that - if it is done correctly - national income accounting can have a theoretical basis, and income can be measured so that it correlates directly with welfare.
122247000000000847board4@http://www.dklevine.com/The Concept of Income in a General Equilibrium by J Sefton and M Weale2005-01-15T11:19:57-08:00Optimal Voting Schemes with Costly Information Acquisition by Alex Gershkov and Balazs SzentesDavid K. Levinehttp://www.najecon.org/v7.htm
This is a nice representative of a growing literature on mechanism design when information acquisition is costly. It examines the case of a common objective in a setting where commitment to ex post inefficiency is not practical, and characterizes the optimal mechanism. The optimal mechanism is not a committee, but rather anonymous and sequential consultation of people until a threshold of precision is reached. As a practical matter it can be thought of as a process of getting a second (and third) opinion based on the information in the first (and second) opinion. For incentive reasons, it is best not to let the different "doctors" know that you have consulted with the others.
122247000000000314board4@http://www.dklevine.com/Optimal Voting Schemes with Costly Information Acquisition by Alex Gershkov and Balazs Szentes2004-07-20T14:58:49-08:00Fairness and Redistribution by Alberto Alesina and George-Marios AngeletosDavid K. Levinehttp://www.najecon.org/v7.htm
If wealth is due to luck, optimal insurance implies that a confiscatory tax is efficient; if wealth is due to effort, transfers should be low to encourage effort. But even if wealth is due to effort, when taxes are confiscatory effort does not generate wealth; only luck does, so beliefs that only luck matters will be self-confirming. Alesina and Angeletos use the resulting multiplicity of self-confirming equilibria to reconcile cross-country correlation of perceptions about wealth formation and tax policy.
122247000000000309board4@http://www.dklevine.com/Fairness and Redistribution by Alberto Alesina and George-Marios Angeletos2004-07-20T14:33:36-08:00Price Dispersion, Inflation and Welfare by Allen Head and Alok KumarDavid K. Levinehttp://www.najecon.org/v7.htm
This paper introduces price dispersion into a monetary model. This has two striking consequences: first, inflation affects the variance of prices as well as the level of prices; second, because price dispersion has an impact on the monopoly power of firms, inflation has unexpected welfare consequences. A mild inflation can be beneficial because it induces more search by consumers and reduces the monopoly power of firms.
122247000000000244board4@http://www.dklevine.com/Price Dispersion, Inflation and Welfare by Allen Head and Alok Kumar2004-06-04T13:47:39-08:00Limited Computational Resources Favor Rationality by Yuval SalantAriel Rubinsteinhttp://www.najecon.org/v6.htm
The paper presents an approach to computational aspects of choice functions. Computational complexity is measured by the number of memory cells needed to carry out a computation. The main result states that the choice functions which require the least amount of memory are rationalizable, while most choice functions require "much more" memory.
This is a very nice paper written by a very promising young researcher.
666156000000000084board4@http://www.dklevine.com/Limited Computational Resources Favor Rationality by Yuval Salant2003-07-20T05:21:11-08:00Carrot Or Stick: Group Selection and the Evolution of Reciprocal Preferences by Florian HeroldTed Bergstromhttp://www.najecon.org/v6.htm
This paper has the most interesting answer that I have seen to the question: How could natural selection produce creatures who get angry and bear costs to punish bad behavior even if no repeated encounter is likely? The paper also proposes an explanation of why some people will bear costs to reward good behavior, even without hope of reciprocity.
The paper uses a "haystack model" in which individuals are randomly assembled into groups where they interact and reproduce. The number of offspring that a player has is her payoff in an n-player prisoners' dilemma game in her group after accounting for punishments and rewards. If there are enough punishers or enough rewarders in a group, it pays everybody in the group to cooperate. Otherwise they all defect. The paper shows that with this setup, there exists an evolutionarily stable equilibrium in which all players are programmed to engage in costly punishment and where everyone therefore cooperates. It also shows that there is a polymorphic equilibrium in which some individuals reward cooperation and that there is no equilibrium in which nobody rewards cooperation. Here is a glimpse of how a population of costly punishers can be stable. If almost everybody in the population at large is a punisher, then in almost all groups, there is a preponderance of punishers and so everybody chooses to cooperate. So punishers never have to bear the costs of punishing. The only way that a non-punisher could have a different payoff from a punisher would be if the random matching process puts her in a group of enough non-punishers so that everybody in the group plays defect. The remarkable thing that Herold notices is that when non-punishers are rare in the population at large, the expected payoff to non-punishers will actually be lower than the expected payoff to punishers.
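That last observation is easy to check in a toy simulation (group size, threshold, and all payoffs below are illustrative assumptions, not Herold's): a group cooperates only if it contains enough punishers, and punishers pay a cost only in defecting groups. Conditional on her own type, a non-punisher makes her group likelier to defect, so punishers come out ahead when non-punishers are rare.

```python
import random

def simulate(p_punisher, n_groups=20000, group_size=5, threshold=3,
             coop_payoff=3.0, defect_payoff=1.0, punish_cost=0.5):
    """Toy haystack model: a group cooperates iff it contains at least
    `threshold` punishers; in a defecting group every punisher pays
    `punish_cost`. Returns the average payoff of each type."""
    total = {"punisher": 0.0, "other": 0.0}
    count = {"punisher": 0, "other": 0}
    for _ in range(n_groups):
        types = ["punisher" if random.random() < p_punisher else "other"
                 for _ in range(group_size)]
        cooperates = types.count("punisher") >= threshold
        for t in types:
            count[t] += 1
            if cooperates:
                total[t] += coop_payoff
            else:
                total[t] += defect_payoff - (punish_cost if t == "punisher" else 0.0)
    return {t: total[t] / count[t] for t in total}

random.seed(0)
avg = simulate(p_punisher=0.9)
# With punishers common, punishers almost never pay the punishment cost,
# and non-punishers sit in defecting groups more often, so punishers earn more.
```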
666156000000000075board4@http://www.dklevine.com/Carrot Or Stick: Group Selection and the Evolution of Reciprocal Preferences by Florian Herold2003-06-28T19:08:40-08:00Building Rational Cooperation by Jim Andreoni and Larry SamuelsonTed Bergstromhttp://www.najecon.org/v6.htm
This paper is a showcase for the way that economic theory can inform laboratory testing and vice versa. Previous experimental results suggest that some (but not all) subjects prefer to cooperate in a single-shot prisoners' dilemma if and only if they believe their opponents will cooperate. The paper presents a neat theory of how a heterogeneous population including some conditional cooperators would behave in a game of twice-repeated prisoners' dilemma. The theory is tested experimentally and seems to fare well.
666156000000000071board4@http://www.dklevine.com/Building Rational Cooperation by Jim Andreoni and Larry Samuelson2003-06-18T17:43:06-08:00Aggregative Public Goods Games by Richard Cornes and Roger HartleyTed Bergstromhttp://www.najecon.org/v6.htm
This paper introduces a clever trick for dealing with games in which each player's utility depends on his own consumption and on the sum of all players' contributions to a "public good". The trick greatly simplifies proofs of known results and seems to be a powerful tool for finding new ones.
666156000000000064board4@http://www.dklevine.com/Aggregative Public Goods Games by Richard Cornes and Roger Hartley2003-06-17T17:28:34-08:00The Linking of Collective Decisions and Efficiency by Matthew O. Jackson and Hugo F. SonnenscheinThomas R. Palfreyhttp://www.najecon.org/v6.htm
In Bayesian mechanism design problems, side payments are the usual way to relax incentive constraints. Without side payments, incentive constraints can be relaxed by linking mechanisms across several such problems. For example, in voting over a single issue individuals cannot express strength of preference, but with multiple issues this can be done by logrolling. The main result in this important paper is that, with many decision problems, if individuals have independent private values across these decisions, incentive constraints can be avoided entirely. This paper proposes a mechanism for achieving efficiency in the limit by requiring each agent’s reported profile of preferences across all decisions to match the prior distribution. Approximate truthful revelation is incentive compatible in a strong sense. There are many applications of this result.
666156000000000060board4@http://www.dklevine.com/The Linking of Collective Decisions and Efficiency by Matthew O. Jackson and Hugo F. Sonnenschein2003-06-14T08:03:02-08:00Addiction and Cue-Conditioned Cognitive Processes by B. D. Bernheim and Antonio RangelThomas R. Palfreyhttp://www.najecon.org/v5.htm
Bernheim and Rangel use facts on drug addiction to motivate an ingenious dynamic model of decision making featuring interactions of two separate cognitive systems, emotion and (rational) cognition. Agents always consume the drug when in a "hot mode", and may consume in a "cold mode". The level of addiction is a state variable, which rises after consumption and falls after abstention. They characterize the value function and show that for addictive goods it is declining in the state, so rational agents should never intentionally consume. The paper is a showcase piece for behavioral economics. The model is well-grounded in facts about neuroscience and addiction, and leads to interesting and testable empirical predictions with significant welfare implications.
666156000000000055board4@http://www.dklevine.com/Addiction and Cue-Conditioned Cognitive Processes by B. D. Bernheim and Antonio Rangel2003-06-14T06:51:16-08:00Beauty Contests, Bubbles and Iterated Expectations in Asset Markets by Franklin Allen, Stephen Morris and Hyun S. ShinDrew Fudenberghttp://www.najecon.org/v5.htm
This paper points out that the law of iterated expectations doesn't apply when averaged over a group of agents. Consequently, in a financial market with short-lived traders, the date 1 price need not equal the date 1 average expectation of the date 3 price: In contrast to representative-agent models, there need not be a martingale representation of the price process.
391749000000000557board4@http://www.dklevine.com/Beauty Contests, Bubbles and Iterated Expectations in Asset Markets by Franklin Allen, Stephen Morris and Hyun S. Shin2003-04-17T11:56:40-08:00Bounded Memory and Biases in Information Processing by Andrea WilsonDrew Fudenberghttp://www.najecon.org/v5.htm
This paper shows that some forms of biases in information processing are consistent with the optimal use of a finite memory. An infinitely-lived decision maker receives a sequence of signals, after which she must make a decision. The agent has a fixed number of memory states available, and chooses the updating rule and the map from memory to actions to maximize her expected payoff. The paper obtains a strikingly sharp characterization of the optimal rule when the agent is likely to observe a great many signals before needing to act.
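The flavor of the finite-memory constraint can be seen in a toy version (an illustration, not Wilson's optimal rule): an agent with m memory states runs a simple up/down ladder on noisy binary signals and guesses whichever hypothesis her final memory state leans toward. Accuracy is capped by the amount of memory and grows as memory does.

```python
import random

def accuracy(m, p=0.7, n_signals=200, trials=3000, seed=1):
    """Toy bounded-memory hypothesis test. Memory states are 0..m-1;
    each signal matches the true state with probability p and nudges
    the memory one rung up (toward "A") or down (toward "B"); the
    final guess is whichever half of the ladder the memory ends in."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        truth_is_A = rng.random() < 0.5
        state = m // 2  # start in the middle of the ladder
        for _ in range(n_signals):
            signal_says_A = (rng.random() < p) == truth_is_A
            state = min(m - 1, state + 1) if signal_says_A else max(0, state - 1)
        correct += ((state >= m / 2) == truth_is_A)
    return correct / trials

acc_small, acc_large = accuracy(m=4), accuracy(m=16)
# More memory states allow a more accurate final decision.
```

Even with many signals, the 4-state agent keeps making mistakes at a rate fixed by her memory size, while the 16-state agent is nearly always right.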
234936000000000072board4@http://www.dklevine.com/Bounded Memory and Biases in Information Processing by Andrea Wilson2003-04-17T11:23:43-08:00Buyer Coalition Against Monopolistic Screening: On the Role of Asymmetric Information among Buyers by Doh-Shin Jeon and Domenico MenicucciDavid K. Levinehttp://www.najecon.org/v5.htm
A monopolist can achieve a degree of price discrimination by allowing consumers to self-select among a menu of alternatives. But what if the consumers collude? This paper establishes the surprising result that the monopolist can do as well in the face of collusion as in its absence. It does so by exploiting the fact that consumers also face asymmetric information.
506439000000000028board4@http://www.dklevine.com/Buyer Coalition Against Monopolistic Screening: On the Role of Asymmetric Information among Buyers by Doh-Shin Jeon and Domenico Menicucci2002-08-26T11:40:09-08:00Persistence in Law-of-One-Price Deviations: Evidence From Micro-Price Data by Mario J. Crucini and Mototsugu ShintaniDavid K. Levinehttp://www.najecon.org/v5.htm
A long-standing puzzle in the theory of exchange rates is that short-term exchange rate variations appear to have a half life of 3-5 years - difficult to explain as a consequence of nominal rigidities. Current thinking is that violations of the law of one price between countries are due to real factors, such as differences in the prices of non-traded inputs (such as land). Crucini and Shintani use prices on 270 goods across 90 countries and 13 US cities to study the long and short-term adjustment of prices. They find strong evidence that there are long-term price differences between cities, but not within the US. On the other hand, adjustment back to these long-term prices following short-term exchange rate shocks is rapid, with a half life of a year or less, both between countries and within the US.
506439000000000022board4@http://www.dklevine.com/Persistence in Law-of-One-Price Deviations: Evidence From Micro-Price Data by Mario J. Crucini and Mototsugu Shintani2002-08-04T12:53:59-08:00Inductive Inference: An Axiomatic Approach by Itzhak Gilboa and David SchmeidlerWolfgang Pesendorferhttp://www.najecon.org/v4.htm
An agent must rank the likelihood of eventualities based on a memory of past cases. For each memory, the agent is assumed to have a complete ranking. The paper provides axioms that yield the following representation: a weight is assigned to each case-eventuality pair and eventualities are ranked according to the sum of their weights (summed over all cases in memory). The key axiom asserts that if for two disjoint memories x is deemed more likely than y then the same ranking holds for the combined memory.
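The additive representation is easy to state in code. A minimal sketch, with hypothetical cases ("cloudy", "windy"), eventualities ("rain", "sun"), and made-up weights, that also illustrates the combination axiom:

```python
# Made-up weights on each (case, eventuality) pair; purely illustrative.
weights = {
    ("cloudy", "rain"): 2.0, ("cloudy", "sun"): 0.5,
    ("windy",  "rain"): 1.5, ("windy",  "sun"): 1.0,
}

def rank(memory, eventualities):
    """Additive representation: score each eventuality by summing its
    weight over all cases in memory, then rank by total score."""
    score = {e: sum(weights[(c, e)] for c in memory) for e in eventualities}
    return sorted(eventualities, key=lambda e: -score[e])

# Combination property: "rain" beats "sun" under each of two disjoint
# memories, so it also wins under their union (scores simply add up).
first = rank(["cloudy"], ["rain", "sun"])[0]             # "rain" (2.0 > 0.5)
second = rank(["windy"], ["rain", "sun"])[0]             # "rain" (1.5 > 1.0)
combined = rank(["cloudy", "windy"], ["rain", "sun"])[0]  # "rain" (3.5 > 1.5)
```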
391749000000000547board4@http://www.dklevine.com/Inductive Inference: An Axiomatic Approach by Itzhak Gilboa and David Schmeidler2002-05-10T14:21:58-08:00Two-Class Voting: A Mechanism for Conflict Resolution? by Ernst Maug and Bilge YilmazWolfgang Pesendorferhttp://www.najecon.org/v4.htm
A group of agents must vote on a proposed policy. Agents have private information about the merits of the policy. The paper compares a simple voting rule with a "two-class" voting system. A simple voting rule requires k votes for the policy to be implemented. The two-class system partitions the agents into two groups and specifies a simple voting rule for each group. The policy is implemented if it is approved by both groups. The paper shows that two-class voting aggregates more information if agents have sufficiently diverse preferences.
391749000000000539board4@http://www.dklevine.com/Two-Class Voting: A Mechanism for Conflict Resolution? by Ernst Maug and Bilge Yilmaz2002-05-03T09:33:09-08:00Coalitional Rationalizability by Attila AmbrusDrew Fudenberghttp://www.najecon.org/v4.htm
Suppose that whenever it is of mutual interest for a group of players to avoid certain strategies, the members of the group will make an implicit agreement not to play them. This leads to an iterative procedure of restricting players' beliefs and action choices; the strategies that remain are called coalitionally rationalizable. In contrast to coalitional solution concepts based on the notion of Nash equilibrium, the set of coalitionally rationalizable strategies is always nonempty.
391749000000000521board4@http://www.dklevine.com/Coalitional Rationalizability by Attila Ambrus2002-03-28T11:26:58-08:00Bad Reputation by Jeffrey Ely and Jusso ValimakiDrew Fudenberghttp://www.najecon.org/v4.htm
This paper constructs a striking example of a game played by a long-run player against a sequence of short-run opponents. When the long-run player is known to be rational, then regardless of the player's discount factor there is an equilibrium that achieves the highest feasible payoff, while introducing a particular “bad” commitment type lowers the equilibrium payoff of a patient long-run player. Moreover, holding fixed the probability of the bad type, the equilibrium payoff of a patient long-run player is lower than its payoff in a one-time interaction.
391749000000000517board4@http://www.dklevine.com/Bad Reputation by Jeffrey Ely and Jusso Valimaki2002-03-26T10:13:31-08:00Sequentially Optimal Mechanisms by Vasiliki SkretaMichele Boldrinhttp://www.najecon.org/v4.htm
You do not renounce selling a good just because the first round of bargaining failed, yet the literature on auctions under incomplete information assumes you would: if the first round fails, you commit never to sell the good. Skreta's paper looks at sequential mechanism design without this kind of commitment. When designing today's mechanism for selling the good, you cannot commit to tomorrow's mechanism; hence the revelation principle cannot be applied. A characterization of the optimal dynamic incentive scheme for two-period problems without commitment is provided. After characterizing the seller's problem for arbitrary agent types, the author shows that, in sequential bilateral bargaining, the optimal mechanism is to post a price each period.
391749000000000490board4@http://www.dklevine.com/Sequentially Optimal Mechanisms by Vasiliki Skreta2002-03-05T16:03:28-08:00Can We Really Observe Hyperbolic Discounting? by Jesus Fernandez-Villaverde and Arijit MukherjiDavid K. Levinehttp://www.najecon.org/v3.htm
Short answer: no. In the presence of uncertainty about future preferences geometric discounting gives rise to exactly the type of "preference reversal" observed in psychology experiments. However, hyperbolic discounting implies a preference for commitment, a preference that is not present in an experiment carefully designed to distinguish the two theories.
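For readers who have not seen the reversal in question: under textbook beta-delta ("quasi-hyperbolic") discounting, a smaller early reward is preferred when it is immediate but not when both options are pushed into the future, whereas exponential discounting ranks the two pairs identically. A numeric sketch with illustrative parameters:

```python
DELTA = 0.99  # per-day exponential discount factor (illustrative)
BETA = 0.7    # one-off penalty on anything not immediate (illustrative)

def pv_exponential(x, t):
    return x * DELTA ** t

def pv_beta_delta(x, t):
    """Quasi-hyperbolic present value: delayed rewards get an extra
    one-off discount of BETA on top of exponential discounting."""
    return x if t == 0 else BETA * x * DELTA ** t

# $100 now vs $110 tomorrow, and the same choice shifted 30 days out.
qh_today = pv_beta_delta(100, 0) > pv_beta_delta(110, 1)    # prefer early now
qh_month = pv_beta_delta(100, 30) > pv_beta_delta(110, 31)  # ...but not later
exp_today = pv_exponential(100, 0) > pv_exponential(110, 1)
exp_month = pv_exponential(100, 30) > pv_exponential(110, 31)
# Beta-delta reverses (qh_today True, qh_month False); the exponential
# ranking is the same at both horizons (exp_today == exp_month).
```

Fernandez-Villaverde and Mukherji's point is that this choice pattern alone cannot identify hyperbolic discounting, since geometric discounting with uncertain future preferences generates it too; only a demand for commitment separates the theories.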
391749000000000481board4@http://www.dklevine.com/Can We Really Observe Hyperbolic Discounting? by Jesus Fernandez-Villaverde and Arijit Mukherji2002-02-16T17:42:47-08:00Social Choice without Rationality by Gil KalaiAriel Rubinsteinhttp://www.najecon.org/v3.htm
The power of high-caliber mathematicians is knocking on the doors of Social Choice Theory with some interesting and general results. The paper overviews two involved mathematical results, proved by Saharon Shelah and Gil Kalai, which relate to the aggregation of classes of choice functions. In particular the following result is discussed: Let C be a class of choice functions which does not contain all choice functions, and which is closed under all permutations of the names of the alternatives. Let F be a function which aggregates profiles of functions in C into a function in C, such that (i) if all individuals agree on the choice from a set, so does the aggregate, and (ii) the choice from a set depends only on the individuals' choices from that set. Then F must be a "dictatorship".
391749000000000457board4@http://www.dklevine.com/Social Choice without Rationality by Gil Kalai2002-01-28T09:31:30-08:00Optimal Indirect and Capital Taxation by Mikhail Golosov, Narayana Kocherlakota and Aleh TsyvinskiSusan Atheyhttp://www.najecon.org/v3.htm
This paper analyzes the classic Mirrlees problem of designing a taxation scheme to provide insurance, when agents' skills are private information. However, this paper considers a much more general model, where agents' skills may be multidimensional and can follow any stochastic process, and the tax system can be nonlinear and history-dependent. The paper provides an important and general insight: investment should be discouraged relative to the complete-information solution, because future investment income makes it more costly to provide incentives for truthful revelation. Thus, the optimal tax scheme has a positive capital income tax.
391749000000000453board4@http://www.dklevine.com/Optimal Indirect and Capital Taxation by Mikhail Golosov, Narayana Kocherlakota and Aleh Tsyvinski2002-01-10T10:22:51-08:00Testing Threats in Repeated Games by Ran SpieglerAriel Rubinsteinhttp://www.najecon.org/v3.htm
Two players play a 2x2 repeated game. Strategies are implemented by automata. When a player responds to a state of the other machine with an action that is not the one-shot best response, he is deterred by some threat from the other player's machine. The paper suggests a solution concept which essentially requires, for each player, that (1) if the other player's machine has a recurrent state, the machine will eventually play the best response against it, and (2) along the solution path, a player who does not play the one-shot best response can point to an event in the past which shows that the deterring threat exists. A partial characterization of the solution for repeated chicken and prisoner's dilemma, and a folk theorem, are provided.
391749000000000447board4@http://www.dklevine.com/Testing Threats in Repeated Games by Ran Spiegler2002-01-07T13:29:47-08:00Learning To Play Games In Extensive Form By Valuation by Philippe Jehiel and Dov SametAriel Rubinsteinhttp://www.najecon.org/v3.htm
Extensive game-theoretic models of reinforcement learning assume that players make their decisions based on their experienced valuation of the extensive-game strategies. A new and exciting direction of research is suggested: each player evaluates each move separately and chooses the move with the highest valuation. When applied to win-lose games, and when the valuation of a move is taken to be the payoff incurred the last time the action was used, a player with a winning strategy always wins. For general payoffs, when the valuation of a move is the average of the payoffs incurred when it was used, and with a small "exploration" probability, the players converge to a subgame perfect equilibrium.
391749000000000013board4@http://www.dklevine.com/Learning To Play Games In Extensive Form By Valuation by Philippe Jehiel and Dov Samet2001-12-12T03:14:15-08:00Arms Races and Negotiations by Sandeep Baliga and Tomas SjostromTed Bergstromhttp://www.najecon.org/v2.htm
A "solution" to Schelling's burglar's dilemma. Players are randomly matched to play a two-person game where a player's type is private information. For some low types, defect is a dominant strategy, while for higher types "cooperate" is a best response if and only if the probability that one's match cooperates is high enough. In equilibrium without talk, mutual distrust feeds on itself and all defect; but with pregame cheap talk, there is a remarkable partially separating equilibrium that maintains a good deal of cooperation.
391749000000000008board4@http://www.dklevine.com/Arms Races and Negotiations by Sandeep Baliga and Tomas Sjostrom2001-12-06T13:57:05-08:00Signals, Evolution and the Explanatory Power of Transient Information by Brian SkyrmsTed Bergstromhttp://www.najecon.org/v2.htm
Talk about consumers' surplus; Skyrms shows that in evolutionary dynamic models of games, talk can be extremely valuable, even though it is cheap. In games like the stag hunt or Nash bargaining game with multiple Nash equilibria, a cheap talk phase can create new polymorphic equilibria, and change the stability and the basins of attraction of equilibria in the base game.
391749000000000003board4@http://www.dklevine.com/Signals, Evolution and the Explanatory Power of Transient Information by Brian Skyrms2001-12-04T17:02:02-08:00Instantaneous Gratification by Christopher Harris and David LaibsonDavid K. Levinehttp://www.najecon.org/v2.htm
Worried that hyperbolic discounting is unusable? That there is a continuum of equilibria? This paper gives a clean usable formulation of hyperbolic discounting in continuous time with an illustrative application to a savings problem.
625018000000000270board4@http://www.dklevine.com/Instantaneous Gratification by Christopher Harris and David Laibson2001-10-26T12:16:26-08:00Repeated Games with Almost-Public Monitoring by George J. Mailath and Stephen MorrisSusan Atheyhttp://www.najecon.org/v2.htm
This paper provides positive and interpretable results for repeated games where monitoring is private (i.e. each player privately observes a noisy signal of opponent actions), but close to perfect. The main result shows that if in a (strict) perfect public equilibrium of a game with public monitoring, players condition their behavior only on a finite history, then such an equilibrium also exists in close-by games with private monitoring. The paper gives several instructive examples, illustrating how the restriction on memory implies that players have approximate common knowledge of the state of the game.
625018000000000260board4@http://www.dklevine.com/Repeated Games with Almost-Public Monitoring by George J. Mailath and Stephen Morris2001-10-04T22:17:04-08:00Consumption Savings Decisions with Quasi-Geometric Discounting by Per Krusell and Anthony A. Smith, Jr.Wolfgang Pesendorferhttp://www.najecon.org/v2.htm
A consumption-savings problem of an infinitely lived consumer with beta-delta preferences is shown to have many Markov perfect equilibria. In particular, the consumer's capital holdings may converge to any point in a wide interval.
625018000000000254board4@http://www.dklevine.com/Consumption Savings Decisions with Quasi-Geometric Discounting by Per Krusell and Anthony A. Smith, Jr.2001-10-02T13:21:06-08:00Participation Externalities and Asset Price Volatility by Helios HerreraMichele Boldrinhttp://www.najecon.org/v1.htm
Common wisdom and previous literature argue that, due to a law of large numbers effect, increasing participation decreases price volatility. Available evidence suggests the opposite is true. The model developed here reconciles theory with facts. Key assumptions are: (i) exogenous fixed cost of entry and, (ii) heterogeneity in risk aversion. In equilibrium new entrants are more risk averse than people already in the market. Hence their participation increases the volatility of supporting prices.
625018000000000244board4@http://www.dklevine.com/Participation Externalities and Asset Price Volatility by Helios Herrera2001-09-24T15:54:50-08:00The Long March of History: Farm Laborers Wages in England 1208-1850 by Gregory ClarkMichele Boldrinhttp://www.najecon.org/v1.htm
In which it is once again shown, on the basis of reliable historical records, that the idea of a long economic stagnation until the miracle of the industrial revolution is quite incorrect. Growth in labor productivity comes from far back in human history, and this is true even for England.
625018000000000240board4@http://www.dklevine.com/The Long March of History: Farm Laborers Wages in England 1208-1850 by Gregory Clark2001-09-24T15:48:20-08:00Costly Voting by Tilman BörgersWolfgang Pesendorferhttp://www.najecon.org/v1.htm
Should voting be mandatory or voluntary? The paper shows that voluntary voting Pareto dominates in a symmetric environment with costly voting.
625018000000000234board4@http://www.dklevine.com/Costly Voting by Tilman Börgers2001-09-24T07:42:46-08:00Two Competing Models of How People Learn in Games by Ed HopkinsDrew Fudenberghttp://www.najecon.org/v1.htm
This paper shows that the steady states and local stability properties of stochastic fictitious play and reinforcement learning are very similar. Apparent evidence to the contrary provided by Erev and Roth had ignored the way that the noise terms in reinforcement learning can push the steady states away from the Nash equilibria of the unperturbed game.
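A minimal sketch of stochastic fictitious play (logit smoothing and all parameters are illustrative assumptions) in matching pennies: each player best-responds, with noise, to the empirical frequency of the opponent's past play, and long-run frequencies settle near the mixed equilibrium (1/2, 1/2).

```python
import math
import random

def logit_heads(u_heads, u_tails, lam=2.0):
    """Smoothed (logit) best response: probability of playing Heads."""
    return 1.0 / (1.0 + math.exp(-lam * (u_heads - u_tails)))

random.seed(0)
heads1 = heads2 = 1.0  # fictitious prior counts of Heads for each player
n = 2.0                # fictitious prior number of observations
for _ in range(20000):
    p = heads1 / n  # player 2's belief that player 1 plays Heads
    q = heads2 / n  # player 1's belief that player 2 plays Heads
    # Player 1 wants to match; player 2 wants to mismatch.
    a1 = random.random() < logit_heads(2 * q - 1, 1 - 2 * q)
    a2 = random.random() < logit_heads(1 - 2 * p, 2 * p - 1)
    heads1 += a1
    heads2 += a2
    n += 1
freq1, freq2 = heads1 / n, heads2 / n  # both hover near 0.5
```

Replacing the belief-based logit choice with payoff-proportional reinforcement gives the other model Hopkins compares; his point is that, noise terms aside, the two processes share steady states and local stability properties.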
625018000000000228board4@http://www.dklevine.com/Two Competing Models of How People Learn in Games by Ed Hopkins2001-09-21T11:38:53-08:00Is it 'Economics and Psychology'?: The Case of Hyperbolic Discounting by Ariel RubinsteinDavid K. Levinehttp://www.najecon.org/v1.htm
If you have never heard of hyperbolic discounting, if you have heard that economics is all screwed up because it has been proven that in reality people use hyperbolic rather than exponential discounting, or if you are just wondering what the fuss is all about, this is the paper to read.
625018000000000221board4@http://www.dklevine.com/Is it 'Economics and Psychology'?: The Case of Hyperbolic Discounting by Ariel Rubinstein2001-09-21T09:57:08-08:00