NAJ Economics
Peer Reviews of Economics Publications


Charter Editorial Board
ISSN 1558-4682
Volume 6, June 14, 2003
1. Matthew O. Jackson and Hugo F. Sonnenschein, "The Linking of Collective Decisions and Efficiency." In Bayesian mechanism design problems, side payments are the usual way to relax incentive constraints. Without side payments, incentive constraints can instead be relaxed by linking mechanisms across several such problems. For example, in voting over a single issue individuals cannot express strength of preference, but across multiple issues they can, via logrolling. The main result of this important paper is that, as the number of decision problems grows, if individuals have independent private values across these decisions, incentive constraints can be avoided entirely. The paper proposes a mechanism that achieves efficiency in the limit by requiring each agent's reported profile of preferences across all decisions to match the prior distribution. Approximate truthful revelation is incentive compatible in a strong sense. There are many applications of this result.
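The linking idea can be sketched numerically. The snippet below is a hedged illustration, not the paper's actual mechanism: it assumes two intensity levels ("high"/"low") drawn i.i.d. with a uniform prior, and imposes the budget that reports must match that prior, so over N decisions an agent may label only N/2 of them "high".

```python
import random

# Illustrative sketch of the linking/budgeting idea (my simplification,
# not Jackson-Sonnenschein's exact mechanism): an agent constrained to
# report "high" on exactly half of many i.i.d. decisions does best by
# spending the quota on the decisions she truly values most.

def budgeted_report(values, budget):
    """Report 'high' on the `budget` decisions with the largest values."""
    ranked = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    high = set(ranked[:budget])
    return ["high" if i in high else "low" for i in range(len(values))]

random.seed(0)
N = 1000
values = [random.random() for _ in range(N)]           # i.i.d. private values
true_types = ["high" if v > 0.5 else "low" for v in values]
report = budgeted_report(values, budget=N // 2)

# The quota forces reports to track the truth as N grows: only the few
# decisions nearest the cutoff can end up misreported.
agreement = sum(r == t for r, t in zip(report, true_types)) / N
print(f"fraction reported truthfully: {agreement:.3f}")
```

With 1000 decisions the report agrees with the true type profile on well over 90 percent of decisions, which is the flavor of the limiting truthfulness result.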
2. Richard Cornes and Roger Hartley, "Aggregative Public Goods Games." This paper introduces a clever trick for dealing with games in which each player's utility depends on his own consumption and on the sum of all players' contributions to a public good. The technique greatly simplifies proofs of known results and looks like a powerful tool for establishing new ones.
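The flavor of the aggregative trick can be shown with a worked example. The sketch below is my own illustration, not Cornes and Hartley's construction: with Cobb-Douglas utility u_i = x_i^(1-a) G^a and budget w_i = x_i + g_i, the first-order condition pins down each player's contribution as a function of the aggregate G alone, so a Nash equilibrium reduces to a one-dimensional fixed-point problem.

```python
# Hedged sketch of the aggregative-game idea (illustrative, not the
# paper's exact construction). The contribution consistent with an
# aggregate G depends only on G:
#     r_i(G) = max(0, w_i - ((1-a)/a) * G),
# so equilibrium is the scalar fixed point G = sum_i r_i(G), found by
# bisection on the (strictly decreasing) excess sum_i r_i(G) - G.

def replacement(w_i, a, G):
    return max(0.0, w_i - (1 - a) / a * G)

def equilibrium_aggregate(wealths, a, tol=1e-10):
    lo, hi = 0.0, sum(wealths)   # the aggregate cannot exceed total wealth
    while hi - lo > tol:
        mid = (lo + hi) / 2
        excess = sum(replacement(w, a, mid) for w in wealths) - mid
        if excess > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

wealths = [1.0, 2.0, 4.0]
G = equilibrium_aggregate(wealths, a=0.5)
contribs = [replacement(w, 0.5, G) for w in wealths]
print(G, contribs)
```

With a = 0.5 the equilibrium aggregate is G = 2, supplied entirely by the richest player while the others free-ride: the point of the trick is that this follows from solving in one dimension rather than over the whole strategy profile.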
3. Jim Andreoni and Larry Samuelson, "Building Rational Cooperation." This paper is a showcase for the way economic theory can inform laboratory testing and vice versa. Previous experimental results suggest that some (but not all) subjects prefer to cooperate in a single-shot prisoners' dilemma if and only if they believe their opponents will cooperate. The paper presents a neat theory of how a heterogeneous population including some conditional cooperators would behave in a twice-repeated prisoners' dilemma. The theory is tested experimentally and seems to fare well.
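Some back-of-envelope arithmetic shows why even purely selfish players might cooperate early in such a population. This is my own illustrative calculation with standard prisoners' dilemma payoffs, not the model in the paper: a selfish player can either defect throughout or mimic a cooperator in round 1 so that a conditional cooperator (who cooperates in round 2 only if her partner cooperated in round 1) keeps cooperating and can be exploited in round 2.

```python
# Illustrative payoff arithmetic (assumed numbers, not the paper's model):
# standard PD payoffs with temptation T > reward R > punishment P > sucker S.
T, R, P, S = 5, 3, 1, 0

def mimic_payoff(p):
    # p = share of conditional cooperators in the population.
    # vs conditional type: mutual cooperation, then defect on a cooperator;
    # vs an always-defector: suckered in round 1, mutual defection after.
    return p * (R + T) + (1 - p) * (S + P)

def always_defect_payoff(p):
    # vs conditional type: exploit in round 1, mutual defection in round 2;
    # vs an always-defector: mutual defection in both rounds.
    return p * (T + P) + (1 - p) * (P + P)

for p in (0.2, 1 / 3, 0.5):
    print(p, mimic_payoff(p), always_defect_payoff(p))
```

With these particular payoffs, mimicking cooperation pays exactly when the share of conditional cooperators exceeds 1/3, so enough conditional cooperators make early cooperation rational even for selfish types.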
4. Florian Herold, "Carrot Or Stick: Group Selection and the Evolution of Reciprocal Preferences." This paper has the most interesting answer that I have seen to the question: how could natural selection produce creatures who get angry and bear costs to punish bad behavior even when no repeated encounter is likely? The paper also proposes an explanation of why some people will bear costs to reward good behavior, even without hope of reciprocity. It uses a "haystack model" in which individuals are randomly assembled into groups where they interact and reproduce. The number of offspring a player has is her payoff in an n-player prisoners' dilemma played within her group, after accounting for punishments and rewards. If there are enough punishers or enough rewarders in a group, it pays everybody in the group to cooperate; otherwise they all defect. The paper shows that with this setup there exists an evolutionarily stable equilibrium in which all players are programmed to engage in costly punishment and in which everyone therefore cooperates. It also shows that there is a polymorphic equilibrium in which some individuals reward cooperation, and that there is no equilibrium in which nobody rewards cooperation. Here is a glimpse of how a population of costly punishers can be stable. If almost everybody in the population at large is a punisher, then almost all groups contain a preponderance of punishers, so everybody chooses to cooperate and punishers never have to bear the costs of punishing. The only way a non-punisher could earn a different payoff from a punisher is if the random matching process puts her in a group with enough other non-punishers that everybody in the group defects. The remarkable thing Herold notices is that when non-punishers are rare in the population at large, the expected payoff to non-punishers is therefore actually lower than the expected payoff to punishers.
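The overrepresentation argument can be checked with a small simulation. The sketch below uses my own toy parameters, not Herold's model: groups of size n form at random, a group cooperates only if it contains at least a threshold number of punishers, and punishers pay a cost only in the (rare) all-defect groups.

```python
import random

# Hedged toy simulation of the haystack logic (assumed numbers, not
# Herold's exact model). A group cooperates iff it holds >= `threshold`
# punishers; cooperators earn b, defecting groups earn 0, and punishers
# in defecting groups additionally pay the punishment cost c. Since a
# non-punisher needs `threshold` punishers among the OTHERS while a
# punisher needs only `threshold` - 1, non-punishers are overrepresented
# in the groups that tip into defection.

def expected_payoffs(share_punishers, n=10, threshold=8, b=1.0, c=0.5,
                     trials=100_000, seed=1):
    rng = random.Random(seed)
    totals = {"punisher": [0.0, 0], "non-punisher": [0.0, 0]}
    for _ in range(trials):
        group = ["punisher" if rng.random() < share_punishers else "non-punisher"
                 for _ in range(n)]
        cooperate = group.count("punisher") >= threshold
        for kind in group:
            payoff = b if cooperate else (-c if kind == "punisher" else 0.0)
            totals[kind][0] += payoff
            totals[kind][1] += 1
    return {k: s / m for k, (s, m) in totals.items() if m}

pays = expected_payoffs(share_punishers=0.9)
print(pays)
```

Even though punishers pay the cost in defecting groups and non-punishers never pay anything, the simulation shows punishers earning more on average when they dominate the population, which is exactly the stability mechanism described above.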
5. Yuval Salant, "Limited Computational Resources Favor Rationality." The paper presents an approach to computational aspects of choice functions. Computational complexity is measured by the number of memory cells needed to carry out a computation. The main result states that the choice functions that require the least memory are rationalizable, while most choice functions require "much more" memory. This is a very nice paper by a very promising young researcher.
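One way to see the intuition, on my own gloss rather than Salant's formal model: a rationalizable choice function, one that maximizes a fixed ranking, can be computed by scanning the choice set while remembering a single current candidate.

```python
# Hedged illustration of the memory intuition (my gloss, not the paper's
# formal result): maximizing a fixed ranking needs only one "memory
# cell" -- the best item seen so far -- regardless of the set's size.

def rational_choice(items, rank):
    """Pick the rank-maximal item using O(1) working memory."""
    best = None
    for x in items:
        if best is None or rank(x) > rank(best):
            best = x   # the only state carried across the scan
    return best

utility = {"a": 2, "b": 5, "c": 3}.get
print(rational_choice(["a", "b", "c"], utility))  # -> b
```

A non-rationalizable choice function, by contrast, may condition on which other items are present, which in general requires tracking far more state than a single running maximum.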
