NAJ Economics (http://www.najecon.org). Feed updated 2011-01-27. Copyright (c) 2011, Creative Commons.

Overcoming Ideological Bias in Elections, by Vijay Krishna and John Morgan (2010-09-28)
Imagine that voters care about the quality of the candidates but also lean towards one of the two candidates. With sincere voting the larger party wins, which is inefficient if its members do not care very much and the minority party does. Now take an off-the-rack voter participation model with stochastic participation costs and many strategic voters. Because the party that cares more is more willing to incur participation costs, it has a better chance of winning. So much so that, amazingly, majority voting gets the outcome exactly right. Reviewed by David K. Levine.

Competitive Markets without Commitment, by Nick Netzer and Florian Scheuer (2010-03-22)
This paper shows that a problem that seems very awkward for a benevolent planner is at least ameliorated by an unsophisticated invisible hand. Consider a contract between a principal and a risk-averse agent who is also subject to moral hazard, so there is a tension between incentives and risk-sharing. In a competitive market counterpart, with no commitment on either side, the outcome generated Pareto dominates the single-principal outcome. Reviewed by Arthur Robson.

Judicial Precedent as a Dynamic Rationale for Axiomatic Bargaining Theory, by Marc Fleurbaey and John Roemer (2010-02-01)
Suppose that an arbitrator must allocate payoffs at every date from a freshly drawn two-person bargaining problem. She pays a penalty if her allocation at some date violates one of the Nash axioms relative to her past behavior. (Penalties are lower for inconsistencies with the more distant past.) Conditions are given for penalty-minimizing behavior to converge to the Nash bargaining solution over time. This working paper opens (by example) a new line that might apply more generally to axiomatic solution concepts. If successful, it would connect judicial precedent to axiomatic reasoning. Reviewed by Debraj Ray.

Revealed Attention, by Yusufcan Masatlioglu, Daisuke Nakajima and Erkut Ozbay (2009-12-01)
How can we infer preferences from choices when the decision maker may be unaware of some of the feasible alternatives? This paper enriches the standard model of rational choice by assuming that the decision maker is characterized by two unobservables: her preferences, and an "attention filter" which reduces every choice set to a "consideration set", to which preferences are applied. The paper addresses the problem of identifying these two components from observed choices. Reviewed by Ran Spiegler.

Choice by Sequential Procedures, by Jose Apesteguia and Miguel Ballester (2009-12-01)
Consider a decision maker who employs a sequence of incomplete, acyclic preference relations to eliminate alternatives from the choice set until she reaches a unique element, which is the one she ends up choosing. The paper axiomatizes this procedure and relates it to other methods of rationalizing choice behavior.
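Such a sequential elimination procedure is easy to sketch in code. The relations and menu below are invented for illustration and are not from the paper; each relation is an incomplete binary comparison used to discard dominated alternatives:

```python
def eliminate(menu, relation):
    """Remove every alternative that some other alternative beats
    under this (possibly incomplete) binary relation."""
    return {x for x in menu if not any(relation(y, x) for y in menu if y != x)}

def sequential_choice(menu, relations):
    """Apply the relations in order, shrinking the menu at each step."""
    remaining = set(menu)
    for rel in relations:
        remaining = eliminate(remaining, rel)
    return remaining

# Hypothetical example: alternatives are (price, quality) pairs.
# First screen out anything that is "much more expensive", then pick on quality.
cheaper = lambda x, y: x[0] < y[0] - 10   # incomplete: only a big price gap counts
better = lambda x, y: x[1] > y[1]
menu = [(100, 3), (105, 5), (200, 9)]
print(sequential_choice(menu, [cheaper, better]))  # -> {(105, 5)}
```

Note that the order of the relations matters: applying `better` first would select (200, 9) instead, which is exactly why the procedure can violate standard rationality axioms.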
It also extends a result due to Manzini and Mariotti (2007) showing that a sequentially rationalizable choice function violates IIA only if the preference relation revealed by choices from pairs contains cycles. Reviewed by Ran Spiegler.

Democratic Peace and Electoral Accountability, by Paola Conconi, Nicolas Sahuguet and Maurizio Zanardi (2009-11-21)
This paper sheds very interesting new light on the so-called "democratic peace": the fact that it is extremely rare for two democracies to go to war with each other. The authors show that this is largely correlated with whether or not a democratic leader faces reelection, and that democratic leaders facing term limits in fact act like autocrats. Thus, incentives for reelection play a large part in explaining the democratic peace. Reviewed by Matthew O. Jackson.

Bayesian Persuasion, by Emir Kamenica and Matthew Gentzkow (2009-10-14)
A DA who always wishes to convict structures a case for a judge who wishes to do the right thing. The DA can select the forensic tests to perform (which must be truthfully reported) such that the judge will rationally convict a larger fraction of those on trial than are actually guilty.
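The arithmetic behind this can be checked directly. The numbers below (prior guilt 0.3, conviction threshold 0.5) are the standard textbook illustration of the prosecutor example, not values fixed by the model:

```python
prior = 0.3       # fraction of defendants who are guilty
threshold = 0.5   # judge convicts iff posterior probability of guilt >= 1/2

# The DA designs a test that always reports "guilty" for the guilty, and
# reports "guilty" for an innocent defendant with probability q, chosen so
# the posterior after a "guilty" report lands exactly on the threshold.
q = prior * (1 - threshold) / ((1 - prior) * threshold)   # = 3/7 here

posterior = prior / (prior + (1 - prior) * q)
conviction_rate = prior + (1 - prior) * q

print(q, posterior, conviction_rate)  # q = 3/7; posterior hits 0.5; 60% convicted
```

So although only 30% of defendants are guilty, the judge rationally convicts 60% of them: the report is truthful, and a "guilty" report leaves the judge exactly willing to convict.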
This paper breathes new life into Aumann and Maschler's results for repeated games with incomplete information. Reviewed by Arthur Robson.

Overconfidence, by Jean-Pierre Benoit and Juan Dubra (2008-05-05)
It has been firmly established in the experimental laboratory and in survey data that Garrison Keillor is right and everybody thinks that they are above average. It turns out that in a population of rational people with rational expectations and noisy data, this is exactly as theory predicts. Reviewed by David K. Levine.

Kludged, by Jeffrey C. Ely (2008-04-03)
Probably we should try to avoid the editors reviewing each other's papers, but this one I can't resist. Anyone who has ever written computer code realizes that patches accumulate, and as they accumulate it gets harder to write additional code. Eventually programmers tear the thing apart and start over again, sometimes with good results, sometimes (Microsoft Vista) with catastrophic ones. Evolution of biological organisms is limited to patches: evolutionary processes cannot start all over again at the bottom. This paper works out an explicit evolutionary model. Even with large mutations occurring infinitely often, behavior can be perpetually suboptimal. (My own thought: evolution produced computer programmers, who can start over again.) Reviewed by David K. Levine.

The Optimal Multi-Stage Contest, by Qiang Fu and Jingfeng Lu (2007-08-17)
A principal wants to maximize productive effort from a group of agents and has a fixed budget to be allocated as prizes in some contest. This paper considers a general set of contests that potentially involve multiple knockout stages and analyzes the optimal multi-stage structure as well as the allocation of prizes over time. The effort-maximizing contest eliminates a single contestant in each period until two remain in the "finale" and reserves all prize money for the winner of the finale. Reviewed by Jeff Ely.

Equilibrium Degeneracy and Reputation Effects, by Eduardo Faingold and Yuliy Sannikov (2007-07-03)
This paper examines reputation in continuous-time models where a noisy signal of the long-run player's action follows a diffusion process. Without "KWMR" (Kreps-Wilson-Milgrom-Roberts) commitment types the equilibrium is completely degenerate and the long-run player is limited to the static Nash equilibrium payoff; with such types the equilibrium is non-degenerate. The key idea is that the length of the effective horizon for the "audience" of short-run player(s) is critical. Without "KWMR" types, in continuous time, reaction to the long-run player must occur continuously, and the diffusion is very noisy over short time intervals. With "KWMR" types longer-term information matters (a reputation is not won in a day), and over a longer period of time the diffusion is much less noisy. Reviewed by David K. Levine.

Parental Guidance and Supervised Learning, by Alessandro Lizzeri and Marciano Siniscalchi (2007-06-02)
The authors examine learning in a situation where a parent can guide a child's learning by intervening to eliminate mistakes. The parent faces a tradeoff between improving short-term well-being and slowing learning. While there are large literatures in psychology and game theory on learning, examining such guided learning from a formal perspective provides interesting insights regarding the ability of the child, the discount rate, and the parent's intervention. Reviewed by Matthew O. Jackson.

Contractually Stable Networks, by Jean-Francois Caulier, Ana Mauleon and Vincent Vannetelbosch (2007-06-02)
The authors provide a model where utility or productive value depends on how players are partitioned into communities as well as how they are connected in a network. The results extend notions of stability and value allocations. While this is still preliminary work, the setting will help our understanding of how individuals maintain relationships of different types at the same time, and how such layered relationships interact in determining social structure. Reviewed by Matthew O. Jackson.

Private Monitoring with Infinite Histories, by Christopher Phelan and Andrzej Skrzypacz (2007-06-02)
This paper uses a clever formulation of a repeated game with private monitoring to develop new techniques for characterizing equilibria. The authors examine time sequences that are infinite in both directions, so there is no "starting period." This helps in formulating how strategies depend on past history, as it allows for a stationarity not possible in games with a starting period, and allows the authors to examine a class of equilibria playable by finite automata. It also provides new results on how coordination on past histories maps into correlated equilibria. Reviewed by Matthew O. Jackson.

Mechanism Design with Private Communication, by Vianney Dequiedt and David Martimort (2007-06-02)
This paper considers a twist on the familiar principal-multiple-agent mechanism design environment: agents never see any of the other agents' messages to the principal, other than what the principal tells them, and the principal can lie. While one might think this is just a simple enrichment of the usual setting, it has some important implications. Incentive compatibility constraints limit the ability of the principal to make use of information from one agent that is correlated with the type of another agent. This restores a continuity of mechanism design in the information structure, and small amounts of correlation no longer have drastic effects. The mechanisms also have some intuitive features, and take a simpler form than in settings where all messages are verifiable by all agents. Reviewed by Matthew O. Jackson.

A Dynamic Theory of Public Spending, Taxation and Debt, by Marco Battaglini and Stephen Coate (2006-05-11)
This impressive paper integrates a number of important ideas from public finance, political economy, and macroeconomics. Robert Barro argued nearly thirty years ago that government should use public debt to smooth distortionary taxation over time. Battaglini and Coate depart from Barro's benevolent planner and model the voting process through which taxes, public-good provision, pork spending and government borrowing are chosen. They provide a sharp characterization of equilibrium spending, taxation and debt management. Among other results, they show that in times of crisis the government will eschew pork spending and finance public goods with bonds that are paid off slowly as the crisis recedes. Reviewed by Jon Levin.

Sequential Innovation, Patents, and Imitation, by James Bessen and Eric Maskin (2006-05-10)
Based on standard theory there should be little or no innovation without patents. Surprisingly, there is little if any empirical evidence that there is more innovation when patenting is possible than when it is not. This paper provides a theory of why this may be the case. It starts with the common observation that patents may inhibit innovation by raising the cost of downstream innovations that build on existing ideas. This is captured in an elegant model of sequential innovation that directs our attention to the features of the market and technology that make patent systems more or less desirable. Of particular importance is the interplay between two forces. On the one hand, private information about the value of a patent prevents existing patent holders from engaging in efficient licensing. On the other hand, if little profit is dissipated through competition then patents provide little additional incentive for innovation. Reviewed by David K. Levine.

A Theory of Momentum in Sequential Voting, by Nageeb Ali and Navin Kartik (2006-05-10)
What explains the intense competition for small early states like New Hampshire and Iowa in US presidential primary elections? Presumably, other things equal, winning an early contest increases the chance of winning bigger, later states. This paper explains how such momentum effects can arise in a strict equilibrium of a sequential common-value election. Voters update their beliefs about the value of candidates based on the preceding history, and momentum is similar to an informational cascade. Reviewed by Jeff Ely.

Experientia Docet: Professionals Play Minimax in Laboratory Experiments, by Ignacio Palacios-Huerta and Oscar Volij (2006-01-13)
This is a provocative experimental paper that contrasts the play of professional soccer players with college students in zero-sum games. The paper is interesting not only because of the finding that the professionals play much closer to equilibrium than the college students in a game that replicates a soccer penalty kick, but also because it shows that the professionals play closer to equilibrium than the college students when they play a zero-sum game that none of the subjects is likely to be familiar with. This not only tells us how important it is to carefully define "experience" in experimental settings, but also provides some insight into the transfer of knowledge across strategic settings. Reviewed by Matthew O. Jackson.

Bayesian Consistent Prior Selection, by Christopher Chambers and Takashi Hayashi (2005-11-02)
The paper is about rules for selecting priors from sets. Suppose you are given some information which implies that the prior belongs to some compact, convex set, and assume you have a rule that selects a prior from any such set. (The paper extends to rules that select subsets.) Consider two thought experiments. First, suppose you are given a subset F and your rule selects a prior p from it. Next, suppose there is some additional piece of data d, and you are given the subset F* obtained by updating, prior by prior, the elements of F based on d. It seems natural to ask that the prior selected from F* equal the posterior derived from p based on d. Essentially this would be assuming that your rule for selecting priors is invariant to the order in which you receive information.
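A toy calculation makes the tension concrete. The numbers are mine, with a two-prior family standing in for the extreme points of a convex set and a "pick the midpoint" rule as the candidate selection; even this natural rule fails the invariance requirement:

```python
def bayes(prior, likelihood):
    """Posterior over states given a likelihood vector for the observed datum d."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)
    return [j / total for j in joint]

def midpoint(p, q):
    """The 'selection rule': pick the midpoint of the two extreme priors."""
    return [(a + b) / 2 for a, b in zip(p, q)]

p = [0.6, 0.2, 0.2]   # one extreme prior over three states
q = [0.2, 0.2, 0.6]   # the other extreme prior
lik = [0.9, 0.5, 0.1] # likelihood of the datum d in each state

select_then_update = bayes(midpoint(p, q), lik)
update_then_select = midpoint(bayes(p, lik), bayes(q, lik))

print(select_then_update)  # [0.72, 0.2, 0.08]
print(update_then_select)  # roughly [0.674, 0.223, 0.103] -- not the same
```

Updating the midpoint is not the same as taking the midpoint of the updates, because Bayes' rule reweights each prior by the probability it assigns to the data.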
There is no rule satisfying this condition. Reviewed by Jeff Ely.

Optimal Menu of Menus with Self-Control Preferences, by Susanna Esteban and Eiichi Miyagawa (2005-09-24)
In the standard model of monopoly pricing with incomplete information, the firm offers a menu of price-quantity pairs. On the other hand, many real-world tariff schedules consist of a menu of *non-linear* prices. For example, most cell-phone service plans provide initial minutes at a low marginal price (usually zero) and further quantity at a high price. This paper shows how this is necessarily part of an optimal tariff schedule for consumers who have self-control preferences. By adding a steep price for extra minutes to plans targeted at low-value consumers, the monopoly relaxes the incentive constraint for high-value consumers. This is because high-value consumers foresee that if they select the low plan they will be tempted to use extra minutes and pay the high price. Reviewed by Jeff Ely.

Strategic Experimentation in Networks, by Yann Bramoulle and Rachel Kranton (2005-09-09)
Bramoulle and Kranton study the play of local public goods games when players are linked by a network. Players derive payoffs from their own and immediate neighbors' actions, and the authors discuss applications to experimentation where players learn and benefit from the actions of immediate neighbors, but not indirect neighbors. Nevertheless, indirect neighbors' play affects direct neighbors' choices, as actions are strategic substitutes. While tractability is a challenge, the authors are able to deduce some interesting patterns of behavior and specialization as a function of network architecture. Reviewed by Matthew O. Jackson.

On the Existence of Monotone Pure Strategy Equilibria in Bayesian Games, by Philip J. Reny (2005-09-09)
Phil Reny provides new techniques for proving existence of monotone pure strategy equilibria in Bayesian games with multidimensional types and actions. The paper clarifies the role of single-crossing, using a weaker version than previously employed. It leaves open questions: a continuity assumption, while allowing for a wide variety of games with finite strategy sets, does not admit discontinuous games with continuum action spaces, as in many auction models. Nevertheless, the set of games covered is of substantial interest and more general than in previous results, the arguments deepen our understanding of what is needed for existence, and the use of contractibility is clever and looks likely to be useful beyond the current work. Reviewed by Matthew O. Jackson.

Noise, Information and the Favorite-Longshot Bias, by Marco Ottaviani and Peter Sorensen (2005-09-02)
In pari-mutuel (i.e. horse-race) betting, the payout odds on a horse are determined by the fraction of the total betting pool wagered on that horse. There is a well-documented regularity, the favorite-longshot bias: the odds for longshots overstate, and the odds for favorites understate, the true probability of winning. This paper provides a simple and elegant explanation based on a model of privately informed bettors.
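For readers who want the pari-mutuel mechanics: with no track take, a horse's payout odds are pinned down by the betting fractions, so the "market probability" of each horse is just its share of the pool. The pool sizes below are invented for illustration:

```python
pools = {"Favorite": 600.0, "Midfield": 300.0, "Longshot": 100.0}
total = sum(pools.values())

for horse, pool in pools.items():
    implied_prob = pool / total   # the market's implied win probability
    decimal_odds = total / pool   # gross payout per unit bet if the horse wins
    print(horse, implied_prob, decimal_odds)
# Favorite: prob 0.6, odds 1.67x; Longshot: prob 0.1, odds 10x
```

Since each bettor's payout is fixed only once all bets are in, the implied probabilities are determined by everyone's private information, which no individual bettor could condition on when wagering.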
Because the odds are determined only after betting closes, bettors do not know the payout odds when they bet. After the betting closes, the revelation of odds aggregates information, but by then bets cannot be changed. In particular, those who bet on the horse which turned out to be the longshot realize ex post that the probability of winning is lower than they thought; hence the bias. Reviewed by Jeff Ely.

Revenue Comparisons for Auctions when Bidders Have Arbitrary Types, by Yeon-Koo Che and Ian Gale (2005-04-19)
This paper introduces a clever approach to comparing the expected revenue from different auction designs, assuming bidders in each auction use symmetric equilibrium strategies. Roughly, the idea is to replace each bidder with a twin who is risk-neutral and has a single-dimensional value distribution but whose equilibrium bidding behavior would be the same. The expected revenue from the original auction must equal the second-order statistic of the twins' value distribution. This approach is used to extend the revenue equivalence theorem to discrete value distributions and to greatly generalize the revenue ranking of first- and second-price auctions with risk-averse bidders. Reviewed by Jon Levin.

Axiomatic Justification of Stable Equilibria, by Srihari Govindan and Robert Wilson (2005-04-17)
Refinements usually judge equilibria by the plausibility of the beliefs that support them; this is an extensive-form criterion. On the other hand, a traditional viewpoint is that rationality-based theories of behavior should depend only on the strategic form. This paper embraces both views. Invariance, the requirement that solutions depend only on the reduced strategic form, together with a version of backward induction, implies Kohlberg-Mertens stability. Reviewed by Jeff Ely.

Discounting and Altruism to Future Decision-Makers, by Maria Saez-Marti and Jorgen W. Weibull (2005-04-17)
Suppose that parents have an altruistic utility function that is a weighted sum of the "selfish" utilities of their own consumption and that of each of their descendants, and that each descendant in turn has such an altruistic utility function. When can a parent's altruistic utility be written as a positively weighted linear combination of her own selfish utility and the altruistic utilities of her descendants?
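One case where the answer is yes is geometric weights; this is my illustration, not the paper's general condition. If a parent weights generation k's selfish utility by delta**k, her altruistic utility is her own selfish utility plus delta times her child's altruistic utility:

```python
def altruistic(selfish, delta):
    """Weighted sum of own and all descendants' selfish utilities,
    with geometric weight delta**k on generation k."""
    return sum((delta ** k) * u for k, u in enumerate(selfish))

delta = 0.9
selfish = [5.0, 3.0, 4.0, 1.0, 2.0]   # selfish utilities, parent first

parent = altruistic(selfish, delta)
child = altruistic(selfish[1:], delta)

# Recursive decomposition: U_parent = u_parent + delta * U_child
assert abs(parent - (selfish[0] + delta * child)) < 1e-12
```

For weight sequences that are not geometric (or sums of geometrics), no such positive recursive decomposition need exist, which is what makes the characterization question interesting.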
The authors show that there are interesting cases where this cannot be done and where it can, and they provide remarkably crisp necessary and sufficient conditions for when it can be done. Reviewed by Ted Bergstrom.

Contracts, Liability Restrictions and Costly Verification, by Francesco Squintani (2005-04-06)
This paper is an original attempt to open the black box known as "the court" (specifically, the notion of "verifiability") in contract theory. Before playing a normal-form game, two players sign a contract that conditions on the outcome of the game. The verifiability constraint is modeled as a partition of the set of outcomes. So far, this description follows Bernheim and Whinston (AER 1998). Squintani takes a step forward and allows for non-product partitions: even when the court can verify a breach of contract, it may be unable to verify who did it. In such cases (assuming individual liability), the players may want to write a "roundabout" contract containing an explicit commitment which the players expect to be violated in equilibrium. Although such a contract is unenforceable, it may dominate all enforceable contracts. Squintani also considers non-partitional verifiability structures and examines conditions for their desirability, in terms of familiar properties of non-partitional information structures. Reviewed by Ran Spiegler.

Simultaneous Search, by Hector Chade and Lones Smith (2005-03-29)
The authors study the following "college application" problem: a student who assigns different probabilities to getting in to different colleges, and who faces a per-college cost of application, must identify the optimal set of schools to which to submit applications.
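A back-of-the-envelope version of the problem can be coded as a greedy search. The numbers and the stopping rule below are my sketch, under the assumptions of independent admissions and a constant application fee; Chade and Smith's marginal-improvement procedure is of this flavor but comes with conditions under which it is actually optimal:

```python
def portfolio_value(schools):
    """Expected value of attending the best school that admits the student.
    schools: list of (value, admission_probability) pairs, independent admissions."""
    ev, p_rejected_by_all_better = 0.0, 1.0
    for value, prob in sorted(schools, reverse=True):
        ev += value * prob * p_rejected_by_all_better
        p_rejected_by_all_better *= (1 - prob)
    return ev

def greedy_portfolio(schools, cost):
    """Add, one at a time, the school with the largest marginal gain in
    expected value; stop when the best gain no longer covers the fee."""
    chosen, remaining = [], list(schools)
    while remaining:
        gains = [(portfolio_value(chosen + [s]) - portfolio_value(chosen), s)
                 for s in remaining]
        best_gain, best = max(gains)
        if best_gain <= cost:
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

schools = [(10.0, 0.5), (6.0, 0.5), (4.0, 0.9)]   # (value, admission prob)
print(greedy_portfolio(schools, cost=1.0))  # -> [(10.0, 0.5), (4.0, 0.9)]
```

Note the portfolio flavor: the mid-value school (6.0, 0.5) is the second-best single application, yet the greedy portfolio pairs the reach school with the safety school instead, because the safety adds more value conditional on the reach rejecting.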
The paper develops a marginal improvement algorithm to solve for the optimal application portfolio and provides elegant characterization results. The optimal portfolio turns out to be less risky than if applications are made sequentially but more risky than if the student picked the most individually promising schools in order ignoring the portfolio aspect of the problem.Jon LevinSimultaneous Search by Hector Chade and Lones SmithThe Folk Theorem for Games with Private, Almost-Perfect Monitoring by Johannes Horner and Wojciech Olszewski2005-03-25T09:52:31-08:002005-03-25T09:52:31-08:00tag:http://www.najecon.org,2011:1727820000000000092005-03-25T09:52:31-08:00This completes a step toward the Folk Thereom for repeated games with private monitoring. It establishes the result for all n-player games where monitoring is "almost-perfect" (and the usual dimensionality conditions are satisfied.) Previous authors had accomplished as much as possible using limited methods to avoid getting their hands dirty. The quantity of dirt on these authors' hands is impressive.Jeff ElyThe Folk Theorem for Games with Private, Almost-Perfect Monitoring by Johannes Horner and Wojciech OlszewskiArt and the Internet: Blessing the Curse? by Patrick Legros2005-03-21T17:41:13-08:002005-03-21T17:41:13-08:00tag:http://www.najecon.org,2011:1727820000000000032005-03-21T17:41:13-08:00Brilliant survey of the field, containing also original work. The boldrin-levine model is extended to model the markets for art. An artist can embody each original idea in m different works of art y(1), ..., y(m). The creativity of an artist is indexed by n, the number of original ideas. Each artist has a fixed capacity k of making works of art, hence he will make s=k/n embodiments for each of his own n ideas. This defines his portfolio. Consumers and artists have access to reproduction technologies. The welfare theorems apply, and are used to derive supporting prices and decentralization mechanisms. 
He argues that incentive provisions for new creative ideas have little to do, at least in principle, with most of the crying about piracy and in support of copyrights.Michele BoldrinArt and the Internet: Blessing the Curse? by Patrick LegrosInformation Transmission with Cheap and Almost-Cheap Talk by Navin Kartik2005-03-17T11:35:12-08:002005-03-17T11:35:12-08:00tag:http://www.najecon.org,2011:6661560000000006522005-03-17T11:35:12-08:00Consider a signaling model with many equilibria, some of which are more informative than others. Suppose that truth is free, but lies are costly. Then significant information can be transmitted by talk. So some of the less informative equilibria disappear. Which equilibria remain in the limiting case as the cost of lying approaches zero? The paper shows that under "a standard condition", only the most informative equilibrium of the original model survives.
(This paper was presented at the Soutwest Economic Theory
Conference in March, 2005.)Ted BergstromInformation Transmission with Cheap and Almost-Cheap Talk by Navin KartikRevealing Preferences for Fairness in Ultimatum Bargaining by James Andreoni, Marco Castillo and Ragan Petrie2005-03-16T20:46:21-08:002005-03-16T20:46:21-08:00tag:http://www.najecon.org,2011:6661560000000006482005-03-16T20:46:21-08:00[Fairness and Reciprocity Special Issue]Part of the controversy over the Fehr/Schmidt (and other) calibrations revolves around the fact that existing ultimatum experiments do not generate enough information to pin down preferences. The correct response to this criticism is to get better data. This study does exactly that by allowing the responder to shrink offers as well as to accept and reject them. The underlying preferences appear to satisfy ordinary convexity and regularity assumptions, but are non-monotonic and fairly heterogeneous across individuals. Reviewed by Jeff Ely, Drew Fudenberg, and David K. LevineJeff ElyRevealing Preferences for Fairness in Ultimatum Bargaining by James Andreoni, Marco Castillo and Ragan PetrieThe Canonical Type Space for Interdependent Preferences by Faruk Gul and Wolfgang Pesendorfer2005-03-16T20:39:45-08:002005-03-16T20:39:45-08:00tag:http://www.najecon.org,2011:6661560000000006382005-03-16T20:39:45-08:00[Fairness and Reciprocity Special Issue] An alternative to theories of "fairness" that seem to have a degree of arbitrariness as to what is "fair" are theories of reciprocity in which people want to be kind or cruel based upon their perception of whether their opponent(s) are kind or cruel. Gul-Pesendorfer provide an axiomatic basis for interpersonal utility that leads to a theory of reciprocity. As an application they show how their model is consistent with data on the ultimatum game and related experiments. Reviewed by Jeff Ely, Drew Fudenberg, and David K. 
Levine.Jeff ElyThe Canonical Type Space for Interdependent Preferences by Faruk Gul and Wolfgang PesendorferContracts, Fairness and Incentives by Ernst Fehr, Alexander Klein and Klaus M. Schmidt2005-03-16T20:16:59-08:002005-03-16T20:16:59-08:00tag:http://www.najecon.org,2011:6661560000000006302005-03-16T20:16:59-08:00[Fairness and Reciprocity Special Issue] This is a very recent application of the Fehr-Schmidt methodology. Experimentally and theoretically it is shown that it is better not to rely soley on either trust or incentives when desiging contracts; bonus contracts that combine elements of incentives and trust do the best. Reviewed by Jeff Ely, Drew Fudenberg and David K. Levine.Jeff ElyContracts, Fairness and Incentives by Ernst Fehr, Alexander Klein and Klaus M. SchmidtBrief Reply by Avner Shaked2005-03-16T20:07:09-08:002005-03-16T20:07:09-08:00tag:http://www.najecon.org,2011:6661560000000006232005-03-16T20:07:09-08:00[Fairness and Reciprocity Special Issue Begins in Previous Volume] Shaked's brief reply takes a less pejorative tone and boils the debate down to one serious concern. When a researcher selects parameter values for a theoretical model consistent with data from already existing experiments, to what extent has it been shown that the model "explains" the data? Reviewed by Jeff Ely and David K. LevineJeff ElyBrief Reply by Avner ShakedThe Rhetoric of Inequity Aversion- A Reply by Ernst Fehr and Klaus Schmidt2005-03-16T19:59:43-08:002005-03-16T19:59:43-08:00tag:http://www.najecon.org,2011:6661560000000006192005-03-16T19:59:43-08:00[Fairness and Reciprocity Special Issue] This is the reply to the Shaked "pamphlet" by Fehr and Schmidt. It provides substantive answers to the substantive points raised by Shaked. It points out that the questions about the analytic results arise from a typo not a substantive error, and provides additional insight into why the particular parameter values were chosen for the calibration. Reviewed by Jeff Ely and David K. 
Levine.[Fairness and Reciprocity Special Issue continued in next volume]Jeff ElyThe Rhetoric of Inequity Aversion- A Reply by Ernst Fehr and Klaus SchmidtThe Rhetoric of Inequity Aversion by Avner Shaked2005-03-15T21:26:52-08:002005-03-15T21:26:52-08:00tag:http://www.najecon.org,2011:6661560000000006142005-03-15T21:26:52-08:00Avner Shaked presents a sharply critical discussion of claims that Ernst Fehr and Klaus Schmidt have made for their
theory of "inequity aversion" and of the methods that they have used to promote this theory. A vigorous response by Fehr and Schmidt and a brief rejoinder by Shaked can also be found at the above link. While Shaked's criticism is directed at Fehr and Schmidt, it raises important issues about the handling of evidence in many branches of economics.Ted BergstromThe Rhetoric of Inequity Aversion by Avner ShakedWishful Thinking in Strategic Environments by Muhamet Yildiz2005-03-11T22:58:29-08:002005-03-11T22:58:29-08:00tag:http://www.najecon.org,2011:6661560000000006002005-03-11T22:58:29-08:00Recently there has been a proliferation of economic models with over-optimistic agents. However, these models have a catch: players have biased beliefs regarding the moves of Nature, yet standard equilibrium analysis implies that they are not allowed to hold biased beliefs regarding other players' moves. Here is an interesting attempt to address this problem. The paper analyzes complete-information games with players who are "wishful thinkers": they choose not only how to act but also what to believe regarding the opponent's action. Yildiz constructs an epistemic model, in which "rationality in a state" is replaced with "wishful thinking in a state", and "common knowledge of rationality" is replaced with "common knowledge of wishful thinking". Yildiz shows that only strategies that are played in Nash equilibrium are consistent with common knowledge of wishful thinking. 
The only kind of biased belief that the model essentially leaves room for is optimism about which Nash equilibrium is going to be played.Ran SpieglerWishful Thinking in Strategic Environments by Muhamet YildizRobust Mechanism Design by Dirk Bergemann and Stephen Morris2005-03-10T15:20:38-08:002005-03-10T15:20:38-08:00tag:http://www.najecon.org,2011:6661560000000005962005-03-10T15:20:38-08:00The authors investigate mechanism design in a robust sense: requiring that the mechanism result in the desired equilibrium outcomes even when a large type space is considered, so that agents' beliefs, beliefs about beliefs, etc., are incorporated into types and can vary. Anything that is ex post implementable is robustly implementable, and the authors identify settings where the converse holds so that these two concepts are equivalent. The authors also have an interesting companion paper (http://www.econ.yale.edu/%7Esm326/rmd-full.pdf) that looks at the full implementation question (accounting for all equilibria of a mechanism) in the face of such robustness requirements.Matthew O. JacksonRobust Mechanism Design by Dirk Bergemann and Stephen MorrisWho's Who in Networks. Wanted: the Key Player by Coralio Ballester, Antoni Calvo-Armengol and Yves Zenou2005-03-10T14:14:35-08:002005-03-10T14:14:35-08:00tag:http://www.najecon.org,2011:6661560000000005902005-03-10T14:14:35-08:00The authors provide results linking equilibrium behavior in a game among networked players to social networks-based measures of centrality, providing an interesting bridge between the economics and sociology literatures. Each player in a network picks a level of some activity in a game where there are negative global externalities (competition) and local positive externalities (learning, cooperation, etc.) that come through the network.
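The equilibrium computation behind this kind of result can be sketched numerically. The linear-quadratic payoff below is the standard specification in this literature, assumed here rather than quoted from the review, and it drops the global-substitutes term to keep the sketch minimal; the network and parameter values are hypothetical:

```python
import numpy as np

# Hypothetical 4-player network (symmetric adjacency matrix, made up for illustration).
G = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

alpha, lam = 1.0, 0.2   # private return and strength of local complementarities
n = G.shape[0]

# Assumed payoff: u_i = alpha*a_i - a_i**2/2 + lam * a_i * (G @ a)_i, so each
# best response is a_i = alpha + lam * (G @ a)_i, and the Nash profile solves
# (I - lam*G) a = alpha * 1 -- proportional to Bonacich centrality with decay lam.
a = np.linalg.solve(np.eye(n) - lam * G, alpha * np.ones(n))

# Check the fixed point: every activity level is a best response to the others.
assert np.allclose(a, alpha + lam * G @ a)

print(a)  # the best-connected player (index 2) is the most active
```

The "key player" question then amounts to asking whose removal reduces the aggregate of this centrality vector the most.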
This system has feedback effects, and the authors show how equilibrium activity levels can be expressed in terms of a centrality measure from the social networks literature (Bonacich centrality). Besides deriving some comparative statics, the authors show how the centrality index can be used to identify "key" players in terms of their decisions having maximum influence on overall activity.Matthew O. JacksonWho's Who in Networks. Wanted: the Key Player by Coralio Ballester, Antoni Calvo-Armengol and Yves ZenouUntitled by Alvaro Sandroni2005-02-01T20:50:59-08:002005-02-01T20:50:59-08:00tag:http://www.najecon.org,2011:6661560000000004612005-02-01T20:50:59-08:00This is the latest word in a fascinating literature on testing expert forecasters. A forecaster is making probabilistic predictions about the realizations of a stochastic process. A principal wishes to test these predictions against the observed outcomes to determine whether the forecaster is a true expert or not. Previous literature had considered "calibration tests" and it is known that even a completely ignorant forecaster can pass any such test. Here there are literally no restrictions on the type of test and it is shown that an ignorant forecaster can use a mixed forecasting strategy to pass *any* test that a true expert can pass. The mixed strategy depends on the test, so an open question is whether the principal can improve by randomizing the test and keeping it secret.
Erratum added December 3, 2005: The assertion in (4.1) on p. 7 is not correct, and the main Proposition, Proposition 1, must be viewed as unproven.Jeff ElyUntitled by Alvaro SandroniThe Concept of Income in a General Equilibrium by J Sefton and M Weale2005-01-15T11:19:57-08:002005-01-15T11:19:57-08:00tag:http://www.najecon.org,2011:1222470000000008472005-01-15T11:19:57-08:00Theorists are rightfully skeptical of national income accounting, recognizing that the arbitrary methods used have no theoretical basis. This paper shows that - if it is done correctly - national income accounting can have a theoretical basis, and income can be measured so that it correlates directly with welfare.David K. LevineThe Concept of Income in a General Equilibrium by J Sefton and M WealeOptimal Voting Schemes with Costly Information Acquisition by Alex Gershkov and Balazs Szentes2004-07-20T14:58:49-08:002004-07-20T14:58:49-08:00tag:http://www.najecon.org,2011:1222470000000003142004-07-20T14:58:49-08:00This is a nice representative of a growing literature on mechanism design when information acquisition is costly. It examines the case of a common objective in a setting where commitment to ex post inefficiency is not practical, and characterizes the optimal mechanism. The optimal mechanism is not a committee, but rather a procedure that anonymously and sequentially consults people until a threshold of precision is reached. As a practical matter it can be thought of as a process of getting a second (and third) opinion based on the information in the first (and second) opinion. For incentive reasons, it is best not to let the different "doctors" know that you have consulted with the others.David K.
LevineOptimal Voting Schemes with Costly Information Acquisition by Alex Gershkov and Balazs SzentesFairness and Redistribution by Alberto Alesina and George-Marios Angeletos2004-07-20T14:33:36-08:002004-07-20T14:33:36-08:00tag:http://www.najecon.org,2011:1222470000000003092004-07-20T14:33:36-08:00If wealth is due to luck, optimal insurance implies that a confiscatory tax is efficient; if wealth is due to effort, transfers should be low to encourage effort. But even if wealth is due to effort, confiscatory taxes mean that effort does not generate wealth, only luck does, so beliefs that only luck matters will be self-confirming. Alesina and Angeletos use the resulting multiplicity of self-confirming equilibria to reconcile cross-country correlation of perceptions about wealth formation and tax policy.David K. LevineFairness and Redistribution by Alberto Alesina and George-Marios AngeletosPrice Dispersion, Inflation and Welfare by Allen Head and Alok Kumar2004-06-04T13:47:39-08:002004-06-04T13:47:39-08:00tag:http://www.najecon.org,2011:1222470000000002442004-06-04T13:47:39-08:00This paper introduces price dispersion into a monetary model. This has two striking consequences: first, inflation affects the variance of prices as well as the level of prices. Second, because price dispersion has an impact on the monopoly power of firms, inflation has unexpected welfare consequences. A mild inflation can be beneficial because it induces more search by consumers and reduces the monopoly power of firms.David K. LevinePrice Dispersion, Inflation and Welfare by Allen Head and Alok KumarLimited Computational Resources Favor Rationality by Yuval Salant2003-07-20T05:21:11-08:002003-07-20T05:21:11-08:00tag:http://www.najecon.org,2011:6661560000000000842003-07-20T05:21:11-08:00The paper presents an approach to computational aspects of choice functions. Computational complexity is measured
by the number of memory cells needed to carry out a computation. The main result states that the choice functions which require the least amount of memory are rationalizable, while most choice functions require "much more" memory.
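For reference, a choice function on a finite domain is rationalizable when a single preference ordering generates it by maximization. A brute-force check of that property (a sketch of the definition only, unrelated to the paper's memory-cell measure; the domain and examples are made up) can be written directly:

```python
from itertools import combinations, permutations

X = ['a', 'b', 'c']

def rationalizable(choice):
    """True iff some strict ordering of X generates `choice` by maximization."""
    menus = [m for r in range(1, len(X) + 1) for m in combinations(X, r)]
    for order in permutations(X):           # order[0] is the best alternative
        rank = {x: i for i, x in enumerate(order)}
        if all(choice[m] == min(m, key=rank.get) for m in menus):
            return True
    return False

# A choice function that always picks the alphabetically first item: rationalizable.
consistent = {m: min(m) for r in range(1, 4) for m in combinations(X, r)}

# Tamper with one menu ('b' from {a,b}, but 'a' from {a,b,c}): no ordering fits.
cyclic = dict(consistent)
cyclic[('a', 'b')] = 'b'

print(rationalizable(consistent))  # True
print(rationalizable(cyclic))      # False
```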
This is a very nice paper written by a very promising young researcher.Ariel RubinsteinLimited Computational Resources Favor Rationality by Yuval SalantCarrot Or Stick: Group Selection and the Evolution of Reciprocal Preferences by Florian Herold2003-06-28T19:08:40-08:002003-06-28T19:08:40-08:00tag:http://www.najecon.org,2011:6661560000000000752003-06-28T19:08:40-08:00This paper has the most interesting answer that I have seen to the question: How could natural selection produce creatures who get angry and bear costs to punish bad behavior even if no repeated encounter is likely? The paper also proposes an explanation of why some people will bear costs to reward good behavior, even without hope of reciprocity.
The paper uses a "haystack model" in which individuals are randomly assembled into groups where they interact and reproduce. The number of offspring that a player has is her payoff in an n-player prisoners' dilemma game in her group after account is taken of punishments and rewards. If there are enough punishers or enough rewarders in a group, it pays everybody in the group to cooperate. Otherwise they all defect. The paper shows that with this setup, there exists an evolutionarily stable equilibrium in which all players are programmed to engage in costly punishment and where everyone therefore cooperates. It also shows that there is a polymorphic equilibrium in which some individuals reward cooperation and that there is no equilibrium in which nobody rewards cooperation. Here is a glimpse of how a population of costly punishers can be stable. If almost everybody in the population at large is a punisher, then in almost all groups, there is a preponderance of punishers and so everybody chooses to cooperate. Hence punishers never have to bear the costs of punishing. The only way that a non-punisher could have a different payoff from a punisher would be if the random matching process puts her in a group of enough non-punishers so that everybody in the group plays defect. The remarkable thing that Herold notices is that when non-punishers are rare in the population at large, the expected payoff to non-punishers will actually be lower than the expected payoff to punishers.Ted BergstromCarrot Or Stick: Group Selection and the Evolution of Reciprocal Preferences by Florian HeroldBuilding Rational Cooperation by Jim Andreoni and Larry Samuelson2003-06-18T17:43:06-08:002003-06-18T17:43:06-08:00tag:http://www.najecon.org,2011:6661560000000000712003-06-18T17:43:06-08:00This paper is a showcase for the way that economic theory can inform laboratory testing and vice versa.
Previous experimental results suggest that some (but not all) subjects prefer to cooperate in a single-shot prisoners' dilemma if and only if they believe their opponents will cooperate. The paper presents a neat theory of how a heterogeneous population including some conditional cooperators would behave in a twice-repeated prisoners' dilemma. The theory is tested experimentally and seems to fare well.Ted BergstromBuilding Rational Cooperation by Jim Andreoni and Larry SamuelsonAggregative Public Goods Games by Richard Cornes and Roger Hartley2003-06-17T17:28:34-08:002003-06-17T17:28:34-08:00tag:http://www.najecon.org,2011:6661560000000000642003-06-17T17:28:34-08:00This paper introduces a clever trick for dealing with games in which each player's utility depends on his own consumption and on the sum of all players' contributions to a "public good". The trick greatly simplifies proofs of known results and seems to be a powerful tool for finding new ones.Ted BergstromAggregative Public Goods Games by Richard Cornes and Roger HartleyThe Linking of Collective Decisions and Efficiency by Matthew O. Jackson and Hugo F. Sonnenschein2003-06-14T08:03:02-08:002003-06-14T08:03:02-08:00tag:http://www.najecon.org,2011:6661560000000000602003-06-14T08:03:02-08:00In Bayesian mechanism design problems, side payments are the usual way to relax incentive constraints. Without side payments, incentive constraints can be relaxed by linking mechanisms across several such problems. For example, in voting over a single issue individuals cannot express strength of preference, but with multiple issues this can be done by logrolling. The main result in this important paper is that, with many decision problems, if individuals have independent private values across these decisions, incentive constraints can be avoided entirely.
This paper proposes a mechanism for achieving efficiency in the limit by requiring each agent’s reported profile of preferences across all decisions to match the prior distribution. Approximate truthful revelation is incentive compatible in a strong sense. There are many applications of this result.Thomas R. PalfreyThe Linking of Collective Decisions and Efficiency by Matthew O. Jackson and Hugo F. SonnenscheinAddiction and Cue-Conditioned Cognitive Processes by B. D. Bernheim and Antonio Rangel2003-06-14T06:51:16-08:002003-06-14T06:51:16-08:00tag:http://www.najecon.org,2011:6661560000000000552003-06-14T06:51:16-08:00Bernheim and Rangel use facts on drug addiction to motivate an ingenious dynamic model of decision making featuring interactions of two separate cognitive systems, emotion and (rational) cognition. Agents always consume the drug when in a "hot mode", and may consume in a "cold mode". Level of addiction is a state variable, which goes up after consumption and goes down after abstention. They characterize the value function and show that for addictive goods it is declining in the state, so rational agents should never intentionally consume. The paper is a showcase piece for behavioral economics. The model is well-grounded in facts about neuroscience and addiction, and leads to interesting and testable empirical predictions with significant welfare implications.Thomas R. PalfreyAddiction and Cue-Conditioned Cognitive Processes by B. D. Bernheim and Antonio RangelBeauty Contests, Bubbles and Iterated Expectations in Asset Markets by Franklin Allen, Stephen Morris and Hyun S. Shin2003-04-17T11:56:40-08:002003-04-17T11:56:40-08:00tag:http://www.najecon.org,2011:3917490000000005572003-04-17T11:56:40-08:00This paper points out that the law of iterated expectations doesn't apply when averaged over a group of agents. 
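The failure of the law of iterated expectations for *average* expectations is easy to exhibit with a toy example (the states, values, and partitions below are my own illustration, not the paper's):

```python
from fractions import Fraction

states = [1, 2, 3, 4]                       # four equally likely states
v = {w: Fraction(w) for w in states}        # asset value in each state

# Two traders with different information partitions (hypothetical example).
partitions = [
    [{1, 2}, {3, 4}],   # trader A
    [{1, 3}, {2, 4}],   # trader B
]

def expectation(f, partition):
    """Conditional expectation of f given a partition, under the uniform prior."""
    out = {}
    for cell in partition:
        mean = sum(f[w] for w in cell) / len(cell)
        for w in cell:
            out[w] = mean
    return out

def average_expectation(f):
    """Average over the two traders of their conditional expectations of f."""
    per_trader = [expectation(f, p) for p in partitions]
    return {w: sum(e[w] for e in per_trader) / len(per_trader) for w in states}

once = average_expectation(v)
twice = average_expectation(once)

# For any single trader E[E[v]] = E[v], but the average operator fails this:
print(once[1], twice[1])   # 7/4 versus 17/8 in state 1
```

Because iterating the average-expectation operator changes its value, a price equal to the average expectation of a future price cannot in general be written as the average expectation of the final payoff, which is the wedge the paper exploits.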
Consequently, in a financial market with short-lived traders, the date 1 price need not equal the date 1 average expectation of the date 3 price: in contrast to representative-agent models, there need not be a martingale representation of the price process.Drew FudenbergBeauty Contests, Bubbles and Iterated Expectations in Asset Markets by Franklin Allen, Stephen Morris and Hyun S. ShinBounded Memory and Biases in Information Processing by Andrea Wilson2003-04-17T11:23:43-08:002003-04-17T11:23:43-08:00tag:http://www.najecon.org,2011:2349360000000000722003-04-17T11:23:43-08:00This paper shows that some forms of biases in information processing are consistent with the optimal use of a finite memory. An infinitely-lived decision maker receives a sequence of signals, after which she must make a decision. The agent has a fixed number of memory states available, and chooses the updating rule and the map from memory to actions to maximize her expected payoff. The paper obtains a strikingly sharp characterization of the optimal rule when the agent is likely to observe a great many signals before needing to act.Drew FudenbergBounded Memory and Biases in Information Processing by Andrea WilsonBuyer Coalition Against Monopolistic Screening: On the Role of Asymmetric Information among Buyers by Doh-Shin Jeon and Domenico Menicucci2002-08-26T11:40:09-08:002002-08-26T11:40:09-08:00tag:http://www.najecon.org,2011:5064390000000000282002-08-26T11:40:09-08:00A monopolist can achieve a degree of price discrimination by allowing consumers to self-select among a menu of alternatives. But what if the consumers collude? This paper establishes the surprising result that the monopolist can do as well in the face of collusion as in its absence. It does so by exploiting the fact that consumers also face asymmetric information.David K.
LevineBuyer Coalition Against Monopolistic Screening: On the Role of Asymmetric Information among Buyers by Doh-Shin Jeon and Domenico MenicucciPersistence in Law-of-One-Price Deviations: Evidence From Micro-Price Data by Mario J. Crucini and Mototsugu Shintani2002-08-04T12:53:59-08:002002-08-04T12:53:59-08:00tag:http://www.najecon.org,2011:5064390000000000222002-08-04T12:53:59-08:00A long-standing puzzle in the theory of exchange rates is that short-term exchange rate variations appear to have a half life of 3-5 years - difficult to explain as a consequence of nominal rigidities. Current thinking is that violations of the law of one price between countries are due to real factors, such as differences in the prices of non-traded inputs (such as land). Crucini and Shintani use prices of 270 goods across 90 countries and 13 US cities to study the long- and short-term adjustment of prices. They find strong evidence that there are long-term price differences between cities, but not within the US. On the other hand, adjustment back to these long-term prices following short-term exchange rate shocks is rapid, with a half life of a year or less, both between countries and within the US.David K. LevinePersistence in Law-of-One-Price Deviations: Evidence From Micro-Price Data by Mario J. Crucini and Mototsugu ShintaniInductive Inference: An Axiomatic Approach by Itzhak Gilboa and David Schmeidler2002-05-10T14:21:58-08:002002-05-10T14:21:58-08:00tag:http://www.najecon.org,2011:3917490000000005472002-05-10T14:21:58-08:00An agent must rank the likelihood of eventualities based on a memory of past cases. For each memory, the agent is assumed to have a complete ranking. The paper provides axioms that yield the following representation: a weight is assigned to each case-eventuality pair and eventualities are ranked according to the sum of their weights (summed over all cases in memory).
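The representation itself is short enough to sketch directly; the cases, eventualities, and weights below are made-up numbers for illustration. Because scores add across disjoint memories, an eventuality that outranks another under each of two disjoint memories must also outrank it under their union:

```python
# Hypothetical weights: w[(case, eventuality)] measures how strongly case c
# supports eventuality e (illustrative numbers, not from the paper).
w = {
    ('c1', 'x'): 3, ('c1', 'y'): 1,
    ('c2', 'x'): 2, ('c2', 'y'): 1,
    ('c3', 'x'): 2, ('c3', 'y'): 1,
}

def score(eventuality, memory):
    """Likelihood score: weights summed over all cases in the memory."""
    return sum(w[(case, eventuality)] for case in memory)

m1, m2 = ['c1'], ['c2', 'c3']          # two disjoint memories

# Additivity across disjoint memories is immediate from the sum...
assert score('x', m1 + m2) == score('x', m1) + score('x', m2)

# ...so a ranking that holds under both m1 and m2 survives their combination.
assert score('x', m1) > score('y', m1) and score('x', m2) > score('y', m2)
assert score('x', m1 + m2) > score('y', m1 + m2)
print(score('x', m1 + m2), score('y', m1 + m2))   # 7 3
```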
The key axiom asserts that if, for two disjoint memories, x is deemed more likely than y, then the same ranking holds for the combined memory.Wolfgang PesendorferInductive Inference: An Axiomatic Approach by Itzhak Gilboa and David SchmeidlerTwo-Class Voting: A Mechanism for Conflict Resolution? by Ernst Maug and Bilge Yilmaz2002-05-03T09:33:09-08:002002-05-03T09:33:09-08:00tag:http://www.najecon.org,2011:3917490000000005392002-05-03T09:33:09-08:00A group of agents must vote on a proposed policy. Agents have private information about the merits of the policy. The paper compares a simple voting rule with a "two-class" voting system. A simple voting rule requires k votes for the policy to be implemented. The two-class system partitions the agents into two groups and specifies a simple voting rule for each group. The policy is implemented if it is approved by both groups. The paper shows that two-class voting aggregates more information if agents have sufficiently diverse preferences.Wolfgang PesendorferTwo-Class Voting: A Mechanism for Conflict Resolution? by Ernst Maug and Bilge YilmazCoalitional Rationalizability by Attila Ambrus2002-03-28T11:26:58-08:002002-03-28T11:26:58-08:00tag:http://www.najecon.org,2011:3917490000000005212002-03-28T11:26:58-08:00Suppose that whenever it is of mutual interest for a group of players to avoid certain strategies, the members of the group will make an implicit agreement not to play them. This leads to an iterative procedure of restricting players' beliefs and action choices; the strategies that remain are called coalitionally rationalizable.
In contrast to coalitional solution concepts based on the notion of Nash equilibrium, the set of coalitionally rationalizable strategies is always nonempty.Drew FudenbergCoalitional Rationalizability by Attila AmbrusBad Reputation by Jeffrey Ely and Jusso Valimaki2002-03-26T10:13:31-08:002002-03-26T10:13:31-08:00tag:http://www.najecon.org,2011:3917490000000005172002-03-26T10:13:31-08:00This paper constructs a striking example of a game played by a long-run player against a sequence of short-run opponents. When the long-run player is known to be rational, then regardless of the player's discount factor there is an equilibrium that achieves the highest feasible payoff, while introducing a particular "bad" commitment type lowers the equilibrium payoff of a patient long-run player. Moreover, holding fixed the probability of the bad type, the equilibrium payoff of a patient long-run player is lower than its payoff in a one-time interaction.Drew FudenbergBad Reputation by Jeffrey Ely and Jusso ValimakiSequentially Optimal Mechanisms by Vasiliki Skreta2002-03-05T16:03:28-08:002002-03-05T16:03:28-08:00tag:http://www.najecon.org,2011:3917490000000004902002-03-05T16:03:28-08:00You do not renounce selling a good just because the first round of bargaining failed. The literature on auctions under incomplete information assumes you would. If the first round fails, you commit to renounce selling the good forever. Skreta's paper looks at sequential mechanism design without this kind of commitment. When designing today's mechanism for selling the good, you cannot commit to tomorrow's mechanism. Hence, the revelation principle cannot be applied. A characterization of the optimal dynamic incentive scheme for two-period problems without commitment is provided.
After characterizing the seller's problem for arbitrary agent types, the author shows that, in sequential bilateral bargaining, the optimal mechanism is to post a price each period.Michele BoldrinSequentially Optimal Mechanisms by Vasiliki SkretaCan We Really Observe Hyperbolic Discounting? by Jesus Fernandez-Villaverde and Arijit Mukherji2002-02-16T17:42:47-08:002002-02-16T17:42:47-08:00tag:http://www.najecon.org,2011:3917490000000004812002-02-16T17:42:47-08:00Short answer: no. In the presence of uncertainty about future preferences, geometric discounting gives rise to exactly the type of "preference reversal" observed in psychology experiments. However, hyperbolic discounting implies a preference for commitment, a preference that is not present in an experiment carefully designed to distinguish the two theories.David K. LevineCan We Really Observe Hyperbolic Discounting? by Jesus Fernandez-Villaverde and Arijit MukherjiSocial Choice without Rationality by Gil Kalai2002-01-28T09:31:30-08:002002-01-28T09:31:30-08:00tag:http://www.najecon.org,2011:3917490000000004572002-01-28T09:31:30-08:00The power of high-caliber mathematicians is knocking on the doors of Social Choice Theory with some interesting and general results. The paper overviews two involved mathematical results proved by Saharon Shelah and Gil Kalai, which relate to the aggregation of classes of choice functions. In particular the following result is discussed: Let C be a class of choice functions which does not contain all choice functions, and which is closed under all permutations of the names of the alternatives. Let F be a function which aggregates profiles of functions in C into C and which satisfies two conditions: if all individuals agree on the choice from a set, so does the aggregate; and the choice from a set depends only on the individuals' choices from that set.
Then, F must be a "dictatorship".Ariel RubinsteinSocial Choice without Rationality by Gil KalaiOptimal Indirect and Capital Taxation by Mikhail Golosov, Narayana Kocherlakota and Aleh Tsyvinski2002-01-10T10:22:51-08:002002-01-10T10:22:51-08:00tag:http://www.najecon.org,2011:3917490000000004532002-01-10T10:22:51-08:00This paper analyzes the classic Mirrlees problem of designing a taxation scheme to provide insurance, when agents' skills are private information. However, this paper considers a much more general model, where agents' skills may be multidimensional and can follow any stochastic process, and the tax system can be nonlinear and history-dependent. The paper provides an important and general insight: investment should be discouraged relative to the complete-information solution, because future investment income makes it more costly to provide incentives for truthful revelation. Thus, the optimal tax scheme has a positive capital income tax.Susan AtheyOptimal Indirect and Capital Taxation by Mikhail Golosov, Narayana Kocherlakota and Aleh TsyvinskiTesting Threats in Repeated Games by Ran Spiegler2002-01-07T13:29:47-08:002002-01-07T13:29:47-08:00tag:http://www.najecon.org,2011:3917490000000004472002-01-07T13:29:47-08:00Two players play a 2x2 repeated game. Strategies are implemented by automata. When a player responds to a state of the other machine with an action that is not the one-shot best response, he is deterred by some threat from the other player's machine. The paper suggests a solution
concept which essentially requires that, for each player,
(1) if the other player's machine has a recurrent state, the player's machine will eventually play the best response against it,
and
(2) the solution path has the property that a player who does not play the one-shot best response is able to point to an event in the past which shows that the deterring threat exists. A partial characterization of the solution for the repeated chicken and prisoners' dilemma games and a folk theorem are provided.Ariel RubinsteinTesting Threats in Repeated Games by Ran SpieglerLearning To Play Games In Extensive Form By Valuation by Philippe Jehiel and Dov Samet2001-12-12T03:14:15-08:002001-12-12T03:14:15-08:00tag:http://www.najecon.org,2011:3917490000000000132001-12-12T03:14:15-08:00Extensive game theoretic models of reinforcement learning assume that players make their decisions based on their experienced valuation of the extensive game strategies. A new and exciting direction of research is suggested. Each player evaluates each move separately and chooses the move with the highest valuation. When applied to win-lose games, and when the valuation of a move is taken to be the payoff incurred the last time the action was used, a player with a winning strategy always wins. For general payoffs, when the valuation of a move is the average of the payoffs incurred when it was used, and with a small "exploration" probability,
the players converge to a subgame perfect equilibrium.Ariel RubinsteinLearning To Play Games In Extensive Form By Valuation by Philippe Jehiel and Dov SametArms Races and Negotiations by Sandeep Baliga and Tomas Sjostrom2001-12-06T13:57:05-08:002001-12-06T13:57:05-08:00tag:http://www.najecon.org,2011:3917490000000000082001-12-06T13:57:05-08:00A "solution" to Schelling's burglar's dilemma. Players are randomly matched to play a two-person game where a player's type is private information. For some low types, defect is a dominant strategy, while for higher types "cooperate" is a best response if and only if the probability that one's match cooperates is high enough. In equilibrium without talk, mutual distrust feeds on itself and all defect; but with pregame cheap talk, there is a remarkable partially separating equilibrium that maintains a good deal of cooperation.Ted BergstromArms Races and Negotiations by Sandeep Baliga and Tomas SjostromSignals, Evolution and the Explanatory Power of Transient Information by Brian Skyrms2001-12-04T17:02:02-08:002001-12-04T17:02:02-08:00tag:http://www.najecon.org,2011:3917490000000000032001-12-04T17:02:02-08:00Talk about consumers' surplus; Skyrms shows that in evolutionary dynamic models of games, talk can be extremely valuable, even though it is cheap. In games like the stag hunt or Nash bargaining game with multiple Nash equilibria, a cheap talk phase can create new polymorphic equilibria, and change the stability and the basins of attraction of equilibria in the base game.Ted BergstromSignals, Evolution and the Explanatory Power of Transient Information by Brian SkyrmsInstantaneous Gratification by Christopher Harris and David Laibson2001-10-26T12:16:26-08:002001-10-26T12:16:26-08:00tag:http://www.najecon.org,2011:6250180000000002702001-10-26T12:16:26-08:00Worried that hyperbolic discounting is unusable? That there is a continuum of equilibria?
This paper gives a clean usable formulation of hyperbolic discounting in continuous time with an illustrative application to a savings problem.David K. LevineInstantaneous Gratification by Christopher Harris and David LaibsonRepeated Games with Almost-Public Monitoring by George J. Mailath and Stephen Morris2001-10-04T22:17:04-08:002001-10-04T22:17:04-08:00tag:http://www.najecon.org,2011:6250180000000002602001-10-04T22:17:04-08:00This paper provides positive and interpretable results for repeated games where monitoring is private (i.e. each player privately observes a noisy signal of opponent actions), but close to perfect. The main result shows that if in a (strict) perfect public equilibrium of a game with public monitoring, players condition their behavior only on a finite history, then such an equilibrium also exists in close-by games with private monitoring. The paper gives several instructive examples, illustrating how the restriction on memory implies that players have approximate common knowledge of the state of the game.Susan AtheyRepeated Games with Almost-Public Monitoring by George J. Mailath and Stephen MorrisConsumption Savings Decisions with Quasi-Geometric Discounting by Per Krusell and Anthony A. Smith, Jr.2001-10-02T13:21:06-08:002001-10-02T13:21:06-08:00tag:http://www.najecon.org,2011:6250180000000002542001-10-02T13:21:06-08:00A consumption-savings problem of an infinitely lived consumer with beta-delta preferences is shown to have many Markov perfect equilibria. In particular, the consumer's capital holdings may converge to any point in a wide interval.Wolfgang PesendorferConsumption Savings Decisions with Quasi-Geometric Discounting by Per Krusell and Anthony A. 
Smith, Jr.Participation Externalities and Asset Price Volatility by Helios Herrera2001-09-24T15:54:50-08:002001-09-24T15:54:50-08:00tag:http://www.najecon.org,2011:6250180000000002442001-09-24T15:54:50-08:00Common wisdom and previous literature argue that, due to a law of large numbers effect, increasing participation decreases price volatility. Available evidence suggests the opposite is true. The model developed here reconciles theory with facts. Key assumptions are: (i) an exogenous fixed cost of entry and (ii) heterogeneity in risk aversion. In equilibrium new entrants are more risk averse than people already in the market. Hence their participation increases the volatility of supporting prices.Michele BoldrinParticipation Externalities and Asset Price Volatility by Helios HerreraThe Long March of History: Farm Laborers Wages in England 1208-1850 by Gregory Clark2001-09-24T15:48:20-08:002001-09-24T15:48:20-08:00tag:http://www.najecon.org,2011:6250180000000002402001-09-24T15:48:20-08:00In which it is once again shown, on the basis of reliable historical records, that the idea of a long economic stagnation until the miracle of the industrial revolution is quite incorrect. Growth in labor productivity comes from far back in human history, and this is true even for England.Michele BoldrinThe Long March of History: Farm Laborers Wages in England 1208-1850 by Gregory ClarkCostly Voting by Tilman Börgers2001-09-24T07:42:46-08:002001-09-24T07:42:46-08:00tag:http://www.najecon.org,2011:6250180000000002342001-09-24T07:42:46-08:00Should voting be mandatory or voluntary?
The paper shows that voluntary voting Pareto dominates in a symmetric environment with costly voting.Wolfgang PesendorferCostly Voting by Tilman BörgersTwo Competing Models of How People Learn in Games by Ed Hopkins2001-09-21T11:38:53-08:002001-09-21T11:38:53-08:00tag:http://www.najecon.org,2011:6250180000000002282001-09-21T11:38:53-08:00This paper shows that the steady states and local stability properties of stochastic fictitious play and reinforcement learning are very similar. Apparent evidence to the contrary provided by Erev and Roth had ignored the way that the noise terms in reinforcement learning can push the steady states away from the Nash equilibria of the unperturbed game.Drew FudenbergTwo Competing Models of How People Learn in Games by Ed HopkinsIs it 'Economics and Psychology'?: The Case of Hyperbolic Discounting by Ariel Rubinstein2001-09-21T09:57:08-08:002001-09-21T09:57:08-08:00tag:http://www.najecon.org,2011:6250180000000002212001-09-21T09:57:08-08:00If you have never heard of hyperbolic discounting, if you have heard that economics is all screwed up because it has been proven that in reality people use hyperbolic rather than exponential discounting, or if you are just wondering what the fuss is all about, this is the paper to read.David K. LevineIs it 'Economics and Psychology'?: The Case of Hyperbolic Discounting by Ariel Rubinstein
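For readers new to the discounting debate running through several of the entries above, the preference reversal at issue can be reproduced with the quasi-hyperbolic (beta-delta) form; the payoffs, dates, and parameter values below are illustrative numbers of my choosing:

```python
def exponential(payoff, delay, delta=0.99):
    """Standard geometric discounting."""
    return payoff * delta ** delay

def quasi_hyperbolic(payoff, delay, beta=0.5, delta=0.99):
    """Beta-delta discounting: every future payoff carries an extra penalty beta."""
    return payoff if delay == 0 else beta * payoff * delta ** delay

# Choice between 100 at date 10 and 110 at date 11, evaluated at date 0
# (delays 10 and 11) and again at date 10 itself (delays 0 and 1).
for disc in (exponential, quasi_hyperbolic):
    prefers_later_from_afar = disc(100, 10) < disc(110, 11)
    prefers_later_up_close = disc(100, 0) < disc(110, 1)
    print(disc.__name__, prefers_later_from_afar, prefers_later_up_close)
# exponential       True True   (no reversal: the ranking is time-consistent)
# quasi_hyperbolic  True False  (reversal: patience from afar, impatience up close)
```

The reversal is what motivates a demand for commitment devices, which is exactly the implication that the Fernandez-Villaverde and Mukherji entry above argues can be used to test hyperbolic against geometric discounting.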