THE NEED FOR A NEW PARADIGM
1.1 The Rational Representative Agent Paradigm
Since the start of the rational-expectations revolution in the mid-1970s, macroeconomic analysis has been dominated by the assumption of the rational representative agent. This assumption has now become the main building block of macroeconomic modeling, so much so that macroeconomic models without a microfoundation based on the rationality assumption are simply no longer taken seriously.
The main ingredients of the rational representative agent model are well known. First, the representative agent is assumed to continuously maximize his utility in an intertemporal framework. Second, the forecasts made by this agent are rational in the sense that they take all available information into account, including the information embedded in the structure of the model. This implies that agents do not make systematic errors in forecasting future variables. The great attractiveness of the rational-expectations model is that it imposes consistency between the agent's forecasts (the subjective probability distribution of future variables) and the forecasts generated by the model (the objective probability distribution). Third, the model implies that markets are efficient, i.e., asset prices (including the exchange rate that will be the focus of our analysis in this book) reflect all relevant information about the fundamental variables that determine the value of the asset. The mechanism that ensures efficiency can be described as follows: when rational agents value a particular asset, they compute the fundamental value of that asset and price it accordingly. If they obtain new information, they will immediately incorporate that information in their valuation of the asset. Failure to arbitrage on that new information would imply that they leave profit opportunities unexploited. Rational agents will not do this.1
The efficient-market implication of the model is important because it generates a number of predictions that can be tested empirically. The main empirical prediction of the rational representative agent model is that changes in the price of an asset must reflect unexpected changes (news) in the fundamental variables. The corollary of this prediction is that when there is no news about the underlying fundamentals the price cannot change. Thus, only if the fundamentals change unexpectedly should one observe movements in the asset price.2
There is no doubt that the rational-expectations–efficient-market (REEM) model is an elegant intellectual construction that continues to exert a strong attraction on many economists enamored of the beauty of its logical consistency.
1.2 Cracks in the REEM Construction
A scientific theory, unlike religious belief or works of art, should not be judged by its elegance but by its capacity to withstand empirical testing. It is now increasingly recognized that the main empirical predictions of the REEM model have not been borne out by the facts.
A major shock to the theory came when spectacular bubbles and crashes occurred both in the stock markets and in the foreign exchange markets.3 We show two episodes of such bubbles and crashes in the foreign exchange market. Figure 1.1 shows the DM/USD exchange rate during the period 1980–87. We observe that during the first half of the 1980s the dollar experienced a strong upward movement, so that by 1985 it had doubled in value against the German mark. In 1985 the crash came, and in the following two years the dollar halved in value again against the mark. Something similar occurred during the period 1996–2004. Figure 1.2 shows the euro/USD rate during that period.4 From 1996 to 2001 the dollar increased by more than 70% against the euro (DM). In 2001 it started a spectacular decline, such that by the end of 2004 it was at approximately the same level as at the start of the bubble. Thus, it appears that, since the start of flexible exchange rates in 1973, the dollar has been involved in two major bubbles and crashes, each of which has lasted eight to nine years. To put it another way, since 1973 the dollar has been caught up in bubble-and-crash dynamics for about half of the time.
Defenders of the REEM model will of course have difficulty in accepting this bubble-and-crash interpretation of the dollar exchange rate movements.5 They will typically argue that these movements can be explained by a series of fundamental shocks. For example, it can be argued that the upward movement of the dollar during the period 1980–85 was due to a series of positive shocks in U.S. fundamentals, followed after 1985 by two years of negative news about U.S. fundamentals. A similar interpretation could be given of the dollar movements during the period 1996–2004. The trouble with this interpretation is that there was simply not enough positive news to account for the long upward movements. Similarly, there was not enough negative news to explain the crashes afterward (see Frankel and Froot 1986, 1990).
The lack of sufficient news in the fundamentals to explain the wide swings of the exchange rates is made very visible in a recent study by Ehrmann and Fratzscher (2005). These authors looked carefully at a whole series of fundamental variables (inflation differentials, current accounts, output growth) in the United States and in the Eurozone and computed an index of news in these fundamentals. We show the result of their calculation in Figure 1.3, together with the USD/euro rate during the period 1993–2003.6 It is striking to find that there is very little movement in the news variable, while the exchange rate moves wildly around it. In addition, quite often the exchange rate moves in the opposite direction to that expected if it were driven by fundamental news. For example, it can be seen from Figure 1.3 that during the period 1999–2001 the news about the euro was relatively more favorable than the news about the dollar (see De Grauwe (2000) for additional evidence), and yet the euro declined spectacularly against the dollar.
This lack of correlation between news in the fundamentals and the exchange rate movements has been documented in many other studies. For example, using high-frequency data, Goodhart (1989) and Goodhart and Figliuoli (1991) found that most of the time the exchange rate moves when there is no observable news in the fundamental economic variables. This result was also confirmed by Faust et al. (2002). Thus, the empirical evidence that we now have is that the exchange rate movements are very much disconnected from movements in the fundamentals. This finding for the foreign exchange market is consistent with similar findings in the stock markets (see Cutler et al. 1988; Shiller 1989; Shleifer 2000). It has led Obstfeld and Rogoff to identify this "disconnect puzzle" as the major unexplained empirical regularity about the exchange rates (Obstfeld and Rogoff 1996).
One possible rationalization of the large cyclical movements of the exchange rate around its fundamental could be found in the celebrated Dornbusch (1976) model. This model, which continues to be very popular in the classroom, incorporates the efficient-market hypothesis but adds some rigidities in the prices of goods.7 As a result, it produces the overshooting phenomenon, i.e., the exchange rate overreacts to news in the fundamentals (it overshoots) and then moves back over time to its fundamental value. The Dornbusch model has sometimes been seen as the model that rescues the efficient-market model because it shows that temporary movements away from the fundamental need not imply that markets are inefficient. The trouble with this rescue operation is that the dynamics of the Dornbusch model do not conform to the dynamics observed in the foreign exchange market. The Dornbusch model predicts that news in the fundamentals leads to an instantaneous jump in the exchange rate and a slow movement back to the fundamental afterwards. But this is not what we observe in reality. When the exchange rate is involved in a bubble and crash, it appears that the bubble phase is quite long, while the crash is typically shorter. This is exactly the opposite of the dynamics in the Dornbusch model. In addition, in order for the Dornbusch model to explain the bubbles of 1980–85 and of 1996–2001 one would need a long series of positive news in the dollar fundamentals, each piece of which leads to a positive jump in the exchange rate. As we showed above, there were simply not enough positive shocks in the fundamentals to do the trick. On one occasion, during the period 1999–2001, the exchange rate even moved in the direction opposite to the news.
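The jump-then-decay dynamics predicted by the Dornbusch model can be made concrete with a small simulation. The sketch below is a stylized illustration, not the full model; the function name and parameter values are our own choices.

```python
import math

def dornbusch_path(s_new, overshoot, theta, periods):
    """Stylized Dornbusch response to a single piece of news: the exchange
    rate jumps past its new long-run value s_new by `overshoot` and then,
    as sticky goods prices slowly adjust, converges back at speed theta."""
    return [s_new + overshoot * math.exp(-theta * t) for t in range(periods)]

# A shock raises the long-run rate to 1.2; the market rate first jumps to 1.5
# and then declines monotonically toward 1.2.
path = dornbusch_path(s_new=1.2, overshoot=0.3, theta=0.5, periods=20)
```

Observed bubbles, by contrast, build up slowly over several years and collapse quickly, which is the mirror image of this instantaneous-jump-and-slow-return path.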
Other empirical anomalies have been discovered that invalidate the REEM model. We will discuss these anomalies in greater detail in this book. Here we mention the phenomenon of volatility clustering, i.e., it has been found that the volatility of the exchange rate is autocorrelated. This means that there are tranquil and turbulent periods. These periods of tranquillity and turbulence alternate in an unpredictable fashion. We show some descriptive evidence of this in Figure 1.4, which presents the daily changes (the returns) in the DM/USD rate during the period 1986–95. It can clearly be seen that periods of small and large changes in the exchange rate tend to cluster. If one adheres to the REEM model, the only way one can explain this volatility clustering is to assume that the volatility in the news about fundamental variables is clustered, so that the volatility clustering in the exchange rate just reflects volatility clustering in the news. Explaining volatility clustering in the exchange rate by volatility clustering in the news, however, is a "Deus ex machina" explanation that shifts the need to explain these phenomena to a higher level. In other words, the REEM model has no explanation for the widespread empirical observation that the volatility of asset prices, including the exchange rate, tends to be clustered.
Figure 1.4 reveals another empirical phenomenon observed in the exchange rate data. This is that the returns8 are not normally distributed.9 We can see from Figure 1.4 that there were quite often daily changes in the DM/USD rate that were several times the standard deviation during the sample period (1986–95). We observe that on three occasions the daily change was 0.015 or more, which is five times the standard deviation of 0.0029. Such a large change has a chance of occurring only once every 7000 years if the returns are normally distributed. The fact that we find three such large changes during a sample period of ten years rules out the possibility that these returns are normally distributed. In order to show the contrast with normally distributed returns, we present a series of artificially generated returns that are normally distributed in Figure 1.5. The contrast between these normally distributed returns and observed exchange rate returns is striking.10
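The "once every 7000 years" figure can be checked with a back-of-the-envelope calculation, assuming roughly 250 trading days per year (the helper function name is ours):

```python
import math

def normal_two_sided_tail(z):
    """P(|Z| > z) for a standard normal Z, via the complementary error function."""
    return math.erfc(z / math.sqrt(2))

sigma = 0.0029                  # sample standard deviation of daily returns
z = 0.015 / sigma               # a 0.015 move is about five standard deviations
p = normal_two_sided_tail(5.0)  # probability of a 5-sigma day under normality
years = 1 / (p * 250)           # expected waiting time between such days
# p is about 5.7e-7, so a 5-sigma day should occur roughly once in 7000 years
# under normality, yet the ten-year sample contains three of them.
```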
The evidence that the exchange rate returns are not normally distributed leads to a similar problem for the REEM model. In order to explain the nonnormality of exchange rate changes, the news in the fundamentals must be nonnormally distributed. As in the previous case, this interpretation shifts the need to explain the phenomenon to another level. In other words, the REEM model has no explanation for the widely observed fact that returns are not normally distributed,11 and has to rely on a "Deus ex machina" explanation.12
The reaction of economists to the accumulating evidence against the REEM model has gone in two directions. One reaction has been to invoke technical reasons why the empirical evidence is unfavorable for the REEM model. For example, tests of the existence of bubbles and crashes and of a disconnect puzzle have often been dismissed by arguing that these tests are based on a misspecification of the underlying model of the fundamental variables. This criticism, however, has become less and less credible, as researchers have specified many different models producing similar results, i.e., that asset prices and exchange rates in particular are only loosely related to underlying fundamentals. Quite paradoxically, the increasing empirical evidence against the REEM model has led to retrenchment into a more rigorous microfoundation of macroeconomic models (see Obstfeld and Rogoff (1996), who have done this very systematically in open-economy macroeconomics). This retrenchment into theoretical analysis has as yet led to few empirical propositions that can be refuted by the data.
The second reaction to the empirical failure of the REEM model has been to look for alternative modeling approaches. This reaction has been greatly stimulated by the emergence of a new branch of economic thinking introduced first by Simon (1957) and later by Kahneman and Tversky, Thaler, Shleifer, and others. In the next section we summarize the main ideas behind this school of thought that is sometimes called "behavioral economics" and, when applied to financial markets, "behavioral finance."13
1.3 Behavioral Finance: A Quick Survey
The starting point of this school of thought is the large body of evidence that has accumulated over time indicating that individuals do not behave in the way described by the rational agent paradigm. Many anomalies have been found by different researchers. Pathbreaking research was done by Kahneman and Tversky. (For a nice survey of this research the reader is referred to the Nobel lecture of Kahneman (2002).) We mention just a few of these anomalies here. The first one is called "framing" (Tversky and Kahneman 1981). One of the major tenets of the rational agent model is that the individual's preferences are not affected by the particular way a choice is presented. However, it appears that agents' preferences are very much influenced by the way a choice is "framed." A famous example of such a framing effect is the Asian disease problem, which shows how the way a choice is formulated leads agents to choose differently. In an appendix (see Section 1.5) we discuss this Asian disease framing effect in greater detail.
A second anomaly has also been researched thoroughly by Kahneman and Tversky. This anomaly challenges the expected utility theory that is at the center of the REEM model. This theory was first formulated by the great mathematician Bernoulli in 1738. It assumes that rational agents who maximize the utility of their wealth only take into account the utility of their final asset position. Their initial wealth position does not affect the value they attach to that final asset position. Kahneman and Tversky found that this assumption is rejected by agents' behavior. This was found in both experimental and observational studies (Kahneman and Tversky 2000). It appears from these studies that agents' evaluation of their final asset position does depend on the initial wealth. This finding led Kahneman and Tversky to propose another theory of choice under uncertainty. In this theory, called prospect theory, the carriers of utility are gains and losses (changes in wealth rather than states of wealth). One of the important insights of this theory is that agents attach a different utility value to gains and losses (Kahneman and Tversky 1973). We will use and further expand this theory in Chapter 5.
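A common way to formalize this insight is a value function defined over gains and losses rather than final wealth: concave over gains, and steeper over losses. The sketch below uses parameter values often quoted in the later prospect-theory literature (an exponent of about 0.88 and a loss-aversion coefficient of about 2.25); it is an illustration, not the specification we will develop in Chapter 5.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a change in wealth x: concave over gains,
    and steeper over losses (lam > 1 captures loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A loss hurts more than an equal gain pleases: with these parameters the
# disutility of losing 100 is 2.25 times the utility of gaining 100.
```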
Other anomalies that challenge the rational agent model were discovered (see Thaler (1994) for spirited discussions of many anomalies). We just mention "anchoring" effects here, whereby agents who do not fully understand the world in which they live are highly selective in the way they use information and concentrate on the information they understand or the information that is fresh in their minds. This anchoring effect explains why agents often extrapolate recent movements in prices.
These empirical anomalies lead to the conclusion that agents do not conform to the rational agent model. But let us pause here for a while. The view that many agents do not behave rationally (in the sense in which rationality is defined in the REEM model) does not necessarily invalidate that model. Proponents of the REEM model have countered that, for markets to be efficient, it is sufficient that some agents are rational.14 To see this, suppose that agents are driven by anchoring effects and are using extrapolative forecasting rules. This will drive the exchange rate away from its fundamental value. The rational agents will recognize that there are profit opportunities to exploit. They will therefore be willing to arbitrage, i.e., to sell the currency when it is overpriced, and to buy it when it is underpriced. This arbitrage activity by the rational agents then ensures that the exchange rate always reflects its fundamental value. Thus, even if the world is populated by a lot of irrational agents, it will be sufficient to have a few rational agents for the markets to conform to the REEM predictions. The market outcome will make it appear as if all agents are rational, while only a few actually are. This has led to the idea that the rational agent assumption is an "as-if" proposition.
The trouble with this criticism of behavioral finance theory is that it is not borne out by the empirical evidence discussed in the previous section. The empirical rejection of market efficiency can now be interpreted in two ways. It means that either the rational agent as defined in the REEM model simply does not exist, or there exist such superior individuals but for some reason they are prevented from exploiting the profit opportunities created by the behavior of the irrational agents. In this second interpretation it is as if these rational agents do not exist.15 Whichever interpretation is correct, we have to look for a modeling approach other than that based on the rational agent model. This alternative approach will have to take into account the departures from rationality that have been documented in the behavioral finance literature.
1.4 The Broad Outlines of an Alternative Approach
In this book we will use the main ideas that have been developed in the framework of the behavioral finance approach, and we will try to formalize them. The broad outlines of this approach can be summarized as follows. First, agents experience a cognitive problem when they try to understand the world in which they live. They find it difficult to collect and to process the complex information with which they are confronted. As a result, they use simple rules ("heuristics") to guide their behavior. They do this not because they are stupid or irrational, but rather because the complexity of the world is overwhelming, making it pointless to try to understand it completely. Thus, the agent we will assume is very different from the rational agent assumed to populate our economic models, who is able to comprehend the full complexity of the world, i.e., who has a brain that can contain and process the information embedded in the world in its full complexity. This is rather an unreasonable assumption: it is tantamount to assuming that humans are godlike creatures.
The second component in our modeling approach will be to discipline the behavior of our agents, i.e., to impose some criterion that allows some behavioral rules to survive and others to disappear. Thus, in this second stage we will assume that agents regularly evaluate the behavioral rules they use. We will do this by assuming that agents compare the utility of the rule they are using with the utility of other rules and that they are willing to switch to another rule if it turns out that this other rule gives them more satisfaction. Thus, behavioral rules will have to pass some fitness criterion. In this sense, one can say that rationality is introduced in the model. Rationality should, however, be understood in a different way than in the REEM model. It is a selection mechanism based on trial and error, in which imperfectly informed agents decide about the best behavior based on their past experiences. We claim that this trial-and-error strategy is the best possible one in a very uncertain world.
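This selection mechanism can be sketched with a discrete-choice switching rule of the kind used in this literature. The function and parameter names below are ours, and this is a sketch of the idea rather than the specification used later in the book: the fraction of agents using each behavioral rule rises with that rule's past utility.

```python
import math

def switching_fractions(past_utilities, gamma=1.0):
    """Logit switching: the share of agents using each behavioral rule grows
    with the rule's past utility; gamma is the intensity of choice
    (gamma = 0 gives equal shares, a large gamma means fast switching)."""
    weights = [math.exp(gamma * u) for u in past_utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Two rules, e.g. an extrapolative rule and a fundamentalist rule, with the
# first having performed better recently:
shares = switching_fractions([0.8, 0.3])
# The better-performing rule attracts the larger share of agents.
```

The trial-and-error character of the mechanism shows up in the intensity-of-choice parameter: agents respond to observed past performance, not to a full rational computation of which rule is optimal.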
This two-stage modeling approach can also be interpreted in the context of modern theories about how human brains function. It is now increasingly recognized by psychologists and brain scientists that there are two (interacting) cognitive processes at work in our brains. These have been called system 1 and system 2 by Stanovich and West (2000) (see also Epstein 1994; Hammond 1996; Kahneman 2002; Sloman 2002; Damasio 2003). System 1 is based on intuition and emotion; it is associative and difficult to control, or even to express verbally; it is automatic, fast, and effortless. System 2 is based on explicit reasoning; it is slow, effortful, and potentially rule-governed. It also functions as a monitoring process evaluating the quality of the mental processes as a whole (Gilbert 2002; Stanovich and West 2002). The structure we impose on the decision process of our agents is similar to this dual structure existing in our brains, albeit in a very simplified version.
The two-stage approach proposed here comes closer to a correct microfoundation of macroeconomics. Conversely, based on what we now know about the functioning of the brain, we can conclude that the microfoundation of macroeconomics that is present in the REEM model has no scientific basis. Humans certainly do not make decisions using the process of reasoning (system 2) in isolation. The emotional process is of crucial importance for allowing the reasoning process to lead to rational decisions (see Damasio (2003) for an elegant analysis of the interaction of emotional and rational processes).
1.5 Appendix: The Asian Disease Problem
This problem vividly illustrates how the framing of a problem has an influence on decisions of individual agents.
Suppose there is an outbreak of an influenza epidemic coming from Asia. Scientists predict that if nothing is done 10 000 people will die in the United States. They have, however, devised two different programs to reduce the number of victims. The members of Congress have to decide which program to adopt. If the first program is adopted, 4000 people will be saved; if the second program is adopted, there is a 40% probability that 10 000 people will be saved and a 60% probability that nobody will be saved.
When respondents are asked which of the two programs they prefer, a large majority typically chooses the first program. This implies that most respondents are risk averse.
But let us now assume that the two programs are "framed" differently and that they are presented in the following way: if the first program is adopted, 6000 people will die; if the second program is adopted, there is a 40% probability that nobody will die, and a 60% probability that 10 000 people will die.
When respondents are asked this time which of the two programs they prefer, a large majority chooses the second program. This suggests that most respondents are risk seeking. It should be clear, however, that the two decision problems are identical. Only the way the programs are presented (framed) differs. The framing of the problem appears to have a significant effect on the decisions.
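That the two framings describe identical programs is a matter of simple arithmetic, which can be verified directly (10 000 people are at risk in both framings):

```python
at_risk = 10_000

# Framing 1: outcomes stated as lives saved.
saved_program1 = 4_000
saved_program2 = 0.4 * at_risk + 0.6 * 0   # expected lives saved

# Framing 2: the same programs stated as lives lost.
lost_program1 = 6_000
lost_program2 = 0.4 * 0 + 0.6 * at_risk    # expected lives lost

# Each program saves the same number of lives in both framings:
assert at_risk - lost_program1 == saved_program1   # 4000 saved either way
assert at_risk - lost_program2 == saved_program2   # 4000 expected either way
```

Both programs save 4000 lives in expectation under either description; only the reference point (lives saved versus lives lost) changes, and with it the typical choice.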
This framing problem has been found to exist even when the agents who have to decide are highly trained professionals. There is a famous framing effect involving experienced physicians who had to decide between surgery or radiation therapy and who decided very differently depending on whether the outcome statistics were formulated in terms of mortality rates or survival rates (see Kahneman 2002).
Princeton University Press