This book provides a likelihood-based introduction to econometrics. In essence, the idea is to carefully investigate the sample variation in the data, then exploit that information to learn about the underlying economic mechanisms. The relative likelihood reflects how well different models of the data perform. We find this is a useful approach for both expository and methodological reasons.
The substantive context of econometrics is economics. Economic theory is concerned with how an economy might function and how agents in the economy behave, but not so much with a detailed description of the data variation that will be observed. Econometrics faces the methodological challenge that much of the observed variability in economic data is due to factors outside economics, such as wars, epidemics, innovations, and changing institutions. Consequently, we pursue a methodology in which fairly general econometric models are formulated to capture the sample variation, with one or several economic theories being special cases that can be tested. By not imposing the structure of an economic theory at the outset, this approach offers the possibility of falsifying that theory. More constructively, the economic theory can be enhanced by also explaining important variations in the economy. An example is the theory of consumption smoothing. By looking at the variation in a quarterly time series, we will see that consumption is not smooth but in fact shows a large increase every Christmas, followed by a slump in the first quarter of the following year. It is then clear that a general theory of consumption would have to take this seasonal fluctuation into account.
We will use a maximum likelihood approach to analyze distributional models for the data. The models we consider have parameters that characterize features of the data; to learn about these parameters, each is assigned the value that makes the observed data most likely. Within such a framework, it is then relatively easy to explain why we use the various econometric techniques proposed. The book is organized so as to develop empirical, as well as theoretical, econometric aspects in each chapter. Quite often the analysis of a data set is developed over a sequence of chapters. This continued data analysis serves as a constant reminder that in practice one has to be pragmatic, and there will be many situations where compromises have to be made. By developing a likelihood approach, it is easier to see where compromises are made and to decide on the best resolution.
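To make the idea concrete, the following is a minimal sketch of maximum likelihood in the simplest setting: an i.i.d. normal sample. It is not taken from the book (which uses R and OxMetrics); the simulated data and the true parameter values (mean 10, standard deviation 2) are hypothetical choices for illustration. The estimates that maximize the log-likelihood turn out to be the sample mean and the (uncorrected) sample variance, and the code checks that the likelihood is indeed higher at the estimate than at nearby candidate values.

```python
import math
import random

random.seed(42)
# hypothetical simulated sample: 500 draws from N(10, 2^2)
data = [random.gauss(10.0, 2.0) for _ in range(500)]

def normal_loglik(mu, sigma2, xs):
    """Log-likelihood of an i.i.d. normal sample at parameters (mu, sigma2)."""
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in xs) / (2 * sigma2))

# closed-form maximum likelihood estimates for the normal model
mu_hat = sum(data) / len(data)
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)

# the MLE attains a higher log-likelihood than nearby candidate values of mu
for mu in (mu_hat - 0.5, mu_hat, mu_hat + 0.5):
    print(round(mu, 3), round(normal_loglik(mu, sigma2_hat, data), 3))
```

Comparing the log-likelihood across candidate parameter values in this way is the "relative likelihood" idea in miniature: the data themselves rank the competing descriptions.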
The book is written to be self-contained. Most chapters have exercises at the end. A few are marked with an asterisk, *, indicating that they are more advanced; these often illustrate particular definitions but are otherwise tangential to the flow of the text. Others are labeled Computer Tasks and are computer-based exercises. Monte Carlo methods can usefully be applied to illustrate the econometric methods from an early stage. An introduction to Monte Carlo theory is, however, deferred to Chapter 18.
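As a taste of the kind of Monte Carlo illustration meant here, the sketch below (an illustrative example, not material from Chapter 18, and with arbitrarily chosen sample size and replication count) simulates the sampling distribution of a sample mean and compares its Monte Carlo standard deviation with the textbook value sigma/sqrt(n).

```python
import random
import statistics

random.seed(1)

def one_sample_mean(n=50, mu=0.0, sigma=1.0):
    # draw one sample of size n and return its mean
    return statistics.fmean(random.gauss(mu, sigma) for _ in range(n))

# replicate the estimator many times to trace out its sampling distribution
reps = [one_sample_mean() for _ in range(2000)]

mc_sd = statistics.stdev(reps)      # Monte Carlo estimate of the standard error
theory_sd = 1.0 / 50 ** 0.5         # sigma / sqrt(n) from theory
print(round(mc_sd, 4), round(theory_sd, 4))
```

The two numbers agree closely, which is exactly the point of such experiments: simulation lets one verify, or probe the limits of, a theoretical result before relying on it in empirical work.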
Throughout the book, references are given to books or articles with related material. These references are relatively few in number, given the large econometric literature. Chapter 22 therefore concludes the book by discussing a way ahead into the econometric literature beyond this book. At this point, however, it may be useful to highlight a few other econometric textbooks that can be consulted. An introduction to the likelihood approach can also be found in the political science textbook by King (1989). The book by Gujarati (2003) is at approximately the same algebraic level as this book. It follows a least-squares approach rather than a likelihood approach, but may be a useful companion as it provides wider coverage. Maddala, Rao and Vinod (1992) is similar, but at a slightly lower level. Dougherty (2002) and in particular Verbeek (2000) follow the least-squares approach, with many empirical illustrations based on downloadable data. Another textbook at about the same algebraic level is Wooldridge (2006), which uses a method-of-moments motivation for the econometric theory rather than a likelihood approach. Kennedy (2003) offers an idiosyncratic, detailed introduction to econometrics that largely avoids algebra. Our initial discussion of sample distributions is inspired by Goldberger (1991, 1998).
We are indebted to Swarnali Ahmed, Gunnar Bårdsen, Jennifer Castle, David Cox, Guillaume Chevillon, Jurgen Doornik, Katy Graddy, Vivien Hendry, Søren Johansen, Hans-Martin Krolzig, Marius Ooms, four anonymous referees, students, and class teachers for many helpful comments on an earlier draft, and for assistance with the data and programs that are such an essential component of empirical modeling. The book was typeset using MiKTeX. Illustrations and numerical computations have been done using R (see R Development Core Team, 2006) and OxMetrics (see Doornik, 2006). Lucy Day Werts Hobor of Princeton University Press kindly steered the preparation of the final version.