mycroft1018 / Econ_101


What the Heck is “Calibration” Anyway? #1

Open mycroft1018 opened 12 months ago

mycroft1018 commented 12 months ago

APRIL 7, 2014 BY CHRIS HOUSE

Every year I teach EC607, I arrive at the Real Business Cycle model and run into a problem. No, it’s not struggling to answer “why are you teaching the RBC model if you don’t think it is useful for understanding business cycles?” No, the problem occurs when I get to the subject of calibration. I would like to tell my students exactly what we mean when we say that we calibrate parameters. I can’t tell them, however, since I don’t really know myself.

In my own work, I do things that I would describe as calibration. I even have an intuitive sense of what I mean when I say that some parameter has been calibrated. However, I do not have a precise notion of what it means to calibrate a model. In fact, I am not sure anyone has a precise statement of what it means.[1]

Calibration is a way of assigning values to the parameters which determine how our models function. Unlike estimation, calibration does not assign parameter values to make the model fit the data. Some descriptions of calibration suggest that the parameter values should come from separate data sources – separate, that is, from the data that you are analyzing with the model. In Advanced Macroeconomics, David Romer describes calibration as follows (emphasis added):

The basic idea of calibration is to choose parameter values on the basis of microeconomic evidence and then to compare the model’s predictions [with the data].

This is a fairly reasonable description of what many people mean when they use the term ‘calibration’ but it is problematic for at least a couple of reasons. First, economic data typically don’t come with useful labels like “Microeconomic Data: Approved for use with calibrating models” or “Macroeconomic Data: WARNING – DO NOT USE FOR CALIBRATION!” You might think that it’s obvious which is which but it’s not. Certainly panel data like the PSID sounds like data an applied microeconomist might use. What about price (inflation) data? Is that aggregate “macro data”? What about unemployment rate data? What about data on type-specific investment? Is that micro data?

Second, many of the calibrations used in practice seem to come from macro datasets anyway. Take for instance the calibration of the labor’s share parameter in the production function. This calibration is typically justified by calculating the average ratio of total employee compensation to total income – figures which both come from the NIPA.
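That labor-share calibration is a one-line computation. A minimal sketch (the NIPA-style figures below are made up purely for illustration):

```python
# Calibrating labor's share as the average ratio of total employee
# compensation to total income. These figures are invented stand-ins
# for the NIPA series, not actual data.
compensation = [8.9, 9.2, 9.6, 10.1]   # total employee compensation, trillions
income = [15.0, 15.4, 16.1, 16.8]      # total income, trillions

# The calibrated parameter is the average compensation/income ratio.
labor_share = sum(c / y for c, y in zip(compensation, income)) / len(income)
print(round(labor_share, 3))   # 0.597 with these made-up figures
```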

Romer also says that we should choose the parameter values before comparing the model with the data. I hear sentiments like this a lot, though again it doesn’t really hold up when we look at standard practice. The labor’s share parameter is again a case in point: we are setting that parameter by fitting a single moment of the data (we match the model’s average labor share to the observed labor share). Another example concerns a standard calibration of investment adjustment costs in business cycle models. These parameters are sometimes calibrated to match the model’s predicted investment volatility with observed investment volatility. These examples make calibration sound suspiciously like estimation. (Hopefully, calibration isn’t just estimation without bothering to report a standard error.)

Nevertheless, even though I don’t really have a precise definition of what I mean by ‘calibration’, I believe that it may indeed have an important role to play in economic analysis. In particular, calibration might work quite well in situations in which we believe the model is wrong. (I know what you’re thinking – we always think the model is wrong! True. This means that calibration may indeed be very valuable.)

Let’s take a specific example. Suppose we have data on wages and employment and we have a labor supply / labor demand model which we propose to explain the observations. Suppose further that all of the changes in employment are driven by shifts to labor demand. The only thing missing is the labor supply elasticity parameter. An estimation-based approach would do the following: we would invoke the null hypothesis that the model is correct and then estimate the missing labor supply elasticity from the observed data (just run OLS, for instance). A calibration approach would not assume that the model is correct. Instead, a calibrated model would (somehow) obtain a parameter value from elsewhere, plug it into the model and compare the model output with the observed data. Let’s assume that the analyst calibrates the labor supply elasticity at roughly 0.5.

Suppose that (unfortunately for the econometrician) the model is mis-specified. In fact, the wage is stuck above the market clearing wage and there are many workers who are involuntarily unemployed. Every labor demand shift is resolved by simply absorbing available workers at the fixed wage. The econometrician estimates the model and finds that the labor supply elasticity is very high indeed (near infinity in fact). The analyst using the calibrated model finds that his model predicts virtually no changes in employment. Notice that it seems that the analyst using the calibrated model is actually on to something. There is a tension between his calibrated labor model and the observables. Moreover, this tension seems to provide an important clue as to how the model needs to be modified.[2] The econometrician on the other hand is happy with his estimates and will go about his business content in the belief that all is well with the model.
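The stuck-wage story can be made concrete with a toy simulation. All numbers below are invented for illustration: demand shocks move employment one-for-one while the wage barely moves, so the econometrician's OLS slope explodes while the calibrated model predicts almost no employment movement.

```python
import random

random.seed(0)
n = 200
shocks = [random.gauss(0.0, 0.10) for _ in range(n)]   # labor demand shocks

log_emp = shocks                          # firms absorb workers one-for-one
log_wage = [0.001 * s for s in shocks]    # the wage is essentially stuck

# Econometrician: OLS slope of log employment on log wage, read as the
# labor supply elasticity under the (false) market-clearing null.
mw = sum(log_wage) / n
me = sum(log_emp) / n
beta = (sum((w - mw) * (e - me) for w, e in zip(log_wage, log_emp))
        / sum((w - mw) ** 2 for w in log_wage))
print(beta)   # about 1000: a wildly high estimated elasticity

# Calibrated analyst: plug in an elasticity of 0.5. The model now predicts
# almost no employment movement, while observed employment moves a lot.
eta = 0.5
predicted = [eta * w for w in log_wage]
print(max(abs(x) for x in predicted), max(abs(x) for x in log_emp))
```

The tension the calibrated analyst sees (tiny predicted movements, large observed ones) is exactly the clue that the supply-and-demand structure, not the elasticity value, is at fault.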

Naturally, the missing link in this narrative is the source of this outside information that the calibrated model draws on. Where does this initial parameterization come from? Perhaps there were some earlier studies that provide some information on the labor supply elasticity? Perhaps the analyst just arrived at the number through sheer introspection. (If I were offered a wage increase, how would I respond?) In a sense calibration shares a common thread with Bayesian estimation which requires a prior to guide the estimates (like calibration, the exact source of the prior is somewhat mysterious). In fact, many prominent researchers who advocate the use of Bayesian techniques come from backgrounds that embrace calibration (Jesus Fernandez-Villaverde was trained at Minnesota for instance).

One other thing which strikes me is that the researchers who use calibration are often much more interested in the performance and insights generated by the models and much less interested in the parameter values themselves. Estimation it seems tends naturally to place much more emphasis on the point estimates themselves rather than their consequences.

In any case, calibration will likely continue to be used as an important analytical technique, even if no one knows what it actually is …

[1] My coauthor Jing Zhang assures me that calibration does indeed have a specific meaning though she has never articulated what this meaning is. (Actually, when I asked her what she meant by ‘calibration’ her first reaction was to laugh at me after which she told me that I didn’t have proper training …).

[2] Paul Krugman seems to arrive at a similar conclusion in a past blog post (though you will have to put up with the obligatory “fresh water bashing” before the end of the post).

mycroft1018 commented 12 months ago

chapt7.pdf

mycroft1018 commented 12 months ago

Identification and calibration can both be understood as naming subsets of estimation. Any identified coefficient is also an estimate, but not vice versa.

An identified estimate is an estimate that satisfies conditions under which it recovers the true quantity we are after.

For example, any coefficients from (estimating) an OLS regression are estimates. However, a coefficient from an OLS model that fulfills all the OLS assumptions for an unbiased, consistent estimate (e.g., no correlation between the error term and the independent variables) is an identified estimate. Only such a model "achieves identification" or allows authors to state "we identify the effect".

Calibration is relevant when data is used to quantify a theoretical model. There are different ways to do it and hence different potential meanings behind the term.

If you want to produce real-world-relevant numbers from a model (a theory), you will need to supply certain input numbers called "parameters." Supplying those numbers is what it means to say we "calibrate the model by ...". This is what all meanings of calibration have in common.

How those numbers are estimated is a different story. Ideally these parameters would also be "identified". Sometimes researchers will calibrate by using parameters from the literature or an identified regression model.

More often calibration is done by trying out different values for the parameters until the model achieves predictions with the least deviation from the data or reproduces some other empirical features. I believe this is the meaning of calibration that is most commonly used in the context of the OP.
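That trial-and-error flavor of calibration is just a search over parameter values to match an empirical target. A minimal sketch, where the mapping from an adjustment-cost parameter phi to model-implied volatility is invented purely for illustration (in practice it would come from solving and simulating the model):

```python
# Toy moment-matching calibration: search over phi until the model's
# predicted investment volatility is as close as possible to the
# observed volatility.
def model_volatility(phi):
    # Stand-in for solving/simulating the model at parameter phi.
    return 0.20 / (1.0 + phi)

observed_volatility = 0.05
grid = [i * 0.1 for i in range(101)]   # candidate phi values in [0, 10]
best_phi = min(grid, key=lambda p: abs(model_volatility(p) - observed_volatility))
print(best_phi)   # about 3.0 with this invented mapping
```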

mycroft1018 commented 12 months ago

answer from gpt-4

The difference between calibration and estimation in economics is that calibration is a way of choosing the parameters of a model to match some features of the data, while estimation is a way of finding the parameters that best fit the data using some statistical or mathematical methods.

Calibration is often used when there is no clear way to estimate the parameters from the data, or when the model is too complex or nonlinear to be estimated easily. Calibration can also be used to test the implications of different assumptions or scenarios. However, calibration may not reflect the specific characteristics of the data or problem, and it may rely on arbitrary choices or judgments.

Estimation is often used when there is enough data and a well-defined objective function to measure the fit of the model. Estimation can also provide measures of uncertainty or confidence intervals for the parameters. However, estimation may require more data and assumptions, and it may be subject to errors or biases.

For example, suppose you want to model the demand for a product as a function of its price and other factors. You can calibrate your model by using some values of price elasticity, income elasticity, and other parameters that you find in the literature or from experts, and see how well your model predicts the demand for different prices and scenarios. Or, you can estimate your model by using some data on price and demand for the product, and applying methods such as regression or maximum likelihood to find the values of the parameters that best fit the data.

You can find more information about calibration and estimation in economics in these sources:

Example of estimation vs. calibration
What is the difference between identification, calibration and estimation?
Calibration vs Estimation – Research Diary

mycroft1018 commented 12 months ago

the best answer i've found:

Model estimation is the process of picking the best (according to some metric) kind and structure of model. Estimation may include calibration.

Calibration is the process of finding the coefficients that enable a model (the kind and structure of which is already determined) to most closely (according to some metric) reflect a particular known dataset.

So: estimation will set kind, structure and coefficients. Calibration will tweak coefficients, holding kind and structure constant.

Newton's model of motion is fine for most purposes. By calibrating the gravitational coefficient in it, we can make estimates of the mass of the Earth. But it won't work as a model of relativistic motion - that needs the estimation of a different model: there is no recalibration of Newton's model that works for relativistic motion - no coefficient will work, because the model itself is simply the wrong kind and structure. It omits mechanisms and responses that are absolutely crucial, if the model is to be useful.
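The Earth-mass calibration mentioned above is a one-line rearrangement of Newton's surface-gravity relation g = G·M/R², using standard published values for the constants:

```python
# Back out the mass of the Earth by calibrating the one unknown in
# g = G * M / R**2, holding Newton's model structure fixed.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # measured surface gravity, m/s^2
R = 6.371e6     # Earth's mean radius, m

M = g * R ** 2 / G
print(f"{M:.2e} kg")   # about 5.97e24 kg
```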

Similarly with economic models, Paul Krugman's point is that freshwater economists are saying that their model structures are fine, just the coefficients need tweaking. The problem with that is that if their structures are wrong, no amount of tweaking will make the models useful. Only by going back to basics, and re-estimating the whole model, would they incorporate the crucial mechanisms and responses. He argues that they won't do that, because that would require them to recognise that their existing paradigm is inadequate.