An Introduction to Alternative Methods in Program Impact Evaluation


How to Cite

NGUYEN, C. (2016). An Introduction to Alternative Methods in Program Impact Evaluation. Journal of Economic and Social Thought, 3(3), 349–375. https://doi.org/10.1453/jest.v3i3.937

Abstract

In recent years, researchers and policy makers have become increasingly interested in the impact evaluation of development programs, and a large number of impact evaluation methods have been developed and applied to measure program impacts. Different methods rely on different identification assumptions. This paper presents an overview of several widely used methods in program impact evaluation. In addition to a randomization-based method, these methods are categorized into (i) methods assuming “selection on observables” and (ii) methods assuming “selection on unobservables”. The paper discusses each method in terms of its identification assumptions and estimation strategy, and presents the identification assumptions in a unified framework of counterfactuals and a two-equation model.
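
As a point of reference for the framework the abstract invokes, the following is a minimal sketch in standard potential-outcomes notation of a counterfactual setup and a generic two-equation model; the symbols and the linear functional forms are illustrative assumptions, not expressions taken from the paper, and the display compiles with the amsmath package.

% Minimal sketch of the counterfactual (potential-outcomes) framework and a
% generic two-equation model; notation is illustrative, not the paper's own.
\begin{align*}
Y_i &= D_i\,Y_i(1) + (1 - D_i)\,Y_i(0)                                  && \text{observed outcome}\\
\mathrm{ATE} &= \mathrm{E}\!\left[\,Y_i(1) - Y_i(0)\,\right]            && \text{average treatment effect}\\
\mathrm{ATT} &= \mathrm{E}\!\left[\,Y_i(1) - Y_i(0)\mid D_i = 1\,\right] && \text{effect on the treated}\\
Y_i &= \alpha + \beta D_i + X_i'\gamma + \varepsilon_i                  && \text{outcome equation}\\
D_i &= \mathbf{1}\{\,Z_i'\delta + u_i > 0\,\}                           && \text{treatment (selection) equation}
\end{align*}
% Selection on observables: (Y_i(0), Y_i(1)) is independent of D_i given X_i,
% so matching or regression on X_i can identify the effect.
% Selection on unobservables: \varepsilon_i and u_i may be correlated, which
% motivates instrumental-variable, control-function, and panel estimators.
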

Keywords: Program impact evaluation, treatment effect, counterfactual, potential outcomes

JEL classification: C40, H43, J68.



Creative Commons License
This article is licensed under the Creative Commons Attribution-NonCommercial 4.0 License.
