Monday, September 12, 2011

Over-Use of Randomization?

I just watched Angus Deaton's talk on RCTs. It is very good, and I think we should all take the time to watch/listen:

http://mfi.uchicago.edu/events/20100225_randomizedtrials/

The basic idea he advocates is not to implement RCTs blindly, and to know what you are actually getting when you do implement them (or even when you implement them without total blindness). From my recent experience trying to design an impact evaluation, I find myself wondering whether randomization was necessary, and asking why, if we do use randomization, we need a baseline at all. D-in-D requires a baseline for both the treatment group and a comparable control group. For an RCT we should technically only have to compare the outcome of interest after treatment: if the randomization worked, we do not need to control for initial conditions. But not controlling for initial conditions seems like a very bad idea to me in the social sciences, since different initial conditions can lead to drastically different outcomes. And, after all, we cannot randomize on unobservables, the very thing we are trying to correct for.
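To make that trade-off concrete, here is a minimal simulation sketch (my own construction, not from Deaton's talk, with made-up numbers and variable names): under successful randomization, a simple post-treatment difference in means is unbiased, but an estimator that also uses the baseline outcome, D-in-D style, is far more precise when initial conditions vary a lot.

import numpy as np

# Sketch only: illustrative parameters, not from any real evaluation.
rng = np.random.default_rng(0)
n, true_effect, reps = 500, 2.0, 2000

post_only, with_baseline = [], []
for _ in range(reps):
    y0 = rng.normal(10, 3, n)            # heterogeneous initial conditions
    treat = rng.permutation(n) < n // 2  # random assignment, two equal arms
    y1 = y0 + rng.normal(0, 1, n) + true_effect * treat
    # Post-treatment difference in means: identified by randomization alone
    post_only.append(y1[treat].mean() - y1[~treat].mean())
    # Difference-in-differences, using the baseline outcome
    d = y1 - y0
    with_baseline.append(d[treat].mean() - d[~treat].mean())

print("post-only:     mean %.3f, sd %.3f" % (np.mean(post_only), np.std(post_only)))
print("with baseline: mean %.3f, sd %.3f" % (np.mean(with_baseline), np.std(with_baseline)))

Both estimators center on the true effect of 2.0, but the baseline-using one has a much smaller sampling spread, because the persistent variation in initial conditions differences out. Identification does not require the baseline; precision is another matter.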

Since we cannot observe the unobservables, there is no way to tell whether your randomization really worked. We compare means of observables across treatment and control as suggestive evidence that randomization also balanced the unobservables, but that same distribution of observables can be achieved with matching techniques and the like. I find myself wanting to use randomization AND D-in-D so that I can get decent comparison groups, yet D-in-D was born precisely because economists could not randomize. Is randomization plus D-in-D overkill? Should I spend a lot of money on an extensive baseline if randomization means I do not need baseline outcome measures?
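For what it is worth, here is what that standard balance check looks like, as a minimal sketch (the covariates and numbers are invented for illustration): compare means of observables across arms with simple t-tests.

import numpy as np
from scipy import stats

# Sketch only: invented covariates standing in for real baseline observables.
rng = np.random.default_rng(1)
n = 400
treat = rng.permutation(n) < n // 2  # random assignment, two equal arms
observables = {
    "age":       rng.normal(35.0, 10.0, n),
    "income":    rng.normal(2.0, 0.5, n),
    "schooling": rng.normal(8.0, 3.0, n),
}
for name, x in observables.items():
    t_stat, p_val = stats.ttest_ind(x[treat], x[~treat])
    print(f"{name:10s} treated={x[treat].mean():6.2f}  "
          f"control={x[~treat].mean():6.2f}  p={p_val:.2f}")

A clean table like this is reassuring, but as I said above, matching can produce an equally clean table on observables; either way it is only indirect evidence about balance on the unobservables.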

2 comments:

  1. There are severe identification issues with D-in-D alone, though, unless you have a natural experiment.

    Duflo had a response to Deaton. And so did these guys (I believe I sent you the link):

    http://www.economics.harvard.edu/faculty/imbens/files/bltn_09apr10.pdf

  2. Yes, I read this one in 2009 when that debate first started.
    It makes a slightly different point than the one above.

    But D-D alone is not a surefire solution for extracting causality, especially without an experiment. The biggest issue is time-varying unobservables (differencing only removes the time-invariant ones), and D-D also inflates the standard errors when there is measurement error (proof somewhere from labor class; see the simulation sketch after these comments).

    The above paper mostly points to external validity.
    I like Blattman's article:
    http://chrisblattman.com/documents/policy/2008.ImpactEvaluation2.DFID_talk.pdf

    which says: let's all just agree that there is nothing universally valid about an RCT run in one context, and I'm surprised anybody would claim there is.
    Rather, let's look at heterogeneous effects using RCTs and other experiments.

    Fafchamps said, and I agree, that the era of just running RCTs is ending: an RCT must be contextualized in a theory before execution.
    http://jae.oxfordjournals.org/content/20/4/596.extract

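On the measurement-error point in the second comment, here is a small simulation sketch (my own construction, not the labor-class proof the commenter mentions): differencing two noisily measured outcomes stacks both periods' measurement errors into the D-D estimate, so with enough measurement noise D-D can end up less precise than a simple post-treatment comparison.

import numpy as np

# Sketch only: classical measurement error added to each period's outcome.
rng = np.random.default_rng(2)
n, effect, me_sd, reps = 500, 2.0, 3.0, 2000

dd, post = [], []
for _ in range(reps):
    treat = rng.permutation(n) < n // 2
    y0 = rng.normal(10, 1, n)          # true baseline outcome
    y1 = y0 + effect * treat           # true follow-up outcome
    m0 = y0 + rng.normal(0, me_sd, n)  # measured with error at baseline
    m1 = y1 + rng.normal(0, me_sd, n)  # measured with error at follow-up
    d = m1 - m0                        # the difference carries BOTH errors
    dd.append(d[treat].mean() - d[~treat].mean())
    post.append(m1[treat].mean() - m1[~treat].mean())

print("D-D:       sd %.3f" % np.std(dd))    # noise variance ~ 2 * me_sd**2
print("post-only: sd %.3f" % np.std(post))  # noise variance ~ me_sd**2 + 1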