Thursday, November 3, 2011

Info sharing through ancillary consumption


One thing the network literature is focusing on is how we can get people to share information by placing incentives around information sharing, be it educational information or health information (e.g. development programs on malaria). Billions of dollars are spent on HIV education in Africa only to have misinformation spread or multiple conflicting information sources circulate. But perhaps we're approaching this incorrectly. Perhaps the key to spreading the right information is through consumers, who have a built-in network following. So if billions of people are buying stuff through Groupon or Amazon, or are sharing pictures on Facebook, we could spread information by bundling it with sought-after products.

Tuesday, October 11, 2011

FE with Unbalanced Panel


This is sort of an old question, but can you do fixed effects with an unbalanced panel?
Suppose the panel has at most 2 observations per person i, so some people have 2 observations but some have only 1. I should look inside the .ado... but?

Respondent 1:
I believe you can do fixed effects with an unbalanced panel, but it gets trickier if a person has only 1 observation. That observation should be dropped, though Stata still uses it to calculate the standard errors.

Of course there may be some corrections you can do for an unbalanced panel, but I have never gotten any hell for it.

Respondent 2:
I second that. No problems at all running fe with an unbalanced panel, though you certainly have to correct for it in the standard errors. Some options for that: the Huber-White sandwich estimator, clustering (see C&T for a nice description of unbalanced panels and clustering), or the bootstrap. In a panel where some people have only one observation, those observations are dropped from the estimation -- essentially, when you do the mean differencing, the values for that observation all turn into zeros. I do not know about the standard error, but I trust Asif on that one (Respondent 1, how does it do that?).

This page http://www.stata.com/support/faqs/stat/xtreg2.html has the description of what Stata estimates when you add the "fe" option to your command line.

Respondent 1:
Ha ha. I honestly don't know. If I were to guess, when it calculates the variance-covariance matrix, it uses the whole sample (including the singleton observations), essentially making use of more observations. This is why your total number of observations with xtreg, fe remains the same as with xtreg, re (random effects), despite the singleton values.
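A quick way to see what happens to singletons is to run fe and re side by side on toy data. A minimal sketch, STATA only, data invented:

* toy panel: 60 people with 2 observations each, 40 singletons
clear
set seed 1
set obs 100
gen id = _n
expand 2 if id <= 60          // ids 1-60 get 2 obs; ids 61-100 stay singletons
bysort id: gen t = _n
gen x = rnormal()
gen y = 1 + 0.5*x + rnormal()
xtset id t
xtreg y x, fe                 // singletons are mean-differenced to zero
xtreg y x, re                 // compare e(N) and e(N_g) across the two runs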


Thank you!

Tuesday, September 27, 2011

Interpreting the "trend effect" when we include an interaction term

Person A:
Let's say we have a model:

y = bX + error

b is negative

And when we add an interaction term:


y = bX + c(X*Z) + error

Now b becomes positive but c is negative.
Does the coefficient b have any meaning in this case, or should I just look at the overall effect?

Person B:
Yes, b definitely has meaning.
I don't know any papers offhand that deal only with interactions, but any diff-in-diff paper (like Duflo's education paper) would interpret interactions, since RCTs are always an interaction of time and the program.

If you want the marginal effect of X, then dy/dx = b + cZ^, where Z^ is a specified value, like the mean of Z or whatever you decide.
If Z is a dummy, then it's just evaluated with the dummy turned on.
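For actually computing these in STATA, a minimal sketch using the bundled auto dataset (X = mpg, Z = foreign; purely illustrative):

sysuse auto, clear
regress price c.mpg##i.foreign
margins foreign, dydx(mpg)    // dy/dx = b + c*Z, evaluated at Z = 0 and Z = 1

For a continuous Z you'd evaluate at a chosen value instead, e.g. margins, dydx(mpg) at((mean) _all).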


If b changed sign, I'd be worried about multicollinearity (how collinear is X with X*Z?), or about Z having been omitted such that Z and X are negatively correlated.

The latter seems likely if c is negative and b goes to positive.

Person A:
No, b has meaning only in that it contributes to the calculation of the overall effect; b alone does not have meaning.

Person B:
In program evaluation b definitely has meaning, e.g. if X were time, and Z were the program, then b would tell you the average time trend effect, and c would tell you the program's effect (over time).

Person A:
Would you care about the average time trend effect and not the program effect?

Person B:
Yes, it would change the interpretation of what the program effect is doing.
If b > 0, then the program effect is enhancing an upward trend; if b < 0, then the program effect is mitigating a crisis.

You'd also care about b because it measures whether your randomization was done properly (namely, if X were the program and b measured just the effect of the program, not over time... good randomization should make b insignificant... but that's a different story. I don't know what your regression is.)

Person A:

Hmm. So these political scientists are wrong? Look at the bottom of page 71 and the beginning of page 72 in the attached document.
(Understanding Interaction Models: Improving Empirical Analyses, Brambor et al 2005).

Person B:
They're right, and I'm saying the same thing.

I guess I can be more specific by saying b is the average/trend effect holding all else constant, but the full marginal effect of X is b + cZ.

Person A:

I thought what they are saying is that if you have the interaction term, you cannot say b is the average trend effect. This is only true if you don't have the interaction effect, but only have b.

"Scholars should refrain from interpreting the constitutive elements of interaction terms as unconditional or average effects—they are not...As a consequence, the coefficient on the constitutive term X must not be interpreted as the average effect of a change in X on Y as it can in a linear-additive regression model. As the above discussion should have made clear, the coefficient on X only captures the effect of X on Y when Z is zero."


I thought this means b is only valid in interpretation if you DO NOT have an interaction term in the estimation.

Person B:
To me, the average conditional effect is the same as saying the average effect holding all else constant.
They're saying b is not the average unconditional effect, which is true, i.e. it's conditional.

But I think their main point is that the marginal effect of X on Y is not just b if an interaction effect is present.
Of course, it's sort of circular: it is the researcher who decides whether the interaction term is introduced or not.
I suppose you can introduce it, see if it's significant, and then dump it if it's not. But one probably has reason to believe that the interaction should/shouldn't be there.

That's my take.

Monday, September 12, 2011

Over-Use of Randomization?

I just watched A. Deaton's talk on RCTs. It is very good and I think we should all take the time to watch/listen:

http://mfi.uchicago.edu/events/20100225_randomizedtrials/

The basic idea he advocates is not to implement RCTs blindly, and to know what you are getting when you do (or even when you implement them without total blindness). From my recent experience trying to design an impact evaluation, I find myself wondering whether randomization was necessary, and asking why, if we do use randomization, we need a baseline. D-in-D requires a baseline for a treatment group and a comparable control group. For RCTs we should technically only have to compare the outcome of interest after the treatment: if the randomization worked, we do not need to control for initial conditions. But not controlling for initial conditions seems like a very bad idea to me in the social sciences, since different initial conditions can lead to drastically different outcomes. And, after all, we cannot randomize on unobservables, the very thing we are trying to correct for.

Since we cannot observe the unobservables, there is no way to tell if your randomization really worked. We use means of observables for treatment and control to suggest whether or not randomization worked for unobservables, but that same distribution of observables can be achieved with matching techniques and the like. I find myself wanting to use randomization AND D-in-D so that I can achieve decent comparison groups, but D-in-D was actually born because economists could not randomize. Is randomization plus D-in-D overkill? Do I spend a lot of money on an extensive baseline if randomization means I do not need outcome measures from the baseline?
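To fix ideas, a sketch of the two estimators in STATA, with hypothetical variable names (y, treat, post are placeholders, not from any real dataset):

regress y i.treat if post == 1    // pure RCT logic: compare post-period means
regress y i.treat##i.post         // D-in-D: the interaction term is the effect

If randomization worked, both should recover the same effect, but the D-in-D version also nets out any chance baseline imbalance, which is one argument for paying for the baseline.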

Tuesday, August 2, 2011

Cheating

http://chronicle.com/blogs/wiredcampus/nyu-prof-vows-never-to-probe-cheating-again%E2%80%94and-faces-a-backlash/32351

I concur.
I find it impossible to punish cheating.
At the top of the food chain (chair and committee), it's too much trouble to take the initiative so they ask you to sweep it under the rug.

Monday, August 1, 2011

Open Source Database?: The decline effect, selective scientific reporting, significance chasing and underpowered tests

THE TRUTH WEARS OFF
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all

This article discusses how many statistically significant results published in the scientific literature are based on random error. They are published because of publication bias towards positive over null results. But after repeating the trial, we very often witness some reversion of the result to its mean, with the t-stat on the variable's effect approaching zero.

The article's running example is a psychologist whose research made him famous, but who later couldn't replicate his own results.

It makes me wonder if Emily Oster's work, which falsely attributed the missing women of the world to the presence of Hep B in areas where male-female ratios were higher, suffered from a decline effect as well. Her work was discounted by later research that showed that high male-female ratios are the outcome of sex-selective abortion after the first child is born.

Furthermore, given that many trials eventually suffer from the decline effect, and since our samples are not that big, we test and experiment with what we want to see in the data, and hence are more likely to find it than if we never questioned it at all... i.e. finding an effect is not 50/50 at the outset.
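The selection mechanism is easy to simulate. A rough STATA sketch of publication bias with a small true effect and low power; all the numbers are invented:

capture program drop onetrial
program define onetrial, rclass
    clear
    set obs 50                       // small sample, so low power
    gen treat = (_n > 25)
    gen y = 0.2*treat + rnormal()    // true effect is 0.2
    regress y treat
    return scalar b = _b[treat]
    return scalar sig = (abs(_b[treat]/_se[treat]) > 1.96)
end
set seed 42
simulate b=r(b) sig=r(sig), reps(2000) nodots: onetrial
mean b                    // all trials: centered near the true 0.2
mean b if sig == 1        // the "publishable" trials: badly inflated

Replications of a selected result then look like the effect is wearing off, when really the first estimate was selected for being extreme.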

Monday, July 25, 2011

Playing the Trust Game in Rural Cameroon

Hi Everyone,

I recently read "Does trust extend beyond the village? Experimental trust and social distance in Cameroon" in Experimental Economics and had some reservations about their use of the term "social distance" to describe their primary findings. My thoughts on this are detailed below.

I think the paper is interesting and well done. The authors use the trust game to look at how people respond to partners from their own village or from a neighboring (but not very close) village. Their main result is that first movers send significantly more money to their anonymous partner when that partner is in the same village than when the partner is in a different village. It is a between-subjects comparison. This is not entirely surprising, but it is still a nice result, and the experiment and analyses are carefully executed.

But I was surprised to find no discussion, or even mention, of how norms of money lending and informal insurance may influence behavior in the laboratory. There are large bodies of literature on informal lending and insurance that suggest how much influence such institutions have on how people perceive and use money in rural villages where formal credit and insurance markets are incomplete. The trust game in many ways may simply cue participants, who in turn respond as their village norms dictate. In that case, the authors' results on trust differing according to village membership would be confounded by this norm response. In fact, the norms of behavior outside the lab setting, which I argue are invoked in the game, may even be capturing more than just trust (with expected reciprocity): within-village norms of giving, sharing and insurance. This is particularly true if players are aware of each other's village locale. Thus the paper's measure is not a response to social distance but a measure of norms. That the design includes people from both villages paired with others from the opposing village helps, but it only nullifies my comment if the villages are incredibly different, with very different norms of money usage.

In that sense, I do not think that the experimental design (and accompanying survey data) allow the authors to unequivocally identify the difference in giving by village membership as a social distance effect. Thus, the interpretation that social distance explains the difference in trusting may not in fact capture what people are responding to in the treatment. Rather, they may be responding according to the norms in their villages (i.e. how people respond to money allocation events).

As the results show, whether or not the recipient is in the village certainly matters for the average amount given in the trust game, but WHY it matters may not be (primarily) social distance interacting with individual decision making. With informal insurance and credit markets playing such an important role in how rural communities in many developing countries use money, I would expect certain rules or norms to be in place for how people respond to the in-village/out-village money allocation events. One example of such a rule or norm would be "when someone in our village asks for money, we always give". I realize that this is not the appropriate rule/norm for their data, since they have such high rates of giving overall and thus cannot look at the play/do-not-play decision. But I think it conveys the idea that people may not be responding to the incentives and the expectation of immediate reciprocity (i.e. choosing to trust), but rather responding to the cue of a money allocation event -- and norms of how people respond to such an event likely differ according to whether the other guy is from one's village. To their credit, the authors do include ROSCA membership in their regressions. This begins to get at the issue, as they allude to but discuss only briefly. And the coefficient on ROSCA membership is large (larger than the coefficient on their variable of interest) and highly significant. That ROSCA membership is so important (but length of time in a ROSCA is not) suggests there is something more going on here, and that the village difference is capturing more than just social distance.

I posed the following questions to the authors:
- Have you tried including in the regression an interaction term of ROSCA membership and "A and B in the same village"? (A sketch of the kind of specification I mean is below the list.)
- Another comment in terms of your future research: why are trust game transfers so high among women?
- What about being a woman in these villages in Cameroon would contribute to this behavior?
- In line with my statement above on village norms: could it be that women face a greater expectation of giving/sharing and insurance within a village? So that again, it's not trust per se that you're capturing through gender, but the intensity with which female villagers are subject to these financial and familial obligations?
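For the first question, the kind of specification I have in mind is something like the STATA line below; the variable names are invented, not the paper's:

regress amount_sent i.rosca##i.same_village i.female age, robust
* a sizeable interaction would say ROSCA members treat in-village partners
* differently, which is where a norms story should show up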

Tuesday, July 19, 2011

Data Tools

In all our data work, I've found that:

R produces great graphics, like elliptical confidence intervals and such. But its statistical syntax has little logic to me. It's not matrix algebra, and it's not one-line code. Perhaps it's good for object-oriented programmers.

SQL is best for data aggregation, especially biggish data. If your data are in a relational schema, don't merge them in a statistical tool; do it directly in SQL. SQL won't let you merge datasets when unique ids don't match, unless you explicitly allow for left and right joins. STATA can often produce messy merges, dropping stuff or creating lots of missings. Don't ask me why.
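One way to keep a STATA merge honest is to force the match type and inspect the result; a sketch with hypothetical filenames (modern merge syntax):

use master.dta, clear
merge 1:1 id using lookup.dta    // errors out if id isn't unique on either side
tab _merge                       // inspect matched/unmatched before dropping anything
keep if _merge == 3              // keep matched rows only, i.e. an explicit inner join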

SAS is good for data manipulation: transposing, reshaping.

STATA is good for built-in stats. It just takes way fewer commands to run an estimation in STATA than in anything else. Hands down.

MATA and MATLAB are good for coding up your own estimators. If you want to change a maximum likelihood estimator for some particular distribution, your best bet is to find it in MATA (STATA's Matlab) or MATLAB code and alter it. R would be a bitch. So would SAS.

SPSS just sucks. Don't use it. Who wants to only click their way through life??

Thursday, July 7, 2011

A Natural RCT on Healthcare

Aside from a short stint working for Ray Fisman, this is interesting:
http://www.slate.com/id/2298463/pagenum/2


Basically, people were randomly selected to be able to opt into Medicaid, so the random instrument was the "option to opt in".
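In regression terms it's an encouragement design. A sketch with hypothetical variable names, not the study's actual data:

ivregress 2sls health_outcome (enrolled = lottery_win), robust
* lottery_win is the randomized offer; the coefficient on enrolled is the
* effect for people induced to enroll by winning (a LATE)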


1) The most noteworthy result to me is "The only type of care with no statistically discernible increase—or decrease—was emergency room use," which seems to account for the bulk of medical expenses by the uninsured (READ!!! http://www.newyorker.com/reporting/2011/01/24/110124fa_fact_gawande)


Results by Gruber & Finkelstein (MIT): "We find that in this first year, the treatment group had substantively and statistically significantly higher health care utilization (including primary and preventive care as well as hospitalizations), lower out-of-pocket medical expenditures and medical debt (including fewer bills sent to collection), and better self-reported physical and mental health than the control group."

Goldstein's comments:
http://blogs.worldbank.org/impactevaluations/the-new-big-randomized-trial-that-you-should-know-about-randomized-medicaid

2) Can anyone tell me what a "family-wise error adjusted p-value based on step-down resampling by Westfall and Young" is?
Anybody?
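My rough understanding: with many outcomes you resample (e.g. permute treatment) under the null, and adjust each p-value by how often the most extreme statistic across ALL outcomes beats it; the "step-down" part repeats this on successively smaller sets of outcomes after each rejection. A single-step maxT sketch in STATA on made-up data (the step-down refinement would iterate this):

clear all
set seed 7
set obs 200
gen byte treat = (_n <= 100)               // 100 treated, 100 control
forvalues j = 1/5 {
    gen y`j' = 0.1*`j'*treat + rnormal()   // 5 made-up outcomes
}
matrix tobs = J(1, 5, 0)                   // observed |t| per outcome
forvalues j = 1/5 {
    quietly regress y`j' treat
    matrix tobs[1, `j'] = abs(_b[treat]/_se[treat])
}
local reps 500
matrix maxt = J(`reps', 1, 0)              // max |t| per permutation
forvalues r = 1/`reps' {
    quietly gen double u = runiform()
    sort u
    quietly gen byte tperm = (_n <= 100)   // reshuffled "treatment"
    local m = 0
    forvalues j = 1/5 {
        quietly regress y`j' tperm
        local m = max(`m', abs(_b[tperm]/_se[tperm]))
    }
    matrix maxt[`r', 1] = `m'
    drop u tperm
}
forvalues j = 1/5 {
    local count 0
    forvalues r = 1/`reps' {
        if maxt[`r', 1] >= tobs[1, `j'] local ++count
    }
    display "outcome `j': adjusted p = " `count'/`reps'   // family-wise adjusted
}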

Saturday, July 2, 2011

Logarithmic Distribution of Leading Numbers in any Data??

This is unusual.
http://testingbenfordslaw.com/

Wiki:
Benford's Law, also called the first-digit law, states that in lists of numbers from many (but not all) real-life sources of data, the leading digit is distributed in a specific, non-uniform way. According to this law, the first digit is 1 about 30% of the time, and larger digits occur as the leading digit with lower and lower frequency, to the point where 9 as a first digit occurs less than 5% of the time. This distribution of first digits is the same as the widths of gridlines on the logarithmic scale.

Me:
The shape isn't surprising, given that a lot of stuff follows a bell curve if the sample is big enough. But 1's have the highest frequency.


Why would that be?
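One standard answer: data spanning several orders of magnitude tend to be roughly uniform on a log10 scale, and the interval [1,2) takes up log10(2), about 30%, of each decade, while [9,10) takes up only log10(10/9), about 4.6%. Benford's prediction is P(d) = log10(1 + 1/d). A quick STATA check on invented lognormal data:

clear
set seed 2011
set obs 100000
gen x = exp(rnormal(0, 5))                     // spans many orders of magnitude
gen byte lead = floor(x / 10^floor(log10(x)))  // leading digit of x
tab lead                                       // roughly 30% ones, 4.6% nines
forvalues d = 1/9 {
    display "Benford P(`d') = " %5.3f log10(1 + 1/`d')
}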

Thursday, June 30, 2011

Randomized Control Trials

Dear avid readers:

This may seem naive... but why do we care about causality in the LR?
We want to tease out causal effects so that we know "de-worming -> increased school attendance" and that the relationship isn't confounded by some other thing that's driving up school attendance.

But who cares... if my sample is big enough and I see a positive correlation, then in general I know deworming affects something that affects school attendance.
Even if we determine that the link between de-worming and increased school attendance is not causal, and we don't implement the program, it doesn't mean we'll figure out the variable that is causal; so why not still implement the program?

Perhaps it is because we are concerned that:
1) if it's a confounder that increases school attendance via de-worming, what is it, and what if it drops off?
2) if it's a confounder and we don't know what it is, then we can't scale this up


Rebuttals:
What if the market and the people that de-worming is offered to are pretty constant? Why do we care about causality then?

Just something I've been kicking around on runs...


Reader 1: I think that we care about it for the two reasons that you mentioned -- if we implement de-worming but the correlation is actually due to a third variable that may "drop off", or whose correlation with the other variables changes (or something else happens that impacts that third variable such that the correlation changes), then the de-worming stops working and we do not know why or what we are to do then. So causality is most important with respect to policy implications. But if the third variable is very stable and the correlations do not change, then from a policy perspective the de-worming works, so who cares? The only reason we might care in that case would be if there were a cheaper or more tenable policy option that we are blind to because we ignored the potential third variable.

In terms of defining relationships, I believe causality does not matter.

Reader 2: The thing about de-worming as an intervention is that it is super cheap and it works. In fact, it is probably the cheapest thing you could do other than nothing. The only way this study is interesting in an economic sense is if school attendance post-deworming goes down: kids' lives are improved in other dimensions and you now have a tradeoff to study. But this is not the result, so we are done and nothing is surprising. Now we can see how more costly interventions compare.

Reader 3: I agree, causality is key for policy. Suppose there is only correlational evidence on deworming and schooling, i.e. we observe that some villages have implemented deworming programs and these villages also have greater educational attainment than villages that have not. One possible "threat" to a causal interpretation of this evidence is that the villages that deworm might also happen to have higher than average income (or lower than average income, if these programs are sponsored by donors and targeted at poor villages), and it is income that is really causing differential investments in education and educational attainment.

If a government is deciding whether to invest in a deworming program, there is no guarantee at all -- based on this evidence -- that schooling will improve if a deworming program is implemented in the villages that have not already implemented one. Hence the need for an RCT.

But maybe you are thinking about mechanisms (proximal vs. ultimate causes) rather than correlations?

R Squared

Why and When to Use it?

Economists are not so keen on using R-squared, or even the adjusted R-squared, which dutifully adjusts for the number of parameters being estimated in a model.

Here are some 2 cents about it:

Reader 1: I thought I'd put in my 2 cents here -- so far in the stuff I have done with Ken and Andreas and Erkut, no one seems to care about the R2 at all! And papers I have read recently do not discuss it, though I always report it. In general, if the adjusted R2 goes up, it suggests that the model fit is better with the added variable included. We use the adjusted version because adding a variable will always make the regular R2 go up, so the regular R2 does not really tell us anything (which you probably already know).

Reader 2: I don't ever use R-squared. I use the adjusted R-squared for the informal version of Altonji (2005), to see the robustness of the coefficients to additional independent variables. So let's say you are looking at whether years of schooling leads to higher growth. Now someone might claim your estimation is too parsimonious; let's say you did not include inflation. So you add inflation and see if the adjusted R-squared improves. If it improves, it's a better fit. And if your coefficient on years of schooling is still significant, it implies your estimation is robust. That is basically when I use adjusted R-squared. I read somewhere that F-stats are better.

Reader 3:
Yes to adjusted R-squared... both the F-stat and the adjusted R-squared do the same thing in terms of adjusting for the number of parameters being estimated (hence the adjustment); otherwise, R-squared straight up increases as the number of variables in your model (i.e. parameters being estimated) increases.

Why? Because over a concave space you will always achieve a lower minimum (the sum of squared errors, or residuals, will go down) with more variables in your function. Hence SSE goes down and R-squared = 1 - (SSE/SST) goes up.

Adjusted R-squared penalizes by the number of variables (parameters) to counteract the SSE going down. Same thing with the F-stat.
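This is easy to see on STATA's bundled auto dataset with a pure noise regressor:

sysuse auto, clear
regress price mpg
display "R2 = " e(r2) "   adj R2 = " e(r2_a)
gen noise = rnormal()                          // junk regressor
regress price mpg noise
display "R2 = " e(r2) "   adj R2 = " e(r2_a)   // R2 rises mechanically; adj R2 need not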


Response to Reader 2: This is the sentence I wanted: "And if your coefficient of years of schooling is still significant, it implies your estimation is robust."
So Reader 2 uses adjusted R-squared for robustness.

But what if the adjusted R-squared goes up but schooling becomes insignificant?
Not robust, right?
Throw out the model?

Reader 2: If your adjusted R-squared goes up and years of schooling loses significance, it implies that a) inflation is important to your model and b) years of schooling was capturing something inflation is explaining. So your estimation is not robust, and you have to either justify why inflation is theoretically incorrect to put in the model, or claim that inflation steals away an important years-of-schooling effect. Maybe inflation reflects the status of the economy, and when you have a bad economy, schooling plummets. Anyway, here is where everything becomes an art form. Obviously if you add every possible variable out there, you will eventually lose significance, so theory has to guide your specification. Of course this is just a robustness check. I found a paper claiming it isn't the greatest robustness check either, as you are hand-picking measurable variables when your issue is omitted variable bias.

Reader 3: I'm a non-believer in adjusted R-squared, except for Asif's use above.
Why does everyone else use it so much? Sociologists? Business? Do they not know better?

To be (Sig)?

Q: If you multiply a significant coefficient by another significant coefficient, is the resulting coefficient always significant? (standard errors presumably calculated by the delta method)

A: No, there's no statistical reason that it should be.
If you multiply them together, you are estimating a new statistical quantity that has its own standard error.
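In STATA, nlcom computes the delta-method standard error for such a product; a quick illustration on the bundled auto data:

sysuse auto, clear
regress price mpg weight
nlcom _b[mpg]*_b[weight]    // delta-method SE for the product of two coefficients
* whether the product is significant depends on the joint distribution of
* the estimates, not on each coefficient's own t-stat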