Zhang Chuanchuan is a research professor and doctoral supervisor at the School of Economics, Zhejiang University, where he also serves as a researcher at the Institute for Sharing and Development and a specially invited researcher at the Institute of National Institutions. He was previously an associate professor at the School of Economics, Central University of Finance and Economics. His research interests include the evaluation of social security policy; the determinants and consequences of health; the local labor-market effects of international trade; and the influence of culture and informal institutions on economic behavior and economic performance. His papers have appeared in English-language journals such as Demography, AEJ: Applied Economics, the Journal of Development Economics, the Journal of Comparative Economics, and the Journal of Population Economics, as well as in Chinese journals including Social Sciences in China, Economic Research Journal, China Economic Quarterly, Management World, and Journal of Financial Research.
Professor Zhang first presents the selection bias embedded in a first-order difference computed from cross-sectional data, which contaminates the estimate alongside the treatment effect, and demonstrates that the parallel-trend assumption of conventional DiD (i.e., that changes in the time trend are independent of treatment status) allows the treatment effect to be separated from this selection bias. The statistical difference between DiD and the commonly used fixed-effects approach is quite tenuous, so the policy background underpinning parallel trends is essential for justifying a DiD design. The existing literature often relies on a so-called policy shock to establish the DiD framework. Unfortunately, governments have long bundled multifarious policies together, making it extremely difficult to precisely estimate the treatment effect of any single policy with DiD. In addition, general equilibrium effects often blur the treatment effect. For instance, the university enrollment expansion simultaneously raised individuals' educational attainment and enlarged the labor supply, trimming the returns to education. Professor Zhang also warns against non-linear estimation in a DiD setting, because non-linear models cannot difference out the fixed effects the way OLS does.
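The basic 2x2 DiD logic described above can be sketched numerically. The group means below are made up for illustration and are not from the lecture:

```python
# Minimal 2x2 DiD sketch with made-up group means (not from the lecture).
# Treated group's mean moves from 10 to 15; control's moves from 8 to 9.
# Under parallel trends the control's change (9 - 8 = 1) is the treated
# group's counterfactual change, so the DiD isolates the treatment effect,
# while the raw post-period gap (15 - 9 = 6) still mixes in selection.

def did_2x2(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences of group means."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

effect = did_2x2(10.0, 15.0, 8.0, 9.0)
naive_gap = 15.0 - 9.0  # cross-sectional post difference, contaminated by selection
print(effect, naive_gap)  # 4.0 6.0
```

The gap between `naive_gap` and `effect` (here 2.0) is exactly the selection bias that differencing out the pre-period removes.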
The DiD approach reports a change in mean values in the figure illustrating parallel trends, and Professor Zhang suggests that comparing the distribution of the outcome variable between the treatment and control groups would further our understanding of the estimate.
Next, Professor Zhang explicates several extensions of DiD. He first introduces the staggered DiD, which concerns variation in treatment timing or intensity. There are usually two approaches to estimating a staggered DiD: the static and the dynamic two-way fixed effects (TWFE) specification. In the static one, the core independent variable is a single indicator of whether unit i is exposed to the policy at time t; its flaw lies in assuming a constant treatment effect across time (in reality, treatment effects may fade away). The dynamic model addresses this by augmenting TWFE with "leads" and "lags" of the event indicator: the coefficients on the "lags" are interpreted as the dynamic path of causal effects, and the statistical significance of the "leads" is often used to examine the pre-trend.
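Building the "leads" and "lags" for the dynamic TWFE specification can be sketched as follows. The window of plus/minus 3 and the choice of k = -1 as the omitted reference period are illustrative assumptions, not choices stated in the lecture:

```python
def event_time_dummies(t, event_t, window=3):
    """Return {k: 0/1} event-time dummies for relative time k in
    [-window, window], omitting k = -1 as the reference period and
    binning observations beyond the endpoints into the endpoint dummies."""
    rel = t - event_t  # time relative to initial treatment
    dummies = {}
    for k in range(-window, window + 1):
        if k == -1:
            continue  # omitted reference period
        if k == -window:
            dummies[k] = int(rel <= k)   # bin distant leads
        elif k == window:
            dummies[k] = int(rel >= k)   # bin distant lags
        else:
            dummies[k] = int(rel == k)
    return dummies

# A unit treated at period 10, observed at period 12 -> relative time +2:
d = event_time_dummies(t=12, event_t=10)
print(d[2], d[0])  # 1 0
```

Regressing the outcome on these dummies (plus unit and time fixed effects) yields the lag coefficients as the dynamic effect path and the lead coefficients as the pre-trend check.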
Professor Zhang then presents the cohort DiD based on cross-sectional data, which is often employed in studies of events experienced in early childhood. The triple difference (DDD) is the third extension he demonstrates. A DDD comprises two DiD estimations: one is of primary interest and the other serves as a placebo test. We should avoid reporting only the combined DDD, which masks potential spillover effects that can be spotted by running the two DiD estimations separately.
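The point about running the two DiDs separately can be made with simple arithmetic. All group means below are hypothetical:

```python
# DDD as the difference of two DiDs (made-up means for illustration).

def did(g_pre, g_post, c_pre, c_post):
    """2x2 difference-in-differences of group means."""
    return (g_post - g_pre) - (c_post - c_pre)

# The DiD of interest (e.g., a group directly targeted by the policy)...
did_affected = did(10.0, 15.0, 8.0, 9.0)   # 4.0
# ...and the placebo DiD (e.g., a nominally unaffected group).
did_placebo = did(7.0, 8.5, 6.0, 7.0)      # 0.5

ddd = did_affected - did_placebo            # 3.5
# Inspecting did_placebo on its own (here 0.5, not 0) reveals that the
# "untouched" group also moved, i.e., a possible spillover that a single
# pooled DDD regression would mask inside the net estimate.
print(ddd)
```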
The fourth extension Professor Zhang introduced is the event study, which allows different units to be treated at different times. To avoid collinearity, observations in the period just before initial treatment, as well as those most distant from the treatment, are often omitted as the reference categories.
The fifth extension is the synthetic control method, which constructs a control group from pre-intervention characteristics so that the constructed control group is as similar as possible to the treatment group; the pre-treatment trends of the treatment and synthetic control groups should overlap or run parallel. The sixth extension is the Bartik (shift-share) instrument, which has the advantage of alleviating general equilibrium effects.
Next, Professor Zhang offers a comprehensive review of recent developments in the DiD literature. The negative weights arising from comparisons between later-treated and earlier-treated groups can bias the static DiD estimator. One way to deal with this bias is to use only the newly switching units, i.e., recent joiners or leavers, as in the instantaneous DiD. The severity of the contamination is determined by the weights and the magnitude of the terms comparing later-treated with earlier-treated groups. The corresponding Stata command for decomposing a given DiD estimate into its constituent comparisons is "bacondecomp".
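The "forbidden" later-versus-earlier comparison behind these negative-weight problems can be seen with hypothetical numbers. The cohorts, timings, and effect paths below are invented for illustration:

```python
# Sketch of the later-vs-earlier-treated comparison (made-up numbers).
# Early cohort: treated at t=2 with an effect that GROWS by 1 each period.
# Late cohort: treated at t=4 with a constant effect of 2.
# Both share the same untreated baseline (normalized to 0 here).

def y(effect_schedule, t):
    """Outcome = baseline 0 + the cohort's dynamic treatment effect at t."""
    return effect_schedule.get(t, 0.0)

early = {2: 1.0, 3: 2.0, 4: 3.0, 5: 4.0}   # effect keeps growing post-treatment
late = {4: 2.0, 5: 2.0}                     # constant effect of 2

# 2x2 DiD that uses the ALREADY-TREATED early cohort as "control" for the
# late cohort's switch at t=4 (changes from t=3 to t=4):
did_late_vs_early = (y(late, 4) - y(late, 3)) - (y(early, 4) - y(early, 3))
print(did_late_vs_early)  # 1.0, not the true effect 2.0
# The early cohort's own effect growth (3 - 2 = 1) is wrongly differenced
# out. Such 2x2 terms enter the static TWFE estimate with weights that can
# even turn negative, which is what a Bacon-style decomposition exposes.
```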
Regarding the dynamic TWFE model, the coefficient on a given lag l captures not only the treatment effect at relative time l but also a weighted average of effects from other periods. The missing regressors are arguably the culprit for this kind of contamination. Specifically, in a panel that is balanced in calendar time, differences in treatment timing make the panel unbalanced in event time. For example, Professor Zhang considers a panel spanning periods 0 through 20: if individual i is treated at period 10, his or her relative time runs over [-10, 10], while individual j, treated at period 2, spans [-2, 18]. Time relative to initial treatment is therefore naturally unbalanced. Homogeneous treatment effects would neutralize this contamination; in the more common case of heterogeneous treatment effects, however, the estimation can suffer badly. Professor Zhang next explains why he and his coauthor Professor Xu found little heterogeneity in most published papers: published work tends to use samples that satisfy the parallel-trend assumption and therefore tends to exhibit homogeneity.
In the next section, Professor Zhang recommends a parametric approach, the cohort-specific treatment effect, to estimate the weighted treatment effect from a set of "building blocks" (to put it more bluntly, identified disaggregated causal effects). Specifically, we first run an event study separately for each group sharing the same initial treatment timing, which guarantees consistency of treatment timing within each estimation and effectively addresses the "missing regressors" problem. The overall DiD estimator is then obtained by aggregating the cohort-level results with their corresponding weights.
In practice, we can first generate dummies signaling each group with a different initial treatment timing, and then interact these group dummies with the relative-time (event-time) dummies. We finally average the coefficients on these interactions using weights measuring the probability of belonging to each treated cohort. The weights and the estimation results can be computed via the Stata commands "eventstudyweights" and "eventstudyinteract", respectively. The command "csdid" can also be used to implement the "building blocks" approach with a different way of dividing the blocks. It is worth mentioning that the standard errors obtained through these commands are sensitive to the covariates entering the equation; these methods are still fledgling.
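The final aggregation step can be sketched with hypothetical numbers. The cohort names, effects, and weights below are invented; in practice the cohort-by-relative-time coefficients come from the interacted regression and the weights from the cohort shares:

```python
# Sketch of the "building blocks" aggregation (hypothetical numbers).
# Suppose the cohort-specific effects at relative time l = 0 have been
# estimated separately for each treatment-timing cohort:
cohort_effects_l0 = {"treated_2005": 2.0, "treated_2008": 4.0}
# Weights: each cohort's share of treated observations at l = 0.
cohort_weights = {"treated_2005": 0.25, "treated_2008": 0.75}

att_l0 = sum(cohort_effects_l0[c] * cohort_weights[c]
             for c in cohort_effects_l0)
print(att_l0)  # 3.5
# Because the weights are non-negative shares summing to 1, the aggregate
# is a convex combination of cohort effects: no negative-weight problem.
```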
Professor Zhang also presents the "imputation estimator", which first regresses Y(0), using only untreated observations, on unit and time fixed effects to estimate those fixed effects. It then confines the sample to the treated group and uses the estimated fixed effects to produce the fitted value of the counterfactual outcome, denoted Y-hat(0). The difference between the observed outcome and Y-hat(0), averaged over the treated observations, is the ATT.
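The two steps of the imputation estimator can be sketched on a tiny panel. The example assumes, for exact arithmetic, that untreated outcomes are perfectly additive in unit and time effects; the numbers are made up:

```python
# Imputation-estimator sketch on a tiny panel where untreated outcomes
# satisfy Y(0)[i,t] = alpha_i + gamma_t exactly (made-up numbers).
# Unit "i" is never treated; unit "j" becomes treated at t = 2.
Y = {("i", 0): 1.0, ("i", 1): 2.0, ("i", 2): 3.0,
     ("j", 0): 6.0, ("j", 1): 7.0, ("j", 2): 10.0}  # (j,2) includes the effect
treated = {("j", 2)}

# Step 1: fit the fixed effects on UNTREATED observations only.
# Normalize alpha_i = 0, so gamma_t can be read off the never-treated unit...
gamma = {t: Y[("i", t)] for t in (0, 1, 2)}
# ...and alpha_j is the mean gap in j's untreated (pre-treatment) periods.
alpha_j = sum(Y[("j", t)] - gamma[t] for t in (0, 1)) / 2  # = 5.0

# Step 2: impute the counterfactual Y(0) for the treated cells and average
# the gaps Y - Y_hat(0) over treated observations to obtain the ATT.
att = sum(Y[cell] - (alpha_j + gamma[cell[1]]) for cell in treated) / len(treated)
print(att)  # 10.0 - (5.0 + 3.0) = 2.0
```

With a real, noisy panel the fixed effects would be estimated by least squares on the untreated sample rather than read off exactly, but the imputation logic is the same.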
In the last section, Professor Zhang contrasts the OLS estimates with those from the various recent DiD methods and suggests employing these approaches at least in robustness checks to lend support to our studies.
Deeply moved by Professor Zhang's positive and open-minded attitude, the students enthusiastically discussed career planning and scientific research with him, and Professor Zhang welcomed everyone to reach out to him at any time, bringing the lecture to a successful conclusion. We thank Professor Zhang for his wonderful sharing with the students!