# Douglas Harris (Dept. of Educational Leadership & Policy Studies, Florida State University) and Tim R. Sass (Dept. of Economics, Florida State University)

Equation (8), often called the “gain-score model” in the education literature, has been used by a number of authors to gauge the impact of teachers on student achievement.10 As noted in Table 1, studies of teacher quality that employ achievement gains as the dependent variable include Ballou, Sanders and Wright (2004), Goldhaber and Anthony (2004), Rivkin, Hanushek and Kain (2005) and Wright, Horn and Sanders (1997). None of these papers tests the restriction on λ, however.
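The equation itself is not reproduced above; in the paper's notation, with achievement A_it, student/family inputs X_it, school inputs E_it, and persistence parameter λ, the gain-score model can be sketched as the λ = 1 restriction of the value-added specification (the coefficient symbols β and γ are illustrative, not necessarily the paper's):

```latex
A_{it} - A_{i,t-1} = \beta X_{it} + \gamma E_{it} + \varepsilon_{it}
```

Testing the restriction on λ amounts to checking whether the coefficient on lagged achievement in the unrestricted model is in fact one.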

E. The Contemporaneous Specification

A third alternative is to assume there is no effect of lagged inputs on current achievement.

In other words, there is immediate and complete decay, so that (1-λ) = 1 or λ = 0. In this case, lagged achievement drops out of the achievement function and equation (7) becomes:
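Under λ = 0 the specification reduces to a levels equation in current-period inputs only; as a sketch with illustrative symbols (achievement A_it, student/family inputs X_it, school inputs E_it):

```latex
A_{it} = \beta X_{it} + \gamma E_{it} + \varepsilon_{it}
```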

Alternatively, the model can be derived by starting with a model of student learning gains (rather than levels) and assuming that past schooling inputs have no persistent effect on learning gains.

This gain score model should be distinguished from the two-period gain score studies mentioned earlier that cannot control for the unobserved individual effects of students, teachers, and schools. See Harris and Sass (2006) for a discussion of the earlier type of gain score studies.

This is the approach taken by Dee (2004) and Rockoff (2004).

III. Modeling Teacher and School Inputs

A. Decomposing School Inputs

The vector of school-based educational inputs, Eit, contains both school-level inputs in school m, Smt, such as the quality of principals and other administrative staff, as well as a vector of classroom-level inputs in classroom j, Cjt. The vector of classroom inputs can be divided into four components: peer characteristics, P-ijmt (where the subscript -i denotes students other than individual i in the classroom); time-varying teacher characteristics (such as experience), Tkt (where k indexes teachers); time-invariant teacher characteristics (such as “innate” ability and pre-service education), δk; and non-teacher classroom-level inputs (such as books, computers, etc.), Zjt. If we assume that, except for teacher quality, there is no variation in education inputs across classrooms within a school, the effect of Zjt becomes part of the school-level input vector, Smt. If we further assume that school-level inputs are constant over the time span of the analysis, they can be captured by a school fixed component, φm.11 The value-added model can then be expressed as:
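A sketch of the decomposed model, using the components defined above (the coefficient symbols λ, β, θ, and τ are illustrative; δ_k and φ_m are the teacher and school fixed components):

```latex
A_{it} = \lambda A_{i,t-1} + \beta X_{it} + \theta P_{-ijmt} + \tau T_{kt} + \delta_k + \phi_m + \eta_{it}
```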

Rarely do data exist on time-varying school-specific characteristics (i.e., variables that change for one school but not for others within a district). One exception is the characteristics of school principals. We intend to analyze the impact of principals on student achievement in future work.

B. Modeling Teacher Effects

There are three major sources of variation in the modeling of teacher effects within educational production function models. First is the choice between measuring time-invariant teacher characteristics with covariates, such as race, gender, and college selectivity, or employing teacher-specific effects. Replacing the time-invariant teacher effect with a vector of constant teacher covariates Yk in the value-added equation yields:

where υit = (δk-ρYk) + ηit. This approach will produce biased estimates if unmeasured time-invariant teacher characteristics, (δk-ρYk), which are now part of the error term, are correlated with the observed time-varying student, peer, or teacher variables in the model (i.e., Xit, P-ijmt, Tkt).
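In sketch form (the coefficient symbols are illustrative), substituting ρY_k for the teacher effect δ_k pushes the approximation error into the composite residual:

```latex
A_{it} = \lambda A_{i,t-1} + \beta X_{it} + \theta P_{-ijmt} + \tau T_{kt} + \rho Y_k + \phi_m + \upsilon_{it},
\qquad \upsilon_{it} = (\delta_k - \rho Y_k) + \eta_{it}
```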

With teacher-specific effects, there is also the choice between fixed and random effects estimators. The impact of an individual teacher on student achievement can be modeled as a parameter specific to that teacher (a “fixed effect”) or as a draw from a normal distribution. One can argue whether the population of teacher quality levels is normally distributed.12 From a practical standpoint, however, the choice between fixed and random effects often boils down to a tradeoff between computational cost and consistency of the estimator. Traditionally, the fixed effects or dummy-variable approach has been computationally burdensome. If student fixed effects are used to control for individual heterogeneity among students, then incorporating fixed effects for teachers has required incorporating a dummy variable for each teacher into the model.13 However, a new iterative fixed-effects estimator introduced by Arcidiacono et al. (2005) has greatly reduced the computational cost of estimating multi-level fixed effects models. This new technique has been employed by Harris and Sass (2006) to estimate models with student, teacher and school fixed effects using a large sample of students.

12 It is possible to estimate models which use other distributions for random effects, such as mixtures of normal distributions. See Verbeke and Lesaffre (1996).

The random-effects approach essentially makes the teacher effect part of the error term, and thus requires that the teacher effect be independent of the other explanatory variables in order to generate consistent estimates. For example, if peer effects are included in the model, then random effects estimation would yield consistent estimates only if time-invariant teacher quality were uncorrelated with the characteristics of students in the classroom, which is unlikely. Since fixed effects are consistent (and random effects are inconsistent) when teacher effects are correlated with the explanatory variables in the model, a Hausman test can be conducted to test for the consistency of the random effects estimator. To the best of our knowledge, the only existing study to conduct such a test is Goldhaber and Brewer (1997), who fail to reject the consistency of random teacher effects. In fact, only five studies include any sort of teacher-specific effect along with a measure of student heterogeneity: three employ teacher fixed effects (Aaronson, Barrow and Sander (2003), Ballou, Sanders and Wright (2004) and Rivkin, Hanushek and Kain (2005))14 and two utilize teacher random effects (Goldhaber and Brewer (1997), Rockoff (2004)).15

The third specification issue related to teacher effects is whether the impact of teachers on student achievement varies across students. It may be that some teachers possess skills that aid some students more than others, or perhaps the racial/ethnic identity of a teacher has differential effects on students of different races and ethnicities. To control for potential variation in teacher effects among students, a number of studies either include interactions between teacher and student characteristics (Wright, Horn and Sanders (1997)) or conduct separate analyses for different groups of students (Aaronson, Barrow and Sander (2003), Dee (2004), Goldhaber and Anthony (2004)).

13 Individual student effects can be taken into account by differencing the data with respect to student means, but then the teacher effects must be entered as dummy variables in a regression using the (student) de-meaned data. For a discussion of the computational issues involved see Andrews, Schank, and Upward (2004).

14 Rivkin, Hanushek and Kain do not observe the specific classroom assignments of teachers and thus their teacher fixed effects are really school-by-grade effects that represent the average quality of teachers in a given grade level at a particular school.

15 Teacher fixed effects have also been used in recent studies of peer effects. See Burke and Sass (2006) and Cooley (2005).
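The de-meaning strategy mentioned above (difference the data with respect to student means, then enter the teacher effects as dummy variables) can be illustrated with simulated data. Everything below, including sample sizes, variances, and the random assignment of teachers, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_students, n_teachers, periods = 500, 30, 5
delta = rng.normal(0, 1, n_teachers)            # "true" teacher effects
alpha = rng.normal(0, 1, n_students)            # unobserved student effects
i = np.repeat(np.arange(n_students), periods)   # student id per observation
k = rng.integers(0, n_teachers, i.size)         # teacher assignment each year
y = delta[k] + alpha[i] + rng.normal(0, 0.5, i.size)

def demean_by_student(M):
    """Subtract each student's mean (column-wise for 2-D arrays)."""
    sums = np.zeros((n_students,) + M.shape[1:])
    np.add.at(sums, i, M)
    return M - (sums / periods)[i]

D = np.eye(n_teachers)[k]                       # teacher dummy matrix
# de-mean the outcome AND the dummies by student, then run OLS on the dummies
dhat, *_ = np.linalg.lstsq(demean_by_student(D), demean_by_student(y), rcond=None)

# only contrasts are identified, so compare effects centered at their mean
err = (dhat - dhat.mean()) - (delta - delta.mean())
```

Once the student effects are swept out, only contrasts between teachers are identified, which is why the estimated and true effects are compared after centering.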

C. Modeling Peers, Other Classroom Factors, and School Effects

There is a rapidly growing empirical literature on classroom peer effects. It is well known that if students are assigned to classrooms non-randomly and peer-group composition affects achievement, then failure to control for the characteristics of classroom peers will produce biased estimates of the impact of teachers on student achievement. The measured teacher effects will capture not only the true impact of teachers but will also partially reflect the impacts of omitted peer characteristics. Recognizing this potential problem, the majority of existing studies of teacher effects contain at least crude measures of peer characteristics, like the proportion of classmates eligible for free/reduced-price lunch. An alternative approach is to focus on classes where students are, or appear to be, randomly assigned, as in Clotfelter, Ladd, and Vigdor (2005), Dee (2004), and Nye, Konstantopoulos, and Hedges (2004).

As with the effects of peers, omission of other classroom-level variables can bias the estimated impact of teachers on student achievement if the allocation of non-teacher resources across classrooms is correlated with the assignment of teachers and students to classrooms. For example, principals may seek to aid inexperienced teachers by giving them additional computer resources. Similarly, classrooms containing a disproportionate share of low-achieving or disruptive students may receive additional resources, like teacher aides. Due to the paucity of classroom data on non-teaching personnel and equipment, most studies omit any controls for non-teacher inputs. The only exceptions are Dee (2004) and Nye, Konstantopoulos and Hedges (2004), who use data from the Tennessee class-size experiment, where classrooms were explicitly divided into three types: small classes, larger classes with an aide, and larger classes with no aide.

Student achievement may be influenced by factors within schools but outside the classroom, such as the interactions and coordination among teachers, the leadership of the school principal, and the alignment of the adopted curriculum to achievement tests. It is rarely possible to measure any of these inputs directly and instead many researchers now include individual school effects. Like the teacher effects discussed above, these can be modeled as either fixed or random. When school-level effects are included, the teacher effects measure the influence of each teacher relative to the others within the same school.16 This has important implications for the interpretation of our results, as discussed below.

D. Aggregation Issues

Historically, analyses of student achievement were often done at the school or even the district level, since that was the most disaggregated data available. With the advent of student-level panel data, individual student achievement can be analyzed, but it is frequently difficult to match students to their specific teachers. While precise student-teacher matching can be done at all grade levels in Florida and parts of New York City, and for elementary-school students in North Carolina, it is not currently possible to link students to their teachers in longitudinal databases from other areas, such as Texas.

16 Teacher and school effects can be combined into a single teacher-school “spell” effect, which then requires only a separate dummy for each unique teacher-school combination.

There are both potential advantages and disadvantages to aggregating data across teachers. As noted by Rivkin, Hanushek and Kain (2005), aggregating teacher characteristics to the grade within a school has the advantage of avoiding issues associated with non-random assignment of students to teachers. Also, it is well known that measurement error in the key independent variables places a downward bias on the coefficients (Grunfeld & Griliches, 1960).

Measuring teacher attributes at the grade level rather than the individual level may therefore reduce this bias since errors at the individual teacher level may cancel out at the grade level.

Hanushek, Rivkin and Taylor (1996) discuss this possibility, but they also show that aggregation can upwardly bias the estimated impacts of school resources in the presence of omitted variables, especially when the omitted variables occur at the same level as the aggregation. Aggregation will of course also tend to reduce the precision of estimates.
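The attenuation argument can be illustrated with a small simulation in which a grade-level attribute is measured with noise at the teacher level (all variances and sample sizes below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, n_grades, per = 1.0, 2000, 5              # 5 teachers per grade
g = rng.normal(0, 1, n_grades)                  # grade-level "true" attribute
t = np.repeat(g, per)                           # teachers in a grade share it
y = beta * t + rng.normal(0, 0.5, t.size)       # teacher-level outcome
obs = t + rng.normal(0, 1, t.size)              # noisy measured attribute

# teacher-level slope: classical measurement error attenuates toward zero
b_ind = (obs @ y) / (obs @ obs)                 # plim = beta * 1/(1+1) = 0.5

# grade-level slope: averaging 5 noisy measures cuts error variance to 1/5
obs_g = obs.reshape(n_grades, per).mean(axis=1)
y_g = y.reshape(n_grades, per).mean(axis=1)
b_agg = (obs_g @ y_g) / (obs_g @ obs_g)         # plim = beta * 1/(1 + 1/5)
```

Averaging the noisy teacher-level measures within a grade shrinks the measurement-error variance, so the grade-level slope sits closer to the true coefficient, consistent with the cancellation argument above.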

IV. Modeling Student/Family Inputs

The most important contributors to student learning are arguably the students themselves.

Therefore, in the absence of random assignment, differences in average classroom performance will reflect not only the quality of teachers, but the ability of students as well. Consequently, it is important to control for student characteristics when evaluating the influence of teachers on student performance. While all studies include some measures of observed student characteristics, like race, gender and student mobility, there are distinct differences in how extant studies account for student and family characteristics that are typically unobservable, such as student ability and motivation.

A. Fixed and Random Student-Specific Effects

With the recent availability of longitudinal data, models of student achievement now frequently employ student-specific effects to control for time-invariant student and family characteristics. For example, in the economics literature Goldhaber and Anthony (2004), Rivkin, Hanushek and Kain (2005) and Rockoff (2004) directly capture the effect of time-invariant student-level ability and family inputs by incorporating individual fixed effects. In contrast to the use of student covariates, the fixed-effects approach should control for all time-invariant student characteristics, both observed and unobserved. In the education literature, few studies include individual-specific effects. One exception is Rowan, Correnti, and Miller (2002), who include student-specific effects in the context of a hierarchical linear model (HLM) framework.

As with the effects of teachers, the impact of unobserved student and family characteristics on student achievement can be modeled as either a fixed or a random effect.17 Random student effects will produce inconsistent estimates of model parameters if unobserved student heterogeneity is correlated with explanatory variables in the model. Since fixed effects yield consistent estimates even when they are correlated with other independent variables, the consistency of random effects estimators can be tested by comparing the parameter estimates from fixed- and random-effects models via a Hausman test. Formal tests of the consistency of random effects estimators are rarely done, however,18 perhaps due to the computational problems associated with estimating models with both student and teacher/school fixed effects. As indicated above, however, recent advances in econometric methodology have greatly reduced the computational cost of estimating multi-level fixed effects models, making this a viable alternative to the random effects methodology frequently employed in the past.

18 Todd and Wolpin (2005) conduct Hausman tests of fixed versus random student effects in their model of racial test score gaps and find that mother-specific endowment effects are not orthogonal to included inputs in the math achievement equation; thus the random effects specification is rejected. They do not reject the random effects specification for reading achievement, however.
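A Hausman-type comparison can be sketched as follows, with pooled OLS standing in for the random-effects estimator and all numbers invented for illustration; abs() guards against a negative estimated variance difference in finite samples:

```python
import numpy as np

rng = np.random.default_rng(2)
n_students, periods, beta = 1000, 4, 0.5
alpha = rng.normal(0, 1, n_students)            # unobserved student heterogeneity
i = np.repeat(np.arange(n_students), periods)
x = 0.6 * alpha[i] + rng.normal(0, 1, i.size)   # input correlated with ability
y = beta * x + alpha[i] + rng.normal(0, 1, i.size)

def ols_slope(xv, yv):
    """Slope through the origin and an estimate of its sampling variance."""
    b = (xv @ yv) / (xv @ xv)
    resid = yv - b * xv
    return b, (resid @ resid / (xv.size - 2)) / (xv @ xv)

# within (student fixed-effects) estimator: de-mean by student
xd = x - (np.bincount(i, weights=x) / periods)[i]
yd = y - (np.bincount(i, weights=y) / periods)[i]
b_fe, v_fe = ols_slope(xd, yd)

# pooled OLS, standing in for the random-effects estimator (biased here)
b_re, v_re = ols_slope(x, y)

# Hausman-type statistic: chi-squared(1) under the null of no correlation
H = (b_fe - b_re) ** 2 / abs(v_fe - v_re)
reject = H > 3.84                               # 5% critical value
```

Because the simulated student effect is correlated with the input, the pooled estimate is biased upward, the two estimates diverge, and the statistic rejects the random-effects (here, pooled) specification.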