# University of Nebraska - Lincoln DigitalCommons: Dissertations and Theses in Statistics, Department of Statistics, 8-2010

The NLM(0.7) estimates are also susceptible to within-track centering of the covariates. The expected deviation scores are found by comparing a teacher’s mean student deviation score to the mean deviation scores of students in the same track rather than the mean deviation scores of all students for whom scores are available. For example, the NLM(0.7) estimate for teacher D uses M1DE and M3DE to center the year one and year three covariates, respectively. Teacher D and E’s students are not in the same track as teacher F’s student. Consequently, the year one and year three deviation scores of teacher F’s student do not impact the centering of the covariates for estimating teacher D and teacher E’s NLM(0.7) effects. Although these scores are included in the estimation of the year one and year three fixed effect means, the fixed effect means used in the calculation of M1D, M1DE, M3D and M3DE cancel when finding the differences (M1D – M1DE) and (M3D – M3DE). Therefore, teacher F’s student deviation scores do not affect the NLM(0.7) estimates of teacher D and E’s effects. Similarly, when estimating the NLM(0.7) effect for teacher F in year two, the year one and year three mean deviation scores are found for only the students in that same track. This example is particularly problematic, because the within-track centering of the covariates simplifies the formula for this estimate,

M2F – b(M1F – M1F) – b(M3F – M3F),

to be merely the unadjusted mean deviation score for teacher F’s student in year two, M2F, the same as the NLM(0) estimate; no adjustments are made, because the expected deviations are zero.
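The cancellation of the fixed effect means under within-track centering can be checked numerically. This sketch uses hypothetical year one scores (illustrative values, not those from Figure 2.3) and shows that the centered difference M1D – M1DE is unchanged by whatever fixed effect mean is subtracted, so scores from outside the track cannot influence it:

```python
# Hypothetical year one scores for teacher D's and teacher E's students
# (illustrative values only; not the scores from Figure 2.3).
d_scores = [-20.0, -40.0]   # students taught by teacher D in year two
e_scores = [-10.0, -30.0]   # students taught by teacher E in year two

def mean(xs):
    return sum(xs) / len(xs)

def centered_difference(d, e, fixed_effect_mean):
    """M1D - M1DE after subtracting a year one fixed effect mean."""
    d_dev = [x - fixed_effect_mean for x in d]
    e_dev = [x - fixed_effect_mean for x in e]
    m1d = mean(d_dev)            # mean deviation for teacher D's students
    m1de = mean(d_dev + e_dev)   # track mean for teachers D and E combined
    return m1d - m1de

# The fixed effect mean cancels in the difference, so changing it (for
# instance, because teacher F's student raises or lowers the overall
# year one mean) leaves the covariate adjustment untouched.
print(centered_difference(d_scores, e_scores, 0.0))    # -5.0
print(centered_difference(d_scores, e_scores, -12.5))  # -5.0
```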

In year three, the NLM(0.7) estimates have similar issues. The estimated effect for teacher G in year three,

M3G – b(M1G – M1GH) – b(M2G – M2GH),

is the mean deviation score for teacher G’s students in year three, M3G, adjusted for the students’ performance in years one and two, b(M1G – M1GH) and b(M2G – M2GH), respectively. Although the covariates for years one and two are not affected by the year three deviation scores, the year three NLM(0.7) estimates are still susceptible to within-track centering. Consequently, the NLM(0.7) estimates for teachers F and J do not account for the student’s previous performance and, therefore, are not truly value-added effects.

The LM(0) teacher effect estimates are also adjusted means, but the estimates incorporate between-track differences and between-student correlations. For instance, the random effect of teacher D in year two is estimated using

M2D – M1DE + 0.5[(M3D – M3DE) – (M2D – M2DE)].

The mean deviation score of students taught by teacher D in year two, M2D = -60, is adjusted by the mean year one deviation score for students taught by either teacher D or E in year two, M1DE = -30, and the performance of teacher D’s students in year three compared to year two, relative to students on the same track, 0.5[(M3D – M3DE) – (M2D – M2DE)] = 0.5[{(-70) – (-30)} – {(-60) – (-30)}] = -5. Together, these adjustments produce the estimated LM(0) effect for teacher D in year two, -35, which is higher than the NLM(0) estimate of -60, but still below the average teacher effect. The LM(0) effect for teacher F in year two simplifies to be M2F – M1F + 0.5[(M3F – M3F) – (M2F – M2F)] = M2F – M1F, because this track consists of only one teacher in each year, nullifying the adjustment for future scores and reducing the teacher effect to simply a mean gain. However, in both of these estimates, the LM(0) effects account for between-track differences, unlike the NLM(0.7) estimates. The mean deviation of the previous year’s scores is track specific, but it incorporates the fixed effect mean for that year, which is estimated using data from all students, irrespective of track. This is contrary to the NLM(0.7) estimates, in which each fixed effect mean cancels due to the centering of each covariate.
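The arithmetic behind teacher D’s LM(0) estimate can be verified directly from the mean deviation scores quoted above (a minimal check of the calculation, not part of any estimation software):

```python
# Mean deviation scores for teacher D's track, as given in the text.
M2D, M2DE = -60.0, -30.0   # year two: teacher D's students vs. track mean
M1DE = -30.0               # year one track mean for teachers D and E
M3D, M3DE = -70.0, -30.0   # year three: teacher D's students vs. track mean

# Adjustment for the students' relative performance in year three
# compared to year two, relative to students on the same track.
future_adjustment = 0.5 * ((M3D - M3DE) - (M2D - M2DE))
print(future_adjustment)   # -5.0

# LM(0) effect for teacher D in year two: the adjusted mean deviation score.
lm0_effect_D = M2D - M1DE + future_adjustment
print(lm0_effect_D)        # -35.0
```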

The LM(0) estimates also account for the between-student correlations induced by using the layered design matrix. Expected deviation scores are found based on deviation scores for students who have shared at least one common teacher. In the adjustment for teacher D’s effect, 0.5[(M3D – M3DE) – (M2D – M2DE)], teacher D is rewarded if his or her students perform better in year three than they did in year two, relative to other students on the same track. Likewise, if teacher D’s students have relatively worse future performance, his or her teacher effect estimate decreases; this is contrary to the estimation of the NLM(0.7) teacher effects. Although no future scores are available to obtain this type of adjustment for year three LM(0) teacher effect estimates, between-student correlations are accounted for through the additional prior year adjustment in the teacher effect estimates. For instance, the formula for the LM(0) estimate for teacher G in year three is M3G – M2GH, where M2GH adjusts the year three mean deviation score for teacher G, M3G, by accounting for prior mean score deviations, between-track differences and between-student correlations. This type of adjustment also occurs in the year two LM(0) teacher effect estimates (e.g., M1DE for teacher D’s effect and M1F for teacher F’s effect). Yet, none of the LM(0) teacher effect estimates account for within-student correlations, so expected deviation scores for a student cannot be based on other scores from the same student, only on scores of students who have shared the same teacher(s).

The LM(0.7) estimates account for both between- and within-student correlations. The LM(0.7) random effect of teacher D in year two is estimated using M2D – M1DE – 0.7(M1D – M1DE) + 0.5[(M3D – M3DE) – (M2D – M2DE)], which is similar to the LM(0) estimate. However, the LM(0.7) model utilizes within-student correlations to obtain expected deviation scores for students based on other scores from those same students. The inclusion of the additional term, 0.7(M1D – M1DE), compares the mean year one deviation score for students who had teacher D in year two to the mean year one deviation score for students in the same track. The LM(0.7) estimate for teacher G in year three,

M3G – M2GH – b(M1G – M1GH) – b(M2G – M2GH),

also accounts for within-student correlations by incorporating the students’ prior mean deviation scores in both year one, b(M1G – M1GH), and year two, b(M2G – M2GH). By making use of both between- and within-student information, the LM(0.7) teacher effect estimates utilize expected deviation scores based on scores of students who have shared the same teacher(s), as well as other scores from the same student.
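The effect of the extra within-student term can be sketched numerically. The mean deviation scores below are those quoted in the text for teacher D, except M1D, which the text does not report separately from the track mean M1DE; the value used here is hypothetical:

```python
# LM(0) vs. LM(0.7) estimates for teacher D in year two.
M2D, M2DE, M1DE = -60.0, -30.0, -30.0
M3D, M3DE = -70.0, -30.0
M1D = -45.0   # hypothetical: D's students started below their track mean

# The portion of the formula common to LM(0) and LM(0.7).
shared = M2D - M1DE + 0.5 * ((M3D - M3DE) - (M2D - M2DE))

lm0_effect = shared                           # -35.0, as in the text
lm07_effect = shared - 0.7 * (M1D - M1DE)     # adds the within-student term
print(lm0_effect, lm07_effect)                # -35.0 -24.5
```

Under this hypothetical M1D, the within-student adjustment credits teacher D for students who started below their track’s year one mean, pulling the estimate from -35 to -24.5.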

Table 2.2 displays the year two and year three teacher effect estimates from each of the four models discussed, using the student scores graphed in Figure 2.3. Estimates are not included for year one teachers A, B and C, because their effect estimates cannot be described as value-added. However, comparing the year two and year three teacher effect estimates is particularly enlightening. As illustrated in Figure 2.3, student gains from year one to year two and from year two to year three support the notion that teachers E and H had relatively higher than average effects on student learning. Alternatively, the effects of teachers D and G on student gains from one year to the next were below average, while the effects of teachers F and J were average. The effects of teachers G and H are further magnified by considering each student’s score trajectory from year one to year two through the inclusion of within-student correlations; the students’ trajectories drastically change after teacher G’s and teacher H’s instruction, whereas the trajectory for teacher J’s student does not change.

**Table 2.2: Teacher Effect Estimates from NLM(0), NLM(0.7), LM(0) and LM(0.7)**

The non-layered models produce estimates reflecting mean deviation scores; NLM(0) estimates are merely unadjusted mean deviation scores, while NLM(0.7) estimates are mean deviation scores adjusted for students’ prior and/or future achievement. Consequently, the teacher effect estimates for these models portray teachers F and J as highly effective teachers, simply because their student had higher than average scores in years two and three, not because the student showed higher than average growth.

Alternatively, the layered models produce estimates reflecting mean gains in deviation scores. As a result, teachers F and J had teacher effect estimates of zero for both layered models, because their student had relatively average mean gains and the student’s score trajectory did not change over time; the estimate of zero conveys the teacher did not have a value-added effect on the student’s learning. Additionally, accounting for within-student correlations provides more information about student growth over time, pulling the other LM(0.7) teacher effect estimates further from zero than the LM(0) estimates for the same teachers.

The variable persistence model (Equation 2.8) proposed by McCaffrey et al. (2004) allows teacher persistency parameters to vary. However, estimated persistency parameters tend to be near zero (Lockwood, McCaffrey, Mariano, et al., 2007; McCaffrey et al., 2004), so the teacher effect estimates from the low-persistency model approach the undesirable behavior of the NLM(0.7) estimates and do not appear to be value-added; estimates fail to account for between-track differences, and teachers are penalized for students who perform unexpectedly well in future years. As discussed by McCaffrey et al. (2004), the non-layered and low-persistency model estimates are also typically more susceptible to bias from omitted variables correlated with the level of student achievement than are the layered model estimates. In fact, the risk of overcorrecting teacher effects arises if non-instructional, time-invariant covariates beyond a school system’s control, such as race and poverty status, are included in the layered model. However, teacher effect estimates are susceptible to bias if variables correlated with gains in student achievement are omitted, irrespective of how teacher effect persistency is or is not specified.

In general, using the EVAAS layered model (Equation 2.6) to estimate teacher effects has advantages over other modeling approaches. The layered model accounts for both between- and within-student correlations to adjust for past and future student achievement scores. The model also uses all available scores, resulting in “less biased, more stable, more efficient estimates” (Wright & Sanders, 2008, p. 14) than either the gain score (Equation 2.2) or the covariate adjustment model (Equation 2.1). The use of multiple years of data allows estimates to be adjusted, thereby accounting for external pulses occurring in a given year and rewarding teachers whose students perform better than expected in the future. Overall, Wright and Sanders (2008) argue the EVAAS layered model is a competitive option for estimating teacher effects, because of its flexibility and adherence to value-added philosophy.

## 2.4 Issues

Uncertainty remains concerning whether student background variables should be included as covariates. The EVAAS model does not include such covariates, but researchers can inappropriately extend this practice to other, less sophisticated longitudinal value-added models. Instead, decisions to model student-level covariates should be based upon the interaction of several factors: “the distribution across classes and schools of students with different characteristics, the relationship between the characteristics and outcomes, the relationship between the characteristics and true teacher effects, and the type of model used” (McCaffrey et al., 2003, p. 70). Bias of teacher effect estimates is an issue when students are disproportionately assigned to schools and/or classrooms based on background characteristics related to student outcomes and true teacher effects. When student background characteristics are correlated with teacher effects, the inclusion of student-level covariates can mask the effects of teachers. Fixed effects estimation of the covariates overcorrects for these background characteristics and results in underestimation of true, random teacher effects. For example, if highly effective teachers are assigned to affluent students, socioeconomic status becomes a proxy for teacher quality and true teacher effects are underestimated. Ballou, Sanders, and Wright (2004) propose a strategy to adjust for bias when student characteristics are correlated with true teacher effects, but strategies to address the issue of bias resulting from contextual effects as a result of aggregate-level variables have yet to be developed.
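The proxy problem described above can be illustrated with a small simulation (all data and effect sizes are hypothetical; this sketches the mechanism, not any model discussed here). When teacher quality is tied to student affluence, an SES covariate absorbs part of the true teacher effect:

```python
import random

random.seed(0)
n = 10_000
# Hypothetical standardized SES for each student.
ses = [random.gauss(0.0, 1.0) for _ in range(n)]
# Hypothetical assignment: more affluent students get stronger teachers,
# so the true teacher effect is correlated with SES.
teacher_effect = [0.5 * s for s in ses]
score = [s + t + random.gauss(0.0, 1.0) for s, t in zip(ses, teacher_effect)]

# OLS slope of score on SES picks up the direct SES effect (1.0) plus
# the teacher effect routed through assignment (0.5).
mean_ses = sum(ses) / n
mean_score = sum(score) / n
cov = sum((s - mean_ses) * (y - mean_score) for s, y in zip(ses, score))
var = sum((s - mean_ses) ** 2 for s in ses)
slope = cov / var
print(round(slope, 2))  # close to 1.5; the covariate soaks up teacher quality
```

Controlling for SES here removes not just the background difference but also the correlated portion of true teacher quality, which is the overcorrection described in the text.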

Ultimately, when deciding whether to control for covariates, researchers must choose between potentially confounding teacher effects with student-level variables and potentially biasing the teacher effect estimates.

Debates also continue about the persistency of teacher effects. The EVAAS (Equation 2.6) and cross-classified models (Equation 2.4) assume teacher effects persist undiminished into the future. As a result, prior teachers affect a student’s score in a particular year, but not a student’s gain in scores from one year to the next. Gain score models (Equation 2.2) implicitly make the same assumption in that gains do not rely on prior year teacher effects. Consequently, the models assume teachers do not affect students’ future growth. In the variable persistence model (Equation 2.8), however, student gains depend on prior teachers’ effects, because the differences between the current and previous year’s persistency parameters for teacher effects are not necessarily zero, as they are in complete persistence models. Similarly, covariate adjustment models (Equation 2.1) assume prior teacher effects influence student growth at the rates specified by the coefficients for prior scores. Yet, the potential advantages of freely estimating persistency parameters need to be carefully weighed against the advantages of using a layered model; as the persistency parameters approach zero, correlations between future scores of students who shared a teacher in the past are no longer accounted for, eliminating one of the benefits of using the layered modeling approach (Wright & Sanders, 2008).
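The structural difference between complete and variable persistence can be sketched with a toy design for a single student over three years (the persistency values below are hypothetical, chosen only for illustration):

```python
# Complete persistence (layered model): year y's score carries an
# indicator for every teacher the student has had through year y.
Z_layered = [
    [1.0, 0.0, 0.0],  # year 1 score: teacher 1
    [1.0, 1.0, 0.0],  # year 2 score: teachers 1 and 2
    [1.0, 1.0, 1.0],  # year 3 score: teachers 1, 2 and 3
]

# Variable persistence: the lower-triangle 1s become freely estimated
# persistency parameters (hypothetical values shown here).
a21, a31, a32 = 0.4, 0.2, 0.5
Z_variable = [
    [1.0, 0.0, 0.0],
    [a21, 1.0, 0.0],
    [a31, a32, 1.0],
]

# As the persistency parameters approach zero, the design approaches the
# non-layered (diagonal) case, and correlations between future scores of
# students who shared a past teacher are no longer carried by the model.
Z_zero_persistence = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
```

In the layered design, a prior teacher contributes equally to every later score, so gains do not depend on prior teachers; in the variable persistence design, gains pick up the difference between successive persistency parameters, as the text describes.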

Other issues arise when linking teachers to student scores. In some cases, students have incomplete records, missing some or all of their teacher links in a given year. This can occur when students are taught by multiple teachers in a single subject. Specifically, students may transfer to different schools midyear, be team-taught and/or learn a subject, such as reading, in multiple teachers’ classes. Such instances result in complex teacher links, and linking a student’s outcome to only one (or even no) teacher in a particular year may confound an identified teacher’s effect with other, unidentified teachers’ effects. However, determining how to accurately reflect the effect of multiple teachers on a student is not straightforward and requires potentially unrealistic assumptions.
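One common device for handling such complex links, which implicitly makes the strong assumption that a teacher’s contribution is proportional to instructional share, is to split a student’s annual link fractionally across teachers. A minimal sketch (teacher names and weights are hypothetical):

```python
# Hypothetical fractional links for a team-taught student in year two:
# the single 0/1 teacher indicator is replaced by shares summing to 1.
links_year2 = {"teacher_F": 0.6, "teacher_G": 0.4}

def design_row(links, teachers):
    """Build the year's design-matrix row from fractional teacher links."""
    assert abs(sum(links.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return [links.get(t, 0.0) for t in teachers]

row = design_row(links_year2, ["teacher_F", "teacher_G", "teacher_H"])
print(row)  # [0.6, 0.4, 0.0]
```

Choosing the shares themselves (by instructional time, by class rosters, or by self-report) is exactly where the potentially unrealistic assumptions noted above enter.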