Does the Market Value Value-Added? Evidence from Housing Prices After a Public Release of School and Teacher Value-Added

Scott A. Imberman∗ Michigan ...
In order to underscore the fact that, conditional on achievement levels, value-added is weakly correlated with student demographics and difficult to predict with pre-release observables, the remaining columns of Table 4 test whether value-added information is capitalized into property values prior to its release. If parents know which schools have the highest value-added, whether from reputation or from factors we cannot observe, the value-added releases should not provide additional information about school quality. In columns (4) and (5) of Table 4, we estimate the same boundary fixed effects model as in column (3), but using the first release of LAT value-added and the LAUSD value-added as explanatory variables. Since all of the sales data used in Table 4 predate August 2010, these models test whether future information about value-added is already capitalized into home prices. In column (4), there is a positive relationship between the LA Times value-added and property value differences across boundaries.
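To fix ideas, the boundary fixed-effects comparison can be sketched with simulated data. This is a minimal sketch, not the authors' implementation: all variable names (boundary, api_pct, va_pct) and the simulated magnitudes are ours, and each sale is given its own toy school measures rather than the actual two-schools-per-boundary structure. The boundary fixed effects absorb shared neighborhood conditions, so the school coefficients are identified from cross-boundary contrasts.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1500
boundary = rng.integers(0, 100, n)   # hypothetical boundary-segment ID for each sale
api_pct = rng.uniform(0, 100, n)     # school API percentile (toy values)
va_pct = rng.uniform(0, 100, n)      # school value-added percentile (toy values)

# Simulated data-generating process: API is capitalized into prices,
# value-added is not, and each boundary segment has its own price level.
log_price = 12 + 0.004 * api_pct + 0.001 * boundary + rng.normal(0, 0.15, n)

df = pd.DataFrame(dict(log_price=log_price, boundary=boundary,
                       api_pct=api_pct, va_pct=va_pct))

# Boundary fixed effects via C(boundary); identification comes from
# within-boundary (cross-side) variation in API and value-added.
fit = smf.ols("log_price ~ api_pct + va_pct + C(boundary)", data=df).fit()
print(round(fit.params["api_pct"], 3), round(fit.params["va_pct"], 3))
```

With this design the API coefficient recovers the simulated 0.004 capitalization while the value-added coefficient is indistinguishable from zero, mirroring the pattern the paper reports for the pre-release period.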
However, some of this effect is likely due to the correlation, however weak, between LA Times value-added and API. As a result, when we add API percentile as a control in column (5), the relationship between value-added and home prices becomes smaller and is no longer statistically significant. The magnitudes of the value-added estimates also are very small. However, the API estimate is almost identical to that found in column (3), suggesting that the capitalization of API scores is not being driven by value-added information and that any information contained in the LA Times and LAUSD value-added estimates is not already capitalized into home prices prior to August 2010.
Ideally, one would also like to control for neighborhood characteristics. However, the boundary areas in LAUSD typically are smaller than Census tracts, leaving most boundary areas entirely within a single tract. When we control for Census tract observables, the API coefficient becomes smaller and is no longer statistically significant. This suggests either that the main aspect of API capitalized into property values is neighborhood composition, or that including our set of neighborhood controls leaves too little variation for identifying the role of API.
5.2 Difference-in-Differences Estimates

Table 5 presents the baseline estimates from equation (1). In each column, we add controls sequentially in order to observe their effect on the estimates. All estimates are multiplied by 100, so they show the effect of a 100 percentile increase in value-added on home prices post-release. In column (1), we include no controls other than those shown in the table: API score, API percentile, lagged and twice-lagged API score, and school rank compared to 'similar' schools in California as determined by the California Department of Education. There is a positive but statistically insignificant relationship between the LA Times value-added and home prices post-release. When we add school and month fixed effects, however, the LA Times estimate turns negative and marginally significant. Nonetheless, as we add additional controls the point estimates attenuate and become insignificant. Similarly, we find no statistically significant impacts of the LAUSD value-added release or of the LA Times publication of API scores on housing prices. The latter result is perhaps unsurprising because, as previously noted, API scores were available to the public prior to the LA Times publication and thus should only be capitalized into housing prices if people were unaware of them.
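The difference-in-differences design behind these estimates can be sketched on simulated data. This is an illustrative sketch under our own assumptions, not the paper's code: the variable names (va_post, school, month), the August-2010 analogue (month 18 of 36), and the simulated price process are ours, and the true value-added effect is set to zero so the regression recovers a null, as in Table 5.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
schools = rng.integers(0, 50, n)            # hypothetical school IDs
months = rng.integers(0, 36, n)             # sale month; "release" occurs at month 18
va_pct = rng.uniform(0, 100, 50)[schools]   # school value-added percentile (toy)
post = (months >= 18).astype(float)         # post-release indicator

# Simulated log sale price: school and month effects plus noise,
# with no true response to the value-added release.
log_price = 12 + 0.01 * schools + 0.002 * months + rng.normal(0, 0.2, n)

df = pd.DataFrame({
    "log_price": log_price,
    "school": schools,
    "month": months,
    "va_post": va_pct * post,  # value-added interacted with post-release
})

# Two-way fixed effects difference-in-differences regression with
# standard errors clustered at the school level.
fit = smf.ols("log_price ~ va_post + C(school) + C(month)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]}
)
print(round(fit.params["va_post"], 4))
```

The school fixed effects absorb each school's (time-invariant) value-added percentile and the month fixed effects absorb the post-release period, so the va_post coefficient is identified from the change in relative prices across schools at the release date.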
In interpreting our results, we focus on our preferred estimates in column (6). The point estimate of -0.027 on the LA Times value-added score implies that a 10 percentile increase in value-added reduces housing prices by 0.3%, but it is not statistically significant. Thus it is useful to consider how large a positive effect on housing prices can be ruled out. The upper bound of the 95% confidence interval for a 10 percentile increase in value-added is 0.10%. This is a very small estimate. To put it in perspective, suppose we were to increase a school's value-added by 50 percentiles, which is equivalent to taking the lowest-ranking school and making it an average school, or taking an average school and making it the highest-ranking school. The upper bound implies that this would generate, at most, a 0.5% increase in housing values.
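The back-of-envelope conversions in this paragraph follow directly from the scaling of the Table 5 coefficients (log points per 100 percentile change, times 100). A short check, where the 0.010 upper bound per 100 percentiles is our inference from the quoted 0.10% figure for a 10 percentile change:

```python
# Convert Table 5-style coefficients (effect of a 100-percentile change,
# already scaled by 100) into percent price effects for smaller changes.
beta = -0.027   # point estimate for a 100-percentile value-added increase
upper = 0.010   # assumed 95% CI upper bound in the same units

def pct_effect(coef, percentiles):
    # Linear rescaling: coef applies to a 100-percentile change.
    return 100 * coef * (percentiles / 100)

print(round(pct_effect(beta, 10), 2))   # 10-percentile point estimate, in %
print(round(pct_effect(upper, 50), 2))  # 50-percentile upper bound, in %
```

This reproduces the approximately -0.3% effect for a 10 percentile increase and the at-most 0.5% bound for a 50 percentile increase.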
Column (7) of Table 5 provides further evidence that value-added information does not affect property values. In this column, we provide results from a model similar to those used in Table 4 that restricts to properties within 0.1 miles of a school zone boundary and includes boundary fixed effects. Thus, the estimates are identified off of changes in property values between properties on either side of a given attendance zone boundary when the value-added data are released. Table 4 shows that home prices do not vary systematically across borders with value-added in the pre-period, and the results from column (7) indicate that they do not change across these borders in response to the release of value-added information either.
As discussed above, a unique feature of the LA Times information release was that it included both school-average value-added and value-added rankings for over 6000 teachers in LAUSD. We now examine whether property values respond to variation in teacher quality, providing the first evidence in the literature on this question.
In column (1) of Table 6, we add to the model the standard deviation of the value-added scores across teachers in each school, interacted with the timing of the initial LA Times release.
If high-quality teachers are disproportionately valued (or if low-quality teachers have a disproportionately negative valuation), then a higher standard deviation will lead to higher (lower) property values conditional on school-wide value-added. The estimate on the standard deviation of teacher value-added is negative, but it is not statistically significantly different from zero at conventional levels. It also is small in absolute value, pointing to a decline in property values of only 0.006% for a one point increase in the standard deviation of teacher value-added.
In column (2), we interact the proportion of teachers in each quintile of the value-added distribution with being in the post-August 2010 period. Again, we see little evidence that having a higher proportion of teachers with high value-added leads to higher property values, nor does a high proportion of low value-added teachers reduce property values. This result is surprising given the strong correlation between teacher quality and student academic achievement as well as future earnings (Rivkin, Hanushek and Kain, 2005; Rockoff, 2004; Chetty, Friedman and Rockoff, 2011). However, teacher value-added as measured here could be rather unstable from year to year, as each estimate is based on a small number of students assigned to each teacher. It therefore could be sensible to ignore one year's teacher value-added scores if they are weak indicators of stable teacher quality.
The remaining columns of Table 6 present estimates based on alternatives to our modeling assumptions. In column (3), we use sale prices in levels instead of logs. The estimates, once converted back to percentage terms relative to baseline, are very similar to those in Table 5. In column (4), we average across all value-added measures in case the public follows a simple rule of thumb when assessing the multiple measures and takes the mean. We find no evidence for this hypothesis, as the point estimate is negative and not statistically different from zero. In the next two columns, we examine estimates separately for homes with more than two bedrooms and homes with two or fewer bedrooms, since the former are more likely to house families with children. Although the point estimates for homes with more than two bedrooms are larger than those for homes with two or fewer bedrooms, with the exception of a negative estimate on API × Post for homes with two or fewer bedrooms, none of the estimates is statistically significantly different from zero at even the 10% level, and they remain small. Thus, the value-added information does not influence property values even among the homes that are most likely to have children in them.
As previously discussed, it is possible that if a neighborhood has fewer school choice options there would be more capitalization of the local school's quality. To test this, in column (7) we interact the value-added score with the number of charter schools within a one mile radius of the property. We find no evidence that the capitalization of value-added varies with the number of charter schools nearby. Results were similar using a two mile radius. Finally, in column (8) we test whether what matters is the size of the relative information shock rather than the value-added score itself. To do this, we replace our value-added measures with the difference between each value-added percentile and the school's API percentile. As with the LAT and LAUSD value-added measures, these shock measures are set equal to zero prior to the relevant value-added release. We find little to indicate that a larger positive shock generates larger increases in housing prices.
In Table 7, we present a series of robustness checks that allow us to assess the sensitivity of our main results and conclusions. In column (1), we do not control for lagged API, as changes in API may be correlated with value-added. Our estimates are unchanged by excluding these controls. In column (2), we drop the 7% of the sales data that are imputed (see Section 3). We then exclude properties with more than 8 bedrooms in column (3), which either are very large homes or are multiple unit dwellings. We alternatively exclude properties over 5000 square feet in column (4) and drop multiple unit properties in column (5). In each of these cases, the estimates are quantitatively and qualitatively similar to our baseline estimates. In columns (6) and (7), we allow for lags between when the information is released and when it impacts the housing market, setting the value-added to zero in the first 3 and 6 months post-release, respectively. We continue to find no effect of value-added information on property values, although there is some weak evidence of a small impact from making API information more salient. Finally, in column (8) we limit our analysis to the period before LAUSD released their value-added measure and before the second LA Times release.
In this model, only the initial LA Times value-added data are known, and there is no risk of "contamination" of our estimates from the additional releases. The results are very similar to the baseline estimates. Taken together, the results from Table 7 suggest that our findings are not being driven by outliers, by the manner in which we measure home prices, or by the timing of the treatment.
Although there is no average effect of value-added information on property values, the extent of capitalization could vary among different types of schools or among different populations.
We now turn to an examination of several potential sources of heterogeneity in value-added capitalization.28 In Figure 6, we present estimates broken down by observable characteristics of the school: 2009 within-LAUSD API quintile, median pre-release home price quintile, percent free and reduced price lunch, percent black, percent Hispanic, and percent white. Although the precision of the estimates varies somewhat, the point estimates are universally small in absolute value and are statistically significantly different from zero at the five percent level in only two cases (out of 45 estimates).
Nonetheless, it is worth noting that the estimate for the bottom quintile of pre-release sale prices, which is positive, is statistically significantly larger than the estimate for the top quintile, which is negative. Indeed, the estimates in the second panel show a small but notable negative gradient in prior house prices, suggesting that lower-priced neighborhoods are more affected by value-added. Percent free/reduced-price lunch and percent Hispanic show similar patterns, and the bottom and top quintiles are significantly different in both cases. Given that all three of these measures are correlated with socioeconomic status, these figures provide some evidence that, to the extent the value-added scores are capitalized, the impact is larger in lower-income neighborhoods.

28 We also analyzed whether getting a consistent signal, whereby both the LAT and LAUSD measures are high or low, has an impact. We found little evidence of this. These results are available by request.
The last row of Figure 6 provides insight into two potential criticisms of using housing prices as our outcome measure. The first panel addresses concerns that many neighborhoods in Los Angeles have high rates of private schooling and thus are likely to be less sensitive to the quality of the local public school. We show estimates interacted with the private schooling rate in the Census tract of each property, as estimated by the American Community Survey.
The mean private schooling rate in our sample is 20%, with a standard deviation of 31%. The estimates show little difference in capitalization by private schooling rate. In the second panel of the last row, we measure variation by owner-occupancy rates, also calculated from the ACS.
The concern here is that in neighborhoods with low owner-occupancy rates, sale prices may be less sensitive to school quality. The mean of this measure is 50.1%, with a standard deviation of 23.3%. Once again, we see little evidence of heterogeneity along this margin.
School districts across the country have begun to use value-added methodologies to evaluate teachers and schools. Although only a few large districts have released these results publicly, it is likely that more will in the future. Thus, it is important to understand whether and how this information is valued by local residents. Furthermore, value-added measures provide information about school quality that is less correlated with the school demographic makeup than are test score levels. Identifying how value-added information in particular is capitalized into housing prices can therefore lend new insight into the valuation of school quality that research focusing on test score levels as a school quality measure cannot.
This paper is the first to examine how publicly released school and teacher value-added information is capitalized into property values. We exploit a series of information releases about value-added by the Los Angeles Times and the Los Angeles Unified School District, which provided local residents with value-added rankings of all elementary schools and over 6000 teachers in the LA Unified School District. Using housing sales data from the LA County Assessor's Office, we estimate difference-in-differences models that show how home prices change as a function of value-added after each data release. Across myriad specifications and variations in modeling choices and data assumptions, we show that property values do not respond to released value-added information. Our estimates are sufficiently precise to rule out all but very small positive effects on average. However, using boundary discontinuity methods, we find that API differences across schools are capitalized into home prices, which indicates that school quality as measured by this outcome is valued by Los Angeles residents.
Unique to our study in the school valuation literature is the ability to examine home price effects based on teacher quality information. Similar to the school-level results, though, we find that property values are unresponsive to the within-school variance in teacher value-added.
Nonetheless, we do find that, although they are generally not significantly different from zero individually, estimates that vary by low and high socioeconomic status are significantly different from each other, indicating that the impact of value-added on housing prices has a small negative gradient with SES.
Our estimates differ substantially from the previous literature on school valuation that uses test score levels as a measure of school quality. This literature typically has found an effect on the order of 2 to 5 percent higher housing prices for each standard deviation increase in test scores (Black, 1999; Bayer, Ferreira and McMillan, 2007; Gibbons, Machin and Silva, 2009).