SGP (Student Growth Percentile) data are central to interpreting student performance and to judging whether a school or teacher is effective. However, many available SGP analyses rest on test scores that are subject to large estimation error. These errors make the estimated SGPs noisy and can produce misleading results.
To address these issues, we develop an estimator of the true SGP that is not contaminated by the measurement error in test-score-based estimates of student achievement and teacher effects. Our estimator rests on the idea that, at a given point in time, a student's true SGP is the proportion of students with the same prior test scores whose true current score would be at or below that student's score.
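The percentile-rank idea above can be sketched in a few lines. This is a minimal illustration on hypothetical synthetic data, not the paper's estimator: it computes an empirical SGP as the percentile rank of a student's current score among peers who share the same prior score. All variable names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic cohort (illustrative only): prior scores binned
# into 5 levels, current scores depending on the prior plus noise.
n = 1000
prior = rng.integers(1, 6, size=n)
current = prior * 10 + rng.normal(0, 5, size=n)

def empirical_sgp(prior_scores, current_scores, student_prior, student_current):
    """Percentile rank of the student's current score among peers with the
    same prior score -- the empirical analogue of the SGP definition."""
    peers = current_scores[prior_scores == student_prior]
    return 100.0 * np.mean(peers <= student_current)

# A student with prior score 3 who scores 35 (one SD above the peer mean of 30)
# lands well above the 50th growth percentile.
sgp = empirical_sgp(prior, current, student_prior=3, student_current=35.0)
```

In practice the conditioning is done with many prior-score histories and smoothed quantile regressions rather than exact matching, but the percentile-rank logic is the same.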
Moreover, our results show that any student's true SGP is proportional to the average of the true SGPs of all students who share that student's prior test score, and that the constant of proportionality can be determined from the student's current test score. Likewise, a teacher's true SGP is proportional to the average of the true latent achievement traits (LATs) of the students that teacher taught.
This approach is particularly well suited to measuring student growth over long periods, because it permits comparing test scores from different points in time without adjusting for changes in students' demographic characteristics or other explanatory variables. Using this methodology, we estimate individual SGPs and grade-level average SGPs over the past two years, report student growth by school and teacher, and analyze the relationships among these variables.
SGPs estimated from standardized tests are subject to large estimation errors that make them noisy measures of the underlying latent achievement traits. Errors in both the prior and current test scores used to calculate the SGPs contribute to this noise, as do unobserved student-level relationships between background characteristics and latent achievement.
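A small simulation makes the noise problem concrete. The sketch below is a simplified, hypothetical setup (all students treated as one prior-score group; the noise level is an assumption, not an estimate from the paper): it compares growth percentiles computed from true scores against percentiles computed from scores with added measurement error, and shows the two sets of ranks agree only imperfectly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation: true growth vs. growth observed with test error.
n = 2000
growth_true = rng.normal(0, 1, size=n)        # latent growth
noise_sd = 0.7                                 # assumed measurement-error SD
growth_obs = growth_true + rng.normal(0, noise_sd, size=n)

def percentile_ranks(x):
    # Rank each value against the whole group (a single prior-score group
    # here, to keep the sketch simple), scaled to 0-100.
    order = x.argsort().argsort()
    return 100.0 * (order + 0.5) / len(x)

sgp_true = percentile_ranks(growth_true)   # percentiles from true scores
sgp_obs = percentile_ranks(growth_obs)     # percentiles from noisy scores

# Correlation well below 1: measurement error reshuffles the percentile ranks.
r = np.corrcoef(sgp_true, sgp_obs)[0, 1]
```

With this amount of error the two rankings correlate at roughly 0.8, meaning many students' reported percentiles differ substantially from their true ones.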
These errors make it difficult to interpret aggregated SGPs as indicators of teacher effectiveness, because the resulting relationships might be explained by other factors. One way to address the issue is to estimate a value-added model that regresses current test scores on teacher fixed effects, prior test scores, and student background variables. This controls for those sources of variance while retaining much of the transparency and interpretability of aggregated SGPs. These benefits, however, must be weighed against the bias that the remaining measurement error in the prior scores can introduce.
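The value-added regression described above can be sketched as an ordinary least-squares fit with teacher dummy variables. This is a minimal illustration on hypothetical synthetic data: the teacher effects, the coefficient on the prior score, and the noise level are all assumptions made for the example, and student background covariates are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical synthetic data: 5 teachers, 200 students each.
n_teachers, per_teacher = 5, 200
teacher = np.repeat(np.arange(n_teachers), per_teacher)
true_effect = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # assumed teacher effects

prior = rng.normal(50, 10, size=n_teachers * per_teacher)
current = 0.8 * prior + true_effect[teacher] + rng.normal(0, 5, size=len(prior))

# Design matrix: one dummy per teacher (absorbing the intercept) plus the
# prior score. The dummy coefficients are the estimated teacher effects.
dummies = np.eye(n_teachers)[teacher]
X = np.column_stack([dummies, prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

slope = beta[n_teachers]                       # coefficient on prior score
teacher_effects = beta[:n_teachers]
teacher_effects = teacher_effects - teacher_effects.mean()  # relative to cohort
```

Because prior scores enter as a regressor, measurement error in them attenuates the slope and can leak into the teacher-effect estimates, which is the bias trade-off noted above.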