Value-Added Teacher Assessment

Most commonly associated with William L. Sanders and his colleagues (originally at the University of Tennessee and now at the SAS Institute), one such approach is predicated on the proposition that, given enough data on individual students over time, those students' future test score gains can be predicted.

It therefore follows that if the test score gains of all of a given teacher's students can be predicted from those students' past performance, then any discrepancies from these predictions represent that teacher's effectiveness or ineffectiveness for that particular year. Called value-added teacher assessment, this approach uses sophisticated longitudinal statistical modeling procedures to generate predictions of students' test score gains for a given year.
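In schematic form (the notation below is our simplification, not the layered mixed-model specification Sanders' group actually fits), the quantity being estimated is the average prediction residual within a classroom:

```latex
% Schematic only (our notation): for student i in teacher j's class,
% g_{ij} is the observed test score gain and \hat{g}_{ij} the gain
% predicted from that student's prior test history; n_j is class size.
% The teacher's estimated "value added" is the mean residual:
\hat{\tau}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} \left( g_{ij} - \hat{g}_{ij} \right)
```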

It then defines any observed classroom performance on the end-of-year test that turns out to be better than predicted as the value added by that classroom's teacher. (Again, what else could it be?) This approach has produced some relatively promising findings, especially for mathematics, to a lesser extent for reading, and apparently not much for other subjects. Before considering these findings in any detail, however, it is worth noting that the model attempts to simulate a situation in which:

  • Students are randomly assigned to teachers (which would help reduce the between-class differences in students' propensity to learn that arise when students are assigned on the basis of how much they are likely to gain on standardized tests, such as when parents request that their children be assigned to a given teacher based upon that teacher's reputation, when a principal assigns students he or she believes will prosper more with one teacher than another, or when students are grouped or tracked by ability level);
  • Students are tested twice per year, once at the beginning of the year and once at the end (because the learning and forgetting that go on during the summer are not under the control of the next year's teacher but obviously affect how much children improve from the previous May's testing to the next May's testing, which in turn is used to judge that teacher's effectiveness);
  • The two test scores are subtracted for each teacher to obtain a measure of how much his or her students learned during the year;
  • The entire process is repeated the next year;
  • Each teacher's learning results are compared across the two years after statistically controlling for as many factors not under the teachers' control as possible (such as the amount of instruction students had previously received, and continued to receive, from their home learning environments); a rough sketch of this gain-and-residual logic follows this list.
Since these conditions are extremely difficult to implement in the real world of schooling (and information regarding children's actual home learning environments is nonexistent), Sanders and colleagues have made a valiant attempt to do the best they can with what is available to them. Their results have generated a great deal of excitement outside education (both President Obama and Malcolm Gladwell are huge fans), but although the value-added researchers' efforts are interpreted as showing that teacher effects are considerable in any given year, the results assessing the consistency of these effects over time are considerably less impressive.