Formula for deriving a teacher’s value added score in New York City

One of the biggest topics in education lately has been the use of value added measures, also known by the acronym VAM. In recent days, New York has been roiled in controversy over the release of teacher reports (more at New York Teacher Ratings Released — “At Best Unwise, At Worst Absurd”). The release of teacher “ratings” by the LA Times a year ago led to similar controversy over accuracy, fairness, and whether such ratings are appropriate at all. Many of you may be wondering: what is VAM?

Value added measures involve evaluating teacher performance based on how that teacher’s students perform on tests. Most often this involves comparing a student’s test scores from before and after the student was with that particular teacher. Many states and large districts are now moving toward basing teacher evaluation on test scores, so that around 40% of the students in the country will have a teacher who is being evaluated based in some part on their students’ test scores. Teacher evaluations that use students’ test scores are required for the federal Race to the Top (RttT) program, for some types of School Improvement Grants for persistently failing schools (what SCUSD calls Priority Schools), and for receiving “waivers” from NCLB sanctions from the federal Department of Education. Members need to know what is at stake with these programs:
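To make the “before and after” idea above concrete, here is a minimal sketch of a simplified value-added calculation. To be clear, this is not New York City’s actual formula, which relies on a much more complex statistical regression with many covariates; the function, variable names, and numbers below are all hypothetical, and the example only illustrates the basic idea of comparing each student’s actual score to a predicted score built from their prior score.

```python
# A simplified, illustrative value-added calculation (NOT any district's
# real formula). Each student's predicted score is their prior-year score
# plus an assumed expected growth; the "value added" is the average gap
# between actual and predicted scores across the class.

def value_added(prior_scores, current_scores, expected_growth):
    """Average of (actual score - predicted score) over all students,
    where predicted = prior score + expected_growth."""
    residuals = [
        actual - (prior + expected_growth)
        for prior, actual in zip(prior_scores, current_scores)
    ]
    return sum(residuals) / len(residuals)

# Hypothetical class of four students
prior = [600, 650, 700, 620]
current = [630, 665, 735, 640]
print(value_added(prior, current, expected_growth=20))  # prints 5.0
```

A positive result would suggest the class outgrew the assumed expectation; a negative one the opposite. Real VAM models replace the single `expected_growth` number with regression-based predictions, which is one reason the resulting ratings can be so sensitive and contested.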

Two examples showing how these systems can hurt good teachers are Grading New York Teachers – When the Formulas Lie –, which shows how these formulas can hurt those who teach our highest scoring students, and this piece about an EL teacher in LA, Retired L.A. teacher ponders her rating –, which shows how they can hurt those teaching our lowest level students. These articles are a good introduction to the topic.

For general background, here are resources for further reading:

The following is a list of links from member Alice Mercer on the subject (a complete listing is here):