Making Curriculum Pop

Fascinating story over at NPR Ed, "A Botched Value Added Study Raises Bigger Questions." Here are the key excerpts, IMHO.

But value-added modeling, it turns out, is really, really hard.

To understand why, we need to think like researchers. Policymakers face a basic task. They want to compare the performance of different teachers and schools. No Child Left Behind, the federal law, did that by asking a simple question: How many children in this school score proficiently?

It turns out that question is too simple.

***

Both student growth measures and value-added models are being adopted in most states. Education Secretary Arne Duncan is a fan. He wrote on his blog in September, "No school or teacher should look bad because they took on kids with greater challenges. Growth is what matters." Joanne Weiss, Duncan's former chief of staff, told me last month, "If you focus on growth you can see which schools are improving rapidly and shouldn't be categorized as failures."

But there's a problem. The math behind value-added modeling is very, very tricky. The American Statistical Association, earlier this year, issued a public statement urging caution in the use of value-added models, especially in high-stakes conditions. Among the objections:

  • Value-added models are complex. They require "high-level statistical expertise" to do correctly;
  • They are based only on standardized test scores, which are a limited source of information about everything that happens in a school;
  • They measure correlation, not causation. So they don't necessarily tell you if a student's improvement or decline is due to a school or teacher or to some other unknown factor;
  • They are "unstable." Small changes to the tests or the assumptions used in the models can produce widely varying rankings.
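To make the "unstable" objection concrete, here is a toy sketch in Python. It is my own illustration, not the model from the NPR story or the ASA statement: it regresses simulated current-year scores on prior-year scores, treats each teacher's average residual as their "value added," then re-estimates on a slightly perturbed sample and counts how many teachers' rankings move.

```python
# Toy value-added sketch on simulated data. All names and numbers here are
# illustrative assumptions, not figures from the NPR story or the ASA statement.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, students_per_teacher = 20, 25

# Simulate students: prior score, a small true teacher effect, and classroom noise.
teacher = np.repeat(np.arange(n_teachers), students_per_teacher)
prior = rng.normal(50, 10, size=teacher.size)
true_effect = rng.normal(0, 1, size=n_teachers)
current = 0.8 * prior + true_effect[teacher] + rng.normal(0, 8, size=teacher.size)

def value_added(current, prior, teacher):
    """Estimate teacher effects as mean residuals from a prior-score regression."""
    X = np.column_stack([np.ones_like(prior), prior])
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)
    resid = current - X @ beta
    return np.array([resid[teacher == t].mean() for t in range(n_teachers)])

# "Instability": re-estimate after dropping ~10% of students at random,
# mimicking a slightly different test or sample, and compare the rankings.
va_full = value_added(current, prior, teacher)
keep = rng.random(teacher.size) > 0.1
va_sub = value_added(current[keep], prior[keep], teacher[keep])

rank_full = np.argsort(np.argsort(-va_full))  # 0 = top-ranked teacher
rank_sub = np.argsort(np.argsort(-va_sub))
print("Teachers whose rank moved 3+ places:",
      int(np.sum(np.abs(rank_full - rank_sub) >= 3)))
```

Even in this idealized setup, where the model's assumptions hold by construction, dropping a random slice of students can noticeably reshuffle the rankings, which is the kind of sensitivity the statement warns about.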

The full story can be found HERE.
