26 Aug Musical chairs and the politics of state assessments
By Dale Chu
Over the summer, Tennessee signed a contract with its third testing company in five years to administer the state’s assessment system. As the “Assessments by State” map on our homepage demonstrates, such inconstancy is hardly unique to the Volunteer State (though it has had a particularly bumpy ride). Not to be outdone, Kentucky and New Mexico, among others, are reportedly looking to redesign their tests, too. But the churn between states and their testing vendors carries significant repercussions, not least the difficulty of getting a clear sense of a state’s academic trajectory.
It wasn’t supposed to be this way. As Marcus Aurelius might have once said,
“There was a dream that was common, high-quality assessment. You could only whisper it. Anything more than a whisper and it would vanish, it was so fragile.”
Ten years ago, states seemingly joined hands in pursuit of new and better state tests. The federal government awarded grants to two large testing consortia (PARCC and Smarter Balanced) to develop and implement the next generation of assessments. States rushed to sign on: at their peak, the two consortia counted forty-five states plus the District of Columbia among their members. For the first time, true and honest comparability across states felt within reach.
But as states moved closer to implementation, controversy began to swirl and participation in the two consortia eroded precipitously. Most states have since struck out on their own again. It’s a worrisome development for a variety of reasons, chief among them questions of assessment quality and comparability.
Five years ago, the Council of Chief State School Officers (CCSSO) published criteria to help states ensure their assessments matched the “depth, breadth, and rigor” of higher standards. Independent reviews of state tests—confirmed by the U.S. Department of Education—showed significant variation in assessment quality. In fact, fewer than half of non-consortia states met CCSSO’s criteria. Political realities are understandable, if frustrating, but there’s no excuse for states not to meet the expectations they set for themselves.
The challenge here is nontrivial and multifaceted. Only a handful of vendors have the capacity to handle large-scale testing. And since last November’s midterms, which brought twenty new governors into office, there are at least thirteen new state school chiefs. Political timelines are frequently at odds with educational ones: these new leaders are under pressure to deliver on campaign promises with deliberate speed.
What happens when assessment and politics mix? Former West Virginia Governor Bob Wise had this to say:
What bothers me is when states pounded their chests and said, “Well, we’re going to roll back PARCC or Smarter Balanced because it’s a federal initiative.” They’re probably the best performance assessments that we’ve ever had, at the lowest price. And the federal government is not dictating what’s in them, it’s just paying for the development of them and saying, “Here, you can use these.”
States that are dropping the tests are losing millions of dollars, not only the investment in PARCC and Smarter Balanced. They’re going to have to go buy new assessments, and they’re going to have to retrain the teachers in them. So money that should be going to classroom teachers, money that should be going to curriculum and development, is getting wasted on useless political gestures.
In addition to the self-inflicted injury of lower-quality assessments, these states have also sacrificed comparability on the altar of political expediency. States that keep switching tests over the long term will be left twisting in the wind when it comes to knowing how their students are really doing. The one silver lining is that we’ve been able to show that, with consistent tests, high academic standards can help advance equity in education.
As states continue to play musical chairs with their assessment vendors, it’s worth recalling the words of author Jim Collins, who observed, to paraphrase, that there is no perfect test. Every assessment is flawed in one way or another, though some are better than others. Those are the tests states should select, because students will be better served. And state leaders would be wise to stick with them and focus their energies instead on tracking how well students are performing against a consistent measure.