Burying the lede on analyzing assessment data

By Dale Chu

A provocatively titled Education Week opinion piece—“Does Studying Student Data Really Raise Test Scores?”—generated some buzz this month thanks to Betteridge’s law of headlines. For those who are unfamiliar, the tongue-in-cheek law states, “Any headline that ends in a question mark can be answered by the word no.” As such, in all likelihood, many readers didn’t make it past the headline before allowing the piece to confirm long-held biases against the utility of standardized testing.

For those who did make it past the headline, some assuredly dropped off after the first two sentences:

Question: What activity is done by most teachers in the United States, but has almost no evidence of effectiveness in raising student test scores?

Answer: Analyzing student assessment data. 

It’s an alluring opening, and one that on its surface appears dispositive, though deceptively so. Written by a Harvard education professor, the piece is in fact far more nuanced than either the headline or the lede would suggest.

Not surprisingly, the article does offer plenty of anti-testing fodder through the author’s suggestion that the common practice of data analysis employed by teachers across the country has minimal impact on either teacher practice or student outcomes. According to the piece, the process of studying data has fed a lucrative “billion-dollar business” for testing and data companies that have been all too eager to supply the interim assessments used by schools and states. It all feels pretty nefarious until you get to the crux, located midway through the piece: Why hasn’t student performance improved?

Tests have always been a favorite punching bag for immovable student outcomes, but the absence of progress is less about the tests and more about how the data is or isn’t used. The author herself seems to concur: “The teachers mostly didn’t seem to use student test-score data to deepen their understanding of how students learn, to think about what drives student misconceptions, or to modify instructional techniques.” In other words, data analysis in isolation, without the requisite follow-through—as is the case with virtually any instructional practice—is unlikely to yield the desired results.

To wit, it’s at the end of the article that we find the buried lede:

The analysis of data can, when combined with strong supports for improved teaching, shift student outcomes. But the small number of programs that combine the study of data with wider instructional supports limits our ability to draw real conclusions.

The upshot is that we can’t draw conclusions about the efficacy of studying student data absent a better understanding of how and why that information isn’t being effectively used.

Nevertheless, instead of studying the data, the author concludes that schools and teachers should use their collaboration time in other ways. It’s a questionable recommendation because it wrongly suggests that data study is mutually exclusive with practices that improve student outcomes. Instead of halting the study of student data, it would make more sense to examine how schools might better equip teachers to improve their instructional practices within a culture that recognizes assessment as an integral part of good teaching.
