By Dale Chu

State testing is now underway. With luck, and hopefully some leadership (more on that later in this series), the disruptions of the past two years will be put behind us. It's likely we'll see scores drop even further, but in too many states we won't know because of shoddy data resulting from ill-advised decision-making. Thankfully, faithful readers of this blog now have an easy way to see who the leaders and laggards were: the recent refresh of the Assessment HQ platform maps out the states on both ends of the spectrum.
Starting on a positive note, seventeen states published both disaggregated data and complete participation information. Click on any of these states in the "Explore State Data" tab and you'll be able to take a look. These states were: Alabama, Alaska, Arkansas, California, Kentucky, Massachusetts, Minnesota, Nevada, New Hampshire, North Dakota, Ohio, Rhode Island, South Dakota, Tennessee, West Virginia, Wisconsin, and Wyoming. Granted, what you see in some of these places is ghastly, but they get a nod for doing what thirty-four jurisdictions didn't: provide a full picture of both student performance in each grade and participation information across student groups.
Turning to the other side of the ledger, a number of states did a lousy job of being upfront about how their students fared. Some fell short on providing disaggregated data; others dropped the ball on student participation rates, providing partial information or none at all.
Seven states landed on my naughty list for not having what I consider the bare minimum: overall proficiency data. My home state of Colorado is one of them because it elected not to require testing in all grades. Oregon inexplicably (but unsurprisingly) followed its lead and is in the same boat. The information put out by Connecticut and Utah is as clear as mud, but three states outdid them by opting not to publicly release any performance data: Maine, New Mexico, and New York. Unlike DC, none of these states received a testing waiver last spring, so the lack of transparency is weak sauce.
Notably, there were a few instances where state assessment data was laborious to track down. As a former state education official, I've often joked that parsing through state data on an agency website is like doing an archaeological dig: it requires time, patience, and knowing where to look. Some states insisted that all of their information was publicly available in a "comprehensive" data dashboard. But as Inigo Montoya said to Vizzini, "You keep using that word. I do not think it means what you think it means." "Convoluted" would be more like it. State assessment data can be challenging enough to obtain without the obfuscation created, intentionally or unintentionally, by a poor layout or inscrutable formatting.
To be fair, muddy data predated the pandemic, an issue I'll continue to return to in future posts. In the here and now, the exams being administered over the next several weeks are arguably the most important round of state assessments given in recent memory. They could provide the clearest picture, albeit an unpleasant one, of how students have been adversely affected, and help establish a new baseline going forward. In my next entry in this series, I'll delve into why this information remains relevant, even more so with schools emerging from the din of Covid.