I decided to become an English teacher because I love data. Poring over spreadsheets, crunching numbers, comparing cohorts — it doesn’t get any better for a lover of literature and language!

Imagine my excitement when the National Assessment of Educational Progress (NAEP) released the results of the 2019 test two weeks ago! Like a starry-eyed child on Christmas morning, I opened up my laptop with unbridled anticipation at the long-awaited gift I was about to receive: the latest version of The Nation’s Report Card.

Truth be told, the 2019 NAEP was not foremost on my mind two weeks ago. In fact, I didn’t realize the results had been released until I read the news on Oct. 30. Come to find out, the news was not good. Well, it wasn’t as good as it could have been. Okay, so it wasn’t all bad.

And so it goes with uncovering the real meaning behind standardized tests — if there is any real meaning at all.

To begin, here’s the official reaction in Connecticut:

“The report shows that the performance of Connecticut’s Grade 4 students improved in mathematics, but declined in reading when compared to results from 2017,” according to a state Department of Education press release. “Mathematics and reading scores for Connecticut’s Grade 8 students remained steady when compared to 2017 while the performance for the nation’s eighth-graders declined overall in both subjects. Very few states outperformed Connecticut students who scored higher than the nation in both grades and subjects.”

It was a measured response, as were many of the news reports, like this one from The Day of New London:

“Compared to other states not based on growth but on static scores for 2019, Connecticut ranked second for eighth-grade reading, seventh for fourth-grade reading, eighth for eighth-grade math, and ninth for fourth-grade math.”

And, of course, came the editorials, including this decidedly negative one:

“If we’re happy with being just slightly better than average, and no better than two years ago, we should be satisfied with these scores,” opined Meriden’s Record-Journal. “We look forward to seeing Connecticut make a better showing next time around.”

So honestly, just what should we make of this year’s NAEP results? Connecticut’s Education Commissioner Miguel Cardona actually provided a reasonable, if jargon-filled, takeaway:

“While we are pleased to see that overall our students in Connecticut performed better than most of their peers across the country, we still have much more work to do to close the disparity gaps that exist around the state. Our priority will remain strengthening the instructional core that provides the necessary foundation for higher levels of learning and improved student outcomes. We will continue to focus on rigorous and engaging curriculum, enhancing connections with students, and ensuring great teachers and leaders in every school and classroom.”

In other words, we must continue to improve how we teach kids because the world is changing faster than ever. Even as NAEP provides a biennial benchmark of sorts, it does not offer the definitive picture that education pundits like Betsy DeVos and Arne Duncan think it does.

First of all, the numbers are not so easily interpreted. A small state like Connecticut, for example, has far fewer students taking the NAEP than most states, so the statistical margin of error is larger. Consequently, “when Connecticut’s average scale score increases by one point, the results indicate that Connecticut’s student performance stayed the same.” Clear as mud, right?

Moreover, the tests are now administered on computer tablets, a welcome change in line with our technological world. But could this new method skew the scores? Indeed, might students in schools that already employ tablets daily have a distinct advantage? Not so fast.

The Reboot Foundation, an organization that promotes critical thinking worldwide, conducted a study on the effects of computer tablets both in the classroom and when used for standardized tests.

“The results regarding tablet use in fourth-grade classes were particularly worrisome, and the data showed a clear negative relationship with testing outcomes,” explained Reboot. “Fourth-grade students who reported using tablets in ‘all or almost all’ classes scored 14 points lower on the reading exam than students who reported ‘never’ using classroom tablets. This difference in scores is the equivalent of a full grade level, or a year’s worth of learning.”

Go figure. Seems “The Nation’s Report Card” is not exactly an A+ measurement tool.

As a matter of fact, NAEP itself “cautioned against interpreting NAEP results as implying causal relations. Inferences related to student group performance or to the effectiveness of public and nonpublic schools, for example, should take into consideration the many socioeconomic and educational factors that may also have an impact on performance.”

Socioeconomic and educational factors? I didn’t see those on my 2019 NAEP spreadsheet. I guess measuring this education stuff is actually quite complicated. And to think that I waited two years for these scores.

Barth Keck is in his 32nd year as an English teacher and 18th year as an assistant football coach at Haddam-Killingworth High School where he teaches courses in journalism, media literacy, and AP English Language & Composition. Follow Barth on Twitter @keckb33 or email him here.

The views, opinions, positions, or strategies expressed by the author are theirs alone, and do not necessarily reflect the views, opinions, or positions of or any of the author's other employers.