In Computer Programming 101 they teach the term “Garbage In, Garbage Out,” commonly known as “GIGO.” In other words, no matter how great the logic in your program, if you feed it poor data, you’re going to get a bad result.
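To make the principle concrete, here is a minimal sketch in Python (the scores are invented for illustration): the averaging logic is perfectly sound, yet a single bad input still produces a nonsensical result.

    # Hypothetical example: the logic is correct, but one bad input
    # makes the "class average" meaningless -- garbage in, garbage out.
    def class_average(scores):
        return sum(scores) / len(scores)

    good_data = [82, 88, 91, 79]
    bad_data = [82, 88, 91, 790]    # a single mistyped score slips in

    print(class_average(good_data))  # 85.0   -- a sensible result
    print(class_average(bad_data))   # 262.75 -- garbage out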

Techopedia points out that while the term is usually used in the context of software development, it doesn’t apply only to that field: GIGO can also describe any decision-making system where the failure to use precise, accurate data leads to wrong, nonsensical results.

Justice Roger D. McDonough of the N.Y. Supreme Court’s 3rd District provided a reminder of this on Tuesday when he ruled in the case of Sheri G. Lederman that the N.Y. Education Department’s growth score and rating of her as “ineffective” for the 2013-14 school year was “arbitrary and capricious and an abuse of discretion.”

Lederman is a fourth-grade teacher in Great Neck, Long Island. Great Neck’s Superintendent of Schools at the time she filed the lawsuit, Thomas Dolan, described her as “highly regarded as an educator” with “a flawless record,” whose students consistently scored above the state average on standardized math and English tests. In 2012-13, more than two-thirds of her students scored as proficient or advanced. Yet in 2013-14, despite a similar percentage of students meeting or exceeding the standards, Lederman was rated “ineffective” as a teacher.

Maybe there’s a good reason why the American Statistical Association issued a strong warning about using value-added models (VAM) for high-stakes purposes two years ago.

It’s worth revisiting a few of the important highlights of the American Statistical Association’s statement, particularly since the state Board of Education and the Performance Evaluation Advisory Council (PEAC) are currently reviewing Connecticut’s own teacher evaluation system. The 2012 legislative requirement to link 25 percent of Connecticut teachers’ evaluations to test scores has been postponed for another year, over the protests of corporate education reform groups.

According to the American Statistical Association:

“The measure of student achievement is typically a score on a standardized test, and VAMs are only as good as the data fed into them. Ideally, tests should fully measure student achievement with respect to the curriculum objectives and content standards adopted by the state, in both breadth and depth. In practice, no test meets this stringent standard, and it needs to be recognized that, at best, most VAMs predict only performance on the test and not necessarily long-range learning outcomes.

“Most VAM studies find that teachers account for about 1 percent to 14 percent of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.”

Now here’s the part that proponents of VAM seem to have completely and utterly ignored when discussing the impact of an individual teacher on test scores:

“Various studies have demonstrated positive correlations between teachers’ VAM scores and their students’ future academic performance and other long-term outcomes. In a limited number of studies, teachers have been randomly assigned to classes within schools, thus reducing systematic effects that might arise because of assignment of students to teachers. These studies indicate that the VAM score of a teacher in the year before randomization is positively correlated with the test score gains of the teacher’s students in the year after randomization, but the correlations are generally less than 0.5. Also, studies have shown that teachers’ VAM scores in one year predict their scores in later years.

“These studies, however, have taken place in districts in which VAMs are used for low-stakes purposes. The models fit under these circumstances do not necessarily predict the relationship between VAM scores and student test score gains that would result if VAMs were implemented for high-stakes purposes such as awarding tenure, making salary decisions, or dismissing teachers.”
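To put that “less than 0.5” figure in perspective: a correlation of r explains only r-squared of the variation in an outcome, so even at the upper end of the ASA’s range, a teacher’s prior VAM score accounts for at most a quarter of the variation in her students’ later test-score gains. A minimal sketch in Python (the 0.5 comes from the ASA statement; the other values are illustrative):

    # Variance explained by a correlation r is r**2.
    for r in (0.2, 0.35, 0.5):
        print(f"correlation r = {r:.2f} -> variance explained = {r**2:.0%}")
    # prints 4%, 12%, and 25%, respectively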

Why then, despite such a clear warning from expert statisticians, has there been legislation at both the national and state level to use VAM for high-stakes purposes? Could it possibly be the outsized political influence of the largest private foundation in the world and other foundations like it? And what damage has this outsized influence done to our country’s public education system?

It’s certainly impacted teacher morale. Veteran teachers are leaving the profession, taking valuable institutional knowledge with them.

Four years ago, in a meeting with the CTNewsJunkie editorial board, Gov. Dannel P. Malloy made the outrageous, nonsensical claim that teachers leaving the profession had nothing to do with such punitive policies. When he was provided with research to the contrary, his reply was silence and a determination to stay his clearly detrimental course.

And problems have been surfacing on a regular basis elsewhere in the country. Earlier this week, Politico reported that Eva Moskowitz’s Success Academies, one of the poster children among charter chains for high test scores, fired an ethnographer after he said his research suggested that Success might have achieved its higher scores through cheating.

From Politico’s reporting:

“It seems possible if not likely that some teacher cheating is occurring at Success on both internal assessments and state exams,” reads the July report by [Roy] Germano, which was titled “Research Proposal: An Investigation into Possible Teacher Cheating.”

And that’s leaving aside “master teaching” that looks more like child abuse and the practice of “counseling out” special education students. Success Academies appears to be following in the footsteps of other districts that placed too great an emphasis on test scores — Atlanta, Washington, D.C. under Michelle Rhee, the Texas “miracle” that really wasn’t. The list goes on. Garbage in, garbage out indeed.

Corporate education reform groups like CCER and ConnCAN will keep issuing indignant press releases, devoid of real facts, about why test scores should be a large part of teacher evaluations. But ask them for their research, and when they provide it, apply some “rigor” to your critical thinking: Who funded that research? Does it square with the American Statistical Association’s guidelines? Or is it just another example of Robert Merton’s self-fulfilling prophecy?

Asked how the Lederman ruling might impact Connecticut, Abbe Smith, Communications Director for the state Board of Education, said: “We have not reviewed the ruling, but point out that the ruling applies to a specific case in a different state that has a different evaluation system. The stakeholders of PEAC continue to work to refine and strengthen Connecticut’s educator evaluation and development system.”

The stakeholders of PEAC, the state Board of Education, and the legislature — particularly in these tough budget times when districts are being forced to cut funding — need to think about what really works for education based on peer-reviewed research.

It was encouraging to learn from CABE’s General Counsel, Patrice McCarthy, that PEAC is reviewing all of the testing our kids are being subjected to, with a view to eliminating overlap. There isn’t just a financial cost to all this testing; there’s an emotional cost to the children, and an opportunity cost in the learning they lose during the weeks upon weeks (in some schools, months) devoted to standardized testing.

We can’t let big campaign contributions influence what is really best for our children. Democracy depends on it.

Sarah Darer Littman is an award-winning columnist and novelist of books for teens. A former securities analyst, she’s now an adjunct in the MFA program at WCSU, and enjoys helping young people discover the power of finding their voice as an instructor at the Writopia Lab.

DISCLAIMER: The views, opinions, positions, or strategies expressed by the author are theirs alone, and do not necessarily reflect the views, opinions, or positions of CTNewsJunkie.com.
