As members of a state panel continue to wrestle over the best way to rate teachers, principals in the Hartford schools are being taught how to recognize good teaching when they see it.

The training is part of that city’s brand-new teacher evaluation system, and state officials may be looking at the same model when they roll out a plan to measure teacher performance in 10 pilot school districts this fall.

Hartford’s system is based on the latest version of Charlotte Danielson’s “Framework for Teaching,” which is being used in Chicago, Los Angeles and Pittsburgh schools and in several states that are moving toward more rigorous teacher evaluations.

State Education Commissioner Stefan Pryor confirmed last week that the state is considering hiring Danielson, a respected national education expert, as a consultant to help fine-tune its new teacher evaluation system.

Danielson gave a presentation in April before a subcommittee of the Performance Evaluation Advisory Council (PEAC), the panel now charged with putting the finishing touches on the evaluation system.

Pryor said he was interested in Danielson because her methods have been tested and are research based.

“It appears that most people are interested in exploring her work further,” said Patrice McCarthy, general counsel for the Connecticut Association of Boards of Education and a PEAC member.

As states move to tie tenure and firing decisions to evaluations, one of the biggest worries among teachers has been whether an administrator will know enough to judge them fairly and accurately.

Both Andrea Johnson, president of the Hartford Federation of Teachers, and Jennifer Allen, the chief talent officer for Hartford schools, said Danielson’s rubric leaves little room for subjective opinion because it is so specific.

The framework includes a checklist of 21 teaching practices sorted into four “domains”: Planning and Preparation, Classroom Environment, Instruction, and Professional Responsibilities. Teachers are rated as “unsatisfactory,” “basic,” “proficient,” or “distinguished.”

School administrators are trained to recognize what each of the rating categories looks like in action and must pass a certification test before setting foot in a classroom to observe.

Allen said the expectation is that two trained evaluators would be able to look at the same teacher’s lesson and come away with identical ratings. The raters are looking at not only what a teacher does, but how the students interact with the teacher and each other.

“We can’t be subjective in what we look for in the classroom,” said Allen. “We have to calibrate ourselves and be looking for the same things—what we know the research tells us about what helps students achieve.”

The Hartford Board of Education in March approved a $940,753 contract with Teachscape, the online component of Danielson’s program, which gives Hartford administrators and teachers access to thousands of videos demonstrating examples of poor, good, and exemplary teaching practices.

Teachers are also being trained in the framework so that they will know what evaluators expect of them. Johnson said teachers like that there will now be consistency in evaluations from school to school.

She said that in the past, teachers have complained about vague or inconsistent feedback from their evaluations, but the new system will provide them with concrete examples of how they can improve.

Johnson and Allen said the only major complaint they have heard so far is the amount of time it takes to implement the system.

It remains to be seen whether Danielson can bring this same consensus to those working on finalizing the state evaluation system, which must be approved by the State Board of Education by July 1.

The head of the state’s largest teachers’ union said it could be costly to bring Danielson in this late in the process, since her model would need major tweaks to fit into the framework already approved by PEAC.

“We highly respect Charlotte as a national expert. The problem is trying to meld her work into ours and doing it effectively so it’s fair, valid and reliable,” Mary Loftus-Levine, executive director of the Connecticut Education Association, said.

For example, she said Connecticut’s teaching standards don’t match up precisely with Danielson’s four “domains,” and the names for the rating categories under the PEAC guidelines are different: “below standard,” “developing,” “proficient,” and “exemplary.”

She questioned how Danielson’s training videos could be used by Connecticut school administrators if the terminology is not the same.

“Can we take a product that’s fully developed and make it fit what we’ve already done?” she asked.

Danielson’s framework also fails to address one of the stickiest issues when it comes to evaluating teachers: how to incorporate student test scores into teacher ratings.

Danielson’s expertise is in classroom observation, which makes up only 40 percent of a teacher’s evaluation under the new Connecticut guidelines. Student performance measures make up 45 percent, with half of that coming from test scores. The rest would be based on feedback from parents, students and peers.

Danielson has raised concerns in interviews about using student test scores to judge teacher performance.

“I do think that it’s reasonable for teachers to demonstrate that their kids have learned,” she said in an interview for the New York Times’ SchoolBook. “Beyond that, though, I’m not at all convinced that it can be done fairly for teachers based on what we know now, particularly in a high-stakes environment.”

In Hartford schools, officials are still working on a process to factor student achievement into teacher evaluations, said Allen.

“We know that we have to have multiple measures,” Allen said. “That’s our next conversation.”