Our new report on the New Haven model

As I promised in my last post, we’ve just released a new Issue Brief report on the new teacher evaluation model that was developed by the New Haven, Connecticut, public school system.  I tried to squeeze as much as I could out of the few public documents that describe the model in any detail, and I hope the New Haven system will soon release more details about how the model is to work.

As I make clear in the new report, I think the New Haven model is the way to go. They really have managed to come up with an ingenious solution to the problem of how to simply and fairly integrate student achievement as an evaluative element, and they have included other thoughtful elements as well, such as a peer review validation provision that responds to teachers’ concerns about administrator favoritism.  Taken as a whole, the model is a thing of beauty.

The trick for the stakeholder group is how to fully embrace a model like this with only a week to go.  There is clearly no hope that they can develop a fully fleshed-out model like this in a week; it took New Haven months to come up with this one.  Somehow, though, they have to act in a way that gives districts the green light to use or develop a model like New Haven’s.

The only solution, I think, is for the panel to adopt a broad outline of a model like this and hope that it will be enough to win the support of A.G. Mills.  For instance, they could adopt a set of standards that any model using student performance must meet, which would give districts a way to move forward.

They could say, for instance, that any model that uses student achievement data must have the following elements:

  • It must have a five-level rating scale like the one in the New Haven model.
  • A teacher cannot be found to be “needing improvement” or “developing” on the basis of student achievement alone, but neither can a teacher be found to be exemplary unless their student achievement growth scores are very high.
  • Student achievement must be calculated in terms of growth, with individual student achievement goals set each fall.
  • Student achievement must be measured by multiple assessments, and multiple years of data, if available, must be used to set achievement goals.
  • Provisions must be made to develop reliable and valid assessments for subjects that are not commonly covered by standardized tests.
  • The model must include a peer validation provision like the one in the New Haven model.
  • The model must describe in detail the professional development and support to be provided to teachers found to be “needing improvement.”

There are certainly more elements that should be included, but I don’t see any reason why the stakeholder group couldn’t broadly identify a dozen or so characteristics that evaluation models of this kind must have, agree on a list of them, then present that to A.G. Mills as evidence that such systems can indeed be developed here.

It is certainly a better approach than coming up with nothing at all, which, by the end of Monday’s meeting, seemed to be the most likely outcome.

The panel’s homework, then? Each group reviews the New Haven model and any other models of its choosing and produces a list of the qualities or characteristics it thinks a model developed in Maine should share.  Each group presents its list at the next meeting, a master list gets compiled and agreed to, and everyone gets in an elevator, heads up to the A.G.’s office, and sets it on her desk.

See?  Simple.