Stakeholders Meeting regarding Race to the Top Funds

After some time to reflect, a few thoughts on yesterday’s meeting in Augusta:

Considering the madness of the entire enterprise – taking two weeks to develop a model for performance-based teacher and principal evaluations that can be used in every Maine school district – the parties involved acquitted themselves relatively well.

  • The Department set a productive tone for the meeting and chaired it well, but in retrospect, its rush to begin looking at models was a mistake. The panel simply didn’t understand, even by the end of the meeting, what exactly it was tasked with doing. The panel’s obligations under state and federal law weren’t made as clear as they needed to be, and even the process by which the panel was to operate was never firmly established. By what means the panel would “approve” a model – its primary task and only reason for being – was never discussed in any depth, for instance. As a result of all this, the panel will need to back up and do some of this work at its next meeting.
  • The Department made a second mistake by announcing in an earlier communication that it intended to have the panel look at two evaluation models – TAP and Danielson – and then dropping Danielson altogether because it does not make use of student performance data. This rubbed some members of the panel the wrong way, because it made it appear as though the Department was doing precisely what it was doing, which was trying to get the panel to agree on adopting TAP.
  • Some sympathy, therefore, must be extended to the Department’s teacher induction guy, Dan Conley, who did his best to present the TAP model to a panel that (a) was simply not ready to begin looking at specific models, and (b) seemed to resent being presented with only one model to look at. A better move might have been for the Department to have Conley walk through a handful of models to give the panel some background on performance-based evaluation systems in general, and save the primer on TAP for the next meeting.
  • As for the panel itself, some high marks have to go to the MEA, whose amendment to LD 1799 created the panel in the first place. The union’s president, Chris Galgay, and its executive director, Mark Gray, did an admirable job outlining in refreshingly clear terms the union’s concerns with this approach and with the process by which it is being implemented. They acknowledged that student outcomes should inform teacher and principal evaluations in some way, but cautioned the panel that Maine’s history is replete with school reform approaches that were developed with high hopes in some conference room in Augusta, only to collapse at the implementation stage. They were right about this, and they generally represented the union very effectively.
  • MSSA’s Sandy MacArthur should be commended as well for attempting – unsuccessfully, as it turned out – to get the panel to establish some ground rules about how it would operate and how it was to decide which models, if any, were to be approved. MacArthur gamely brought up the topic at least three or four times, but the Department, in a rush to get on to the models, skirted these fundamental issues. MacArthur will doubtless be ready with the same questions next meeting.
  • The panel’s rising star is SAD 3’s Carrie Thurston, who was relentless in her efforts to squeeze more information out of the Department. She was right to ask for more clarity on the federal RTT requirements, right to criticize the Department for driving the panel toward TAP without showing it any other options, and right to remain focused, as she was throughout, on what the research really says about these models. She is the panel’s MVP thus far.
  • In his brief guest appearance, the governor was his usual platitudinous self, prodding the panel to get the job done while working as hard as he could to be on the side of the Department and the side of the MEA simultaneously. Hard to say what exactly the panel got out of it, to be honest.
  • Deputy Commissioner Angela Faherty, who chaired the meeting, concluded the panel’s work for the day by saying that they faced a series of challenges over the next two weeks. That is an understatement.
  • It is hard to see, frankly, how the panel is going to get this done. Following the Department’s lead and simply green-lighting the TAP model would be the simplest approach, but probably not the best course of action for Maine’s schools, and I say that as a fan of TAP. Declaring TAP the only acceptable model, good as it is, would severely limit Maine’s school districts: TAP is a solid model, but also a costly one. It doesn’t seem fair, as someone on the panel observed, to require that school districts looking to implement performance-based teacher evaluations use only a model that they must purchase from a vendor.
    The problem is that developing anything other than an evaluation system-in-a-can model like TAP takes way more time than the panel has. Denver’s custom-designed ProComp system took years to develop, for example. Knowing this, the Department is pushing TAP hard, but is getting some pushback already.

    An approach the panel might consider, therefore, is to avoid limiting school districts to a specific model such as TAP, and instead approve a set of qualities or conditions that any locally developed evaluation system must have. Districts could adopt TAP if they wanted to under such an approach, but could also adopt a model of their own design that shared some of the same features as TAP.

    Under the TAP model, for instance, “individual classroom achievement growth” typically makes up 30 percent of a teacher’s evaluation score, with other factors, including more traditional classroom observations, making up the balance. The panel could establish, then, that under no evaluation model may individual classroom achievement growth make up more than 30 percent of the total evaluation, and that at least 50 percent of any evaluation must come from multiple classroom observations graded against agreed-upon standards for teacher effectiveness (Danielson, for instance). Districts could then use TAP, which already works this way, but could also develop their own model based on TAP or a program like it, as long as it followed the guidelines established by the stakeholder group. By taking this approach, the panel could empower school districts to develop a number of models rather than limit them to one, while addressing some of the MEA’s concern that student performance might become too much of a factor in evaluation scores.
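    To make the arithmetic concrete, here is a minimal sketch – purely illustrative, with component names and thresholds assumed from the 30/50 figures above rather than taken from any actual Department or panel rule – of how a district might check a proposed weighting scheme against criteria like these:

```python
# Illustrative sketch only: the component names and thresholds below are
# assumptions based on the 30/50 figures discussed above, not an actual rule.

GROWTH_CAP = 0.30         # max share for individual classroom achievement growth
OBSERVATION_FLOOR = 0.50  # min share for standards-based classroom observations

def meets_criteria(weights: dict) -> bool:
    """Check a proposed set of evaluation weights (shares summing to 1.0)
    against the hypothetical panel criteria."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("component weights must sum to 1.0")
    return (weights.get("student_growth", 0.0) <= GROWTH_CAP
            and weights.get("classroom_observations", 0.0) >= OBSERVATION_FLOOR)

# A TAP-like scheme: 30% growth, 50% observations, 20% other measures.
print(meets_criteria({"student_growth": 0.30,
                      "classroom_observations": 0.50,
                      "other_measures": 0.20}))   # True

# A scheme leaning too hard on test scores fails the growth cap.
print(meets_criteria({"student_growth": 0.45,
                      "classroom_observations": 0.40,
                      "other_measures": 0.15}))   # False
```

    Under criteria like these, TAP’s standard weighting passes, but so would any home-grown scheme that respects the same cap and floor – which is the whole point of approving conditions rather than a single model.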

    The problem is that LD 1799, as amended by the MEA, uses the word “model” rather than a word like “criteria,” and thus appears to restrict the stakeholder group to approving a specific evaluation system in its entirety. The stakeholder group does hold all the cards here, though, and were it to approve a set of criteria that the MEA could accept, I can’t imagine anyone, including the AG, making too much of a fuss about what the word “model,” as used in LD 1799, actually means.

    In any event, yesterday represented a somewhat inauspicious start to the panel’s work, but a bit of groundwork was laid, and the bigger issues around these evaluation models, especially from the MEA’s perspective, got their first airing. There are very good people on the panel, so there remains hope that, despite the short time frame and the lack of clarity about what to do and how to do it, something good can come of all this.

    (By the way, is all of this even worth it? Do we stand a chance of winning an RTT grant even if the stakeholder group pulls this off? Actually, our odds do seem to be improving by the day. The Fordham Institute’s Andy Smarick, who has been following RTT as closely as anyone in the nation, says that as many as 14 states may not apply in the second round. There could be money enough in the second round, he says, to fund “10-15 winners.” Hmmmm….)