House Bill 399 was heard in the Senate Education Committee today, June 29, 2016, the penultimate day of the 2015-2016 legislative session. The bill, which passed the House unanimously (with two Representatives absent and one amendment), was one of only two bills heard by the committee, yet the public filled the small meeting room beyond capacity. Among the onlookers were the usual suspects: individuals from the Department of Education, the Executive Director of the State Board of Education, a handful of educators, lobbyists, and members of the general public affiliated with various groups of their own. It was the general tenor of the public comment that compelled me to write this post. Although I have studiously refrained from engaging in the raging online debate up to this point, I do feel that misconceptions need to be cleared up and that the process, as transparent as it has been, should be outlined.
Please note that all of the information I am sharing comes directly from the information that can be found through the online state calendar if you care to scroll back through each and every Department of Education meeting from September 15, 2015, through March 8, 2016. The only two meeting dates that do NOT have associated minutes were November 9 and 30 of 2015, although draft minutes are floating around.
Let’s take this back to the beginning, shall we? That’s always a good place to start.
In April of 2015, I was contacted by the Delaware State Education Association to participate in a workgroup of teachers, specialists, and administrators, co-facilitated by the Delaware Association of School Administrators. The pitch was novel: get teachers, specialists, and administrators in a room, give them the task of discussing the evaluation system, and see what happens. As I had undergone the training for new administrators at the Department of Education, was at one time a credentialed observer, and have a continued interest in and work with the evaluation system, I understood why I was invited. As for the other members of the workgroup, I cannot say, but as I got to know them I realized that they were all amazing individuals with spot-on observations and unique perspectives on the evaluation system. Only one of the nine other educators did I know prior to our first meeting.
On April 21 we met, broke into groups, and began looking at the current rating system: first, to propose aligning formative component ratings with summative component ratings, and second, to consider how the summative component ratings could roll up into an overall rating for teachers and specialists. The groups were random, although an effort was made to mix teachers and specialists with administrators so that no group had just one category of educator. By the end of our allotted time together, we realized that more work needed to be done, and we were eager to continue, so we met again a week later, on April 28.
After the two work sessions, the team had come up with terminology that would be consistent across formative and summative categories; designed a numerical system that would accompany the ratings and reduce the potential for inconsistencies and “discretion” in the system; and made a few recommendations addressing issues we had discussed, including an annual summative process (instead of the current biennial process) and a pilot for the numerical system (which is being referred to publicly as an algorithm).
The workgroup’s recommendations and proposals were presented to the DPAS II Advisory Committee, a group of educational stakeholders who meet regularly to discuss the evaluation system and suggest changes as necessary. Based on feedback from the committee members, the workgroup reconvened on May 12, 2015, and refined the process accordingly. What carried forward into HB 399, and bears mentioning here, is the mathematical algorithm: specifically, how it was established and how the “cut scores” were set.
There was never really a question that an Unsatisfactory rating should be unacceptable, so it was assigned a numerical value of 0. To further differentiate Unsatisfactory from Basic, which can be acceptable as a starting point in some circumstances (hence the need for discretion), Basic was assigned a value of 2, with Proficient and Distinguished assigned values of 3 and 4, respectively. It was agreed that, if multiple data points were available for specific criteria and/or components over the course of a two-year summative cycle, those data points would be averaged to produce an overall score for the summative rating. It may be important to note here that, under current regulation, all teachers and specialists must be evaluated by a credentialed observer at least once a year, with recommendations and accommodations made to facilitate more frequent observations. For instance, certain categories would warrant additional evaluations (novice status, under improvement, etc.), and shorter observation times could be used for supplemental evaluations as necessary.
Let me say that again. Under current regulation, every teacher and specialist must be observed and receive a formative feedback document at least once a year. The summative rating can be done every year under current regulation, though it must be done at minimum every two years.
Let’s say I am evaluated. In Component I, Planning and Preparation, I receive Proficient scores on all 5 criteria; that means I have earned an average score of 3, Proficient, for Component I. In Component II, Classroom Environment, I receive Proficient scores on 2 criteria and Basic on the other 2; I have earned an average score of 2.5, which falls under Basic. In Component III, Instruction, I receive an Unsatisfactory on 2 criteria and a Basic on the other 3; that averages to 1.2, an overall Unsatisfactory rating. In Component IV, Professional Responsibilities, I score Proficient on all 4 criteria, earning an average score of Proficient for that category. Finally, in Component V, Student Improvement, I earn an Unsatisfactory, giving me an overall score of 0 for that area.
Component I = 3. Component II = 2.5. Component III = 1.2. Component IV = 3. Component V = 0.
Total rating is 1.94 (the average of the five component scores), which puts me into the Basic category. Which we could have guessed, because so many of my scores are low. These ratings would also trigger an Improvement Plan, and my teaching career would be in jeopardy unless I followed the plan and earned higher ratings in the next observation and evaluation.
I would be a Basic teacher even with two of five Components rated as Proficient.
I’ve earned 11 Proficient ratings at the criteria level, 5 Basic ratings, and 3 Unsatisfactory ratings, yet am still rated as Basic and warranting an Improvement Plan.
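The averaging described above is simple enough to sketch in a few lines of code. Here is a minimal Python version, assuming only what the post states: Unsatisfactory = 0, Basic = 2, Proficient = 3, Distinguished = 4, criteria averaged within each component, and component averages averaged into the overall score. The function and variable names are mine for illustration, not anything from the actual pilot system.

```python
# Point values from the workgroup's proposal; everything else in this
# sketch (names, structure) is illustrative, not the real implementation.
VALUES = {"U": 0, "B": 2, "P": 3, "D": 4}

def component_average(criteria_ratings):
    """Average the point values of a component's criterion ratings."""
    points = [VALUES[r] for r in criteria_ratings]
    return sum(points) / len(points)

def overall_score(components):
    """Average the component averages into one summative score."""
    averages = [component_average(c) for c in components]
    return sum(averages) / len(averages)

# The worked example from the post:
components = [
    ["P"] * 5,                  # Component I:   5 Proficient          -> 3.0
    ["P", "P", "B", "B"],       # Component II:  2 Proficient, 2 Basic -> 2.5
    ["U", "U", "B", "B", "B"],  # Component III: 2 Unsat., 3 Basic     -> 1.2
    ["P"] * 4,                  # Component IV:  4 Proficient          -> 3.0
    ["U"],                      # Component V:   Unsatisfactory        -> 0.0
]

print(round(overall_score(components), 2))  # 1.94
```

Note that the sketch stops at the numerical score; where exactly the cut lines fall between Unsatisfactory, Basic, Proficient, and Distinguished is the cut-score question discussed next.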
That’s how the algorithm would work. The cut scores are based on a full workup of the entire set of possible rating combinations, which I calculated in an Excel document with the gentle prodding and patient guidance of my husband, who neither saw nor cared what I was actually doing, just told me how to get it done. This document was made available to the workgroup, and because even a quick glance at the numbers revealed areas where a rating could be significantly inaccurate, a pilot program was suggested. The pilot was generally regarded as a simple thing to do, as it seemed easy to have one system (Bloomboard, for instance) build in the algorithm so it calculates automatically.
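To give a flavor of what that workup involves, here is a small sketch in Python (not the actual spreadsheet) that enumerates, for a single hypothetical four-criterion component, every possible combination of ratings and the distinct averages those combinations can produce. Only the point values come from the workgroup’s proposal; the rest is my own illustration.

```python
# Enumerate every multiset of ratings a four-criterion component could
# receive and collect the distinct component averages. The point values
# (0, 2, 3, 4) are the workgroup's; nothing else is from the spreadsheet.
from itertools import combinations_with_replacement

VALUES = {"Unsatisfactory": 0, "Basic": 2, "Proficient": 3, "Distinguished": 4}

def possible_averages(num_criteria):
    """Distinct component averages over all rating combinations."""
    combos = combinations_with_replacement(VALUES.values(), num_criteria)
    return sorted({sum(c) / num_criteria for c in combos})

averages = possible_averages(4)
print(len(averages))  # 16 distinct averages, from 0.0 up to 4.0
print(averages)
```

Repeating this for each component, and then for every combination of component averages, is how the full set of possible overall scores, and the places where a cut line could mislabel a teacher, can be mapped out.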
Let me repeat: these workgroup meetings, though not “public”, resulted in group consensus on recommendations that were presented to, refined based on feedback from, and then endorsed by the DPAS II Advisory Committee. Furthermore, all documents created were made available to the Department of Education, DSEA, DASA, and the Advisory Committee, as well as to the Sub-Committee later on.
Who was on the workgroup? Who were these educators who sat in a room together and dared to create a set of recommendations and a proposal to change the entire evaluation system so dramatically (end sarcasm font) without the possibility of public input?
The members were published in the document made available to all entities listed above, and are as follows: Sherry Antonetti, Clay Beauchamp, Cheryl Bowman, Kent Chase, Charlynne Hopkins, Chris Jones, Jackie Kook, Suzette Marine, Dave Santore, and Nancy Talmo. Four teachers, two specialists, and four administrators.
Two of those individuals were also sitting members of the DPAS II Advisory Committee.
Six of those individuals became members of the DPAS II Advisory Sub-Committee.
When it is alleged, as it has been, that the information from the workgroup was never shared with the DPAS II Advisory Sub-Committee and that no questions were asked about it, the record shows otherwise. A presentation on the workgroup’s recommendations was made on September 28, 2015, at the second meeting of the Sub-Committee. Discussion was held around the recommendations over the September and October meetings, and it is noted several times in the minutes that “discussion was held”, though not every word uttered was captured. Many committee members have their own notes, but the minutes could not possibly transcribe the level and detail of conversation that occurred.
The pilot was requested at least in part because there was no way to foresee all the possible kinks in the system; rather than going full-on statewide with an untested program, we felt it was more responsible to try it out and make sure it was accurate. After all, these are people’s jobs we are talking about, as well as the education of students. We must get it right, even if that means it cannot be done hastily.
I do not pretend to speak for this diverse, talented, dedicated group of individuals. The legislation was inspired by the recommendations of the Sub-Committee, and although the words may not reflect verbatim the discussions that were held (after all, even the minutes don’t) and this may still be an imperfect system, the group did work hard and have impassioned discussions about what would be best not only for the educators in the system but also for our students. Keep in mind that The Conjuring was inspired by a true story…
One final point of note, since the data is readily available in the published minutes.
On September 15, the Department of Education was represented by Shannon Holston, who is documented as arriving at 4:45, and Christopher Ruszkowski, documented as arriving at 5:50. The meeting began at 4:30.
On September 28, the Department of Education was represented by Angeline Rivello and Laura Schneider.
On October 12, the Department of Education was represented by Angeline Rivello.
On November 9, the Department of Education was represented by Eric Niebrzydowski, Shanna Ricketts, and Laura Schneider. *Note that these are draft minutes, as final approved minutes are not available on the State Calendar.
On November 30, minutes were not available in draft or final form.
On December 14, the Department of Education was represented by Eric Niebrzydowski, who is documented as leaving at 4:30, Laura Schneider, who is documented as leaving the meeting at 3:30, and Christopher Ruszkowski. The meeting began at 2 pm.
On January 13, the Department of Education was represented by Atnre Alleyne, who is documented as arriving at 4:49. The meeting began at 4:30.
On February 1, the Department of Education was represented by Atnre Alleyne, Laura Schneider, Shanna Ricketts, and Christopher Ruszkowski, who is documented as arriving at 5:01. The meeting began at 4:30.
On February 16, the Department of Education was represented by Atnre Alleyne, Shanna Ricketts, Laura Schneider, and Dr. Steven Godowsky, Secretary of Education.
On February 29, the Department of Education was represented by Atnre Alleyne, Shanna Ricketts, and Christopher Ruszkowski, who is documented as leaving before Public Comment.
On March 8, the Department of Education was represented by Christopher Ruszkowski, who is documented as leaving before Public Comment.
For those of you keeping track, that brings Department of Education representation to a total of 8 different individuals, including the Secretary of Education, and no single representative was present for every meeting. Again, this is based on the minutes from each available meeting (including one set of draft minutes I personally have for a meeting whose minutes are not posted online), which list the individuals under “Department Staff/Other Members” and do not differentiate between those who sat at the table and those who did not. The final submitted report lists only Christopher Ruszkowski, Atnre Alleyne, Eric Niebrzydowski, and Angeline Rivello as Department of Education representatives (non-voting members) of the Sub-Committee. Any notation of late arrival and/or early departure is from the minutes themselves and is included solely to be comprehensive in providing information. Of the 10 meetings for which documentation is in my possession, no single DOE representative attended more than 5.