In their new paper, Unintended Consequences of School Accountability Policies: Evidence from Florida and Implications for New York, authors Rajashri Chakrabarti and Noah Schwartz contend that schools in danger of being identified as “failing” under Florida’s school grading system had a perverse incentive to “game the system” by strategically placing poor-performing students into exempt categories to avoid negatively impacting their school’s grade. The paper implies that improvements resulted from manipulation of the calculation’s rules rather than from school accountability measures themselves, but the authors’ sweeping conclusions fail to recognize Florida’s efforts to raise the bar over the course of 14 years of grading schools.
First, this new paper bases its analyses on data from the 1999 and 2000 calculations of school grades – the first two years of the policy. The model used for Florida’s school grades in those years was significantly different from the model used today – and for most of the past 14 years. Namely, in those early years, school grades were based exclusively on student performance and included no measures of student growth. Student learning gains have been included in school grades since 2002.
Second, though the authors accurately describe the exclusion rules that were in place in 1999 and 2000, they fail to point out that all students with disabilities and all ELLs have been included in the learning gains components of school grades since 2005 and, with the exception of newly arrived ELLs, have been included in the performance components of school grades since 2012. Thus, at least since 2005, schools have had no incentive to strategically classify students into these categories, especially since Florida has witnessed substantial gains among its students with disabilities and ELLs. Additionally, Florida now provides schools additional credit for making substantial gains with their lowest-performing students, many of whom are also students with disabilities and ELLs. If anything, Florida’s school grading model gives schools more incentive to focus on all their students, not simply the high performers. The authors correctly laud New York City for including all students and providing schools additional credit for making gains with high-needs groups. However, they fail to note that Florida’s policies do the same.
It appears that the authors themselves realize their findings are shaky at best – with quotes like “the type of gaming that may have occurred in Florida” and “while the data do not allow us to pinpoint the exact cause [proactively classify students to ensure they are receiving the proper services or strategically classify to game the school grade] of such classifications, there seems to be somewhat more evidence that strategic classifications are the more likely driver of the results” – not exactly confident, definitive conclusions. After speculating on possible reasons why schools may be gaming, they conclude with “however, the implication that strategic classifications play a role should only be taken as suggestive, and not conclusive.”
The premise that exemptions can lead to “gaming” is inarguable. However, Florida has recognized this over the last decade and has broadened the inclusion of students in school grades. And it has used other measures to discourage “gaming,” such as crediting student performance at alternative schools back to students’ zoned schools and requiring that schools earning enough points for top grades actually demonstrate gains with at least half of their most struggling students. By including all students and implementing policies that keep the focus on the most struggling students, schools can concentrate on the fundamental goal of raising student achievement rather than fiddling with the minutiae of the calculation. Florida’s ever-improving school grades model has done just that over the last 14 years.