A recent analysis of students participating in Alabama’s tax-credit scholarship program shows encouraging signs for the young program. While limited by available data, researchers at the University of Alabama reviewed test score results for participating students and compared them to those of public school students in the state, finding that participating students tend to outperform similarly situated students in public schools.
What We Know About the Participants
- Participating students are low-income. To enter the program, students must have a family income that qualifies them for a federal free or reduced-price lunch.
- One-third of students are zoned for a failing public school. Thirty-four percent of scholarship students are in the attendance zone of a public school deemed “failing” by the state. Fewer than 6 percent of public schools in the state are deemed failing, meaning scholarship students are far more likely to be zoned for a failing school than the average student in the state.
- Sixty-two percent of students are black, 20 percent are white and 11 percent are Hispanic.
- The average scholarship is 40 percent less than what is spent per-student in public schools. Alabama public schools spend about $9,000 per student, while the average scholarship amount was less than $5,500 in the year studied.
Scholarship students are required to take either the state test or a nationally norm-referenced test in grades 3-8, 10 and 11 (the same tested grades as public schools). Of the 4,076 participating students in 2016-17 (the year studied), only 1,991 were in these tested grades and had test data usable for research purposes.
Analysis 1: How do scholarship students compare with peers in Alabama public schools?
The researchers compared performance on two tests – ACT and ACT Aspire – among three groups: scholarship students, low-income Alabama public school students and all Alabama public school students. The number of scholarship students who took these tests was relatively small in the 2016-17 school year: 46 scholarship students took the ACT and 331 took ACT Aspire. So the comparison rests on a sample of 377 scholarship students, out of more than 4,000 participants, set against all Alabama public school students.
As the researchers note, “There were very few instances where the percentage of [public or private school] students reaching proficiency was 50 percent or higher, suggesting there is need for improvement in the state as a whole.” However, “it is noteworthy that in 25 out of the 34 (78 percent) comparisons made there was no significant difference between the scholarship recipients and students attending public schools in the state…” (emphasis added).
Charts 8 and 9 summarize some of the data. Perhaps it is just me, but scholarship students outperforming low-income public school students in 12 of 14 categories seems significant.
Analysis 2: How do scholarship students compare with peers nationally?
The larger part of the report compares scholarship students with students across the country who took the same nationally norm-referenced test. This approach allows us to observe how the average participant is performing in a given year. While interesting, the information in a vacuum does not tell us how the program is performing, because we don’t know the individual students’ starting points. As the researchers note:
In interpreting norm-referenced tests, it is important to be mindful that the percentile scores are an assessment of students’ performance relative to other children at the same grade level in the country. By themselves, the scores do not indicate if a child has acquired the knowledge and skills expected for their grade…. As a marker for performance, however, the scholarship recipients’ mean scores should be close to the 50th percentile, if as a group they are achieving at levels similar to others in the U.S.
Given their disadvantaged demographics, we would be shocked if the average participant scored better than the average test-taker across the country.
The better analysis, the authors acknowledge, would look at average scholarship student growth over time. For instance, while scoring at the 25th percentile is below average, a score at the 30th percentile in the second year would be considered impressive growth. But the researchers had trouble doing this for two reasons:
- Lack of data. It is nearly impossible to conduct a convincing growth analysis with three years of data for small samples of students.
- Difficulties interpreting data. The lack of data means that changes in test scores can only really be observed as changes across different cohorts of students. As the authors state, “if proficiency rates remain constant from year to year, it is not clear whether that is due to there being no changes in individual student scores or if instead that the percentage of students who gained in proficiency was off-set by a similar percentage who dropped in proficiency.”
While the researchers did a valiant job within these limitations, there is little that we can glean from the data. As the program grows and more students participate for longer periods of time, the data will become clearer.
While this type of analysis is important, the research limitations keep us from drawing large conclusions about the impact of this program. In the small sample of students compared with students statewide, we can see some bright spots and some weak spots. But these rely on a single-year snapshot of student performance. The results can be affected by many factors we have little information on – a student’s prior performance, parent involvement, learning disabilities and school safety issues, among others.
For instance, research on a similar program in Florida found that participating students were more likely to enter the program with lower math and reading scores and to come from lower-performing public schools with higher rates of violence, compared to peers who were eligible but chose not to participate. If the same is true in Alabama, then the results are even more impressive.
As the program matures and more students participate for longer periods of time, we will be able to learn more from the data. In the meantime, this is an encouraging analysis of the young program.
About the author
Adam Peshek @AdamPeshek
Adam Peshek is Managing Director of Opportunity Policy at ExcelinEd, where he provides strategic support to state leaders interested in developing, adopting, and implementing policies that increase educational options for children. He has provided expert testimony in more than a dozen state legislatures and is a frequent commentator on ESAs, school choice, and education policy across the country. He is also the co-editor of the first published volume on ESAs, Education Savings Accounts: The New Frontier in School Choice. Adam currently resides in Atlanta, Georgia and is a Senior Fellow with the Beacon Center of Tennessee.