Making the Most of Interim Assessment Data: Lessons from Philadelphia
This report shares the results of Research for Action's (RFA) multimethod study of the use and impact of interim assessments in 10 low-performing elementary schools in Philadelphia. Interim assessments were a central component of the district's Managed Instruction System (MIS), which also included a core curriculum.
The study focuses on how schools and instructional communities within those schools (grade-level teams, instructional leadership teams) use interim data for instructional improvement at the classroom, grade, and school levels. It is based on a theory of action outlining how the use of interim assessments is expected to influence student learning. The theory considers the influence of the larger policy context and dimensions of school capacity on a four-step feedback system that translates interim data into improvement action, which in turn yields gains in student learning. The four dimensions of school capacity are human capital, social capital, structural capacity, and available solutions. The four-step feedback system, which operates at multiple levels, includes accessing and organizing data, sense-making to identify problems and solutions, trying solutions, and assessing and modifying solutions.
The study focuses on four primary research questions:
What were district leaders’ expectations for how school staff would use Benchmark data and what supports did they provide to help practitioners become proficient in using data to guide instruction?
Were teachers receptive to the Managed Instruction System, particularly the Benchmark assessments? Did they use them? Did they find them helpful?
Did students experience greater learning gains at schools where the conditions were supportive of data use: that is, where the Managed Instruction System was more widely accepted and used and where analysis of student data was more extensive?
What organizational practices ensure that the use of Benchmark data contributes to organizational learning and ongoing instructional improvement within and across instructional communities?
Study findings and conclusions about the factors that predicted gains in student achievement offer guidance for system implementation. Teachers' satisfaction with the usefulness of these measures did not predict achievement gains; instructional leadership, collective responsibility, and use of the core curriculum did. The study concludes: "If practitioners' sense-making does not lead them to seek or develop new and robust instructional interventions, if these interventions are not actually implemented or not implemented well, or if their effectiveness is not assessed, then teaching and learning are not likely to improve. Data can make problems more visible, but only people can solve them."