How to Improve ELA Proficiency Scores Fast: 5 High-Impact Strategies for School Leaders
By Susan Levenson and Julie Webb
You’ve done the hard work: aligned the curriculum, supported teachers, monitored attendance, strengthened reading instruction. Then the state English language arts (ELA) results arrive, and the overall proficiency rate is still lower than expected. What’s going on?
In our work with schools examining summative assessment data, a pattern often appears: Students’ writing performance is pulling down the entire ELA score. Not because students can’t think. Not necessarily because they don’t understand the text. But because a sizable number of students are not attempting the constructed-response items, are writing too little to score, or are producing responses that are otherwise non-scoreable.
This is one of the most fixable and overlooked drivers of ELA outcomes. The good news is that there are practical strategies for using writing assessment data to identify what is happening, why it is happening, and what to do next.
Why Writing Can Make or Break ELA Performance
In many state summative ELA assessments, constructed-response items and extended writing tasks can account for 20 percent or more of a student’s total score. That means writing isn’t an “extra”; it’s a major lever. And when writing data are disaggregated at the task and score-code levels, many schools we’ve worked with find the same troubling reality:
- A significant portion of students don’t attempt writing tasks.
- Of the responses students do attempt, many are deemed non-scoreable (or earn a zero) because of insufficient word count or genre confusion.
- Taken together, these conditions can make a school’s non-scoreable/zero-score rate extremely high.
When those patterns show up, they can depress overall ELA performance in ways that typical reading interventions might not fix. So, what is a school leader to do?
What to Do When Writing Pulls Down ELA Results
The five steps below outline practical, schoolwide actions that, in our experience working with schools, can increase scoreable responses and improve writing performance.
Step 1: Start With the Most Actionable Data Point—Scoreability
Before you analyze rubric dimensions or instructional alignment, start with a simpler, more actionable question: Are students producing responses that can actually be scored? Scoreability is a high-leverage entry point because it helps you distinguish between two very different challenges. In Problem A, students are writing but not yet meeting expectations—this often points to instruction, task design, or rubric alignment. In Problem B, students aren’t successfully engaging with the task at all, which can reflect barriers such as limited stamina, confusion about prompt expectations, challenges with test navigation, or difficulty accessing the task.
To understand which problem you’re facing, you will need to analyze your high-stakes summative writing reports closely, working with your local data experts to mine the fine print. At minimum, separate the rates of low-but-scoreable responses from blank/no-response rates, non-scoreable codes (for example, off-topic responses, copied text only, or illegible/insufficient writing), and zero scores. The key takeaway is practical: If you’re seeing high non-attempt or non-scoreable rates, your first move is not to overhaul writing instruction; it is to determine what students need in order to reliably produce scoreable responses under testing conditions.
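To make this distinction concrete, here is a minimal sketch of the kind of breakdown we have in mind, written in Python with pandas. The file name, score-code values, and column names (score_code, score) are hypothetical placeholders; every assessment vendor labels these fields differently, so substitute the names from your own export.

```python
import pandas as pd

# Hypothetical export: one row per student response to a writing item.
# Assumed columns: score_code ("SCORED", "BLANK", or a non-scoreable
# code starting with "NS_") and score (rubric score when SCORED).
responses = pd.read_csv("writing_item_responses.csv")

is_scored = responses["score_code"] == "SCORED"
blank_rate = (responses["score_code"] == "BLANK").mean()
non_scoreable_rate = (
    responses["score_code"].str.startswith("NS_").fillna(False).mean()
)
zero_rate = (is_scored & (responses["score"] == 0)).mean()
low_but_scoreable_rate = (is_scored & responses["score"].between(1, 2)).mean()

print(f"Blank / no response:     {blank_rate:.1%}")
print(f"Non-scoreable codes:     {non_scoreable_rate:.1%}")
print(f"Scored zero:             {zero_rate:.1%}")
print(f"Low but scoreable (1–2): {low_but_scoreable_rate:.1%}")
```

Read the output against the two problems above: if blanks and non-scoreable codes dominate, you are likely facing Problem B and should start with access and testing conditions; if zeros and low-but-scoreable responses dominate, you are closer to Problem A and the work is instructional.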
Step 2: Disaggregate by Task Type, Prompt, and Student Group
Once scoreability is on the table, the next step is to disaggregate the data to locate the pattern—not the average. Overall writing averages often hide the real story, especially when only certain prompts or task types are driving the overall drop in performance.
Start by looking closely at the writing tasks and prompts. Different writing demands produce different failure points, so it’s important to separate results by categories such as constructed-response versus extended writing, single-text writing versus writing from multiple stimuli, and genre or format. Then look at student groups and entry conditions by disaggregating scoreability and writing scores across grade level, school, or course, and across student groups as appropriate for your context. Pay special attention to students with historically lower completion rates, such as chronically absent students, emerging Multilingual Learners, or students receiving specific supports. The goal is to pinpoint where scoreability breaks down so the response can be targeted rather than generic.
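As a rough illustration of this disaggregation, the sketch below extends the hypothetical export from Step 1 with placeholder columns for task type, prompt, grade, and student group; again, substitute the field names from your own reports.

```python
import pandas as pd

responses = pd.read_csv("writing_item_responses.csv")
responses["scoreable"] = responses["score_code"] == "SCORED"

# Where does scoreability break down? First by task type and prompt...
by_task = (
    responses.groupby(["task_type", "prompt_id"])["scoreable"]
    .agg(scoreable_rate="mean", n="size")
    .sort_values("scoreable_rate")
)

# ...then crossed with grade level and student group.
by_group = (
    responses.groupby(["grade", "student_group"])["scoreable"]
    .agg(scoreable_rate="mean", n="size")
    .sort_values("scoreable_rate")
)

print(by_task.head(10))   # prompts with the lowest scoreable rates
print(by_group.head(10))  # groups with the lowest scoreable rates
```

Sorting by scoreable rate rather than averaging across everything keeps the focus where it belongs: on the pattern, not the average.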
Step 3: Interpret the “Why” Behind Non-Scoreable and Zero Responses
Non-scoreable responses are rarely random. When you see patterns in blanks, non-scoreables, or zeros, they typically signal predictable gaps that can be addressed once you name them clearly.
One common cause is task comprehension and procedural knowledge. Students may not fully understand what the prompt is asking, what “use evidence” looks like in practice, or how much writing is considered sufficient. A common task type, for example, is narrative writing that requires using details from a mentor text, a structure that is unfamiliar to many students. Another frequent driver is stamina and time management. Students may begin writing but run out of time, resulting in incomplete drafts, minimal responses, or only partial evidence integration. In many assessments, students also struggle with stimulus integration demands, especially when tasks require them to read closely, synthesize information across sources, and cite or reference sources appropriately while still managing structure and conventions. Finally, some issues reflect misalignment between classroom writing and assessment writing. Students may write frequently in class, but not in ways that match the demands of summative assessments—such as writing to sources, organizing multi-paragraph responses under time constraints, and responding to rubric-driven expectations.
The administrative takeaway is straightforward: Non-scoreable data patterns point you to the barrier, while rubric scores identify the skill gap. You need both, but you may make faster progress when you interpret them in the right order.
Step 4: Build a Yearlong System for Assessment-Aligned Writing Practice (Without Teaching to the Test)
In our experience, the strongest improvements occur when students have sustained exposure to writing tasks that mirror the structure and demands of the summative assessment, paired with feedback that is timely and consistent. This is less about drilling test items and more about building predictable practice around authentic, standards-aligned writing behaviors.
In practice, a coherent yearlong approach typically includes regular, authentic prompts that reflect summative writing structures, routine practice with stimulus-based writing (not just opinion or “creative” writing untethered from texts), and short feedback cycles in which students revise rather than simply receive a score. It also requires calibration across classrooms so “meets expectations” means the same thing from one class to the next. The administrative move here is to avoid overfocusing on a single benchmark window; instead, ensure that every grade has a predictable cadence of writing practice tied to real expectations.
Step 5: Reduce Scoring Burden and Increase Consistency
Writing improvement often stalls because scoring is labor intensive and inconsistent across classrooms. Systems tend to improve faster when they put simple, sustainable structures in place that make high-quality scoring and feedback easier to maintain.
Traditional high-impact supports include common rubrics and anchor papers used across teams, short calibration routines during PLCs (often 10–15 minutes is enough to build alignment), and tools or templates that streamline feedback while keeping it grounded in rubric language. Newer AI-based scoring and calibration tools can also reduce the burden on teachers while speeding the delivery of actionable student feedback and data collection. The key takeaway is that writing outcomes improve when feedback is frequent, understandable, and consistent; improvement does not need to rely on unsustainable effort from individual teachers.
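Calibration progress can also be checked with very little tooling. The sketch below uses invented scores to show the idea: two teachers score the same anchor papers, the team reviews exact and adjacent agreement, and any paper with more than a one-point gap becomes the discussion item for the next PLC.

```python
import pandas as pd

# Invented data: two raters scoring the same six anchor papers (0–4 rubric).
scores = pd.DataFrame({
    "paper":   ["A", "B", "C", "D", "E", "F"],
    "rater_1": [3, 2, 4, 1, 3, 2],
    "rater_2": [3, 3, 4, 1, 1, 2],
})

diff = (scores["rater_1"] - scores["rater_2"]).abs()
print(f"Exact agreement:    {(diff == 0).mean():.0%}")  # identical scores
print(f"Adjacent agreement: {(diff <= 1).mean():.0%}")  # within one point

# Papers to discuss during the calibration routine: gaps of two or more.
print(scores[diff > 1])
```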
Writing Data Are an Early Warning System—If You Use Them
If your ELA scores are stuck, a focus on writing may be the hidden lever for getting unstuck. And a powerful entry point is better visibility into scoreability and task-level patterns.
When administrators treat writing as a system and not an isolated classroom practice, improvement becomes measurable. WestEd’s Formative Writing Framework provides the guidance and focus needed to carefully examine your school’s writing data and gain the insights you need to take steps toward improvement. Using this approach, schools we work with regularly report gains of 12–18 points in ELA scores in a single year. What this looks like in practice is captured in the reflection below from a principal whose school saw double-digit literacy growth:
“The initial data analysis and needs assessment conducted through WestEd’s Formative Writing Framework showed us that the majority of our students were either not attempting the writing performance tasks on critical assessments or had material gaps in understanding around audience and purpose, rendering their writing samples non-scoreable. The data-informed changes in teaching and learning fostered by the Formative Writing Framework contributed to unprecedented double-digit growth in our literacy scores this year. This approach works.”
Charlotte Klinock, Principal
Meadow View Elementary School
Susanville, California
If you’re ready to start, begin with the simplest question: Are our students consistently producing scoreable writing—and if not, where is the process breaking down?
