
Measuring What We Teach: Three Ways We Often Miss the Mark

David Stevens

CEO & Founder

Student assessment is really about improving student learning. We test because we need answers, answers to crucial questions that will guide instruction, planning, placement, and intervention:

  1. Is the student responding well to intervention?
  2. Is she catching up to her peers?
  3. Do we need to try something different?
  4. Are all students being challenged and making expected progress?

Good assessments will guide us in what we do next. Unfortunately, assessments sometimes fail to deliver the feedback we need, especially for struggling learners. One of the most common causes is surprisingly simple.

We aren’t measuring what we’re teaching.

To give us useful data, the assessment must align with our curricula and interventions. If the questions don’t target the learning, we’re not only squandering instructional time and resources. We’re squandering something more precious—windows of opportunity for learning and growth. We must test what we teach. That means using assessments that are properly aligned with both our instruction and our goals.

“And the lack of clarity typically comes from assessments that are not well aligned with interventions, leaving us at the end of the day wondering if what we are doing is working and doubting the capability of our students.”

Have you ever sat in a team meeting to discuss a particular student and what to do next for him? Did the meeting feel foggy and frustrating, with no clear path forward? I have found this is often because our assessment data does not make clear what is working and what is not. And the lack of clarity typically comes from assessments that are not well aligned with interventions, leaving us at the end of the day wondering if what we are doing is working and doubting the capability of our students.

If we are to avoid this pitfall, it helps to understand what misalignments look like. I’ve found that most fall into one of three categories:

Giving Static Grade-Level Assessments to Below Grade-Level Students

This is the most common misalignment. Static assessments give the same questions to all students, whether they’re performing above, below, or on grade level, which makes it especially hard to measure the progress of struggling students.

Let’s say a static assessment tells us our fourth-grade student is reading below grade level because he missed most of the questions. After a year-long intervention, he may have progressed from a kindergarten level to a third-grade level. But if our post-test is again a static fourth-grade assessment, he may still do very poorly despite the huge gains he made during the year, making it appear as if he made little progress. He ‘failed’ the grade-level assessment in the fall and then ‘failed’ it again in the spring. Adaptive assessments typically provide more accurate pre-test and post-test scores and better reflect the tremendous progress students often make from well-designed interventions.

With a static assessment, those gains, which should be viewed as encouraging evidence of growth and effort, are frustratingly invisible in the test results. Students, parents, and teachers are left demoralized and confused, all because the test didn’t measure what was actually taught.

To counter this problem, research published in Policy Insights from the Behavioral and Brain Sciences (and summarized in Science Daily) urges us instead to use adaptive testing that “makes use of standardized tests that adapt to students’ ability levels.”

Single-Skill Assessments for Progress Monitoring

Single-skill assessments focus exclusively on one skill and cannot give us the data we need to evaluate overall progress. If a middle-school student is tested in the fall on multiplication and division of fractions, but her teacher plans to cover those skills in the spring, the test results will be misleading. Or perhaps the student has already mastered the skill being tested yet has significant deficits in other areas. In both cases, a multi-skill assessment would give us a more accurate picture.

Single-skill assessments that focus on fluency (e.g., nonsense-word fluency in young readers or multiplication and division fluency in third grade) deserve particular caution. It may be that our intervention students have made dramatic progress with foundational skills and understanding but have yet to generalize that progress into fluency. Even worse, assessments that focus only on fluency can encourage the teaching of fluency to students who lack foundational skills and understanding.

This is one of the reasons the University of Washington’s Center for Teaching and Learning recommends that teachers “design test items that allow students to show a range of learning.”

Outdated Assessments

I’d love to say that outdated assessments are rare, but they aren’t. This is one of the most frustrating examples of misalignment: the curriculum has changed, but the assessment hasn’t. In these cases, the tests waste vital instructional time and give us little of value in return. Imagine how frustrating it is for a second grader to be asked to highlight the adjectives in a paragraph when she has not yet learned what an adjective is because that skill has moved to a later grade in the new curriculum. Or how discouraging it is for the teacher required to give an assessment poorly suited to her students. This type of misalignment does not give us the information we need to answer our most important questions.

The Consequences of Misalignment

Nothing makes an administrator, teacher, parent, or even student prouder than objective proof of progress, and nothing can deflate us quite like low test scores. In some cases, misaligned assessments rob us of this important confirming evidence; in others, they cloud the picture of how best to help our students.

Because so much depends on good data, we need reliable assessments that align with skills and knowledge taught in the classroom. Using the right tools can help us assess and optimize our interventions. Valid, adaptive test results can serve as beacons to guide our students’ educational journeys, but we must first commit to measuring what we teach.

Track My Progress is closely aligned with the Common Core State Standards. Its test blueprint is based on the Common Core, each question was written by a teacher trained in the Common Core to measure a specific standard, and each question was reviewed by multiple such teachers. The result: if you are teaching a curriculum aligned with the Common Core State Standards, you know you are measuring what you teach with Track My Progress.
