This year, I am delivering a decoding intervention to students in Tier 2 and Tier 3 reading groups. We work on decoding, spelling, vocabulary, and phonological awareness. Our goal is to read more quickly and accurately over time (fluency). Fluency is an essential prerequisite for comprehension, because students who struggle to read fluently often lack the spare cognitive resources to dedicate to comprehension, which is the ultimate goal of reading.
Every week, I administer the AIMSWeb R-CBM probe, a 60-second running record that captures students’ CWPM (Correct Words Per Minute) and errors, to each of my students. The hope is that through our weekly word study, students will show evidence of their learning by reading more quickly and accurately. However, I’ve always felt I was doing a pretty inadequate job of improving my students’ fluency. We practice reading out loud in class every day, but our gains in fluency are slow and hard earned. Most of my students’ AIMSWeb graphs are nearly flat, like the picture shown below, showing very little growth.
And yet, I see their growth in so many other ways every day. I see it in their confidence, their decoding abilities, vocabulary knowledge, and spelling – none of which is measured by AIMSWeb. But the question remains – how can I improve their fluency so that it is reflected in their scores?
I don’t know why I thought that simply reading more, and more often, would help my students. I had no systematic, direct approach; I just relied on sheer volume. We read interesting new texts every day and forged our way through the tough words together, yet there was negligible improvement. I needed to find a way to speed up our progress.
I recently learned about a few fluency concepts that sounded really exciting and easy to incorporate into my curriculum, and I dove right in. Important concepts for improving fluency:
Students need to read the same text multiple times (repeated oral readings)
Students need to analyze and improve on their own miscues.
Students need to understand why fluency is an important skill worth improving.
It seems simple enough; why didn’t I think of it sooner?
I began by choosing very short, leveled passages for us to work on. I work with groups of 4 students, and this activity should only be done individually (you don’t want students to hear each other, which would influence their own readings).
Next, I created a worksheet that allowed students to see and improve on their own miscues, keep track of their progress, and set goals for future improvement.
Here is my copy of the worksheet, which I put in a sheet protector so I could write on it with dry erase marker for each student.
Each of my students got their own copy of the worksheet, which they used to track their progress.
As you can see, this student clearly progressed between her Cold Read and her Warm Read. In the first read, she made 4 errors. We took the time to go over them, decode the words, discuss their meaning if necessary (especially with ‘turnstile’), and then I gave her individual work time to practice or reflect. On her second, warm read, she made only 1 error AND read faster! *NOTE: she did not repeat any of her initial miscues! In the end, I asked her to note any words that she felt were tricky for her and worth future practice. She chose ‘amid,’ which we have now decoded and made into a flash card. All of this took about 5-6 minutes total.
EACH and EVERY one of my students today said they *liked* this activity, and they felt it really helped. They asked me to please keep doing it, and to pick another interesting passage for tomorrow. They eagerly took on the challenge, enjoyed competing with themselves, and were thrilled to see their own progress. Yes – ACTUAL noticeable progress. I’m so glad we are incorporating this into our daily word study routine!
Okay, that’s an exaggeration. I know we can’t really stop using AIMSWeb MAZE. For the time being, it is the best option we have. But someone, PLEASE, build an alternative!
Let’s back up and do a quick overview of AIMSWeb MAZE, which you probably use in your school district if you have any kind of reading intervention program. AIMSWeb is a benchmarking and progress monitoring system used widely in schools across the nation. If a student is receiving an intervention, they are likely being monitored once a week with AIMSWeb to make sure they are on track and making gains. If they ARE making progress / gains, then this data is used to prove that the intervention is working. If they are NOT making progress / gains, then this data is used as evidence that the child may need a different (form of the) intervention.
Weekly data points are necessary to make sure everyone is staying on track. Think of it like weighing yourself every week you are on a diet. You don’t want to weigh yourself daily, because you will likely see too much variation. But on a weekly basis, you are more likely to see an accurate trend over time.
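To make the trend idea concrete, here is a minimal sketch (the function name is my own, and the scores are invented for illustration) of fitting a least-squares slope to a series of weekly CWPM scores. The slope is the average CWPM gained per week; a nearly flat graph like the ones described above would have a slope close to zero:

```python
def weekly_trend(scores: list[float]) -> float:
    """Least-squares slope of weekly scores: average points gained per week.

    A flat progress-monitoring graph yields a slope near 0.
    """
    n = len(scores)
    mean_x = (n - 1) / 2                      # weeks are 0, 1, ..., n-1
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

A student gaining two correct words per minute each week (e.g., 50, 52, 54, 56) has a slope of 2.0, while identical scores week after week yield 0.0.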
To track student progress in reading, AIMSWeb offers two types of progress monitoring tools. R-CBM tracks a student’s progress in reading fluency; the student reads out loud for 60 seconds, and we would record the number of correct words per minute (CWPM) and errors.
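The R-CBM arithmetic itself is simple: because the probe lasts exactly 60 seconds, CWPM is just the total words read minus the errors. A small sketch (the function name is hypothetical), generalized in case the timing ever differs:

```python
def cwpm(words_read: int, errors: int, seconds: int = 60) -> float:
    """Correct Words Per Minute for a timed oral reading probe.

    With the standard 60-second probe this reduces to words_read - errors.
    """
    correct = words_read - errors
    return correct * 60 / seconds
```

So a student who reads 85 words in 60 seconds with 4 errors scores 81 CWPM.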
MAZE, on the other hand, is a tool used to measure comprehension. Comprehension is a much more complex construct than fluency. The following information is taken from the Pearson AIMSWeb Training workbook:
“Maze is a multiple-choice cloze task that students complete while reading silently. The first sentence of a 150-400 word passage is left intact. Thereafter, every 7th word is replaced with three words inside parenthesis. One of the words is the exact one from the original passage.”
The students in an intervention take one AIMSWeb MAZE probe every week. They are given 3 minutes to circle as many correct responses as they can.
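To make the probe format concrete, here is a rough sketch of how a MAZE-style item could be built from a passage, following the every-7th-word rule quoted above. This is not Pearson’s actual algorithm: the distractors here are sampled at random from a supplied pool (rather than chosen as near/far distractors), the sentence split is naive, and all names are my own:

```python
import random

def make_maze(passage: str, distractor_pool: list[str], seed: int = 0) -> str:
    """Sketch of a MAZE-style probe: the first sentence is left intact,
    then every 7th word is replaced with three parenthesized choices
    (the original word plus two distractors, shuffled)."""
    rng = random.Random(seed)
    # Naive split: everything up to the first ". " is the intact sentence.
    first, _, rest = passage.partition(". ")
    words = rest.split()
    for i in range(6, len(words), 7):          # every 7th word of the remainder
        choices = [words[i]] + rng.sample(distractor_pool, 2)
        rng.shuffle(choices)
        words[i] = "(" + ", ".join(choices) + ")"
    return first + ". " + " ".join(words)
```

Running this on any passage yields text like the examples quoted below, with one parenthesized three-way choice per seven words after the opening sentence.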
As the title of this post suggests, I am not a fan of MAZE. As part of my teacher education, I am trained to be skeptical of any assessment tool and analyze it for purpose, benefits, and limitations. When I see that a tool claims to assess comprehension, I am even more critical and skeptical. I know that comprehension is a complex, multifaceted construct that is nearly impossible to assess in multiple choice or scantron format. Comprehension should be a conversation, not an assessment. This isn’t a critique of just AIMSWeb, but of all tools that claim they can assess comprehension in just a few short, simple questions.
Unfortunately, I see only one benefit to MAZE: it is easy to score. The following are the limitations as I see them with the AIMSWeb MAZE assessment.
1. The format is confusing for students and negatively impacts their performance.
Putting the response choices within the sentence (as opposed to a question at the end of the sentence) interrupts the flow of reading. The student has to pause and consider all 3 options right then and there. If they choose the incorrect option, it will negatively impact how they comprehend the rest of the sentence or passage.
Consider the following: “Most mornings he just tapped a (all, bit, more) of food into the bowl…(6.26)” In this example, I’ve had students get confused and circle two answers (‘bit more’) and read the sentence that way, because the words appear side by side in the line and made sense when the student read them out loud.
Now consider this example: “This is happening (in, to, for) you,” the fish proclaimed. (6.26) In this excerpt, I personally think there are two options that make sense (‘to’ and ‘for’).
2. Cloze reading is not a valid measure of comprehension.
Some common and effective comprehension assessment formats include text-dependent questions, inference questions, close reading questions, and questions about the text’s or author’s purpose. Never in the history of ever has cloze reading been considered an effective measure of comprehension. Cloze reading is a sentence-level, low-level skill that does not require the reader to build a mental model and comprehend the meaning of the text.
3. Comprehension should never be timed.
MAZE allows 3 minutes. Imagine what our students could prove they know and understand if given a bit more time. Think of how many 504s, IEPs, and student plans you have read that have included ‘extended time’ as an accommodation.
4. MAZE doesn’t align to instruction.
If your intervention is guiding students in adopting and applying reading strategies, as so many good comprehension interventions do, then MAZE won’t give you any usable information to modify your instruction to meet student needs.
5. You don’t need to comprehend the passage to ‘pass’ MAZE.
Students don’t actually have to comprehend the passage to answer the questions. If they have an adequate grasp of English syntax and grammar, then they can simply answer the questions by selecting the correct part of speech or most sensible option for that sentence or even phrase.
6. The distractors are not appropriately challenging.
AIMSWeb claims that, of the 3 response options, 1 is the correct answer, 1 is a near distractor (a word of the same ‘type’ or part of speech), and 1 is a far distractor (randomly selected, dissimilar part of speech).
Consider this example from 6.9: (Was, Jim, Day) stopped playing and ran to the (root, door, thus). Do you think ‘Day’ and ‘root’ were appropriately challenging near distractors? And, back to points 1 and 3 above, this tells me little to nothing about my students’ ability to comprehend the text, and gives me little to nothing to go on to adjust my instruction to help them improve their scores.
So in all, you can gather that I am not a very big fan of MAZE. And yet, it is a piece of data. ONE piece. We need to be triangulating student data and making sure never to make high-stakes decisions based on solitary data points. Consider classroom evidence, Fountas & Pinnell benchmarking, MAP or STAR scores, and any other data points you can to obtain a whole picture of a student.
I’d love to hear what you think of AIMSWeb MAZE. I understand that many, many teachers and administrators love it. What are your thoughts on this tool – good, bad, or other?