On the agenda at tonight’s (08/13/2012) Madison Metropolitan School District Board of Education Student Achievement and Performance Monitoring Committee meeting (5:00 PM, rm 103, Doyle Building) is a presentation on the first-year Measures of Academic Progress (MAP) scores in MMSD. Explicitly and implicitly, this presentation makes assumptions about test scores, cut scores, standards and achievement that are both wrong and dangerous, creating what I am calling the Lake Gonetowoe Effect, the inverse of the Lake Wobegon Effect, which posits that “all the children are above average.” In Madison we’ve decided that only half the students in the nation are “proficient,” while retaining the idea that all of our students should be “above average.” The worst of both worlds.
Matt DeFour’s State Journal story on the MAP scores emphasized that these test scores offer another way to document achievement and gaps in the district. That’s not what this is about, but a few words before moving on. Whether they are a better or more accurate measure than the ones used previously is an open question. MAP is designed as a diagnostic, to be used to help teachers better identify their students’ weaknesses. From my conversations with teachers, it appears that little or no professional development was done prior to implementation in MMSD. Unlike Kansas City, for example, where “teachers in the district” were reported “drilling students for the test… practicing like a team would before a big game,” in Madison the tests stood largely outside of instructional practices. This makes a difference, especially since changes in scores from Fall to Spring are a big part of the report. If other districts are using the Fall results to “teach to the test” in preparation for the Spring tests and we aren’t, then it would be expected that MMSD students would show less change. Some more (and different) critiques of MAP here.
I also need to insert the usual caveats about all standardized tests being of limited utility in understanding students, their teachers, their schools and their districts.
Both the grade-level benchmark scores and the growth measures in the MMSD MAP presentation are based on the national sample of MAP test takers and are “normed” to match the demographics and school characteristics of the United States school population as a whole. The demographics and school characteristics used to norm differ from those found in Madison, and differ in ways that are associated with lower achievement, yet there seems to be a sense that our students should outperform the national norms. There are no published national MAP mean scores broken out by subgroup, but this from a MAP pilot in Montgomery County (MD) Public Schools has some interesting data to look at by way of comparison (not directly comparable with the MMSD presentation, since different measures were used). Outperforming the norms is certainly something you want to work for, but it also leads to unrealistic expectations. At the national level, a majority of students, much less “all students,” cannot, by definition, be “above average.” To expect a majority of students in MMSD to be above average doesn’t help in any way. High expectations are one thing when used in a classroom to motivate and inspire students; they are something else altogether when analyzing data and making policy.
This conflation of high expectations in the classroom with higher cut scores on standardized assessments has led to the Lake Gonetowoe Effect on display in the MMSD MAP presentation. The explicit move in this direction comes in the section comparing MAP to the WKCE:
Comparing MAP to WKCE. Proficiency bands of advanced-proficient-basic-minimal for WKCE are established by DPI. To provide a comparable look at results, similar proficiency bands are calculated for MAP by MMSD staff. The national mean is used to mark the difference between Basic and Proficient. Students who are more than one standard deviation above the average are at the Advanced level. Students who are more than one standard deviation below are at the Minimal level.
I’m going to leave the parts about setting other cut scores via one standard deviation aside in order to highlight the definition of proficient as equal to or above the score attained by exactly one half of the normed national sample. With that definition they label 1/2 of the nation’s (and more than 1/2 of MMSD’s) students as failures. And this isn’t based on some platonic ideal of what students should know, it is an absolutely subjective and even arbitrary choice (all cut scores are subjective, but few seem this arbitrary). The weird thing is that the people who produced MAP have done sophisticated alignments of achievement levels to various state standards and tests, including the WKCE, so this wasn’t necessary.
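To make concrete just how mechanical the quoted banding rule is, here is a minimal sketch of it in Python. The mean and standard deviation figures are placeholders I made up for illustration, not actual MAP norms, and `map_band` is a hypothetical name, not anything from the MMSD presentation or NWEA.

```python
# Sketch of the banding rule quoted above (hypothetical numbers, not
# actual MAP norms): the national mean splits Basic from Proficient;
# one standard deviation above/below marks Advanced/Minimal.

def map_band(score: float, national_mean: float, national_sd: float) -> str:
    """Assign a WKCE-style proficiency band to a score under the quoted rule."""
    if score > national_mean + national_sd:
        return "Advanced"
    if score >= national_mean:
        return "Proficient"
    if score >= national_mean - national_sd:
        return "Basic"
    return "Minimal"

# Made-up norm values for illustration: mean 220, SD 15.
print(map_band(240, 220, 15))  # Advanced   (above 235)
print(map_band(225, 220, 15))  # Proficient (at or above 220)
print(map_band(210, 220, 15))  # Basic      (at or above 205)
print(map_band(200, 220, 15))  # Minimal    (below 205)
```

Note that by construction, any student scoring below the national mean — that is, roughly half of the normed sample — lands in Basic or Minimal, which is exactly the problem discussed above.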
I think it is a reflection and extension of something larger, and potentially destructive (I don’t think this was the intent, but rather that those who prepared the presentation have internalized all of the reformy messages around cut scores and did this without thinking). The big idea seems to be that if we set cut scores for “proficient” at a level few students will attain, then somehow more students will attain that level in the future. Raising the bar via high cut scores does not help students learn. I guess it is easier than looking at the systematic inequality, or asking what resources are needed to help kids learn and then providing them. It certainly distracts from those kinds of things, and as a bonus plays into the “our schools are failing,” bash-the-teachers, bash-the-“status quo,” “burn the village in order to save it” mentality of many “reformers.”
This can also be seen in the adoption of the very problematic NAEP-based cut scores by DPI in the new Wisconsin “accountability” system. Many of the issues with the NAEP cut scores are detailed in the National Academy of Sciences publication, “Grading the Nation’s Report Card: Evaluating NAEP and Transforming the Assessment of Educational Progress,” especially chapter 5, “Setting Reasonable and Useful Performance Standards.” Read the whole thing. Here’s the money quote from the intro:
In addition, the public may misread the degree of consensus that actually exists about the performance standards and thus have undue confidence in the meaning of the results. Similarly, audiences for NAEP reports may not understand the judgmental basis underlying the standards. All of these false impressions could lead the public and policy makers to erroneous conclusions about the status and progress of education in this country.
Are you listening Chris Rickert? How about you, Superintendent Tony Evers? Good, while I have your attention, surf on over to Jay Bullock’s Using NAEP cut scores devastates, disserves our students to get the view from the classroom on the Lake Gonetowoe Effect.
I understand the problems with cut scores that are set so low that they are of little use in identifying varying degrees of achievement and create unearned good feelings. Many states did this in order to avoid the forced and unproductive reforms associated with NCLB sanctions. The pendulum appears to be swinging in the other direction, and we seem to be entering an era where cut scores are designed to inspire reformy jeremiads (if not actual learning). I hope our stay at Lake Gonetowoe is short, because it isn’t going to be pleasant or productive.
Thomas J. Mertz