Keeping score
July 25, 2013
     
    I’m a teacher, but the people who run America’s schools prefer the title “educator.” There are two surefire ways to determine if you’re in the presence of an educator. First, inquire if he has any recent, real, year-after-year experience in a classroom. An educator will either stare blankly or mumble something about working with gifted students a few decades ago. Second, drop the word “data” into the conversation. If his ears prick up, he’s an educator.

Educators love to talk about “what the research tells us,” even though what passes for education research would get laughed out of any conversation with real scientists. The American Educational Research Association’s annual convention recently featured “data poems,” where educators could recite in verse what they thought they’d discovered.

    See what I mean?

    Of course, some educators realize there’s more to research than what you can pack into a sonnet. Every recommendation they make is “data-driven.” No matter how pipe-dreamy or nakedly ridiculous it might be, they’ve got numbers to back it up.

The question is where these numbers come from and what they’re worth.

    Experts cite data to prove conclusively that preschool students should use computers, and that they shouldn’t, that schools shouldn’t group students by ability, and that they should, and that self-esteem improves student achievement, and that it doesn’t. Experts use assessment data to prove that schools are failing, and that they aren’t.

Student assessment used to rest mostly on teacher-graded classroom work and a little on multiple-choice standardized tests. Experts didn’t like this system. In their judgment “authentic” classroom grades weren’t standardized enough, and standardized tests weren’t authentic enough. To provide “meaningful data,” they devised scoring rubrics for non-multiple-choice standardized tests. Rubrics predate the Clinton administration, they’ve flourished under No Child Left Behind, and they’re the expensive, beating heart of the new Common Core.

    To say that rubric-scored assessments haven’t been a smashing success would be a considerable understatement. We’re talking about decades of embarrassingly epidemic “scoring errors” and retractions nationwide, resulting in students being retained when they should have been promoted, schools rated as failing when they weren’t, and entire years of expensive data discarded, or published despite experts’ concessions that they should have been discarded.

    We’re talking about data so meaningless that the RAND Corp. warned that contemporary assessments weren’t identifying “good” and “bad schools,” just “lucky and unlucky schools.” A Brookings study found that “50 to 80 percent” of the fluctuation in schools’ annual average scores was “temporary” and “had nothing to do with long-term changes in learning.”

    Testing contractors insist their newest “next generation assessments” have “significantly improv[ed] upon” their previous newest assessments, which is why we should give them even more public school money than we’ve been giving them.

    Existing rubrics like New England’s NECAP test require hastily trained scorers, who typically aren’t teachers or often even college graduates, to discern whether a student’s writing has a “general purpose” or an “evident purpose.” Does it have a “strong focus” or a focus that’s “maintained throughout”? Is it “intentionally organized” or “well-organized and coherent”?

    You choose between these ambiguous alternatives, and then tell me your score is data-worthy and meaningful.

    New York’s rubrics split hairs between answers that “develop ideas clearly and fully” and those that “develop ideas clearly and consistently.” How about the difference between “precise and engaging” language and “fluent and original” language? Try distinguishing between “partial control” and “emerging control” of punctuation, or errors that “hinder comprehension” and those that “make comprehension difficult.”

    Does your student’s answer “establish a connection” or “an integral connection”? Does it employ “appropriate sentence patterns” or “effective sentence structure”? Are the details “in depth,” or are they “elaborated”? Does the writing “ramble,” or does it “meander”?

    Each of these subjective coin flips changes the student’s score, typically by 25 percent. Remember that the next time you read about which schools passed and which ones failed.

    Assessment officials claim their new Common Core scoring rubrics eliminate all those glaring ambiguities. Smarter Balanced, the Common Core assessment consortium to which Vermont belongs, only expects scorers to differentiate, with data-worthy accuracy, between “uneven, cursory support/evidence” and “minimal support/evidence.” Is the vocabulary “clearly appropriate” or “generally appropriate”? Does the writing use sources “adequately” or “effectively”? Are there “frequent errors” that “may obscure the meaning,” or do “frequent and severe” errors “often obscure” the meaning?

    I’m glad we straightened all that out with statistical precision.

Now all we have to do to gather that crop of meaningless data is administer the Common Core assessments, which, according to the timetable, students will be taking in 2014. Never mind that the tests haven’t been written yet, and that 2014 doesn’t allow anything close to sufficient time for field testing. We’ll only be using these tests to judge the success and failure of America’s students and our public school system.

    That includes Vermont’s students and schools.

    Never mind that the tests will be administered online when almost all schools lack sufficient technology. The Common Core’s corporate sponsors stand ready to sell us all the computers and software we need.

Never mind that pilot testing thus far has been plagued by “widespread technical failures,” raising what Education Week described as “serious concerns.” Never mind that these “snafus” involved a rogues’ gallery of leading Common Core assessment contractors, including ACT, CTB/McGraw-Hill, and Pearson.

Never mind that these “derailments” are prompting officials and legislatures in many states to reconsider their participation in the Common Core program, or that in a recent survey of Washington “insiders” — present and former officials from Congress, the White House, and the Department of Education — three-fourths concluded that Smarter Balanced, Vermont’s assessment administrator, was on the “wrong track.”

    Yes, these tests produce numbers, but they’re expensive numbers with no meaning. And if the dollar cost weren’t exorbitant enough, the tests that produce these worthless numbers devour hours, days, and weeks of instructional time. The drain on time and resources forced on schools by the Common Core’s assessment regime will make No Child Left Behind’s obsessive testing seem light and trifling.

    Then based on their “data,” the unblushing expert “educators” who have mismanaged public education for 40 years will impose more benighted sanctions and consequences on the schools that belong to us. Citizens, parents, and teachers will lose more control, and we’ll face even greater pressure to toe the wrongheaded reform line that has for too long crippled schools.

    Our schools and students need many things, but more “data” isn’t one of them.



    Peter Berger teaches English at Weathersfield School. Poor Elijah would be pleased to answer letters addressed to him in care of the editor.