I just completed a course in education research. Well, technically I haven’t completed it, because my main project, if it’s accepted by the instructor, can’t even take place until this summer, but in any case the course is over.
Here’s what I’ve learned: all data is bogus.
I know you’ll find this difficult to believe, but scientific research can’t seem to pin down what works and doesn’t work in our schools. “Smaller class size,” says the Kentucky study. “Not really,” says a study from London. “Accelerated Reader,” says Renaissance Learning and all its ‘research institute’ fronts. “Not likely,” says other studies.
“Read to your kids,” says all kinds of studies. “Nope,” says a study released today by the feds, which finds that nothing parents do makes as much difference as how much money they make and how much education they got before having children.
Well.
What’s the deal here? Big Pharma does this all the time: control group, test group, crunch the numbers, and hey presto! reliable data. And Vioxx.
So why can’t education do the same thing? This is an easy one: they can’t control the variables. Ever. In any way. Sure, you can “take them into account using statistical methods,” like chicken feathers and eye of newt, I suppose, but the problem there is garbage in, garbage out.
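Don’t take my word for it. Here’s a toy simulation (everything invented, no real data; “income” and “program” are just placeholders): a program with zero true effect, a hidden driver like family income, and a sloppy “control” variable standing in for it. Adjust statistically for the sloppy proxy and the numbers still conjure a program effect out of nothing.

```python
# Toy simulation of "controlling for" a badly measured variable.
# All numbers invented; "income" and "program" are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(0, 1, n)                    # the real driver (hidden)
# Richer families enroll more often; the program itself does nothing.
in_program = (income + rng.normal(0, 1, n) > 0).astype(float)
score = 10 + 3 * income + rng.normal(0, 1, n)   # true program effect: zero

proxy = income + rng.normal(0, 2, n)            # the sloppy "control" variable

# Regress score on program membership, "taking income into account"
# via the noisy proxy (ordinary least squares).
X = np.column_stack([np.ones(n), in_program, proxy])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"estimated program effect: {coef[1]:+.2f} (true effect: 0.00)")
```

Run it and the regression prints a comfortably positive “program effect” for a program that, by construction, does absolutely nothing. That’s the eye of newt at work.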
However, there is a bigger problem with educational research, and that is measuring results. Scratch any of these studies and you’ll find they’re all about the same thing: increasing student achievement.
Quick: what is “achievement”?
You see the problem. Even if we all agreed that “student achievement” is properly measured by the standardized tests we have or might develop (and we don’t agree, by the way), the problem remains that the variables going into standardized test results are just as squirrelly and uncontrollable as the ones skewing the study itself.
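To put numbers on that squirrelliness, here’s one more toy sketch (again, all invented): one program, two equally defensible tests of “achievement.” If the program trades open-ended reasoning for drill, the drill-heavy test declares victory and the reasoning-heavy test declares failure, and neither one is lying.

```python
# Toy sketch: one program, two defensible "achievement" tests.
# All numbers invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

treated = rng.integers(0, 2, n).astype(float)

# Suppose (pure assumption) the program drills computation (+0.4)
# at the expense of open-ended problem solving (-0.3).
computation = rng.normal(0, 1, n) + 0.4 * treated
problem_solving = rng.normal(0, 1, n) - 0.3 * treated

test_a = 0.8 * computation + 0.2 * problem_solving  # drill-heavy test
test_b = 0.2 * computation + 0.8 * problem_solving  # reasoning-heavy test

for name, test in (("Test A", test_a), ("Test B", test_b)):
    gap = test[treated == 1].mean() - test[treated == 0].mean()
    print(f"{name}: program 'effect' = {gap:+.2f}")
```

Same students, same program, opposite headlines, depending entirely on which definition of achievement you bought.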
Here’s a direct quote from the horrible, horrible textbook from the course which just ended: “Of course, if the mechanisms underlying the creation of academic achievement were understood completely, and if each of the variables was measured well, then a longitudinal survey… could provide adequate information on causal effects.” [Haertel, G. D., & Means, B. (Eds.). (2003). Evaluating educational technologies: Effective research designs for improving learning. pp. 196-197]
This, of course, is the classic Ham & Egg routine from vaudeville: “If we had any ham, we could have ham and eggs, if we had any eggs.” But nobody’s laughing, somehow.
Until we all agree on what achievement is, and until we have a universal standard for it and reliable ways to measure it, all educational research must be regarded with suspicion.
Doesn’t some of the problem arise not from an INABILITY to control the variables, but from an UNWILLINGNESS to control them? As with most organic reactions, the introduction of new variables, such as (insert latest Kalifornia educational fad here), into the education process takes time to produce results. By the time we are ready to start evaluating those results, we have abandoned (insert latest Kalifornia educational fad here) and moved on to (insert even more recent Kalifornia educational fad here) instead.
I absolutely respect the complexity of the kind of multivariate equation that the education process represents. Your Vioxx example was apropos. Human beings have a frustrating habit of screwing up your control and experimental conditions. But should that excuse us from trying to improve the quality of outcomes by the best means possible? No: we should keep doing drug trials, and we should keep trying to find ways to measure educational success.
And yes, I heard and absolutely agree with your point regarding agreement on the definition of success. A good friend recently said “Our job is to produce students who can read, write, perform arithmetic, solve problems, find and use information, and act honorably.” With the exception of the last three words, I would have to agree with his standard.
The variables that need to be controlled in order to produce “scientific research,” as the only President we have calls it, aren’t the latest fads. Those are the things we’re trying to test for. The variables are things like class size, teacher ability, teacher commitment, student preparedness, administrative support, student home support, continued or adequate funding for the innovations, student attendance, school scheduling, and on and on.
As for “act honorably,” it’s less controversial than you think. Act with integrity, respect others, work hard. And question authority.