Missives

The following text is a somewhat more verbose version of a letter I sent to a local paper. It is in response to an article concerning a local "alternative" school that is trying to avoid the standardized testing that was recently mandated in California.

The article focused on the "alternative" nature of the school, quoting lines like "[taking a standardized test] would be like forcing a square peg into a round hole" for students of the school. The article didn't address any of the fundamental problems with standardized tests, or the fact that this testing is a problem for all schools.

This has been the rule rather than the exception in the public discussion of standardized testing. People talk about "standards" and "tests" without discussing what constitutes a meaningful standard or test.

In the debate over standardized testing in our schools, Californians seem terribly interested in the abstract idea of standards, and uninterested in real tests, or their effects upon teaching. There is no public discussion of standardized testing in practice (rather than in theory). Your recent article about standardized tests was no exception ("Alternative school wants out of mandated testing", Jondi Gumz, Monday, March 15). It missed most of the major issues surrounding this subject.

The problems with standardized tests are not limited to "alternative" schools, and are not primarily about vague concepts such as treating children as "individuals". Rather, the problems lie in the questions "What do they measure?", and "What effect do these tests have upon teaching?"

The answers are not what you might expect. Largely, we don't know what they measure, and the effect of standardized testing is frequently to worsen the quality of teaching -- for all schools, not just "alternative" schools. Doing well on a standardized test is not at all the same as having a good understanding of a subject. When standardized test scores become the primary goal, effective teaching goes out the window.

For a concrete example, consider the SAT. Originally the "A" stood for aptitude -- an innate ability for a subject. It was even suggested that since it measured something innate, you couldn't prepare for the test. If you were bad at math, well, you were just that way. No amount of studying would change your score. The success of various test-prep programs has demonstrated that this is nonsense; you can prepare for the SAT, and it doesn't measure anything innate.

Now the "A" stands for achievement, implying that the (same) test now measures a student's understanding of a particular subject. However, the ever-successful test-prep programs don't teach the subjects which the test supposedly measures. Instead, they teach how to take standardized tests. For example, students are taught to avoid algebra when solving the algebra questions. "Algebra is where you make mistakes", they are told; it's more effective to learn to eliminate certain answers, and to use other cookbook techniques which require no algebra. Understanding algebra is irrelevant to scoring well. This is a limitation of standardized "bubble" tests, and of any assessment which attempts to quantify the abilities of a wide variety of people through a highly stereotyped activity lasting perhaps a couple of hours.

It will come as no great surprise if the "A" soon stands for "ambiguous" or "arbitrary". The only thing we're certain the SAT measures is socio-economic status.

One might argue this isn't an issue as long as schools teach the subject, and not test-taking strategies. However, if the focus is on improving test scores (rather than, say, education), students will figure this out and shift their attention from learning to test strategy.

Teachers, too, will be pressured to teach strategy. When performed on a state (or national) level, published test results (in your article, for example) inevitably end up being used inappropriately by admissions boards, by school officials, by politicians... even by real estate agents, who pitch neighborhoods based on average test scores. This creates financial pressures for schools, and a strong temptation for teachers to teach to the test, not to the subject. Their already pathetic salaries may be at stake.

One reason standardized tests fail is norm referencing. Many standardized tests (such as the SAT) are norm-referenced, which means that students are measured against each other and fit to a normal (Gaussian) distribution. This practice leads to many problems. Among them, it makes it difficult to test the fundamental concepts every student should know (e.g. those in the curriculum), and it encourages questions on obscure or irrelevant topics. A question which nearly every student can answer does not produce a normal distribution. Such questions are therefore thrown out, in favor of questions which are answered by 40-60% of students. If schools are doing their jobs, norm-referenced tests become increasingly obscure in their subject matter. Differences in test scores become no smaller but are increasingly meaningless. These tests do not measure whether the student knows the curriculum, and they do not measure whether the school is improving.
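
The item-selection logic described above can be sketched in a few lines of code. The questions and pass rates below are made up for illustration; this is not how any actual test publisher's software works, just the selection rule itself:

```python
# Sketch of norm-referenced item selection (hypothetical data).
# Each candidate question has a "p-value": the fraction of
# students who answered it correctly in a pilot test.
candidate_items = {
    "add two fractions":      0.92,  # nearly everyone gets it
    "basic percent problem":  0.85,
    "obscure number trick":   0.55,
    "ambiguous word problem": 0.48,
    "trivia-like edge case":  0.41,
}

# Questions answered correctly by nearly all students don't spread
# scores into a bell curve, so they are discarded in favor of items
# in roughly the 40-60% band.
kept = {q: p for q, p in candidate_items.items() if 0.40 <= p <= 0.60}

print("kept for the test:", sorted(kept))
```

Note that the fundamental curriculum items -- exactly the ones every student should know -- are the ones dropped.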

Educators have noted that norm-referenced tests actually go through cycles over several years. The temptation of teaching to a test forces the test authors to change its focus year to year, in order to retain a normal distribution. In practice, then, norm-referenced tests become a measure of how closely teachers follow the latest fad of the test writers. Like other fads, eventually they repeat themselves.

Norm-referenced tests are primarily used for ranking people -- which isn't the purpose of schools -- and then fail even to rank them in a meaningful way. As Alfie Kohn noted at a recent ASCD meeting, many people are shocked to learn that 50% of students score below average on norm-referenced tests (this is true by definition).
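
Kohn's point can be checked directly: a norm-referenced scale puts the "average" at the middle of the distribution, so half of all scores fall below it by construction, no matter how well everyone actually learned. A toy illustration with made-up scores:

```python
import random
import statistics

# Hypothetical norm-referenced scores for 1,000 students,
# drawn from a bell curve centered at 500.
random.seed(0)
scores = [random.gauss(500, 100) for _ in range(1000)]

median = statistics.median(scores)
below = sum(s < median for s in scores)

# Exactly half the students fall below the median -- by construction,
# not because of anything the schools did.
print(below / len(scores))  # → 0.5
```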

An alternative to norm referencing is criterion referencing. Criterion-referenced tests measure the student against a set of criteria, not against other students. That is, they focus more directly on whether a student has learned a core curriculum. When students are measured against a well-defined standard, it allows for the possibility that most students will perform well.
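
The contrast with norm referencing can be made concrete. In a criterion-referenced scheme each student is compared to a fixed standard, so everyone can meet it. The cutoff and class scores below are invented for illustration:

```python
# Sketch of criterion-referenced grading (hypothetical cutoff).
CUTOFF = 70  # mastery threshold: 70% of curriculum items correct

scores = [95, 88, 74, 91, 82, 77]  # made-up class results

# Each student is judged against the standard, not ranked against
# classmates -- so it is possible for the whole class to pass.
results = ["meets standard" if s >= CUTOFF else "needs work" for s in scores]

print(results.count("meets standard"), "of", len(scores), "meet the standard")
```

Under norm referencing, by contrast, half of this same class would necessarily be labeled "below average".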

This does, unfortunately, cut against our national obsession with rank -- if everyone knows the core curriculum, you can't claim Johnny is smarter than the rest of the class. The desire to somehow capture the abilities of a person in a single number is currently very strong in our culture, though there is little evidence to suggest it is even possible. While high standardized test scores make it easy for a parent, teacher, school board member, city council member, or politician to look down their noses at "lesser" communities, they do not indicate more effective teaching.

More meaningful long-term assessments can be implemented locally by schools or school districts (for example, portfolio assessments or performance assessments). These might include standardized tests (criterion-referenced), but would not rely on them. Local and long-term assessments provide more comprehensive information, are less prone to misuse outside the classroom, and are not as apt to quash effective teaching. A more useful state mandate might require each school to adopt or develop its own assessments -- to document its curriculum and assess its progress in teaching it.

What every community must ask itself is "Do we want our children to attend a school or a test-prep program?"

The Gault Open School should be supported in its efforts to maintain meaningful standards. I hope other schools will follow suit.

For anyone unfamiliar with our rich history of misuse of standardized tests, I highly recommend Stephen Jay Gould's book The Mismeasure of Man, which is primarily about IQ testing.

To anyone interested in how standardized tests affect teaching I recommend visiting the following web site: http://www.alfiekohn.org/teaching/standards.htm

bcboy@thecraftstudio.com
This page is copyrighted by the author.