(The Chronicle of Higher Education) By Eric Hoover
Any student who sweats through the ACT or SAT has reason to ask why such examinations are even necessary. Some colleges, it seems, can offer a more convincing explanation than others.
Although most four-year institutions require standardized tests, only half (51 percent) measure how well test scores predict student success on their own campuses. Of those, 59 percent do so annually.
Those findings come from a new report by the National Association for College Admission Counseling, which surveyed more than 400 colleges to learn more about how its members use entrance exams — and evaluate their usefulness.
The report describes the prevalence of predictive validity studies, which gauge the correlation between admission criteria and specific outcomes, such as first-year grade-point averages. In short, the studies help colleges understand the extent to which their selection tools — grades and test scores — help forecast future performance.
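In rough terms, such a study is a correlation or regression analysis run on an institution's own records. The sketch below, in Python, illustrates the idea with made-up numbers; the column names, figures, and the grades-versus-grades-plus-scores comparison are purely illustrative and are not drawn from the NACAC report or any particular campus.

# A minimal sketch of a predictive validity study, assuming a hypothetical
# roster of admitted students with SAT scores, high-school GPAs, and
# first-year college GPAs (all names and numbers are illustrative).
import numpy as np
import pandas as pd

students = pd.DataFrame({
    "sat_total":    [1180, 1320, 1050, 1410, 1260, 990, 1350, 1120],
    "hs_gpa":       [3.4, 3.8, 3.1, 3.9, 3.6, 2.9, 3.7, 3.3],
    "first_yr_gpa": [2.9, 3.5, 2.7, 3.8, 3.2, 2.5, 3.6, 3.0],
})

# Simple correlations: how strongly does each criterion track first-year GPA?
for predictor in ["sat_total", "hs_gpa"]:
    r = np.corrcoef(students[predictor], students["first_yr_gpa"])[0, 1]
    print(f"{predictor}: r = {r:.2f}")

# Incremental validity: does the test add predictive power beyond grades alone?
# Compare R^2 of a grades-only model with a grades-plus-test-score model.
X_grades = np.column_stack([np.ones(len(students)), students["hs_gpa"]])
X_both = np.column_stack([X_grades, students["sat_total"]])
y = students["first_yr_gpa"].to_numpy()

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

print(f"R^2, grades only:         {r_squared(X_grades, y):.2f}")
print(f"R^2, grades + test score: {r_squared(X_both, y):.2f}")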
One might ask why more colleges don’t perform such predictive validity studies. Limited staff and funding are barriers, the association says.
NACAC has rung the same bell before. In 2008 a panel convened by the association released a sweeping report urging colleges to scrutinize their testing requirements. The panel, which included several prominent admissions officials and college counselors, encouraged institutions to determine whether test scores added enough value to justify the insistence on testing. To that end, the report said, admissions offices should rely on internal validity studies — and not on national data compiled by testing companies, or on tradition.
Sure enough, many colleges have long mined internal statistics to isolate the predictive power of test scores. Over the years such inquiries have convinced some institutions that they could make sound admissions decisions without the ACT or SAT. Yet other colleges, after plumbing their data, have concluded that tests provide too much valuable information to stop requiring them.
Whatever policies colleges embrace, Mr. Hawkins says, it’s important to know what their own numbers say. After all, institutions differ in many ways — mission, diversity, selectivity — that might affect the results of validity studies.
How colleges conduct those studies varies from campus to campus, the new report says. Eleven of the institutions surveyed shared their research with NACAC. Their studies reveal a diversity of institutional circumstances and motivations.
One college, for instance, found a strong correlation between the old SAT’s critical-reading section and college grades; four others found that the writing test had greater predictive value.
Overall, high-school grades are “by far the most significant predictor” of achievement in college, the report says. Test scores also help institutions predict academic performance — but to varying degrees.
The need for institution-specific research, the report suggests, is growing: “Recent changes in the content of the SAT, increased use of the SAT and ACT as high-school assessment instruments, and the changing demographics of students who take the tests could all affect the predictive validity of test scores.”
Perhaps most important, digging into data also might help colleges evaluate the fairness of their admission requirements. “If we acknowledge the broader educational inequities in our society, and if the tests aren’t giving you much beyond what grades are already giving you,” Mr. Hawkins says, “you might ask if the tests are amplifying those inequalities.”
Students may never like those high-stakes exams, but as long as the tests are being administered, colleges should know as much as they can about what they predict.