Sweeping Changes in Accreditation
While the specific standards implemented by regional accreditation agencies vary, the overwhelming thrust behind recent changes is an across-the-board shift toward a focus on student learning.
By Karla Hignite
Accreditation criteria continue to evolve. Take, for example, revisions made by the Southern Association of Colleges and Schools (SACS) Commission on Colleges. In 2002, the commission reduced the number of compliance issues to which institutions must respond, from more than 400 criteria to 78 principles. This significant revamp was based largely on suggestions from member institutions to focus more on education quality and student access to services.
One area in which SACS has been ahead of the curve for more than 25 years is that of requiring student learning outcomes for general education courses to be included in the accreditation report. Beyond this, SACS institutions must incorporate educational outcome information for specific majors. The reason: to help evaluate how well the college or university prepares students in writing skills, regardless of their majors.
Rethinking and Revising
SACS is not alone in considering alternative criteria and characteristics on which to base accreditation. Other regional accreditation agencies are following suit in their own ways. (To read more about other changes in accreditation, and how the role of the business officer comes into play, see the February 2008 Business Officer article "Proving That Your Outputs Count.")
Middle States Commission on Higher Education. In 1999, Peter Burnham, president of Brookdale Community College, Lincroft, New Jersey, was asked by MSCHE to chair a special task force. The group was charged with revising the commission's characteristics of excellence, which form the core document that articulates standards for all higher education institutions in the region. What emerged was a set of 14 standards, defining minimum expectations and calling for demonstrative evidence of compliance, along with supporting data.
To develop evaluation standards for colleges and universities with diverse characteristics, one important premise was to apply these standards according to the specific mission of an institution. "The qualifications of teaching faculty," says Burnham, "may vary greatly from Princeton to Brookdale Community College to Bryant and Stratton to St. John's, where the entire curriculum is centered on teaching the great books. That has put the onus on institutions to define how they meet each of the standards."
Accrediting Commission for Senior Colleges and Universities for the Western Association of Schools and Colleges. WASC began rethinking its accreditation standards in the mid-1990s. After a five-year process that was funded by more than $2 million in grants, the commission unveiled a new set of standards adopted in 2001. The modifications simplified the standards and created two core commitments: institutional capacity and educational effectiveness. The commission collapsed the previous process, which involved 9 standards and 268 sub-standards, to 4 standards and 41 criteria for review. It also imposed page limits and required responses to be more analytical.
In addition, the visit process was revised to a three-stage process. During the first stage, the institution develops a formal proposal of how it will use accreditation to address the core commitments. This allows each institution to conduct the review in the context of its mission. Two years later comes the capacity review, which focuses on operations and essentially assesses the institution's ability to deliver its programming. A review of educational effectiveness then follows 18 months later. This revised process focuses on educational outcomes relative to the institution's mission and serves to ensure that everyone involved in the process is on the same page.
Commission on Institutions of Higher Education of the New England Association of Schools and Colleges. New standards that NEASC put into effect as of 2006 reflect an expectation that institutions understand not only what students are learning, but also how they are learning, says Barbara Brittingham, commission director. Institutions are at an early stage of assessing whether certain metrics and quantitative data (similar to the kinds of financial ratios that chief financial officers have been tracking for years) can be useful for tracking improvement in student learning. The goal, says Brittingham, is to complement the richer qualitative information about student achievement.
Why the Shift?
"As an accreditation community, all the regional bodies assumed the fundamental view that we can no longer focus on quality without looking at outcomes," says Ralph A. Wolff, WASC's president and executive director. "And, we must require institutions to engage in learning outcomes assessment." This may require, for example, a greater emphasis on engagement of faculty at the ground level to help them understand not only how to construct a course syllabus differently but also how to look at the aggregate program. "It's one thing for a faculty member to say, 'Students do well in my class,'" explains Wolff. "The real issue is this: By the time they graduate, have students been able to integrate all that they've learned?"
Another way to articulate this shift in evaluation criteria is to think in terms of ensuring student learning rather than focusing solely on good teaching. "There is an assumption," says Steven Crow, president of the Higher Learning Commission of the North Central Association of Colleges and Schools (NCA-HLC), "that if you have good teaching, then learning takes place. Now we're saying that we need to know that learning has occurred."
Training for the Transition
To help institutions think and measure in these new ways, all the accrediting agencies have been developing workshop and training opportunities and beefing up annual meeting programming. For instance, NCA-HLC has developed an academy that offers member institutions a four-year sequence of events and interactions focused on advancing efforts to assess and improve student learning.
Similarly, NEASC, along with its other programming, conducts an annual self-study workshop as a tune-up for institutions preparing for their reviews. A key component of the session is to lay out a variety of approaches colleges and universities can take for assessing student success and to invite institutions to come up with their own approaches and metrics.
"Part of what we can do," says Brittingham, "is share with institutions what others are doing for those that want to compare notes." She points out the great value for institutions in looking beyond their own students and curricula. In doing so, they can see how others are assessing student success and achievement, whether through comparisons relating to faculty, scores on licensure exams, or comments from industry groups about what graduating students need to know.
Karla Hignite, Kaiserslautern, Germany, is a contributing editor to Business Officer.