True Colors

When University of Wisconsin-Eau Claire leaders set out to assess program priorities, they anticipated resistance to changing the status quo. A color-coded evaluation process helped the campus begin to change that culture.

By MJ Brukardt, Stephanie Jamelske, and Andrew T. Phillips

We think of the University of Wisconsin–Eau Claire as an institution of well-intentioned contradictions. As a campus community dedicated to student success, we struggle to meet the very real challenges of a volatile economy, declining state support, and demands for improved learning outcomes. Proud of our history of academic innovation and high quality, we are nonetheless highly averse to change, even when we know it is possible or necessary.

Similarly, we're quite accomplished at policy making, creating a rule for every possible circumstance, and championing accountability to those rules. Yet many of us still view assessment with skepticism and consider "quality improvement" as jargon best reserved for the corporate world, not higher education. We've arrived at campuswide consensus on the need to invest in our academic strengths and do rigorous and meaningful program review, with the universal understanding, of course, that any new investments will be made in my department and the reallocation will come from yours!

In short, these contradictions, as well-intentioned as they were, reinforced a campus culture that could not embrace change easily. The status quo always seemed to rule.

So when we recently completed a highly collaborative strategic planning and reaccreditation process amid continuing budget cutbacks due to state funding shortfalls, we knew that our status-quo culture was no longer adequate for the challenges ahead. In an environment of dwindling resources, we'd need to set priorities and make tough decisions if we were to implement the new vision we'd created for ourselves: to be nothing less than the premier undergraduate learning community in the Upper Midwest. We needed a nimble, collaborative, transparent, evidence-based culture that enables such priority setting and decision making—from the department level to the executive suite. But how to get there?

Charged by our chancellor to assess our strengths in light of our new vision and then identify opportunities for improvement and reallocation, we embarked in the summer of 2008 on a campuswide, comprehensive program review. We called this one-time programmatic snapshot PEEQ—the Program to Evaluate and Enhance Quality—which would eventually include using colors to evaluate the extent to which the work of departments and units supported certain key criteria. In the process, we took the first steps in institutional change and are making steady progress toward the decision-making culture we seek.

Deliberate Design for Dialogue

We intentionally designed PEEQ to engage the campus in a conversation that would help us confirm our mission and vision, identify how each unit uniquely contributes to them, and provide a road map for making decisions about where to invest and where to reallocate to better serve our students. We wanted to set vision-based priorities, and if the fiscal situation were to require reallocations or cuts, we wanted to have those priorities in place before a crisis would mandate such action.

When used in conjunction with integrated planning, budgeting, and assessment by leaders committed to change, PEEQ-style program review can be a powerful catalyst for quality improvement. The process requires strong administrative leadership coupled with broad faculty and staff engagement at all stages—a top-down and bottom-up approach. Our senior institutional leaders set the stage by calling for a yearlong, comprehensive program review immediately following the adoption of our new strategic vision. We integrated this into our existing annual report process so that departments and units were not asked to participate in a separate initiative. While PEEQ did require more work from chairs and directors, the one-time nature of the program review and the focus on involving every department and unit helped to mobilize involvement.

Building the Plane in the Air

A steering team of faculty and staff, chaired by the provost and staffed by the authors, spent the summer of 2008 beginning the creation of PEEQ. We quickly realized that completing a program review of this magnitude in just one academic year would force us to embrace and model the very culture change we'd set out to foster across campus. That is, our work had to be flexible and transparent and basically done on the fly, a process we described as not unlike "building a plane in the air"—creating, refining, and adjusting our process as we implemented it. We could not know how everything would work before the process began. We simply had to adapt and create as we went.

We based our process on several sources, including Robert Dickeson's bible of program review, Prioritizing Academic Programs and Services (Jossey-Bass, 2010); and materials adapted from Bemidji State University, Bemidji, Minnesota; Drake University, Des Moines; and Washington State University, Pullman.

As a result of our research, we decided that three core criteria would form the backbone of our review: mission, quality, and cost. A fourth dimension, an opportunity analysis, would not be formally evaluated, but rather would encourage input for change and improvement from within individual departments and units.

A concise list of 20 questions (intentionally worded to put the focus on providing evidence to support each query) for all academic departments—and 18 questions for all administrative and service units—would help inform the evaluation process. Some questions were the same for both groups; others were unique, based on academic or cocurricular focus. (See the sidebar, "Sample PEEQ Questions for Academic Departments," for a sampling of questions; visit the PEEQ Web site to access a complete list of all questions for academic and administrative departments, as well as sample evaluations.)

To illustrate our commitment to a culture of collaboration and transparency, we vetted all questions with the entire campus community, using a charrette process that invited faculty and staff to provide us with written feedback. Through the charrette process, for example, we learned that there was confusion about how to answer some of the cost questions and what constituted peer benchmarks. The feedback not only helped us improve the wording of the questions, but also helped us anticipate concerns that we could address with chairs and directors when we rolled out the final self-study report template.

We finalized the self-study report template, and the steering team distributed it to all department chairs and unit directors with the requirement that the final reports include data-supported answers and not exceed 13 pages in length. This encouraged concise responses (important for the evaluators who were volunteering their time) and, we hoped, reduced the temptation to embellish or spin the answers.

The questions required writers to defend all answers with actual data and evidence and were designed to facilitate department- and unitwide dialogue—serving as catalysts for conversations about what really mattered. We found that many departments used the review to do just that; they reported to us that for the first time they had in-depth debates about their mission, program quality, and even resource use and instructional costs. Because PEEQ questions sought to get at the core issues our university faced, the dialogue was meaningful and important. Some units involved students in the conversations, further enriching the discussion. Some departments, as you might expect, left the review to the chair to complete in isolation.

Evaluating With Color Wheels

Finding the right evaluation process was critical if we were to effect the kind of culture change we needed. Fortunately, the steering team found a promising approach shared by Bemidji State University's leadership at the Higher Learning Commission's annual meeting—a color spectrum, rather than numbers or rankings, for evaluating contributions to various university priorities.

We adopted the color spectrum in large part because, as Dickeson notes in the first edition of his book, "data do not substitute for sound judgments." As tempting as it may be to assign a number to every answer and a neat hierarchical ranking for each institutional offering, programmatic decisions cannot easily be reduced to rankings. We wanted to foster thoughtful judgments by informed institutional leadership, not provide a simplistic checklist of what to keep or eliminate. Using colors helped us do this by forcing reviewers and academic leaders alike to look at responses holistically and in context. Here's how it works.

Using a spectrum of five colors (see Figure 1), evaluators reviewed answers to all questions and assigned colors accordingly.

[Figure 1: The five-color evaluation spectrum]

While many were tempted to equate blue with bad and red with good, the strength of the color system was that the colors needed to be evaluated in the context of each unit or department. So, for example, in response to the question of how well the unit directly serves students, some units, such as institutional research, received a blue—not because they were performing inadequately, but because their mission did not require direct interactions with students.

Once evaluators assigned a color to each answer in the department and unit program reviews, we grouped the results into color wheels for each of the three criteria: mission, quality, and cost, as illustrated in Figure 2, a sample color wheel for "Quality." This enabled reviewers, chairs and directors, and administrators to see a holistic picture of each unit and to home in on any single question.
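
For readers who like to see the mechanics, the short sketch below is a hypothetical illustration in Python, not anything drawn from PEEQ's actual tools or data; it simply shows how question-level color assignments can be grouped by criterion so that each wheel is read question by question rather than collapsed into a single score.

# A minimal sketch, for illustration only. The criteria labels are PEEQ's;
# the question IDs, colors, and data values below are made up.
from collections import Counter

# Each evaluated answer: (criterion, question id, assigned color).
evaluated_answers = [
    ("mission", "M1", "red"),
    ("mission", "M2", "orange"),
    ("quality", "Q1", "red"),
    ("quality", "Q2", "yellow"),
    ("cost", "C1", "blue"),
    ("cost", "C2", "green"),
]

def build_color_wheels(answers):
    """Group color assignments by criterion so each wheel can be read
    question by question rather than reduced to a single score."""
    wheels = {}
    for criterion, question, color in answers:
        wheels.setdefault(criterion, {})[question] = color
    return wheels

wheels = build_color_wheels(evaluated_answers)
for criterion, segments in wheels.items():
    # Print each wheel's segments plus a simple tally of its colors.
    print(criterion, segments, dict(Counter(segments.values())))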

By viewing the three color wheels together—and by knowing what each segment of each wheel represented—administrators could better evaluate high-cost programs that nonetheless delivered high-quality and mission-central instruction, for example. To view example evaluations for a department, showing the three criteria, visit the PEEQ Web site, and select "Sample Academic Department Color Wheel."

What the color wheel does not do is easily permit overarching conclusions, such as, "Department X is very good because it's so red," or comparative rankings between departments.

[Figure 2: Sample color wheel for the "Quality" criterion]

Peer Review Done Right

The evaluation team proved to be one of the greatest strengths of the PEEQ process, and the selection method and team dynamics turned out to be especially important.

Ultimately, it fell to a group of 24 dedicated faculty and staff members to create the evaluation rubrics for each question, read all the self-study reports (they totaled almost 1,100 pages of reading), assign evaluative colors, provide validating comments, and—when that was complete—grapple with recommendations for action. And do all that during a single semester.

Selecting the evaluation team. Evaluators were nominated from faculty and staff and selected by the provost and chancellor. We determined that a team of 24 (12 faculty and 12 staff) would provide broad representation, offer a wide range of expertise, and be sufficient to handle the large workload of reading and evaluating the self-study reports. Reflective of our commitment to develop a decision-making culture that was broad-based and transparent, team members represented all divisions and colleges and were nominated for their ability to transcend academic or unit silos to "think institutionally." The team did not include senior administrators or deans, although both groups were involved in providing insights and information to the team when requested.

Working together. Our "build-the-plane-in-the-air" approach to PEEQ proved especially valuable in regard to the evaluation team. While departments worked to complete their reports, the team collaborated to develop the evaluation criteria. Our goals for the team were to help members manage the volume of information they would need to review and to ensure that evaluations were as consistent as possible across the 45 units and 38 academic departments. We did this by employing the team in two different ways in two consecutive stages of review.

First, we assigned the 24 evaluators to eight groups of three and asked each group to evaluate only the questions for one of the three criteria: mission, quality, or cost. Each group became the "experts" on those specific questions and ensured that all associated answers were evaluated consistently across all departments and units.

Second, after all questions were evaluated in this cross-sectional approach, we reassigned the evaluation team members to six groups of four and distributed the 83 reports among them (13 or 14 reports per group). In this stage, we asked each group of four to review the first-stage evaluations, checking for consistency within each report and across the reports assigned to the group. This two-stage approach, which we also developed "on the fly" as we sought to balance efficiency of work (and workload) with campus concerns about fairness and consistency, dramatically increased confidence that every report would be fairly evaluated and fully considered from multiple viewpoints.
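
As a rough illustration of the arithmetic behind the two stages, the sketch below (hypothetical Python, with made-up names and a simple round-robin distribution rather than our actual assignment method) shows 24 evaluators forming eight groups of three for the first stage, six groups of four for the second, and 83 reports spread 13 or 14 to a group.

# Illustrative sketch only; names and the round-robin distribution are
# assumptions, not the actual PEEQ assignment procedure.

def chunk(items, size):
    """Split a list into consecutive groups of the given size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

evaluators = [f"evaluator_{n}" for n in range(1, 25)]  # 24 faculty and staff
reports = [f"report_{n}" for n in range(1, 84)]        # 83 self-study reports

stage_one_groups = chunk(evaluators, 3)   # 8 groups of 3, one criterion each
stage_two_groups = chunk(evaluators, 4)   # 6 groups of 4, whole-report review

# Distribute the 83 reports across the six stage-two groups (13 or 14 each).
assignments = {i: [] for i in range(len(stage_two_groups))}
for idx, report in enumerate(reports):
    assignments[idx % len(stage_two_groups)].append(report)

print(len(stage_one_groups), len(stage_two_groups),
      [len(group_reports) for group_reports in assignments.values()])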

Finally, each department and unit had the opportunity to review the preliminary results and provide written responses. Evaluation teams used those responses to correct any errors and refine comments.

Reporting out. When the consistency and response phase was complete, the evaluation team assembled the color assignments into color wheels. We then shared the evaluation reports, including both the color wheels and corresponding narratives, with each department or unit and also with the corresponding supervising college dean or divisional vice chancellor. Chairs and directors were encouraged to use the evaluations to guide program improvement, including their unit decision making, and many have done so. A department in the College of Education and Human Sciences, for example, is now restructuring its curriculum to improve student time to degree.

Recommending institutional priorities. We also convened the team for a three-day retreat to analyze the color wheels and make recommendations for university-level action. Team members transcended their departmental allegiances and engaged in a thoughtful, wide-ranging, and clear-eyed discussion of priorities and opportunities for change. It was both challenging and exhilarating. As a result, the evaluation team not only provided departments with a thoughtful analysis of their strengths and weaknesses, but also compiled more than 60 recommendations for institutional improvement, which the chancellor has adopted and published on his Web site. He continues to regularly update the campus on our PEEQ progress.

Additionally, PEEQ has empowered the evaluation team of 24 faculty and staff with a deep understanding of our university and our programs. Some of their institutional recommendations included:

  • Improving assessment efforts campuswide. (As a result, a new position for a director of assessment was created and the institutional research function enhanced.)
  • Eliminating or radically transforming our Honors Program. (We've reinvigorated the program, which is leading the way in attracting underrepresented students.)
  • Creating a new Center for Actuarial Science, an academic area of institutional strength. (We have redirected funding toward support of that field and its professional designation.)

We Built the Plane and More

We began PEEQ with the goal of institutional improvement, but in hindsight we accomplished something more important—culture change. To set the context, shortly after the PEEQ process concluded, we experienced a major reduction in state funding, and—for the first time—we were able to inform our budget priorities with the data gathered in PEEQ and other campus planning efforts. For example, rather than across-the-board reductions, we asked for deeper cuts from both the technology and facilities areas. While these cuts were deep and difficult, we were able to "protect" several vacant positions that might otherwise have been eliminated.

The following fall, using an integrated budget process, we assigned those positions to three of the top priority areas identified in PEEQ: an admissions counselor to recruit diverse student populations, a benefits coordinator assigned to returning military veteran students, and another position dedicated to institutional assessment.

We candidly admit that we have changed more than we expected but not as much as we would like. PEEQ was effective in prompting—perhaps forcing—a meaningful dialogue about our priorities and resources. That conversation continues today with a newly revised annual report process and a program review cycle, which have been built directly from the PEEQ criteria. The program helped us ask the questions that matter, use data to provide the answers, and talk openly about how to improve. That for us was a meaningful culture shift.

Immediately after we concluded PEEQ, a new provost arrived and used our various PEEQ reports to facilitate her initial conversations with each academic department. The recommendations also have informed her integrated planning, budgeting, and decision making. She used PEEQ, for example, to restructure academic affairs, creating a "student success network" that focuses on improving student advising and undergraduate learning.

PEEQ's focus on using data to drive decisions has changed the tenor of discussion at UW–Eau Claire. Justifying requests based on real data is now required for annual budgeting and all requests for program funding. Discussions about key performance indicators and statistical analyses of underperforming areas are more common now.

Colorful, challenging, frustrating, energizing, and inspiring, PEEQ has helped us continue to move forward to become the nimble, collaborative, transparent, and evidence-based university our vision demands.

Nevertheless, while we know the process has been effective in many ways, we need more practice in demonstrating to others the value of our work. The color wheels did help us provide a holistic picture of unit and department strengths, but in the process we discovered that faculty and staff too quickly focused on colors they perceived as negative—a reaction we suspect they would have had whether we used numbers, grades, or colors. While PEEQ opened doors to broad discussions about assessment, we have more work to do to embrace quality improvement across campus. But, we are well on our way.

MJ BRUKARDT is special assistant to the chancellor for strategic planning, and STEPHANIE JAMELSKE is budget officer for academic affairs at the University of Wisconsin-Eau Claire. ANDREW T. PHILLIPS, former associate vice chancellor for academic affairs and dean of graduate studies at the University of Wisconsin-Eau Claire, is academic dean and provost at the U.S. Naval Academy, Annapolis, Maryland.

Sample PEEQ Questions for Academic Departments

  • Mission: Provide evidence that the program's learning goals align with and support the UWEC liberal education learning goals.
  • Quality: Provide evidence that the program demonstrates and promotes equity, diversity, and inclusiveness in its hiring, recruitment and retention, curriculum, and pedagogy.
  • Cost: Provide evidence that the program is cost-effective, relative to the Delaware national benchmarks. Specifically comment on direct instructional expenditures per student credit hour.

Culture-Change Learning Curve

Applying a new process, as we did with the University of Wisconsin-Eau Claire's Program to Evaluate and Enhance Quality (PEEQ), requires a significant effort to effect institutional change. Here's what we learned as we took the first steps toward creating the kind of campus culture necessary to support new ways of doing things:

  1. When it comes to fostering change, realize that what you say is important, but what you do is the proof of your commitment. Modeling the change you seek is vital if it is to succeed. Our efforts to improve collaboration, engagement, transparency, and data-driven decisions were integral to our process and produced results.
  2. Find an evaluation tool that meets your goals. Color wheels, while not perfect, helped us to emphasize holistic assessment and unit-level discussion.
  3. Recognize that program review is scary. No matter how often you assure faculty and staff that the goal is enhanced quality, there will still be fear and skepticism. Consistent and continuous communication, while difficult, is imperative.
  4. Include top-down and bottom-up leadership when conducting program review. For example, to develop the PEEQ model, the chancellor and provost held us accountable for a timeline and supported the results, while allowing broad-based teams of faculty and staff to develop and implement the process. Peer review can work, with the right people on board, with careful and thoughtful guidance, and with clear deadlines.
  5. Pay attention to process. We outlined the PEEQ process to the campus community early on and as we continued to develop it, but we also constantly monitored and revised details in response to campus needs and concerns. We truly developed the process as it unfolded. We often had to give people time to work through issues, but in every case the wait led to better outcomes and more fully engaged participants.
  6. Hold people accountable. We knew that we wanted a more data-driven culture and so we evaluated departments on how well they supported their responses with evidence and data. Deans and division leaders rejected reports with too much "spin." This approach has created a new baseline for the way we make decisions.
  7. Build a cadre of leaders. Our evaluation team proved that faculty and staff can rise above turf protection to think and act for the good of the whole. And by engaging faculty and staff in this way, we have developed an important group of committed institutional leaders who are advocates for quality and change.