Business Officer Magazine
Peak Performance

NACUBO’s Accounting Principles Council presents a toolkit that links key performance indicators to organizational goals.

By Teresa Gordon, Roger Patterson, and Jennifer Taylor

Performance measurement reporting in higher education has been a topic of ongoing interest and debate. Yet many institutions lack a practical, easily accessible example of how to approach the measurement and reporting of performance effectively and efficiently. At the same time, growing attention from outside higher education suggests that decisions about when and how to report performance may ultimately be made for us. Both factors have spurred renewed interest in defining an easy and meaningful way to meet performance reporting goals.

Performance Measurement’s Progression

American higher education has a long history of measuring performance. The first attempt dates back to the early 1900s when an effort was made to rank colleges by reputation, a harbinger of what was to come later in the century. Often performance measures were used as a way to market an institution rather than as a tool that could help assess progress toward meeting institutional goals and improve the college or university.

Over time, the higher education community began to reflect on its core values. In that environment, performance measurement assumed a more important role as institutions grappled with such questions as how to define and measure quality and how best to measure and improve efficiency and effectiveness.

As campuses wrestled with such questions, external stakeholders began to press for performance reporting programs at public institutions. Tennessee implemented a comprehensive performance measurement program in the 1970s. The program tied funding to performance and developed institutional standards, core performance indicators, and peer comparisons.

Since then, higher education administrators and state lawmakers around the country have followed Tennessee’s lead. While the efforts in some states were successful, the results were less satisfying in others. The linking of performance to funding in some states made many institutions unhappy and led them to view performance measures negatively. They felt that states were dictating measures with little or no input from campuses. That led to resistance to the performance management concept. Some institutions even discounted the benefits of using performance measurement to achieve their own goals.

At the same time, national efforts to identify important measures and collect institutional data as useful tools for decision making began to emerge. These efforts included the National Center for Higher Education Management Systems and the federal government’s Integrated Postsecondary Education Data System. U.S. News & World Report created rankings of colleges and universities using both input and output measures. The rankings initially elicited negative reactions from colleges and universities, but institutions quickly realized the power of performance metrics and the public’s growing desire for tools to assess performance and quality.

A major expansion in the use of performance measurement occurred toward the end of the 20th century as part of a larger movement to reinvent business and government. The balanced scorecard, a strategic management approach devised in the early 1990s by Robert Kaplan and David Norton, focuses performance measures on achieving an organization’s mission and vision. The University of California was one of the first to implement the balanced scorecard approach in higher education, and the institution used the methodology to assess its administrative areas under what it called the Partnership for Performance Initiative.

Business, government, and higher education institutions increasingly used performance measures to assess results. Special associations and consortia emerged, including the National Consortium for Continuous Improvement in Higher Education. A notable development was higher education’s inclusion in the Malcolm Baldrige National Quality Award program, which recognizes institutions for excellence in quality and efficiency. In 2001, the University of Wisconsin–Stout became the first university to win the Baldrige award. Even more significantly, several of the regional accrediting agencies now include as part of their process an assessment of an institution’s performance measurement program.

Clearly, the public sees the need for and value of assessment in higher education. Citizens are asking: What is the return on my investment? Since it is difficult for an institution to systematically report and improve what hasn’t been measured, more institutions are using performance measurement to assess their progress in achieving strategic goals.

A Project Is Born

Both the Governmental Accounting Standards Board (GASB) and the Financial Accounting Standards Board (FASB) have launched long-term projects to address performance measurement. GASB issued concepts statements in 1987 and again in 1994 emphasizing public accountability and linking it to external financial reporting. At one point it seemed likely that GASB would require audited performance measures as a component of public institutions’ financial statements. Recently GASB published suggested criteria for effective performance reporting that included a direct request for input and experimentation by various branches of government as well as public higher education.

As a result, NACUBO’s Accounting Principles Council, in its ongoing efforts to inform GASB on behalf of higher education, created a project team to study performance measurement. The APC project seeks to identify what is and is not being done in performance measurement and reporting in higher education. It attempts to provide a straightforward, practical approach to starting or revising a performance reporting format. The goals of the project include:

  • developing a simple, visual template for reporting meaningful performance information both internally and externally;
  • developing a model useful to both public and independent institutions;
  • addressing GASB-suggested criteria for performance reporting and related issues (see sidebar, “GASB Reporting Criteria”); and
  • informing GASB of, and proactively representing, higher education’s performance measurement practices.

To help accomplish the project goals, the team conducted a performance measurement survey, analyzed the survey results, reviewed samples of existing performance measurement reports and prior research results, and developed a key performance indicator matrix. At various stages of the project, the team conducted focus groups at NACUBO professional development events to gain feedback essential for the development of a set of reporting tools.

Assessing the Survey Results

The Web-based survey, conducted in December 2003, went to all NACUBO member institutions. There were 262 respondents (a response rate of 12 percent) representing 129 independent and 133 public institutions. Eighty-two percent were from four-year institutions; the rest were two-year institutions. The majority of respondents were at institutions that had between 1,000 and 10,000 students. Seventeen percent were at institutions with fewer than 1,000 students; 23 percent at institutions with more than 10,000 students. The respondents were primarily involved with financial reporting or institutional planning and budgeting. The institutions they represented matched the general characteristics of the postsecondary environment in the United States.

Respondents were asked whether their institution reported performance indicators externally and/or internally. More than three-quarters answered affirmatively (201 of 262), with public institutions reporting greater use than independent institutions (see Table 1). About half of the 201 institutions reported performance measures externally and used them internally, while the other half were split almost evenly between those that did external reporting without using the measures internally and those that used measures internally without reporting them externally.

Table 1 Independent and Public Institutions With Performance Indicators

N=201 Independent Public
Report Performance Indicators Externally 38% 62%
Use Performance Indicators Internally 46% 54%

The respondents from about half of the institutions that externally report performance indicators said that their reports included or were based on data from audited financial statements. Twenty-eight percent of the institutions involved in external reporting said that a version or portion of their reports was mandated by an external entity. But only 11 percent said they included performance indicators in their annual reports or audited financial statements.

The list of performance indicator categories that respondents evaluated included four types of input measures, two types of process measures, two types of output measures, and four types of outcome measures. Enrollment statistics (an input category) was the indicator most commonly reported externally (92 percent) and used internally (100 percent) (see Tables 2 and 3). An overwhelming majority of respondents agreed that enrollment should be reported both externally (88 percent) and internally (97 percent).

Table 2 Performance Indicators Most Commonly Reported Externally

  Percentage Reporting (n=159)   Should Be Reported Externally (n=248)
Enrollment Statistics (e.g., full-time equivalent students, head count, semester credit hours) 91.8 87.7%
Persistence and Graduation Outcomes (e.g., freshman retention rate, percentage graduating in four years) 75.5 82.7%
Efficiency or Comparative Financial Data/Ratios 69.2 65.3%
Graduation Statistics (e.g., degrees granted by level or field of study) 63.5 80.9%
Diversity Measures (e.g., statistics on race or gender for students, faculty, or staff) 62.9 66.8%
Quality of Educational Experience Indicators (e.g., faculty to student ratios, average class size, accessibility) 59.1 80.2%
Selectivity Measures (e.g., acceptance rate, high school GPA, SAT, or ACT scores) 56.6 61.5%

Table 3 Performance Indicators Most Commonly Used Internally

  Percentage Using (n=156)   Should Be Reported Internally (n=248)
Enrollment Statistics (e.g., full-time equivalent students, head count, semester credit hours) 100 96.7%
Persistence and Graduation Outcomes (e.g., freshman retention rate, percentage graduating in four years) 87.8 97.5%
Graduation Statistics (e.g., degrees granted by level or field of study) 86.5 95.4%
Quality of Educational Experience Indicators (e.g., faculty to student ratios, average class size, accessibility) 83.3 88.2%
Efficiency or Comparative Financial Data/Ratios 80.8 96.6%
Diversity Measures (e.g., statistics on race or gender for students, faculty, or staff) 77.6 91.3%
Student Satisfaction or Graduating Senior Survey Results 75.6 95.4%

Persistence and graduation outcomes—an output indicator—was the second most commonly reported indicator externally (76 percent) and was widely used internally (88 percent). Graduation statistics, another output measure, were often reported externally (64 percent) and used internally (87 percent). Efficiency and financial ratios were reported externally by 69 percent of institutions and used internally by 81 percent.

Public institutions reported and used persistence and graduation outcomes more often than independent institutions; independent institutions, in contrast, more commonly reported and used selectivity measures. In both cases the pattern held for external reporting as well as internal use.

Quality and outcome measures were less frequently reported, although most respondents said they should be used internally. For example, student satisfaction measures were used internally by more than 70 percent of institutions but were only included in one-third of the external reports. Studies of faculty and staff morale and of comparative salaries were reported externally by just a quarter of institutions even though more than half said they were using such measures internally.

When asked what should be reported, the indicators that were least likely to be recommended for external reporting were quality of faculty measures (17 percent) followed by alumni or employer survey results (38 percent) and student satisfaction or graduating senior survey results (46 percent). However, at least 80 percent of respondents said all categories of performance indicators should be reported internally.

A Tailored Reporting Tool

The survey and subsequent focus group research supported the development of a reporting toolkit. All types of institutions expressed interest in performance measurement reporting, and although many institutions report performance measures because of an external mandate, the format and content of the reporting are often not mandated and therefore can be flexible. There is a gap between what many institutions consider effective reporting and their current mode of reporting. A surprising commonality of reporting goals exists between independent and public institutions, providing an opportunity to make performance reporting an integrating tool; financial reporting, conversely, has taken diverse directions on select key operational issues. Many institutions lacked meaningful metrics directly linked to measurable strategic goals and objectives, but there was a desire for an easily accessible data reporting format with content that would satisfy GASB and be consistent with other recommended reporting criteria.

The toolkit, which is available at http://www.nacubo.org/x2840.xml, includes:

  • Goal and Metric Outline—a sample of common higher education goal statements and corresponding metrics.
  • Web Template—an interactive tool that takes the content from the goal and metric outline and allows institutions to provide greater detail for each metric, as described in the GASB performance measurement reporting criteria.
  • Goal Appendix—a listing of goal statements extracted from institution Web sites.
  • Metric Appendix—a listing of metrics in use in higher education extracted from Web sites.

The toolkit is designed to provide a format and suggested content for an internal managerial report linking key performance indicators to strategic budgeting and planning goals. Based on input received during the project, the APC team has included in the goal and metric outline sample goals in common areas of institutional performance monitoring that lend themselves to measurement.

The goal statements and performance measures included in the goal and metric outline were selected based on their conformance with a number of theoretical and practical objectives. Performance measurement metrics should 1) be focused more on outcomes and efficiencies than on straight measurement of institutional inputs and outputs; 2) be easily obtainable from common national sources to support ease of compilation and benchmarking of data; and 3) address the core missions of the institution. To help support this last objective, the sample goals and related metrics were further categorized under a series of mission areas commonly identified across many institutions currently reporting mission and goal data.
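
To make the goal-to-metric linkage concrete, here is a minimal sketch of how one entry in a goal and metric outline might be represented as structured data. The mission area, goal statement, metric names, data source, and values are hypothetical illustrations, not entries from the NACUBO toolkit itself; IPEDS appears only as an example of a common national source.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """One performance indicator linked to a strategic goal (illustrative structure only)."""
    name: str       # hypothetical example: "Six-year graduation rate"
    category: str   # input, process, output, or outcome
    source: str     # common national data source, to ease compilation and benchmarking
    target: float   # institution-set target
    actual: float   # most recent measured value

@dataclass
class Goal:
    """A goal statement grouped under a mission area, with its linked metrics."""
    mission_area: str
    statement: str
    metrics: list = field(default_factory=list)

# Hypothetical example: an outcome-oriented goal with two linked metrics.
goal = Goal(
    mission_area="Student success",
    statement="Improve persistence and timely degree completion",
    metrics=[
        Metric("Six-year graduation rate", "outcome", "IPEDS", 0.70, 0.66),
        Metric("First-year retention rate", "outcome", "IPEDS", 0.85, 0.83),
    ],
)

# Report each metric against its target, mirroring the goal-to-metric linkage.
for m in goal.metrics:
    status = "meets target" if m.actual >= m.target else "below target"
    print(f"{goal.mission_area} | {m.name}: {m.actual:.0%} vs. target {m.target:.0%} ({status})")
```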

The Web template development is rooted in the idea that a picture is worth a thousand words. The APC created a simple, visual template for reporting key performance information that both public and independent institutions can reference for ideas; it draws on theory, checklists, and actual higher education performance reports in use today. The template seeks to incorporate GASB’s criteria for performance measurement reporting, which affect both content and presentation. The format also provides information in a manner that easily supports aggregation of data for presentation to public bodies.
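
As a rough illustration of that aggregation, the following sketch rolls up a single indicator across several institutions using an enrollment-weighted average; the institution names, enrollments, and retention figures are invented for illustration and are not survey or toolkit data.

```python
# Hypothetical institution-level values for one indicator; names and figures
# are illustrative only.
institutions = [
    {"name": "Institution A", "enrollment": 2500, "retention": 0.81},
    {"name": "Institution B", "enrollment": 12000, "retention": 0.86},
    {"name": "Institution C", "enrollment": 800, "retention": 0.79},
]

# Enrollment-weighted roll-up of the kind a state system or other public body
# might want to see alongside each institution's own figure.
total_enrollment = sum(i["enrollment"] for i in institutions)
weighted = sum(i["enrollment"] * i["retention"] for i in institutions) / total_enrollment
print(f"System-wide first-year retention (enrollment-weighted): {weighted:.1%}")
```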

The project is not designed to develop a standardized performance report card for all of higher education. Although benchmarking is a valid and desirable use of performance measurement, the most important aspect of the project is linking relevant measures to an institution’s strategic goals. The Web template is not prescriptive; it is a starting place for institutions that lack formal strategic performance measurement reporting and a point of comparison for those with reporting systems already in place. Institutions are encouraged to customize the goals and metrics. To help make this possible, the template is augmented by appendices that include a “shopping list” of commonly used metrics and goal statements, categorized for ease of reference.

At the same time, the team felt that the project could eventually help lead higher education toward some degree of standardization in an approach to performance monitoring. Ultimately, that may be required under accountability initiatives launched by GASB, FASB, or other oversight bodies that affect both public and independent institutions. The project lets higher education be proactive by offering a methodology that resembles what many institutions are already doing. The alternative—waiting for regulatory or funding bodies to impose reporting standards and a specific format—is not desirable for higher education.

Continuing the Conversation

While performance measurement efforts in higher education are increasingly prevalent and robust, challenges remain. For example, a standard definition for specific metrics can be lacking within a given institution. And some institutions tend to select too many measures, which can lead to focusing on data collection rather than on improvement itself. Moreover, there is often a missing link between strategic goal statements and metrics, as well as a tendency on the part of some campus leaders to set forth unrealistic or unmeasurable goal statements. When it comes to communicating performance measurement results, there can be a real reluctance to share data with stakeholders. Many institutions worry that too much emphasis will be placed on efficiency, to the detriment of effectiveness, particularly when dealing with external stakeholders such as trustees, banks, and rating agencies.

GASB Reporting Criteria
  • Purpose/scope is clearly stated
  • Institutional goals/objectives are clearly stated
  • Citizens are involved in goal setting
  • Multiple levels of detail are presented
  • Qualitative analysis of results/challenges is conducted
  • Focus on key issues is concise yet comprehensive
  • Information reliability is assessed
  • Measures are relevant, link to goals
  • Measures relate to cost/resources/efficiency
  • Citizen perceptions of quality/results are reported
  • Comparative information is presented
  • Factors affecting results are discussed
  • Aggregation/disaggregation by user group is shown
  • Measures are consistent across periods
  • Report is easy to find, access, and understand
  • Report is available annually and in a timely manner

The APC team hopes that the availability of a common toolkit will remedy some of these issues. The toolkit draws from strategies that are working effectively for a variety of institutions operating under similar constraints, and it encourages flexibility and customization. Historically, there has been disagreement over whether to include performance reporting in audited financial statements. The template design assumes that it will be separate from the financial statements; however, some or all of the performance metrics could be included as supplementary information within the financial statements. The toolkit can also help address the comparability of public and independent institutions, a process complicated by different standards of financial reporting. Enhanced performance reporting, with a shared approach among institutions, will improve the transparency of reported operational data.

The toolkit is a living document. The APC hopes that it will be regularly updated with contributions drawn from the best practices of NACUBO member institutions. To share your input and comments, e-mail Kimberly Dight at kdight@nacubo.org.

Author Bios

Teresa Gordon is an accounting professor at the University of Idaho, Moscow; Roger Patterson is associate vice chancellor of finance at the University of North Carolina–Chapel Hill; and Jennifer Taylor is assistant vice president for business and finance at New Mexico State University, Las Cruces.
E-mail tgordon@uidaho.edu; roger_patterson@unc.edu; jetaylor@nmsu.edu