NACUBO

You Can’t Understand What You Don’t Measure

May 13, 2015

By Karla Hignite

Higher education leaders increasingly must know how to extract depth and meaning from data to make informed decisions about the business of the institution. Nowhere is this more the case than with regard to human capital, since personnel-related costs account for the largest segment of an institution's budget, notes Barbara Butterfield, currently executive consultant to the University of Michigan, senior consultant to Sibson Consulting, and a consultant educator for the National Science Foundation in support of the Advance Programs for faculty development. Butterfield (bbutterfield@sibson.com) has 40 years of service in higher education, including appointments at Duke University, The University of Pennsylvania, Stanford University, and University of Michigan. In this interview with HR Horizons, Butterfield discusses the need for leaders to accurately decipher institution measures. This interview is a follow-up to her "Combating Disruptive Conduct" interview, which appeared in the Winter 2015 issue. That conversation explored how to build a culture that defines values and promotes expectations for positive workplace relationships and civil engagement among colleagues campuswide.

What is the best approach for identifying which human capital metrics matter most to your institution?

First, it's important to include multiple perspectives—presidents; provosts; and chief business, academic, and human resource officers—who can name what is essential for them to know in their roles as well as in relation to the mission of the institution. Likewise, these leaders must consider the multiple sources of information and data available to them. Today, every institution is collecting way more data than is necessarily useful, so leaders must be disciplined and strategic with their selection of data so they can better understand what the numbers are telling them. Dashboards are a classic example of this.

How so?

When you look at any dashboard, you first have to assess whether it is telling you what you need to know. One way to determine this is by what you do with it. Do you file it, act on it, or take it up as a point of conversation with your CHRO or other leaders? When the information encourages questions and conversations, you know this is helpful data.

You should also scrutinize everything a dashboard includes, retaining those aspects that are interesting and insightful, and replacing any components that are not useful with data that address more pressing questions. This is something of a trial-and-error process: collecting enough different data to find the pattern of what is most important to review on a routine basis. Regardless of who is in charge of data collection and identifying the set of metrics required to paint the full picture of what you want to understand, the best way to build an effective dashboard is for your group of key leaders to start with the critical questions they want to address. Then work back to determine what data are needed in support of those questions.

What common mistakes do many leaders make in assessing institution measures? For instance, do they often fail to look at a sufficiently wide data set—perhaps one year versus several years?

The biggest mistake is to simply count transactions and stop there without questioning what additional information is missing that would provide context. A related challenge for many institutions is a lack of integration of data that provides decision makers with a clear and consistent picture of trends and developments. I have seen deliberate, but slow, progress toward data integration within higher education. More often, it is the case that if you ask a question of HR, you will get data from the HR system, but if you ask a financial question or fundraising question, the data are likely to come from a different system. How to integrate the data so that they tell a consistent story across the institution is a challenge for most colleges and universities.

Can you provide an example of this?

Let's say you are the chief development officer and you want to know how well your employees are doing with bringing in gifts. Relative to the cost of individual employees to the institution, you could obtain data from the HR system to determine what employees are paid to do the job, the value of their benefits, and their raise and promotion history. From the development office, you can obtain information about the donations for which each employee has been responsible over time. And yet, these data sets don't tell the whole story.

We know that within the development arena, relationship management and personal contact certainly factor into the potential for an individual to raise funds for the institution. How best do you quantify that? Through performance reviews? Bottom line, you have to in some manner integrate information from a variety of sources to adequately assess success. All of this is of course predicated on getting a clear picture up front of what your chief development officer considers essential measures of success and what he or she hopes to achieve in gaining a better picture of employee performance. For instance, does he or she want to create a recognition program to encourage productivity of development staff? Knowing who are among the institution's highest performers by career category can help you take action to recognize higher performance across the board and to ensure that your highest performers know they are valued.

What are some specific human capital-related metrics, and how can leaders ensure they are pursuing the best data to get an accurate picture of what they want to know?

One common example for which all college and university leaders have an interest is how to reduce costs associated with faculty and staff. An approach many institutions consider is an early retirement program or retirement readiness program to enable reduction through voluntary means. Traditionally, HR might provide statistics on how many employees retired during the past year and over the past five years as a baseline to predict how many might take early retirement this year or the following year. This estimate typically includes looking at how many employees currently are eligible for retirement, along with their average salary and benefits information. That is a logical starting place, but you need to go much deeper with this. Now that you know historical patterns, and how many currently are eligible to retire, where are the greatest opportunities or vulnerabilities for the institution?

What do you mean by vulnerabilities?

Suppose your entire English department faculty were eligible to retire. What would be the impact on the institution if everyone from that department did leave? For most institutions, English remains an anchor of humanities programming, so you don't want to see a mass exodus in this area. What decisions would leaders have to make to incent retention of individuals with unique skills or critical expertise? If you need to keep at least a small cadre of faculty in a particular area, who is best to retain based on performance and expertise? When an institution's written mission defines its strategic growth and direction, precise segmentation of data can lead to defensible growth and reduction strategies and avoid "brain drain" that often comes from organizationwide programs.
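This kind of retirement-eligibility segmentation can be sketched in a few lines. Everything below—the roster, the departments, and the age-plus-service eligibility rule—is a hypothetical illustration, not an actual institutional policy:

```python
from collections import Counter

# Hypothetical records: (department, age, years_of_service)
roster = [
    ("English", 63, 30), ("English", 61, 25), ("English", 66, 35),
    ("Math", 64, 28), ("Math", 45, 10), ("Math", 52, 20),
]

# Assumed eligibility rule, for illustration only: age 60+ with 15+ years of service.
def eligible(age, years):
    return age >= 60 and years >= 15

headcount = Counter(dept for dept, _, _ in roster)
eligible_count = Counter(dept for dept, age, yrs in roster if eligible(age, yrs))

# Flag departments where more than half the faculty could retire at once—
# the "entire English department" vulnerability described above.
vulnerable = [d for d in headcount
              if eligible_count[d] / headcount[d] > 0.5]
```

With this sample roster, all three English faculty are eligible, so English is flagged as vulnerable, while Math (one of three eligible) is not.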

In general, leaders are better served thinking in terms of selective reductions rather than looking to see where those reductions might occur naturally across the board. Where has growth occurred in your programming? Where is downsizing required based on university mission and direction? And, what thought has been given to the use of resources retained by a strategic downsizing? Where you do make reductions in a particular area, how might you use some of those freed resources to take a leap ahead in rank and reputation—for instance, to set up math or language labs to bolster student performance?

Something else leaders must assess in advance is the potential cost of a program to incent target numbers of individuals to retire. And, if you offer a voluntary early retirement program and people don't leave, what are the underlying reasons for this? Are health and welfare post-retirement benefits available to them and their spouses? What kind of investment advice have they followed leading up to decisions about retirement? For instance, individuals who are eligible to retire must also have the financial means to feel secure in their decision to retire. If these retirement-eligible employees haven't been investing enough or managing their retirement investments in a way that has yielded adequate savings, they may find they are well below their target for what they need. So, knowing who is eligible to retire doesn't begin to paint the full picture. You can't simply look at historical retirement patterns or current data, but must also consider factors that include financial, health, and social readiness of those who are eligible based on age, years of service, or other metrics.

What is another example of digging deeper to better understand a human capital-related metric?

The issue of productivity is also of interest to business and HR officers alike. This example is oversimplified, but assume that for every faculty member, an institution has two staff members and 20 students. So, if you are tracking this and you find that your staffing numbers are rising while your student population is dropping, this could signal a failure to respond to a downturn in enrollment. Yet, for any personnel productivity measure, you can't look only at the raw numbers. In this case, you have to consider growth over time and by segment to see where this growth has occurred and by what type of contribution. For example, if you did a study of many large research institutions, you would find absolute growth in staff in two defined areas: in auxiliary services—which typically pays for itself—and in academic and research support staff where indirect cost recovery does not fully reimburse real overhead costs.
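The staff-versus-enrollment pattern described above amounts to a simple year-over-year comparison. The headcounts below are invented for illustration:

```python
# Hypothetical year-by-year headcounts: (staff, students), oldest year first.
history = [(400, 8000), (420, 7800), (450, 7500)]

def divergence_years(history):
    """Flag each year-over-year step where staff rose while enrollment fell."""
    return [staff1 > staff0 and stud1 < stud0
            for (staff0, stud0), (staff1, stud1) in zip(history, history[1:])]
```

For the sample data, both year-over-year steps are flagged: staffing grew while the student population dropped, the possible signal of a failure to respond to an enrollment downturn. As the interview notes, a flag here is only a prompt to segment further, not a conclusion.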

Something else to look for in connection with faculty and staff ratios is the relationship of leadership to followership. In some instances where duties are dissimilar within an institution, the ratio of leaders to followers may be close to 1-to-1 or 1-to-2. In units where duties are similar across multiple performers, you may find one manager to 20 employees. Higher education tends to plug leadership into roles all the way down to first-line supervisors. Often the assumption is that if we don't pay competitive salaries, we can offer advanced titles, but that can interfere with assessing the true cost of leadership.

Earlier you mentioned the need to get beyond transactional data. What do you mean by this?

With regard to hiring practices, HR typically has been expected to monitor and manage things like how many days a position remains vacant. That is an example of transactional data. Strategic hiring should not focus on how quickly you select a candidate, but on getting the best candidate, which should include measuring client satisfaction—and by client, in this case I mean the hiring unit or department. Are users happy with the system? Do they feel in control of their own hiring decisions? Where HR can be helpful is tracking such things as whether the institution or a particular unit is making strides in growing high performance and with diversity in relation to its hiring activity, but these, too, can't be based on a single hire or one-at-a-time observations.

In all but the smallest organizations, the hiring process should be automated. Leaders might be wise to consider implementing an online hiring system that puts departments in charge, but with checkpoints and post-analysis that indicates which units are doing a great job in selection and performance, which are ensuring their workforce is diversified, and so forth. If you build all the compliance and expectations into the online system, you then can tap HR as consultant and strategic partner instead of as the controlling agency—and this, by the way, tends to increase client satisfaction. In this instance, HR can assist with things like recommending search firms and providing search and hiring guidelines, screening candidates, performing background checks, resolving salary concerns within the department, onboarding, and assessing what challenges a particular candidate might pose. For instance, is there a trailing spouse? HR can be helpful in considering dual opportunities. In this consulting capacity, the work of both the hiring organization and of HR becomes less transactional and more strategic.

An important measurement in the selection process is the identification of the percentage of internal versus external hires. When the majority of selections are made from the outside, internal candidates become discouraged about their possibilities for growth and promotion. When too many selections are made internal to the organization, the organization loses the potential of new skills, renewal, and stimulation of different perspectives. I like to use a 60/40 view of this metric. For instance, if internal selection occurs more than 60 percent of the time, is the institution too inbred? If internal selection occurs less than 40 percent of the time, are current employees losing hope?
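The 60/40 rule of thumb reduces to a simple rate check. The function name and return strings below are illustrative; only the thresholds come from the interview:

```python
def internal_hire_signal(internal_hires, external_hires,
                         high=0.60, low=0.40):
    """Apply the 60/40 rule of thumb to a period's selection counts."""
    rate = internal_hires / (internal_hires + external_hires)
    if rate > high:
        return "possibly too inbred"
    if rate < low:
        return "internal candidates may be losing hope"
    return "balanced"
```

For example, 70 internal selections against 30 external ones (a 70 percent internal rate) would trip the upper threshold, while a 30 percent internal rate would trip the lower one.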

Our previous conversation focused on the issue of disruptive conduct within an institution. (See "Combating Disruptive Conduct.") Here, too, you noted the value of assessing measures and actions over time to determine directional trends. Can you talk a bit more about metrics with regard to disruptive conduct?

As with other important metrics, you have to begin by looking at available historic data. For instance, how many cases of disruptive conduct were reported regarding distress within a particular unit that did not reach legal or discriminatory levels? Next, you can overlay transactional data. Of those cases reported, how many were you able to resolve pre-arbitration or prior to formal complaint? How many went to formal complaint, and how long did it take to hear or investigate each incident?

Gathering the historic and transactional data segues into predictive activity such as considering the potential value of training as a means of intervention. Perhaps you can offer a course on crucial conversations and look to see if this has any effect on moderating the number of cases reported, or the number you must take action to resolve, the following year.

A next step is modeling. Suppose you do this training and incorporate a special focus on coaching middle managers and upper-level leaders to help them communicate and model appropriate conduct. What kind of target can you set for reducing the number of disruptive events that ultimately result in punishment or dismissal? Can you measure the impact of this over time? Through these activities you can move to strategic assessment. After you incorporate training broadly for your employee population in addition to involving managers and leaders in messaging and modeling the values of the institution, what if you publish a statement on professionalism in the academy or on establishing a culture of care, along with the consequences for breaking those principles? What might happen with your numbers after that?

For instance, in your initial year of review, let's say you have 50 complaints that don't reach legal or discriminatory levels. Then you introduce education and training components and develop an institution statement on professionalism within the academy. While you might anticipate that the number of reported cases would decline, in fact, it is very likely that this number initially will increase by as much as one-third, in part because employees will be clearly informed and feel more confident about coming forward to report. Now, if you can look out two or three years when you have effective reporting and investigation protocols and have implemented effective training and modeling, you should see your incidents begin to drop by at least 20 percent below their original level.
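The arithmetic behind this trajectory (a baseline of 50 complaints, an initial rise of up to one-third, and a long-run target at least 20 percent below baseline) can be sketched as follows; the function and parameter names are illustrative:

```python
def complaint_projection(baseline, bump=1/3, target_drop=0.20):
    # Reported cases typically rise first, as employees gain confidence
    # about coming forward, then fall once training and modeling take hold.
    year_one = baseline * (1 + bump)          # expected near-term increase
    long_run = baseline * (1 - target_drop)   # goal: 20% below original level
    return round(year_one), round(long_run)
```

With a baseline of 50, this yields a first-year projection of about 67 reported cases and a long-run target of 40.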

This process of gathering historic and transaction data, moving through to predictive activity and modeling, and then on to strategic assessment, can be used for a variety of measures and can help leaders to interpret and analyze opportunities for data-driven improvement as well as the consequences of doing nothing.

Karla Hignite, editorial consultant to NACUBO, is editor of NACUBO's HR Horizons; e-mail: karlahignite@msn.com

Measuring Impact

Editor's note: The following text is excerpted and adapted from preliminary research by Barbara Butterfield, Jerry Goodstein, and Rachel Schaming, March 2014.

Understanding metrics is critical to effective leadership. Each type of institution measure holds the potential to answer important organizational questions. Metrics can be as simple as a single point of data—for example, the complaint rates for your unit or organization. Yet, when you have that data, what do you really know? You could qualify the data by determining how many cases or formal complaints were processed and recorded in your information system. Still, what do you know from these transactions? It is all too common to stop here and not segment or integrate causal data in the case-management scenario, thereby failing to take action to change the data in a positive way.

When a department or division asks about its own complaint record or case management, response offices can provide the data as experienced this year, and sometimes across the past three to five years as an average, and call that average predictive. In other words, if unit leaders do not examine and act on the data further, they can expect the same complaint experience next year and in the future. Unless and until leaders examine the causes and remedies for disruptive conduct, solutions will remain unclear.

Conversely, suppose you suggest these leaders look at the data further by segmenting it to understand causes of complaints. Where are the complaints occurring? What is the nature of the concerns and the seriousness of the issues? Are these new or recurring complaints? After studying segmented case data, a leader can begin to see the issues surrounding disruptive conduct as it impacts the individual, the work group, and potentially even the organization. Once leaders better understand the data, they can consider appropriate alternatives to set in motion strategies or goals that will increase desired retention, increase remediation of negative performance or conduct, and result in the desired performance culture.