Business Officer Magazine
Benchmarking Happiness

The University of Michigan’s business and finance division is mapping the connection between employee and customer satisfaction to fine-tune service and demonstrate accountability.

By Catherine Lilly

With the appointment five years ago of a new executive vice president and chief financial officer (EVPCFO), U-M’s business and finance division embarked on a strategic planning process. One challenge we faced early on was to determine what vision, goals, and organizational values all our division’s diverse units share. We have more than 2,800 employees who report to five senior leaders and work within 61 operating units, including 39 units with direct customer contact.

As part of our initial strategic planning process, we wanted to engage employees and create a common understanding of our shared vision and values.  We held several large focus groups that included employees from each operating unit. From these discussions, our senior leadership team developed a strategic framework based on the understanding that no matter where each of us sits, we share three common aspirations.

First, we want our units to be relied on for deep functional expertise throughout the institution, regardless of whether we are developing software, recruiting staff, implementing internal financial controls, or improving floor care. Our services are broad, and our depth of expertise must be consistent across the entire range.

Second, we want our staff to have a solid understanding of the university as a unique type of business that operates in a very different context from the private sector. Our customers range from Nobel laureates to 18-year-olds who are away from home for the first time. And, in our business, there are no predictable downtimes during which to perform cleaning or preventive maintenance.

Third, we aspire to emphasize the long-term impact of every choice that we and others make in our fiduciary role of caring for the university. We bring this perspective to broadly diverse discussions encompassing facility construction, data architecture, faculty recruitment, and donor giving.

Developing Goals and Metrics

We agreed that the underlying theme of our division’s strategic framework is our working motto: “We Make Blue Go” (capitalizing on the well-known U-M athletics slogan, “Go Blue!”). This phrase encapsulates the notion that the entire university relies on U-M’s business and finance division for daily operating functions.

After much initial discussion, our senior leadership team also determined that we would hold ourselves accountable for achieving three primary goals:

  • To be an employer of choice for high-performing individuals;
  • To be a provider of choice for our customers; and
  • To demonstrate best-in-class leadership in managing the university’s data and human, financial, and physical resources.

Central to realizing the first two goals is the concept that satisfied, high-performing employees provide excellent service to customers. Although we serve an exclusively internal customer base, we recognize that many of the university’s central administrative services could be outsourced at any time. We must serve our customers more effectively and efficiently than external providers could.

Satisfaction Survey Resources

As a way to chart our progress on these first two goals, we selected periodic employee and customer satisfaction surveys as tools for establishing performance metrics across the division. (See “Resources” sidebar for links to our employee survey questionnaire and customer satisfaction assessment tool.) We took our first measurements through a divisionwide employee satisfaction survey administered during February 2005 and a universitywide customer satisfaction survey administered in June 2005. The second round of employee and customer surveys was completed in October 2006 and June 2007, respectively.

In March 2008, we launched our third employee survey and will follow up with a third customer satisfaction survey in March 2009. While the results from the third iterations will yield the most meaningful data yet for establishing trends and tracking progress against action plans, this article details what we have learned through our process to date and some of the actions we have taken so far to improve employee and customer satisfaction.

Identifying Perceptions

Knowing that we wanted to track the relationship between employee satisfaction and customer satisfaction, we chose first to establish a baseline measure of employee morale. The intent was to measure employees’ perceptions of behaviors that are linked to overall satisfaction, high performance, and ultimately, customer satisfaction; this first survey yielded a 76-percent participation rate. Our second employee satisfaction survey, which yielded an 82-percent participation rate, was intended to continue tracking those perceptions and to provide feedback on the actions taken in response to the initial results.

While the overall job-satisfaction mean score remained the same between the two surveys, many unit managers reported significant localized improvements related to actions they had taken during the interim. In other instances, lower scores propelled some unit managers to investigate likely causes and take action. One example was our investment office staff, a group that scored on the top end of survey results. This group’s overall satisfaction score dropped only slightly from the first to the second survey.

While such a seemingly insignificant drop might easily have been dismissed, our investment manager wanted to pinpoint the cause of the decline. After further exploration, his employees realized that the work group had grown in size between administration of the first and second surveys and that internal communication had suffered as a result. The group could no longer operate casually as a close-knit family unit with impromptu feedback and communication; more direct planning for communication and team-building was required.

Armed with this knowledge, our investment manager instituted more regular staff meetings, provided more frequent opportunities for one-on-one conversations, and instituted increased support for team celebrations. Despite very high overall scores, this unit leader used the data and staff conversations to build an action plan, sharpening his own communication competencies along the way.

In another example, a facilities and operations unit formed a labor-management council on the heels of the first survey to recommend and implement actions tied to survey results. This council continues to operate and has made great strides in improving communication and clarifying the group’s future plans. Likewise, the leadership of our human resources and administrative information (information technology) groups focused intently on low-scoring units to identify and act on the root causes driving down satisfaction.

Pinpointing Priorities

Employee Survey Dimensions

The accompanying graph reflects composite University of Michigan business and finance division scores across all operating units.

Of the 15 dimensions, those in the first group are identified as having the greatest impact on employee satisfaction across the division, with the other dimensions following in the second group. These groupings may vary by individual unit. Scores in the third group reflect responses to survey questions about employee job satisfaction as well as the related outcomes predicted by satisfaction. The numbers reflect 2006 survey mean scores on a 1-to-100 scale, with plus and minus notations in parentheses indicating the percentage-point change from 2005 survey scores. Some scores remained the same from 2005 to 2006.

Note: The recently released March 2008 employee survey scores continue the upward trend shown here. In the three-year interval between the first survey (February 2005) and the third (March 2008), there has been a statistically significant increase in 10 of the 15 dimensions—including those with the highest impact on satisfaction—and in all but one of the related outcome scores.

Survey results can help prioritize the areas that require additional resources for improved performance. Our employee survey results are reported in terms of overall satisfaction and as 15 “dimension” scores, describing areas such as supervision, climate, workload, and communication—all areas that directly contribute to satisfaction. (See “Employee Survey Dimensions” sidebar.) Dimension scores are compiled from 123 behaviorally descriptive survey items. Each dimension score is a weighted average, reported on a 1-to-100 scale and built from the responses staff members give on a 1-to-10 rating scale for each item.
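For readers who want to see the mechanics, here is a minimal sketch in Python of how such a roll-up could work. It assumes equal item weights and a simple linear rescaling from the 1-to-10 item scale to the 1-to-100 reporting scale; our consultant’s actual weighting scheme is proprietary, and the item names and ratings below are illustrative.

```python
# A minimal sketch of a dimension-score roll-up, assuming equal item
# weights and a linear rescaling from the 1-10 item scale to the 1-100
# reporting scale. The consultant's actual (weighted) scheme is
# proprietary; item names and ratings here are illustrative.
from statistics import mean

def rescale(x: float) -> float:
    """Map a 1-10 rating linearly onto the 1-100 reporting scale."""
    return 1 + (x - 1) * (99 / 9)

def dimension_score(responses_by_item: dict[str, list[int]]) -> float:
    """Equal-weight average of per-item mean ratings, reported on 1-100."""
    item_means = [mean(ratings) for ratings in responses_by_item.values()]
    return rescale(mean(item_means))

# Three hypothetical items from a "supervision" dimension (of 123 total)
supervision = {
    "sets clear expectations":     [7, 8, 6, 9],
    "deals with poor performance": [4, 5, 5, 6],
    "communicates regularly":      [8, 7, 9, 8],
}
print(round(dimension_score(supervision)))  # -> 65
```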

Additionally, the proprietary software used by our consultant to process and analyze the data describes a key driver relationship between the various dimensions and overall job satisfaction. This relationship enables leaders to determine which areas of action planning may offer the greatest return on investment. Data for each of the 61 units varies greatly, so identifying by unit which dimensions carry the strongest correlation to employee satisfaction—and thus, to customer satisfaction—is vital. By focusing on the dimensions that contribute most to employee satisfaction yet currently yield the lowest survey scores, we can direct improvement effort where it offers the greatest return.
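A hedged sketch of that kind of key-driver screen follows, with plain Pearson correlation standing in for the consultant’s proprietary model: for one unit, correlate each dimension with overall satisfaction across respondents, then surface the dimensions with high impact but low current scores. All figures are hypothetical.

```python
# Illustrative key-driver screen. The consultant's proprietary model is
# unknown; plain Pearson correlation stands in for it here, and all
# scores below are hypothetical per-respondent ratings for one unit.
from statistics import mean

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def priority_dimensions(dim_scores: dict[str, list[float]],
                        overall: list[float], top_n: int = 3) -> list[str]:
    """Rank dimensions by impact on overall satisfaction (correlation),
    breaking ties toward the lowest-scoring dimensions."""
    return sorted(
        dim_scores,
        key=lambda d: (-pearson(dim_scores[d], overall), mean(dim_scores[d])),
    )[:top_n]

dims = {
    "supervision":   [5, 6, 4, 7, 5],
    "communication": [6, 7, 5, 8, 6],
    "workload":      [8, 8, 7, 9, 8],
}
overall = [5, 7, 4, 8, 6]
print(priority_dimensions(dims, overall, top_n=2))
# -> ['supervision', 'communication']: both track satisfaction closely,
#    and supervision has the lower current score, so it leads the list.
```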

Responses to specific survey questions may also provide important indicators of satisfaction to pay attention to across the board. For instance, the dimension of “supervisor” showed no net change in overall rating by employees, with a score of 69 in both the 2005 and 2006 surveys. Such stability might be assumed adequate. Yet, since this area is highly correlated with employee satisfaction, we took a closer look.

One survey statement consistently yielded a low response across most of our units: “My supervisor deals effectively with poor performance.” What could we do to address this perception and boost employee satisfaction? We initiated a 15-month program to educate more than 400 supervisors and managers in skills for confronting performance that does not match expectations. This divisionwide program reinforced our third shared goal of best-in-class leadership by providing everyone with the same tool set.

Honing Customer Feedback

Regarding our customer survey, we collected feedback on all the division’s core service units with direct customer contact. Primary customer segments include academic units, the student affairs division, the U-M Health System, and other central administration users. Our first challenge was dealing with the fact that our customers are captive—that is, they aren’t a random group of citizens who happen to walk in and out of a dealership or storefront. While many of the faculty, staff, and students of the university experience service from the majority of our units, we didn’t want to burden them with too many surveys. The question of how much to ask of our customers became one of the most difficult problems to address in our survey administration process.

While we were led by an external consulting team for the initial survey in 2005, the division’s customer satisfaction project team brought the process in-house in 2007, revising the survey and its administration and analyzing and distributing the results. Based on feedback from our EVPCFO, senior staff, and various project subteams, we made significant improvements to the survey and the process in 2007, which included tightening the survey, improving communication with customers, customizing the survey tool, and changing our survey distribution process. As a result of these improvements, the number of responses nearly doubled for our second survey—increasing from 3,464 in 2005 to 6,085 in 2007.

While some individual units improved significantly in their scores, our overall division customer satisfaction rating dropped from 7.5 in 2005 to 7.3 in 2007. What was the cause of this change? One interpretation is that the dramatically higher response rate in 2007 could have resulted in a more accurate benchmark score compared to the lower response rate of 2005. Other interpretations could be made based on necessary reductions in services that occurred during the interim. Our primary focus in 2009 will be to stabilize the survey process so that we can begin to feel more confident about measuring progress and can appropriately act on the scores.
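One way to sanity-check a shift like 7.5 to 7.3 is a rough two-sample comparison. Since we report only means and response counts here, the standard deviation in the sketch below is a hypothetical assumption (a spread of about 2 points on what we take to be a 10-point scale).

```python
# Back-of-envelope significance check. Only the means (7.5, 7.3) and the
# response counts (3,464 and 6,085) come from the surveys; the standard
# deviation of 2.0 points on an assumed 10-point scale is hypothetical.
import math

def two_sample_z(m1: float, n1: int, m2: float, n2: int, sd: float = 2.0) -> float:
    se = math.sqrt(sd**2 / n1 + sd**2 / n2)  # standard error of the difference
    return (m1 - m2) / se

print(f"z = {two_sample_z(7.5, 3464, 7.3, 6085):.1f}")  # z = 4.7
```

Under that assumption, random sampling noise alone is unlikely to explain the drop; systematic differences, such as who chose to respond in each year or real reductions in service, are more plausible causes.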

Putting Satisfaction in Context

Survey Stages and Tips

Here are nine tips for successful employee survey implementation, based on the experience of the University of Michigan’s business and finance division and supported by best-practice research.

Preparing and Administering the Survey

1. Communicate survey goals to the organization broadly in advance. Convene a representative employee survey committee and consider the use of survey ambassadors or advisory committees. Secure expert consultation on survey design and statistical analysis and use proven project planning tools.

2. Ensure that the survey format you choose allows you to obtain accurate, actionable, and measurable data. Time the surveys appropriately. Invite employee and customer comments. Use questions that drive initial predictions regarding overall satisfaction. Maintain a consistent survey format from year to year. Allow paper and online administration of surveys, and allow individuals at least two weeks to provide input. Promise respondents that their comments will remain anonymous.

3. Set goals that demonstrate the organization’s commitment to improving satisfaction and establish a system of accountability for accomplishing these goals. Determine whether significant improvement is necessary in certain areas. Consider tying senior management’s compensation to goal achievement.

Reporting Survey Results

4. Establish a formal process for communicating organizational and individual unit survey results to employees at all levels of the organization. In our case, results of employee surveys are compiled into a formal written report. The group responsible for the survey communicates overall results, key trends, and notable issues to the organization. Overall organization results are communicated to senior management. Unit heads relay results for the entire organization and for their individual unit to employees via face-to-face sessions presented by senior management, through team presentations, or directly from supervisors.

Identifying Satisfaction-Improvement Drivers

5. Gather additional feedback to obtain the story behind the numbers. Conduct focus groups to identify key issues and generate action plans based upon findings. Use survey comments from open-ended questions to pinpoint priorities.

6. Use statistical analysis of survey results to identify which areas of the organization’s functioning have the greatest influence upon employee satisfaction. Use this information with focus group data to help determine the most effective initiatives for satisfaction score improvement. Analyze survey data for individual employee groups rather than the employee population as a whole when identifying key drivers of employee satisfaction or dissatisfaction.

Acting on Results

7. Form action teams to work on improving areas in which employees have indicated dissatisfaction. These teams should be composed of employees at multiple organizational levels.

8. Regularly and visibly communicate your organization’s efforts and progress toward improvement. Communicate through a variety of vehicles, including periodic meetings, newsletters, and in-unit training programs. Share best practices with all units within the organization. Communicate about failures, not only best practices, to help explain why improvement was not achieved. Celebrate achievements and key milestones.

9. Encourage continuous employee feedback rather than requiring employees to wait until the next survey to voice their opinions and make improvement suggestions. Recognize that this may take different forms depending on the size of a unit and the particular preferences of a leader or employees in that group.

A serious challenge is posing questions that go beyond surface-level evaluation to pinpoint measurable service attributes that customers can realistically and meaningfully rate. Asking “Are we open enough hours of the day?” doesn’t provide a basis on which to hold employees accountable. We tried to identify attributes of service that are shared by all of our units, can be correlated with employee perceptions, and are most important to customers. We ultimately developed a list of nine across-the-board service attributes:

  1. Understanding of customer’s business needs;
  2. Explanation of university policies and procedures;
  3. Communication of service standards;
  4. Functional/technical expertise;
  5. Communication of service changes;
  6. Implementation of service changes;
  7. Accessibility of service and/or service provider;
  8. Level of courtesy; and
  9. Overall customer satisfaction.

As we prepare to mine the data from our third round of surveys, we have already begun to overlay the results from the two employee and customer surveys to assess correlations between employee and customer satisfaction. While a logical assumption might be that we would find high levels of customer satisfaction to correlate with high levels of employee satisfaction in a given area, this is not always the case. In one instance, we saw a significant decrease in employee satisfaction levels while customer satisfaction levels remained stable and were quite high. Taking a closer look, we were able to identify that employees in this particular unit were under increased stress and felt overworked. While they maintained a pleasant demeanor in serving customers, once they had hung up the phone, they were quick to complain to each other about their workload. Obviously this was not a sustainable situation and needed to be addressed.
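A small sketch of that overlay, under stated assumptions: survey results are joined by unit, and any unit whose employee scores fell sharply while its customer scores held steady is flagged for a closer look. Unit names, scores, and thresholds are illustrative, not U-M data.

```python
# Sketch of the overlay described above: join the two surveys by unit and
# flag units where employee scores fell while customer scores held steady.
# Unit names, scores, and thresholds are illustrative, not U-M data.
employee = {"unit_a": {"2005": 72, "2006": 61}, "unit_b": {"2005": 70, "2006": 73}}
customer = {"unit_a": {"2005": 7.8, "2007": 7.7}, "unit_b": {"2005": 7.1, "2007": 7.4}}

for unit in employee.keys() & customer.keys():
    emp_change = employee[unit]["2006"] - employee[unit]["2005"]
    cust_stable = abs(customer[unit]["2007"] - customer[unit]["2005"]) < 0.3
    if emp_change <= -5 and cust_stable:
        print(f"{unit}: employee scores fell {-emp_change} points while "
              f"customer scores held steady; check for workload or stress issues")
```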

Easing Anxiety About Metrics

Most of us are anxious about being measured. Benchmarks, while meant to motivate, can prove detrimental if they are unrealistic. Following our first employee and customer satisfaction surveys, we set an overall target for improvement of 5 percent for the second surveys. This was not achieved for either survey. We’ve learned that when establishing baseline metrics, a number is only a number until you have something meaningful and consistent with which to compare it. This is why trend data is so important. For example, if you’re already scoring an 85 on a 100-point scale, it might be overly ambitious to think you can improve your score by 5 percentage points between measurement cycles. However, a valid goal should be to maintain the positive situation you’ve created.

Fortunately, our EVPCFO has been neither guarded about reporting negative results nor fixed on a particular percentage-point improvement. In fact, he has demonstrated the importance of publicly acknowledging all results, helping employees feel at ease as we all continue to assess how to reinvent the processes that keep our customers satisfied. His long-term aspiration is to develop leaders who use objective data to attend to employee satisfaction, customer satisfaction, performance, and continuous improvement as a matter of course.

We also need to be aware of how we tell our metrics story. We are absolutely committed to continuous improvement with regard to customers and employees, and we do not hesitate to report our results, positive or negative. Yet it’s also important to view those results in context.

Here is one way to describe our progress: While operating within a difficult state economy, we have achieved $17.6 million in budget savings during the past five years through operational efficiencies and increases in productivity. In addition, through acceptable campus service cuts, we were able to reduce our budget by $5.3 million. Examples include reducing the size of the cashier’s office through automation (payments, deposits), eliminating paper mailings (student bills, paycheck notices), streamlining processes to include self-service transactions, and achieving significant savings in procurement contracts.

Therefore, we’ve achieved a total of nearly $23 million in budget cuts over the past five years while at the same time maintaining (and in many cases improving) our existing levels of customer and employee satisfaction. We believe this is quite an accomplishment, however you measure it.

Gauging Readiness

Whether you survey 5 or 35 operating units, the same questions and challenges will likely emerge for developing meaningful tools and assessment processes. Ensuring the backing of your top leadership and unit managers is vital to implementing a successful survey. To get their buy-in, supervisors in particular must be assured that the end goal is across-the-board improvement, not punishment. Likewise, employees must feel empowered to be honest in letting supervisors know their true perceptions of a situation.

Another key consideration is commitment to ongoing assessment. Survey tools are most useful when implemented repeatedly. Because the process for setting meaningful benchmarks and realistic targets requires at least two—and ideally, three—iterations, there is no real point in conducting an employee or customer satisfaction survey only once. Seeking improvement over time requires investing in the process for the long term. The payback for us will include collection of ongoing data trends that benefit all layers of leadership—from the EVPCFO, who now has a basis for applying common standards to the entire division; to associate vice presidents, who can cross-reference progress by unit; to supervisors, who are equipped with unit-specific data for developing specific action plans to boost morale and enhance performance and service.

As we look forward to analyzing the results of our third set of surveys, we’re applying the insights gained thus far to determine the optimal interval between surveys. In addition to the sheer workload required to administer and process the results of each survey and provide 61 sets of unit-specific data, adequate time between surveys is needed for supervisors and managers to actually formulate and implement improvement plans. Therefore, we have settled on a future cycle of 24 months as an appropriate span between each employee survey and each customer survey.

Having a consistent cycle also allows employees to focus on specific performance improvement measures. Another benefit to conducting regular surveys is the culture change it reinforces. Employees understand that this isn’t a one-time exercise that will go away. Rather, they begin to understand and accept in a positive way the notion of ongoing performance measurement and accountability. Our employees see that feedback—from colleagues and from various customer segments—is being taken seriously and acted upon, and they understand that their own happiness is a factor in customer service improvement.

Digesting the Data

As indicated earlier, some of the most interesting data emerging from our surveys are the distinct relationships we are now beginning to draw over time among three dimensions: employees’ perception of their own satisfaction, employees’ perception of the customer service they offer, and customers’ satisfaction with the services they are provided. There is a surprising degree of correlation among all three of these variables.

What does this mean? In general, employees in units that indicate less overall job satisfaction evaluate themselves as not providing as high a level of customer service. This is correlated with lower satisfaction scores reported by the customers themselves. Conversely, increasingly satisfied employees see themselves as working harder to satisfy customers, and this is positively correlated with increasingly satisfied campus customers.
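As a sketch of that three-way relationship, the pairwise Pearson correlations can be computed across units; the unit-level numbers below are illustrative, since we do not describe our actual tooling or data here.

```python
# Pairwise correlations among the three measures, computed across units.
# The unit-level numbers are illustrative; statistics.correlation
# (Pearson's r) requires Python 3.10+.
from statistics import correlation

job_sat  = [58, 63, 70, 74, 81]        # employee job satisfaction (1-100)
self_svc = [60, 66, 69, 76, 83]        # employees' self-rated service (1-100)
cust_sat = [6.4, 6.9, 7.1, 7.5, 7.9]   # customer satisfaction (1-10)

print(f"{correlation(job_sat, self_svc):.2f}")   # satisfaction vs. self-rated service
print(f"{correlation(self_svc, cust_sat):.2f}")  # self-rated service vs. customer rating
print(f"{correlation(job_sat, cust_sat):.2f}")   # employee vs. customer satisfaction
```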

While this might seem like common sense, it’s surprisingly uncommon to find managers who truly act in accordance with this understanding. However, when you have specific ongoing metrics and objective data on hand, it becomes possible to hold all managers within your organization equally accountable for taking action, learning what works, and ultimately demonstrating positive results for their employees and for the university.

Catherine Lilly is the senior adviser to the executive vice president and chief financial officer (EVPCFO) at the University of Michigan–Ann Arbor.