
HOW TO BUILD A BALANCED SCORECARD©

Part 2: Setting Improvement Priorities*

by

Arthur M. Schneiderman

 

Preface

I’m a long-time advocate of the KISS principle: “Keep it simple, stupid,” or its more formal ancestor known as Ockham’s razor.  But as problems become more complex, so unfortunately do their simplest solutions.  Scan ahead in this part and your initial reaction may be that what I’m proposing looks awfully complicated.  But if there’s a simpler way of getting to a truly effective answer, I’ve yet to find it; nor am I aware of anyone else who has.

That’s because one of the inevitable consequences of our current form of progress is that over time it creates ever-increasing complexity.  We can no longer manage that complexity with the basic toolset that worked in a simpler, bygone era.  Those tools helped in understanding systems where the whole effectively behaved as the sum of its individual parts.  The tools were used to break a big problem into a set of small, manageable pieces.  By optimizing the pieces, we could expect to optimize the whole system.  The very best of managers could even do this in their heads.

Today, complexity arises from the increasing interdependencies between the many small pieces of a big issue.  The response “it depends,” which once served as a ubiquitous excuse, now takes on legitimate meaning.  The interdependencies become further compounded by their eventual non-linearity.  Together these two effects have pushed the critical problem space well beyond the capabilities of simple tools and individual gut feel.  More and more often we are confronted with situations where the whole is much greater than the sum of its individual parts.  The setting of process improvement priorities now resides in that elusive domain.  Yet it is essential to identify the real improvement priorities, not just for the effective use of limited organizational change capacity, but also to weave the convincing story needed to marshal organizational support and buy-in.

Even when an insightful executive can see through that cloud of complexity, verbal explanations are ineffective in transferring his gut feel to others.  They must take his conclusions on faith.  But today, fewer and fewer organizations can rely on faith as their alignment mechanism.  Knowledge workers in particular demand a compelling and logical argument before they will sincerely commit to “making it happen.”

In 1979 the Union of Japanese Scientists and Engineers, the driving force behind Japan’s TQM revolution, codified a set of tools that they called the 7-Management and Planning Tools (or 7-MP).  Over the last thirty years the 7-MP have proven their effectiveness in the achievement of consensus, or what we might call “collective” or “group gut feel.”  It’s one of those tools, the Matrix Diagram, that I will be using here.

Other tools useful in dealing with this increased complexity have been around for half-a-century.  The challenge is to choose the simplest of these tools that can adequately address the issue at hand.  Oversimplifying the problem in order to force-fit it to our more familiar approaches can only create the illusion of understanding, which cannot be a sound foundation for action.  So be forewarned that what follows is, in my view, the least complicated way of correctly identifying strategic process improvement priorities in today’s increasingly complex environment.

This Part describes a methodology for deriving process improvement priorities from an organization’s strategy.  It relies heavily on the framework used in Quality Function Deployment1 (QFD).  That framework uses a series of interrelated matrices to numerically define the strength of the causal relationships that exist between the “what’s” and “how’s” of effective planning.  As you will see, it significantly extends the use of simple causal-loop diagrams (as used for example in BSC Strategy Maps) that only serve to identify major causal linkages.  By quantifying the strengths of these linkages and providing an aggregation mechanism, this approach often uncovers pervasive process improvement opportunities that would be missed when only the most obvious dependencies are considered.  Furthermore, since its output is a numerically weighted list of strategic process improvement priorities, it helps us get the greatest strategic bang for the organization’s limited change capacity buck.

We will start by looking at various strategies and their relationships to segmented stakeholder requirements.  This will allow us to place a strategically chosen “importance” weighting on each requirement.  In doing so, we explicitly identify the specific stakeholder segments that we choose to serve and, by implication, those that are not on our strategic agenda.  Next, we will determine actual performance, both absolute (based on customer needs and wants) and relative (based on competitor performance), and combine strategic importance and performance to generate a numerical score: the higher the value, the greater the strategic need for improvement of that particular stakeholder requirement.

Our second matrix defines the relationship between stakeholder requirements and each of the organization’s various value creating processes.  It quantifies the impact of each key internal process on each of the stakeholder requirements.  Finally, we will combine improvement priorities derived from the first matrix with process linkages from the second to produce a process improvement prioritization list.  This list will represent a scored ordering of processes in need of improvement in terms of the impact of these improvements on stakeholder satisfaction and, therefore, strategic success. 

As I will show, this approach is amenable to various levels of detail.  At one extreme, it reduces to a simple normative model that states “if this is your strategy, then this is what your targeted stakeholders expect and these are the processes you have to get right in order to satisfy those expectations.”  For simplicity, that’s the example I’ll use here.  At the other extreme, detailed studies may be necessary to determine the organization’s real vs. professed strategy, actual customer requirements by targeted segment, perceived performance, organizational barriers, etc.  Where in this spectrum a particular situation lies depends on the level of detail necessary to achieve the required consensus for action.  Often this is determined through a process of successive approximations, starting with the simple normative model and adding more detail until that consensus is reached.

One definition of consensus is the achievement of a state in which the least supportive member of the group “can live with” the majority’s view.  But a consensus for action often requires a much stronger commitment from that last individual, particularly when their active support and participation is required to make that action happen.

Stakeholders and Their Requirements

Organizations have a number of stakeholders.  Generally, we identify them as: 

•     customers,

•     stockholders or owners,

•     employees,

•     suppliers, and

•     the communities in which we do business.

In some cultures, the environment and future generations are being added to this list (see The Fifth Fitness).  In some industries, there are multiple customers.  For example, in higher education customers can include parents, future employers, academic peers, and research sponsors, as well as students and alumni.  In healthcare, the needs of not only patients but also doctors, hospitals, regulatory agencies, and insurers must be addressed.  Where appropriate, distinctions need to be made between historical, current, and future requirements, as well as different “classes” of stakeholders such as large corporations, small businesses, and individuals.

An organization must identify its strategy and the key requirements for each of its strategically chosen stakeholders.  For example, is its stockholder strategy income, growth or non-profit driven?  If it is income driven, then its targeted stockholders will place a high weighting on a steady dividend stream and a stable stock price.  They will be satisfied with average returns on their investment.  On the other hand, the stockholders of growth driven companies do not value dividends, accept above average price volatility, but demand strong long-term growth in stock price.  They expect to be compensated for higher volatility (or β) with above average long-term returns.  The owners of non-profit organizations usually have non-financial expectations for the return on their investment.

Employee related strategies range from nurturing to competitive.  Employees in nurturing organizations hope for security, lifetime employment, liberal benefits, low stress and a family-like environment, while those in internally competitive companies seek an entrepreneurial environment with rapid personal advancement opportunities.  They place much higher value on short-term rewards than on long-term job security.

Obviously, the various stakeholder strategies need to form a self-consistent set.  They are not in general independent.  Income driven companies tend to have nurturing employee strategies, while growth driven companies often have more competitive employee strategies.

Strategies and the Treacy and Wiersema Value Disciplines

As you can see from the above examples, the strategy is really a name for a particular profile of targeted stakeholder requirements.  The name only takes on general meaning if most companies or business units can be assigned to one of the identified categories based on similarity of their targeted stakeholder requirements. 

One such recent classification system is that of Treacy and Wiersema2 (T/W).  They have defined three “Value Disciplines” as a way of classifying companies’ customer strategies.  In the remainder of this Part, I will be using the T/W model as an example of the application of this methodology.  Using their one-dimensional view of the organization’s stakeholders greatly simplifies my description of the elements of the methodology.  But:

Please keep in mind that the T/W model applies only to customer strategies.  All stakeholder strategies must be considered if a robust prioritization is to be achieved.  Omission of a stakeholder group will often lead to priorities selected at its expense.  For example, the T/W approach alone will probably give the wrong answer if applied to a company whose most important strategic imperative is increased stockholder value through growth.  Customers do not usually value the growth of their suppliers.  Therefore, revenue growth generating processes will tend to be de-emphasized when only the customer perspective is taken into account.  So in applying what follows to a particular company situation, the T/W Value Disciplines MUST BE augmented or replaced with a similar classification for all of the important stakeholder strategies.  The methodology for doing this is quite straightforward.

T/W identify three Value Disciplines, which they called “operational excellence,” “product leadership,” and “customer intimacy”:  

Companies pursuing an operational excellence strategy provide the lowest total purchase cost to their customers by providing high quality (conformance to specification), low price, and ease of purchase.  They accomplish this by streamlining processes to minimize costs and hassle, standardizing, providing high-speed transactions, and creating a culture that abhors waste and rewards efficiency.

Product leadership companies provide the best possible product to their customers.  They focus on creativity and rapid commercialization.  They relentlessly pursue ways to leapfrog their own products before someone else does.  Intermediate milestones, keeping on track, and celebrating interim victories characterize their product development process.  They operate a loose, entrepreneurial organization, are results driven, and encourage individual efforts.

Customer intimate companies provide their key customers with the best total solution to their problem.  Their focus is on individual key customers rather than markets.  Their most important process is solution development, which is characterized by delegated decision-making and specific rather than general solutions.

Key Customer Requirements

Let’s now look from the perspective of customers.  They have a portfolio of requirements and will most often choose the supplier that best meets them.  There are many ways to define the general set of customer requirements.  Often they need to be industry specific.  For manufacturing, the set of requirements I usually use is as follows:  

1.     Product Features

a.     Performance Specifications.  These are defined by the performance characteristics of the product relative to competition.  Often they relate to speed, accuracy, resource usage, size, etc.

b.     Fitness for use.  Does the product do what I need to have done?

c.     Fitness for latent needs.  Does the product meet an important need that I did not previously know I had?

d.     Aesthetics.  Is the product visually appealing?

2.     Quality

a.     Conformance to specification.  Does the product actually perform as specified when received?

b.     Reliability.  Does the product continue to perform as specified over its useful life?

c.     Durability.  Is the product robust to normal wear and tear?

d.     Serviceability.  Is the product easily serviced when needed?

3.     Cost

a.     Price.  This is the actual realized selling price, after discounts, etc.

b.     Cost of ownership.  The additional life-cycle costs I incur with the product including inspection, inventory carrying costs to cover poor delivery, rework costs, warranty costs, etc.

4.     Availability

a.     Quoted Lead Time.  Ability to get a commitment to receive the product when I want it.

b.     Minimum/maximum order size.  Ability to get the product in the quantity that I need.

5.     Service

a.     Delivery.  Past performance to committed delivery dates.

b.     Responsiveness.  Broadly defined, this is the ability to get timely answers to all queries.

6.     Relationship

a.     Willingness to partner.

b.     Reputation.

In any particular situation it is important to replace the above list with an appropriate classification of key customer requirements.  These requirements answer the question: What do our customers consider in making their purchase decision between alternative products and/or suppliers?

Relating Strategy to Key Customer Requirements

If we consider customers using the above purchase criteria, and map them against the T/W Value Disciplines, we arrive at Figure 2.


Figure 2.  Relating Strategy to Customer Requirements


The central part of this matrix arrays the three Value Disciplines against the list of possible customer requirements.  The symbol used at the intersections represents their degree of relationship.  For example, the double circle shows that there is a strong relationship between Product Leadership and Specifications.  The single circle shows that there is a moderate relationship between Customer Intimacy and Ownership Costs.  The triangle denotes a weak relationship between Operational Excellence and Aesthetics, etc.  Blank cells denote no significant relationship.

Implicit in the use of this tool is the assumption that these relationships remain essentially constant over the appropriate planning period, which is typically a year.  By regularly revisiting them, the matrix can be updated to better reflect the current situation.  Also, for simplicity I have omitted an additional step often used in QFD.  In that step, we examine the interrelationships between the various requirements to identify conflicts and reinforcements.  We capture them in what are called “roofs” and use them to identify the impact of candidate changes in one selected requirement on the others.  This becomes necessary when the improvement of one requirement can worsen performance on another.  For example, adding features may be offset by an undesirable increase in price.  There are also synergistic improvements: quality improvement usually leads to reduced cost and increased responsiveness.  If changing degrees of relationship and linkages between requirements become important, I generally abandon this entire approach in favor of System Dynamics simulation modeling, since it is optimized for those dynamic situations.

The filled-in matrix in Figure 2 represents my interpretation of the operational definitions of the different T/W strategies.  For example, the matrix defines a “customer intimate” company as one that sets its highest priority on providing products and services that meet customers’ needs, including latent needs, while being both responsive and willing to form collaborative relationships.  Furthermore, it makes sure that it has competitive specifications, low post-delivery quality and ownership costs, and that its reputation is consistent with these goals.  Finally, it ensures that delivery and minimum order size do not conflict with its higher priorities.  Its customers are indifferent to the blank requirements unless performance drops below an easily maintained level.

Once filled out, the matrix becomes the dictionary that defines the various strategies.  As you look across each row, you can clearly see that each strategy has its own distinctive signature.  Should a new customer segment appear that has a significantly different set of key requirements, a new name must be created and added to the list of strategies to capture that unique segment. 

In filling out the matrix, I have adhered to some simple pragmatic rules.  For it to be useful, the matrix should be sparsely populated.  There is a tendency for people to see strong relationships between all of the elements.  If this happens, then the matrix loses its ability to distinguish the different strategies.  When working with a group of people, a facilitator can help by asking questions such as “what is the most important relationship?” or “where is the relationship very weak or insignificant?” or “which is more important, ‘a’ or ‘b’?”  A good goal is to have 40% - 60% of the elements blank and a fairly uniform distribution of strong, medium, and weak symbols.  Looking along both rows and columns, there should be significant differences in the degree of relationship.  In other words, the strategies should look different from one another.  The use of a non-linear weighting scale will further help in combating too many unimportant relationships.

In developing or refining a matrix, a team may encounter significant disagreement about a relationship.  If progress is to be made, the team should make a tentative choice.  It can then go back after completing the exercise to test the sensitivity of the conclusions to that particular relationship.  This is made easy through the use of QFD-specific or spreadsheet software.  I recommend QualiSoft’s QFD Designer, which I used to prepare Figures 2 and 3.  Usually many relationships have to change significantly for it to make any difference in the overall conclusions.  If sensitive relationships are found, then further study of them is required.  For example, if improvement priorities change depending on how important reliability is to customers, then a small focused survey can be done to answer that specific question.  Consensus and buy-in are essential parts of this process and can only be achieved by bringing actual data to significant areas of disagreement.

There are two alternatives for the next step.  If the organization knows which of the three strategies it is following, then “1” is used in that strategy’s column entry and “0” is entered for all of the others.  The “importance to customer” row is calculated by replacing the symbol in each matrix element with the numerical weight for that symbol, multiplying by the number in that row of the “strategy” column, and adding the resulting numbers by column.  In this case, the result would simply be the weights for the chosen strategy.

However, the organization often determines that its business is or should be split among the three value disciplines, say 70%-20%-10%, and that its internal processes do not differentiate between orders from customers in different segments3.  In this case the “strategy” column would contain the numbers .7, .2, and .1 (always totaling 1.0) and the same calculation would be made to determine overall importance to customers.  Our purpose here is to discount requirements that matter mainly to the less strategically significant customer segments.

The particular weights chosen here, 9-3-1, are used to accentuate the differences in relationships.  This is a common set used in QFD.  Others include 5-3-1 and 3-2-1.  Again, sensitivity testing using different weighting schemes can determine the robustness of the conclusions.  What is really important is that items toward the top of the list really belong there, and vice versa.
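
To make the arithmetic concrete, here is a minimal sketch in Python of the calculation just described.  The requirement names, relationship symbols, and 70%-20%-10% strategy mix are illustrative assumptions, not values taken from Figure 2; the three weighting schemes are included so the sensitivity of the resulting ranking can be checked.

# A sketch of the first-matrix calculation: symbols are converted to weights,
# multiplied by the strategy mix, and summed by column to give the
# "importance to customer" row.  All names and numbers below are illustrative.
SCHEMES = {
    "9-3-1": {"strong": 9, "moderate": 3, "weak": 1, "none": 0},
    "5-3-1": {"strong": 5, "moderate": 3, "weak": 1, "none": 0},
    "3-2-1": {"strong": 3, "moderate": 2, "weak": 1, "none": 0},
}

requirements = ["specifications", "fitness for use", "price", "delivery"]

# One row per Value Discipline; one relationship symbol per requirement.
relationships = {
    "operational excellence": ["moderate", "weak", "strong", "strong"],
    "product leadership":     ["strong", "strong", "none", "weak"],
    "customer intimacy":      ["weak", "strong", "none", "moderate"],
}

# The "strategy" column: share of the business in each discipline (totals 1.0).
strategy_mix = {"operational excellence": 0.7, "product leadership": 0.2, "customer intimacy": 0.1}

def importance_to_customer(scheme="9-3-1"):
    weights = SCHEMES[scheme]
    return {
        req: sum(weights[relationships[s][i]] * strategy_mix[s] for s in relationships)
        for i, req in enumerate(requirements)
    }

print(importance_to_customer())

# Sensitivity check: does the rank ordering of requirements survive a change of scheme?
for scheme in SCHEMES:
    scores = importance_to_customer(scheme)
    print(scheme, sorted(requirements, key=scores.get, reverse=True))

If the ordering near the top of the list is stable across schemes, the conclusions are robust to the choice of weights.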

Sometimes, the organization cannot agree on which value discipline(s) it is following.  This may result from lack of data, multiple strategies, or inappropriateness of the strategy classification system to their particular business.  In this case, a second approach may be necessary: a market segmentation study.  One way of doing this is through surveys or interviews of a representative sample of key customers (50-100).  This sample can include past, present and potential future customers and non-customers (i.e., customers of competitors).  Each customer is asked to distribute 100 points between the key customer requirements. 

It is also useful to uncover trends in their point allocations by asking for significant differences in how they would have distributed the points five years ago and what they think might be requirements of increasing and decreasing importance over the next five years (remember, the total stays at 100).  For example, point allocations to quality and delivery have tended to drop as they have become “givens” for doing business, while relationship, JIT delivery, and e-commerce are likely to increase in importance in the future.  At the same time, need for improvement of the organization and its principal competitors can be ascertained using a scale of zero (low need) to ten (high need) for later use.

The resulting data are sorted into groups of customers having similar key customer requirements.  This can be done using statistical sorting techniques or by subjective means.  I prefer the latter.  By translating the point allocations into bar charts and laying them out on a table, they can be visually grouped into similar profiles or customer “fingerprints.”  Occasionally, an organization might require more rigorous analysis, although in my experience the increased expense adds little or no real value.
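
As a rough illustration of the “fingerprint” idea, the following sketch turns each respondent’s 100-point allocation into a crude text bar chart so that profiles can be laid side by side and grouped by eye.  The survey responses are invented.

# Each customer distributes 100 points across the key requirements; printing
# the allocations as bars makes similar profiles easy to spot visually.
surveys = {
    "customer A": {"specs": 10, "quality": 20, "price": 50, "delivery": 20},
    "customer B": {"specs": 40, "quality": 30, "price": 10, "delivery": 20},
    "customer C": {"specs": 10, "quality": 25, "price": 45, "delivery": 20},
}

for customer, allocation in surveys.items():
    print(customer)
    for requirement, points in allocation.items():
        print(f"  {requirement:10s} {'#' * (points // 5):20s} {points}")

# Customers A and C show similar fingerprints and would likely fall into the
# same segment; customer B looks like a member of a different segment.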

It is worthwhile to mention here the techniques developed by Noriaki Kano4 for distinguishing requirements that are “delighters”, “satisfiers”, and “must-be’s (without it, they are dissatisfied).”  This simplified form of conjoint analysis is widely used in Japan.

Often, industry surveys published in trade journals or analysts’ research reports can be used in place of, or as an adjunct to, direct surveys or interviews.  This reduces the cost of determining key customer requirements, but at the price of customer specificity and interactive learning through the interview process.  Either way, the result is a direct numerical scoring of key customer requirements by importance to customers (the higher the points, the greater the importance).

The resulting numbers for a specific customer segment are entered into the “importance to customer” row of the matrix.  This time the calculation is run in reverse, multiplying the weights by the importance, and now summing the result across the rows and entering the sum into the “strategy” column.  Ideally, one of the numbers in the strategy column will be much larger than the others.  This represents the appropriate Value Discipline being followed.  If there is no clear “winner”, then the T/W model is not useful for this market segment.  What we have in fact done is used the methodology as a diagnostic to determine the appropriate strategy name based on key customer requirements.  If the T/W names don’t fit, then we can give the new profile its own, unique name.
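
The reverse, diagnostic calculation can be sketched the same way, reusing the illustrative relationships table and requirement list from the earlier sketch.  The survey-derived importance numbers below are invented.

# Given a measured "importance to customer" row, score each Value Discipline;
# one clearly dominant score identifies the discipline, while closely spaced
# scores mean the T/W names do not fit this segment.
weights = {"strong": 9, "moderate": 3, "weak": 1, "none": 0}

measured_importance = {"specifications": 15, "fitness for use": 20, "price": 40, "delivery": 25}

strategy_score = {
    s: sum(weights[relationships[s][i]] * measured_importance[req]
           for i, req in enumerate(requirements))
    for s in relationships
}
print(strategy_score)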

Assessing Need for Improvement

The objective up to this point has been to rate the key customer requirements in terms of their importance to customers in the targeted market segment.  We did this by using the appropriate T/W Value Discipline or by direct measurement.  The next step is to determine need for improvement.  I’ll be assuming that the product of “importance to customer” and “need for improvement” is a good indicator of “improvement priority.”  For those who are unsettled by this assumption, I refer you to the emerging branch of mathematics known as “fuzzy logic.”  A more rigorous approach would be to use the utility function from economic theory, but that would represent a much more complicated refinement.

Here we have three alternatives:

1.     By entering “1” in the “need for improvement” row, we are in effect determining the key customer requirements we need to get right in order to satisfy those customers.  In the next step, this will produce the enabling business processes or core competencies required to achieve leadership in this strategy or Value Discipline.

2.     By entering absolute need for improvement in the “need for improvement” row, we are in effect determining the performance gap relative to customers’ perceived needs.  This will lead to a prioritization of improvements most useful to the market leader in maintaining or increasing its leadership position.  There are two sources for these data:

a.     Consensus voting by knowledgeable insiders.

b.     Direct data from customers.  For example, if we asked customers to rate our performance on a scale of one to ten, where ten would be their ideal supplier, then the difference between our score and ten would be an indication of our absolute need for improvement on that requirement.

3.     By entering relative need for improvement in the “need for improvement” row, we are in effect determining the performance gap relative to our best competitor with respect to that requirement.  This will lead to a prioritization of improvements with the objective of gaining share against the market leader.  Again, there are two sources for relative performance data:

a.     Consensus voting by knowledgeable insiders.

b.     Direct data from customers.  For example, customers can be asked to rate our performance relative to each competitor on a scale of one to ten.  The numerical difference between us and the market leader, or the best in class for each requirement, can then be used as a measure of “need for improvement,” as sketched below.
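
Here is a tiny sketch of the two customer-data options, turning one-to-ten ratings into absolute and relative need-for-improvement scores.  The ratings are invented, and the clamp at zero where we already lead is my own assumption.

# requirement: (our rating, best competitor's rating), on a 1-10 scale where
# 10 represents the customers' ideal supplier.
ratings = {
    "conformance": (8, 9),
    "price":       (6, 7),
    "delivery":    (5, 9),
}

absolute_need = {req: 10 - ours for req, (ours, _) in ratings.items()}               # gap to the ideal
relative_need = {req: max(best - ours, 0) for req, (ours, best) in ratings.items()}  # gap to the leader

print(absolute_need)
print(relative_need)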

“Need for improvement” scores can be determined in this way depending on the prioritization objective, be it:

•     “what do we have to get right?”,

•     “what do we have to do to maintain leadership?”, or

•     “what do we have to improve in order to gain market share?”

This is also the place where trend data can be used to explain past performance and to predict future areas in need of improvement.

It is “nice” to have the “importance to customer” row total 100 and the “need for improvement” scores based on the original range of zero to ten.  This can be accomplished by re-normalizing and rounding off the entries where necessary.
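
A small sketch of that tidy-up, and of the working assumption that improvement priority is the product of importance and need for improvement (all numbers are invented):

importance = {"conformance": 9.1, "price": 30.5, "delivery": 20.2, "responsiveness": 13.4}
need = {"conformance": 2, "price": 3, "delivery": 5, "responsiveness": 4}  # zero-to-ten scale

# Re-normalize the "importance to customer" row so it totals 100, then round.
total = sum(importance.values())
importance = {req: round(value * 100 / total) for req, value in importance.items()}

# Improvement priority = importance x need for improvement.
improvement_priority = {req: importance[req] * need[req] for req in importance}
print(sorted(improvement_priority.items(), key=lambda kv: kv[1], reverse=True))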

Linking Customer Requirements to Business Processes

We can now turn to our second matrix.  This matrix relates the key customer requirements to the underlying business processes.  There are many ways to classify business processes.  The one I will use here is the system described by Tom Davenport5.  We will use the requirements improvement priority weights determined in the previous matrix.  Our objective is to identify the impact of each business process on each of these key customer requirements.  Following the same rules as previously described, Figure 3 represents my view of these relationships.

Figure 3.  Linking Requirements to Processes


This matrix contains the essence of an organization’s understanding of its business processes.  It is probably unique to a given industry and market segment.  In its detail, it may be dependent on each individual organization.  In a sense, it captures the organization’s knowledge of the internal drivers for customer (or stakeholder) satisfaction.  When done by a group of process experts, it constitutes their collective wisdom as to the key business drivers in their particular industry.  It is the truly proprietary part of what an organization learns about itself in applying this approach.

One of the most important properties of this matrix is that it is not diagonal; there is not a unique one-to-one correspondence between a key customer requirement and a single business process.  Consider, for example, on-time delivery.  Businesses do not usually have an on-time delivery process, staffed by an on-time delivery department and led by a Vice President of on-time delivery.  On-time delivery performance depends instead on many independently managed processes within an organization  (see for example my article on “Metrics for the Order Fulfillment Process”).  In figure 3, the major drivers are manufacturing, logistics (supplier delivery), and information management (scheduling and MRP).  It is this multiple-dependency that creates an interconnected business “system,” which in turn causes the need for this approach to prioritization.

Once the matrix is complete and the customer based improvement priorities transferred from the first matrix, the initial priority can be calculated.  This is done by multiplying the weights by the improvement priority and summing the columns.  But before the final improvement priority is determined, the issue of degree of difficulty or organizational readiness must be addressed.
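
A minimal sketch of that second-matrix calculation follows, with made-up relationship symbols and requirement priorities rather than the actual entries behind Figure 3.

SYMBOL_WEIGHT = {"strong": 9, "moderate": 3, "weak": 1, "none": 0}

# Improvement priorities carried over from the first matrix (illustrative).
requirement_priority = {"conformance": 90, "price": 120, "delivery": 160}

# Relationship of each business process to each requirement (illustrative).
process_links = {
    "manufacturing":          {"conformance": "strong", "price": "strong",   "delivery": "moderate"},
    "integrated logistics":   {"conformance": "weak",   "price": "moderate", "delivery": "strong"},
    "information management": {"conformance": "none",   "price": "weak",     "delivery": "strong"},
}

# Initial priority: multiply the weights by the improvement priorities and sum each column.
initial_priority = {
    process: sum(SYMBOL_WEIGHT[symbol] * requirement_priority[req] for req, symbol in links.items())
    for process, links in process_links.items()
}
print(initial_priority)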

Organizational Difficulty

Processes differ in complexity, both from a technical and people perspective.  Improvement is more difficult in a process where the root causes relate to human behavior than it is in a process where only equipment or methods need to be changed.  Also, data provides the basic fuel for the improvement process.  Can the needed data be generated by the improvement team or does it have to come from someone else?  Cross-functional processes can be complicated by conflicting objectives and ever-present politics.  Since our goal is rapid improvement in results, we need to raise the priority of processes that can be improved quickly and drop the priority of the more difficult ones.  We do this by adding the row titled “organizational difficulty” to the matrix.

One very interesting, commonly observed phenomenon is that “success breeds success.”  Over time, many of the initial organizational barriers dissolve on their own, making the passed-over process improvements more tractable.  Often, the elimination of the old culture of blame is the key to this transformation.

Organizational difficulty is characterized using a subjective scale ranging from “1” (low) to “5” (high).  In practice, teams can easily assign values, since the consideration becomes the number and severity of issues rather than who is at fault.  Once the organizational difficulty is established, the final priority for process improvement is determined by dividing the initial priority by the organizational difficulty and rescaling.
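
Continuing the sketch above, the final priority simply divides each initial priority by the team’s difficulty rating and rescales; the difficulty ratings here are invented.

# Organizational difficulty on the subjective 1 (low) to 5 (high) scale.
difficulty = {"manufacturing": 2, "integrated logistics": 4, "information management": 3}

adjusted = {process: initial_priority[process] / difficulty[process] for process in initial_priority}

# Rescale so the highest final priority reads 100.
scale = 100 / max(adjusted.values())
final_priority = {process: round(value * scale) for process, value in adjusted.items()}
print(sorted(final_priority.items(), key=lambda kv: kv[1], reverse=True))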

The QFD Designer software includes a bar graphing capability that makes the final results for each matrix quickly apparent.  The use of the symbols rather than numbers in filling out the matrices serves a similar role in the visual display of the relevant information.

Performance Goals

The final step in completing the matrix is to determine principal performance metrics and their associated goals, at least for the high priority improvement targets.  These goals must be aggressive yet achievable.  When met, they would move this process from its current high to a significantly lower priority for improvement.  It is these performance metrics and goals that have earned their place on the appropriate BSC. 

In addition to my writings on the half-life method for goal setting, Part 3 will describe a systematic approach for identifying the appropriate measures and metrics for each of the resulting strategic process improvements.

Results for the Normative Model

Figure 3 has been completed for an organization successfully pursuing operational excellence.  The improvement priorities were determined based on customer requirements rather than performance gaps.  Organizational difficulty was assumed to be the same for all processes.  Principal performance goals are based on an organization that is delighting its customers (i.e., there is no customer-identified need for improvement).  The resulting process priorities are listed in Figure 4 in order of decreasing priority.

 

Key Business Process                     Raw Score   Normalized Score   Cumulative %
Manufacturing                                466            21               21
Customer Requirements Identification         249            11               33
Integrated Logistics                         234            11               43
Human Resources Management                   205             9               53
Customer Acquisition                         178             8               61
Product Development                          177             8               69
Asset Management                             175             8               77
Performance Monitoring                       155             7               84
Information Management                       141             6               91
Post-Sales Service                            75             3               94
Order Management                              74             3               97
Planning and Resource Allocation              57             3              100

Figure 4.  Process Priorities for Operational Excellence

The normalized scores are calculated by dividing the raw score by the total of all raw scores and then multiplying by 100.  The normalized score can be interpreted as the percentage of effort or resources that should be focused on maintaining that process at superior performance levels.  It should serve as a major input into an organization’s budgeting and resource allocation processes.  The last column represents the cumulative normalized scores.
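
For example, the first three rows of Figure 4 follow from this small calculation; the divisor 2186 is the total of all twelve raw scores in the figure.

# Normalized score = raw score / total of all raw scores x 100; the last
# column accumulates the (unrounded) normalized scores.
raw_scores = {
    "Manufacturing": 466,
    "Customer Requirements Identification": 249,
    "Integrated Logistics": 234,
    # ...the remaining nine processes of Figure 4 are omitted here
}

TOTAL = 2186  # sum of all twelve raw scores in Figure 4

cumulative = 0.0
for process, raw in raw_scores.items():
    normalized = raw * 100 / TOTAL
    cumulative += normalized
    print(f"{process:40s} {raw:4d} {normalized:4.0f} {cumulative:4.0f}")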

As can be seen from Figure 4, the number one priorities of an operationally excellent company are its manufacturing related processes.  Understanding its customer requirements and managing its suppliers are next in importance.  Getting these three processes right will get them nearly half way there.

Following the same procedure as above, figures 5 and 6 show the process priorities for product leadership and customer intimacy.

 

Key Business Process                     Raw Score   Normalized Score   Cumulative %
Customer Requirements Identification         216            25               25
Product Development                          136            16               41
Post-Sales Service                           123            14               55
Human Resources Management                   105            12               68
Customer Acquisition                          81             9               77
Planning and Resource Allocation              72             8               85
Asset Management                              39             5               90
Manufacturing                                 37             4               94
Information Management                        30             3               98
Integrated Logistics                          12             1               99
Order Management                               7             1              100
Performance Monitoring                         1             0              100

Figure 5.  Process Priorities for Product Leadership

Success in product leadership depends heavily on understanding customer requirements.  In fact it’s more important than the product development process itself.  This result is entirely consistent with the TQM admonition:  “market in, not product out.”  Next in importance are product development, post-sales service, and HR management.  Post-sales service is important because I assumed that it played a major role in determining fitness for use, a very important customer requirement for product leadership.  HR management is key in attracting and retaining the creative people needed for product leadership.

Key Business Process                     Raw Score   Normalized Score   Cumulative %
Customer Requirements Identification         349            19               19
Customer Acquisition                         304            17               36
Post-Sales Service                           261            14               50
Product Development                          189            10               61
Planning and Resource Allocation             156             9               69
Human Resources Management                   147             8               77
Manufacturing                                135             7               85
Information Management                       111             6               91
Performance Monitoring                        54             3               94
Order Management                              48             3               97
Asset Management                              35             2               99
Integrated Logistics                          25             1              100

Figure 6.  Process Priorities for Customer Intimacy

Winning in customer intimacy requires excellence in all processes that directly touch the customer.  Most important are understanding their requirements, acquiring and retaining them, and maintaining high levels of post-sales support.

Conclusions for Part 2

At the start of this Part, I said that this approach is amenable to various levels of detail.  The examples used here are at the simplest level and provide a normative model for process prioritization based on Treacy and Wiersema’s Value Disciplines (figures 4-6).  There are no real surprises in the normative model, and that’s good news.  The methodology passes this simple validation test.  

The rich and counter-intuitive insights arise when actual strategies, stakeholder requirements, performance, and constraints are added to the picture.  But unlike individual gut feel, how these collective conclusions were reached can be explained to others by following the logic trail.  After stripping away what turns out to be the unessential elements of the two matrices, a much simpler picture unfolds, one that is easily used to illuminate that logic path.  I refer you to Analog Devices’ later version of its Scorecard Story for such an example.

*This Part is an extension of a research project done for a major international consulting company in 1995 and described in a working paper that I wrote that year.

1 See for example: Yoji Akao (Editor), “Quality Function Deployment: Integrating Customer Requirements into Product Design”, Productivity Press Inc., May 1990, ISBN: 0915299410

2 Michael Treacy and Fred Wiersema, "The Discipline of Market Leaders: Choose Your Customers, Narrow Your Focus, Dominate Your Market", Addison Wesley Longman, Inc., 1994, ISBN: 0201406489

3 In this case, the possibility of creating “cells” within a process that are dedicated to a particular customer segment should be investigated.

4 See for example: Shoji Shiba, Alan Graham, and David Walden, “A New American TQM: Four Practical Revolutions in Management”, Productivity Press Inc., January 1993, ISBN: 1563270323, pg. 221.

5 Thomas H. Davenport, “Process Innovation: Reengineering Work Through Information Technology”, Harvard Business School Press (October 1992) ISBN: 0875843662


 

©1999-2006, Arthur M. Schneiderman  All Rights Reserved

Last modified: August 13, 2006