
APICS/Babson Presentation

January 17, 1989©

by

Arthur M. Schneiderman


Presentation date: 1/17/89

Venue: American Production and Inventory Control (APICS) Academic/Practitioners Operations Management Workshop, Babson College, Wellesley, Massachusetts

Notes: Transcript*

The "ADI Story" developed through numerous presentations that I was asked to give to our customers.  In August of 1988, I was invited to share our efforts with the Conference Board's US Quality Council.  The audience was a group of Chief Quality Officers from the leading US companies implementing TQM.  There could be no tougher audience with which to debut.  The presentation was very well received and I was soon invited to become a founding member of the Conference Board's Quality Council II.  That encouraged me to share our experience with others through participation in appropriate conferences.

Word quickly spread in the local academic community on our pioneering work in non-financial performance measurement.  In January of 1989, I presented the Analog Story at an APICS conference.  Prof. Ashok Rao of Babson College, Chairman of the Academic Liaison Committee of APICS, led the Workshop, and also edited the tape transcript for inclusion in the Workshop’s Proceedings.  Appended to this document is Prof. Rao’s version of my presentation.  It includes two slides that I did not use in the actual presentation, but were in the slide package that I later provided to him.

*This presentation was tape-recorded.  A transcription of the tape was done shortly after the Workshop but its quality, unfortunately, was very poor.  Sadly, the original tapes are gone.  However, it represents the earliest recorded version of my rendition of the Analog Devices Story, so I have endeavored to faithfully interpret and correct the many errors in, and incomprehensible portions of, the transcript.  My corrections are shown within square brackets: [my corrections c. 2000].  All text outside of the brackets is taken verbatim from the original transcript.  I believe that what follows is an accurate representation of my actual words during the presentation.

 

Slide 1


Performance Measurement

Presented at the

American Production and Inventory Control (APICS)

Academic/Practitioners Operations Management Workshop

Babson College, Wellesley, Massachusetts

January 17, 1989

 

Let me start off this morning … since I am going to be talking as a practitioner about performance measurements at Analog Devices … let me start off under the assumption that [none of you know much about] the company.  I will take a few minutes to tell you what we are and what we do.

 

Slide 2

We are a company that is headquartered not very far from here … a 15 minute drive … in Norwood [Massachusetts].  [We are] publicly held and traded on the NYSE.  Our 1988 sales were $440 million.  We are approximately equally split [in the] business [we do between] the US and the rest of the world.  We will talk more about that in a minute.  [We have] a little over 5,000 employees worldwide.

 

Slide 3

The business we are in is the semiconductor business.  We serve what is called the data acquisition market.  We are [a] fully integrated manufacturer: we design, manufacture, market, and sell our own parts on a worldwide basis.

 

Slide 4

Those parts, principally, are called monolithic integrated circuits.  Many of you may be familiar with our digital counterparts ‑ memories, microprocessors, [and] things like that.  Our products are very much the same except that they deal with the analog world rather than the digital world.  We also make other components: hybrid integrated circuits, assembled products, and a small effort in the area of [board level] systems.

 

Slide 5

I said that we are worldwide.  [This] is the distribution of our revenues, and I think [one thing that’s important], on a particular note, is the section called Asia.  We do [significant] business in Japan ‑ about 17% of our revenues come from Japan.  We are lucky in a couple of ways.  Our market share is growing in Japan and the linear integrated circuit business, where we are [the dominant] suppliers, is one of the few places where the [US has a] positive balance of payments [with] Japan: they buy more from us than we buy from them.

 

Slide 6

We are also a company that has been very successful and part of our success relates to the fact that we made very substantial investments in R&D.  In fact, in 1987, 15% of our revenue was [reinvested back into the] business in terms of R&D. Just [to calibrate that], the US government defines a high tech company as any company that spends more than 5% of its revenues in R&D.  By that definition, I guess we are a high, high tech company.

 

Slide 7

We, historically, have been very successful financially.  Our plan is to continue to be successful as we move into the future.  We are looking towards growth over the next five years [in the] 20 to 25% range, a very significant operating profit and profit‑after‑tax, and a very healthy return on capital.  So Analog Devices is a company that, at least up to this point in time, has not suffered [the same recent problems as other IC manufacturers] although it has participated in the international community.  And it is our objective to make sure that we avoid that pain as we move into the future.

Let me now start where I think Bob alluded [to] as a good starting point yesterday, by talking a little about Analog Devices’ mission statement [or Corporate] Objective.  I am not going to [talk much about it] except to tell you that we have one.  We have one that is known to nearly every employee of the company and is [described in] a document [that we hand] out.  It is given to everyone when they first arrive at Analog.  It was prepared in the mid‑seventies.  [It has undergone] very minor modifications over time, but, basically, this has been the corporate objective of Analog Devices for a very significant period of time. 

What is unique, when you think about the timeframe, about our corporate objective is that it always contained recognition that we are not a company whose sole constituent is the stock market.  Ray S[tata], who is co‑founder of the company, President and CEO and [a] true visionary by any definition of the word, recognized in the mid‑seventies that successful companies could not be here in [the] future with a one‑dimensional view of their purpose.  And so, Ray very clearly (and I think in terms that really have not changed much over time) identified Analog Devices’ broad mission and broad objectives with respect to three major constituencies: our stockholders (which at that time was sort of in common with most companies when talking about a mission statement), our customers and our employees.

 

Slide 8

This diagram really shows a major aspect of Analog Devices’ view of meeting the needs of those constituencies, because we feel very strongly that there is an overlap area.  That overlap is the green section that I call Analog Devices' Business Objectives.  It is our view that we cannot satisfy the needs of any of those constituencies that you see there without meeting our Business Objectives.  By meeting our Business Objectives, we create value that the stock market recognizes in terms of our stock price; we create opportunities for our employees with respect to their individual careers; and we generate funds that allow us to invest 15% of our revenues back into the business and bring new products to our customers that better meet their needs.

So we focus, when we think about performance measurement, when we think about directions for the company, on our Business Objectives.  But, always recognizing that they are necessary but not sufficient to meet the needs of each of those constituencies.  I would like to focus on that green area there.  In a sense it is kind of a glue that bonds these constituencies together at Analog Devices.

[Tom Vollmann] almost stole my thunder because of his last slide.  One of the ways that we think about what the right [questions] are to ask is by looking back at a quote from Kipling:

 

Slide 9

“I keep six honest serving men, they taught me all I knew; their names are what and why and when and how and where and who.”  In fact, the Japanese have picked up on this and you might have come across their expression: “5 W's and an H.”  Very often in a meeting [in] Japan, [at] a very pregnant point in that meeting, someone [calls out] “5 W's and an H.”  It is kind of a code term and it really brings the people back to focus on these kinds of issues.

 

Slide 10

Well, if you look from that perspective … and by the way, you will see QIP on a number of these slides ‑ it stands for Quality Improvement Process. Some people use QIP and call it Quality Improvement Program.  We avoid the word “program” because it usually has a beginning and it has an end.  You can argue that this is a process that will continue forever.  But it, basically, starts … and it is really the fundamental [underpinning] of any of the thoughts that we have about performance measurement … with asking, first of all, the question, why?  The real issue that they are trying to identify is “why do we want to make measurements?”  We want to make measurements in order to be able to identify the important things that affect our business success.  When we have identified them, we [often] have to make some sort of a [surrogate] because many of the things that you will see in [a] moment that represent the broad business objectives are not [quantifiable] in themselves.  And so you have to, whenever you come up with any sort of measure, come up with something necessary as a substitute: something that when it improves, you are moving in the direction of your objectives.  Not [at] all an easy task.

Once we have identified the keys to our business [and] ways of measuring how we are doing, we use a planning tool that we developed at Analog called Improvement Curves.  As Tom was talking about earlier, we are moving into an era in which everything is time based in terms of our thinking.  The company that can learn the fastest is the one that will be most successful in the marketplace.  And so, we have, basically, set many of our goals around the rate of improvement ‑ something that we call “half‑life.”  It is the time it takes to reduce whatever it is you don't want by 50%.  I will talk about that a bit more later on.

So we have set very aggressive goals using a very clearly defined methodology that everyone at Analog Devices understands.  And we bring accountability into that process by making sure that there is an individual, or a group of individuals, that own [each] measurement. They are the people that have the control and the ability to improve what we are measuring.  But we have to continue from that point to make sure they not only have a goal, not only have a clear measurement, but they have the resources needed in order to achieve that goal.  Then, finally, a formal tracking system that makes sure that you don't simply set up a measurement, or forms of measurements, set up a goal and then not look at it.  And I will talk about how we do that, both on a quarterly basis, on an annual basis, and in terms of our five-year strategic planning.

The remaining two things are a feedback process that makes this a dynamic rather than a static process.  One of these is continual improvement by understanding the variance between what your current performance [is] and what your goal is, understanding the root causes of that variance and having a plan that [has] the corrective actions [needed] to close that gap. And it is also important to note … I will give you some examples in a few minutes … that people are incented and rewarded, in terms of these performance measurements, in the same way that they have historically [been] incented and rewarded with respect to financial performance.

 

Slide 11

The whole area of performance measurements is managed at Analog Devices by a group called the Corporate QIP Council.  The names you do not recognize, but let me tell you the roles that each of these people have.  Jerry Fishman is the Executive Vice President and is responsible for all of Analog Devices' operations.  [Kozo Imai] runs all of our operations in Japan.  Larry LaFranchi is the Corporate Controller ‑ he is put in there to [represent] that [function].  Bill Manning is one of our general managers.  Ray S[tata] is the President of the company and Graham Sterling is [a] Vice President.  [Goodloe Suttler] is the general manager of our largest semiconductor operation.  Sue Thompson is our Director of Corporate Training and Development.  Tom [Urwin] runs our operations in Europe. 

That group deals with basically five issues: How do we organize, on a corporate-wide basis … in a really top-down model … to improve quality within the company?  How do we identify what our goals are and deploy them downward within the organization?  That is the subject that we are talking about today.  [In our] QIP efforts, we basically follow the methodology that was developed by Joe Juran.  We also have a very extensive monitoring system, so performance measurements are made available to everyone that needs to have access to them within the corporation.  And [finally,] individual and group [incenting] and [rewarding] system[s].

Now, the next slide that I would like to show you is going to look deceptively simple.  But let me tell you, it took us a couple of years of very extensive effort by about 200 people within the corporation to arrive at the consensus that this was the right view of Analog Devices as we view it in the future.

 

Slide 12

Our business objectives are those three simple things that you see up there.  Currently, [we] are market leaders on average in markets [where] we compete, [being] 2 1/2 times the size of our nearest competitor.  We have been historically [a] growth company.  We want to continue to be a growth company.  Analog Devices was founded in the mid-sixties.  Very often companies at [our] stage in their evolution decide that they are not as anxious to grow in the future as they were in the past.  They have kind of settled into a more stable pattern.  Growth, for us, is a main challenge because the markets that we dominate are growing about 10% a year.  We want to grow as a company at twice that rate.  Which means that there are major challenges for us to identify new market opportunities and product opportunities, and to do that in a way [that] maintains our [historic] level of profitability.

I think if you said, what is the one statement that you could make that is the single performance measurement that we look at at Analog Devices, it is “to be rated #1 by our customers in total value delivered.”  The choice of those words was a significant effort on the part of a lot of people at Analog Devices.  Some people say that [it is] to be rated [#1] by our customers in function[ality] [of the] products that we make.  Analog has been known over the years as being the supplier of the highest-performance [linear] integrated circuits in the industry.  We did not get them to our customers on time.  They often were not very reliable.  But when they did work, they outperformed anything else there.

I think one of the things that we have grown to recognize at Analog is that the market[place] is changing.  Product performance is not the only issue.  There is something called “total value,” something that we are struggling to define, that is the thing that makes customers decide to buy from us or buy from somebody else.  It is our objective to, in a quantified measurable way … and I will show you some preliminary data … be in a position where at some point in the future we can show people data that says our customers, measuring our performance relative to our competitors, rate us #1.

Everything else is unimportant, mind you, with respect to performance measurement if you meet that one objective.  If your customers believe that you are the best supplier, in terms of their needs, then you can't help but meet your business objectives.  Today, when we ask our customers “what is important?” they tell us to “continue providing us with the right products, but provide those products to us at the defect levels, on time, and within our lead time requirements.”  We are smart enough to realize that we need to do that not through inventory and not through inspection, but through fundamental efforts [in QIP] in Analog: reducing our [time to] market, reducing what we call process ppm (which is defect level [in the] manufacturing process itself), our manufacturing cycle time, and our yields.

 

Slide 13

Now the group that I mentioned on a previous slide, [the] Corporate QIP Council, has [established] a set of objectives with respect to those performance measurements for the year 1992.  Let me kind of give you a starting point in terms of where we stood along both those external and internal measurements in 1987.  In 1987, we were running at about 85% on‑time delivery for our customers; actually, a significant improvement over the previous two years where we were running more like 60% on‑time.  Our defect levels were 500 ppm; our lead times were 10 weeks.

Now, let me just calibrate you with respect to those defect levels.  In 1987, a typical Japanese semiconductor manufacturer was shipping parts at under 10 ppm [defective].  [But] relative to our [direct] competition and relative [to] people who [make] similar parts to ours, we were good.  [But we’re] certainly not world‑class relative to other competitors out there in similar product areas.  Our manufacturing cycle time was 15 weeks.  That was pretty good.  Down from 26 weeks in mid‑1986.  And our process ppm was 5000 ppm - so we got from 5000 to 500 ppm by a significant amount of inspection and [rework].  Somebody was talking earlier in the hall about being at 67% yield and, boy, that sounds enviable to me.  In 1987, our yields were only at about 20%; 80% of the product that started in the manufacturing process [went into] a garbage pail.  Our time-to-market on a significant new product line was three years.

Now, I said we used this concept of a half‑life.  The number that you see there is nine months, which represents the amount of time we expect it would take to reduce the defect level by 50%.  If we compound that to 1992, and put some very realistic English on the ball, this is what we would come up with as our 1992 goals: 99.8% on-time delivery, which means that we will virtually never shut down our customers’ just-in-time [lines].  Less than 10 parts per million defective.  Leadtime [of] only three weeks.  [Manufacturing] cycle [time of] 5 weeks.  You say, "Uh-oh, the implication of that is inventory.”
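The half-life compounding described here can be sketched numerically.  The sketch below uses figures from the talk (500 ppm outgoing defects in 1987, a nine-month half-life, five years to 1992); the helper function itself is illustrative, not something from the presentation:

```python
# Half-life improvement model: every `half_life_months`, the defect
# level (or any metric you want to drive down) is cut by 50%.
def project(start, half_life_months, elapsed_months):
    """Compound a half-life rate of improvement over an elapsed period."""
    return start * 0.5 ** (elapsed_months / half_life_months)

# 1987 outgoing defect level of 500 ppm, 9-month half-life, 5 years out:
ppm_1992 = project(500, 9, 60)
print(round(ppm_1992, 1))  # roughly 5 ppm -- consistent with the "<10 ppm" goal
```

Note that the nine-month half-life compounds to roughly a 100x reduction over five years, which is how a 500 ppm starting point supports the "less than 10 ppm" 1992 goal.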

It is the view of our technical people that that is the theoretical limit of the time it would take us to manufacture the products.  So there will have to be the need for a strategic inventory in our manufacturing process.  We will basically pull out of that inventory to meet our customers’ leadtime [requirements].  That will not be finished goods inventory, but an in‑process [inventory].

We will do no outgoing inspection of what we ship to our customers.  Our yields will be well over 50%, and we are on our way in that direction now.  And our time-to-market will be below 6 months. 

[We also are pursuing cost-reduction] that I won't go into today.  I don't think that we, as a company, have reached a stage that we are going to abandon costs as a useful piece of information.  That day may come, but certainly there is a lot that I think we can learn by understanding our costs and managing them, particularly in today's competitive world.

So those end up being the kind of broad performance areas that we have identified, and that we have linked into our Corporate Objectives.  We measure these in a system that I think is turning out to be very useful for us at Analog.

Most companies have some sort of financial tracking system, some set of financial goals.  When they talk about [their non-financial] performance measurements, they usually are subservient to the financial goals.  People will basically have them in separate reports, a separate way of thinking about them.  That basically creates a hierarchy of performance measures in which the financial measurements are considered to be superior to any of the other measurements that you put in place.

 

Slide 14

At Analog we set up a parallel system in which we have financial goals [and] financial measurements, but we also have quality‑related goals and measurements.  They go very much along the lines of things that I mentioned earlier.  In fact, I think there is a growing sense that it is the left‑hand side that you need to focus your attention on.  To the extent that you can meet your quality goals, you will meet your financial goals.  [How’s] that, if you want to talk about cutting the Gordian knot?  It is getting to the point that we can truly believe that if you are meeting your quality goals, your financial goals will be met as a result of that.  That represents a challenge.  I think that we are getting pretty close to that point. 

The difference between the things that we see in yellow and things that we don't see in yellow [is that] the ones that are in yellow are the established system that we have had working for a year.  The other ones are measurements that are under development, although now all of these measurements have corporate wide definitions and we are beginning to collect data on them.  So starting in our fiscal year, which started November 1, 1988, we are collecting, in a consistent way, across the corporation, measurements of our performance in those areas.  PPM is outgoing parts per million defective.

One thing that I should note here is that we have had to supplement our traditional financial measurement system with a revenue model.  When you look at a company like Analog, it turns out that our growth is very strongly linked to product introductions.  Our new products can have very similar revenue life cycles and so when you [grow] the company at a sustained rate, it is by ever increasing the number of products [introduced].

So we have a performance measurement system that looks at the number of new products that were introduced and the expectations that we have for each of those new products with respect to peak revenues that they [will] generate, how long it will take to reach that peak revenue and [what] we are able to expect [for] decay.  We are very fortunate as a company in that our products have very long life cycles, partly because they are heavily involved in the military [and] avionics businesses.  Our products take 10 years to reach peak revenue and, although we are a relatively young company, we think they decline [after] about a 15-year period in time.  So our products have about a 25-year life cycle.

In fact, we still manufacture and sell products we call [bricks], they are old epoxy encapsulated modules that we introduced in the early seventies, and we still sell them everyday.  We also track, relative to this, the leading indicator of future revenue growth: what portion of our new products are in our bookings at any period of time?  Basically we say, what portion of our bookings this month will [be] generated [by] the products that were introduced in [the] last 6 quarters?  That is the data that you see in this box here: traditional financial measures [and a] model for growth that begins to look at our expectations of new products in the system that tracks new product introductions.
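The leading indicator described here (the share of a month's bookings generated by products introduced within the last six quarters) can be sketched as follows.  The product names, dollar amounts, and the 91-days-per-quarter cutoff are invented for illustration; the talk does not specify how the six-quarter window was computed:

```python
from datetime import date

def new_product_bookings_share(bookings, intro_dates, as_of, quarters=6):
    """Fraction of bookings dollars from recently introduced products.

    bookings:    {product: booked dollars this month}
    intro_dates: {product: date the product was introduced}
    """
    cutoff_days = quarters * 91  # approximate a quarter as 91 days
    new = sum(amount for product, amount in bookings.items()
              if (as_of - intro_dates[product]).days <= cutoff_days)
    return new / sum(bookings.values())

# Invented example: two recent products and one long-lived "brick"-era part.
bookings = {"ADX1": 400_000, "ADX2": 250_000, "OLD9": 1_350_000}
intro = {"ADX1": date(1988, 3, 1),
         "ADX2": date(1987, 11, 15),
         "OLD9": date(1975, 6, 1)}
share = new_product_bookings_share(bookings, intro, as_of=date(1989, 1, 17))
print(f"{share:.1%}")  # share of this month's bookings from new products
```

In this invented data, the two recent products account for 32.5% of the month's bookings, which is the kind of single number the tracking system would roll up.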

The left hand side … I won't show data on all of this, probably we won't have time to do that … but let me talk about that top bar.  I said to you, our customers expect the product to be delivered on time.  What you would tend to think [of] from that is a pretty simple measure: you [either] ship it on time or you didn't ship it on time.  [It] turns out that it is much more complex than that.

One thing that we have learned about our set of performance measurements is that they have got to be bounded.  We said that we were looking for measurements that were surrogates for our objective.  Our objective is to meet our customers’ needs with respect to delivery.  Just aiming at on‑time delivery is not enough because you have to ask the questions: “What if I miss?”  “How late am I when I am late?”  “What if I am using very long lead times?”  What if the customer wants it in two weeks and I tell him he can have it in 20 weeks and I get it to him in 20 weeks, should we be satisfied with that?  So we have a very detailed set of measurements that not only look at whether we shipped when we promised the customer that we would ship it, but [as] a subset of that, if we didn't, who is responsible for it?

We have four groups that can claim responsibility for late shipment.  [The] factories didn't get the product [manufactured] in time.  Our credit people ([I said we are our] own distributors on a worldwide basis) [because] a lot of small accounts don't get their product delivered to them because of credit related issues.  So the credit department could be responsible.  Our warehouse department, who ships the product, could be responsible.  And the fourth group, the customer.  Although it doesn't happen very often, [sometimes] around Christmas and Thanksgiving, we get calls from customers that say, "We are going to shut down for a week, so remember that product you were supposed to ship us next week, don't ship it to us."  We don't change our commitment; we [just] don't ship it to them.  And it says that we are not on time [on] that shipment.  We basically say that the customer is responsible for that.  That virtually never happens, aside from the 1% of the time, around the end of the year.

We also look very carefully at our lead times, we look at what our customers are asking for in lead times, we look at what we are quoting for lead times and we look at the difference between those two.  I will show you some of that data.

The only other thing that I would add down here is this turnover.  You might say that everything that you see up there are manufacturing performance measurements.  But there is more to a company than manufacturing.  And what we decided is that we need to start looking at other measurements of performance.  By turnover here we mean labor turnover.  And we have now put in place a system … that is probably not a good one, but it is a start … in which we look at turnover within our direct labor force and within our indirect labor force.  It is the same definition for every one of our divisions. 

I should have mentioned earlier that we have seven divisions of Analog Devices located all around the world.  Each of them, historically, has [been] totally autonomous in what they did except [for] financial reports and human resources policies and a shared sales force.  We are in a world of swinging pendulums, swinging from a highly decentralized organization to a much more centralized organization, particularly with respect to performance measurements.  We make sure everybody uses the exact same set of definitions with respect to all of these performance measurements. 

I will show you in a little while the corporate scorecard [and] tell you how [we] go about using that, but this whole set of data leads into a corporate scorecard, the annual benchmark plan, [and] annual review of our rolling five year strategic plan.

Audience: Art, could I ask you a quick question?  Dealing with the ppm on the left hand side: how quick is that feedback?  Is that your customer’s incoming inspection?  Or is that after the device is utilized for a short period of time?

Speaker: This is outgoing ppm based on outgoing inspection.  We have an outgoing QC at the end of our [manufacturing line] right now.  This is basically using an industry standard definition for electronics companies that inspect a product before shipping it to a customer.

Audience: What is the time frame for turnaround on this type of scorecard [performance] measure?  Is it a week coming out; is it a month coming out?

Speaker: The parts that I am responsible for [are] available within three days after the end of the month.  The parts that our financial people are responsible for take a little longer.  (Laughter)

I will tell you [a] little about that because one of the things I want to stress is that it is hard to figure out what to measure.  It is hard to get people [to] agree to measure the right things.  It took us over two years to generate that box up there.  In fact, right now, what we are doing, we have eleven wholly‑owned sales affiliates located in Europe and Asia that sell our products.  They have their own [delivery] systems, and their own ability to track customer service performance.  We have not been able to “see through” them to their end customers.  And so now we are in the process of trying to deal [with] or “see through” them to their end customers.  It turns out, as the streams of data come together, the affiliate data, particularly on this right hand side, can take up to four or five weeks.  So it is not timely data.  One of the things that we are now focusing [on] is, how can we get this data up and available certainly within one week of the close of the monthly reporting period?

Data that I am responsible for is published daily.  In fact, that is another issue that I would like to touch on before we finish ‑ the fact that you have to have a hierarchy of performance measurements.  Because what we measure at the corporate level and aggregate level doesn't point to the causes.  It doesn't really help people identify opportunities for improvement.  So there has to be [a] disaggregation process that goes down to the levels of detail where people can actually identify root causes of problems.  What I do is I have a daily report that is issued … [that has] a lot of paper [at] the moment, it is going to very shortly [be] available on‑line … in which the operating people come in, go to their terminal, turn it on, find all the lines that did not ship the previous day that were due.  Then they can go and say, “why [didn’t that line] ship?” and [as]sign [responsibility], and do a [Pareto] analysis, etc.  But you are absolutely right; getting this stuff timely is a major challenge.  But we have proven that we can do this within three days.
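The daily drill-down described here (tag each late line with one of the four responsible groups, then rank the causes) can be sketched as a simple Pareto tally.  The category names match the four groups named in the talk, but the counts are invented:

```python
from collections import Counter

# One invented day of late-line records, each tagged with the group
# assigned responsibility: factory, credit, warehouse, or customer.
late_lines = (["factory"] * 14 + ["credit"] * 4 +
              ["warehouse"] * 2 + ["customer"] * 1)

# A Pareto analysis simply ranks causes by frequency, largest first,
# so improvement effort can focus on the biggest contributors.
for cause, count in Counter(late_lines).most_common():
    share = 100 * count / len(late_lines)
    print(f"{cause:<10} {count:>3}  {share:5.1f}%")
```

Ranked this way, the largest cause (the factory, in this invented data) stands out immediately, which is the whole point of disaggregating the corporate-level number down to assignable root causes.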

Now, getting the performance measurements is one thing, but presenting them is another.  I cannot overstate the importance of making sure you come up with a thoughtful delivery vehicle to show people the data.  Thick reports with performance measurements in them are not [useful].  I have a very [strong bias toward graphical] representations of data.  And [what] I am going to show you is going to look very, very complicated too.  However, any of the people that affect improvements in what I am going to show you now understand how to use this chart.  This is a chart available on‑line in an executive information system which is currently being upgraded.  It is very difficult to get on to the system three days after the end of the month because that is the time everyone runs to their terminals to see not only what they have done, but what other people have done.

 

Slide 15

Let me try to explain this one to you.  The names you see across the top: the first seven are manufacturing divisions; the last part is the aggregate for all of the divisions.  Let me focus on the first bar here, this is our semiconductor operations here in Wilmington, Massachusetts.  The scale here, for that first bar, is the percent of lines that did not ship by [the] date we promised to ship it to the customer.  This is a semi‑[log] scale.  I don't have time to explain to you why we have chosen that scale at the moment, but I do have some copies of the paper that I have written on this.  If any of you are interested, I would be glad to give it to you.  But this particular division, in the month of January 1988 … this is 10, 20, 40 up to 100% … so about 13% of the lines did not ship on time to the customer.  The next dot that you see there [at] about 10% is for February, and all the way through to the last little x, that plus sign, which is December of 1988.

Now, a couple of things that you can see about those points, one is that they bounce around a little bit.  When is it statistically significant that they bounce around?  Well, fortunately, we have statistical quality control that allows us to know when something is a statistically significant departure.  So, the two green lines that you see there are control limits.  Control limits are placed on [all of our] performance measures.  You look at the last three months, and if you are above the upper control [limit], you replace [the] circle with a red x [‑ red chosen on] purpose.  If you are below the lower line, you get a green x. 

Three days after the end of the month, when the data appears, there is a telephone call [to ADS].  The telephone [call] comes from Jerry Fishman ‑ Chief Operating Officer ‑ who will call either [Goodloe Suttler], who is the General Manager, or Lou Fiori, who is the Operations Manager.  Lou is the guy who owns this data.  Jerry Fishman will ask, “What happened?”  I will tell you what the wrong answer is: “Lots of things.”  If you say “lots of things”, you gave the wrong answer.  You can say that within the control [limits], but you can't say it outside the control [limits].  It has to be one or two major things that happened.  People are prepared at that point to answer [Jerry’s question] because they know it is coming.  They get daily reports, which show month‑to‑date [results], so they know what it is going to look like.  Similarly, if you get a green x here, the same kind of question is asked: You did something different and it worked, what was it, so we can share it with other people.  So performance measurements [are] displayed visually in a way that people can look at and instantly know if there is something significant that needs action.

Audience: Why aren't the [other] two points [that are] above the control [limits shown as] red x[s]?

Speaker: [We only do that for data] within the [most current] three months.

Audience: Oh, three months.

Speaker: Yeah, [the] last three months.  We forgive you after a while.  People said, “Why are you still holding that mistake that we made six months ago against us?”  So we say, all right, we will just look at the last three months.

Audience: (Inaudible)

Speaker: They were.  Well, they may have [been] because this control chart is recalculated every month.  It is a funny kind of control chart because those of you who have dealt with a control chart may have recognized the fundamental problem: control charts make the assumption that the process you are measuring is [stable].  And I am in the continuous improvement business, so I have to recalculate the control charts for a process that is not stable, or a process that is improving.
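[A minimal sketch of how such a “moving” control chart might be computed. The least-squares exponential trend and 3-sigma limits are my assumptions ‑ the talk does not specify ADI’s actual method ‑ and flags are restricted to the most recent three months per the forgiveness rule. Each month the whole calculation would be rerun on the updated series.]

```python
# Sketch: fit an improvement trend to the months BEFORE the flag window,
# then test only the last three points against k-sigma limits around the
# extrapolated trend. All specifics here are illustrative assumptions.
import math

def fit_log_trend(values):
    """Least-squares line through log(values); returns (intercept, slope)."""
    ys = [math.log(v) for v in values]
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys)) / sxx
    return ybar - slope * xbar, slope

def flag_recent(values, window=3, k=3.0):
    """Return 'red'/'green'/None per point; only the last `window` months
    are eligible for a flag, mirroring the three-month forgiveness rule."""
    history = values[:-window]
    a, b = fit_log_trend(history)
    resid = [math.log(v) - (a + b * i) for i, v in enumerate(history)]
    sigma = max((sum(r * r for r in resid) / len(resid)) ** 0.5, 1e-9)
    flags = [None] * len(history)
    for j, v in enumerate(values[-window:]):
        r = math.log(v) - (a + b * (len(history) + j))
        if r > k * sigma:
            flags.append("red")      # significantly worse than the trend
        elif r < -k * sigma:
            flags.append("green")    # significantly better than the trend
        else:
            flags.append(None)
    return flags
```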

Audience: That means the guy could have an x one month and then three months later you are still arguing and it should be the other way around.  And the thing is three months old.

Speaker: Let me say each [month we] take a point off.  So this is a window, basically.  So you can, through the screen that appears before this [one], choose what type [of chart] you want to look at.

Audience: Do you encourage exceeding the limits, the control limits, on the lower side?  Or do you encourage [them] to stay within the limits.  I am sure that [you do] but, you know, is there any [discussion] over this?

Speaker: Well, let's talk about that a little bit because I have to explain this number down here.  It says “half‑life in months.”  We don't have the time today to go into this whole issue of improvement curves and how we go about quantifying things.  Let me just tell [you that] this number represents the slope of that red line.  It basically says that for this division, if I extended that red line, 25 months later that division ought to be at half the percent lines late as it was at that point in time.

Audience: That's because improvement is judged more difficult for that progression?

Speaker: Well, or they are having more trouble.  Now these two divisions here are basically similar in the products that they make.  This one is [in] the US.  This one is in Ireland.

Let me, without diverting too far, let me just say that my expectation would be, for a division that was using the [QIP] methodology to improve on‑time delivery, a half‑life of nine months.  That is based on looking at nearly 100 improvement projects, not only within Analog Devices but [also] within other companies. I guess I did the reverse of what you did.  I was a consultant for six years before I came to Analog.  So I had an opportunity to work with companies all over the world in the area of quality improvement.  A lot of the data that I collected began to point in the direction of there being some normative patterns with respect to the rate of learning when you apply the QIP methodology.  With these kinds of problems, 50% improvement every nine months represented a good target.  So my diagnosis of this is that this division here is having problems with [the] quality [of] its QIP process.
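[The half-life quoted here is just the slope of the trend line on the semi-log chart. Assuming the percent-late series follows p(t) = p0 · 2^(−t/h), the half-life h is −1/slope of a straight-line fit of log2(p) against month number. The least-squares fit below is my assumption; the talk does not say how the red line was fitted.]

```python
# Hypothetical estimator of the improvement half-life from monthly data.
import math

def half_life_months(pct_late):
    """Months to halve the defect level: -1/slope of a least-squares fit
    of log2(percent late) against month number."""
    ys = [math.log2(p) for p in pct_late]
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys)) / sxx
    return -1 / slope

# A division improving 50% every nine months, starting at 40% late:
series = [40 * 2 ** (-m / 9) for m in range(12)]
```

[For this idealized series the estimator recovers a nine-month half-life; real monthly data bounce around the trend, which is exactly what the control limits are for.]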

 

Slide 16

Let me just move to that for a moment and show you just part of the issues associated with that.  Now, this happens to be a chart that is a few months older than the previous one, but what you see here is the half‑life, in months, over time for that division, that first division.  You can see if you go back to 1986, they were improving at the rate of 50% every 10 months or so.  So you say, oh, that's pretty good.  Except, let me tell you what happened.

Ray [Stata], our chairman, said that he thought we had to improve customer service.  When you have a company like Analog with autonomous divisions, believe it or not, nobody listened.  And so he said it a second time, and nobody listened.  We did not have performance measurements at that time so it was real difficult for people to listen.  But finally he [pounded] on the table and he said, I want improvements in on‑time delivery.  People started attending to on‑time deliveries.  At the beginning of the improvement process, there are lots of easy things to do.

Starting off in the situation where you are stable at 60% … 60% on‑time delivery, 40% late. Finally [you] get serious about it because the guy up front there says he is going to fire you if you don't.  Then you begin to pay some attention.  You take some of the slack out of the system, some of the slop out of the system.  But, eventually, you run out of steam, you have done all of the easy things.  That is what happened to that division in January.  The rate of improvement slowed dramatically.  It went up to 60 months, 5 years, [as the half-life of] improvement. That wasn't going to get us there, and all during that period of time that they were [at] 5 years, they were basically seeking technological fixes for on‑time delivery. 

It wasn't until the Summer of 1987 that they finally said, well maybe we ought to try [this QIP] methodology. That is what they did.  They put in place the QIP methodology and the rate of improvement accelerated.  It got down to [the] 9 or 10-month rate of improvement goal. 

In terms of the questions of incentives, other divisions had gone beyond that goal of 9 months.  But this division went down to 9 months and you can see as of December, they went back up to 25 months [refer to slide 14].  Now, if you look at what was going on here, a lot of that 25-month half‑life has to do with these last few points here.  If it had not been for October and December, they would still be on [an] aggressive half‑life.  So the issue really is going to be what is going to happen in January.  If January brings that back down to something like 7% late then those two red points will not [have much weight] and this half‑life will turn down very rapidly.  But this is a division that is doing outstandingly in terms of its performance.

This division here is the division in England [MDL].  You can see that they have troubles.  They are the corporate black sheep with respect to performance levels in customer service.  The [employees are having a] real struggle in that division; it has just had its general manager changed.  [This performance was part of the] cause.

This division here [IPD] has just been consolidated. It was a consolidation of two divisions.  This division here has a new general manager.  So whether there is any relationship or not, the organization perceives some very important things.  This division here was the only division that got [a] corporate bonus.  A $750,000 check was presented to them two months ago in Ireland.

Audience: Who is the keeper of the scoreboard?

Speaker: I am.

Audience: Is there any stipulation of when there are [???]?

Speaker: The data on on‑time delivery, I own from the inception.  The divisions do not have any input into it.  It comes out of our order entry system, which is a centralized system so they do not get to play with that.  In addition to that, I will tell you, there is always a speculation that one has, perhaps a cynicism that one has, that people are gaming these things.  And yet, with the exception of [one] division, it is quite the opposite.  Even things that we have not measured before, we start measuring [and see] their improvement. 

Now, the other manufacturing metrics: yields, cycle times, stuff like that, that is all coming from the divisions and they are calculating all that for themselves.  We agree upon the equations that are going to be used, we have agreed upon the definitions, and it is trust at this point.  The comment that I made to people is that I think you can expect the same sort of “reward” for fiddling with QIP performance measurements as you would [get] if you fiddled with the financial measurements.  So anyone that goes in and fiddles around with the income statement or the balance sheet and does things that are against common practice, they get a “reward” for that.  I don't think we are ever going to have to face that day, but I think people realize that it could be very serious if somebody has been in there fiddling around with the numbers.  I have a nasty ability [to catch] people when they do that.

Audience: Are all of your divisions equally [involved] in this quality [improvement effort]?

Speaker: No.

Audience: Are there different trends [for] some of them [that] may be at different stages than others?

Speaker: Yes, that is right.  Absolutely.  I can show that…

Audience: Do you weigh on‑time delivery by size of the order?

Speaker: We don't do it on dollars; we do it on each [line].  Each thing that we measure is one line on a purchase order.  If you order 10 pieces of a certain part and we do not ship 100% of those 10 pieces by the day we promise, then you are [dinged].

Audience: So it is customer service level per line?

Speaker: Per line, yes.

Audience: This is a detail, but [what do you do on] an order of 5,000 shipped 1,000 per month over a 5-month period?

Speaker: Each shipment is a line.

Audience: OK.  All right.
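[The line-level rule just discussed ‑ a line counts as late unless 100% of its ordered quantity ships by the promised date ‑ reduces to a simple calculation. The field names below are illustrative, not ADI’s order-entry schema.]

```python
# Sketch of the per-line on-time delivery metric: a line is late if it
# shipped short or shipped after the promise date.
from datetime import date

def percent_lines_late(lines):
    late = sum(
        1 for ln in lines
        if ln["shipped_qty"] < ln["ordered_qty"]
        or ln["ship_date"] > ln["promise_date"]
    )
    return 100.0 * late / len(lines)

orders = [
    {"ordered_qty": 10, "shipped_qty": 10,
     "ship_date": date(1988, 12, 1), "promise_date": date(1988, 12, 1)},
    {"ordered_qty": 10, "shipped_qty": 9,   # partial shipment counts as late
     "ship_date": date(1988, 12, 1), "promise_date": date(1988, 12, 1)},
    {"ordered_qty": 5, "shipped_qty": 5,    # complete, but shipped late
     "ship_date": date(1988, 12, 5), "promise_date": date(1988, 12, 1)},
    {"ordered_qty": 20, "shipped_qty": 20,  # complete and early
     "ship_date": date(1988, 11, 30), "promise_date": date(1988, 12, 1)},
]
```

[Under this rule the sample above scores 50% of lines late: one line shipped short and one shipped after its promise date, regardless of dollar value.]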

 

Slide 17

Let me show you a scorecard.  When I created this thing I called it a Corporate Performance Audit.  I thought that would be a neat [name] but it did not last very long.  People said, “Ah, it looks like a scorecard to me”.  So this is called [the] Scorecard.  So you can see financial, new products, [and] QIP goals.  We actually had to distinguish [between IC and assembled products] performance measurements.  The same definitions may not work in all of your businesses. 

At Analog, all of our businesses make electronic components and they tend to sit together on the same circuit boards.  There is a lot of similarity, except some of them are integrated circuits and some of them are assembled products.  There are really fundamental differences [in] how you manufacture those.  We felt that if we aggregated these into a single measurement, [it would] lose all meaning.  So we had to disaggregate even at this highest level of aggregation and differentiate between integrated circuits and assembled products.  You put in where you ended up in 1988, and we have a benchmark planning process, which we are in the midst of today.  This is the negotiated 1989 benchmark level of performance.  Then we use these rates of improvement to come up with individual quarterly numbers, and that is all built in and that is up on the computer.
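[One plausible way to derive the quarterly numbers from an annual benchmark, assuming a constant rate of improvement (constant half-life) through the year. The talk confirms the quarterly numbers come from the rates of improvement, but not this exact formula, so treat this as a sketch.]

```python
# Hypothetical interpolation of quarterly targets between the year-start
# level and the negotiated year-end benchmark, in geometric steps.
def quarterly_targets(start_pct, year_end_pct, quarters=4):
    """Constant-half-life path from start level to benchmark level."""
    ratio = year_end_pct / start_pct
    return [start_pct * ratio ** (q / quarters) for q in range(1, quarters + 1)]
```

[For a division ending 1988 at 12% of lines late with a 1989 benchmark of 6%, this yields four successively lower quarterly targets ending at exactly 6%.]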

At the end of each quarter [these] are the actuals.  Then I get printouts in this form, it has your [division’s] name, and I sit there with two pens ‑ a red pen and a green pen ‑ and I kind of make a judgment call.  I sit with my boss and make sure he agrees with that call.  I try to put no more than 1 red circle and no more than 1 green circle.  Then I [send] it [to] the division General Managers. We have two quarterly meetings ‑ two meetings a quarter ‑ of all the General Managers.  At one of those meetings ‑ the one at the end of the quarter ‑ they have ten minutes and they stand up among their peers and put their scorecard up.  They explain it for 10 minutes and explain the red circles and the green circles.

If it is a red circle, they explain what went wrong, [and] what was the root cause.  You can't say, I don't know.  People have to understand, when they don't meet their performance objectives, what was wrong. Now it could be the performance objectives were too high.  But then they have to stand in front of their peers and say, "Well, I really overestimated what I thought I was going to be able to do.  It was harder than I thought.”  We don't change it at that point [or renegotiate] it.  You can't negotiate it downwards; but, at least, we understand that the variance was because the goal was not achievable.  That is rarely the case.

It is particularly difficult when you have divisions that are [similar] to one another and somebody else made their goal and you didn't.  And you say, "Well, what is different?  How come Ireland gets 97% on‑time and you tell me [there] is a fundamental limitation [of] only 95% ‑ you can't [get better than] the 95%.” 

So it turns out to be very good.  I guess we tend to be internally competitive.  In fact, as a company, it turns out that when you do a competitive analysis at Analog, most divisions have as their most serious competitor a sister division. Very often we will make several different products that all have the same functionality and the same application.  In fact, there is a story at HP that they had seven separate groups [from] five Analog Devices divisions arrive on different days to try to sell a part into a particular application.  So, we also have some coordination problems.

If I said to you, what do [you] think the reaction of the General Managers is [to the scorecard]?  You would probably say they must hate it.  In fact, they love it.  “Now”, they say, “I know what I am being held accountable for.”  No ambiguity, all definitions worked out, all linked up in such a way that [if] they meet their objectives [and] we aggregate [them] up to the corporate [level], we meet our objectives and, if chosen properly, we will make the greatest advances in the value delivered to our customers that we can make as an organization.  [If] the Corporate [QIP] Council has chosen [the right] things that need improvement, [then] we will succeed in [achieving our objectives].

[Note:  I’m unable to identify the actual slides that were used with the following paragraphs.  They apparently were working slides of then current data that I did not normally use in presentations.  However, the text is self-explanatory].

Let me take one minute to show you some other dimensions of improvement.  These are our factory cycle times.  They are down to 45 days now and they will be down to 35 days by the end of 1989.  That is world‑class.  If you go to a Japanese integrated circuit manufacturer, 45 days is a pretty good number for them.  So we have made very, very significant improvements in factory cycle time.

Let me make one comment on that: there is rarely a situation, in my experience, where, if you have a well thought out set of performance measurements, you can improve one without improving another, [if] you are really taking it seriously when you are identifying root causes.  The spill‑over factor is immense: what you discover as you try to improve on‑time delivery are things that help you [improve] your cycle time, help you improve your quality.  Everything is linked together in a very positive way, if you are really doing a thoughtful process of identifying root causes and taking corrective actions.  So cycle times are down.

Audience: Excuse me, could I ask how you measure that?  Do [you] measure that in WIP dollars?

Speaker: We measure it through our MRP system, in terms of tracking lots as they move through the manufacturing process.  So it is actual time, not WIP.  We do have some WIP in the system, and we count that WIP only when it is not in the production plant.  In other words, if it is just a temporary holding area for material ... let me stop for a moment and tell you a little bit about [our] manufacturing process. We have a front end and a back end.  The front end is called [wafer fab].  We have about 15 process locations in there and a typical product will go through about 150 process steps, so it is a recursive process.  It is not like most manufacturing facilities where product moves smoothly from one [station] to the other and then out. It goes around in loops, and that is why it used to take us 26 weeks.  Most of the 26 weeks was figuring out where the stuff was.  So what we do is we measure cycle time in the manufacturing process as actual residence time in manufacturing.  However, there are places where we have strategic inventories, and [we exclude time there from the calculation].
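[As a rough sketch, cycle time measured this way is the sum of a lot’s residence times across tracking steps, with intervals spent in strategic-inventory locations excluded. The lot-history structure and location names below are hypothetical.]

```python
# Sketch: cycle time as actual residence time in manufacturing, skipping
# any step flagged as a strategic-inventory location.
def cycle_time_days(lot_history):
    """Sum days in each tracking step, excluding strategic inventory."""
    return sum(
        step["days"] for step in lot_history
        if not step.get("strategic_inventory", False)
    )

lot = [
    {"location": "wafer fab", "days": 30},
    {"location": "die bank", "days": 14, "strategic_inventory": True},
    {"location": "assembly/test", "days": 15},
]
```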

[Our] outgoing defect levels are trending down.  And finally, the closest I can get to what I told you was where the rubber meets the road … and that is to say that we are all in this for one reason and that is to be rated number one by our customers.  So how do we rate?  Well, some of our customers actually do rank their suppliers.  HP will tell us we are 2 out of 8.  Most [customers] have not reached [such] levels of sophistication with respect to their vendor [rating] systems that they can tell us how we rank.  Some of [them] even question whether they should tell us how we rank relative to our competition.

What happens on a corporate‑wide basis is everyone sends me any measurements that our customers make on our performance. The two common things that almost every one of our major customers measures [are] our on‑time delivery, which is usually a [percentage], and our lot acceptance rate[, a measure of] their incoming quality.  So we just take that and we average it.

 

Slide 18

The customers that you see on the right are currently in the database, and we add more and more every day.  We publish a quarterly report, which goes out to our sales force.  It, basically, summarizes these data.  What that data says is that this group of customers has seen our on‑time delivery improve at the rate of 50% every twelve months ‑ not inconsistent with our own internal records. They will say today that we are [in] the high eighties.  There is a lag, as often we get this data a quarter after the measurements.  So we would say, "No, we are in the low nineties today", as you [would] expect [with] that kind of lag.  So there is a great deal of consistency between our measurement of our on‑time delivery and their measurement.  That is what it looks like in terms of delivery and, finally, what it looks like in terms of [quality].

 

Slide 19

[They see a] 50% improvement every 8 months in terms of [our quality] efforts.

What I would like to be able to do someday is to stand up in front of you and say[, for example,] that 60% of our customers rate us as their #1 supplier by whatever criteria they choose to use.  To me, that is the ultimate in performance measurements in terms of [one’s] business success.

Let me just finally say that all of what I have shown you may look deceptively simple.  What we have learned over the last three years is this whole area of performance measurement is far more complicated than one might think on the surface.  You have to understand what you are measuring and make sure that they’re really linked [to] your business objectives.  You have to get consensus within the organization on the definitions if you’re really going to measure the right things.  If those things improve, business improves.  You have to make sure that you have timely systems.  You also have to make sure that you have visually pleasant systems so people can look at the data in an efficient way to figure out what the hell you are supposed to do with it.  All of that takes a great deal of time and effort and a great deal of arm twisting and occasional intervention from the Chairman of the Board to twist an arm here or there.  I think that it all pays off.  And it all pays off when the customers say to you, "You are the most improved vendor.”  Or, when, ultimately, your sales engineers say, “You know, I am spending more of my time selling and less of my time expediting today.” So those are the ultimate tests of any performance measurement system.

Audience: You say that you use goals frequently.  How do you reconcile that with the velocity of continuous improvement?  Are you changing the goal?

Speaker: Absolutely.  Everybody knows that the goal is really the nine month rate of improvement, or the twelve month rate of improvement, depending on what you are negotiating; depending on the complexity of the problem.

Audience: Because there is always the tendency to be satisfied when you achieve [your] goal?

Speaker: Let me give you these as an example.  I told you that we do things with respect [to] incenting and rewarding.  Every employee of Analog, direct or indirect, is on the bonus system.  It depends on corporate performance.  There is a payout factor determined each quarter as the bonus is paid.  We also wanted to give [additional] incentives to [the] people at the divisions.  So we have something called [the] Divisional [Adder].  It is for exceptional financial performance.  If you really exceed your financial goals, significantly, then, in addition to getting [the] normal corporate bonus, [there] is a little added for your [division]. 

The way that we handled quality-related measurements is that we put in, basically, thresholds.  No matter how well you do financially, if you are not above 90% on-time delivery during the last six months of the year, and if your new product bookings are not above a certain level, then you will fall out of [the Divisional Adder] bonus.  In fact, one of the things I have to do today is prepare for a meeting next week in which we set goals for next year.  [The] goal for 1987-1988 was 90% on-time delivery; [for] 1989 it is 95% on-time delivery. In 1990, [it] will be 97 1/2% on-time delivery.  So nobody is in a position where they can say that they have reached [their] goal and that it is good enough.
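[The threshold mechanics can be sketched as a simple gate: the quality thresholds must be cleared before exceptional financial performance earns the Divisional Adder. The structure comes from the talk; the function and field names are illustrative, not ADI’s actual bonus formula.]

```python
# Hypothetical eligibility check for the Divisional Adder: quality
# thresholds gate the financial bonus, regardless of financial results.
def adder_eligible(div, otd_threshold=0.90):
    """True only if quality gates AND exceptional financials are met."""
    return (
        div["h2_on_time_delivery"] >= otd_threshold    # last six months
        and div["new_product_bookings"] >= div["bookings_threshold"]
        and div["exceptional_financials"]
    )
```

[The design choice is the asymmetry: strong financials cannot buy back a missed quality threshold, which is the whole point of the gate.]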

[That’s] moving targets, continuous improvement.  In fact, [from] this graphical way of showing [our performance] today, you can see that it is continuous.  It is not something in which a division goes flat when something improves.  Most of the points are within control [limits].

Audience: So all of these targets have been set by [the half-life method]?

Speaker: Yes, people seem to like that.  It is something I accidentally discovered a number of years ago on a very, very long flight home from Japan.  People like the idea that, intuitively, there is a measurable, quantifiable rate of improvement that one can measure themselves against.  It seems that a nine-month half-life is the right thing to do, a nine-month rate of improvement.

[If] they are at eighteen months, they will call me up, and say that something is not going right with our QIP, could you come and sit in?  You sit in on the meeting, and you [diagnose] it, and begin to realize that there are not the right people in the room, [or] someone there is dominating it, or it is not being facilitated properly, or the systems that they have in place don't provide them the data that allows them to identify root causes.

Same thing on the other side: that division that has made a sudden and very rapid rate of improvement had finally formalized things in terms of QIP [implementation].  They put together the right group of people, put a very high priority on it, and they were able to deal with some of the cross-function[al] issues. 

One of the things that [came out] today and yesterday [is that for the] problems that we wrestle with, most of the issues are cross-functional.  When I first started making these measurements, people in the factories said, “It is not us, it is people in credit.  That is why the stuff is [late].  It is credit policies.”  People in credit said, “Baloney, we never have the material [to] ship.”  Until you have the measurements in place, and are able to sort these things out, and say, “Look, guys, this is the breakdown [of] responsibility.  How can we work together to solve some of these things?”  And that is happening.

Audience: That was my question.  I was curious as to what [happened when you] incorporated various areas of responsibilities for problem [solving].  My question is, who determines that responsibility and if you have a problem with people assuming that responsibility?

Speaker: It is all done automatically.  What happens in our tracking system is, each day the computer scans through every line that had a [scheduled] shipment of that date and it says, did yours ship by that date? If the answer is no, then it asks the question, was there inventory?  Was the customer on credit hold?  When a customer is on credit hold, a flag goes up in the system that prevents shipping documents [from being printed].  So it would scan that flag to see if the flag was up.  It is a very logical thing - there are 15 different categories. 

Each of the 15 categories has an owner [from] among those four groups.  People have said, “Ok, I'll own that one.”  Some of them are a little blurry; some tend not to fall neatly and cleanly into one of those four categories.  So someone says, “Alright, I'll take that one if you take this one.”  They may end [up] being in the position where they [have] to get people to cooperate with them [from] outside their area of control in order to solve that problem.
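[The nightly scan and category assignment described above might look like the following sketch. The talk mentions 15 categories owned among four groups; only a few hypothetical checks and category names are shown here.]

```python
# Sketch of the automated responsibility scan: for each line due that day
# that did not ship, walk an ordered list of checks until one names a
# reason category. All field and category names are illustrative.
def classify_miss(line):
    if line.get("credit_hold"):
        return "credit hold"          # e.g. owned by the credit group
    if not line.get("inventory_available"):
        return "no inventory"         # e.g. owned by manufacturing
    if not line.get("docs_printed"):
        return "paperwork"            # e.g. owned by order administration
    return "other"

def scan(due_lines):
    """Assign a reason category to every due line that failed to ship."""
    return [
        (ln["line_id"], classify_miss(ln))
        for ln in due_lines if not ln.get("shipped")
    ]
```

[Because each category has a pre-agreed owner, the scan turns the daily data directly into accountable work items rather than a debate about whose fault the miss was.]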

Audience: Is it a normal reaction to solve the problem versus going on a witch-hunt to find out who screwed up?

Speaker: It is now.  Wasn't before performance [measurement].  If you don't measure, [it] doesn't get better.

Audience: Who sets the goals?

Speaker: The Corporate [QIP] Council.  What I do is, at the time of [our] benchmark planning, I use half-lives, historic performance and [our] 1992 goals.  I sit there and move things around.  I require more of the bigger divisions [than of the smaller ones].  [They] have greater influence on where we are in 1992.  I send out a [strawman] proposal of their goals.  One of our divisions came back and said that they could do better than that.  [Previously that] division was [unwilling to set aggressive improvement goals].

Audience: Why?

Speaker: Because of labor issues [there], there has been an incentive from the first day [to form workgroup teams].  [It] is the only place that we have a union in our company.  [From the] first day there was a strong incentive for us to have [quality circles].  So we started [quality] circles there 7 or 8 years ago, very soon after we started that division.

All of these circles led[, in their own] way[, to improving quality].  It didn't [produce] a hell of a lot of improvement [though] because all of these circles [were] functionally oriented rather [than] cross-functionally oriented.  The important thing was that we had six years of people trained in [quality] circles.  So they [understood fishbone] diagrams, they knew about control charting, they knew about [other QC tools].  [These were the] basic tools.  So, in one sense, you [had] people down at the bottom of the organization [who] knew and understood the tools.  [The] problem was they did not have the empowerment [from] above to use those tools.

So we brought in [a new] general manager.  He came from National Semiconductor; he managed their facilities over in the Far East.  He [had] taken that facility through TQC.  He's a young guy, very enthusiastic, a real leader on this stuff, and he started at the top.  I went over there every month for 8 or 10 months and worked with the people there to get the structure in place.  We signed up one of the people, a very popular person in the organization, to [be] a full-time [QIP] facilitator.  We put together a structure, [facilitation] and everything.  He drove the top, and the people at the bottom drove because they now had [leadership] with zeal.  It is one of these things that is, absolutely … if I can show you, if I find it quickly.  That division, once they got their stuff together... they went through an easy time.

 

Slide 20

There was a point there that they went into negative improvement.  But once they got their act together, put together their team, they just started driving this process down like crazy.  They are the ones now setting [the standard] on half-life.  My proposal to them was [a] goal for 1989 of 96% on-time.  They said that it is not good enough, 97% is our goal.  I said your goal for output ppm is 300; they said it was not good enough, 100.  Every day they post, in the cafeteria, what their outgoing ppm was.  When you walk into the cafeteria, people gather around.  Now it helps to walk over and give them a check for $750,000.  (Laughter)  That check, by the way, is shared by everyone, so that [it] is a team reward, not individual.

Audience: I'm sure that check was minor compared to what the work benefits [???].

Speaker: Yes.

Audience: You talked about the ownership of a problem or bench[mark] process, when did you start?

Speaker: Ok, within the last couple of years.  We have had a problem at Analog in terms of accountability.  One of the things with Analog is that we have a no layoff policy in the company (which is different [from a lifetime employment] policy).  But people have gotten to feel very secure, and the company, probably as a defensive mechanism for that, never really held anybody accountable.

So it was remarkable when I realized some of the managers that I was dealing with when I first arrived here had failed in everything that they had done the previous 10 years, [and] were still there.  And that was because people were not held accountable.  They did not own things.  If it didn't work, then the whole group seemed to take responsibility for it.  We are moving away from that; we are not forcing the issue on people, but we are [asking them] to own something and to measure it.  [Whether they] get rewards or not is a consequence of how well they perform against those measurements.

Audience: Does that computer program that you have, does it come down to details like [???] not in file?  [???]  revision to it.

Speaker: Right now, the measurement system we have at that level of detail is only customer service: on-time shipping to our customers.  My philosophy for measurements is that there is [a] hierarchy [of measurements].  You can't just have a top measurement, which aggregates things and looks at Analog Devices customer service on a corporate basis, or time [to] market on a corporate basis.  You have got to move down to the next level, which, basically, is what people, for example, in the division, are looking at.  What is the engineering manager looking at?  Then, down to individuals; what are they looking at? 

I [even] try to encourage people to come up with their own measurements of their own performance.  Did I get this job done on time, if no one else is measuring it, what percent of the time do I get my job done on time?  So there is a whole hierarchy of those measurements that we have to evolve over time.  We don't have them yet.

Audience: I assume that there are a number of different things that you want to put in on-time delivery [metrics].  You didn't [have] those all at once.  Did you come in and start one at a time?

Speaker: We started the on-time delivery one in [the] fall of 1985.  The other ones I started about a year ago: what we call the manufacturing measures, which are yield, cycle time, stuff like that.  The one we have had the greatest difficulty with, to be honest with you, is time to market.  Everybody struggles with that because when you are coming up with new products, when do you start?  When do you finish?  We have one division which releases products and doesn't ship them for a year after it releases them.  The data sheet is out on the product and you try to buy it; lead-time is about 50 weeks.  We have another division that sells product before it releases it, because that is the nature of the business it is in.  So how do you define time-to-market?  Even once you define it, how do you get down to the level, … which is what we will learn about tomorrow when we talk about project management … how do we get down to the level of controlling that process?  When you think about project management, part of project management is performance measurement.  If we don't hold people accountable for the dates in the project management system, then it is worthless.

[End of presentation of Analog Devices on Performance Measurements]

APPENDIX

[The following is an edited version of the presentation that I gave on January 17, 1989 at the “Academic/Practitioners Operations Management Workshop” sponsored by APICS and Babson College.  The presentation was audio taped, but the tape quality was too poor to get a verbatim transcription.  Prof. Rao, who chaired the workshop, produced this summary based on the overheads, audio tape and other published sources.  This summary, along with those of the other presenters, was published in late 1989.]

 

PERFORMANCE MEASUREMENTS[1]

by

Arthur M. Schneiderman

Analog Devices, Inc.

 

In this presentation I am going to talk as a practitioner about performance measurements at Analog Devices. Let me first take a few minutes to tell you what Analog Devices is and what we do.

 

We are a company headquartered not very far from here, in Norwood. We are publicly held and traded on the New York Stock Exchange. Our 1988 sales were $440,000,000. Our business is split approximately equally between the U.S. and the rest of the world. We have approximately 5,400 employees worldwide.

 

We are in the semiconductor business. We serve what is called the data acquisition market. We are a fully integrated manufacturer: we design, manufacture, market and sell our own parts on a worldwide basis. Those parts are called monolithic integrated circuits. Many of you may be familiar with their digital counterparts--memories, microprocessors, etc. Our products are much the same. We also make other components: hybrid ICs and assembled products, and we have a small effort in the board-level sub-systems area.

 

I said that we are worldwide in the distribution of our revenues. We do significant business in Japan--about 17% of our revenues come from there. Our market share in Japan is growing, and the linear integrated circuit business, where we are the dominant supplier, is one of the few businesses where the U.S. has a positive balance of trade with Japan. Part of our success relates to the fact that we have made substantial investments in R&D. In fact, in 1987, 15% of our revenues were allocated to R&D. Since the U.S. Government defines a high-tech company as one that spends more than 5% of its revenues on R&D, we are a high, high-tech company. Our plans for the future include growth over the next five years in the 20-25% range. We expect a significant operating profit (17%), a profit after tax of 9.4%, and a healthy return on capital (15%). So up to this point in time Analog Devices has not suffered the same fate as other IC manufacturers while it has participated in the world market. It is our objective to make sure that we avoid that pain as we move into the future.

 

As a start, Analog Devices has a mission statement which is made known to nearly every employee of the company. It is given to everyone when they first arrive at Analog. It was prepared in the mid 70's and has been, with minor modifications, the corporate objective for a very significant period of time. What is unique, when you think of the time frame, is that the corporate objective always recognized that the stock market is not the sole constituent of the company. Ray Stata, who is co-founder of the company, president and CEO, and a true visionary by any definition of the word, recognized in the mid 70's that successful companies could not last if they had a one-dimensional view of their purpose. And so he very clearly identified, in terms that have not changed much over time, the broad mission and broad objectives for Analog Devices with respect to three major constituencies: our stockholders, our customers, and our employees. In meeting the needs of these constituencies, we feel very strongly, there are significant overlaps. These are what I call Analog Devices' business objectives. It is our view that we cannot satisfy the needs of any of the constituencies I mentioned without meeting our business objectives. By meeting our business objectives, we create value that the stock market recognizes in terms of our stock price, we create opportunities for our employees with respect to their individual careers, and we generate funds that allow us to invest 15% of our revenues back into the business and bring new products to our customers to better meet their needs. These business objectives are a kind of glue that bonds these constituencies together for Analog Devices. When we think about performance measurement, we think of directions for the company according to these business objectives. We recognize these are necessary, but not necessarily sufficient, to satisfy each of our constituencies.

 

One of the ways we think about what the right questions are to ask is by looking back to quote from Kipling: "I keep six honest serving men. They taught me all I knew. Their names are What, and Why and When and How and Where and Who". The Japanese too have picked up on this and you might come across their expression: "Five W's and an H". Very often in a meeting in Japan, someone brings up "Five W's and an H" as a kind of code term to bring people back to focus on relevant issues.

 

At Analog Devices we use these to guide the quality improvement process (QIP). Some people use QIP as an acronym for quality improvement program. We avoid the word program because it usually implies a beginning and an end. We believe QIP is a process that will continue forever.

 

The first question is why: why do we want to make measurements? We make measurements in order to quantify the things important to our business success. Once we have identified those things, there may be several broad business objectives that are not yet quantifiable, so we come to the second question: what must be measured? We need to develop a complete set of surrogate metrics; when these improve, we will be moving in the direction of our objectives. It is not at all an easy task.

 

Once the keys to our business and the ways of measuring them are identified, we answer the third question, when? At Analog we use a planning tool called "improvement curves". Many of our goals are set around the rate of improvement--something called the half-life. This is the time taken to reduce an undesirable effect by 50%. I will discuss this in more detail later on. Next, we identify who is responsible and accountable for meeting these very aggressive goals. We make sure that there is an individual or a group of individuals that owns the measurement. These people have the control and the ability to improve what is being measured. In addition, they must be given the resources to achieve that goal. Next, where? We design a formal tracking system that makes sure that the goals and results are examined on a quarterly basis and, annually, in terms of our five-year strategic plan.

 

Finally, we provide a feedback process to make this dynamic. Continual improvement is encouraged by understanding the variance between current performance and goal, understanding the root causes of that variance, and having a plan defining the corrective actions to be taken to close that gap. In addition, people are incented and rewarded in terms of these performance measurements in the same way they have historically been incented and rewarded with respect to financial performance. The whole area of performance measurements at Analog Devices is managed by a group called the Corporate QIP Council. This group includes the president of our company, several of our vice-presidents, general managers, and our director of corporate training and development. The group deals with five issues including:

 

-          How do we organize on a corporate-wide basis to improve quality within the company?

 

-          How do we identify our goals and deploy them downward within the organization?

  

This is the subject we are discussing today. Corporate QIP efforts follow the methodology developed by Juran1. In addition, we have extensive monitoring systems.  Performance measurements are made available to everyone who needs them within the corporation.

 

Exhibit 1 shows Analog Devices' QIP goals. They look deceptively simple. But it took us two years of very extensive effort by 200 people within the corporation to arrive at the consensus that this was the right view of Analog Devices as we would wish it to be in the future. Currently we are leaders in the markets we compete in, being, on average, 2 1/2 times the size of our nearest competitor. Historically, we are a growth company, and we want to continue to grow at twice the growth rate of the markets we dominate, which is about ten percent a year. This means we need to identify new market and product opportunities in a way that maintains our level of profitability.

 

The single performance measurement we look at is to be rated number one by our customers in total value delivered. The choice of those words is very significant. Over the years Analog has been known as the supplier of the highest-performance integrated circuits in the industry. Our deliveries were not always on time and our product was not always reliable, but when it did work it outperformed anything else in the market. We have come to recognize that the markets are changing. Product performance is not the only issue. Today customers decide to buy from us or from somebody else based on something called total value. It is our objective to be in the position where, at some point in the future, we can show people data that says, "our customers, measuring our performance relative to our competitors, rate us number one." If our customers believe that we are the best supplier, then we cannot help but meet our business objectives. Today these customers tell us they want us to continue providing the right products, but also to provide those products with low defect levels, on time, and within their leadtime requirements. In order to achieve this, Analog determined that we need to improve our time to market, reduce our process ppm (the defect level within the manufacturing process), reduce our manufacturing cycle time, and increase our yields.

 

The corporate QIP council has set objectives with respect to those performance measurements for the year 1992. As a starting point, consider where Analog stood on both external and internal measurement in 1987:

-          On-time delivery for our customers was about 85%. That was a significant improvement over the previous two years, when it was more like 60%.

-          Defect levels were 500 ppm which was down from 5,000 ppm. This had been achieved by adding a significant amount of inspection.

-          Our leadtimes were 10 weeks.

-          Our manufacturing cycle time was 15 weeks which had come down from 26 weeks in mid 1986.

-          Our yields were only 20%.

-          The time to market on a significant new product line was three years.

Exhibit 2 shows these metrics and their status in 1987. The next column shows the half-life in months. A nine for the outgoing defect level indicates we expect to reduce the defect level by 50% every nine months. Compounding that to 1992 and adjusting for realism, we arrived at the 1992 goals. Notice the manufacturing cycle time of four to five weeks and a leadtime of less than three weeks. This implies inventory. It is the view of our technical people that four to five weeks is the theoretical limit of the time required to manufacture the products. So we will need to provide for a strategic inventory to meet our customers' leadtime requirements. That will not be finished goods but in-process inventory. We will do no outgoing inspection of what we ship to our customers.
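The compounding arithmetic behind those goals can be sketched as follows. This is my illustration of the half-life calculation described above, not Analog's actual planning tool; the function name is mine.

```python
# Sketch of half-life compounding: a metric with half-life h halves every h
# months, so level(t) = level_0 * 0.5 ** (months / h).

def project(level_0: float, half_life_months: float, months: float) -> float:
    """Project a 'bad' metric (defects, lateness, cycle time) forward in time."""
    return level_0 * 0.5 ** (months / half_life_months)

# 1987 outgoing defect level of 500 ppm with a nine-month half-life,
# compounded over the 60 months to 1992 (before any "adjusting for realism"):
ppm_1992 = project(500, 9, 60)
print(round(ppm_1992, 1))  # about 4.9 ppm
```

The same function projects any of the Exhibit 2 metrics; the "adjusting for realism" step is a judgment call layered on top of this raw compounding.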

 

I will not show you costs but will say that, as a company, we have not reached a stage where we are abandoning costs as a useful piece of information. We can still learn by understanding our costs and managing them, particularly in today's competitive world. Most companies have some sort of financial tracking system and a set of financial goals. In their hierarchy of performance measures, financial measurement is usually considered superior to any other measurements that are put in place. At Analog we set up a parallel system in which we have financial measurements but also quality-related goals and measurements. There is a growing sense that it is the quality-related goals and measurements that should be the focus of our attention. As a company we are beginning to believe that meeting the quality goals will result in meeting the financial goals. Exhibit 3 shows the QIP goals and the financial goals. We are now beginning to collect data on these in a consistent way across the corporation. I should also note that we have supplemented our traditional financial measurements with a revenue model. A company like Analog grows in direct proportion to new product introductions. These products have very similar revenue life cycles. So the revenue model looks at the number of new products introduced, the expectations for those products with respect to peak revenue, the time to reach that peak, and projections of when the revenue will decay. Fortunately our products have long life cycles, partly because we are heavily involved in the military and avionics businesses. Our products take ten years to reach peak revenue and decline over a fifteen-year period. So our products have a twenty-five-year life cycle. The bottom box on the right-hand side under financial goals shows that we track our new products, the bookings, and the number of introductions in a period of time.

 

The QIP goals are not easy to establish. For example, consider on-time delivery. It would seem to be pretty simple--either it is shipped on time or it is not. It turns out to be much more complex. Recall that the measurements are intended as surrogates for our objective, which is to meet our customers' needs with respect to delivery. Just aiming at on-time delivery is not enough, because you have to ask questions like: What if I miss? How late am I when I am late? What if I'm using very long leadtimes? If the customer wants it in two weeks, and I tell him he can have it in twenty, and I get it to him in twenty weeks, should we be satisfied with that? So on-time represents a detailed set of measurements that look at whether we shipped when we promised our customer that we would ship. If a line was not shipped on time, we establish who was responsible for the late shipment. This could be one of four groups: the factory, which didn't get the product made in time; our credit people, who delayed product delivery because of credit-related issues; our warehouse department; or our customer. For example, our customer may call around Christmas or Thanksgiving to say they are going to shut down for a week and the product shipment should be delayed. We also look very carefully at leadtimes. We examine what our customers want, what we are quoting, and the difference between the two.
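The attribution scheme described above can be sketched as a simple tally over shipment records. This is a hypothetical illustration: the field names, group names, and record layout are mine, not Analog's system.

```python
# Hypothetical sketch: each late order line is charged to exactly one of the
# four responsible groups named in the talk.
from collections import Counter

RESPONSIBLE_GROUPS = {"factory", "credit", "warehouse", "customer"}

def attribute_late_lines(shipments):
    """Count late order lines by the group responsible for the delay."""
    tally = Counter()
    for s in shipments:
        if s["shipped_on_time"]:
            continue  # on-time lines carry no attribution
        group = s["late_reason"]
        assert group in RESPONSIBLE_GROUPS, f"unknown group: {group}"
        tally[group] += 1
    return tally

shipments = [
    {"shipped_on_time": True,  "late_reason": None},
    {"shipped_on_time": False, "late_reason": "factory"},
    {"shipped_on_time": False, "late_reason": "customer"},  # e.g. holiday shutdown
    {"shipped_on_time": False, "late_reason": "factory"},
]
print(attribute_late_lines(shipments))  # Counter({'factory': 2, 'customer': 1})
```

Forcing every late line into exactly one bucket is what makes the measurement actionable: each count has an owner who can be asked about root causes.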

 

Above the dotted line, the measurements you see are manufacturing performance measurements. But there is more to a company than manufacturing. Consider, for example, the measure of turnover. This is turnover within the direct labor force and within our indirect labor force. The definition is the same for each of the seven divisions of Analog Devices located around the world.

 

An important issue relates to the timeliness of the data. Measurements in support of the QIP goals are available within three days of the end of the month. The financial statements take a little longer. We have eleven wholly owned affiliates in Europe and Asia that sell our products. They have their own measurement systems. Financial data from the affiliates can take up to four or five weeks to arrive. We are focusing on ways to obtain this data within one week of the close of the monthly reporting period. Data for QIP measurements are published daily, producing a lot of paper. Shortly it will be available on line. At Analog we are able to collect all of this data within three days.

 

I cannot overstate the importance of providing a well thought out vehicle for showing data. The charts I will show you look very complicated, but they are easily understood by the people effecting the improvements. The chart is currently available on line and in an executive information system. It was very difficult to get onto the system after the end of the month because that is the time everyone runs to their terminals to see what they have done and what other people have done. Exhibit 4 shows the data for on-time customer service. Across the top are the manufacturing divisions, and the last column is the aggregate for all divisions. On the vertical scale we record, on a semi-logarithmic scale, the percent of lines that did not ship by the date we promised the customer. It shows that the division noted as ADS shipped approximately 33% of its lines late in the first quarter of '86. Over time this declined to 8% by the fourth quarter of '88. Notice that the points bounce around the line drawn here.

 

When is it statistically significant? Fortunately statistical quality control allows us to know when the departure from the line is significant. The next exhibit shows you two lines on either side of the central line representing control limits. The system looks at the last three months. If the performance is above the upper control limit the circle is replaced with a red "+"; if it is below the lower control limit it is shown as a green "+".

Three days after the end of the month, when the data appears, the chief operating officer will call the person responsible for the data to ask what has happened. "Lots of things" is the wrong answer. It has to be one or two major things that happened. Since people get daily reports showing the month to date, they know the question will be coming. Similarly, a question is asked if a green "+" appears. This implies something different was tried and it worked. By identifying it, the successful actions can be shared with other people.

 

The circles are converted to +'s only for the last three months. If earlier data were also converted to +'s, people would feel they were being held accountable for mistakes made six months earlier. So earlier points on the chart are shown as circles even if they fall outside the control limits.
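The flagging rule described above can be sketched in a few lines. This is my own illustration of the logic, assuming fixed control limits for simplicity; the function and symbol names are mine, not those of Analog's charting system.

```python
# Illustrative sketch: flag only the trailing three months of a monthly series
# against control limits; earlier excursions stay plain circles.

def flag_points(values, ucl, lcl, window=3):
    """Return one symbol per point: 'o' normally, 'R' (red +) above the upper
    control limit or 'G' (green +) below the lower limit, but only within the
    trailing window of months."""
    symbols = []
    for i, v in enumerate(values):
        recent = i >= len(values) - window
        if recent and v > ucl:
            symbols.append('R')  # out of control high: expect a call asking why
        elif recent and v < lcl:
            symbols.append('G')  # better than expected: share what worked
        else:
            symbols.append('o')  # everything else stays a plain circle
    return symbols

# Percent of lines shipped late, improving over eight quarters:
late_pct = [33, 30, 25, 28, 20, 14, 9, 6]
print(flag_points(late_pct, ucl=27, lcl=8))
# ['o', 'o', 'o', 'o', 'o', 'o', 'o', 'G']
```

Note that the early values of 33, 30, and 28 exceed the upper limit but are left as circles, exactly so that no one is held accountable for excursions older than the window.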

 

At the bottom of the Performance Measurement Chart the half-life in months is noted. For example, ADS has a half-life of eight months. This means that the percent of lines late should decline by 50% every eight months. My estimate for typical improvement in on-time delivery is a half-life of nine months. That is based on looking at nearly one hundred improvement projects within companies other than Analog Devices. As a consultant for six years, I had the opportunity to work with companies all over the world in the area of quality improvement. A lot of the data I collected pointed in the direction of there being normative patterns with respect to the rate of learning. With these kinds of problems, a 50% improvement every nine months represents a good target. So a division like MED, with a half-life of 60-plus months, is having serious problems. One cause can be a change in general managers. Another may be the consolidation of two divisions. If a division does very well it gets an extra bonus or division "adder". Two months ago one division of 600 people was presented with a $750,000 bonus check.
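One way a half-life like the ADS eight months could be extracted from chart data is a straight-line fit to the logarithm of the metric, since constant-half-life decay is linear on the semi-logarithmic scale the charts use. This is my illustration of that idea, not the method actually used at Analog.

```python
# Sketch: estimate a half-life by least-squares fitting log2(value) vs. month;
# the slope is the (negative) halving rate, so half-life = -1 / slope.
import math

def estimate_half_life(months, values):
    """Fit log2(value) = a + slope * month and return -1 / slope in months."""
    logs = [math.log2(v) for v in values]
    n = len(months)
    mx = sum(months) / n
    my = sum(logs) / n
    slope = (sum((m - mx) * (y - my) for m, y in zip(months, logs))
             / sum((m - mx) ** 2 for m in months))
    return -1.0 / slope

# A metric that halves every 8 months, sampled quarterly:
months = [0, 3, 6, 9, 12]
values = [100 * 0.5 ** (m / 8) for m in months]
print(round(estimate_half_life(months, values), 1))  # 8.0
```

On real, noisy monthly data the fitted slope smooths over the bounce around the trend line, which is why the half-life is a more robust summary than any single month-to-month change.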

 

This is called a corporate performance audit. Within Analog, it is called a "score card." We have had to distinguish performance measurements by business. Some of our businesses involve IC manufacturing; others involve assembling products. These represent fundamental differences in how products are made. Aggregating their performance measures into a single measurement loses the meaning. So, at the highest level of aggregation, integrated circuits and assembled products are differentiated.

 

For the benchmark planning process, we start with where we ended in 1988. Then for 1989 a new level of performance is negotiated. Using rates of improvement, we come up with individual quarterly numbers, and these are built into the computer. At the end of each quarter the actuals are printed out. I receive the print-out and sit there with two pens, a red and a green. A red circle identifies something that went "bad" and a green circle identifies something that went "good". At a quarterly general managers meeting, each general manager stands up for 10 minutes to explain their red circles and green circles to their peers. This promotes internal competition. In fact, when they do a competitive analysis, most divisions find that a sister division is their most serious competitor. This is because we often have several different products with similar function and the same application.

 

What do you think is the reaction of the general managers? You would probably think they hate it. In fact, they love it. Now they say they are being held accountable. There is no ambiguity: all definitions are worked out, and they are all linked in a way aimed at meeting the corporate objectives. They realize that by meeting their objectives they will make the most progress in delivering value to the customer.

 

Conclusion

 

This process has been successful along several dimensions of improvement. For example, consider factory cycle times. They are now down to 45 days and are projected to be down to 35 days by the end of 1989. That is world class. If you go to a Japanese integrated circuit manufacturer, 45 days is a pretty good number for them. In my experience, if you have a well-thought-out set of performance measures, there is rarely a situation where you can improve one without improving another. The spillover factor is immense. What you discover as you try to improve on-time delivery are things that help you reduce your cycle time and help you improve your quality. Everything is linked together in a positive way. If you are really thinking through a process of finding root causes and taking corrective actions, all performance measures will improve.

 

Finally, we should note that the purpose of all this is one thing: to be rated number one by our customers. So how do we rate? Well, some of our customers actually do rank their suppliers. Other customers have not reached the level of sophistication in their purchasing systems to be able to tell us how we rank. Some customers question whether they should tell us how we rank relative to our competition. So, corporate-wide, everyone sends me the measurements customers make of our performance. Most of our major customers measure on-time delivery and our lot acceptance rate. We take these and average them. Several customers are in this database, and we add more every day. We publish a quarterly report summarizing these data. They show that our customers have seen our on-time delivery improve at the rate of 50% every ten months--this is consistent with our own internal measurements. There is a lag in obtaining this data, so it is generally presented one or more quarters after the measurements. We find a great deal of consistency between our measurements of on-time delivery and theirs.

 

I would like to be able someday to stand up in front of you and say that 90% of our customers rate us as their number one supplier by whatever criteria they choose to use. To me, that is the ultimate performance measurement in terms of one's business success. What I have shown you might look deceptively simple. But we have learned over the last three years that the whole area of performance measurement is far more complicated than it appears on the surface. You have to understand what you are measuring and make sure that it is really linked to your business objectives. You have to get consensus within the organization on definitions if you are to measure the right things. When those things improve, business improves. You have to make sure the systems are timely. You have to make sure you have visually informative systems so people can look at the data in an efficient way and figure out what they are expected to do with it. All of this takes a great deal of time, effort, and arm twisting, with occasional intervention from the chief executive officer. I think it all pays off. It pays off when the customer says "you are the most improved vendor" or when your sales engineer says "you know, I am spending more of my time selling and less of my time expediting today." Those are the true ultimate tests of any performance measurement system.

 


Exhibit 1

 


Exhibit 2

 


Exhibit 3

 

 


Exhibit 4

 

 


Exhibit 5

[1] Edited by Ashok Rao, Babson College.

1  Juran, J. M., Planning for Quality, Juran Institute Inc., 1988.

 


©1999-2006, Arthur M. Schneiderman  All Rights Reserved

Last modified: August 13, 2006