Analog’s Path to TQM Planning
What I’d like to talk with you about this morning is our Quality Improvement Process at Analog Devices. You will hear me interchangeably refer to it as QIP, which stands for “Quality Improvement Process,” or Total Quality Management. To put it into context, I think it’s important that you understand who we are as a company and what we do. And so I’d like to share with you, for a few moments, a history of Analog Devices and the kinds of products that we manufacture.
It’s interesting that your last speaker was from Intel, and Intel is one of the leading manufacturers of digital integrated circuits. Analog Devices is the leading manufacturer of “analog” integrated circuits, and frankly we depend very much on one another. Our components tend to lie between their microprocessors and the real world, which fortunately is linear rather than digital.
When we talk about the Quality Improvement Process at Analog, there are three major dimensions to our efforts. The first is in the process of goal setting, and that’s really the major subject of our discussion this morning. I’ll talk a little bit about performance measurement because I will again say later on that no matter how thoughtful your goals are, if you don’t have a measurement system in place that tracks your ability to achieve and to move toward those goals, you probably won’t achieve them.
We connect our long-term goals to our performance measurement system with a scorecard, and I’d like to share with you that scorecard and show you how we use it in the management of the business.
The third element, problem solving, is something that I really have nothing novel or interesting to talk about. We use the PDCA cycle, the 7-steps, the 7-QC tools, the 7-Management tools; many of the things that I’m sure all of you are familiar with. I won’t talk very much about problem solving methodology, or about the individual projects and how we go about selecting those projects. I’d like to focus the discussion this morning on goals and performance measurement.
Analog Devices is headquartered not very far from here in Norwood, Massachusetts. It’s about 15 miles south of Boston. We are a publicly held company, traded on the New York Stock Exchange at, I’m afraid, an embarrassingly low price at the moment; but we’re a good bargain. Our 1990 sales were just under a half-billion dollars, of which half came from within the US and half from outside the US. So we are a mid-sized international company with a little over 5000 employees worldwide.
The products, as I mentioned earlier, that we make are principally linear and mixed signal (both linear and digital) integrated circuits. We also have a small amount of our business in assembled products and sub-systems.
Our customers use our products in precision measurement and control applications. They are principally from the industrial and instrumentation market, military and avionics suppliers, computers (we make components that go principally into large disc drives on mainframe computers), and 12% in other areas, a very rapidly growing portion of which is consumer products.
Finally, we are an integrated supplier. We design our own products; we manufacture them at eight locations worldwide: US, Europe and the Far East, and we have our own sales force and distribution system. We sell our products, again, worldwide through over 100 sales locations.
So that’s the company that we are.
Analog has been very fortunate in that it has been a very successful company almost from its founding in 1965. We grew very rapidly over the fifteen-year period between 1970 and 1985 and have recently, over the last five years, come into a period of significantly slower growth, part of which is from internal causes and part of which is from external causes.
As I’m sure many of you know, the motivation for implementing TQM or QIP in any organization comes from some sense of urgency around an environmental change that makes the company a little bit uncomfortable. Analog’s environment is changing in four very important ways.
First of all, our traditional customers, companies like IBM and Hewlett-Packard are themselves experiencing broad worldwide competition. The harder they get squeezed in the market place, the harder they squeeze their suppliers. So we have an immense amount of pressure from our traditional customers.
In addition to that, in order for Analog to return to its historic growth levels, we need to find new customers and new product areas, and as we look at new customers and new product areas we find that they have very different purchase criteria than the customers we’re used to doing business with. So it becomes essential for us to understand how their needs differ from the needs of our traditional customers and how we can respond to them.
As Analog looks back in time, our competitors have tended to be companies much smaller than us; companies that I’m sure very few of you have heard of. But the nature of competition is changing and, as we look into the future, our competitors are large companies, companies that you’ve all heard of: the Japanese semiconductor manufacturers, Texas Instruments, Motorola. These very large companies differ from our traditional competitors in that their reputations and performance are very high in quality, rather than low in quality, which was more characteristic of our traditional competitors.
Our products, which used to be 100% standard products (we sold them through a catalog), are now becoming custom products: things called ASICs (Application-Specific Integrated Circuits). So it becomes very important for us to be “market-in” rather than “product-out” focused in what we’re doing.
In addition, the technology is allowing our integrated circuits to actually behave as systems. So things that you used to buy as a printed circuit board with a lot of components on it are now being integrated into a single chip. And the technology that’s allowing that to happen is technology which probably doesn’t mean very much to you: the detail level on these integrated circuits is shrinking from two microns down to half a micron. In fact, at half a micron the detail feature sizes are the same as the wavelength of light, which is very, very small.
Now what’s happening with that transition is that the capital costs of putting in the new technology are rising very rapidly. So we’re also in a situation in which we have very rapid increases in the cost of capital equipment in order to run our business. Those four areas of change have been the driving force at Analog for the introduction of Total Quality Management.
Let me turn now to the question of goals. As Larry mentioned to you earlier, I have had a role over the last several years in the area of strategic planning or business planning, as well as the implementation of Total Quality Management. At Analog, we went through an 18 month, very extensive strategic planning process back in 1986 and 1987, and developed a very detailed plan that would allow us to maintain our worldwide market leadership position, to return to growth in the range of 20-30% a year, and to maintain our historic levels of profitability.
So we had a plan, and the plan was at a sufficient level of detail to convince us that it would be achievable, but achievable only if we could do one thing, and that is to “be rated #1 by our customers.” Now in the past it was very easy for us to determine how we were rated by our customers. All we did was compare our catalog of parts against the catalogs of our competitors. And when we did that we found that our parts were special. Our parts had higher performance than our competitors’ parts. They came in smaller sizes. They used less power. And so we tended to differentiate ourselves from the competition on the basis of the specifications of our products. It was very easy for us to know that we were #1.
What we began to learn is that our customers were changing their purchase criteria and they weren’t just looking at the specifications of the product when making their purchase decision. They were looking at a lot of other things. And whether they knew it or not, they were actually calculating the “value” of the products that they were buying, not just the performance of those products.
Some of them knew it: companies like TI, for example. They knew what they were looking for and they actually calculated something called the “cost of ownership,” which some of you may have heard of. What they did is add to the price that you quoted them the costs of quality that they incurred in dealing with defective product that came from you and your competition. They added in the cost of carrying inventory, which they needed as a buffer against unpredictable delivery performance. They added in warranty costs. They added in the costs of the inconvenience of doing business with their suppliers. And they actually created a factor called the “supplier performance index,” which they used as a multiplier on the quoted price in order to come up with a better estimate of the value of that product to them. And they have actually awarded business not on the basis of bid price, but on the basis of cost of ownership.
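As a rough illustration of that arithmetic, the supplier performance index can be sketched as a multiplier on the quoted price. The cost categories and figures below are hypothetical assumptions, not TI's actual formula:

```python
# Hypothetical sketch of a "cost of ownership" calculation as described
# above. The cost categories and figures are illustrative assumptions.

def cost_of_ownership(quoted_price, quality_cost, inventory_cost,
                      warranty_cost, inconvenience_cost):
    """SPI-adjusted price: quoted price times the supplier performance index."""
    total = (quoted_price + quality_cost + inventory_cost
             + warranty_cost + inconvenience_cost)
    spi = total / quoted_price        # supplier performance index, >= 1.0
    return quoted_price * spi         # numerically equal to `total`

# Supplier B quotes a lower price but imposes higher hidden costs.
supplier_a = cost_of_ownership(100.0, 2.0, 3.0, 1.0, 1.0)   # 107.0
supplier_b = cost_of_ownership(95.0, 8.0, 6.0, 4.0, 3.0)    # 116.0
```

On a bid-price basis supplier B wins (95 vs. 100); on a cost-of-ownership basis supplier A does, which is the point of the index.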
So we knew that this concept of “value” was really an underlying element in the purchase decision of our customers. Now as I said earlier, the criteria were really very simple. In the past it was product specifications, but through the eighties three things were added to that list. The whole issue of the quality of the products that we sold to our customers became very critical to them. They differentiated between three kinds of quality: the quality of the product when it arrived at their incoming inspection, the quality of the product through their manufacturing process, and the reliability of our products in their customers’ applications, which was reflected in warranty costs to them.
In addition, as more and more of them moved to just-in-time manufacturing, on-time delivery and leadtime became very critical elements in the costs of running their businesses. Price: Analog, like our customers, has been subject to very significant amounts of worldwide competition, and price, which historically was not a matter of interest to our customers, has become very critical in their decision-making process. And finally, the whole issue of responsiveness.
In talking with customers, which I do a significant portion of my time, I’ve asked them to add or subtract items from this list as a way of understanding where Analog needs to focus its quality improvement efforts. The last thing that you see down there, responsiveness, is the one that is most mentioned by our customers as being the thing that will differentiate one supplier from the other through the nineties.
What they mean by responsiveness is quite general. It means how long does it take us to get a part to them? It means how long does it take us to answer a question, if they have a question? How long does it take us to quote a price? How long does it take us to deal with a failure analysis? So the whole issue of responsiveness, the timeliness of our response to their needs, is probably going to be the most important element on that list as we move through the nineties.
Within Analog, we needed to go through a process of identifying what things we had to work on in order to drive each of these external levers, as we call them. We identified time-to-market, process PPM (which is the inherent capability of the manufacturing process and the design to provide high-quality product), manufacturing cycle time, and yield. So those have been the four major focuses of our quality improvement efforts within Analog.
If you talk to people, for example, that are working on a yield QIP team and ask them “Why are you working on improving yield?” you will generally get an answer like the following: “Yield improvement is important because by improving our yields we first of all reduce our costs and therefore are able to reduce our prices to our customers. But also, we’re able to reduce yield variability, improve our on time delivery performance, improve the quality of our product, and by doing that improve the value of what we deliver to our customers. We move into a position where we are more likely to be the supplier of choice and therefore be in the best position to meet our business objectives.” So people working on specific projects within Analog Devices understand the audit trail that goes back to the customer, but beyond the customer back to their own self-interests as employees of the company in terms of its future success.
I’d like to now talk more specifically about the internal levers and the external levers and how we go about measuring them and setting goals. But to do that I have to first introduce to you the concept of a “half-life.”
It’s interesting, I met someone earlier today who reminded me that we had both gone on a trip to Japan back in 1984 that was sponsored by a Japanese consulting company, to study Japanese quality. It was on that trip that we visited Yokogawa-Hewlett-Packard.
I think this slide may be familiar to many of you. It’s shown by everyone at Hewlett-Packard, including John Young, as an example of quality improvement efforts within Hewlett-Packard. This happens to be dip soldering process defects, and what they graphed here is the failure rate, first expressed in percent, over time. They got down to the point where, as you can see down at the bottom of this, they couldn’t see any changes any more. So they had to come up with a second graph that changed the vertical axis to parts-per-million. So they were able to magnify the bottom portion of this scale. In fact I was recently back at Yokogawa-Hewlett-Packard and they’ve added a third one on this, which is now expressed in parts-per-billion.
In any case, any of you that have traveled to Japan know that one of the most strenuous parts of that trip is the return flight. I had this in my briefcase on that return flight in 1984 from Japan and in order to pass the time I decided, being an engineer, that I could fit both of these graphs on a single piece of graph paper if I used a logarithmic scale, rather than a linear scale. That turned out to be a great in-flight project.
The problem that I had was that I didn’t have any semi-log graph paper, but I did have a ruler, and I did have a calculator. So the first two hours were spent in making semi-log graph paper, and the next two hours were spent in graphing these data, transposing these data points onto that semi-log paper.
I’ve got to admit that during that whole time the martinis were flowing quite freely, so when I finally looked at what I had gotten, and it looked like this, I didn’t know whether it was the martinis or the data that was trying to tell me something. But later on, when I got back and the martinis had worn off, it turned out that this was real data. That same data that you saw on the previous slide, graphed on semi-log paper looked like this:
And again, because of my engineering background, I realized that there was something very interesting going on during this first three-year period of time. That was that the failures in this dip soldering process were declining at a constant percentage every 3.6 months: 50% reduction every 3.6 months. So if you go to this first point back here, it’s about .4% defective and if you move 3.6 months out in time, it was about half of that, .2%. Another 3.6 months, it was half of that, .1%, another 3.6 months, again, half of that. And that occurred over a period of three years and over nearly three orders of magnitude of improvement. So I said to myself, “this is trying to tell me something; maybe I ought to look more carefully.”
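The pattern described above is exponential decay with a constant half-life. A quick sketch, using the dip-soldering numbers from the talk:

```python
# Constant-half-life decline: the defect level halves every h months,
# d(t) = d0 * 0.5 ** (t / h). Starting point and half-life are the
# dip-soldering figures quoted above (0.4% defective, 3.6 months).

def defect_level(d0, half_life, t):
    """Defect level after t months, given starting level d0 and half-life h."""
    return d0 * 0.5 ** (t / half_life)

levels = [defect_level(0.4, 3.6, t) for t in (0.0, 3.6, 7.2, 10.8)]
# -> 0.4, 0.2, 0.1, 0.05: each 3.6 months halves the previous level
```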
Just to characterize the filter that I applied to the data that I’m about to show: you all recognize the PDCA or the Deming or Shewhart cycle, and the 5W’s and an H, and one version of the 7-steps. What I did is I collected every example I could; examples from conferences, examples from my own work (at that time I was a consultant), anything that people could give me as examples of the results that they had achieved while applying the PDCA cycle. And I took those data and calculated the half-life. You have in your presentation two pages of data. These data represent about 70 cases of examples of application of the PDCA cycle, the 7-steps, to a variety of quality improvement efforts.
The first column is the calculated half-life in months. The second column is how many factors of two improvement did they achieve: how many “halvings” of the defect. The third is a statistical measure of how good this model fits the data: something called r-squared.
If r-squared is one, the fit is perfect. If r-squared is zero, there’s no correlation between the model and the data. If you look at the bottom of the second page there, you will see that for the seventy-odd projects here, they had an average half-life of 10.9 months, 2.8 cycles of improvement or “factors of two” of improvement, that’s a factor of seven. So on average these projects reduced the defect that they were working on by a factor of seven. And the r-squared was .77. .77 is well above the correlation value that is used in medical research in order to reach some of the conclusions that you often hear published in the newspapers; of the relationship of smoking to cancer, for example. So it turned out that this was a very strong correlation.
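The half-life and r-squared come from fitting a straight line to the logarithm of the defect level over time. A minimal sketch of that calculation; the sample series below is constructed with an exact 3.6-month half-life, it is not data from the talk's tables:

```python
import math

# Least-squares fit of ln(defect level) vs. time: the slope gives the
# half-life, and r-squared measures how well the model fits the data.

def half_life_fit(months, defects):
    n = len(months)
    logs = [math.log(d) for d in defects]
    mx = sum(months) / n
    my = sum(logs) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(months, logs))
    sxx = sum((x - mx) ** 2 for x in months)
    syy = sum((y - my) ** 2 for y in logs)
    slope = sxy / sxx                   # log decline per month (negative)
    half_life = -math.log(2) / slope    # months per factor-of-two reduction
    r_squared = sxy * sxy / (sxx * syy)
    return half_life, r_squared

months = [0.0, 3.6, 7.2, 10.8, 14.4]
defects = [0.4, 0.2, 0.1, 0.05, 0.025]     # percent defective
h, r2 = half_life_fit(months, defects)     # h ~ 3.6 months, r2 ~ 1.0
```

With noisy real-world data the r-squared falls below 1; the 0.77 average cited above says the log-linear model fit most of the seventy projects quite well.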
The next step was, without the assistance of martinis, to ask the question what were these data trying to tell me? I’ve learned recently that that’s called a right-brain activity. Up to this point it had been a left-brain activity. And so I spent a lot of time staring at that data, staring at the kinds of projects that were described in the data and the kinds of results that were achieved, and created this matrix here:
The two dimensions that seemed to characterize the improvement projects were, first of all, variations in the organizational complexity of the problem. So, for example, a quality improvement effort within a given function in an organization, say an accounting department … reducing clerical errors within an accounting department … was relatively simple in terms of the organizational structure: a boss and his or her direct reports. At the other extreme, there were problems that involved working with suppliers, or working with customers: people from other companies, people often within your own company from different functions; so there was a very different amount of organizational complexity associated with those problems. So there seemed to be a range there.
There also seemed to be a range in technical complexity of the problems. Some of them were very difficult problems; some of them fell in the category that I would call “no-brainers,” very easy to fix those problems. And I generated as a hypothesis the table that you see here, which basically said if the problem is of very low technical complexity and very low organizational complexity, the PDCA cycle, the 7-steps, done properly should yield an improvement rate of 50% reduction every month in the defect that you were working on. Whereas, if it was at the other extreme, something that involved working with suppliers, working with many functions within your organization on a problem that everybody agreed was very complex to deal with, then 50% improvement every 22 months; the other end of the spectrum.
What appeared from these data and from this matrix was that in a sense we could think of this half-life as being a speedometer for the PDCA cycle.
It tells you not only how fast you’re going around the PDCA cycle, but how much improvement you’re getting each time you go around it. And in fact, if you think about the issue of results focus versus process focus, the half-life bridges the gap between those two. In fact the half-life, although it’s a number so it looks like it’s a result, is really a measure of the process; it’s a measure of how efficiently you’re turning the PDCA cycle.
Back in 1986 and 87, as part of the planning process that I described to you earlier, we took the things that we had identified as being of importance to our customers and the things that we identified as being the internal drivers for improvement in those areas, and first asked the question “where are we?” Now it turned out back in 1986, if you asked anyone what our yield was they didn’t know. And if you asked anyone what our delivery performance was they’d say “very good.” We had no measurement system; we didn’t know at all.
So in 1986 we established a measurement system in each of these particular areas. I’m going to spend a little more time in a moment talking about on-time delivery.
We used the half-life concept in order to answer the question “If we took each of these areas and put teams together to work on them, and those teams applied the PDCA cycle, used the 7-steps, where could they be in 1992?” We also asked our customers “where do we need to be in 1992 in each of these areas to have a shot at being your #1 supplier?” Now they obviously said, “well we don’t really know.” And our answer to them was “you know better than we know and we need to have a target, so what do you think is the right number?” In addition to that we did some benchmarking of where we thought competition was today and where we thought competition was likely to be in 1992.
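Mechanically, the half-life answer to “where could we be in 1992?” is the decay formula run forward. A sketch with made-up numbers, not Analog’s actual figures:

```python
# Projecting a 1992 level from a 1987 starting point, assuming a team
# sustains a given half-life. Starting level and half-life below are
# illustrative assumptions, not Analog's actual figures.

def projected_level(start_level, half_life, months_out):
    """Defect level after months_out, halving every half_life months."""
    return start_level * 0.5 ** (months_out / half_life)

# e.g. 30% late shipments in early 1987, a 12-month half-life, and
# 60 months to the end of 1992: five halvings, 30 / 32 ~ 0.94% late.
goal_check = projected_level(30.0, 12.0, 60.0)
```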
By combining those three things: what our customers were looking for, where we thought the competition was going to be, and what we thought we were able to achieve by using the QIP process, the PDCA cycle, we arrived at the goals that you see on the right-hand side there: virtually 100% on-time delivery, quality levels that would require no outgoing inspection from us and no incoming inspection on the part of our customers, ship-to-use (one step beyond ship-to-stock), three-week leadtime, 4-5 week manufacturing cycle time…
Any of you that are involved in manufacturing recognize that what you’d really like is for the manufacturing cycle time to be shorter than the leadtime, so that basically you’re building to order rather than building to forecast. It turns out, unfortunately, that the laws of physics have a very prominent role in the manufacture of integrated circuits and a number of the process steps are time-dependent. It’s like asking a woman to have a baby in less than nine months; it takes a certain amount of time to do it. So we’ve had to develop a strategy based on an inventory point in our manufacturing system in order to achieve the kinds of leadtimes that we think customers need.
… Processes and designs capable of meeting our quality goals without inspection. Yields significantly higher than the yields that we have historically operated at; by the way, that is 20% yield, not 20% yield loss. Of every five potentially shippable products we start in this manufacturing process, four of them end up in a wastebasket; one of them ends up going to a customer. By the way, they literally are small enough to fit in a wastebasket; in one wastebasket we can put all of our yield loss. Time-to-market: three years. That’s the time from the initial application of resources to the development of a new product to the point at which that product is shippable to customers.
So we were able to establish seven goals for 1992 that we felt were very credible. We felt they were credible because the concept of the half-life said that we could get there if we applied the PDCA cycle. It was achievable. It was not out of reach, and it was not un-ambitious as a goal. But we also recognized that, human nature being what it is, if we just stopped at that point, it would be the third quarter of 1992 and we would still be at the level we were at in 1987, and everybody would say “we’ve got one more quarter.”
So we needed to put in something as an intermediate control point and we established the scorecard.
What you see on this is first of all a few financial measures (unfortunately we still have to look at our financial performance), and a whole bunch of QIP measures: on-time delivery performance, leadtime related measures (I’ll get back to those in a moment), a whole bunch of manufacturing metrics. And what we do each year as part of our annual planning process is we look at where we ended up the previous year. We use, again, the concept of the half-life and the goals for 1992, and I propose a set of quarterly goals for each of our divisions. They go into that column labeled “benchmark.” We go through a negotiation process in which I carry on a discussion with the general managers of our divisions in terms of why the goals are or are not achievable and whether they have the resources available to achieve those goals and we finally settle on a scorecard.
Each quarter the actual numbers are filled in and as part of our quarterly review process we have a meeting of the general managers and they go through their scorecards and present the actual to the plan. So they do the plan-do-check, this is the check part of things. In addition to that they have to do a diagnosis of what the problem was. What were the root causes of any unfavorable variances? What corrective action is being taken? Who’s it being taken by? When is it going to be completed? So we use the scorecard as our principal tool for managing the quality improvement process within the company.
Now again, we couldn’t do anything if we didn’t have a measurement system and as I said at the beginning we strongly feel that performance measures or metrics are also essential to a viable quality improvement process. Now we say that measurement is necessary, but I don’t want to imply that it’s sufficient; that just because you measure things doesn’t mean that they will improve. You need to train people. You need to provide resources. They need to learn how to do the PDCA cycle. You need to have management committed to the process of diagnosing or auditing the performance of teams that are working on quality improvement. So there are a lot of things that are needed in addition to measurement. But it is an essential part of what we do.
On the scorecard, the first set of measurements that we have with respect to the customer is on-time delivery or what we call “customer service metrics.” The reason that they go onto the scorecard is very simple. It would have been appropriate back in 1986 when we designed the scorecard, to go out and interview customers and find out what are the things that are important to them and what are the areas in which our performance does not meet their expectations. But actually it was simpler than that.
I went around and asked about twenty people within Analog Devices at various levels in the organization the following question: “The phone just rang. It’s a customer. The customer is very unhappy. What’s the problem?” Just a simple thing. Twenty out of twenty times the answer was “My shipment’s late and you won’t tell me when I’m going to get it.” So it became very, very clear to us that this Pareto analysis had one overwhelmingly dominant category; probably 80% of the problems that we were encountering with customers were delivery related. So we went about designing a measurement system for customer service.
Here you see the elements of that measurement system. The first thing that we looked at was what percent of the time did we ship late to customers? What percent of the time did we ship early to customers? One minus the sum of those two is the percent of time that we were on time: that we shipped within an acceptable window around our commitment to the customer.
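The late/early/on-time split can be sketched directly. The two-day acceptance window and the sample shipments below are assumptions for illustration, not Analog’s actual tolerance:

```python
from datetime import date, timedelta

# Classify each shipment against an acceptable window around the
# commitment date; on-time is whatever is neither late nor early.
WINDOW = timedelta(days=2)   # assumed tolerance, not Analog's actual window

def classify(ship_date, commit_date):
    if ship_date > commit_date + WINDOW:
        return "late"
    if ship_date < commit_date - WINDOW:
        return "early"
    return "on_time"

shipments = [
    (date(1990, 1, 10), date(1990, 1, 10)),   # exactly on commitment
    (date(1990, 1, 15), date(1990, 1, 10)),   # five days late
    (date(1990, 1, 5),  date(1990, 1, 10)),   # five days early
    (date(1990, 1, 11), date(1990, 1, 10)),   # inside the window: on time
]
counts = {"late": 0, "early": 0, "on_time": 0}
for ship, commit in shipments:
    counts[classify(ship, commit)] += 1
pct_on_time = 100.0 * counts["on_time"] / len(shipments)
```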
Now if we just stopped at that point, we’d have a problem, because as you went around and asked these twenty people or two hundred people, “what was the problem? What was the root cause of late shipments?” they would say “not me, it’s the other guy.” If you’d ask the people in the factory, they’d say “credit!” Remember, we have our own sales and distribution network worldwide so we don’t work through distributors, which means that we have a big credit department and the credit department determines whether Joe’s TV and Radio is going to, with 100% certainty, pay their bill if we ship them the products. So there was a whole issue of who was responsible. We had to move down to the next level of detail.
Initially, the responsibility metrics were called the “finger-pointing” metrics. Fortunately, the name got changed to “responsibility,” and the spirit behind it changed as well. The spirit now was that the portion of late shipments that is controllable by the factory is their job to fix. The portion that is controllable by the credit department is their job to fix. The portion controllable by the warehouse is their job to fix. So within each of these areas they not only had a measure of how well we were doing overall with respect to the customer, but also of what their own contribution was.
The next group of metrics was added to deal with the predictable capability of any general manager to figure out a way of gaming whatever measurement it is you come up with. And what I mean by that is making the numbers look good while doing things that are not in the best interest of the customer. One of the holes in our measurement system was that once the product was late to the customer, it was out of the system. We just measured “did you meet your commitment?” If not, you got dinged for it; if you did, then that’s fine.
That created the potential for a division to ship the product that was available to a customer whose order was due today, rather than to a customer whose order was due earlier and for which you’d already gotten dinged. If you gave it to the old customer, you got no credit. If you gave it to the new customer, you could prevent getting dinged. So we had to add lateness and earliness. We had to look at: when we shipped product late, how late was it? When we shipped product early, how early was it? If it’s still late, how late is it? And how big is the late backlog?
The final area is leadtime. We had initially thought that the appropriate metrics, and that’s what appears on the scorecard, were what our customers were requesting for leadtime and what we were coming back and quoting them as leadtime. In fact, one of the things we learned about the whole area of performance measurement is that measures are not fixed. As you go through the PDCA process you learn more about what it is you should be measuring, and you have to change it. In this particular case it became clear to us that the most important measures of leadtime were what you see here. The first is % CRDs met. What that means is: the customer calls up (CRD stands for Customer Request Date) and says “I want 100 pieces on January 15.” What percent of the time do we say “yes”? And that’s really the test of leadtime: what percent of the time do you say yes to a customer when they tell you “I want this quantity by this date”? And on those occasions when you can’t say yes, how much do you miss by? That’s what “excess leadtime” is.
Finally, we introduced our first measure of responsiveness and that was how long does it take us to get back to a customer and tell them when we can ship the product? They say “I want it January 15th”, how long does it take us before we tell them “you can’t have it until January 18th”?
Now let me show you how we’ve done in that one metric.
This chart looks a little bit complicated. It’s not quite as complicated as it seems. Each of the columns that you see here is a different division of Analog Devices. The last column is the corporate total. And if you look, for example, at this first column, it is our Analog Devices Semiconductor Division located in Wilmington, Massachusetts. And the first point that you see on there is for the first quarter of 1986. These are quarterly data starting in the first quarter of 1986 and going through the third quarter of 1990. There’s our logarithmic scale. Percent late, all of our metrics are weakness focused: what percent of the time are we late? That’s a tenth of a percent late. That would be 99.9% on time, and the other end that’s 100% late or zero percent on time. So you can see that in the first quarter of 1986, they were late about 30% of the time. By the third quarter of 1990, they had gotten to the point where that is 1%, 2%, 3%; they were late less than 3% of the time; 97% on time in their delivery performance.
And the number that you see down here is the half-life calculated from the fit of this straight line through those data. So they were able to reduce the percent of lines shipped late to their customers by 50% every 15 months over a period of four years. You can see a number of halvings of their percent late.
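The half-life arithmetic described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Analog's actual tooling: it assumes the percent-late level decays exponentially, fits a straight line to the data on a log scale, and reads the half-life off the slope.

```python
import math

def half_life_months(percent_late):
    """Estimate the improvement half-life from a series of equally spaced
    percent-late readings, assuming exponential decay d(t) = d0 * 2**(-t/h).
    A least-squares line is fit to log2(percent_late) versus period index;
    the half-life h is the negative reciprocal of its slope, in periods.
    """
    n = len(percent_late)
    xs = list(range(n))
    ys = [math.log2(p) for p in percent_late]   # log scale, as on the chart
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return -1.0 / slope                         # periods per halving

# Example: a series halving every two periods (30 -> 15 -> 7.5 ...)
data = [30.0, 21.2, 15.0, 10.6, 7.5, 5.3]
print(round(half_life_months(data), 1))  # -> 2.0
```

With quarterly data, a result of 5.0 would correspond to the 15-month half-life shown for the Wilmington division (five quarters per halving).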
As part of our executive information system, we have these data in a slightly different format.
Again, this is a similar sort of chart, but now each data point represents one month's worth of data, and this is a twelve-month moving window: each month we add a point and we drop the oldest point. This happens to be for September of 1990. The red line is the fit of the half-life model to the data. The number down at the bottom here represents the half-life for this particular division in Greensboro. They are reducing their late shipments by 50% every four months.
The two green lines are also something that you might recognize: they're control limits. Now, we're used to seeing control limits in a manufacturing process, but not used to seeing control limits in a performance measurement system. And yet the same principle holds here. There are month-to-month variations that are not statistically significant and require no reaction. On the other hand, there are events that do require action, and those are out-of-control events. So you see, for example, in this particular division there's an out-of-control event. We look at the last three months and change the dots to plus signs. If a point is above the upper control limit it becomes a red plus sign; red at Analog means "bad," green at Analog means "good." If it's below the lower control limit it becomes a green plus sign.
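The red-plus/green-plus flagging just described can be sketched as follows. This is a simplified illustration under assumed rules (mean plus or minus three standard deviations of the historical points); the talk does not describe Analog's actual charting rules in detail, and a real chart would put limits around the fitted trend rather than a flat mean.

```python
import statistics

def flag_out_of_control(percent_late, recent=3, k=3.0):
    """Put simple control limits around a monthly percent-late series and
    flag the most recent points: above the upper limit is 'red' (a special
    cause to explain), below the lower limit is 'green' (a breakthrough to
    share), otherwise 'ok' (common-cause variation, no reaction needed).
    """
    history = percent_late[:-recent]            # points used to set the limits
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    ucl, lcl = mean + k * sd, mean - k * sd
    flags = []
    for p in percent_late[-recent:]:
        if p > ucl:
            flags.append("red")     # out of control high: find the special cause
        elif p < lcl:
            flags.append("green")   # out of control low: a breakthrough
        else:
            flags.append("ok")      # within limits: no reaction required
    return flags

# Example: a stable series with one bad final month
series = [5.0, 4.8, 5.2, 5.1, 4.9, 5.0, 5.1, 4.9, 5.0, 5.2, 4.9, 9.0]
print(flag_out_of_control(series))  # -> ['ok', 'ok', 'red']
```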
Again, at the quarterly meetings of the general managers, they have to stand up and describe their performance. In the short term, they talk about their out-of-control situations. If it's a red plus: what was the problem? Now they've learned that there are right answers and wrong answers to that question. And you all know what some of the wrong answers are: "lots of things" is a wrong answer, because we know that this is an out-of-control situation so there's got to be a special cause. So they have to describe what that special cause is and what corrective action is being taken. On the occasions when there's a green plus sign, there's been a breakthrough: they have been able to speed up their rate of improvement. And so they share that with their colleagues. It provides a great opportunity for the general managers to learn from one another, and it's really an instrument that we use as part of our organizational learning activities in the area of quality improvement.
The particular system that we use allows us to access these data with nothing more than a mouse. It’s called an “executive information system” although it’s used by a large number of people at Analog. I think it gets its name from the fact that executives are at least able to push a little button on a mouse and click. So they can point to this, for example and click on that number “4” and what will appear on the screen next is the history of the half-life.
This would be the result of clicking on the corporate total. And you can see that for the twelve months that ended May of 1989, the half-life of improvement was around nine months. It remained at that level for a six- to eight-month period of time. So during that period, Analog Devices in aggregate was reducing late shipments to its customers by 50% every eight to ten months.
We’ve now come into a period in which the half-life has started growing significantly; our rate of progress has slowed. And in fact in the month of April, on this particular chart, the half-life either became negative or got so large that I didn’t put it on the chart. We’ve stopped making progress. If you look at the data on the previous slide, you can see that quite obviously. Not only have we stopped making progress, but we’ve had some out-of-control events on a corporate-wide basis that have actually had us slip back a little bit. That’s the bad news. The good news is that we’re slipping back from 98% on time to 97% on time. So we’re still at acceptable levels. But something’s happening.
What is happening in this particular case is, I think, best described by a model that I've heard called the "slack rope model." It basically says that initially, all of the things you're working on (yield, cycle time reduction, and so on) can be worked on independently. You can have a group of people working on yield improvement as an independent activity. But as you improve, you get to the point where the slack comes out of the system, and in order to get improvements in one area, you need to have improvements in the other areas. We've reached that point, for example, in on time delivery.
We cannot improve our on time delivery at Analog Devices without reducing our manufacturing cycle time or improving our yields and thereby significantly reducing our yield variability. If we try to increase our leadtimes in order to improve our on time delivery, we lose business. And we know that the market is such that we've got to go in the opposite direction: we have to reduce our leadtimes even further. Our leadtimes are now shorter than our manufacturing cycle time and are very much subject to variability in manufacturing cycle time.
Also, we can’t improve by assuming that the yield is going to be less than the average yield and therefore building inventory. So we’re at the point now where any improvements in on time delivery above the 97% level are going to require reductions in manufacturing cycle time and improvements in yield.
Let me try to close the loop, because we very strongly believe in the PDCA cycle. We very strongly believe that all of these things have to be closed loop. So I’ve described to you a process by which we’ve gone about identifying what the critical things are for us to work on from the perspective of our customers. We’ve mapped them into things to work on internally within Analog Devices. We’ve developed a performance measurement system. We’ve applied the PDCA cycle. We think we’re doing pretty well. We think we’ve gone from 70% on time to 97% on time. The question is what do our customers think?
So we maintain a database. It's Analog Devices' "57 varieties" at the moment: 57 companies who are customers of Analog Devices, who have vendor-rating systems, and who, on a regular basis, give us the results of their vendor ratings. They tell us their measurements of our delivery performance, to their definitions of what constitutes being on time. Each quarter, we take those data, with no massaging of them at all, and we publish the average.
That’s it. These are measurements by our customers of our delivery performance. You can see at the beginning part here, they measured our delivery to be somewhere between 20 and 30% of the time late. You might remember from the earlier slide that’s what we measured before we started our QIP efforts. The latest data that we have, which is this past summer, they were measuring about 3% late, 97% on time. Again, very good correlation to what we’re measuring. Even though every one of these customers, in general, has a different definition of what constitutes on time, they average out. Some forgive us; some have tighter windows.
Also, the half-life that we measure from their data is ten months, which again is in reasonable agreement with what we measure as our rate of improvement. So both the level and the rate of improvement are consistent between our customers’ measurements and our own measurements of our performance. You can also see this flattening effect down here. They’ve seen a flattening of our performance over the last several months at about the 97% level. By the way, if you ask them whether that’s good enough, they say that’s good enough for today, but not good enough in the long run.
This is looking at the list that we had at the beginning, looking again at where we were in 1987, where we want to be in 1992, and giving you a snapshot of where we stand at the end of 1990. You can see for example, in terms of what I was talking about on on time delivery that we have been improving. We’ve been learning more. Aside from the problem that we now have in terms of the slack rope model we think that we will achieve 99.8% on time delivery by 1992.
The area of quality turned out to be a much more difficult job than we had anticipated. We've only been able to reduce our outgoing defect levels by half over this three-year period of time. It's very slow, and it turns out there's a very interesting reason for it.
Analog's products have very long life cycles. As you saw at the beginning, a lot of our products go into military/avionics applications and industrial applications where those products have a very long lifetime: a twenty-year lifetime. So we have a lot of products in our product line that were designed fifteen or twenty years ago, and designed for one criterion only: performance. Although we had design rules that addressed manufacturing capability, we also had a culture among our designers that said it's ok for me to push a little bit past the rules because I can get a little extra performance; that's part of the reason we had twenty percent yields.
We started by doing a Pareto analysis of what our worst products were and chose the largest ones to redesign in order to improve the quality. That sounds very logical, the right thing to do. And then the phone started ringing. Our customers started calling us and saying “Now you’ve made a change in the design of the product, which means we need to re-qualify that product. Re-qualification means we have to find the original design engineer that designed our piece of equipment and we have to get him to come back and re-qualify your part. They won’t do it!” So, a number of these customers said to us “it’s fine that you’ve redesigned the part, we still want the old one. We can live with the quality problems, we’ve lived with them for twenty years, and we can continue to live with them. And the products that we’re designing in today, we want to have at those quality levels, but not the products that are the old products.” So we ended up in a number of instances redesigning the old product and not selling it, because the customers still wanted the old version of that product. So we learned something in that process.
As I mentioned earlier, we've expanded our view of leadtime. We didn't have 1992 goals for what percent of the time we wanted to say yes to our customers or for our excess leadtime, but we've been working on those. In 1987, one-third of the time we said to a customer "you can have it when you want it"; now we say that more than half of the time. And when we disappointed customers back in 1987, we disappointed them by nearly four weeks. Today we're disappointing them by under three weeks.
This line here turns out to be perhaps the most interesting one from a dollar perspective, because as I mentioned at the beginning, the technology of our business is going in such a direction that for us to build a new wafer fab today would cost around a hundred million dollars. Now, that may or may not sound like a large number to you, but remember, we're a five-hundred-million-dollar company. And you can't build part of a wafer fab; you either build a wafer fab or you don't.
Now look what’s happened here. We’ve doubled our yields; we’ve gone from twenty percent to almost forty percent. We’ve doubled our effective capacity. We’ve delayed the point in time in which we have to make a major investment in a new wafer fab. And that has had a very dramatic economic impact on the operation of the business.
If you remember on an earlier slide it was our objective to be rated #1 by our customers. Unfortunately, most of our customers don’t have a rating system that says you’re #1, 2 or 3. So it’s very difficult for us to know what our rating is. They don’t have it because they don’t generally measure their suppliers in such a way that they can come up with a ranking system.
But one of our largest customers does have a system, and this customer is of special importance to us at Analog Devices because John Doyle, who is the Executive Vice President of Hewlett-Packard, is also on Analog Devices' Board of Directors. So they're not only one of our largest customers, but I have a lot of memos from the past that made their way from John Doyle to Ray Stata, who's the president of the company, and on to Art Schneiderman. The reason I got those memos was that in 1986, Hewlett-Packard had sixteen IC suppliers, and of those sixteen, our ranking was number eight. What was really significant was that when they gave us this ranking in 1986, they also told us that they were going to reduce the number of linear suppliers by half.
So that in 1987 they said there were only going to be eight linear IC suppliers. That made us a little uncomfortable, but we were working, and we were able to get up to fifth position on their list by 1987. And then they said ok, now we’re not going to measure you just against other linear IC suppliers, we’re going to measure you against all of our IC suppliers: linear and digital suppliers. And so they added another seven to the list and we maintained our position as fifth out of fifteen IC suppliers. They just recently came out with the 1989 results and we’re tied for first place out of the twelve surviving IC suppliers over this period of time. I don’t know what the total number of suppliers was back in 1986 but it was probably in the mid-twenties. So they’ve probably reduced their vendor count by at least 50% in total.
Well, let me stop at this point and say that what we've learned out of this process is that by understanding what our customers' needs are, and understanding in what ways we are not meeting those needs, we've been able to come up with some very specific initiatives at Analog Devices. We've also been able to develop performance measurement systems and a goal-setting process based on this concept of a half-life and its linkage to the PDCA cycle, the process by which one does improvements. And we think that once today's economic conditions settle out and we get back onto a growth path, we will be the beneficiaries of increased market share, increased sales and increased profitability as a consequence of the improvements that we've been able to make.
So, any questions?
Q and A:
Q. I’m curious, you’re an international supplier, what effect has the EN29000 or ISO9000 had upon your QIP, or to turn the question around the other way, what has QIP done to contribute to your position relative to ISO9000?
A. Although at the beginning Larry introduced me as having a role in corporate quality assurance within Analog, that's actually changed, and our quality assurance function is now managed through our manufacturing operations. So I can't give you very specific answers to that question.
We've done nothing specifically different in order to respond to that. As I travel around the world (I spend about 25% of my time visiting customers, with two objectives: to tell them what we've done, and also to listen to their needs), I think that what we've done is increase their comfort level with us as a supplier, above and beyond any of the specific quality-related measures or criteria that they might have. So it's more that increasing their comfort level that Analog is in fact a forward-facing supplier has had the greatest impact on their view of us in the general area of quality.
Q. Even though you stated with some chagrin that the financial area of measurement was something that you were still doing, has that followed your half-life curve?
A. That’s a very good question. The answer is no. In fact we’re learning something very interesting and that is that we all know the old rule of thumb that poor quality costs you 20-40% of revenues. It doesn’t take a lot of calculation to figure out that we must have significantly reduced our cost of quality, perhaps reduced it by as much as ten or fifteen percent. So the question is why hasn’t it made its way right down to the bottom line and why don’t we see profitability improvement as a consequence of that?
The only way I can answer that question is by analogy to other companies. If you look at companies like Motorola, Hewlett-Packard, IBM, any of the Baldrige Award winning companies, and ask the same question of them, looking at whether their financial performance has improved as their quality has improved, you'll see the same thing as is true at Analog: it hasn't. You have to ask why. Why, with all of these improvements, with all of these reductions in the costs associated with poor quality, doesn't that make it down to the bottom line? I think the answer is the competitive environment we're in. What basically happens is that every time we reduce our costs, we have to pass the savings on to our customers. We're not in a situation where we can pass 90% of it on; we've been passing 100% of it on, just to stay in business. We've entered an era of international competition where the competitor with the shortest half-life, the one able to improve at the most rapid rate, sets the prices. And they set the prices in order to gain market share, particularly if they're Japanese competitors. So we've seen our unit prices drop consistently over the last five to ten years. Even though our products are becoming more complex, larger, more functional, becoming systems rather than functional ICs, and providing immense savings to the customer, we still end up passing on at least 100% of what we save to our customers. So I think anybody who argues that that 20-40% is going to make you rich is wrong. That 20-40% is going to keep you alive.
Q. What’s the percentage of employees that are involved in QIPs currently and did you have particular rate of acceleration in order to achieve that?
A. We have about 250 QIP teams active in the company now on a worldwide basis. These are teams ranging from the executive group level, in which we have a team working, for example, on the design of our training program, we have a team working on a customer visitation program where we more formally go out and listen to the voice of the customer, down to operators. We don’t distinguish between Quality Circles and QIP teams so we have groups at the operator level. There’s probably some overlap. The average group size is probably 5-7, so if you multiply those two numbers together you probably get a thousand or so people that are actively involved.
In fact, believe it or not, we’ve had to slow down the diffusion of QIP throughout the organization; we had to hold back on it, because we don’t have any formal corporate training programs. The way that we have done training is on a one-on-one basis. Everybody in my group, in addition to their other jobs, spends half of their time facilitating. There are currently two people in my group plus myself and there have been as many as five people in the group in total. So we facilitate QIP teams and we train on-the-job, so that as the team is moving along, if it needs to do an Ishikawa diagram we teach them how to do an Ishikawa diagram. If they need to do an Affinity diagram, because that’s the appropriate tool to pull out of the toolbox at that time, we teach them how to do an Affinity diagram or KJ diagram. So that is really what has limited the diffusion of QIP on a company-wide basis.
We're at the point now where we need to do something about that, and so as I said earlier, we have an executive group now working on both the philosophy and the design of a training program on a corporate-wide basis. I can tell you right now that in that training the managers will do the training, rather than professional trainers. So that's the model we'll be working from.
Q. Where do you see the future in terms of your yield which you see going to 50%? That would seem to offer you the most hope for the future. Do you see that getting to 60%, 70%, 80%, or what?
A. It’s very interesting. I arrived at Analog Devices in 1986, not only not being from the semiconductor industry, but I wasn’t even an electrical engineer. Just because I’m a mechanical engineer, that didn’t qualify me. And I had been a consultant for six years. So there could be nothing worse than that.
I had to come up with this five-year plan within the first six months of being at Analog. So I put that slide up in the late part of 1986 after I’d been at Analog for a half a year, and if there had been food available in the room I would have been wearing it. Because I proved to everybody that I was a consultant; I didn’t understand the semiconductor industry; I was a staff guy, not a line person.
The biggest complaint they had was about the yield number. They said, "You just don't understand our business. You don't understand the inherent nature of manufacturing very complex, linear ICs." In fact, back in 1986 it took us six months to manufacture these products, twenty-six weeks from start to finish. And they said, "It is inherent that there will be large scrap rates in these products." And so we argued. Finally we settled on 50%. (I think I had 70% as the number there originally; that was the only number we changed.)
Where we are today, I think, is that there's now recognition that there is no theoretical limit. The 95% figure is the kind of yield on memory ICs, the digital ICs made by the Japanese and by companies like Intel and TI; those yield in the mid-to-high 90% range, and they are every bit as complex to manufacture as our products.
So I think that people have made the intellectual leap that there is no inherent low yield. Right now I feel very comfortable that we will be in the mid-sixties by 1992. And I would say that within five years after that we will be designing products to controlled manufacturing processes. We will be able to predict the yield at the time we design the product, and that those yields will be in the 90% range.
Q. Art, something I've been puzzling about here: you said in 1987 you had a 36-month actual time-to-market, and you were looking for a half-life of improvement of 24 months, and I'm struggling with how, when your PDCA cycle can be 36 months to begin with, one can achieve a half-life shorter than that. That says you have to get more than a two-to-one improvement every time you do it, at least the first time.
A. I did gloss over, not accidentally, the bottom of this slide [Slide 22] that has a question mark in the area of time-to-market. That has turned out to be the toughest of the things we're working on at Analog. First of all, measuring time-to-market is tough. It's tough to say when you started: you've got a designer toying around with some idea in his mind, so when do you start the clock? There are a whole bunch of issues in this area of time-to-market. It's probably the most important, most leveraged area for improvement at Analog Devices, and at the same time it's the area we've made the least progress in. If you asked what our time-to-market is right now, I'd say it's probably still 36 months. I don't see any improvements in time-to-market, partly because of the difficulty of the products, and partly because the argument given by the designers is basically: "I'm going to spend the next three years working on a product. That's fixed. It's going to take me three years to design the next product. As I become more productive, I'll just put more functionality into that product." So the design time stays fixed; it's the content that grows.
The whole idea of time-to-market as a metric is bankrupt. It's the wrong metric. In fact, you have to drill down from that number. You have to talk about specific products. What are the inputs required? What are the expectations of that product? Hewlett-Packard does a superb job of dealing with that in something they call BET, break-even time. The answer is that we should really strike that metric off of here. We need to talk about design resources: how we can utilize those design resources for the best profit, or whatever criterion we come up with, for the company and for our customers.
I don't know if I've answered your question except to say that we've made very little progress in that area, except on the point of wondering whether or not it's the right metric. We used to have both time-to-market and the number of new products introduced on the scorecard. It was very simple. What you did with that kind of performance measurement system is that you incented people not to take any risks. You incented people to come up with derivatives of existing products: if I come up with a new product that's slightly different from my last one, I can do lots of them, because it doesn't take many resources to do that, and I can do them in a shorter period of time.
So the whole issue of time-to-market is one that's been undergoing immense debate for the last couple of years. We've even brought consultants in, which is something we rarely do, to help us think through the right way to approach that whole issue.
End of presentation.
Last modified: August 13, 2006