of the First Balanced Scorecard: 1987-1992©
The following pages contain examples of Analog’s scorecards covering a five-year period: FY1988 through FY1992. I have noted the year-to-year changes along with their rationale.
This is the original long-term “unbalanced” scorecard. It represented the small set of non-financial goals that focused on results (external) as well as their process drivers (internal).
This chart was published by Analog’s CEO in a 1989 Sloan Management Review article.
c. August 20, 1987
This is my prototype for the first balanced scorecard. It contained the vital few metrics that would show Analog’s progress in achieving its 1988-1992 strategic plan objectives. This scorecard contains a balanced mix of financial/non-financial, results/process, internal/external, leading/lagging, etc., metrics.
My proposed scorecard went through several revisions over the next few months as I built consensus for its implementation and use. Notice the increased number of metrics, a temptation that was present from the very beginning. I decided not to resist too much since buy-in was my primary objective and pruning could be left as a future refinement.
Some of the measures that we put on this scorecard had been tracked for several years (e.g., on-time delivery and new product booking ratio). Others were in the process of being defined (e.g., cycle time and yield). But cost and employee productivity were there as goalless "place-holders" while we labored to find good operational definitions. The last one, labor turnover, was the best measure we could think of at the time for how well we were satisfying our employees.
At the bottom of this scorecard, you can see the links to our TQM effort (review of results by the scorecard owner) and to Analog’s traditional management processes (presentation to the CEO’s staff).
c. July 13, 1988.
This update of the 1988 scorecard was the starting point for improvement discussions.
The Corporate QIP Council made several improvements on July 13, 1988.
The major change in the scorecard resulted from our recognition that although the results metrics (delivery, outgoing quality, and lead time, as seen by customers) applied equally to all of our products, the process metrics (process defect level, cycle time, and yield) differed significantly between our two major businesses, ICs and assembled products. It made no sense to aggregate these on the corporate scorecard, so we tracked them separately.
Although we persisted with the cost and productivity placeholders, good metrics continued to elude us. We also recognized that direct and indirect labor turnover had very different business implications, so we decided to track them separately.
Once the 1989 scorecard was established by the Corporate QIP Council, I proposed these as our corporate goals based on the five-year plan and the agreed upon improvement half-lives.
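The half-life goal-setting arithmetic mentioned above can be sketched as follows. This is a minimal illustration, not Analog's actual goal-setting tool: it assumes only that a metric's defect level falls by 50% over each "improvement half-life" period, and the function name and the sample numbers are hypothetical.

```python
def half_life_goal(baseline, half_life_months, months_elapsed):
    """Projected defect level after `months_elapsed` months, assuming
    the level halves every `half_life_months` (the improvement half-life)."""
    return baseline * 0.5 ** (months_elapsed / half_life_months)

# Hypothetical example: 10% late deliveries today, a 9-month half-life,
# projected over a five-year plan horizon.
for year in range(1, 6):
    goal = half_life_goal(10.0, 9, year * 12)
    print(f"Year {year}: {goal:.2f}% late")
```

Given a baseline and an agreed half-life for each metric, yearly scorecard goals fall out mechanically, which is how a five-year plan and a set of half-lives together can determine the corporate goals.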
The 1990 scorecard reflected our conscious decision to start the transition from measuring delivery performance against our promise date (FCD), to measuring it against the customer's requested delivery date (CRD).
A detailed revenue model had shown us that our existing new product metrics were poor predictors of future performance, so we decided to replace them with a detailed product tracking system. However, the desire to keep some innovation metrics on the scorecard led us to include two measures of our progress toward the 1988-1992 strategic plan: absolute bookings of products introduced after 1985, and the aggregate forecast of third-year bookings for the current vintage of new products.
The scorecard remained unchanged for 1991 and 1992. I left Analog at the end of 1992, so I have no firsthand knowledge of its subsequent evolution. However, it is still in use today and looks very similar to its earlier ancestors. One area that has continued to evolve is the set of metrics associated with the new product generation process. More than a third of the metrics on the 1996 scorecard dealt with this process. Of the eight new product metrics on that scorecard, half were process (vs. results) metrics dealing with cycle time, WIP, and rework.
Outside of the US, Analog distributed its products through wholly owned sales affiliates. Here’s what we came up with as their scorecard for FY1988.
Below the division level, scorecards were voluntary and were generally not distributed outside of the team responsible for the improvement. Here’s an example of one such scorecard. In this case, manufacturing yield is broken down into its component parts. Each of those parts generally had an owner and an improvement team. The total, or chute yield, appeared on the division’s scorecard.
Because of slow progress improving the then independent divisional product development processes, the Corporate VP of Technology formed a world-wide team of product development managers. This was the scorecard that they developed. It provided the necessary focus for dramatic improvements in product development as well as the creation of a standardized best-practice process.
Last modified: August 13, 2006