It was pretty obvious that the QIP
Strategic Plan, by itself, was insufficient to assure achievement of our overall
strategic objectives. It would be too tempting, because of the daily press
of other business, to delay QIP implementation, given that the only checkpoint
was years away. So in August of 1987, I proposed a modification to our
traditional annual planning process (see August
1987 Benchmark Planning Proposal), whose output we called
the "Benchmark Plan." Where it got that name, no one seems to
know. It had nothing to do with benchmarking as practiced today. The
plan was typical of most companies' annual business plans and dealt principally
with the annual financial budgets of the various business units.
One slide in that presentation I called
the Quarterly Performance Audit. This was the prototype of Analog's first
Balanced Scorecard (c. 8/20/87):

The basic idea in creating the scorecard
was to integrate financial and non-financial metrics into a single system in
which they did not compete with one another for management airtime. Prior
to this scorecard, the financial and non-financial results were reviewed in
separate meeting agenda items. Whichever came first on the agenda was
perceived as the higher priority. Combining them greatly reduced this
unproductive tension (see How the scorecard became
balanced).
My proposal was adopted and we proceeded
to use it as the template for our FY1988 (November through October) Benchmark
Plan. The resulting "Scorecard," as it quickly became known,
became the first "official" balanced scorecard (c. fall 1987).

Note that this scorecard contained
financial (revenue, revenue growth, profit and ROA), customer (on time delivery,
leadtime, outgoing PPM (quality)), internal (cycle time, yield, process PPM,
cost, employee productivity), and learning and growth (new product intros, new
product bookings, new product booking ratio (% bookings from products introduced
in the most recent six-quarters), new product average 3rd year
revenues (average revenue in the third year after introduction), time-to-market and employee
turnover). These constitute all of the elements that today's scorecard
promoters deem essential to an effective balanced scorecard.
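The four groupings above, and the booking-ratio definition in particular, can be sketched in code. This is purely illustrative (the data structure, function names, and sample figures are mine, not Analog's), assuming the ratio is computed over bookings bucketed by each product's introduction quarter:

```python
# Illustrative sketch, not Analog's actual system: the FY1988 scorecard
# metrics grouped into the four perspectives described above.
scorecard = {
    "financial": ["revenue", "revenue growth", "profit", "ROA"],
    "customer": ["on-time delivery", "leadtime", "outgoing PPM"],
    "internal": ["cycle time", "yield", "process PPM", "cost",
                 "employee productivity"],
    "learning_and_growth": ["new product intros", "new product bookings",
                            "new product booking ratio",
                            "new product average 3rd-year revenues",
                            "time-to-market", "employee turnover"],
}

def new_product_booking_ratio(bookings_by_intro_quarter, current_quarter):
    """Percent of bookings from products introduced in the most recent
    six quarters (the definition given in the text).

    bookings_by_intro_quarter: {intro quarter index: bookings, in $M}
    current_quarter: integer quarter index on the same scale as the keys.
    """
    total = sum(bookings_by_intro_quarter.values())
    recent = sum(b for q, b in bookings_by_intro_quarter.items()
                 if current_quarter - q < 6)
    return 100.0 * recent / total if total else 0.0

# Hypothetical example: $40M of $100M bookings came from products
# introduced within the last six quarters.
bookings = {20: 10.0, 22: 30.0, 10: 60.0}  # intro quarter -> $M booked
print(round(new_product_booking_ratio(bookings, 24), 1))  # -> 40.0
```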
As part of the annual planning process,
each division and sales affiliate generated its own version of the
scorecard. My job was to make sure that they all added up to the corporate
plan and that that plan was moving us at an acceptable rate (based on my
half-life model, of course) toward our long-term goals. As an additional
integration step, starting with Q1 1988, the scorecard and its associated
metrics were included as a regular section in the "Red Book," Analog's
internal report of quarterly results.
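The half-life model mentioned above holds that a defect-type metric, under sustained improvement effort, halves every fixed interval. A minimal sketch of how a divisional plan's implied rate could be checked against a target half-life follows; the function names and example numbers are my own illustration, not the model's actual implementation:

```python
import math

def projected_level(initial, half_life_months, months_elapsed):
    """Half-life improvement model: the metric halves every fixed
    interval, so level(t) = initial * 0.5 ** (t / half_life)."""
    return initial * 0.5 ** (months_elapsed / half_life_months)

def implied_half_life(initial, planned, months_elapsed):
    """Back out the half-life a plan implies, so it can be compared
    against the rate needed to reach the long-term goal."""
    return months_elapsed * math.log(2) / math.log(initial / planned)

# Hypothetical example: a plan cutting outgoing PPM from 1000 to 250
# in 18 months implies two halvings, i.e. a 9-month half-life.
print(round(implied_half_life(1000.0, 250.0, 18.0), 1))  # -> 9.0
print(round(projected_level(1000.0, 9.0, 18.0), 1))      # -> 250.0
```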
Development of metrics proceeded in
parallel with that of the scorecard. What emerged was a disciplined
approach to performance measurement. The following slide summarized the
state of our measurement system (c. 1987):

The metrics in the yellow boxes existed
by the time we deployed our 1988 scorecard. Most of the remaining metrics
were developed during FY1988. Note also that the symmetry of this slide
shows the implicit balance between financial and non-financial measures that was
always present in Analog's scorecards.
The continuing interplay between metrics
definition and Analog's scorecard cannot be overstated. For example, it
quickly became apparent that Analog had two, very different businesses:
integrated circuits and assembled products. Consolidating process PPM
results for these businesses onto the corporate scorecard would mix apples
and oranges. Therefore, later corporate scorecards tracked these two
businesses separately.
Measuring overall employee turnover, an
assignment given to the HR Advisory Committee, proved relatively
straightforward. Dividing the total turnover into its voluntary and
involuntary components, on the other hand, was extremely difficult. Yet
this division was essential in order to identify root causes and corrective
actions, as well as to set appropriate goals. This problem eventually led
to the elimination of this metric from the scorecard, not because it was
unimportant, but rather because we found it to be unmanageable.
A similar problem existed with the
creation of new product development metrics. I will not get into that
here; suffice it to say that we found it necessary to supplement our scorecard
metrics with a revenue model that tracked individual products rather than
aggregate numbers.