When a consultant walks into a company to help solve a technical problem, it is safe to assume, at the start, that he or she knows less about the product and problem than anyone else on the team. And everyone knows it. Someone is sure to be thinking, “What does he have to offer?” Usually by the time the consultant’s phone rings, the team is frustrated; most want help, and some don’t, but are told to get it. And someone, often a customer or the boss, is certain to be unhappy. In any event, the first day or so can be a bit tricky.

David Hartshorne, John Allen and The New Science of Fixing Things bring years of experience solving difficult technical problems, mostly with people and companies that make planes, trains and automobiles. We deal with these difficult situations all the time. We thrive on it.

John and David have learned that figuring out how to make the machines you sell to customers run better and longer, and the ones you buy for the factory make good parts for a long time, is a very narrow, specific discipline. It is a profession that is best thought of, at both the strategic and the tactical level, as a discipline driven by comparative analysis.

If the strategy is to look at differences, we have to remember that the most fundamental question in any field of comparative analysis — to move quickly and efficiently from the strategic to the tactical level — is, “Compared with what?”

Technical problems get solved every day without anyone making much of it. Most problems get solved by one person, working alone without meetings, teams, flip charts or PowerPoint presentations. Or consultants. Anyone who works for a living solves problems, and many do it all day, every day. Their tactics must be sound. What are those tactics? Once in a while, a person gets stuck, scratches her head, then asks for help, usually from someone with more experience. Then the two of them figure it out. Given that this is the most common form of problem solving, and it usually works, it merits a closer look, up to and including the point where help is called for.

QUESTION ONE: WHAT’S WRONG?

When do you ask for help? What do you have to do before you ask? How do you know you have done enough to justify asking for help?

First of all, let’s take a look at the tactical approach used when trying to solve problems at the most basic level. Problem solvers usually ask, “What’s wrong?” Then they make some sort of comparison to what they think is the ideal, or at least better, state or condition. This must have started with cavemen after they invented the first tools. When they don’t get the answer, they might say, “Can you give me some help? I can’t figure out what is wrong or why this doesn’t work.”

The practice of asking, “What is wrong?” is the most basic tactical approach to solving problems. In order to get the answer, however, the problem solver has to have a model in his head as to how things are supposed to work. That being said, every person who tries to solve a technical problem understands that some form of comparative analysis seems to help. Typically, the more experience you have, the better the models.

If we fail to get an answer, there is usually one thing that went wrong:

The model we have in our head for purposes of comparison is weak, because we do not understand either the physics of function or the physics of failure. This can lead to confusion and the tendency to start guessing, the bane of professional technical problem solving.

Some model of how things are supposed to work needs to be part of any effective approach to solving problems. It is the key to keeping the business strategy in phase with the deployed tactics.

Problem solving is different from inventing. It is about refining rather than redesigning (changing the way the functions are performed) or reengineering (changing the functions). Redesign is a medium-term activity, while reengineering is a much longer-term activity. Within both, a great deal of problem solving takes place. Problem solvers are not trying to invent, but to restore things to the order they are supposed to be in, hopefully the order they were in at some point.

Those trying to refine by inventing their way out of a problem have broken the phase relationship between strategy and tactics.

QUESTION TWO: WHAT’S CHANGED?

When a problem is yet to be solved by the process of asking what’s wrong and comparing it to some better state, some people find it necessary to change the tactical approach rather than find out what is missing from the existing one. This is where the train can really go off the tracks, and cause knowledgeable people to be annoyed. Changing tactics is unsettling and unnecessary.

In most cases, this tactical change is not a function of a strategic decision, but of the skill set of the new person placed in charge.

Asking, “What has changed?” is completely different from asking, “What is wrong?” By asking, “What has changed?” one assumes some form of statistical stability has been lost and needs to be restored. The world of this type of well-meaning person is based on the assumption that all systems must be stable. He typically sees the world in terms of common causes and special causes. This approach can work, but the model fits only certain circumstances, and it is therefore severely limited. That it works at all could be seen as unfortunate, because today it is widely attempted where it will not work, serving to confuse and frustrate otherwise capable people. It should not be used as a replacement for the physics model, and any well-trained professional statistician will agree. Using the common cause-special cause model to solve complicated product performance and reliability problems usually results in long lists of action items, leaving those with a sound knowledge of how things are supposed to work out in the cold. When supplemented by brainstorming, the tactics have decayed to organized, iterative guessing and voting.
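For readers who want to see the limitation concretely, here is a minimal Python sketch, with invented numbers, of the common cause-special cause model described above: a Shewhart-style control check that flags a point as a “special cause” when it falls outside three standard deviations of a stable baseline. Notice what it cannot do: it says that a point is unusual, not why.

```python
import statistics

# Hypothetical baseline measurements from a period believed to be stable.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]

mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = mu + 3 * sigma  # upper control limit
lcl = mu - 3 * sigma  # lower control limit

# New measurements: anything outside the limits is a "special cause".
new_points = [10.0, 10.1, 11.5, 9.9]
special = [x for x in new_points if not (lcl <= x <= ucl)]

# 11.5 is flagged, but the model offers no physics to explain it.
```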

A more technical description, for those interested, is as follows:

Statistics is the mathematics of uncertainty. When we use statistical tools we are choosing to replace a deterministic model which explains the physics (F = ma) with statements of probabilities of obtaining a specific outcome. It really can be quite useful, if done effectively and for the right purpose. We try to use statistical models for the purpose of confirming the ability to obtain a specific outcome once we have reason to believe that we understand the physics of failure and the physics of function. Probabilistic models for purposes of investigating a physics problem are not nearly as powerful as the deterministic, physical models.
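As an illustration of that division of labor, here is a small Python sketch with invented measurements: the deterministic model (F = ma) makes the prediction, and statistics is used only afterward, to confirm that the measurements are consistent with it.

```python
import statistics

# Deterministic physics model: F = m * a, in newtons.
def predicted_force(mass_kg: float, accel_ms2: float) -> float:
    return mass_kg * accel_ms2

# Hypothetical measured forces (N) from ten trials of the same setup.
measured = [19.4, 20.1, 19.8, 20.3, 19.9, 20.0, 19.7, 20.2, 19.6, 20.0]

prediction = predicted_force(mass_kg=2.0, accel_ms2=10.0)  # 20.0 N

# Statistics confirms (not replaces) the physics: is the prediction
# within two standard errors of the measured mean?
mean = statistics.mean(measured)
stderr = statistics.stdev(measured) / len(measured) ** 0.5
confirmed = abs(mean - prediction) < 2 * stderr
```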

In simpler terms, there is no such thing as a statistics problem. Your customers don’t care about statistics. They care about performance and reliability. Stick to physics models to solve technical problems, stick to measuring performance in Joules, and use the statistics models to confirm that you know what you are talking about. The New Science of Fixing Things will show you how to do both.

Asking, “What’s wrong?” effectively requires the ability to compare two states of being. Choosing those two states of being based on the physics of function requires skill and experience with how things are supposed to work, that is, how they use energy to create useful work. Without it, the ability to see contrasts between two states of being is weakened. This is why asking, “What has changed?” is the weakest form of solving problems. It ignores the physics of function, and often just looks at time-based contrasts and obvious alternate manufacturing paths. “What’s changed since last month when we didn’t have this problem?” is a weak approach.

“What’s changed?” is a tactical approach quickest to break the phase relationship with a sound business strategy, which must include speed, order, discipline, honesty and simplicity as principles. Those who ask, “What’s changed?” tend to limit their presence to the conference room and their tools to flip charts, markers, and fancy graphs using old data from accounting, suppliers, or factory records. This approach requires little or no understanding of how things are supposed to function. One central weakness is the usually false assumption that whatever changed was in fact measured and recorded. Asking, “What’s changed?” can be done by very smart people doing the wrong thing well. Once this gets started, it is hard to stop.

QUESTION THREE: WHAT’S DIFFERENT?

“What is different?” is a big step forward. It can encompass “What has changed?” by examining different time periods, but it is considerably more powerful. Of course, it is possible to attempt to answer this question in a very haphazard way, without understanding how manufacturing is organized and things are supposed to function. This negates the power of the question and is still very close to guesswork.

Another big step forward when asking, “What is different?” is to develop a convergent strategy and associated tactics based upon an efficient process of elimination. A good problem solver, when asking, “What is different?”, has to make a comparison of two states of physical being, while limiting the field of vision to a meaningful contrast. This is not as simple as it sounds until you get the hang of it (we can teach you how to keep it simple). A disciplined approach to asking this question really helps keep the list of potential contrasts, then the list of suspect variables, down to a manageable size, hopefully one or two.
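The efficiency of a convergent process of elimination can be sketched in a few lines. The scenario below is hypothetical: assume a part can be tested for the defect after any step of a production line, and that exactly one step introduces the defect. Halving the suspect range on every test converges in a handful of tests instead of one test per step.

```python
def find_offending_step(n_steps, defect_present_after):
    """Binary elimination over process steps 1..n_steps.

    defect_present_after(k) -> True if a part tested after step k
    already shows the defect. Assumes exactly one step introduces it.
    """
    lo, hi = 1, n_steps
    while lo < hi:
        mid = (lo + hi) // 2
        if defect_present_after(mid):
            hi = mid          # defect introduced at or before mid
        else:
            lo = mid + 1      # defect introduced after mid
    return lo

# Hypothetical line with 12 steps; suppose step 7 introduces the defect.
step = find_offending_step(12, lambda k: k >= 7)
```

Twelve suspects are cleared with four tests; the same discipline of halving the field is what keeps the list of suspect variables down to one or two.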

Dorian Shainin was really the first to put some order into this process, and he became recognized as a leader in the field of problem solving he called Statistical Engineering. Dorian got Six Sigma started while working as a consultant at Motorola with Keki Bhote. Keki wrote at least two books about what he learned from the experience. (www.bhoteassociates.com)

David Hartshorne, Tim Nelson, and John Allen spent years building on the work of Dorian during and after the time we worked with him, developing more effective ways to define meaningful contrasts to learn more and go faster. David and John saw this as the potential for a bit of a sea change in problem solving. The logic was based on the fact that every problem can be written in the form Y = f(X). Making lists of X’s that can go wrong is time consuming and wasteful, and usually doesn’t work anyway for tough problems. The Y-axis, when forced to reveal its nature, is the key to solving tough problems faster. When we started this, we just used the Y-axis based on the way the customer saw the problem. For example, if the problem was a noisy motor, we developed a way to measure the noise, then built the list of contrasts, all derived from good and bad motors and the manufacturing process that created them. Once we found the best contrast (the fastest way to the answer), we drilled down to the level of X’s. The trick was to make sure that every X was a function of the narrowed scope of the Y. We found this was a pretty good idea and an effective way to keep things that did not fit the clues off the list.
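The idea of choosing the best contrast before drilling down to X’s can be sketched as follows. The candidate variables and measurements below are invented for illustration: for each candidate X, compare its values on good units against bad units, and prefer the X whose two groups separate most cleanly.

```python
def separation(good, bad):
    """Gap between the two groups' ranges; positive means no overlap."""
    return max(min(bad) - max(good), min(good) - max(bad))

# Hypothetical measurements on good motors vs. noisy (bad) motors.
candidates = {
    # candidate X          good units          bad units
    "bearing_preload_N": ([42, 44, 43, 45],  [61, 63, 60, 62]),
    "rotor_mass_g":      ([118, 121, 119],   [117, 122, 120]),
}

# Rank candidates by how cleanly they separate good from bad.
ranked = sorted(candidates,
                key=lambda x: separation(*candidates[x]),
                reverse=True)

# bearing_preload_N separates with a clear gap; rotor_mass_g overlaps
# completely and therefore explains nothing about the contrast.
```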

There was still something missing. It was the physics of function and failure. We were still trying to fix things based on an assumption that people knew how things worked, and leaning overly toward the thinking of the probabilistic models. You can ask, “What’s different?” without understanding how things are supposed to work. It is a lot more effective than asking, “What’s changed?” because it changes the scope of work from making lists to building Y-axis contrasts and looking at parts and processes in a disciplined way.

QUESTION FOUR: WHAT’S HAPPENING? HOW DOES THIS WORK?

Three things happened that really changed our way of looking at the world of fixing things.

First, there were a few instances of working with project teams where we found they were trying to solve problems with an incomplete understanding of how things were supposed to work. Even when there was one person with a sound knowledge of function, there was not a very good way to communicate it to the rest of the team members. We really needed a way to fill this gap.

Let’s step back for a minute. When we ask, “How’s this supposed to work?” we are asking a very narrow, specific question. Getting the answer to that question requires that you limit the answer to how a machine consumes energy in order to create useful work. And there are only seven ways that machines use energy, or seven machine functions. (Link to seminar) One machine can use energy in several ways in order to be able to provide useful work. When the output is unacceptable, we want to figure out which function we care about (the high risk function), how energy is being wasted (“what’s happening?”), and what the governing physical law is that dictates how it is supposed to work.

When you choose to look only at the difference between those that work versus those that don’t, you are severely limiting your opportunities to select a meaningful contrast. Therefore, the models you use, and you always have one, will be weak and slow you down.

The absence of an understanding of the physics of function and failure often results in engineering changes that are supposed to be “directionally correct.” Translation: “I am not sure what is wrong, but this ought to help.” Remember this:

Making engineering changes in the absence of a demonstrated understanding of the physics of function and failure is irresponsible behavior.

Remember this, and it will save you a lot of grief. Making changes that are an outgrowth of “action items” is a sign of panic, not wisdom.

The second thing to influence our way of thinking was the book, Great Ideas in Physics, by Alan Lightman. The following passage was important:

The second law of thermodynamics, which states that all isolated physical systems unavoidably become more disordered in time, explains why machines cannot keep running forever…

That seemed like good advice and confirmed our thinking that we needed to focus more on the deterministic, or physical models, not the probabilistic models, as a basis for solving technical problems. Examining how machines consume and waste energy had to be a powerful tool to make them perform better and last longer. It’s the law! And the perfect machine of the second kind now becomes the basis for all contrasts, since it really is the best of the best.
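The bookkeeping implied by that view is simple: every joule supplied to a machine either becomes useful work or is wasted, and the waste (heat, vibration, friction) is what drives decay. A minimal sketch, with invented numbers:

```python
def energy_audit(input_j, useful_work_j):
    """Split an energy supply into useful work and waste (joules)."""
    wasted_j = input_j - useful_work_j   # heat, vibration, friction...
    efficiency = useful_work_j / input_j
    return wasted_j, efficiency

# Hypothetical machine: 500 J supplied, 430 J delivered as useful work.
wasted, eff = energy_audit(input_j=500.0, useful_work_j=430.0)

# The 70 J of waste is where the machine is decaying, and where a
# contrast against the ideal (zero-waste) machine should be built.
```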

The third influence was our introduction to function models, a tool used in Value Engineering. John first saw them used at Pratt & Whitney. (Juran’s Quality Control Handbook, page 13.63) He saw similarities between FAST (Functional Analysis Systems Technique, or function models) and Boolean algebra and logic diagrams. David and John changed them around, integrated Boolean principles, and put some discipline into building them to suit our intended use.

This is how we developed the tool we needed to move technical problem solving forward, but keep the objective of any good problem solver in mind, which is an effective model against which to compare, so we can effectively ask, “What’s happening?” We decided to call our new tool E-FAST, because it is based on FAST models (link to FAST), but we always start building them based on the seven ways machines consume energy to create useful work. Since the Y-axis must be energy based, we refer to it as the E-axis. If, when trying to solve a product performance or reliability problem, the Y-axis is not based on one of the seven things machines do with an energy supply, then you are not using the best tactics available. (link to my article)

Only a few problems need to become team projects. If you can’t figure it out by yourself, ask for help, and still don’t get the answer, then you have a project on your hands. You need a small team, with a good leader. By the time you have a team, you must already have asked, “What’s wrong?” without getting the answer. Then it is time to ask, “What’s happening?” and “How is this supposed to work?” E-FAST, combined with the rest of our sound tactics, will help you do it the best way.

E-FAST DIAGRAMS

Focusing on the physics of function and the physics of failure is the purpose of E-FAST. We do it at the outset of a project. It is a new tactical approach to solving problems that is very exciting. E-FAST preserves the strategy of asking, “What’s different?” based on comparison to a model. Our models, however, are the most powerful. The most exciting thing is that it really gets back to the basics of how a machine manages energy in accordance with the second law of thermodynamics. Rather than being restricted to measuring a problem as the customer sees it, we can now measure in clever ways based on how machines consume energy to create useful work and how they decay from wasted energy or from energy exchanges with their environments. Now, instead of focusing on the Y-axis, we focus on the E-axis. E-FAST is a diagram of energy-based contrasts that includes the governing physics principle to keep the scope of a project as narrow as possible.

Another exciting outcome of using E-FAST diagrams is their value in reengineering activities. With the discipline they bring, the opportunities for reengineering become very clear to the executives as well as the engineers. The understanding that E-FAST diagrams provide concerning machine energy exchanges with the environment also allows us to develop clever ways of accelerating life testing without creating foolish failures.

E-FAST is the key to really closing the loop in the science of solving technical problems and gets us asking the right questions, keeping strategy and tactics effectively in phase.

This is what The New Science of Fixing Things is all about.

David Hartshorne and John Allen have years of experience solving tough technical problems in manufacturing, product performance and reliability around the world, mostly with people who make planes, trains and automobiles. They thrive on solving technical problems with speed, order, discipline and simplicity, and teaching clients how they do it!

David and John have created The New Science of Fixing Things in order to offer their most significant developments in a new training seminar. For more information, visit www.tnsft.com