How to create Sustainable Process Improvement

Critical success factors

What to do:

  1. Eliminate as many steps, actions, interfaces, sources of complexity etc. as possible. Anything that adds no value: get rid of it, so it cannot get in the way and needs no attention!
  2. Make sure that all the relevant steps, activities, data etc. are logically arranged: available at the right moment and place. Every action that needs to be performed should make complete sense to the person who has to take it. Step into the position of the ‘operator’ and check whether the right thing to do really makes sense from his or her perspective. (The right thing should feel like the natural thing to do.)
  3. Make sure all the ‘workplaces’ (whatever or wherever that may be) are absolutely ‘spick and span’: clean and well organized.
  4. Make sure the players in the process have a sufficient overview of what is going on; make it ‘visible’ at a glance what should be done, how and when. Use visual techniques wherever possible.
  5. If something goes wrong: do not focus on the person! Focus on the question: why did it make sense for this person to do the wrong thing? (If he had had a real choice, he would have done the right thing, wouldn’t he?) So whenever things go wrong: adapt the system to make it more ‘logical’ to do the right thing!

How to do this:

  1. Design the process together with all the key players that have to perform in the process; only in this way will you get all the crucial information, and the understanding from the team members needed to implement the new process.
  2. Be very, VERY critical when discussing what is value-adding and what is not (thus what should be enhanced and what should be eliminated). Typically, most of the non-value-adding activities are considered to be ‘necessary’… and thus they remain annoyingly present…
  3. Adding people, space, investments or complexity is NEVER a solution. Simplification is.
  4. Use the power of ‘human intelligent decision making’; allow decision making within the operation, based on clear decision criteria and reliable data.
  5. Use the power of human pattern recognition: the less cluttered the process, the easier it is to detect anomalies in the field. Allow people to respond to such anomalies in a way that makes sense.
  6. Always check: what would be the normal situation at this point in the process? What could be different, and what would make sense for the ‘operator’ to do in such circumstances?

Conditions:

  1. Allow ‘operators’ to be responsible for the correct outcome of the process by providing solidly designed processes and the conditions to fulfill the tasks within them.
  2. Do not focus on individual players; focus on the whole system.
  3. It is management that is responsible for providing a system in which players can perform.
  4. Only the players in the system can provide the inside knowledge about what they need to perform.

Checklist ‘Specifications’

The checkpoints below are a guideline to detect weak spots in your process, simply by analyzing just óne issue or problem.

To establish a stable and reliable process, these points have to be resolved for all relevant parameters. (A minimal data sketch of such a specification follows the checklist.)

  1. Define a specific issue or problem
  2. Investigate and define the REAL needs of the customer (with respect to this issue)
  3. Define the desired output
  4. Define the parameters to judge the desired output
  5. Define the upper and lower limits for each of those parameters
  6. Describe how to measure/test/detect those values. (How can I see I am within spec?)
  7. Describe what conditions have to be met to keep those parameters within the limits
  8. Describe how to fulfill those conditions
  9. Design a VISUAL system to monitor the conditions AND the actual status of the parameters.
  10. Describe what to do when conditions or limits are not being met.
  11. Prove that all parties involved know about the above points and that they CAN act on them and stick to them.
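
To make the checklist concrete, here is a minimal sketch in Python of points 4 to 6: a parameter that judges the output, its upper and lower limits, and a check that shows at a glance whether a measured value is within spec. All names and numbers are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    """A parameter used to judge the desired output (points 4 and 5)."""
    name: str
    unit: str
    lower_limit: float
    upper_limit: float

    def within_spec(self, measured: float) -> bool:
        """Point 6: 'How can I see I am within spec?'"""
        return self.lower_limit <= measured <= self.upper_limit

# Hypothetical example: an invoice must be sent within 2 to 5 working days.
throughput = Parameter("invoice throughput time", "working days", 2.0, 5.0)
measured = 6.0
if not throughput.within_spec(measured):
    # Point 10: describe what to do when limits are not met.
    print(f"{throughput.name}: {measured} {throughput.unit} is outside "
          f"[{throughput.lower_limit}, {throughput.upper_limit}] - escalate!")
```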

Zero Defect, complete process control: is that possible?

Although the old masters like Deming, Juran and Crosby have been teaching us the same message over and over again (read the book “Quality Is Free” by Philip B. Crosby), it still is a message that seems hard to get across…

Is it possible to have a Zero Defect production system? Is it possible to make your processes completely stable?

Today, in a discussion with highly knowledgeable and experienced manufacturing experts, I heard this quote:

“There is a balance point where it costs more money to improve the process than it will generate. At that point you have to stop.”

Does that mean “Zero Defect” and “fully stable processes” are utopian?

Consider a German Delphi site, producing zero defects since 2001. It was audited by the German CETPM after it had striven for seven years for insourcing and excellent processes. The result:

Zero PPM defects, no accidents and less than 3% illness.

Is ‘complexity’ really ‘complex’?

On many occasions I have been struck by the fact that what we experience as ‘complex’ is sometimes just a construct in our head, because we do not understand what we see.

Example? To some, a computer is very complex, hardly comprehensible. To others it is a simple thing, which occurs in many variations on a single theme: every computer has a heart called the processor; around the processor we find a couple of input and output devices, and there are devices to store data, called memory. That’s it. Whatever computer you meet, basically they are all the same, provided you know where to look for the common denominator(s).

What has that to do with process analysis?

Assume you find a process that gives you a headache; it dazzles you. There are now three options:

  1. It is a very good process, but you simply do not know what it does and what you are seeing…
  2. It is a completely random set of actions that seem to belong together, but that is just ‘a coincidence’…
  3. Or it is an intermediate form: there is a kernel of valid actions mingled with some more or less useless and/or random activities…

We know that in most business processes, 95 to 99.5% of the activities do not add any value to the intended purpose of the process… (Peters, End of the Hierarchy)

The problem now is: we do nót know which part of the process is the valuable part and which part is the ‘noise’.

And even the valuable part of the process may show ‘an infinite number of deviations’…

Of the millions and millions of cars on the roads, there are simply no two identical cars; yet they are all unmistakably cars.

Of the 6 billion people on earth, each and every one is unique, and yet we all recognize them immediately as ‘a human being’, even the most mutilated or handicapped ones. Stronger still: we need to see only a part of the face to identify óne specific individual! And we do not know why that is…

Obviously there are some ‘parameters’, some ‘markers’ that we scan for, to distinguish a car from a bicycle, a man from a woman or a human being from an ape.

Human beings have a set of common denominators that distinguish them from apes. The eyes of a human being have a set of common denominators that makes it possible to discriminate one from the other. Complex? Only as long as you do not know the ‘system’ behind this process!

Untangling complexity

Let’s apply this knowledge to our processes… Could it be true that every invoicing process is basically the same? That they all have the same common denominators? Then what are the common steps in that process, and what are the possible variations on those steps? As soon as we understand this ‘system’ of activities needed to perform a certain task, we have the basis to design a process.

For cars we know: they all belong to a certain brand, and within each brand there are several engines, all from a basic palette (gasoline, diesel, LPG, electric…). Cars have a colour or set of colours, they all have wheels based on a rim and a tire, etc.

So although the occurrences are infinite, the parameters and their palettes of choices are limited!

As soon as the parameters and their palettes are identified, we have the ‘key’ to untangle the ‘complexity’ and bring it back to a set of simple choices…
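
A minimal sketch of that idea, with a hypothetical palette of car parameters: three parameters with only twelve palette entries between them already describe sixty distinct variants, yet any single car is fully captured by three simple choices.

```python
from itertools import product

# Hypothetical palettes: each parameter offers only a few choices.
palettes = {
    "brand":  ["VW", "Toyota", "Renault"],
    "engine": ["Gasoline", "Diesel", "LPG", "Electric"],
    "colour": ["red", "blue", "black", "white", "silver"],
}

# Every possible variant is one combination of palette choices.
variants = list(product(*palettes.values()))
print(len(variants), "variants from only",
      sum(len(p) for p in palettes.values()), "palette entries")
# -> 60 variants from only 12 palette entries

# One concrete occurrence is just a set of parameter values:
car = dict(zip(palettes.keys(), variants[0]))
print(car)  # {'brand': 'VW', 'engine': 'Gasoline', 'colour': 'red'}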

Examples

  1. Koch’s Curve shows how it is possible to create an endless pattern on a line by simply reshaping the line recursively. Fractals are other examples of apparently extremely complex figures that are basically recurrences of a much simpler activity. As soon as you see how the figure is constructed, its complexity disappears and it becomes a transparent pattern…
  2. Non-linear dynamics shows how to find out whether what you see is really ‘random’ or whether it has a structure. Although not proven yet, I feel it should be possible to use these techniques to detect whether the occurrence of an incident (like an accident or a quality defect) is truly random or has a structure. From Heinrich’s Law and Bird’s Law we know that there is a correlation between the presence of abnormalities and the occurrence of an incident. Accidents and incidents are therefore no longer an “unhappy conjunction of circumstances”: the incident simply hád to happen, given the parameters having a certain value at a given time! It is a deep misunderstanding of statistics to assume that things will not happen because the chance of them happening is incredibly small. The chance that that particular car in front of you at your next fuel stop would be there is one in a zillion, and yet tomorrow or next week it will be there! (A numeric sketch of this point follows this list.)
  3. Distinguishing ‘value’ from ‘noise’ or ‘non-value’ in a process allows us to eliminate unneeded activities from it. Fewer activities and correction loops in the process will reduce its complexity. To identify the steps taken and analyse their value, Makigami is the starting technique. To further design a robust and structurally correct process, the PSD is used.
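
The statistical point in item 2 can be checked with one line of arithmetic: for independent events with a tiny per-occasion probability p (the numbers below are hypothetical), the chance of at least one occurrence in n occasions is 1 - (1 - p)^n, which creeps towards certainty as exposure grows.

```python
# Chance of at least one incident in n independent occasions,
# each with a tiny per-occasion probability p: 1 - (1 - p)**n.
p = 1e-6                      # hypothetical 'one in a million' incident
for n in (10**3, 10**6, 10**7):
    at_least_once = 1 - (1 - p) ** n
    print(f"n = {n:>8}: P(at least one incident) = {at_least_once:.4f}")
# n =     1000: P(at least one incident) = 0.0010
# n =  1000000: P(at least one incident) = 0.6321
# n = 10000000: P(at least one incident) = 1.0000
```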

Design Rules

  • Data is collected as near to the source as possible
  • Who enters the data is its owner
  • Who owns the data enters it
  • Who owns the data is responsible for its correctness
  • Who guarantees is allowed to set requirements
  • Who performs a task guarantees its quality
  • Who cannot guarantee the quality of his output is obliged to stop the process
  • Who detects a downstream failure, returns it to the previous step without correction
  • Who makes a mistake solves it
  • It is allowed to make a mistake ónce, in order to prevent it from ever re-occurring
  • The management guarantees a system that allows the employees to perform their tasks within spec

Principles:

  • The shorter the throughput time, the less chance of disturbance.
  • Mistakes are chances for improvement; they are allowed ónce, not twice!
  • Any ‘Improvement’ that adds complexity is no improvement.
  • Just adding investments, technology, square feet or people is no improvement.

Chaos or order?

In the article about Koch’s Curve, ‘fractals’ were mentioned.
At first glance a fractal seems to be a rather ‘randomly’ created pattern. To anyone who doesn’t know better it might even be described as somewhat chaotic.
What fascinates me is this:

  1. Is chaos really always chaos?
  2. Is ‘unpredictable’ really always unpredictable?
  3. Is ‘unsolvable’ complexity really that complex?

Or is it just a matter of not knowing, and thus not seeing, the structure? Could it be that in many of those ‘unpredictable, uncontrollable and chaotic’ situations there is somewhere a set of rules, an algorithm, that makes the ‘chaotic’ situation transparent (because at once we can see the structure) and fairly predictable, thus controllable?
My real search is for the answer to this question: how do the theories and insights of nonlinear dynamics relate to Bird’s and Heinrich’s incident pyramids?
Since the Tripod and Grothus studies we know there seems to be something like ‘basic risk factors’, and we even have some kind of estimate of how their behaviour leads to all kinds of incidents and disasters.
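
A classic toy example from nonlinear dynamics illustrates the point (my own sketch, not part of the original text): the logistic map is a single deterministic rule, yet its output looks like noise until you know the rule, and two starting points that differ by a billionth diverge completely within a few dozen steps.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). For r = 4 the sequence
# is fully deterministic, yet to the eye it is indistinguishable from noise.
r = 4.0
x, y = 0.2, 0.2 + 1e-9          # two almost identical starting points
for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.2e}")
# The gap grows from 1e-9 to order 1: deterministic, but unpredictable
# in practice without knowing the rule and the exact starting point.
```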

Process Simulation: An example

A well-designed process is executed correctly under all circumstances (‘full-proof’ and fool-proof). Furthermore, a correct process does not contain unnecessary steps. This seems trivial, but in practice it is not easy.

Have a look at the following example:

At first sight this procedure looks correct. However, it contains three mistakes:

(Figure: the PSD of the ‘reading a book’ procedure.)

Mistake 1

The first action is to open the book. Then we need to assess whether the page number is unequal to the last page number. We cannot answer this question, because we have no information about the last page number! Furthermore, we do not know at which page we opened the book.

Mistake 2

What happens when we turn over the last-but-one page and arrive at the last page? Then the page number is equal to the last page number. At this point the condition is no longer satisfied, and the last page is never read.

Mistake 3

The situation at the start of the process is unknown. So the first action, opening the book, lands on a random page.

Benefits of simulation

A PSD gives the opportunity to simulate a process during the design phase. Gaps are identified easily because the process is visualized. This prevents mistakes during execution after implementation. Both structural and sporadic mistakes are traced at an early stage using the PSD simulation.

Below the example is corrected:
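
A minimal sketch of the corrected logic in code (my own illustration, assuming a book with at least one page): the start condition is now defined before the loop, and each page is read before testing whether it was the last one, so the last page is read too.

```python
def read_book(pages):
    """Corrected 'reading a book' procedure.

    Mistakes 1 and 3 fixed: the start condition is defined - we open the
    book at the first page and know the last page number up front.
    Mistake 2 fixed: each page is read *before* the end test, so the
    last page is read as well.
    """
    last_page_number = len(pages)          # known before we start
    page_number = 1                        # open at a defined place
    while True:
        print(f"reading page {page_number}: {pages[page_number - 1]}")
        if page_number == last_page_number:
            break                          # the last page has been read
        page_number += 1                   # turn the page

read_book(["Once", "upon", "a time", "The end"])
```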

Optimization

The next step is to analyse whether the correct PSD for reading a book can be optimized. Can we eliminate unnecessary actions? An option is to read the book without constantly paying attention to the page number.

Advantages

Without bothering your organization with trial and error, you can first design the needed workflow, the decisions that have to be made and the go/no-go points, before even thinking about how, and by whom, those actions and decisions need to be taken.

If the design of the process is full-proof, it will become your beacon to steer by when implementing it in your organization.

In Pursuit of Perfection

‘Process Control’ means to know in advance what the outcome will be…

The fascination for this phenomenon is widespread in Japan. Extreme process control can be fun, as this video shows!

Koch’s Curve

Koch’s Curve is a basic fractal. ‘Fractals’ were first described in 1975 by Benoît Mandelbrot, but these fascinating figures had already been discovered about a century earlier by mathematicians investigating bizarre mathematical behaviour, who called them ‘monster curves’.

A fractal is a geometric object which is highly irregular at every scale. Some of the most famous fractals have a self-similar structure: they repeat their structure at every level of magnification. One of the most familiar examples is Sierpinski’s Triangle. Many of these fractals can be generated by repeating a pattern in a recursive process.
Let’s start with a very early fractal-like phenomenon.

In 1904 Helge von Koch introduced the Koch curve. Here is how the curve is recursively constructed:

  1. Begin with a straight line (the blue segment in the top figure).
  2. Divide it into three equal parts, draw an equilateral triangle that has the middle segment as its base, and remove that middle segment.
  3. Now repeat, taking each of the four resulting segments, dividing them into three equal parts and replacing each of the middle segments by two sides of an equilateral triangle (the red segments in the bottom figure).
  4. Continue this construction indefinitely.
In the picture on the right, suppose for the sake of argument that the line segment in Stage 0 of the figure is 1 meter long.

The next stage, Stage 1, is produced from the previous stage by dividing the line in Stage 0 into three equal pieces of 1/3 the original length, then removing the middle third and inserting the ‘tent’ (the two upper sides) of an equilateral triangle.

Stage 2 is obtained from Stage 1 by applying the above process to each of the four straight line segments in Stage 1. And we continue… If you want to draw Stage n, you simply apply the process to the previous stage, Stage n-1. But, of course, you need to know all the prior stages in order to do this. The result is a sequence of drawings becoming more complex the higher the stage number, but still looking somewhat like the previous members of the sequence.

You can see in the figure that already at Stage 4 the drawing is quite complex, with much detail. In fact, if you continued the construction further, you might say that Stages 4, 5, 6, 7, … don’t look that much different from one another, and you’d be right. Of course they are fundamentally different, but at the scale we’ve drawn them, we can’t see much difference.
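
The construction translates directly into a short recursive program. A sketch (mine, not from the original article): each call replaces one segment by the four Koch sub-segments, until the desired stage is reached.

```python
import math

def koch_segment(p1, p2, depth):
    """Points of the Koch construction on segment p1->p2,
    from p1 up to (but not including) p2."""
    if depth == 0:
        return [p1]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
    a = (x1 + dx, y1 + dy)                 # one-third point
    b = (x1 + 2 * dx, y1 + 2 * dy)         # two-thirds point
    # Apex of the equilateral triangle erected on the middle third.
    ang = math.atan2(dy, dx) + math.pi / 3
    s = math.hypot(dx, dy)
    apex = (a[0] + s * math.cos(ang), a[1] + s * math.sin(ang))
    pts = []
    for q1, q2 in ((p1, a), (a, apex), (apex, b), (b, p2)):
        pts += koch_segment(q1, q2, depth - 1)
    return pts

def koch_curve(depth):
    """All points of Stage `depth`, starting from a 1-meter segment."""
    return koch_segment((0.0, 0.0), (1.0, 0.0), depth) + [(1.0, 0.0)]

# Stage n consists of 4**n segments of length (1/3)**n.
for n in range(5):
    print(f"Stage {n}: {len(koch_curve(n)) - 1} segments, "
          f"length {(4 / 3) ** n:.4f} m")
```

Feeding the points of, say, `koch_curve(4)` to any plotting routine reproduces Stage 4 of the figure.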

(Figures: the Koch curve at Stages 0 through 4.)

Niels Fabian Helge von Koch (1870-1924) was a Swedish mathematician who first played with the figures we are discussing. He noticed that as the stages progressed, the figures seemed to “settle down” to a figure not much different from that of Stage 4, as we’ve observed. He asked the question: “What happens to the figures if we continue the process indefinitely?” In other words, suppose you let your computer keep on executing this algorithm at high speed; could you tell the difference between Stage 1,000,000 and Stage 5,555,679, or beyond? Does this sequence have a limit? When zooming into this curve, it would look about like this:

(Figure: a zoom into the Koch curve.)

In fact, this sequence of drawings does have a limit, in a technical sense, and that limit is called “von Koch’s Curve”. What is interesting is that if you arrange 3 copies of the curve along the edges of an equilateral triangle, you get the figure at the left. Now it’s clear why it has also been referred to as the ‘snowflake fractal’.

What is the length of von Koch’s curve? The only way to answer such a question is by using limits. Here’s a guide:

  1. Recall that the line segment in Stage 0 was 1 meter long. Follow the process and compute the length of Stage 1, remembering that each straight segment has the same length.
  2. Compute the lengths of the next few stages (you may need a calculator for this). Can you see a pattern? Find a formula for the length of Stage n. Check your formula against the lengths you previously computed.
  3. What happens to the lengths as n becomes very large? Do the lengths settle down to a particular number? How are they behaving? Your answer should make you feel a little uneasy if you’ve never done this before.

Don’t panic. There are three possible answers to the command “Find the limit of this sequence (of numbers)”: the limit exists and is finite; the limit exists and is infinite; the limit does not exist. Notice that existence is an important part of the answer…
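
For those who want to check their answer (spoiler ahead), the computation the guide points to runs as follows: each stage replaces every segment by four segments of one third the length, so the total length grows by a factor 4/3 per stage.

```latex
L_0 = 1 \text{ m}, \qquad
L_n = \frac{4}{3}\, L_{n-1} = \left(\frac{4}{3}\right)^{n}
\quad\Longrightarrow\quad
\lim_{n \to \infty} L_n = \infty
```

So the limit of the lengths exists and is infinite: the curve stays within a bounded region, yet its length grows without bound.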

Online calculation of the Koch Curve:

http://www.arcytech.org/java/fractals/koch.shtml

Basic Concepts in Nonlinear Dynamics and Chaos:

http://www.vanderbilt.edu/AnS/psychology/cogsci/chaos/workshop/Workshop.html

Other Sources:

http://fractalfoundation.org/resources/fractivities/koch-curve/