In improvement science, we define “theory” as an idea or concept that’s testable. Theories are different from facts or laws, which imply that something is fixed and unchangeable. A theory invites inquiry and experimentation.
One powerful leadership tool is to listen for theories when people say things as if they were facts. Here are a few that I’ve heard recently:
- Response times are a lousy measure of EMS system performance.
- Too much focus on pain management by EMS providers is contributing to the opiate overdose epidemic.
- There has to be a consequence for poor EMS provider patient care.
- Patients die every day in our system who could be saved with rapid sequence induction facilitated intubation.
Those of us who are deeply immersed in the science of improvement have a few annoying habits. One of these is responding to pronouncements of fact with something like, “That’s an interesting theory, tell me more about that. Where did it originate? I’d love to see the evidence behind it.”
Several years ago, I worked with a Chief Financial Officer (long since retired) who would call those of us running EMS ambulance operations whenever one of our key performance numbers went down for the second month in a row.
The first time I got one of these phone calls he said, “Your transport volume is down for the second month in a row. What’s going on and what are you going to do about it?”
There was a long and dreadful silence after I told him, “Volume is down because fewer people called 911.”
I could not believe it when he responded, “Well, what are you going to do about that?”
It took all of my restraint not to say something career-endingly sarcastic.
Testing the CFO’s theory
A month or so after this phone call, I had the opportunity to ask him why he made these phone calls. He said, “A one month drop in performance can happen to anyone. If it happens a second time, then that means there is a trend and a real problem. When I call, things get better. And, quite frankly, a little fear is necessary to lead effectively.”
The way he said it indicated that this was fact, simply the way things are. He left no room for question or debate. Of course, I said, “That’s an interesting theory.”
Let’s break his declaration into testable chunks:
“When I call things get better.”
Since our organization (long since amalgamated) had several 911 operations in various locations, this theory would be easy to test. We could do a prospective controlled experiment. Half of the operations would be on the call list and the other half would be left alone. We could track their respective performance over time and compare the two groups. This method would not get us published in the New England Journal of Medicine, but we are doing improvement, not peer-reviewed research. The methodology does not have to be perfect, just good enough to reliably answer a performance question for our organization.
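For readers who like to see the mechanics, here is a minimal sketch in Python of that comparison. The operation names and the monthly on-time-percentage figures are invented for illustration; a real test would run for many months and use your organization’s own data.

```python
# A sketch of the comparison, not our actual analysis. All names and numbers
# below are made up for illustration.
call_list = {
    "Operation A": [88.1, 87.4, 89.0, 88.6, 87.9, 88.8],
    "Operation B": [91.2, 90.7, 90.9, 91.5, 90.4, 91.1],
}
left_alone = {
    "Operation C": [89.3, 88.9, 89.6, 89.1, 88.7, 89.4],
    "Operation D": [92.0, 91.6, 92.3, 91.8, 92.1, 91.9],
}

def group_average(group):
    """Average performance across every month for every operation in the group."""
    monthly_values = [value for months in group.values() for value in months]
    return sum(monthly_values) / len(monthly_values)

print(f"Operations that get the call: {group_average(call_list):.1f}%")
print(f"Operations left alone:        {group_average(left_alone):.1f}%")
```

If the operations on the call list do not reliably outperform the operations left alone, the “when I call, things get better” theory is in trouble.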
“A little fear is necessary to lead effectively.”
Like the question of spanking a toddler, this topic tends to produce hot-blooded opinions for and against. W. Edwards Deming, Ph.D., one of the originators of improvement science, is solidly in the “drive fear out of the workplace” camp. Fans of Machiavelli tend to believe that fear of punishment is helpful.
This is a good spot for reader participation. How would you test the impact of a fear-based leadership style? It would also be great to hear your stories about leading or being led with or without fear. Please respond in the comments or send me an e-mail. We will gather your perspectives and share them in a future article.
“A one month drop in performance can happen to anyone. If it happens a second time, then that means there is a trend and a real problem.”
This is a theory that expert statisticians have tested again and again since the 1930s. It highlights one of the cornerstones of improvement science: understanding variation.
All processes have variation. The processes that produce response time performance, your blood pressure and your team’s happiness level all yield different results at different times. Your blood pressure at this moment is not exactly the same as it was when you fell asleep last night.
These normal ups and downs are known as “common cause variation.” So the first part of our CFO’s theory, “a one month drop in performance can happen to anyone,” is supported by decades of statistical evidence. With common cause variation, a smart leader will not ask about specific data points, but will ask whether the overall performance is good enough. That’s because with common cause variation, the individual ups and downs don’t mean anything.
If you’re allergic to bees and get stung on the inside of your mouth by a bee that’s landed in your soda can, chances are your blood pressure will drop so low that a limbo dancer would not be able to get under it. This level of change in performance is known as “special cause variation” or, as Walter Shewhart, the father of improvement statistics, called it, “assignable variation.”
When you have special cause variation, smart leaders will ask, “What happened here?” That’s because special cause variation signals that something has changed for better or worse.
In a future article, we will teach you several ways to differentiate common cause variation from special cause variation. For now, let’s focus on just one: if you have six or more data points in a row that are going up or down, it’s a sign of a special cause. A run of five or fewer points going up or down is common cause variation.
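For those who want to see that rule in working form, here is a minimal Python sketch that counts the longest streak of consecutively increasing or decreasing points in a series. The monthly transport numbers are invented for illustration.

```python
def longest_run(values):
    """Length of the longest streak of consecutively increasing or decreasing points."""
    longest = current = 1
    direction = 0  # +1 while climbing, -1 while falling, 0 when flat or just starting
    for prev, curr in zip(values, values[1:]):
        step = (curr > prev) - (curr < prev)
        if step != 0 and step == direction:
            current += 1          # the streak continues
        else:
            current = 2 if step != 0 else 1
            direction = step      # a new streak (or a flat point) starts here
        longest = max(longest, current)
    return longest

monthly_transports = [412, 405, 398, 391, 388, 380, 377]  # made-up data
if longest_run(monthly_transports) >= 6:
    print("Six or more points trending in one direction: ask the special cause question.")
else:
    print("Common cause variation: ask whether overall performance is good enough.")
```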
So our CFO was asking a special cause question based on only two data points, which is common cause variation. When someone higher on the pecking order asks special cause questions about common cause variation, people will come up with answers that are logical, compelling, rational and completely fabricated.
Acting on those fabricated answers often makes things worse. Of course, that’s just my theory.