A couple of weeks ago, I raved about Better: A Surgeon’s Notes on Performance in a post I wrote over at The Working Life on Masters of the Obvious. Author and surgeon Atul Gawande provides a series of compelling essays about how, one small step at a time, by gathering, studying, and using evidence, and by focusing on small, seemingly unimportant details, mortality and complication rates can decline steadily in everything from battlefield injuries, to children suffering from cystic fibrosis, to halting the spread of polio. It is a compelling read, and it has many implications for what it takes to be a great manager, not just what it takes to keep improving the quality of medical care throughout the world. I was especially struck by how the best doctors and hospitals have what Jeff Pfeffer and I have called “the attitude of wisdom”: they have the courage to keep acting on the best knowledge that they have right now and the humility to doubt what they know, so that when new information comes along, they can change their beliefs about what works – and their behavior too.
I was also taken with Gawande’s suggestion that, if a hospital or medical unit wants to improve its performance, one of the most effective ways is to study “positive deviants,” those statistical outliers that are doing far better than the rest. As I read his stories, especially about the best versus the average hospitals that treat cystic fibrosis, I was struck by the power of the approach. At the same time, I realized that it is remarkably similar to what many companies do when they benchmark: they find the very best performers in their industry – or another industry – and then try to imitate everything they do as closely as possible. This method can be useful, but at the same time, as our work on evidence-based management shows, it is risky if done in a casual way, without thinking about what you are imitating and why.
If you are going to try to learn from top performers, there are at least five pitfalls you need to keep in mind:
1. What seem like characteristics of top performers may not actually distinguish them at all from poor performers – don’t just look at winners, look at winners and losers. This was the main flaw in Peters and Waterman’s huge best-seller In Search of Excellence. They looked only at excellent companies, so it was impossible to tell whether what the winners were doing was any different from what the losers were doing!
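To see why sampling only on winners is so misleading, here is a toy simulation (my own illustration, not from any study the post cites). A “practice” is adopted completely at random and has no connection to success, yet if you study only the winners it looks like a hallmark of excellence:

```python
# Toy simulation of sampling on the dependent variable.
# The "practice" is adopted at random and has zero effect on success.
import random

random.seed(42)

# Each firm independently adopts the practice (70% chance) and
# succeeds (50% chance) -- the two are completely unrelated.
firms = [{"uses_practice": random.random() < 0.7,
          "successful": random.random() < 0.5}
         for _ in range(10_000)]

winners = [f for f in firms if f["successful"]]
losers = [f for f in firms if not f["successful"]]

def adoption_rate(group):
    """Fraction of firms in the group that use the practice."""
    return sum(f["uses_practice"] for f in group) / len(group)

# Studying only winners, ~70% use the practice, so it "looks like" a
# key to success. But the losers show the same ~70% rate: the practice
# does not distinguish winners from losers at all.
print(f"winners using practice: {adoption_rate(winners):.0%}")
print(f"losers using practice:  {adoption_rate(losers):.0%}")
```

Only by looking at the losers too do you see that the adoption rates are essentially identical, which is exactly the check that a winners-only study can never make.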
2. Watch out for the correlation-is-not-causation problem. Everyone learns this in statistics, but a lot of leaders forget it when they benchmark. Just because something is associated with performance doesn’t mean it causes performance. For an enduring example that seems to persist despite our complaints in the Harvard Business Review (as well as directly to Bain partners), go to www.bain.com. The very first thing you see is a chart that says “Our Clients Outperform the Market 4 to 1.” I remain amazed that the smart people at Bain have had this on their website for so many years. Do they really mean to imply that using Bain has a huge positive wallop? They have some bold-sounding and meaningless text beneath the chart (e.g., “Companies that outperform the market like to work with us; we are as passionate about their results as they are.”). The marketing people seem smart enough to duck the question of whether using Bain really drives these results, because they are smart enough to know there are many alternative explanations (e.g., perhaps firms that make more money can better afford an expensive management consultant). But they aren't wise enough to take the chart down – I suspect it is one of those sacred cows, something that many people realize is dumb but are afraid to change. I also want to emphasize that I am a big fan of Bain, in part because I think they are among the most evidence-based of the major consulting firms, but again, I urge them to take this down – it makes them look bad.
3. When you compare winners and losers, beware of “untested” differences. Just because every winner you look at does something and every loser you look at doesn’t do it isn’t enough – it may just be the result of a bad sample. This is one of the main problems with Jim Collins’ best-seller Good to Great. I find it a compelling read and would like to think, for example, that firms with level 5 leaders, those who are unselfish and relentlessly driven to improve firm performance, will trump firms with selfish and less driven leaders. But note that Collins reaches this conclusion by comparing his 11 “great” firms to an equally small matched sample of firms that didn’t make the leap. He fails to point out that no attempt was made to find firms that also had level 5 leaders but failed to make the leap – this tiny sample could easily have left out thousands of firms that had level 5 leaders yet never made the leap. Again, I like a lot of things about this book, but I do wish that Collins wouldn’t hold it up as such a rigorous study. While I think it has helped a lot of companies, it is not a model of rigorous research, and it could only be published in a peer-reviewed journal if it made careful links to the prior research that supports his conclusions (something the book doesn’t do) and if he acknowledged the numerous methodological flaws. See The Halo Effect for a more damning attack. I am not as negative about Good to Great, as I think it has helped many managers despite the excessive claims Collins makes about the research. But the well-crafted critique in The Halo Effect is worth reading and, frankly, I wish Collins would acknowledge some of these problems. It would still be a great book – everything has flaws, and few books are as compelling as Good to Great. Also, admitting the flaws strikes me as something a level 5 leader would do!
4. What is good for them might be bad for you. Consider the case of the quality movement. If General Motors had not massively improved the quality of its cars and trucks (despite its other persisting problems), I believe the company would simply no longer be alive today. Yes, Toyota continues to be a very tough competitor, and GM struggles, but GM’s quality has improved massively in the past decade, and without it, the company would be in far worse shape – if not out of business. BUT that doesn’t mean that every company needs a quality movement. Kodak, for example, had a fairly effective quality effort some years back, but the problem was that it was focused on their soon-to-be obsolete chemical-based film business – so it helped them become more efficient at doing the wrong thing.
5. Winners may succeed despite, rather than because of, some practices. This brings me to my favorite example. It is very well documented that Herb Kelleher, who was CEO of Southwest Airlines during an unprecedented run of growth and profitability in the industry, smoked a lot of cigarettes and (according to multiple reports, including his own) drank about a quart of Wild Turkey whiskey per day during this period. If mindless imitation of successful companies is the key to success, this means that you need to get your CEO to start smoking and drinking a lot – or to keep it up if he or she is already doing it. Sounds absurd, doesn’t it? But it is no different from the arguments that armies of consultants are making right now about GE, Google, and P&G – you should do it because they do it, and they are successful.
In closing, I want to emphasize that you can learn a lot from “positive deviants.” But you need to stop and think carefully about why they succeed and what might work for your organization. And, consistent with the argument I make again and again here and elsewhere – following from design thinking – if you are going to do something new, try some small, cheap experiments if you possibly can: it is a lot cheaper than rolling out a big program that turns out to be a bad idea. That is also why, although some mergers are a good idea, a merger is one of the organizational changes I see as most risky, because it is so difficult to reverse once it is started. Also, as I’ve written here before, mergers have high failure rates, despite the success stories you may hear from your local investment banker.
Yes, I agree that "Made to Stick" has important messages for us teachers striving to improve our part of the education process, and that it's not "just the facts". But then I read Cialdini's "Influence: Science and Practice" and Gilbert's "Stumbling on Happiness" and realized just how enormous that challenge really is. I still haven't recovered from those two books.
Posted by: JMG3Y | May 22, 2007 at 05:06 PM
Hi: I wish I could argue with you, but you are absolutely right. The notion that even the best practices affect the probabilities of success, rather than working in a pure "this is right" or "this is wrong" sense, ends up being a hard thing to teach. And also, something that is the best decision you can make right now may end up being wrong or incomplete later, as better data come along. We try to make this point in part by arguing that effective managers act on the best knowledge they have right now, but doubt what they know... but in practice I agree it is tough.
The stories are another interesting thing. An irony – if you read Made to Stick by the Heaths – is that stories will influence behavior more strongly than statistics (and they have the studies to prove it!), so the trick is to tell stories that are true, or as true as you can make them given what you know now, to sell evidence-based management.
Indeed, if you read things by David Sackett, one of the pioneers of evidence-based medicine in the U.S., he is a darn good storyteller!
Thanks again for your very thoughtful comments,
Bob
Posted by: Bob Sutton | May 22, 2007 at 11:25 AM
Bob: I meant to include the caveat that I haven't read Hard Facts as yet; it is on my list.
I'd add that teaching the EBM process to students is hard, at least for me. Students do not like dealing with the uncertainty and complexity on the professional knowledge base side as well as the patient side. They want solid facts upon which to base their actions; from their perspective just getting all the facts down is a tough enough task. Experienced practitioners are more amenable but tend not to react well to anyone questioning the integrity of their personal knowledge base.
Posted by: JMG3Y | May 22, 2007 at 08:16 AM
Thanks for the comment. Note that our book, Hard Facts, Dangerous Half-Truths, and Total Nonsense, does exactly what you suggest. It uses evidence-based medicine as a point of departure for understanding management. And while the post above is meant to consider logic rather than quantitative evidence, that book, and in fact many of the ideas on this blog, proposes ideas based largely on quantitative evidence from peer-reviewed studies – which I confess are weaker than medical studies, but better than what most companies use. At the same time, as quantitative evidence shows, points stick better when they are made with sticky stories. Thanks again for the comment.
Posted by: Bob Sutton | May 21, 2007 at 12:33 PM
A core principle of EBM (of the medical variety, which has a prior claim on the acronym) is the assessment of strength of evidence and the use of reliable, repeatable procedures to do so. Developing such procedures and then performing them to answer specific clinical questions is a difficult task involving incredible amounts of person-hours. The Cochrane Collaboration is the primary locus of this effort, and most of their materials are available online.
The strength of medical evidence hierarchy forms a logical pyramid, with the strongest (randomized controlled blinded clinical trials) being the least plentiful tip and the weakest (anecdotal case reports) being the most plentiful broad base. Another broad but useful classification is the study form - experimental (randomized allocation of subjects) vs. observational, the weaker base of case-control, cohort and the weakest, case series.
Most of what is described above as management evidence is at best observational evidence and at worst descriptive case series, the weakest form from a logical perspective. IMO, EBM of the business management variety could learn much and perhaps avoid some pitfalls by looking over the fence at the history and current developments in the EBM of the medical variety. For example, once it becomes popular, paying attention to the actual strength of that which is labeled EBM by its individual proponents is critical.
Posted by: JMG3Y | May 21, 2007 at 10:53 AM
I agree with 'Don’t just look at winners, look at winners and losers.' I agree with you that the positive deviance approach as described here has the disadvantage of creating a benchmarking situation. An alternative would be to look at positive deviance not only within the system (organization) as a whole but within individual members of the system too. With both top performers and underperformers you could look at when what they did worked well. If you can identify WITHIN the performance of an underperformer when his performance was adequate or even quite good, you can then invite the individual to treat this like an 'INTERNAL BENCHMARK'. This will lead to finding 'internal solutions' which have a higher chance of working, because people (1) know how to apply them, (2) have the skill to apply them, and (3) trust in the relevance and effectiveness of the solution. This is one of the basic ideas of the solution-focused approach: help people learn from their past successes. This can be done even if, overall, they don't function too well at a specific moment. There will always be moments when things have been better (or slightly better). These are the moments from which you can often learn a lot.
Posted by: Coert Visser | May 20, 2007 at 12:27 AM