My dear friend and co-author Jeff Pfeffer and I have started a series of interesting conversations about what we might study next. We’ve been doing a little brainstorming and constructive argument. As part of this adventure, we’ve been talking about the impact of our last book on evidence-based management and what evidence-based management means.
One of the themes that we keep returning to is our concern that managers and the business press seem to automatically assume that quantitative evidence is always the best evidence. This point came home especially in a recent Wall Street Journal article by Scott Thurm called “Now It’s Business By Data, But Numbers Can’t Tell The Future.”
Scott talks about how quantitative data have helped companies including Yahoo!, Google, and Harrah’s gain competitive advantage, and discusses our book Hard Facts and Tom Davenport’s Competing on Analytics, with the implication seeming to be, based on stories from P&G and Google, that evidence-based analysis is useful for making short-term tweaks, but not for seeing the future or making big breakthroughs. I think that this perspective is partly right, although quantitative evidence can also lead to huge changes in organizational strategy (consider one hard fact: the huge number of baby boomers retiring in the next decade is shaking a lot of organizational strategies).
But there is an implication in this article and others that I find especially disturbing: the message seems to be that evidence-based management means management by quantitative data. I reject that thought, and have always believed that there are times when qualitative data are more powerful, valid, and useful for guiding action than quantitative data. I will likely touch on this point more in future posts, but to get things started, here are three times when I believe that qualitative data are essential.
1. When you don’t know what to count. Unstructured observation of people at work, open-ended conversation, and other so-called ethnographic methods are especially useful when you don’t know, for example, what matters most to customers, employees, or a company. Just hanging around and watching can have a huge effect. I am reminded of something that happened years ago at HP. Senior management was concerned that people weren’t buying their PCs, so instead of just reading marketing reports, they each went out and tried to buy an HP at a local computer store. I remember then-CFO Bob Wayman telling us that it was one thing to hear that consumers weren’t impressed with HP PCs, and quite another to have a salesperson suggest that they ought to buy something other than an HP because HP machines were a poor value. HP is now the leader in the PC business, and although I am sure this one little experience wasn’t the main cause, it did help senior executives get a more complete understanding of which elements of the customer experience they might start counting.
2. When you can count it, but it doesn’t stick. As Chip and Dan Heath conclude in Made to Stick, statistics show that people are swayed by stories, not statistics. So even if you have good quantitative data to back your decisions, your decision will be harder to sell if you don’t have some compelling stories and images to go with it. To take the case of Procter & Gamble: they have had quantitative evidence for many years that the “in-store” experience of encountering a P&G product has a huge effect (beyond advertising, prior brand loyalty, and so on), but the message really sank in when folks from the Institute for the Future simply took CEO A.G. Lafley and his team shopping a few years back. This experience, in combination with work done with IDEO and P&G’s fantastic head of design, Claudia Kotchka, has helped P&G develop a deeper understanding of their customers’ experiences, and tell better stories, than could have happened through quantitative evidence alone. And it has led them to focus greater effort on designing the experience of encountering the product on the shelves: not just packaging, but also where and how the products are displayed, and the importance of educating store employees about their products.
As another example, our d.school students did a project about a year ago on ways that large financial institutions alienate young college grads who want to start saving money. Look at the desk to the left, which came with a banker in a three-piece suit. The students who went to talk to that banker were all under 25 years old and were dressed in shorts and t-shirts, but most had lucrative job offers, which meant that they would be making more than that banker in a few months. The setting and the banker were so stiff that the idea of putting their money in his bank seemed like a bad idea to the students; they found it intimidating and felt as if they couldn’t trust the banker or the institution. The picture of that desk (and the guy in the three-piece suit, not shown here) is much “stickier” than any survey finding that young people hesitate to put money in a bank or investment fund.
3. When what you can count doesn’t count. Researchers are always looking for things that are easy to count, so they can get numbers that are amenable to statistical analysis. There are times when these numbers do matter; sales, numbers of defects, and so on can be valuable. But in the hunt for, and obsession with, what can be counted, the most important evidence is sometimes overlooked. As Einstein said, “Not everything that counts can be counted, and not everything that can be counted counts.”
The best example I’ve ever seen of the limits of quantitative data, and the virtues of telling stories and of qualitative experience, is found on page 3 of John Steinbeck’s 1941 classic “The Log from the Sea of Cortez,” a book about a marine collecting expedition that he went on with his dear friend Ed Ricketts. I first heard about this from Karl Weick, and have repeated it in many contexts; it is one of those paragraphs that every manager and researcher in every field can benefit from:
The Mexican Sierra has “XVII-15-IX” spines on the dorsal fin. These can be easily counted. But if the sierra strikes hard so that our hands are burned, if the fish sounds and nearly escapes and finally comes over the rail, his colors pulsing and his tail beating the air, a whole new relational reality comes into being – an entity which is more than the sum of the fish plus the fisherman. The only way to count the spines of the Sierra unaffected by this second relational reality is to sit in a laboratory, open an evil smelling jar, remove a stiff colorless fish from the formalin solution, count the spines, and write the truth "D.XVII-15-IX." There you have recorded a reality that cannot be assailed -- probably the least important reality concerning either the fish or yourself.
Again, I am not rejecting quantitative evidence; it is essential in many settings. But qualitative evidence has great virtues as well, for spurring hypotheses, emotions, and for enabling us to “see” truths that aren’t easily counted. I love Steinbeck’s line: “The man with the pickled fish has set down one truth and has recorded in his experience many lies.”
This post is meant to get conversation started. When is quantitative evidence especially valuable? And when does it lead people to record apparent, even unassailable, truths that mask many lies and dangerous half-truths?
I liked your comment "But qualitative evidence has great virtues as well, for spurring hypotheses, emotions, and for enabling us to “see” truths that aren’t easily counted."
Last week I attended the @task 2010 User Conference and they made an interesting announcement about social project management. They developed a tool called Stream which basically does what you're talking about. Here's a link to learn more: http://www.attask.com/stream. It basically takes qualitative or conversational information and ties it into project reports. Very interesting idea. Everybody at the conference seemed to be very excited about the new platform.
Posted by: Social Project Management | May 17, 2010 at 09:35 AM
Thought-provoking post, Bob!
What about..."When you are dealing largely with an audience that can't count (for nuts..:))"
This may be an irrelevant example... but it is something I can think of immediately. If we're dealing with children, I guess we wouldn't achieve much if we were to be number-oriented.
My humble 2-cents...
Regards
Nimmy
Posted by: Nimmy | August 31, 2007 at 05:02 AM
Great and impactful insights, Bob! Connect this to The Black Swan by Nassim Nicholas Taleb (http://www.fooledbyrandomness.com/), on the impact of the highly improbable, and you have a powerful understanding of why data are not always what they seem.
Posted by: John L Warren | August 27, 2007 at 12:57 PM
A great post.
Posted by: Charles Frith | August 25, 2007 at 02:03 AM
I thoroughly agree - both quantitative and qualitative research have their place in uncovering insight and directing strategy. Your reference of "truths that mask many lies and apparent half-truths" points to a weakness of quantitative research when measuring certain issues: people sometimes report what's expected/encouraged and withhold what they really think/feel/do.
My most recent example of this phenomenon occurred during qualitative interviews I conducted with members of the Girl Scouts. During initial interactions, most of the girls reported the Girl Scouts as a hugely positive, fun program. In many ways, this response was what the girls felt they were expected to say. The notion of Girl Scouts as a fun and beneficial program has been ingrained in them by their parents and by society as a whole. Past studies supported this finding - current Girl Scouts members (especially the younger ones) love Girl Scouts.
But after an hour of in-depth conversation with these girls, a different picture emerged. Many of the girls admitted that if they were to choose between going to Girl Scouts or not attending at all, they would gladly choose the latter. In fact, some of the girls were dissatisfied with the Girl Scout activities but felt pressured to attend (and like) Girl Scouts by their parents. Quantitative work alone wouldn't have been able to uncover these buried attitudes towards Girl Scouts. The "truth" or status quo in this instance is the brand equity of Girl Scouts as a thing to love, a "shrine brand." Perhaps the "half-truth" comes in when you realize that some of the girls the program serves don't necessarily agree.
Of course, qualitative results are much harder to generalize. If some Girl Scouts in a select region feel this way, do others as well? We spoke with some die-hard Girl Scout families, and the fact that many were less than satisfied with Girl Scouts leads me to think that other girls probably share this attitude. People don't always say what they mean, and often don't know what they mean. Qualitative work has the ability to uncover latent attitudes, perceptions, and behaviors, a power that quantitative research lacks.
Posted by: Shauna Axton | August 22, 2007 at 11:36 AM
There are two tendencies at play here. First, many disciplines seem to me to suffer from "physics envy," the belief that only an answer supported by quantitative data is valid. Even a well-conceived study is only valid for a specific set of conditions. Peters' and Waterman's In Search of Excellence may not have met academic study standards, but it was a book that changed the way people thought and talked about management, mostly for the better.
Second, many people look at good research and cherry-pick the pieces they like best. This goes as far back as managers looking at Taylor's shovel study and keeping the part about shovel design and placement of the coal pile, but leaving the part about giving workers breaks out of their practice. Or, with Deming, keeping the statistical process control techniques but leaving out the worker empowerment.
Management is a complex human task in a constantly changing environment. It's important for us to try things, measure the results and adjust behavior. It's important to find ideas for new things to try in lots of places, including academic, quantitative studies. But sometimes the best thing to do is find someone who's done something successful and figure out what they did. Then try it, measure results (not necessarily count results) and adjust.
Posted by: Wally Bock | August 20, 2007 at 01:09 PM
I love this post.
When does quant matter more? This is the central question in my professional life. Right now, my best thought on this is that the closer, more routine, more familiar the territory you're operating in is, the more you can (and should) rely on quantitative data. But when you're out exploring new territory, qualitative evidence is probably more valid than quantitative data.
Posted by: Diego Rodriguez | August 19, 2007 at 08:50 PM
My favorite example of a quantitative measure which is of some value while masking lies: the calculated benefit to a company in considering mass layoffs. As you and Dr Pfeffer have often noted, Bob, the reduction in overhead which cutting staff provides is seldom a leading indicator of a company on the way up.
Instead, it is typically an indication of management seeking to do something to placate the market or their board. The costs which are difficult to measure, including productivity-destroying morale issues and other soft costs, are often ignored.
The unmeasured harm that such layoffs cause--be it unmeasured by choice or by ignorance--is most often greater than any reduction in overhead the staff downsizing can offer.
Great topic, Bob!
Posted by: Rick Hamrick | August 17, 2007 at 05:35 PM
Good post. Dr Deming has been misinterpreted by quantitative six sigma and lean people (largely a US influence); it is very important to stress that evidence isn't the same as numbers. Thank you.
Posted by: Brian Sherwood Jones | August 17, 2007 at 11:19 AM