
Comments


The Comedian

I'm not sure which came first, the Dilbert above or the Daily WTF's article on "The Defect Black Market," but they certainly go together.

http://thedailywtf.com/Articles/The-Defect-Black-Market.aspx

Elsa

I bought the No Asshole Rule book and I love it! I can't say that I would have bought it had it not been for that (what some might deem offensive) adjective. I was actually in the bookstore browsing for something else when the title caught my eye, and I picked it up to see what the book was about. I'm really glad I did, because I bought it, read it, and loved the new terminology I learned from it, such as "certified asshole," and I was enlightened to learn that I too have been a "temporary asshole" at certain times. I am also happy to report that my quest for a workplace free of these menacing lower body orifices has finally been realized!!! :-D

Kevin Rutkowski

Since I don't directly tie metrics to performance reviews, the people on my team don't seem to try to game the metrics that I want to collect. However, my goals are sometimes thwarted by other managers.

For example, I want my testers to record every defect that they find so we have an accurate picture of the testing effort. However, one development manager directly ties performance of his developers to the number of bugs that my team finds. This results in the developers on his team pressuring the testers on my team to not record bugs. They ask if the testers will simply tell them about the bugs so they can be fixed without an official record.

As I mentioned in my previous comment, I have found that many of the best developers produce the most bugs. This is because the best developers are usually given the most complex and far-reaching portions of code to write. Regardless of the skill of the developer, there seems to be a direct correlation between the complexity of code and the number of errors.

Wally Bock

My experience cries out that every choice has both intended and unintended consequences. The problem is often that the perverse outcomes only become obvious in retrospect.

Adrian

A comment regarding the linked article: near the end, Google Earth is used as an example of a product developed in an employee's "spare time." However, Google Earth was developed by Keyhole, Inc., which Google acquired in 2004.

Peter G. Klein

I'm not sure why you think the term "perverse incentives" is, well, perverse. Perhaps you think economists are making a normative statement about the characteristics of people affected by such incentives. But "perverse" is used here in its literal sense of simply "counter to what is expected" or "contrary."

dblwyo

Marvelous. Hear, hear. It ain't just s/w development. Though on my last big s/w team, we measured whether the product came out of the final testing process and worked, AND we also measured success against the market-driven product release plan.
This applies to all jobs in all firms - consider the recent Wall St. catastrophes - what were people getting paid for? Moving product. Did they make good decisions? Please. Check out MSFT's "code rot" problems with Longhorn, from which it has never recovered.
This is a general principle - so when will your next book take a deep dive on incentive and comp systems vs. enterprise objectives? How 'bout paying people not to be assholes?

Bob Sutton

Kevin,

Fascinating comment. Your point about the most valuable testers being the ones who can identify the root causes of bugs seems especially wise, and something that could be applied in many other settings -- hardware engineering and surgery, for example.

Kevin Rutkowski

I have worked in IT for 14 years, much of that as a software QA management consultant. I have seen some common perverse incentives for software testers:

1) rewarding testers for the number of test cases they write, which results in cursory, poorly written test cases;
2) rewarding testers for number of bugs they find, which results in a high number of unimportant or duplicate bugs reported; and
3) penalizing testers for bugs rejected by the test lead or development staff, which results in bugs going unreported.

Of course, all of those metrics are useful, but too many managers use metrics out of context. As a manager, I do look for anomalies in those metrics among testers. However, I look into why the anomalies exist and make decisions based on that information.

It may be that an anomaly indicates that a tester needs coaching, but more often, it means that the tester is working on solving the most difficult problems. Often, the most valuable software testers are those who report fewer bugs but are able to identify the root cause of bugs that are extremely difficult to reproduce consistently.
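To make that anomaly check concrete, here is a minimal sketch in Python; the tester names, bug counts, and the z-score threshold are invented for illustration, not taken from Kevin's comment. It flags testers whose bug counts sit far from the team average as prompts for a conversation, never as scores to reward or penalize:

    from statistics import mean, stdev

    # Hypothetical per-tester bug counts for one release cycle (made up).
    bugs_reported = {
        "alice": 42,
        "bob": 39,
        "carol": 8,
        "dave": 45,
        "erin": 41,
    }

    mu = mean(bugs_reported.values())
    sigma = stdev(bugs_reported.values())

    # Flag testers far from the team average. An anomaly is a question to
    # investigate (working on hard-to-reproduce bugs? needs coaching?),
    # not a number to tie to compensation or ratings.
    for tester, count in sorted(bugs_reported.items()):
        z = (count - mu) / sigma
        if abs(z) > 1.5:
            print(f"{tester}: {count} bugs (z = {z:+.1f}) -- worth a conversation")

Here only carol is flagged, and the right response is to ask why: she may be the tester chasing the subtle, hard-to-reproduce defects rather than the one underperforming.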

I do not tie any compensation or performance ratings directly to those metrics, because I have found that doing so generally results in unwanted behavior. From experience, I realized that tying metrics directly to performance has negative consequences, but since reading Hard Facts, Dangerous Half-Truths and Total Nonsense, I'm better able to convince my clients to change their methods for evaluating software testers.

