Sunday, 20 May 2007

Effective software reviews

Definition: “A review of a work product, which passes stated entrance criteria, led by a moderator who is not the author,…, that uses product specific checklists, uses scenarios and/or other effective reading techniques, initiates re-inspection based on stated criteria, passes or fails the work product based on exit criteria, and adds to the base of historical data.” (Software Technology Transition)

How is the effectiveness of an inspection measured? Mainly through the percentage of the defects in the work product that it identifies. If an inspection is performed rigorously (i.e., it fits the definition above), this yield is typically around 50%, but it can go higher than 90%. Another measure is the time spent to detect a defect: the more mature the team, the lower this time.
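As a quick illustration, with made-up numbers rather than figures from any real project, here is how these two measures – detection yield and effort per defect – could be computed in Python:

# Illustrative only: hypothetical review data, not real project figures.
defects_found_in_review = 12   # defects the inspection caught
defects_found_later = 8        # defects that escaped and surfaced afterwards
review_effort_hours = 6.0      # total preparation + meeting time

# Yield: percentage of the defects present that the inspection identified.
yield_pct = 100 * defects_found_in_review / (defects_found_in_review + defects_found_later)

# Effort per defect: hours spent for each defect detected.
hours_per_defect = review_effort_hours / defects_found_in_review

print(f"Inspection yield: {yield_pct:.0f}%")            # 60%
print(f"Effort per defect: {hours_per_defect:.2f} h")   # 0.50 h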

It is important to find out which technique has the lowest cost of defect detection and removal. Depending on the type of architecture, design and programming language, either modular/unit testing or inspection can be the most efficient and the cheapest. (The efficiency of testing is also given by the time spent to detect a defect – test implementation plus test execution.) Whatever the answer, a combination of these two defect-identification techniques is certainly more efficient. Whether the review should be performed before or after testing is open to discussion, because it depends very much on the particularities of the artifacts.

Since the “cost of defect removal” came up, just think about what it takes to fix a defect before merge, integration and release, versus fixing it between integration and release (a new development branch, new regression testing, new integration, new sanity tests), or after the release, when it is reported by a customer or even found by yourself (all of the above plus all the release activities; and when there is a validation team between the customer and development, also include the retesting this team has to do, in man-hours).
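To make the escalation concrete, here is a rough back-of-the-envelope sketch in Python; the man-hour figures are invented for illustration, only the stages mirror the ones above:

# Invented effort figures, for illustration only; the stages mirror the text above.
cost_before_merge = 2                               # fix in the working copy, re-run unit tests
cost_before_release = cost_before_merge + 6         # + new dev branch, regression, integration, sanity tests
cost_after_release = cost_before_release + 10       # + all the release activities, customer handling
cost_with_validation_team = cost_after_release + 4  # + retesting by the validation team

stages = [
    ("before merge/integration", cost_before_merge),
    ("between integration and release", cost_before_release),
    ("after release (customer report)", cost_after_release),
    ("after release, with validation team", cost_with_validation_team),
]
for stage, cost in stages:
    print(f"{stage:36s}: ~{cost} man-hours per defect")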

In order to increase the efficiency of peer reviews, it is very important to keep the “process overhead” as low as possible and to make sure that the individuals involved in the inspection know how and what to look for, why the data they are asked to record is important, and what it will be used for.

Let’s look at a few industry figures for how much of various artifacts can be inspected effectively in one hour:

Code: up to 150 lines of code (LOC) per hour

Requirements: 3 to 8 pages per hour

High-level design: 3 to 8 pages per hour

Low-level design: 6 to 16 pages per hour, or 100 to 200 pseudocode lines per hour

Usually, for each of the rates above, the preparation time is also about one hour.
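Assuming the conservative end of each range above, a small sketch like the following could be used to budget review time for a work product; the artifact sizes are hypothetical:

# Conservative ends of the ranges above; artifact sizes are hypothetical.
rates_per_hour = {
    "code (LOC)": 150,
    "requirements (pages)": 3,
    "high-level design (pages)": 3,
    "low-level design (pages)": 6,
}
artifact_sizes = {
    "code (LOC)": 1200,
    "requirements (pages)": 20,
    "high-level design (pages)": 12,
    "low-level design (pages)": 30,
}

for artifact, size in artifact_sizes.items():
    meeting_hours = size / rates_per_hour[artifact]
    # Preparation takes roughly another hour per meeting hour.
    total_hours = 2 * meeting_hours
    print(f"{artifact:28s}: {meeting_hours:4.1f} h meeting, ~{total_hours:4.1f} h with preparation")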

That being said, what happens if there is no time left to inspect everything? Well, in this case, previous experience or, better, a historical database is very useful: some pieces of code, requirements and design contain more defects than others. So, when time is not enough, reviewing these parts and the functionality critical to the end users still assures a pretty good efficiency.
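A historical database can be as simple as defect counts per module; the sketch below (with made-up module names and numbers) ranks candidates by past defect density and boosts the functionality critical to end users:

# Made-up historical data: defects previously found per module and module size.
historical = {
    "billing":   {"defects": 34, "kloc": 5.0},
    "reporting": {"defects": 6,  "kloc": 8.0},
    "auth":      {"defects": 21, "kloc": 3.0},
    "ui":        {"defects": 9,  "kloc": 12.0},
}
critical_for_users = {"auth", "billing"}   # functionality critical to end users

def review_priority(module: str) -> float:
    data = historical[module]
    density = data["defects"] / data["kloc"]          # defects per KLOC so far
    return density * (2.0 if module in critical_for_users else 1.0)

# Review from the top of this list until the available time runs out.
for module in sorted(historical, key=review_priority, reverse=True):
    print(f"{module:10s} priority = {review_priority(module):5.2f}")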

Summarizing, in order to have an efficient system of peer reviews, the following are mandatory:

No management pressure on the author (the person who evaluates the author in terms of performance management shall not attend the peer review, unless he/she is the only person with the required technical skills; and, in this case, he/she must be confident that he/she is able to review only the product and not the author)

Clear entrance criteria (a small sketch of such a check follows this list):

What is the state of the artifact that will be reviewed?

What are the rules it must obey?

What is the amount to be reviewed?

What is the uplift factor (how much of the total amount has been modified)?

Clear roles assigned to the attendees

All the attendees are prepared for the review

Checklists (not longer than a page) for each type of artifact

Criteria for deciding on additional sessions and re-inspections

Clear understanding of the data to be collected and the reasons for collecting it
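As promised above, here is a minimal sketch of an entrance-criteria check a moderator might run before scheduling the meeting; the field names and thresholds are assumptions chosen only for illustration, not prescribed values:

# Field names and thresholds are assumptions chosen only for illustration.
def meets_entrance_criteria(artifact: dict) -> bool:
    checks = [
        artifact["state"] == "ready-for-review",   # baselined, compiles, spell-checked
        artifact["follows_standards"],             # obeys the stated rules
        artifact["size_loc"] <= 300,               # small enough for the planned sessions
        artifact["uplift_factor"] >= 0.2,          # enough of it actually changed
    ]
    return all(checks)

candidate = {
    "state": "ready-for-review",
    "follows_standards": True,
    "size_loc": 240,
    "uplift_factor": 0.35,   # 35% of the artifact was modified
}
print(meets_entrance_criteria(candidate))   # True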

The tools used to support peer reviews ease the “process overhead”. For sure, some data still has to be collected and recorded, but this is the only way to build the historical database which will help in the future with better estimations and with enhancing the peer reviews.
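What such a tool records could look like the sketch below; the exact fields are an assumption, the point is simply that every review leaves a row that later feeds estimations:

# The field names here are assumptions; any comparable set of fields will do.
from dataclasses import dataclass
from datetime import date

@dataclass
class ReviewRecord:
    artifact: str            # e.g. "low-level design, module auth"
    size: float              # LOC or pages actually reviewed
    preparation_hours: float
    meeting_hours: float
    major_defects: int
    minor_defects: int
    reinspection_needed: bool
    review_date: date

records = [
    ReviewRecord("auth.c", 420, 3.0, 3.0, 5, 11, False, date(2007, 5, 14)),
]

# Such rows accumulate into the historical database, e.g. defects found per hour:
total_defects = sum(r.major_defects + r.minor_defects for r in records)
total_hours = sum(r.preparation_hours + r.meeting_hours for r in records)
print(f"Defects found per review hour so far: {total_defects / total_hours:.2f}")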

I’ll finish this article with the “Seven Deadly Sins of Software Reviews”, as identified by Karl Wiegers of Process Impact - http://www.processimpact.com/ :
1. Participants don’t understand the review process
2. Reviewers critique the producer, not the product
3. Reviews are not planned
4. Review meetings drift into problem solving
5. Reviewers are not prepared
6. The wrong people participate
7. Reviewers focus on style, not on substance
