University of Minnesota
Software Engineering Center

Software Metrics - A Panel and Group Discussion

Date of Event: 
Thursday, February 6, 1997 - 1:00pm

The February TwinSPIN meeting was attended by about 20 members and sponsored by United Defense, L.P. and Mel Brauns. After introductions and some organizational items, those in attendance were treated to an extremely interesting and informative panel discussion. More on the topic below.

The feedback on a joint dinner program with TCQAA was positive, but due to time constraints we are looking at moving this event to next fall or next year. We have several members looking for a top-notch software speaker. If you can help, let Jesse Freese know.

We also identified some topics of interest for future meetings:

  • Risk Management
  • Pros and Cons of Contract Labor
  • The new CMM model - How does it compare to MSF (Microsoft Solutions Framework) and SDD?
  • Creating and managing geographically diverse teams
  • Creating and managing integrated product teams
  • Configuration management
Anyone who wants to lead one of these topics, or who knows someone who can speak to them, please let me know. I would appreciate any help you can provide.

I have had feedback from several people that my messages have format problems; some see stray "=20" characters at the end of their lines. I am trying something different with this message, so for those of you who have had that problem: does it show up in this message too?

The following is compliments of our great and wondrous note taker, Gail Bertossi:

Panelists:
  • Dennis Robison - Honeywell
  • Dotty Coffey - AEFA
  • Tom Brown - West Publishing
  • Dick Hedger - DataCard
Each of the presenters described their experience with software metrics and measures. This was followed by group discussion. The following report summarizes each presenter's material and concludes with some notes from the group discussion.

Dennis Robison - Lessons Learned at Honeywell

Dennis presented 10 lessons learned.

1. Don't underestimate how long it takes.

  • The SEPG used a bottom-up approach to introduce metrics.
  • A key element that was missing was the support of upper management.
  • The effort started in 1992.
  • 1.5 years were spent defining measures and metrics.
  • 1.5 years were spent designing the collection form in Excel.
  • 1996 - started to use metrics; forms revised once.
2. Keep the number of metrics to a minimum
  • Less resistance
  • Less rework when definitions change
  • The original 30 metrics were pared down to 10
  • All projects select a subset for their use (Basic 4 plus others)
Basic 4: cost, schedule, size, problem reports
Others: effort (hrs), critical computer resources, project risks, development productivity, cycle time productivity, engineering productivity, requirements stability.

3. Stabilize your definitions.

  • Treat the definitions as the metric program requirements
  • Changing definitions may devalue or render already collected data obsolete
4. There really is a step to go from definition to collecting data.
  • Honeywell has 19 measures for software size (a possible record layout is sketched after this list):
  • Project name, effective date of software, CSCI name
  • Planned & actual logical LOC (new, modified, unmodified, total)
  • Planned & actual physical LOC (new, modified, unmodified, total)
  • Product area
  • Type of Software application
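
As a rough illustration of the step from definition to collected data, here is a minimal sketch of one size record per CSCI. The class and field names are hypothetical and only mirror the measure list above; they are not Honeywell's actual Excel form.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class LocBreakdown:
        """Lines of code split the way the measure list splits them."""
        new: int
        modified: int
        unmodified: int

        @property
        def total(self) -> int:
            return self.new + self.modified + self.unmodified

    @dataclass
    class SizeRecord:
        """One size measurement for one CSCI (illustrative field names)."""
        project_name: str
        effective_date: date
        csci_name: str
        product_area: str
        application_type: str
        planned_logical: LocBreakdown
        actual_logical: LocBreakdown
        planned_physical: LocBreakdown
        actual_physical: LocBreakdown
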
5. Make the collection mechanism easy for the users. Automate where you can.

Dennis was of the opinion that the Web would be a better tool for data entry than Excel. He stated that the latest version of Excel does not support the 4.0 add-ins used in the original Excel form. Time has been spent compensating for dropped add-ins as they advanced to later versions of Excel.

  • A first-time user takes between 30 minutes and 2 hours to complete the data collection form.
  • An experienced user takes about 15 minutes.
  • Data is collected and analyzed monthly.
6. Watch out for culture issues.
  • People fear that the metrics will be used to rate them individually.
  • Project leaders are concerned about the amount of time to collect metrics.
7. Designing metric displays (charts) takes time.
  • Make sure the important data stands out.
  • Keep the charts simple.
  • Use multiple types of charts, but keep them appropriate. (Don't use pie charts to show LOC.)
  • Make charts available, but tamperproof. (e.g., on web in .PDF format or paper copies only).
8. Some metrics have little value for some types of products.

Some R&D projects spend a whole year in requirements or design. Code size may not be appropriate.

9. Process metrics have more value when a common process is used.

Resist the temptation to compare projects that do not share a common process.

10. Get management support first.

The SEPG is only getting that support now that the charts are available for all to see.


Dotty Coffey - American Express Financial Advisors

The level of process maturity varies across the company. System Integration and Test was chartered to unify the processes and metrics. The development environment is diverse: Smalltalk, Visual Basic, mainframe, Unix. The focus is on PC development. Metrics are used, but not uniformly.

The current task is to

  • Identify where we are
  • Identify where we are going
  • Identify when we will be able to predict the following:
  1. when a unit is ready to leave a testing phase for the next phase
  2. what type of data is needed to determine the answer to 1.
  3. what data is currently available.
Lotus Notes is the repository for the data collections. Projects reuse each other's templates, but the use of Lotus Notes and the templates is not coordinated or standardized. Examples of collected data are PTR rate, PTR closure rate, and test case prep rate.
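
As a minimal sketch of how rates like these could be derived from raw counts, assuming simple lists of PTR open and close dates (the function and field names are illustrative, not AEFA's actual Lotus Notes schema):

    from collections import Counter
    from datetime import date

    def weekly_ptr_rates(opened: list[date], closed: list[date]) -> dict:
        """Tally PTRs opened and closed per ISO week, plus the running backlog."""
        opened_per_week = Counter(d.isocalendar()[:2] for d in opened)
        closed_per_week = Counter(d.isocalendar()[:2] for d in closed)
        backlog = 0
        rates = {}
        for week in sorted(set(opened_per_week) | set(closed_per_week)):
            backlog += opened_per_week[week] - closed_per_week[week]
            rates[week] = {
                "ptr_rate": opened_per_week[week],          # new PTRs this week
                "ptr_closure_rate": closed_per_week[week],  # PTRs closed this week
                "open_backlog": backlog,                    # still open at week's end
            }
        return rates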

The source of estimates is not known. Testing effort is assumed to be proportional to the allotted development time and effort.

The current action item is to provide concrete examples (cases) of how controversial metrics will be used. This task is very difficult given the diversity of backgrounds.

The biggest challenge is to coordinate and standardize the use of Lotus Notes across the organization.


Tom Brown - West Publishing

Tom Brown shared his experiences as a developer at a small (5-developer) company (not his current company). He described the metrics he used there because they proved very useful.

Tom wore his salesman hat for metrics. He said that selling metrics is a constant struggle; it is difficult to get people to collect and use metrics, especially after a bad experience in which management did not understand the metrics presented to them and misinterpreted the data because of a lack of background in statistics.

Tom used the Tracker tool to track bug reports, code reviews, and accepted code changes; the data was stored in an Access database. He then charted the number of tracked items over time (weeks). After a code freeze, Tracker also tracked the number of customer-reported problems.

The data charts showed that if the code freeze came too early, the number of customer-reported problems submitted to the customer support center increased and did not come back down. If the freeze came at the right time, the number increased but then leveled off.
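
A rough sketch of the trend question those charts answered: given weekly counts of customer-reported problems after a freeze, has the series leveled off? Everything below (function name, window, tolerance, sample data) is an illustrative assumption, not the actual Tracker/Access setup.

    def leveled_off(weekly_counts: list[int], window: int = 4, tolerance: float = 0.1) -> bool:
        """True if the last `window` weeks of customer-reported problems are
        roughly flat (within `tolerance` of their mean), suggesting the code
        freeze was not premature."""
        if len(weekly_counts) < window:
            return False
        recent = weekly_counts[-window:]
        mean = sum(recent) / window
        return all(abs(count - mean) <= tolerance * max(mean, 1) for count in recent)

    print(leveled_off([5, 9, 14, 22, 31, 43]))   # False: still climbing, freeze likely too early
    print(leveled_off([5, 9, 12, 13, 13, 12]))   # True: problem reports have leveled off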

Tom used this data to convince management of the benefits of using the available data to avoid releasing product before it is ready.


Dick Hedger - DataCard

Dick shared experiences from 31 years at IBM Rochester and DataCard. He noted that Steve Kan's presentation to the TwinSPIN in September was based on 15 years of data.

Key Metrics

  • Delighted customers
  • Revenue - making money
Measures
  • Size - LOC or function points (just pick one)
  • Effort - person months (hrs) by development phase
  • Defects - pre-customer ship, starting no later than test
  • Defects - post-customer ship, from customers and further testing by development
  • Cost - $ by development process phase
  • Schedule - key milestones completed
  • Problems - customer calls to help desk
  • Number of customers - using the product (# shipped, # installed, # in use)
  • Customer satisfaction - by survey
Measurement Process
  • Measure process, not people
  • Define measures clearly
  • Plan - estimate vs. actual
  • Automate measurement so that collection and analysis happen as a consequence of process execution
  • Make key metrics visible (also improvement goals)
  • Normalize for process & product comparison
Metrics - A Place to Start

Definition: A metric is a computation using values from the measurements (e.g., LOC, $, person months, time). A few of these ratios are sketched in code after the list below.

  • Defects/KLOC (Pre & Post Ship)
  • LOC/PM by development phase
  • $/LOC by development phase
  • $/PM by development phase
  • Schedule attainment & slippage
  • Problems/User month
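
A minimal sketch of these ratio computations; the function signatures and the example figures are assumptions for illustration, not numbers from the talk.

    def defects_per_kloc(defects: int, loc: int) -> float:
        """Defect density (pre- or post-ship) per thousand lines of code."""
        return defects / (loc / 1000)

    def loc_per_person_month(loc: int, person_months: float) -> float:
        """Productivity for a single development phase."""
        return loc / person_months

    def cost_per_loc(cost_dollars: float, loc: int) -> float:
        """Cost per line of code for a development phase."""
        return cost_dollars / loc

    def problems_per_user_month(problems: int, users: int, months: int) -> float:
        """Customer-reported problems normalized by installed usage."""
        return problems / (users * months)

    # Hypothetical example: 120 post-ship defects against 40,000 LOC -> 3.0 defects/KLOC
    print(defects_per_kloc(120, 40_000))
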
Summary
  • Tie to business success
  • Define measures and automate collection
  • Start simple

Group Discussion

At IBM, senior management met every other week to review project data, including defect detection rates. Questions were raised when detection rates were higher or lower than normal (based on historical data). The use of data caused questions to be asked and answered to determine whether or not a problem existed. The data itself could not answer the question, but the follow-up investigation could. If there was a problem, corrective action could be taken.

IBM's experience showed that there would be about 60 defects per KLOC from requirements to ship. Given the defects already found, they could therefore predict how many were left to be found.
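
A worked illustration of that rule of thumb (only the 60 defects/KLOC figure comes from the discussion; the product size and defect count below are made up):

    DEFECTS_PER_KLOC = 60  # historical rate, requirements through ship

    def remaining_defects(kloc: float, defects_found: int) -> float:
        """Estimate how many defects are still left to find."""
        return DEFECTS_PER_KLOC * kloc - defects_found

    # Hypothetical 50 KLOC product with 2,700 defects found so far:
    print(remaining_defects(50, 2_700))   # about 300 defects still expected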

Data was used at IBM to determine whether the product was ready to ship. The potential impact of known problems could be assessed. If defects existed that would severely impact the customer, then the product was not shipped. IBM developed a rule that product could not be shipped with Level 1 or Level 2 defects.

Q. Are there any metrics or measures to avoid?

Defects per module, because they often point to one individual.

Complexity, because it cannot be measured until a module is complete. It is useful input for inspections, but not for testing.

Productivity measures for dissimilar projects, unless multiple measures (size, schedule, effort, quality, application type) are used.

Q. Are there any tools to predict the total number of expected defects?

Yes, Larry Putnam's SLIM tool predicts the number of defects based on size, application type and other variables. Medtronic has had success using the tool.

Q. Are there differences in defect densities due to differences between OO and Structured methodologies?

No one could answer this question. Someone suggested that the measures may be different.

Q. Are there success stories for the use of metrics?

  1. Measuring requirements changes over time helped one group determine when requirements were stable enough to begin design.
  2. Having data from a tool like Cocomo helped management identify when and how many people to add to a project before it started.
  3. Measuring types and sources of defects helped identify training needs.

Our next meeting is March 6th. We will be looking at a sneak preview of a SEPG conference talk, "A Measurement Framework That Works", by Tim Olson.

Thanks,

Jesse Freese
Fissure
612-882-0800
Jesse_Freese@Fissure.com