Video of QASIG Presentation Focused on Managing Quality Debt

I had a great time speaking at the QASIG group meeting last night, met some great folks, and reconnected with others I haven’t seen in some time. Here is the video of the presentation.

 

[Embedded video hosted on Ustream]

And here are the slides:

Agile in Defense Conference

As the Department of Defense focuses on “delivering 75% solutions in months [instead of] 100% solutions in years,” Agile is finding its way into big, traditionally managed programs. This event http://www.afei.org/events/2A01/Pages/default.aspx specifically addresses Agile in Defense. I was invited to present following a successful session at the ADAPT meeting, and my talk also included tips for making Agile successful. See the presentation link below. Several of the speakers were from Defense departments and provided good insights.

The keynote speaker, Dr. Steven J. Hutchison, provided some compelling information, starting with his problem statement and the workflow diagram of the defense acquisition process. Dr. Hutchison also showed compelling information about integrating test throughout the lifecycle process, covering all testing types, OT among them.

Dr. Robert Charette followed with a solid reminder about risk management. Among the specific types of risk Dr. Charette discussed was acquisition risk, and he stressed that the acquisition folks must be involved. Dr. Charette also cautioned us that several good change efforts have failed in the past.

Mr. Tony Stout did an excellent job focusing on people: the qualities of good teams and how to build a quality workforce, specifically in Agile environments. He was the second speaker to come from a test focus, which I find refreshing.

Mr. Ronald Pontius is heading the Council on changes to governance needed to support agile development. Mr. Pontius updated us on the responses from several agencies and working groups to support Section 804 http://www.defense.gov/pubs/pdfs/804Reportfeb2007.pdf.

Impediment Monkey Overview Video Posted

If you haven’t already checked out our FREE online service, ImpedimentMonkey.com, take a look at this overview video. It describes what the service can do for you, your team, and those you collaborate with to increase the habitability of your team’s delivery environment and drive continuous improvement on a daily basis.

The Stability Index: Focusing on Release Stabilization

While recently working with Juni Mukherjee on a team focused on finding ways to extend and increase the value of a large legacy platform, she brought up what I thought was a brilliant idea. We had been working on creating metrics that have tension with each other to drive continuous integration effectiveness from the component level up to the deployed system, while continuing to generate value for users. Juni’s research and the team discussions on the topic went through multiple scenarios with different metrics to better understand how they balance and/or mislead. As we all know, any metric can be gamed. And not only that, a metric is not always valuable in one context even though it is extremely valuable in another. During the conversation Juni blurted out that what she was looking to come out with was a “Stability Index”. This brilliant phrase, along with the outcomes of our team discussions, led me to think that this is a valuable way to look at quality measurements alongside other release constraints to support delivery of continuous value.

This article is a first attempt at putting The Stability Index down on paper, as it is already in use, to some degree, in Juni’s organization. In the past, organizations and teams I have worked with have come up with similar approaches that allow us to balance effective quality signals. This should lead to early detection of what are usually weak signals of quality issues that are often found too late or whose symptoms are seen far away from the signal’s origin.

Goals of The Stability Index

The Stability Index is a function of signals that are observed in pre-production and production environments. The purpose of calculating a Stability Index is two-fold:

  • The Stability Index is an indicator of how an organization is progressing towards its business goals. For example, if the Stability Index goes up, Cycle Times shrink, the Release Stabilization Period goes down, and Customer Retention improves.
  • The Stability Index also reveals any correlation of pre-production signals to signals observed in production, as sketched just after this list. For example, when Code Coverage goes up, the number of defects found in production goes down.
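
As a hedged illustration of the second goal, here is a minimal sketch of checking whether a pre-production signal tracks a production signal across releases; the per-release numbers and the use of a plain Pearson correlation are my own assumptions for illustration, not part of the original Stability Index work.

```python
# Minimal sketch: correlate a pre-production signal with a production signal
# across past releases. The data and the choice of Pearson correlation are
# illustrative assumptions, not part of the Stability Index definition.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-release observations.
code_coverage_pct  = [55, 62, 68, 74, 81]   # pre-production signal
production_defects = [40, 33, 29, 22, 15]   # production signal

r = pearson(code_coverage_pct, production_defects)
print(f"correlation: {r:.2f}")  # strongly negative: coverage up, defects down
```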

If proper balance is given to these pre-production and in-production signals, then this should lead to a stable application platform that continues to deliver value at a sustainable rate as new business needs arise. Of course, this does not mean that business needs will be continuous or steady, so there is still potential to impact The Stability Index, but by keeping balance, a team or teams should be able to manage fluctuations in business needs more effectively.

Ultimately, creating an implementation of The Stability Index comes down to deciding on metrics that produce effective pre-production and in-production early warning signals. The following sections go into detail about the metrics that are initially used in this implementation of The Stability Index.

Pre-Production Signals

Pre-production signals are focused on the technical craftsmanship of engineering teams.

Percentage of Broken Builds

This metric is used to objectively measure behavior patterns of engineers who may not be using effective gating criteria before checking in code. Build breakages can occur while building and compiling source code, unit testing, distributing artifacts, or deploying images, or further downstream while testing for functional correctness, integration issues, and performance gaps. Irrespective of where the breakage is, code will not be able to flow to production through a fully automated pipeline unless engineers test code changes in their local sandbox environments before checking into the common repository.
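
The metric itself is simple to compute from CI history. Here is a minimal sketch; the build-record format is an assumption for illustration, and any CI server’s log export could feed it.

```python
# Minimal sketch: percentage of broken builds over a window of CI results.
# The build-record format is an illustrative assumption.
builds = [
    {"id": 101, "status": "passed"},
    {"id": 102, "status": "failed"},   # broke during unit tests
    {"id": 103, "status": "passed"},
    {"id": 104, "status": "failed"},   # broke during deployment
    {"id": 105, "status": "passed"},
]

broken = sum(1 for b in builds if b["status"] != "passed")
percentage_broken = 100.0 * broken / len(builds)
print(f"Percentage of Broken Builds: {percentage_broken:.1f}%")  # 40.0%
```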

Code Duplication

This metric is an early indicator of software debt: it shows that software programs contain significant sequences of source code that repeat, which calls for refactoring. Duplication has a high business impact in terms of maintainability.
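
Dedicated copy/paste detectors measure this automatically; as a rough, hedged sketch of the underlying idea only, duplicated blocks can be spotted by hashing sliding windows of normalized lines. The sample code and window size are my own illustrative assumptions.

```python
# Rough sketch of duplicate detection: record every window of N consecutive
# normalized lines and report windows that appear more than once.
# Real duplication tools are token-based and far more robust.
from collections import defaultdict

def duplicated_windows(source: str, window: int = 3):
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(lines[i:i + window])
        seen[chunk].append(i)
    return {chunk: starts for chunk, starts in seen.items() if len(starts) > 1}

sample = """
total = 0
for item in items:
    total += item.price
print(total)
shipping_total = 0
for item in items:
    total += item.price
print(total)
"""
# Note the copy/paste slip: the second block still updates 'total'.
for chunk, starts in duplicated_windows(sample).items():
    print(f"Repeated block at line offsets {starts}:\n{chunk}\n")
```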

Cyclomatic Complexity

This is a software metric that indicates the conditional complexity of a program (function, method, class, etc.) by measuring the number of linearly independent paths that can be executed. It is also an early indicator of software debt and is very expensive to the company when it comes to getting new employees up to speed.
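
In practice a metrics tool reports this for you; as a minimal hedged sketch of the common shortcut (one plus the number of decision points), complexity can be approximated over a Python function’s AST. The sample function is hypothetical.

```python
# Rough sketch: approximate cyclomatic complexity as 1 + the number of
# decision points (ifs, loops, boolean operators, exception handlers).
# Production metrics tools use a more precise control-flow analysis.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

sample = """
def classify(order):
    if order.total > 1000 and order.customer.is_vip:
        return "priority"
    elif order.total > 100:
        return "standard"
    else:
        return "economy"
"""
print(cyclomatic_complexity(sample))  # 4: one path plus three decision points
```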

Code Coverage

Code Coverage can be measured as Line, Statement, Decision, Branch, or Condition coverage and is a measure of how effective test suites are in certifying software programs. Note that 100% Line or Statement Coverage may give a false sense of security, since all lines may have been covered by tests even though important decisions, branches, and conditions have not been tested.

The caveat here is that a high code coverage percentage does not guarantee bug-free applications, since the primary objective of tests should be to meet customer requirements, and not all customer use cases may be exercised by tests even when Code Coverage is at 100%.
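
A small hedged illustration of that caveat (the function and tests are hypothetical): a single test can execute every line of a function, reporting 100% line coverage, while still missing important conditions.

```python
# Illustration of the Code Coverage caveat: the single test below executes
# every line of apply_discount (100% line coverage), yet the boundary at
# total == 100 and the rate == 0 branch are never exercised.
def apply_discount(total, rate):
    if total >= 100 and rate > 0:
        total = total - total * rate
    return total

def test_apply_discount_happy_path():
    # Executes the 'if', the assignment, and the return -> 100% line coverage.
    assert apply_discount(200, 0.1) == 180

# Cases that line coverage alone will not flag as missing:
#   apply_discount(100, 0.1)  -> boundary of the >= comparison
#   apply_discount(200, 0)    -> branch where the condition is false
```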

Test Cycle Time

Test Cycle Time is a measure of the time it takes to execute the entire test suite, before either bugs are reported that would cause another iteration or the software can be certified for production. For the same number of tests, the Cycle Time could be high unless tests are designed to be independent of each other and are launched in parallel. As an aside, when tests are launched in parallel, the test environments can often become a bottleneck in terms of providing the required capacity and reliability. Virtual environments may not offer a guaranteed share of the CPU, whereas shared clusters in distributed environments may queue jobs and hence make the execution time long and unpredictable.
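
As a hedged sketch of the parallelism point (the test names, durations, and worker count are illustrative stand-ins), independent tests can be fanned out to a pool so wall-clock cycle time approaches the slowest single test rather than the sum of all of them.

```python
# Sketch: run independent tests in parallel so Test Cycle Time approaches the
# duration of the slowest test instead of the sum of all tests.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_test(name, seconds):
    time.sleep(seconds)          # stand-in for real test work
    return name, "passed"

tests = [("test_checkout", 2), ("test_login", 1), ("test_search", 2)]

start = time.time()
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    results = list(pool.map(lambda t: slow_test(*t), tests))
elapsed = time.time() - start

print(results)                         # all three results
print(f"cycle time: {elapsed:.1f}s")   # ~2s in parallel vs ~5s sequentially
```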

Production Signals

Production signals are focused on the customer.

Customer Delight / Satisfaction

It’s all about the customer. Software is released to meet the customer’s needs and to leave the customer delighted and craving more. Although “Delight” can be subjective at times, surveys are an effective way to measure satisfaction. An example of customer dissatisfaction could be that, although the software behaves correctly, it takes more clicks to perform the same activity than it did in the previous version. Defects reported by the customer are a good measure of this metric.

Defect Containment

This is an important trend to watch out for since customers can be inconvenienced if their support tickets are queued up. Moreover, defects reported in production that translate into code and configuration errors should be fixed by the engineering team within acceptable SLAs depending on the severity of the issues. Being able to iterate fast is one of the key factors for customer retention.
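
One way to track the SLA side of this trend is the share of production defects resolved within their severity’s target. Here is a minimal sketch; the severity levels, SLA targets, and defect records are assumptions for illustration, not prescribed values.

```python
# Sketch: share of production defects resolved within a severity-based SLA.
# Severity names, SLA targets (hours), and defect records are illustrative.
SLA_HOURS = {"critical": 24, "major": 72, "minor": 168}

defects = [
    {"id": "D-1", "severity": "critical", "hours_to_fix": 20},
    {"id": "D-2", "severity": "major",    "hours_to_fix": 90},   # missed SLA
    {"id": "D-3", "severity": "minor",    "hours_to_fix": 100},
]

within_sla = sum(1 for d in defects if d["hours_to_fix"] <= SLA_HOURS[d["severity"]])
print(f"Defects fixed within SLA: {100.0 * within_sla / len(defects):.0f}%")  # 67%
```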

Uptime

Systems could go down due to hardware incidents like router malfunctions and disk crashes, or due to software inadequacies like fault tolerance not being built in. Either way, downtime causes revenue losses and is a critical contributor to the Stability Index.
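
The calculation is straightforward: uptime is the measurement period minus downtime, expressed as a percentage. A small sketch with hypothetical incident durations:

```python
# Sketch: uptime percentage for a 30-day period, given downtime incidents.
# The incident durations are hypothetical.
period_minutes = 30 * 24 * 60                 # 43,200 minutes in 30 days
downtime_incidents_minutes = [45, 12, 130]    # router swap, disk crash, bad deploy

downtime = sum(downtime_incidents_minutes)
uptime_pct = 100.0 * (period_minutes - downtime) / period_minutes
print(f"Uptime: {uptime_pct:.3f}%")           # ~99.567%
```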

Relationship of Signals to The Stability Index

Each of the above trends has a bearing on the Stability Index. Some trends are directly proportional to the Stability Index, while the trends of others are inversely proportional. For example, when Code Coverage goes up, it has a positive impact on the Stability Index. On the contrary, when Code Duplication goes up, the Stability Index goes down.

The relationships of all the metrics with Stability Index are illustrated below.

Pre-production metrics and their relationship to the Stability Index:
  • Percentage of Broken Builds: inversely proportional
  • Code Duplication: inversely proportional
  • Cyclomatic Complexity: inversely proportional
  • Code Coverage: directly proportional
  • Test Cycle Time: inversely proportional

Production metrics and their relationship to the Stability Index:
  • Customer Delight / Satisfaction: directly proportional
  • Defect Containment: directly proportional
  • Uptime: directly proportional
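
To make these relationships concrete, here is a minimal hedged sketch of one way an implementation could roll the signals into a single number: normalize each metric to 0..1, let the worst/best bounds encode direction (so inversely proportional signals are flipped), and combine with weights. The bounds, weights, sample values, and the simple weighted average are my own illustrative assumptions, not the implementation referenced above.

```python
# Hedged sketch of a composite Stability Index: normalize each signal to 0..1,
# invert the inversely proportional ones via worst/best bounds, then take a
# weighted average. Ranges, weights, and values are illustrative assumptions.

def normalize(value, worst, best):
    """Map a raw metric onto 0..1 (1 is best); works whether best > worst or best < worst."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

# (raw value, worst case, best case, weight) -- direction is encoded by worst/best.
signals = {
    "broken_builds_pct":      (12,   50,   0,  1.0),  # inversely proportional
    "code_duplication_pct":   (8,    30,   0,  1.0),  # inversely proportional
    "cyclomatic_complexity":  (18,   40,   5,  1.0),  # inversely proportional
    "code_coverage_pct":      (78,    0, 100,  1.5),  # directly proportional
    "test_cycle_time_min":    (95,  480,  15,  1.0),  # inversely proportional
    "customer_sat_score":     (4.1,   1,   5,  2.0),  # directly proportional
    "defect_containment_pct": (85,    0, 100,  1.5),  # directly proportional
    "uptime_pct":             (99.6, 95, 100,  2.0),  # directly proportional
}

weighted = [(normalize(v, worst, best), w) for v, worst, best, w in signals.values()]
stability_index = sum(score * w for score, w in weighted) / sum(w for _, w in weighted)
print(f"Stability Index: {stability_index:.2f}")  # 0..1, higher is more stable
```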

Recognizing Software Debt Talk at Beyond Agile Meeting

A couple of days ago I spoke at the Beyond Agile group meeting on the topic of “Recognizing Software Debt”. Early in the presentation we ran an exercise to get a feel for the effects of software debt that was originally created by my friend, Masa Maeda. Here is a link to the exercise:

http://www.agilistapm.com/understand-technical-debt-by-playing-a-game/

The exercise went great even though I was using the audience as guinea pigs, running it for the first time. Below is the slide deck that I used as a backdrop for the presentation.

Measuring the value of Agile: Forrester wants to hear from you

We recently spoke with Diego Lo Giudice at Forrester Research to share our views on how organizations are measuring the value of Agile. Diego is leading a research effort to uncover how both software vendors and enterprise IT groups are approaching this challenge.

While “Working software is the primary measure of progress” is one of the twelve principles of Agile, working software is not the only measure that software development organizations are using to show progress and to measure the value of Agile to the business. Our own discussions with clients indicate that many try to use velocity and quality (defects) to show that their Agile teams are improving their ability to deliver. A few focus on cycle time or progress to plan. The ‘holy grail’ is to be able to directly connect a software deliverable to measurable business value – improved revenue or decreased business costs.

What metrics does your organization use to measure its Agile efforts?  Learn more about Forrester’s research project, or go directly to the survey to share your views (and receive a free copy of the resulting report).

Speaking at PMO Symposium 2011

I just finished speaking at the PMO Symposium 2011 http://www.pmosymposium.org this morning. This has been a great conference focused on how organizations deliver value. Historically, there have been a lot of challenges between PMOs and software teams, notably in the Agile space. Many of the conflicts are misunderstandings. The true conflicts can be addressed better when we start migrating from constraint-driven to value-driven management. Many people asserted that PMOs need to help organizations achieve success through Lean and Agile principles, practices, and methods.

My presentation addressed the issues of communication between business and agile teams. Traditional EVM makes no sense in software (and is potentially harmful) because claiming value earned based on intermediate work products, without an assertion of quality, does not provide reasonable forecasts. Agile provides quality that can be asserted and inspected. Also, I believe that ordering work by highest Business Value and risk considerations, along with delivering potentially shippable increments, starts to include notions of value. Still, AgileEVM measures performance against plans (which can be re-baselined every iteration if needed), and it integrates cost management. Doing it well means not giving up what Agile offers: adaptive planning and quality.
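
As a hedged illustration of the mechanics (the release numbers are hypothetical, and this follows the commonly published AgileEVM formulation rather than anything specific to my talk), the core calculations fit in a few lines: planned value from iterations elapsed, earned value from story points completed, and the usual cost and schedule indices from there.

```python
# Hedged sketch of the commonly published AgileEVM calculations.
# All numbers are hypothetical; plan values can be re-baselined each iteration.
budget_at_completion = 600_000      # planned release budget (BAC), in dollars
planned_sprints      = 10
planned_points       = 400

completed_sprints    = 4
completed_points     = 180
actual_cost          = 250_000      # spend to date (AC)

planned_value = budget_at_completion * (completed_sprints / planned_sprints)  # PV
earned_value  = budget_at_completion * (completed_points / planned_points)    # EV

cpi = earned_value / actual_cost     # cost performance index  (<1 means over budget)
spi = earned_value / planned_value   # schedule performance index (<1 means behind plan)

print(f"PV=${planned_value:,.0f}  EV=${earned_value:,.0f}  CPI={cpi:.2f}  SPI={spi:.2f}")
# PV=$240,000  EV=$270,000  CPI=1.08  SPI=1.12
```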

SeaSPIN Talk: Dollars and Dates Are Killing Agile

Last night I spoke at SeaSPIN on the topic “Dollars and Dates Are Killing Agile”. The focus of this talk is how we can speak more like the business, with the benefit of building organizations that are more supportive of adaptive planning, continuous improvement, and team empowerment. If we do not speak the language of the business, and continue to create friction without supporting the strategic planning needs that businesses have, then we are bound to see Agile methods have a short lifespan in most organizations. Below is the topic description and slides:

Agile teams speak in points and iterations, but project and business managers think in terms of dates and dollars.  This conceptual and language barrier makes strategic business planning, funding, and project status reporting a significant challenge for Agile teams.  Because of these barriers, many successful Agile/Scrum initiatives are discontinued or never expanded.

Delving into Technical Debt – Cutter Article

The following is an excerpt from the article authored by Israel Gat and myself named “Delving into Technical Debt”:

Many of the findings and the recommendations we make in Cutter technical debt engagements are broadly applicable in concept, if not in detail. There is commonality in the nature of the hot spots we typically find, the mal-practices we identify as the root causes and the ways we go about reducing the “heat.” Granted, your technical debt reduction strategy might dictate investing in automated unit testing prior to reducing complexity, while your competitor might be able to address complexity without additional investment in unit testing. However, the considerations you and your competitor will go through in devising your technical debt reduction strategies are fairly similar.

It is this similarity that we try to capture in this Executive Update. Some of the specifics we recount here might not be applicable to your environment. However, we trust the overall characterization we provide will give you, your colleagues, and your superiors a fairly good “3D” picture of what the technical debt initiative will look like in the context of your own business imperatives and predicaments.

As a Senior Cutter Consultant, it has been a pleasure working with Cutter to release this Executive Update, from which the excerpt cited above is taken. For a free download, click here and use the promotion code DELVING. Let us know what you think about the article in the comments section of this post.

Puget Sound PMI Talk: Integrating Quality into Project Portfolio Management

Last week I gave a talk in Lynnwood, WA to the Puget Sound PMI chapter on “Integrating Quality into Project Portfolio Management”. I feel these slides were the best yet at providing specific indicators and understanding around the troubles with scaling Agile methods and with strategic decision-making at scale. Although stage, phase, or “gated” approaches don’t provide a better answer than Agile methods, they provide an illusion of knowledge and control that Agile methods do not initially bring to the table. This talk goes into patterns for scaling Agile across an organization, the issues inherent in scaling, and ways to identify, specifically from a quality trend perspective, whether there are troubled waters ahead on specific project endeavors. The focus is on finding those quality indicators amongst all the noise in scaled projects to figure out if the project should be re-committed to, transformed, or killed (a reference from Johanna Rothman in her book “Manage Your Project Portfolio”) so that value can be optimized.

So, without further ado, here are the slides…