Category Archives: Open Source

Recognizing Software Debt Talk at Beyond Agile Meeting

A couple of days ago I spoke at the Beyond Agile group meeting on the topic of “Recognizing Software Debt”. Early in the presentation we ran an exercise, originally created by my friend Masa Maeda, to get a feel for the effects of software debt. Here is a link to the exercise:

http://www.agilistapm.com/understand-technical-debt-by-playing-a-game/

The exercise went great even though I was using the audience as guinea pigs, running it for the first time. Below is the slide deck that I used as a backdrop for the presentation.

Example Facilitation of Agile Adoption Strategy Session

More than a year ago I was training and consulting with a company that was deciding how to adopt Agile software development methods across all of the teams in their organization after some successful pilots. After a 2-day private Scrum course, we decided to use the 3rd day to run a workshop on how to support their organization’s strategic business goals with Agile methods adoption and Lean thinking across their project portfolio. This article shares the facilitation techniques and exercises that I used, many of which could be helpful to others supporting strategic decision-making meetings.

For more information on Innovation Games®, please check out:

For more information on Cynefin, please check out the following articles and videos:

Hope this gets folks started in looking at Cynefin and methods around complexity. My one suggestion for anyone getting to know Cynefin is to make sure that you don’t look at the “Cynefin Model” and misjudge it to be a basic 4-box model. There are actually 5 domains and a fold at the bottom, both of which have tremendous significance in the model. Not only that, the model is about sense-making rather than categorization. Keeping these points in mind while learning more about Cynefin will, I think, make the experience more beneficial.

As for an experience report from the actual client: a company that I was asked to work with wanted a 2-day Certified ScrumMaster course along with a 3rd-day workshop focused on their specific needs. Of course, during the 2-day class we surfaced many specific areas of opportunity and ideas that the group had. About half of the participants were Directors and above; the other half were development team members and project managers. There were about 30 folks in the class, and the 2-day class provided plenty of insights.

One thing that I focus on in every class is not how Scrum can be implemented, but what the most valuable next steps are that an individual, team, or organization can take starting the very next day. In the 3rd-day workshop we decided to focus on the business goals that implementing more agility would help achieve and a strategy to attain those business goals. Overnight I came up with a loosely facilitated session with the following exercises:

1. Impediment Management Exercise using basic facilitation techniques:

  • Brainwriting: each individual writes 10 reasons that Scrum and other agile practices cannot be implemented effectively at the company
  • Affinity grouping: place all items on a wall and affinity-group them, naming each group to provide context and insight into the data
  • Multi-voting: not necessarily a scientific method, but a quick way to get feedback on what is most important on the wall
  • Debrief actions to take

2. “Give them a hot tub” – an Innovation Games® exercise (http://innovationgames.com/resources/the-games/)

  • Used to identify goals and initiatives that would improve business outcomes focused on software development

3. Ritual Dissent – http://www.cognitive-edge.com/method.php?mid=46

  • set up tables with 6-7 folks per table
  • each table comes up with a strategy for implementing agility to attain the business goals and realize the value of agility (20 minutes)
  • 1 person is chosen by each table to go to another table and present the strategy for 5 minutes – it is important that every table has an empty chair at the end for the visitor from the other table to sit in
  • folks at the table that are listening to the strategy are not allowed to speak during the presentation
  • at 5 minutes the person presenting turns their chair around with back to folks at table and takes out a notebook
  • the folks at the table now ritually tear apart the presented strategy (be sure to tell all of the participants before the exercise that this will be occurring and that their duty is to be as cutting as possible) (3 minutes)
  • after the 3 minutes, the presenter does not turn around to make eye contact or talk with the folks at the table; they simply go back to their own table with all of their notes
  • the table uses those notes to modify their strategy (10 minutes)
  • the presenter goes to a different table 2-3 more times
  • you can end with a round where the folks at the table talk about what they liked about the strategy once it has gone through 3-4 rounds, but we did not do this (this is called “Ritual Assent”)
  • Find out more alternatives and ideas on Ritual Dissent at the Cognitive Edge web site

4. Combine Strategy

  • at the end it was fairly simple to pull the strategies together from all of the tables; since they had already shared with each other, we decided on the combined strategic alignment and implementation approach that would be presented to executive management

As an epilogue to this, the company did implement most of the strategic plan and found effective changes in their organization even 2 years after I was there. Hopefully this article provided some ideas for those facilitating strategic decision-making sessions and added some other options to learn about, Cynefin and Innovation Games® in particular, to your facilitation tool belt.

Managing Software Debt in Practice Presentation

Today at the Scrum Gathering in Seattle, I held a session on “Managing Software Debt in Practice” where we got into:

The presentation had too much content for the less than 90 minutes that we had for the session. I did not get into scaling Scrum team patterns and heuristics to manage software debt at scale, and I covered less about testing than I’d hoped. Hopefully it was useful for the participants and they left the session with at least one new idea. It is difficult to take a 1-day workshop and create a talk of less than 90 minutes, as I learned once again.

Our Book is Available: Managing Software Debt – Building for Inevitable Change


I am quite happy that the book, which took much of my time over the past couple of years, has finally come out. Thank you Addison-Wesley for asking me to write a book. Also, I want to thank Jim Highsmith and Alistair Cockburn for accepting the book into their Agile Software Development Series. Finally, I have to thank all of those who have guided, influenced, and supported me over my career and life, with special thanks to my wife and kids who put up with me during the book’s development. My family is truly amazing and I am very lucky to have them!

Automated Promotion through Server Environments

To get towards continuous deployment, or even continuously validated builds, a team must take care of their automated build, test, analysis, and deployment scripts. I always recommend that you have 2 scripts:

  • deploy
  • rollback

These scripts should be used many times per iteration to reduce the risk of surprises when deploying to downstream server environments, including production. The following diagram shows a generic flow that these deploy and rollback scripts might incorporate to increase the confidence of teams in the continuous deployment of their software to downstream environments.

The incorporation of continuous integration, automated tests (unit, integration, acceptance, smoke), code analysis, and deployment into the deployment scripts provides increased confidence. Teams that I have worked on, coached, and consulted with have found that static code analysis, with automated build failure when particular metrics trend in a negative direction, enhances their continued confidence in the changes they make to the software. The software changes may pass all of the automated tests, but the build is still not promoted to the next environment because, for instance, code coverage has gone down more than 0.5% in the past week. I am careful to suggest that teams should probably not set a specific metric bar like “90% code coverage or bust” but rather make sure that no important metric is trending in the wrong direction.
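
To make this concrete, here is a minimal sketch of what such a trend gate might look like in Java. It is only illustrative: it assumes the current coverage percentage is handed to it by the build, the properties-file trend store and the 0.5% threshold are arbitrary choices, and a real pipeline would more likely pull the numbers from its code analysis tool.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical trend gate run by the deploy script: it compares the coverage
// reported by the current build against the value recorded by the previous
// build and fails (non-zero exit) when coverage drops more than 0.5%.
public class CoverageTrendGate {

    public static void main(String[] args) throws IOException {
        double current = Double.parseDouble(args[0]);        // coverage % from this build
        File store = new File("coverage-trend.properties");  // illustrative trend store

        Properties history = new Properties();
        if (store.exists()) {
            FileInputStream in = new FileInputStream(store);
            history.load(in);
            in.close();
        }
        double previous = Double.parseDouble(
                history.getProperty("coverage", String.valueOf(current)));

        // Record the current value for the next build to compare against.
        history.setProperty("coverage", String.valueOf(current));
        FileOutputStream out = new FileOutputStream(store);
        history.store(out, "last recorded code coverage");
        out.close();

        if (current < previous - 0.5) {
            System.err.println("Coverage trending down: " + previous + "% -> " + current + "%");
            System.exit(1);  // non-zero exit stops the deploy script from promoting the build
        }
        System.out.println("Coverage trend OK: " + previous + "% -> " + current + "%");
    }
}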

Please let us know how your teams move towards continuous delivery or at least continuous deployment to downstream environments with confidence in the comments section of this blog entry. Thanks.

Using Sonar Metrics to Assess Promotion of Builds to Downstream Environments

For those of you who don’t already know about Sonar, you are missing an important tool in your quality assessment arsenal. Sonar is an open source tool that provides a foundational platform for managing your software’s quality. The image below shows one of the main dashboard views that teams can use to get insights into their software’s health.

The dashboard provides rollup metrics out of the box for:

  • Duplication (probably the biggest Design Debt in many software projects)
  • Code coverage (amount of code touched by automated unit tests)
  • Rules compliance (identifies potential issues in the code such as security concerns)
  • Code complexity (an indicator of how easy the software will adapt to meet new needs)
  • Size of codebase (lines of code [LOC])

Before going into how to use these metrics to assess whether to promote builds to downstream environments, I want to preface the conversation with the following note:

Code analysis metrics should NOT be used to assess teams and are most useful when considering how they trend over time

Now that we have this important note out of the way and, of course, nobody will ever use these metrics for “evil”, let’s discuss pulling data from Sonar to automate assessments of builds for promotion to downstream environments. For those who are unfamiliar with automated promotion, here is a simple, happy-path example:

A development team makes some changes to the automated tests and implementation code on an application and checks their changes into source control. A continuous integration server finds out that source control artifacts have changed since the last time it ran a build cycle and updates its local artifacts to incorporate the most recent changes. The continuous integration server then runs the build by compiling, executing automated tests, running Sonar code analysis, and deploying the successful deployment artifact to a waiting environment usually called something like “DEV”. Once deployed, a set of automated acceptance tests are executed against the DEV environment to validate that basic aspects of the application are still working from a user perspective. Sometime after all of the acceptance tests pass successfully (this could be twice a day or some other timeline that works for those using downstream environments), the continuous integration server promotes the build from the DEV environment to a TEST environment. Once deployed, the application might be running alongside other dependent or sibling applications and integration tests are run to ensure successful deployment. There could be more downstream environments such as PERF (performance), STAGING, and finally PROD (production).

The tendency for many development teams and organizations is to assume that if the tests pass, then the build is good enough to move into downstream environments. This is definitely an enormous improvement over the extensive manual testing and stabilization periods of traditional projects. An issue that I have still seen is the slow introduction of software debt as an application is developed. Highly disciplined technical practices such as Test-Driven Design (TDD) and Pair Programming can help stave off extreme software debt, but these practices are still not commonplace amongst software development organizations. This is usually not due to a lack of clarity about the practices, but rather to excessive schedule pressure, legacy code, and the initial hurdle of learning how to do these practices effectively. In the meantime, we need a way to assess the health of our software applications that goes beyond passing tests and into the internals of the code and tests themselves. Sonar can be easily added to your infrastructure to provide insights into the health of your code, but we can go even beyond that.

The Sonar Web Services API is quite simple to work with. The easiest way to pull information from Sonar is to call a URL:

http://nemo.sonarsource.org/api/resources?resource=248390&metrics=technical_debt_ratio

This will return an XML response like the following:

<resources>
  <resource>
    <id>248390</id>
    <key>com.adobe:as3corelib</key>
    <name>AS3 Core Lib</name>
    <lname>AS3 Core Lib</lname>
    <scope>PRJ</scope>
    <qualifier>TRK</qualifier>
    <lang>flex</lang>
    <version>1.0</version>
    <date>2010-09-19T01:55:06+0000</date>
    <msr>
      <key>technical_debt_ratio</key>
      <val>12.4</val>
      <frmt_val>12.4%</frmt_val>
    </msr>
  </resource>
</resources>

Within this XML, there is a section called <msr> that includes the value of the metric we requested in the URL, “technical_debt_ratio”. The ratio of technical debt in this Flex codebase is 12.4%. With this information we can look for increases over time to identify technical debt earlier in the software development cycle. So, if the ratio were to increase beyond 13% after being at 12.4% one month earlier, this could tell us that some technical issues are creeping into the application.
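
Since the response is plain XML, a few lines of code are enough to pull that value out in a build step. Here is a minimal sketch using the JDK’s DOM parser; it assumes the response shape shown above and skips error handling:

import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Illustrative fetch of a single Sonar metric straight from the web services URL.
public class TechnicalDebtRatioFetcher {

    public static void main(String[] args) throws Exception {
        String url = "http://nemo.sonarsource.org/api/resources"
                + "?resource=248390&metrics=technical_debt_ratio";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new URL(url).openStream());
        // The <val> element inside <msr> holds the raw metric value (12.4 above).
        String value = doc.getElementsByTagName("val").item(0).getTextContent();
        System.out.println("Technical debt ratio: " + value + "%");
    }
}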

Rather than parsing the XML yourself, the Sonar API can also be used from a programming language such as Java through the provided client library. The following Java code will pull the same information through the Java API client:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Ask the Sonar server for a single metric on a single resource
Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
Resource commons = sonar.find(ResourceQuery.createForMetrics("248390",
        "technical_debt_ratio"));
System.out.println("Technical Debt Ratio: " +
        commons.getMeasure("technical_debt_ratio").getFormattedValue());

This will print “Technical Debt Ratio: 12.4%” to the console from a Java application. Once we are able to capture these metrics, we can save them as data to trend within the automated promotion scripts that deploy builds to downstream environments. Some guidelines we have used in the past for these types of metrics are:

  • Small changes in a metric’s trend do not constitute immediate action
  • No more than 3 metrics should be trended (the typical 3 I watch for Java projects are duplication, class complexity, and technical debt)
  • The development team should decide what reasonable guidelines are for indicating problems in the trends (such as technical debt +/- .5%)

In the automated deployment scripts, these trends can be used to stop deployment of the next build that passed all of its tests, and emails can be sent to the development team identifying the metric culprit. From there, teams are able to enter the Sonar dashboard and drill down into the metric to see where the software debt is creeping in. Also, a source control diff can be produced and included in the email, showing which files were changed between the successful builds that made the trend go haywire. This might be a listing per build along with the metric variations for each.
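
As a sketch of how such a gate might look inside the promotion scripts, the following illustrative class reuses the same sonar-ws-client calls as the earlier example to pull the current technical debt ratio and compare it against the value recorded at the last successful promotion. The class name, the 0.5% threshold, the project id, and how the previous value is supplied are all assumptions, and the measure is assumed to expose a numeric getValue(); sending the email and producing the source control diff would happen wherever this check returns false.

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Hypothetical promotion check based on the technical debt trend in Sonar.
public class TechnicalDebtTrendCheck {

    private static final double ALLOWED_INCREASE = 0.5; // guideline from the list above; tune per team

    public static boolean okToPromote(String projectId, double lastPromotedRatio) {
        Sonar sonar = Sonar.create("http://nemo.sonarsource.org"); // your Sonar server URL
        Resource project = sonar.find(
                ResourceQuery.createForMetrics(projectId, "technical_debt_ratio"));
        double currentRatio = project.getMeasure("technical_debt_ratio").getValue();

        if (currentRatio > lastPromotedRatio + ALLOWED_INCREASE) {
            System.err.println("Technical debt trending up: " + lastPromotedRatio + "% -> "
                    + currentRatio + "%; holding this build back and notifying the team.");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Example values: the project id from the URL above and the ratio recorded last time.
        System.out.println("Promote build: " + okToPromote("248390", 12.4));
    }
}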

This is a deep topic that this post just barely introduces. If your organization has a separate configuration management or operations group that manages environment promotions beyond the development environment, Sonar and its web services API can help further automate early identification of software debt in your applications before it pollutes downstream environments.

SeaSPIN Lightning Talk from Last Night

Thanks to Jeremy Lightsmith, the lightning talk I did last night at SeaSPIN on Portfolio Management and Software Debt was caught on video. The talk went over 5 ills that you might see in software projects that can affect strategic planning, and some ways to help identify those ills more quickly for planning purposes. It ends on a note about how we can do more of the right things in the software industry if we translate our plans into dollars and dates effectively for strategic planners.


Extreme Feedback from My Tools – Part 1: Maven 2 Configuration


For many years now, it has been a goal of mine to get feedback as early as possible when developing software. Past blog entries here and here have discussed how we can approach increased feedback. A tweet from Jason Gorman mentioned his list of tools that provide continuous feedback on his code and design: “Emma, Jester, XDepend, Checkstyle and Simian”. This inspired me to write a post on how I approach setting up project reporting and my IDE to provide increased feedback. This article will be the first part of a series on “Extreme Feedback from My Tools” and will focus on Maven 2 configuration and reporting.


Maven is my tool of choice for managing builds, versioning, deployment, and test execution. It wouldn’t hurt my feelings if teams I worked on used Ant, make, or other scripting methods to manage these, although it tends to be more difficult overall. For those who are alright with using Maven, here is a look at different aspects of a typical POM file configuration I use:

<build>
  <plugins>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.4.2</version>
      <configuration>
        <includes>
          <include>**/When*.java</include>
        </includes>
        <redirectTestOutputToFile>true</redirectTestOutputToFile>
        <trimStackTrace>false</trimStackTrace>
        <useSystemClassLoader>false</useSystemClassLoader>
      </configuration>
    </plugin>
  </plugins>
</build>

The above portion of the POM file contains configuration that applies to all Maven execution scenarios for this project. The first plugin, “maven-compiler-plugin”, sets the expected source code compliance level and the JVM version that the compiled binary will target. The “maven-surefire-plugin” executes tests such as those developed with JUnit and TestNG. Because my approach is to take a more BDD-like naming convention and style for test cases, this POM is configured to execute unit tests whose names start with the word “When” in the test source code directory (by default, “src/test/java”). Having the full stack trace from test execution issues is essential to effective debugging of the automated build and tests, so the configuration makes sure stack traces are not trimmed in the output file. Finally, some code that I created in the recent past needed to find classes on the Maven classpath; after much debugging I found that Surefire used the system class loader by default, so I now make sure to configure it to use the Maven class loader instead.

<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>2.3</version>
      <configuration>
        <linkXref>true</linkXref>
        <targetJdk>1.5</targetJdk>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.4</version>
      <configuration>
        <formats>
          <format>html</format>
          <format>xml</format>
        </formats>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jdepend-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>dashboard-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>2.2</version>
    </plugin>
  </plugins>
</reporting>

Reports are effective at giving the team early indicators of potential problems in their project artifacts. Teams tend to find that trends are more valuable than specific targets in the generated reports. If code coverage is going down, we ask ourselves “why?”. If more defects are being detected by source code analysis tools, then we can look at how to change our approach to reduce the frequency of these issues. The 5 plugins used in this POM report on different perspectives of the software artifacts and can help find problematic trends early.

When the continuous integration server successfully executes the build and automated tests, the Maven reporting command is executed to generate these reports. This happens automatically and is shown on our video monitor “information radiator” in the team area.

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.7</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-all</artifactId>
    <version>1.8.0</version>
  </dependency>
</dependencies>

We make sure to update the POM to use JUnit 4 so that our team can use annotations and better names for the tests. Also, Mockito has become my favorite mock objects framework since it stays away from the “replay” confusion of other mock frameworks (or their old versions at least) and also has a BDDMockito class that enables our team to use the given/when/then construction for our tests.
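
To show how these pieces fit together, here is a minimal sketch of a test written in that style. The class and its collaborators (WhenCalculatingOrderTotals, TaxService, PriceCalculator) are purely illustrative and not from a real project; the points to notice are the “When*” class name that the Surefire configuration above picks up and the given/when/then flow that BDDMockito supports.

import static org.junit.Assert.assertEquals;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;

import org.junit.Test;

// Illustrative BDD-style unit test matching the **/When*.java include pattern.
public class WhenCalculatingOrderTotals {

    @Test
    public void shouldIncludeTaxInTheTotal() {
        // given: a tax service stubbed to return a known rate
        TaxService taxService = mock(TaxService.class);
        given(taxService.rateFor("WA")).willReturn(0.10);

        // when: the price calculator computes a total
        PriceCalculator calculator = new PriceCalculator(taxService);
        double total = calculator.totalFor(100.00, "WA");

        // then: the total reflects the stubbed tax rate
        assertEquals(110.00, total, 0.001);
    }

    // Minimal collaborators so the sketch is self-contained.
    interface TaxService {
        double rateFor(String state);
    }

    static class PriceCalculator {
        private final TaxService taxService;

        PriceCalculator(TaxService taxService) {
            this.taxService = taxService;
        }

        double totalFor(double amount, String state) {
            return amount * (1 + taxService.rateFor(state));
        }
    }
}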

Once your POM file is configured with these reporting plugins, you can generate the reports by executing the ‘site’ life cycle in Maven:

mvn site

Part 2 of this series of articles will discuss configuration of an Eclipse IDE environment for Extreme Feedback.

HOWTO: Maven + StoryTestIQ + Selenium RC

StoryTestIQ is an automated acceptance testing tool, which was originally a mashup of 2 existing open source projects, Selenium and FitNesse. StoryTestIQ is often shortened to STIQ (pronounced “stick”) so it rolls off the tongue more easily. STIQ takes the idea of testing inside the browser a la Selenium and enables editing, tagging, and multi-table execution through a modified FitNesse under the covers. The version control from FitNesse was removed so that the generated wiki files can be checked in, without any binaries, alongside the code those tests execute against. The multi-table execution allows test components, written in “selenese”, to be refactored out of and included in larger test cases. There are many more modifications that the development team, headed by Paul Dupuy, has made to enhance STIQ’s capabilities beyond running Fit and Selenium tests, but this is enough background for now.

During STIQ’s development, we created a Maven 2 plugin, imaginatively named maven-stiq-plugin. This plugin did only 1 thing: start up the STIQ server for your project without having to run Java from the command line. In the past couple of days, I have finally had enough time and motivation to add support, in the 2.2-SNAPSHOT version, for exporting STIQ tests into Selenium RC compliant “selenese” so they can also be executed during your Maven integration-test cycle. So, let’s get down to the “how”.

First, add the STIQ Maven 2 repository to your POM (pom.xml file for your project) as shown below: (NOTE: updated with correction on URL from original posting)

<repositories>
...
  <repository>
     <id>STIQ Sourceforge</id>
     <url>https://storytestiq.svn.sourceforge.net/svnroot/storytestiq/trunk/www/maven2/</url>
  </repository>
</repositories>

We also must put the STIQ Maven 2 repository into our plugin repositories within the POM so that Maven can find the maven-stiq-plugin to execute during the integration-test cycle: (NOTE: updated with correction on URL from original posting)

<pluginRepositories>
...
  <pluginRepository>
     <id>STIQ Plugins Sourceforge</id>
     <url>https://storytestiq.svn.sourceforge.net/svnroot/storytestiq/trunk/www/maven2/</url>
  </pluginRepository>
</pluginRepositories>

Next, we will put in the maven-stiq-plugin configuration.

<plugin>
  <groupId>net.sourceforge.storytestiq</groupId>
  <artifactId>maven-stiq-plugin</artifactId>
  <version>2.2-SNAPSHOT</version>
  <executions>
    <execution>
      <id>export</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>export</goal>
      </goals>
      <configuration>
        <pageRootPath>repository</pageRootPath>
        <suiteWikiPagePath>ProjectRoot.StoryTests</suiteWikiPagePath>
      </configuration>
    </execution>
    <execution>
      <id>exec</id>
      <phase>integration-test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <userExtensions>src/main/resources/user-extensions.js</userExtensions>
        <browserType>*firefox</browserType>
        <suiteFile>target/stiq/ProjectRoot.StoryTests.html</suiteFile>
      </configuration>
    </execution>
  </executions>
</plugin>

Now, to tell you a little bit about what is going on in the above configuration. The <groupId> and <artifactId> elements describe which plugin to grab from the plugin repository and use in the project. In the executions section, we define 2 separate execution elements. The first execution is called “export”. This execution will occur during the “pre-integration-test” phase within the full Maven 2 build life cycle. The goal, similar to an Ant target, that it runs on maven-stiq-plugin is “export”, which exports our StoryTestIQ acceptance tests as “selenese” to run within the Selenium RC server. The configurations shown above are <pageRootPath>, which is the directory located below your top-level project directory where the StoryTestIQ tests are located, and <suiteWikiPagePath>, which is the wiki page location of the top-level suite including all of the tests to export. If you don’t already have STIQ tests, please go to http://storytestiq.sf.net to find out how to get started.

The second execution element is called “exec”. This execution will run during the “integration-test” phase in the Maven build life cycle and will execute the exported tests using the Selenium RC server. The configurations for this goal are <userExtensions>, which is where any new selenese actions specific to your project are defined, <browserType>, which is the web browser to execute the tests within, and <suiteFile>, which is where the exported selenese tests were generated during the “export” goal execution. As a convention, the generated selenese file will be located under the “target/stiq” directory by default, with the name of the file as <suiteWikiPagePath>.html.

Now you can run the usual ‘install’ command in your project’s top-level directory:

mvn install

This should compile your code and execute all of your unit tests; then, during the integration-test phase, it will run the maven-stiq-plugin goals, “export” and “exec”, in that order. During the maven-stiq-plugin “exec” goal execution, the web browser will open up in the background and you will see the STIQ tests run. After the tests have executed, the web browser will close and the Maven build life cycle will complete.

NOTE: If you are having trouble with “*firefox” as <browserType>, then you might be seeing a current bug with Selenium RC server version 1.1.1. An upcoming release version will include a fix and we will update the dependency once we see the update. For now, the fix is to go back to Firefox version 3.5.3 or switch to using a different browser as listed here on the Selenium RC documentation.

There is still much more to do with the plugin before getting to a release version of 2.2. Please comment on this blog post with any suggestions or issues that you have with the plugin and its configuration. If you are interested, do a search on StoryTestIQ within this blog to find out more about using StoryTestIQ, or visit the main project page http://storytestiq.sf.net. Thank you.