Category Archives: Java

Managing Software Debt in Practice Presentation

Today at the Scrum Gathering in Seattle, I held a session on “Managing Software Debt in Practice” where we got into a number of topics around managing software debt.

The presentation had too much content for the less than 90 minutes we had for the session. I did not get into scaling Scrum team patterns and heuristics for managing software debt at scale, and I covered less about testing than I had hoped. Hopefully it was useful for the participants and they left the session with at least one new idea. It is difficult to condense a one-day workshop into a talk of less than 90 minutes, as I learned again.

Using Sonar Metrics to Assess Promotion of Builds to Downstream Environments

For those of you who don’t already know about Sonar, you are missing an important tool in your quality assessment arsenal. Sonar is an open source tool that provides a foundational platform for managing your software’s quality. The image below shows one of the main dashboard views that teams can use to get insight into their software’s health.

The dashboard provides rollup metrics out of the box for:

  • Duplication (probably the biggest Design Debt in many software projects)
  • Code coverage (amount of code touched by automated unit tests)
  • Rules compliance (identifies potential issues in the code such as security concerns)
  • Code complexity (an indicator of how easy the software will adapt to meet new needs)
  • Size of codebase (lines of code [LOC])

Before going into how to use these metrics to assess whether to promote builds to downstream environments, I want to preface the conversation with the following note:

Code analysis metrics should NOT be used to assess teams and are most useful when considering how they trend over time

Now that we have this important note out of the way and, of course, nobody will ever use these metrics for “evil”, let’s discuss pulling data from Sonar to automate assessments of builds for promotion to downstream environments. For those who are unfamiliar with automated promotion, here is a simple, happy-path example:

A development team makes some changes to the automated tests and implementation code on an application and checks their changes into source control. A continuous integration server finds out that source control artifacts have changed since the last time it ran a build cycle and updates its local artifacts to incorporate the most recent changes. The continuous integration server then runs the build by compiling, executing automated tests, running Sonar code analysis, and deploying the successful deployment artifact to a waiting environment usually called something like “DEV”. Once deployed, a set of automated acceptance tests are executed against the DEV environment to validate that basic aspects of the application are still working from a user perspective. Sometime after all of the acceptance tests pass successfully (this could be twice a day or some other timeline that works for those using downstream environments), the continuous integration server promotes the build from the DEV environment to a TEST environment. Once deployed, the application might be running alongside other dependent or sibling applications and integration tests are run to ensure successful deployment. There could be more downstream environments such as PERF (performance), STAGING, and finally PROD (production).

The tendency for many development teams and organizations is to treat passing tests as good enough to move a build into downstream environments. This is definitely an enormous improvement over extensive manual testing and stabilization periods on traditional projects. An issue that I have still seen is the slow introduction of software debt as an application is developed. Highly disciplined technical practices such as Test-Driven Design (TDD) and Pair Programming can help stave off extreme software debt, but these practices are still not commonplace amongst software development organizations. This is usually not due to a lack of clarity about these practices, but rather to excessive schedule pressure, legacy code, and the initial hurdle of learning how to do them effectively. In the meantime, we need a way to assess the health of our software applications beyond passing tests, looking into the internals of the code and tests themselves. Sonar can be easily added to your infrastructure to provide insights into the health of your code, but we can go even beyond that.

The Sonar Web Services API is quite simple to work with. The easiest way to pull information from Sonar is to call a URL:

http://nemo.sonarsource.org/api/resources?resource=248390&metrics=technical_debt_ratio

This will return an XML response like the following:

<resources>
  <resource>
    <id>248390</id>
    <key>com.adobe:as3corelib</key>
    <name>AS3 Core Lib</name>
    <lname>AS3 Core Lib</lname>
    <scope>PRJ</scope>
    <qualifier>TRK</qualifier>
    <lang>flex</lang>
    <version>1.0</version>
    <date>2010-09-19T01:55:06+0000</date>
    <msr>
      <key>technical_debt_ratio</key>
      <val>12.4</val>
      <frmt_val>12.4%</frmt_val>
    </msr>
  </resource>
</resources>

Within this XML, there is a section called <msr> that includes the value of the metric we requested in the URL, “technical_debt_ratio”. The ratio of technical debt in this Flex codebase is 12.4%. With this information we can look for increases over time to identify technical debt earlier in the software development cycle. So, if the ratio were to increase beyond 13% after being at 12.4% one month earlier, that could tell us that technical issues are creeping into the application.
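If you only need the raw number, a Sonar-specific client is not even required. Here is a minimal sketch, using only the JDK’s built-in XML parsing, that pulls the value out of the response shown above (the class name is mine and error handling is omitted):

import java.net.URL;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public class TechnicalDebtFetcher {

    public static void main(String[] args) throws Exception {
        // The same Sonar web service call shown above, requested over plain HTTP.
        URL url = new URL("http://nemo.sonarsource.org/api/resources"
                + "?resource=248390&metrics=technical_debt_ratio");

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(url.openStream());

        // Pull the <val> element out of the <msr> block in the response.
        String value = doc.getElementsByTagName("val").item(0).getTextContent();
        System.out.println("Technical debt ratio: " + value + "%");
    }
}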

Another way that the Sonar API can be used is through the Sonar Java API client library. The following Java code will pull the same information through the client:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
Resource commons = sonar.find(ResourceQuery.createForMetrics("248390",
        "technical_debt_ratio"));
System.out.println("Technical Debt Ratio: " +
        commons.getMeasure("technical_debt_ratio").getFormattedValue());

This will print “Technical Debt Ratio: 12.4%” to the console from a Java application. Once we are able to capture these metrics we could save them as data to trend in our automated promotion scripts that deploy builds in downstream environments. Some guidelines we have used in the past for these types of metrics are:

  • Small changes in a metric’s trend do not call for immediate action
  • No more than 3 metrics should be trended (the typical 3 I watch for Java projects are duplication, class complexity, and technical debt)
  • The development team should decide what reasonable guidelines are for indicating problems in the trends (such as technical debt +/- 0.5%)

In the automated deployment scripts, these trends can be used to stop deployment of the next build that passed all of its tests, and emails can be sent to the development team identifying the metric culprit. From there, the team can go into the Sonar dashboard and drill down into the metric to see where the software debt is creeping in. Also, a source control diff can be included in the email showing what files changed between the successful builds that made the trend go haywire. This might be a listing per build along with the metric variations for each.
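To make this concrete, here is a rough sketch of such a promotion gate built on the same Java client shown earlier. The threshold and the way the previous value is supplied are my own assumptions for illustration; a real script would read the previous value from wherever it stores its trend data:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

public class TechnicalDebtPromotionGate {

    // Hypothetical guideline: block promotion if the ratio grows by more than 0.5 points.
    private static final double MAX_INCREASE = 0.5;

    public static void main(String[] args) {
        // The previous value would come from wherever the promotion script stores trend data.
        double previous = Double.parseDouble(args[0]);

        Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
        Resource project = sonar.find(
                ResourceQuery.createForMetrics("248390", "technical_debt_ratio"));
        double current = project.getMeasure("technical_debt_ratio").getValue();

        if (current - previous > MAX_INCREASE) {
            System.err.println("Technical debt ratio rose from " + previous + " to " + current
                    + "; stopping promotion and notifying the team.");
            System.exit(1); // a non-zero exit code fails the deployment step
        }
        System.out.println("Technical debt ratio " + current
                + " is within the agreed trend; promoting the build.");
    }
}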

This is a deep topic that this post just barely introduces. If your organization has a separate configuration management or operations group that manages environment promotions beyond the development environment, Sonar and its web services API can help further automate early identification of software debt in your applications before it pollutes downstream environments.

Extreme Feedback from My Tools – Part 1: Maven 2 Configuration

Feedback

For many years now, it has been a goal of mine to get feedback as early as possible when developing software. Past blog entries here and here have discussed how we can approach increased feedback. A tweet from Jason Gorman mentioned his list of tools that provide continuous feedback on his code and design: “Emma, Jester, XDepend, Checkstyle and Simian”. This inspired me to write a post on how I approach setting up project reporting and my IDE to provide increased feedback. This article will be the first part of a series on “Extreme Feedback from My Tools” and will focus on Maven 2 configuration and reporting.


Maven is my tool of choice for managing builds, versioning, deployment, and test execution. It wouldn’t hurt my feelings if teams I worked on used Ant, make, or other scripting methods to manage these, but it tends to be more difficult overall. For those who are alright with using Maven, here is a look at different aspects of a typical POM file configuration I use:

<build>
  <plugins>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.4.2</version>
      <configuration>
        <includes>
          <include>**/When*.java</include>
        </includes>
        <redirectTestOutputToFile>true</redirectTestOutputToFile>
        <trimStackTrace>false</trimStackTrace>
        <useSystemClassLoader>false</useSystemClassLoader>
      </configuration>
    </plugin>
  </plugins>
</build>

The above portion of the POM file contains the configuration for all Maven execution scenarios for this project. The first plugin, “maven-compiler-plugin”, sets the expected source code compliance and the JVM version that the compiled binary will target. The “maven-surefire-plugin” executes tests such as those developed with JUnit and TestNG. Because my approach is to take a more BDD-like naming convention and style for test cases, this POM is configured to execute unit tests whose names start with the word “When” in the test source code directory (by default, “src/test/java”). Having the full stack trace from test execution failures is essential to effective debugging of the automated build and tests, so the configuration makes sure that stack traces are not trimmed in the output file. Finally, some code that I created in the recent past needed to find classes on the Maven classpath, and through much debugging I found out that the system class loader was used by default with surefire, so I now make sure to set it up to use the Maven class loader instead.
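As a small illustration of that naming convention, a test class picked up by the **/When*.java include might look like the following (the class and behavior are invented for the example, exercising a plain JDK queue):

import static org.junit.Assert.assertEquals;

import java.util.LinkedList;
import java.util.Queue;

import org.junit.Test;

// The class name starts with "When", so the surefire include pattern above picks it up.
public class WhenAMessageIsQueued {

    @Test
    public void shouldReportOnePendingMessage() {
        Queue<String> pending = new LinkedList<String>();
        pending.add("hello");
        assertEquals(1, pending.size());
    }
}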

<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>2.3</version>
      <configuration>
        <linkXref>true</linkXref>
        <targetJdk>1.5</targetJdk>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.4</version>
      <configuration>
        <formats>
          <format>html</format>
          <format>xml</format>
        </formats>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jdepend-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>dashboard-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>2.2</version>
    </plugin>
  </plugins>
</reporting>

Reports are effective at giving the team early indicators of potential problems in their project artifacts. Teams tend to find that trends are more valuable than specific targets in the generated reports. If code coverage is going down, we ask ourselves “why?”. If more defects are being detected by source code analysis tools, we can look at how to change our approach to reduce the frequency of these issues. The 5 plugins used in this POM report on different perspectives of the software artifacts and can help to find problematic trends early.

When the continuous integration server successfully executes the build and automated tests, the Maven reporting command is executed to generate these reports. This happens automatically and is shown on our video monitor “information radiator” in the team area.

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.7</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-all</artifactId>
    <version>1.8.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>

We make sure to update the POM to use JUnit 4 so that our team can use annotations and better names for the tests. Also, Mockito has become my favorite mock objects framework since it stays away from the “replay” confusion of other mock frameworks (or their old versions at least) and also has a BDDMockito class that enables our team to use the given/when/then construction for our tests.
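As a quick, hedged illustration of that given/when/then style (the AccountService interface here is invented just for the example):

import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertThat;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;

import org.junit.Test;

public class WhenABalanceIsRequested {

    // Hypothetical collaborator used only for this illustration.
    interface AccountService {
        int balanceFor(String accountId);
    }

    @Test
    public void shouldReturnTheBalanceHeldByTheAccountService() {
        // given
        AccountService service = mock(AccountService.class);
        given(service.balanceFor("12345")).willReturn(100);

        // when
        int balance = service.balanceFor("12345");

        // then
        assertThat(balance, equalTo(100));
    }
}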

Once your POM file is configured with these reporting plugins, you can generate the reports by executing the ‘site’ life cycle in Maven:

mvn site

Part 2 of this series of articles will discuss configuration of an Eclipse IDE environment for Extreme Feedback.

Slides from Managing Software Debt Talk at PNSQC 2009

Tomorrow at 1:30pm I will be discussing my paper published by the Pacific Northwest Software Quality Conference 2009 in Portland, OR on “Managing Software Debt: Continued Delivery of High Value as Systems Age”. I have uploaded the slides for this presentation and I hope that some of the new content will help those looking for ways to manage their software debt more effectively in 5 key areas:

  • Technical debt: tends to focus on the code and reveals itself in duplication and code smells
  • Quality debt: focuses on QA aspects of software development and shows up in growing bug databases and longer regression test runs
  • Configuration Management debt: focuses on integration and release management aspects and becomes apparent with extreme branching and inability to recreate environments from scratch
  • Design debt: focuses on design constructs of components within an application or enterprise infrastructure and is usually difficult to figure out until you are close to a deadline, such as handling production load
  • Platform Experience debt: focuses on the people in the software creation process and usually involves extreme specialization and waiting on people to finish their part

Without further ado, here are the slides:

Also, here is the picture I use to discuss Managing Software Debt from high level in terms of maintaining and enhancing value of software assets:

Effect of Managing Software Debt to Preserve Software Value

Executable Specifications – Presentation from AgilePalooza

Earlier this year I did a presentation on Executable Specifications for the AgilePalooza conference. There is information about working with legacy code, commercial off-the-shelf (COTS) systems, and Acceptance Test-Driven Development (ATDD) using automated acceptance testing tools. The presentation also lists the types of automated acceptance testing tools out there, along with names of actual tools and what they are best used for on projects. Hope it is interesting to you.

Designing Through Programmer Tests (TDD)

To reduce duplication and rigidity of the programmer test relationship to implementation code, move away from class and methods as the definition of a “unit” in your unit tests. Instead, use the following question to drive your next constraint on the software:

What should the software do next for the user?

The following coding session provides an example of applying this question. The fictitious application is a micro-blogging tool named “Jitter”, built by a fictitious Seattle-based company that focuses on enabling coffee-injected folks to write short messages and have common online messaging shorthand expanded for easy reading. The user story we are working on is:

So that it is easier to keep up with my kid’s messages, Mothers want to automatically expand their kid’s shorthand

The acceptance criteria for this user story are:

  • LOL, AFAIK, and TTYL are expandable
  • Able to expand lower and upper case versions of shorthand

The existing code already includes a JitterSession class that users obtain when they authenticate into Jitter to see messages from other people they are following. Mothers can follow their children in this application and so will see their messages in the list of new messages. The client application will automatically expand all of the messages written in shorthand.

The following programmer test expects to expand LOL to “laughing out loud” inside the next message in the JitterSession.

import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class WhenUsersWantToExpandMessagesThatContainShorthandTest {

    @Test
    public void shouldExpandLOLToLaughingOutLoud() {
        JitterSession session = mock(JitterSession.class);
        when(session.getNextMessage()).thenReturn("Expand LOL please");
        MessageExpander expander = new MessageExpander(session);
        assertThat(expander.getNextMessage(), equalTo("Expand laughing out loud please"));
    }

}

The MessageExpander class did not exist, so along the way I created a skeleton of the class to make the code compile. Once the assertion was failing, I made the test pass with the following implementation code inside the MessageExpander class:

public String getNextMessage() {
    String msg = session.getNextMessage();
    return msg.replaceAll("LOL", "laughing out loud");
}
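For context, the surrounding MessageExpander class at this point is roughly the sketch below; the constructor and the session field are implied by the test above, though not shown in the original post:

public class MessageExpander {

    private final JitterSession session;

    public MessageExpander(JitterSession session) {
        this.session = session;
    }

    public String getNextMessage() {
        String msg = session.getNextMessage();
        return msg.replaceAll("LOL", "laughing out loud");
    }
}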

This is the most basic message expansion I could do for only one instance of shorthand text. I notice that there are different variations of the message that I want to handle. What if LOL is written in lower case? What if it is written as “Lol”? Should it be expanded? Also, what if some variation of LOL is inside a word? It probably should not expand the shorthand in that case, except when the characters surrounding it are symbols rather than letters. I write all of this down in the programmer test as comments so I don’t forget about these cases.

// shouldExpandLOLIfLowerCase
// shouldNotExpandLOLIfMixedCase
// shouldNotExpandLOLIfInsideWord
// shouldExpandIfSurroundingCharactersAreNotLetters

I then start working through this list of test cases to enhance the message expansion capabilities in Jitter.
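Note that in the tests below, session has become a field of the test class rather than a local variable. A setup method along these lines (my assumption, since the fixture is not shown in the original post) keeps each test short:

    private JitterSession session;

    @Before
    public void createMockSession() {
        // Requires org.junit.Before plus the static Mockito imports used earlier.
        session = mock(JitterSession.class);
    }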

@Test
public void shouldExpandLOLIfLowerCase() {
    when(session.getNextMessage()).thenReturn("Expand lol please");
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo("Expand laughing out loud please"));
}

This forced me to use the java.util.regex.Pattern class to handle case insensitivity.

public String getNextMessage() {
    String msg = session.getNextMessage();
    return Pattern.compile("LOL", Pattern.CASE_INSENSITIVE).matcher(msg).replaceAll("laughing out loud");
}

Now make it so mixed case versions of LOL are not expanded.

@Test
public void shouldNotExpandLOLIfMixedCase() {
    String msg = "Do not expand Lol please";
    when(session.getNextMessage()).thenReturn(msg);
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo(msg));
}

This forced me to stop using the Pattern.CASE_INSENSITIVE flag in the pattern compilation. Instead I tell it to match only “LOL” or “lol” for replacement.

public String getNextMessage() {
    String msg = session.getNextMessage();
    return Pattern.compile("LOL|lol").matcher(msg).replaceAll("laughing out loud");
}

Next we’ll make sure that if LOL is inside a word it is not expanded.

@Test
public void shouldNotExpandLOLIfInsideWord() {
    String msg = "Do not expand PLOL or LOLP or PLOLP please";
    when(session.getNextMessage()).thenReturn(msg);
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo(msg));
}

The pattern matching is now modified to require whitespace around each valid variation of the LOL shorthand.

return Pattern.compile("\\sLOL\\s|\\slol\\s").matcher(msg).replaceAll("laughing out loud");

Finally, it is important that LOL still expands when the characters around it are not letters.

@Test
public void shouldExpandIfSurroundingCharactersAreNotLetters() {
    when(session.getNextMessage()).thenReturn("Expand .lol! please");
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo("Expand .laughing out loud! please"));
}

The final implementation of the pattern matching code looks as follows.

return Pattern.compile("\\bLOL\\b|\\blol\\b").matcher(msg).replaceAll("laughing out loud");

I will defer refactoring this implementation until I have to expand additional instances of shorthand text. It just so happens that our acceptance criteria for the user story ask that AFAIK and TTYL be expanded as well. I won’t show the code for the other shorthand variations in the acceptance criteria. However, I do want to discuss how the focus on “what should the software do next” drove the design of this small component.

Driving software development with TDD, focusing on what the software should do next, helps guide us to implement only what is needed, with 100% programmer test coverage for all lines of code. Those who have some experience with object-oriented programming will tend to implement the code with high cohesion (modules focused on specific responsibilities) and low coupling (modules that make few assumptions about the other modules they interact with). This is supported by the disciplined application of TDD. The failing programmer test represents something that the software does not do yet. We focus on modifying the software with the simplest implementation that will make the programmer test pass. Then we focus on enhancing the software’s design in the refactoring step. It has been my experience that refactoring represents most of the effort expended when doing TDD effectively.
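As a footnote to the deferred refactoring mentioned above, one possible direction (purely a sketch of my own, not code from the original session) is to drive the replacement from a map of shorthand to expansions, so that AFAIK and TTYL follow the same word-boundary rules as LOL:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class MessageExpander {

    private static final Map<String, String> EXPANSIONS = new LinkedHashMap<String, String>();
    static {
        EXPANSIONS.put("LOL", "laughing out loud");
        EXPANSIONS.put("AFAIK", "as far as I know");
        EXPANSIONS.put("TTYL", "talk to you later");
    }

    private final JitterSession session;

    public MessageExpander(JitterSession session) {
        this.session = session;
    }

    public String getNextMessage() {
        String msg = session.getNextMessage();
        for (Map.Entry<String, String> entry : EXPANSIONS.entrySet()) {
            String shorthand = entry.getKey();
            // Expand only the exact upper or lower case shorthand, on word boundaries.
            Pattern pattern = Pattern.compile(
                    "\\b" + shorthand + "\\b|\\b" + shorthand.toLowerCase() + "\\b");
            msg = pattern.matcher(msg).replaceAll(entry.getValue());
        }
        return msg;
    }
}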

Top 25 Open Source Projects — Recommended for Enterprise Use

This is a bit off my usual topics for this blog, but I am a heavy open source user and this article is something that I hope reaches more enterprise operations staff, managers, and executives. I have been using open source tools, libraries, and platforms to build and deploy production applications for over 12 years now. Open source tools can do almost anything commercial products are able to do and have transformed the software industry in that time span. The list given in the article contains open source projects that I would recommend and have used in the past, either directly or indirectly, including the *nix tools and libraries shown.

I would like to add to this listing with some of the tools I have come to use often:

  • Maven 2.x+ (http://maven.apache.org/)
  • JBoss (http://www.jboss.org/)
  • Rio/Jini/Apache River (http://incubator.apache.org/river/RIVER/index.html)
  • Apache Commons (http://commons.apache.org/)
  • Subversion (http://subversion.tigris.org/)
  • Apache Web Server (http://httpd.apache.org/)
  • Bouncy Castle (http://www.bouncycastle.org/)
  • Time and Money (http://timeandmoney.sourceforge.net/)
  • Spring Framework (http://www.springframework.org/)
  • Hadoop (http://hadoop.apache.org/)
  • Ruby on Rails (http://www.rubyonrails.org/)

These are some of the open source tools that I have used and still use on my projects. What are your favorites that were not on the list?

Executable Design — A New Name for TDD?

For multiple years now I have thrown around the name “Executable Design” to describe Test-Driven Development (TDD) and how it is used as a design tool rather than a test-centric one. The name TDD itself causes problems for those who are initially introduced to the technique. As a coach I was looking for a way to introduce it without it being stereotyped as extra tests inhibiting more code getting delivered.

From my readings of multiple books, articles, and blog posts, along with my own experiences with TDD, the content I am about to distill is not new. This post is entirely about explaining the technique in a way that garners interest quickly. There are multiple pieces to “Executable Design” beyond the basic process of:

  • Red, Green, Refactor or
  • Write Test, Write Code, Refactor

These statements and the technique they describe are the basis for practicing Executable Design, but they are not sufficient for describing the value and nuance of the practice. I won’t be able to present it sufficiently in a single blog post either, but I do want to present the basic principles.

While in a meeting with a team recently we were presented with a question I have heard often:

“Why should we use TDD?”

There are many reasons, but generic reasoning alone is not sufficient. We discussed the safety net that good code coverage creates. We discussed the reasons system tests do not take the place of unit tests. Then we started to touch on design, and this is where it got interesting (as it usually does about this time for me). Before I describe the rest of this discussion, I want to present what led up to this meeting.

A coach that I highly respect seemed a bit preoccupied one day when he wandered into my team’s area. I asked him what was going on, and he told me about some of his issues with the current team he was coaching. He wondered why they were not consistently using TDD in their day-to-day development. The team had allowed a card saying “We do TDD” onto their Working Agreement and were not adhering to it.

I happened to know a bit about the background of this project, which our company has been working on for over 2 1/2 years. There is a significant legacy codebase, developed over many more years, with poor design, multiple open source libraries included, and heavy logic built into relational database stored procedures. Also, just recently, management on the client’s side had changed significantly and caused quite a shake-up in terms of their participation in and guidance of release deliverables. Yet the team was supposed to deliver on a date with certain features that were not well defined. This led me to describe the following situations that a coach can find their way into:

  1. You could come into a team that has limited pressure on features and schedule and has considered the impact of learning a new technique such as Executable Design. Also, they have asked for a coach to help them implement Executable Design effectively. This is a highly successful situation for a coach to enter.
  2. You could come into a team that has deadline pressures but has some leeway on features, or vice versa, and has considered the impact of learning a new technique such as Executable Design within their current release. Also, they have asked for a coach to help them implement Executable Design effectively. This is somewhat successful, but the pressures of the release rise and fall in this situation and may impact the effectiveness of the coaching.
  3. You could come into a team that has deadline pressures and has not considered implementing Executable Design seriously as a team. Also, they have NOT asked for a coach and yet they have gotten one. The coach and the techniques they are attempting to help the team implement may seem like a distraction to the team’s real work of delivering a release. This is usually not successful and please let me know if you are a person who is somewhat successful in this situation because we could hire you.

The current team’s situation seemed to be most like #3 above, and therefore the lack of success in helping the team adopt TDD did not surprise me. I also started to play devil’s advocate and provided a list of reasons for this team NOT to do TDD:

  • At current velocity the team is just barely going to make their release date with the minimum feature set
  • Not enough people on the team know how to do TDD well enough to continue its use without the coach
  • The architecture of the system is poor since most logic is captured in Java Server Pages (JSP) and stored procedures
  • The code base is large and contains only about 5-10% test coverage at this time
  • It sometimes takes 10 times longer to do TDD than to just add the functionality desired by the customer

This is not the full list, but you get the picture. Don’t get me wrong: to me, the list above makes the case for Executable Design, but if the team does not have enough experience to implement it effectively, it can seem like overhead with little benefit to show for it.

After discussing this, and more that I won’t go into, he told me about a couple of things he could do to help the team. One of them was to work on minimizing the reasons for not doing Executable Design by discussing them with the team’s ScrumMaster and capturing them as actions on the impediments list. Some of those actions would go to upper management, who get together each day and resolve impediments at an organizational level. One of the actions was to get our CTO and myself into a room with the team so they could ask the question “why should we do TDD?”.

Now we were in the room, and most of the team members had been exposed to TDD through pairing sessions. Some of them had ideas about where TDD was useful and why they thought it was not useful on this project. During the discussion, one of the team members brought up a great discussion point:

“One of the problems with our use of TDD is that we are not using it for improving the design. If we just create unit tests to test the way the code is structured right now it will not do any good. In fact, it seems like we are wasting time putting in unit tests and system tests since they are not helping us implement new functionality faster.”

This team member had just said in the first sentence what I instinctively think when approaching a code base. The reason to do TDD is not just to create code coverage but to force design improvement as the code is being written. This is why I call TDD, and the best-known principles and practices for applying it, Executable Design. If you are not improving the design of the application then you are not doing Executable Design. You might just be adding tests.

Some of the principles I have found helpful in applying Executable Design effectively are (and most, if not all, of these are not something I came up with):

  • Don’t write implementation code for your application without a failing unit test
  • Separate unit tests from system and persistence tests. (as described in this previous blog entry)
  • Create interfaces with integration points in a need-driven way (as described in this previous blog entry)
  • Always start implementing from the outside in (such as in Behavior-Driven Development and as described in this previous blog entry)
  • Mercilessly refactor the code you are working on to an acceptable design (the limits of which are described in this previous blog entry)
  • Execute your full “unit test” suite as often as possible (as described in this previous blog entry)
  • Use the “campground rules” of working in the code: “Leave the site in better shape than when you arrived”
  • Create a working agreement that the whole team is willing to adhere to, not just what the coach or a few people think are the “right” agreements to have.

Try these out on your own or with your team and see how they work for you. Modify as necessary and always look for improvements. There are many thought leaders in the Agile community that have written down important principles that may work for you and your team.

And finally, now that I have filled an entire blog post with “Executable Design”, what do people think about the name? It has worked for me in the past to explain the basic nature of TDD, so I will keep using it either way, unless others have better names that I can steal.