Category Archives: Maven

Our Book is Available: Managing Software Debt – Building for Inevitable Change


I am quite happy that the book that took much of my time over the past couple of years has finally come out. Thank you Addison-Wesley for asking me to write a book. Also, I want to thank Jim Highsmith and Alistair Cockburn for accepting the book into their Agile Software Development Series. Finally, I have to thank all of those who have guided, influenced, and supported me over my career and life, with special thanks to my wife and kids who put up with me during the book’s development. My family is truly amazing and I am very lucky to have them!

Using Sonar Metrics to Assess Promotion of Builds to Downstream Environments

For those of you who don’t already know about Sonar, you are missing an important tool in your quality assessment arsenal. Sonar is an open source platform for managing your software’s quality. The image below shows one of the main dashboard views that teams can use to get insight into their software’s health.

The dashboard provides rollup metrics out of the box for:

  • Duplication (probably the biggest Design Debt in many software projects)
  • Code coverage (amount of code touched by automated unit tests)
  • Rules compliance (identifies potential issues in the code such as security concerns)
  • Code complexity (an indicator of how easily the software will adapt to meet new needs)
  • Size of codebase (lines of code [LOC])

Before going into how to use these metrics to assess whether to promote builds to downstream environments, I want to preface the conversation with the following note:

Code analysis metrics should NOT be used to assess teams; they are most useful when considering how they trend over time.

Now that we have this important note out of the way and, of course, nobody will ever use these metrics for “evil”, let’s discuss pulling data from Sonar to automate assessments of builds for promotion to downstream environments. For those who are unfamiliar with automated promotion, here is a simple, happy-path example:

A development team makes some changes to the automated tests and implementation code of an application and checks their changes into source control. A continuous integration server detects that source control artifacts have changed since the last time it ran a build cycle and updates its local artifacts to incorporate the most recent changes. The continuous integration server then runs the build by compiling, executing automated tests, running Sonar code analysis, and deploying the resulting artifact to a waiting environment, usually called something like “DEV”. Once deployed, a set of automated acceptance tests is executed against the DEV environment to validate that basic aspects of the application still work from a user perspective. Sometime after all of the acceptance tests pass (this could be twice a day or whatever timeline works for those using the downstream environments), the continuous integration server promotes the build from the DEV environment to a TEST environment. Once deployed there, the application might run alongside other dependent or sibling applications, and integration tests are run to ensure successful deployment. There could be more downstream environments, such as PERF (performance), STAGING, and finally PROD (production).

The tendency for many development teams and organizations is to assume that if the tests pass, the build is good enough to move into downstream environments. This is definitely an enormous improvement over extensive manual testing and stabilization periods on traditional projects. An issue that I have still seen is the slow introduction of software debt as an application is developed. Highly disciplined technical practices such as Test-Driven Design (TDD) and Pair Programming can help stave off extreme software debt, but these practices are still not commonplace among software development organizations. This is usually due not to a lack of clarity about the practices themselves, but to excessive schedule pressure, legacy code, and the initial hurdle of learning how to apply them effectively. In the meantime, we need a way to assess the health of our software applications beyond tests passing, in the internals of the code and tests themselves. Sonar can be easily added to your infrastructure to provide insights into the health of your code, but we can go even beyond that.

The Sonar Web Services API is quite simple to work with. The easiest way to pull information from Sonar is to call a URL:

http://nemo.sonarsource.org/api/resources?resource=248390&metrics=technical_debt_ratio

This will return an XML response like the following:

<resources>
  <resource>
    <id>248390</id>
    <key>com.adobe:as3corelib</key>
    <name>AS3 Core Lib</name>
    <lname>AS3 Core Lib</lname>
    <scope>PRJ</scope>
    <qualifier>TRK</qualifier>
    <lang>flex</lang>
    <version>1.0</version>
    <date>2010-09-19T01:55:06+0000</date>
    <msr>
      <key>technical_debt_ratio</key>
      <val>12.4</val>
      <frmt_val>12.4%</frmt_val>
    </msr>
  </resource>
</resources>

Within this XML, there is a section called <msr> that includes the value of the metric we requested in the URL, “technical_debt_ratio”. The ratio of technical debt in this Flex codebase is 12.4%. With this information we can look for increases over time to identify technical debt earlier in the software development cycle. So, if the ratio increased beyond 13% after being at 12.4% one month earlier, this could tell us that technical issues are creeping into the application.

Another way that the Sonar API can be used is from a programming language such as Java. The following Java code will pull the same information through the Java API client:

// Imports from the Sonar Java API client (sonar-ws-client)
import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
Resource commons = sonar.find(ResourceQuery.createForMetrics("248390",
        "technical_debt_ratio"));
System.out.println("Technical Debt Ratio: " +
        commons.getMeasure("technical_debt_ratio").getFormattedValue());

This will print “Technical Debt Ratio: 12.4%” to the console from a Java application. Once we are able to capture these metrics, we can save them as trend data for the automated promotion scripts that deploy builds to downstream environments. Some guidelines we have used in the past for these types of metrics are:

  • Small changes in a metric’s trend do not constitute immediate action
  • No more than 3 metrics should be trended (the typical 3 I watch for Java projects are duplication, class complexity, and technical debt)
  • The development team should decide what reasonable guidelines are for indicating problems in the trends, such as technical debt +/- 0.5% (a minimal sketch of such a gate follows this list)
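To make that last guideline concrete, here is a minimal sketch of such a promotion gate using the same Sonar Java API client shown above. The resource id, threshold, and previously recorded value are assumptions for illustration; a real script would persist its trend data between builds.

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

/**
 * A hypothetical promotion gate: compares the current technical debt
 * ratio against the value recorded when the last build was promoted and
 * exits non-zero (which a deployment script can check) if the increase
 * exceeds the team's guideline.
 */
public class PromotionGate {

    // Team guideline from the list above: no more than +0.5 percentage points
    private static final double MAX_INCREASE = 0.5;

    public static void main(String[] args) {
        Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
        Resource project = sonar.find(ResourceQuery.createForMetrics(
                "248390", "technical_debt_ratio"));
        double current = project.getMeasure("technical_debt_ratio").getValue();

        // In a real script this would be read from wherever the promotion
        // job persists its trend data (a file, a database, etc.).
        double lastPromoted = 12.4;

        if (current - lastPromoted > MAX_INCREASE) {
            System.err.println("Technical debt ratio rose from " + lastPromoted
                    + "% to " + current + "%; stopping promotion.");
            System.exit(1);
        }
        System.out.println("Debt trend OK at " + current + "%; promoting build.");
    }
}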

In the automated deployment scripts, these trends can be used to stop deployment of the next build that passed all of its tests, and emails can be sent to the development team identifying the metric culprit. From there, teams can go into the Sonar dashboard and drill down into the metric to see where the software debt is creeping in. Also, a source control diff showing which files changed between the builds that made the trend go haywire can be included in the email, perhaps as a listing per build with the metric variations for each.

This is a deep topic that this post only begins to introduce. If your organization has a separate configuration management or operations group that manages environment promotions beyond the development environment, Sonar and its web services API can help further automate early identification of software debt in your applications before it pollutes downstream environments.

Extreme Feedback from My Tools – Part 1: Maven 2 Configuration

Feedback

For many years now, it has been a goal of mine to get feedback as early as possible when developing software. Past blog entries here and here have discussed how we can approach increased feedback. A tweet from Jason Gorman mentioned his list of tools that provide continuous feedback on his code and design: “Emma, Jester, XDepend, Checkstyle and Simian”. This inspired me to write a post on how I approach setting up project reporting and my IDE to provide increased feedback. This article will be the first part of a series on “Extreme Feedback from My Tools” and will focus on Maven 2 configuration and reporting.


Maven is my tool of choice for managing builds, versioning, deployment, and test execution. It wouldn’t hurt my feelings if teams I worked on used Ant, make, or other scripting methods to manage these, but I find that tends to be more difficult overall. For those who are alright with using Maven, here is a look at different aspects of a typical POM file configuration I use:

<build>
  <plugins>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.4.2</version>
      <configuration>
        <includes>
          <include>**/When*.java</include>
        </includes>
        <redirectTestOutputToFile>true</redirectTestOutputToFile>
        <trimStackTrace>false</trimStackTrace>
        <useSystemClassLoader>false</useSystemClassLoader>
      </configuration>
    </plugin>
  </plugins>
</build>

The above portion of the POM file contains configuration for all Maven execution scenarios for this project. The first plugin, “maven-compiler-plugin”, sets the expected source code compliance and the JVM version that the compiled binary will target. The “maven-surefire-plugin” executes tests such as those developed with JUnit and TestNG. Because my approach is to take a more BDD-like naming convention and style for test cases, this POM is configured to execute unit tests whose names start with the word “When” in the test source directory (by default, “src/test/java”). Having the full stack trace from test execution failures is essential to effectively debugging the automated build and tests, so the configuration makes sure stack traces are not trimmed in the output file. Finally, some code that I created in the recent past needed to find classes on the Maven classpath, and through much debugging I found that surefire uses the system class loader by default, so I now make sure to set it to use the Maven class loader instead.
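To make the naming convention concrete, here is a small unit test that the surefire pattern above would pick up; the scenario is invented purely for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Matched by the surefire <include>**/When*.java</include> pattern above;
// the behavior under test is trivial and exists only to show the naming style.
public class WhenConcatenatingStrings {

    @Test
    public void shouldPreserveTheOrderOfTheOperands() {
        assertEquals("foobar", "foo" + "bar");
    }
}

Next in the POM comes the reporting section: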

<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>2.3</version>
      <configuration>
        <linkXref>true</linkXref>
        <targetJdk>1.5</targetJdk>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.4</version>
      <configuration>
        <formats>
          <format>html</format>
          <format>xml</format>
        </formats>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jdepend-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>dashboard-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>2.2</version>
    </plugin>
  </plugins>
</reporting>

Reports are effective at giving the team early indicators of potential problems in their project artifacts. Teams tend to find that trends are more valuable than specific targets in the generated reports. If code coverage is going down, we ask ourselves “why?”. If more defects are being detected by source code analysis tools, then we can look at how to change our approach to reduce the frequency of these issues. The 5 plugins used in this POM report on different perspectives of the software artifacts and can help find problematic trends early.

When the continuous integration server successfully executes the build and automated tests, the Maven reporting command is executed to generate these reports. This happens automatically and is shown on our video monitor “information radiator” in the team area.

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.7</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-all</artifactId>
    <version>1.8.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>

We make sure to update the POM to use JUnit 4 so that our team can use annotations and better names for the tests. Also, Mockito has become my favorite mock object framework since it stays away from the “replay” confusion of other mock frameworks (or at least their old versions) and also has a BDDMockito class that enables our team to use the given/when/then construction for our tests.
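As a small, hypothetical illustration of that given/when/then construction (the mocked List simply stands in for a real collaborator):

import static org.junit.Assert.assertEquals;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.List;

import org.junit.Test;

public class WhenAskingTheCollaboratorForAValue {

    @Test
    @SuppressWarnings("unchecked")
    public void shouldReturnTheStubbedValue() {
        // given: a collaborator stubbed with a known value
        List<String> collaborator = mock(List.class);
        given(collaborator.get(0)).willReturn("stubbed value");

        // when: the code under test asks the collaborator for the value
        String result = collaborator.get(0);

        // then: the stubbed value comes back and the interaction happened
        assertEquals("stubbed value", result);
        verify(collaborator).get(0);
    }
}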

Once your POM file is configured with these reporting plugins, you can generate the reports by executing the ‘site’ life cycle in Maven:

mvn site

Part 2 of this series of articles will discuss configuration of an Eclipse IDE environment for Extreme Feedback.

HOWTO: Maven + StoryTestIQ + Selenium RC

StoryTestIQ is an automated acceptance testing tool that was originally a mashup of 2 existing open source projects, Selenium and FitNesse. StoryTestIQ is often shortened to STIQ (pronounced “stick”) so it rolls off the tongue more easily. STIQ takes the idea of testing inside the browser a la Selenium and enables editing, tagging, and multi-table execution through a modified FitNesse under the covers. The version control from FitNesse was removed so that the generated wiki files can be checked in, without any binaries, alongside the code those tests execute against. The multi-table execution allows test components, written in “selenese”, to be refactored out of larger test cases and included back into them. There are many more modifications that the development team, headed by Paul Dupuy, has made to enhance STIQ’s capabilities beyond running Fit and Selenium tests, but this is enough background for now.

During STIQ’s development, we created a Maven 2 plugin, imaginatively named maven-stiq-plugin. This plugin did only one thing: start the STIQ server for your project without having to run Java from the command line. In the past couple of days, I have finally had enough time and desire to develop, in the 2.2-SNAPSHOT version, an export of STIQ tests into Selenium RC compliant “selenese” so they can also be executed during your Maven integration-test cycle. So, let’s get down to the “how”.

First, add the STIQ Maven 2 repository to your POM (pom.xml file for your project) as shown below: (NOTE: updated with correction on URL from original posting)

<repositories>
...
  <repository>
     <id>STIQ Sourceforge</id>
     <url>https://storytestiq.svn.sourceforge.net/svnroot/storytestiq/trunk/www/maven2/</url>
  </repository>
</repositories>

We must also put the STIQ Maven 2 repository into our plugin repositories within the POM so that Maven can find the maven-stiq-plugin to execute during the integration-test cycles: (NOTE: updated with correction on URL from original posting)

<pluginRepositories>
...
  <pluginRepository>
     <id>STIQ Plugins Sourceforge</id>
     <url>https://storytestiq.svn.sourceforge.net/svnroot/storytestiq/trunk/www/maven2/</url>
  </pluginRepository>
</pluginRepositories>

Next, we will put in the maven-stiq-plugin configuration.

<plugin>
  <groupId>net.sourceforge.storytestiq</groupId>
  <artifactId>maven-stiq-plugin</artifactId>
  <version>2.2-SNAPSHOT</version>
  <executions>
    <execution>
      <id>export</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>export</goal>
      </goals>
      <configuration>
        <pageRootPath>repository</pageRootPath>
        <suiteWikiPagePath>ProjectRoot.StoryTests</suiteWikiPagePath>
      </configuration>
    </execution>
    <execution>
      <id>exec</id>
      <phase>integration-test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <userExtensions>src/main/resources/user-extensions.js</userExtensions>
        <browserType>*firefox</browserType>
        <suiteFile>target/stiq/ProjectRoot.StoryTests.html</suiteFile>
      </configuration>
    </execution>
  </executions>
</plugin>

Now, to tell you a little bit about what is going on in the above configuration. The <groupId> and <artifactId> elements describe which plugin to grab from the plugin repository and use in the project. In the executions section, we define 2 separate execution elements. The first execution is called “export”. This execution occurs during the “pre-integration-test” phase within the full Maven 2 build life cycle. The goal we run on the maven-stiq-plugin, similar to an Ant target, is “export”, which exports our StoryTestIQ acceptance tests as “selenese” to run within the Selenium RC server. The configurations shown above are <pageRootPath>, which is the directory located below your top-level project directory where the StoryTestIQ tests are located, and <suiteWikiPagePath>, which is the wiki page location of the top-level suite including all of the tests to export. If you don’t already have STIQ tests, please go to http://storytestiq.sf.net to find out how to get started.

The second execution element is called “exec”. This execution runs during the “integration-test” phase of the Maven build life cycle and executes the exported tests using the Selenium RC server. The configurations for this goal are <userExtensions>, which is where any new selenese actions specific to your project are defined, <browserType>, which is the web browser to execute the tests within, and <suiteFile>, which is where the exported selenese tests were generated during the “export” goal execution. By convention, the generated selenese file is located under the “target/stiq” directory, with the file named <suiteWikiPagePath>.html.

Now you can run the usual ‘install’ command in your project’s top-level directory:

mvn install

This should compile your code and execute all of your unit tests; then, during the integration-test phase, it will run the maven-stiq-plugin goals “export” and “exec”, in that order. During the “exec” goal execution, the web browser will open in the background and you will see the STIQ tests run. After the tests have executed, the web browser will close and the Maven build life cycle will complete.

NOTE: If you are having trouble with “*firefox” as <browserType>, you might be running into a current bug in Selenium RC server version 1.1.1. An upcoming release will include a fix, and we will update the dependency once we see the update. For now, the workaround is to go back to Firefox version 3.5.3 or to switch to a different browser, as listed in the Selenium RC documentation.

There is still much more to do with the plugin before getting to a release version of 2.2. Please comment on this blog post with any suggestions or issues that you have with the plugin and its configuration. If you are interested, do a search on StoryTestIQ within this blog to find out more about using StoryTestIQ, or visit the main project page at http://storytestiq.sf.net. Thank you.

Top 25 Open Source Projects — Recommended for Enterprise Use

This is a bit off my usual topics for this blog, but I am a heavy open source user and this article is something I hope reaches more enterprise operations staff, managers, and executives. I have been using open source tools, libraries, and platforms to build and deploy production applications for over 12 years now. Open source tools can do almost anything commercial products can do and have transformed the software industry in that time span. The list given in the article contains open source projects that I would recommend and have used in the past, either directly or indirectly, including the *nix tools and libraries shown.

I would like to add to this listing with some of the tools I have come to use often:

  • Maven 2.x+ (http://maven.apache.org/)
  • JBoss (http://www.jboss.org/)
  • Rio/Jini/Apache River (http://incubator.apache.org/river/RIVER/index.html)
  • Apache Commons (http://commons.apache.org/)
  • Subversion (http://subversion.tigris.org/)
  • Apache Web Server (http://httpd.apache.org/)
  • Bouncy Castle (http://www.bouncycastle.org/)
  • Time and Money (http://timeandmoney.sourceforge.net/)
  • Spring Framework (http://www.springframework.org/)
  • Hadoop (http://hadoop.apache.org/)
  • Ruby on Rails (http://www.rubyonrails.org/)

These are some of the open source tools that I have used and still use on my projects. What are your favorites that were not on the list?

How To Create a Maven Plugin

Background on My Use of Maven

A few years ago I made the transition from using Ant for my Java project builds to the mostly wonderful world of Maven. In Maven’s previous incarnation there were many issues in using the tool. One of the major points of contention was Jelly and its executable XML model. For me, the benefits outweighed these issues. The benefits included good dependency management, a strong project structure, easy-to-integrate plugins, great reporting facilities, and the ultimate dashboard for viewing all pertinent project information.

One large problem that Maven 1.x had was the learning curve. With Ant, many developers had direct access, similar to their programming tools and environments, to launching functions or “targets”. Ant provided a relatively simple set of targets which made for a usable API for developing complex build scripts. I was a heavy user of Ant for quite a few years before I came across Maven. For me, Ant did the job of building and providing valuable details for my projects better than makefiles, but there was a catch. In order to make my build environment extendable I would have to introduce a large amount of indirection, without a nice IDE to help swallow the cost of maintaining my scripts. The Ant builds that I created, and those created by others on projects I worked on for that matter, started to get broken up into multiple files with interesting property file loading mechanisms. Not only that, each project had its own way of solving the build script bloat, which created a large cost in knowledge transfer and maintenance.

Then Maven came along. Most of the extra targets I had been creating to generate reports on my unit tests, code statistics, and documentation were now just plugins that I could integrate into my Maven artifacts. There were only two files that I needed, project.xml and project.properties, unless I needed to extend Maven, in which case I added functions or “goals” to a maven.xml file. All of the plugins had access to the data present in your POM (project object model). This data included source control management, project developers, versions, issue tracking URL, and other project detail information. The fact that running these goals was slower than my previous Ant scripts was overshadowed by the fact that I could run `maven eclipse` and import my project into the Eclipse IDE. Also, I no longer had to think about how to check jar archives into my source repository without filling up the file system, since the source control management had no way to diff binary files.

Of course, Maven 1.x had quite a few warts, which is to be expected for a 1.x release. I found ways to work around many impediments, and it always got the job done. And then came the release of Maven 2.0, and my expectations jumped up a few notches. On my first day of using Maven 2.0 I could see how this upgrade was going to make my life even easier. Now there was only one file to put all of my project details into, pom.xml. I had already created a plugin for Jini in the Maven 1.x plugin paradigm. The creation of this plugin had me up late on many an evening for two weeks. I finished, but not without some heavy ground-to-air attacks on Jelly. This made me a bit hesitant to upgrade the plugin to Maven 2.0. One weekend night I made the decision to go forward with the upgrade. To my astonishment, I was finished by Sunday night, and it was just plain old Java development. What a nice surprise.

Now to the Good Part

In order to create a new plugin project with Maven 2.0, you can use the “mojo” archetype by issuing the following command:

mvn archetype:create -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-mojo \
    -DarchetypeVersion=1.0-alpha3 \
    -DgroupId=com.mybusinessname.maven.plugin \
    -DartifactId=mymojoprojectname

This command can be broken down into the plugin goal for archetype execution, the specific archetype to use, and the new plugin’s project information. The plugin goal to execute is “archetype:create”, which is a reference to a plugin called “archetype” with a goal called “create”. Upon execution, the “create” goal looks for information about where to get the archetype artifacts for generating a new project. This information is contained within the property values for “archetypeGroupId”, “archetypeArtifactId”, and “archetypeVersion”, which describe how to find the archetype inside a Maven repository to download and use in the execution process. Finally, the “groupId” and “artifactId” property values describe the namespace and the project name to use when generating the new project directory structure. In this case, a directory called “mymojoprojectname” would be created in the current directory, and the groupId would be used as the package name in the Java files and as the groupId inside your pom.xml.

Now that we have a new project created we can run the following to install our plugin inside our local Maven repository, which is usually located inside your home directory under “.m2/repository”:

mvn install

This should create a jar file named “mymojoprojectname-1.0-SNAPSHOT.jar” inside the target directory, which was generated during the execution of the “install” build life cycle. The jar will also be copied into your local Maven repository under the “${HOME}/.m2/repository/com/mybusinessname/maven/plugin/mymojoprojectname/1.0-SNAPSHOT/” directory. As you can see, the “groupId” property was expanded into a directory structure in which the artifacts are placed.

Now that you have successfully built your plugin, which does not yet do anything you intend it to do, we can modify the MyMojo.java class located in the “src/main/java/com/mybusinessname/maven/plugin/” directory. As you can see, the main source directory in the Maven 2.0 suggested structure is “src/main/java”. Inside of that, the “groupId” property value was again expanded into the package directory structure for the plugin source. Since I use the Eclipse IDE for my Java development, I am inclined to use the Eclipse plugin, which is executed against the new plugin project by running the following command:

mvn eclipse:eclipse

This will generate the Eclipse IDE project files “.classpath” and “.project”, which make the project easily importable. Once you have executed this command, go into your Eclipse IDE and import an existing project into your workspace from the project directory. If you have not set up your Eclipse environment to work with Maven 2.0 before, you will have to add a classpath variable called “M2_REPO” to your IDE preferences. Select “Window->Preferences” from the main menu. Drill down the left side tree in the dialog to “Java->Build Path->Classpath Variables”. Click the “New” button and enter “M2_REPO” into the name field and “${HOME}/.m2/repository” into the path field, where “${HOME}” is your environment’s home directory, such as “C:\Documents and Settings\{username}” on Windows or “/home/{username}” on *nix. When you are finished, your workspace should rebuild your projects if you have “build automatically” selected in your IDE preferences.

Under the “src/main/java” source folder in Eclipse you will find a package named “com.mybusinessname.maven.plugin” with a class named “MyMojo” inside. Open the MyMojo class in your Java editor and you should see only one method, “execute()”, implemented from the abstract superclass AbstractMojo; it looks something like this:

package com.mybusinessname.maven.plugin;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

/**
 * Goal which touches a timestamp file.
 *
 * @goal touch
 *
 * @phase process-sources
 */
public class MyMojo extends AbstractMojo
{
    /**
     * Location of the output directory.
     * @parameter expression="${project.build.directory}"
     * @required
     */
    private File outputDirectory;

    public void execute() throws MojoExecutionException {
        // Body as generated by the mojo archetype: create a touch.txt
        // file in the project's output directory.
        File f = outputDirectory;
        if (!f.exists()) {
            f.mkdirs();
        }
        File touch = new File(f, "touch.txt");
        try {
            FileWriter w = new FileWriter(touch);
            w.write("touch.txt");
            w.close();
        } catch (IOException e) {
            throw new MojoExecutionException("Error creating file " + touch, e);
        }
    }
}

Also, you’ll notice that private class variables are declared as parameters to your new plugin by including the “@parameter” tag inside the javadoc comment for each variable. Now that you are ready to work on the details of your plugin, I will point you to the developing Java plugins documentation on the Maven web site, which gives more information about modifying your Mojo.
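For example, a second goal with a configurable parameter of its own might look something like the following sketch; the “greet” goal and “message” parameter are purely illustrative:

package com.mybusinessname.maven.plugin;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * A hypothetical goal that writes a message to the build log, showing
 * how a private field becomes a configurable plugin parameter.
 *
 * @goal greet
 */
public class GreetMojo extends AbstractMojo
{
    /**
     * The message to print; users can override this in the plugin's
     * configuration section of their POM or with -Dgreeting.message=...
     *
     * @parameter expression="${greeting.message}" default-value="Hello from my plugin"
     */
    private String message;

    public void execute() throws MojoExecutionException {
        // getLog() is inherited from AbstractMojo and writes to the build output
        getLog().info(message);
    }
}

Running the fully qualified goal, such as “mvn com.mybusinessname.maven.plugin:mymojoprojectname:1.0-SNAPSHOT:greet”, would then print the message during the build.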

Conclusion

Maven is not only a developer’s build tool. It provides a mechanism for easily distributing information about your project to external viewers such as project managers, customers, and IT management. I see this as a direct benefit to developers, since they no longer need continuous status conversations with these parties; the information is made available through continuous integration systems and generated project dashboards. Maven also provides great facilities for solving project configuration management issues in a consistent and manageable manner across all of the projects that use it. I hope that if you choose to use Maven in your projects, you will find it an incredibly valuable tool.

Perspective on Maven 2.0

The Maven version 2.0 release is definitely an advancement for business-driven software development. Many organizations struggle to keep up with all of their IT projects and the high cost of maintenance due to differing project build, documentation, design, and deployment processes. Maven is a great tool to help alleviate these costs by creating structure across projects while still allowing implementors to innovate within their own internal team processes. These processes are usually provided as Maven plug-ins, also called Mojos, which can be shared with other projects.

An extremely interesting result of the 2.0 release is the extension capability of the architecture. Carlos Sanchez recently wrote a blog entry entitled “Maven Ruby Mojo Support”. This support for other languages inside Maven is a great benefit to overall tool usage. With the increased popularity of interoperable platforms such as Java and .NET, all of these projects can share the same build tool. This ability would decrease project delivery, maintenance, and deployment costs tremendously for an organization that uses the tool wisely. This link shows the support for other languages in Maven 2.0. Besides the C and C++ plug-ins shown there, a C# plug-in is being created which will support compiling with the Windows .NET and Mono compilers along with NUnit and Visual Studio project support.

My own experience with writing a Maven 2.0 plug-in was extremely pleasant. The Maven Jini plug-in was originally created for Maven version 1.1 and took approximately 2 weeks to create due to issues with Jelly. Using the new Java Mojo style of developing plug-ins for version 2.0, I recreated the functionality of the original plug-in within 1 day. The Maven Jini plug-in version 2.0 will be released in the next couple of days on the default Maven 2.0 repository at http://www.ibiblio.org/maven2/. Please go to the Maven Jini plug-in home page for more details on how to use it.

Overall, Maven 2.0 is well worth the investment to learn about its capabilities. There are multiple improvements over the 1.x versions, such as transitive dependencies, performance, a configurable build life cycle, built-in multiple project handling, and a highly flexible architecture to build upon. I recommend taking it out for a spin if you get a chance.