The following paragraphs come from an email I sent in response to questions about FitNesse and StoryTestIQ on the Scrum development mailing list, which can be found here. I was reminded of this post to the mailing list by a recent entry on my friend’s blog here.
Good day Anne,
I will preface my email with the fact that I am one of the developers of StoryTestIQ (aka STIQ) and that we currently use it on many projects at SolutionsIQ. We created STIQ out of a need to help our Product Owners describe what they were asking for in their feature requests and, ultimately, their user stories. STIQ is a combination of Fit, FitNesse, and Selenium with some special sauce that allows you to do both web UI and beneath-the-UI acceptance testing.
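To make that concrete, here is a minimal sketch of what a web UI test might look like in STIQ’s Selenium-style wiki tables; the URL, locators, and expected text below are hypothetical:

```
|open|/login|
|type|username|annie|
|type|password|secret|
|clickAndWait|//input[@value='Log In']|
|verifyTextPresent|Welcome, Annie|
```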
Here is a scenario of how we use it:
Upon completion of the Sprint Planning Meeting we come out with the following artifacts related to the development of our acceptance tests:
* User Stories the team has committed to
* Acceptance Criteria (aka “Confirmation” from Ron Jeffries’ description of a user story)
* Some high-level communication about the user interactions, captured formally or informally
Our teams immediately go to work creating automated Acceptance Tests using this information and further collaboration with the Product Owner and other Subject Matter Experts (SMEs). After we have built up a good number of Acceptance Tests for our applications, we collect a set of utility scripts which get us to specific parts of the application or do repetitive things like logging in as multiple types of users. Usually we can put together some scaffolding for our Acceptance Tests rather quickly using these utility scripts, and much of the collaboration happens after this.
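For example, a login utility script might live on its own wiki page and be pulled into each test that needs it. This sketch assumes FitNesse-style !include composition; the page names and locators are hypothetical:

```
LoginAsAdmin (a reusable utility page):

|open|/login|
|type|username|admin|
|type|password|admin_pw|
|clickAndWait|login_button|

An Acceptance Test scaffolded from it:

!include .UtilityScripts.LoginAsAdmin
|open|/reports|
|verifyTextPresent|Quarterly Reports|
```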
After setting up our Acceptance Tests we “tag” them with the name of the Sprint so that we can create a test suite which represents all Acceptance Tests for the Sprint. All of the individual tests collected into this suite should fail to begin with, because we have written the expectations of the User Story’s Acceptance Criteria into the tests. Once a user story has tests ready (meaning failing tests created and accepted by our Product Owner), we begin coding using TDD and potentially creating more QA regression tests which go beyond the Acceptance Tests. These extra tests may go into a different tool (such as JMeter, QTP, DBUnit, xUnit, etc.) or may be added to STIQ.
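As a sketch of what a failing-first, beneath-the-UI Acceptance Test might look like, a Fit table can be paired with a small Java fixture; the fixture name and business rule here are hypothetical, not from a real project:

```
|DiscountCalculation|
|orderTotal|memberLevel|discount()|
|100.00|gold|10.00|
|100.00|basic|0.00|
```

```java
import fit.ColumnFixture;

// Hypothetical fixture backing the table above. Fit binds each table
// column to a public field and calls discount() to check the expected value.
public class DiscountCalculation extends ColumnFixture {
    public double orderTotal;
    public String memberLevel;

    public double discount() {
        // Stand-in for production code beneath the UI; the table rows
        // fail until the real rule is implemented, driving the TDD cycle.
        return "gold".equals(memberLevel) ? orderTotal * 0.10 : 0.0;
    }
}
```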
Once the team has developed all of the artifacts needed to meet their Definition of Done and the Acceptance Tests for a User Story are all passing, the User Story is “accepted”, ideally with validation from the Product Owner during the Sprint. We get continual feedback through a Continuous Integration tool that we use with STIQ, which shows whether all Acceptance Tests included in the build are passing (meaning the development has been worked through to make each test “pass”).
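One way to wire this up, sketched here with a plain FitNesse-style command (the suite name is hypothetical, and STIQ’s actual runner may differ), is to have the CI job execute the Sprint suite headlessly and break the build on a non-zero exit code:

```
java -jar fitnesse.jar -c "AcceptanceTests.SprintFifteen?suite&format=text"
```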
We are not perfect in creating all Acceptance Tests at the beginning of the Sprint with the Product Owner, SMEs, business analysts, etc. But we do get a very good start on capturing the intent of the Sprint deliverables. Many times, developing the code to satisfy an Acceptance Test identifies a potential issue which the Product Owner negotiates with the team to resolve. There are also times when the Product Owner, upon seeing the actual working code, decides to negotiate on specific details of an Acceptance Test. I believe that these tight feedback loops with the Product Owner increase our ability to deliver closer to what the customer wants without as much rework.
I hope this helps, and I would be interested in how you decide to move forward with Acceptance Testing, particularly in its automated capacity. In full disclosure, it is not an easy practice to work into your daily routines. There are many bumps and bruises along the way, but once it starts to settle out things get a whole lot better. I know for myself that I do not like creating software any other way.


