Long-suppressed, in the back of my mind, a niggling doubt has lurked: perhaps JUnit isn't The Best Thing Since Sliced Bread?
Don't get me wrong, it is a great tool: powerful, easy to pick up and actually quite elegant in its simplicity. It is fair to say that JUnit has advanced software development practices and raised the profile of testing and continuous
integration. "Green is Good."
Two things have worried me, however. Firstly, the amount of work I have to put into building all these tests could well be more than that required to build the 'real' code, and thus represents a cost that not many
organisations are actually willing or able to pay.
The second issue goes like this: how do I know that my tests are actually relevant? I don't mean have they achieved the required coverage, but rather: are they testing things that the product owner would get value out of?
You Ain't Gonna Need It may well apply to tests as well as code.
Enter easyb, a so-called Behavior Driven Development tool.
easyb is a Groovy-driven Domain Specific Language for building executable specifications. It lets your customer write stories that can then be decorated to produce executable tests, and the results of those tests can be presented
back to the customer for validation. To me, this sounds a lot better: I can't for the life of me foresee a situation where I would plonk a customer down in front of my JUnit code…
Venkat Subramaniam has a good presentation on easyb.
An example is useful, I think.
Consider the following (Groovy) Class Under Test:
public class Test {
    // Despite the name, this closure squares positive inputs
    // and returns non-positive inputs unchanged.
    def doublePositive = { x ->
        x <= 0 ? x : x * x
    }
}
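It is worth pausing on what that closure actually does: despite the name, it squares positive inputs and passes non-positive inputs through unchanged. A quick stand-alone Groovy script confirms the values the scenarios below will assert:

```groovy
// Stand-alone sanity check of the Class Under Test.
class Test {
    // Squares positive inputs; returns non-positive inputs unchanged.
    def doublePositive = { x ->
        x <= 0 ? x : x * x
    }
}

def t = new Test()
assert t.doublePositive(0)  == 0    // non-positive: returned unchanged
assert t.doublePositive(-1) == -1   // non-positive: returned unchanged
assert t.doublePositive(5)  == 25   // positive: 5 * 5
println "all checks passed"
```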
It is probably not unreasonable (although I have met BAs and PMs who seem to regard themselves as a sort of firewall protecting the customer from the mad-haired, wild-eyed developers) to suppose that I could work with a customer to develop the
following:
description "Testing the Test Class"

narrative """The Test Class is a cornerstone of our application;
the PHB says it had better be correct!""", {
    as_a "Starving Developer"
    i_want "The Test Class to be bug-free"
    so_that "My boss gives me a raise"
}

scenario "0 is given as input", {
    given "A new instance of Test"
    then "Doubling 0 should produce 0"
}

scenario "-1 is given as input", {
    given "A new instance of Test"
    then "doubling -1 should produce -1"
}

scenario "5 is given as input", {
    given "A new instance of Test"
    then "doubling 5 should produce 25"
}
Now for the part that brought it home to me: once the customer is happy that these stories are relevant and realistic, I can take the specification and execute it as a pending specification…not particularly
useful, granted, but a solid start that lets me incrementally decorate the specification to perform real, useful testing. I end up with the following tests:
description "Testing the Test Class"

narrative """The Test Class is a cornerstone of our application;
the PHB says it had better be correct!""", {
    as_a "Starving Developer"
    i_want "The Test Class to be bug-free"
    so_that "My boss gives me a raise"
}

scenario "0 is given as input", {
    given "A new instance of Test", {
        test = new Test()
    }
    then "Doubling 0 should produce 0", {
        test.doublePositive(0).shouldBe 0
    }
}

scenario "-1 is given as input", {
    given "A new instance of Test", {
        test = new Test()
    }
    then "doubling -1 should produce -1", {
        test.doublePositive(-1).shouldBe(-1)
    }
}

scenario "5 is given as input", {
    given "A new instance of Test", {
        test = new Test()
    }
    then "doubling 5 should produce 25", {
        test.doublePositive(5).shouldBe 25
    }
}
I also get a nice report that I can give the customer to review:
3 scenarios executed successfully
Story: test story
Description: Testing the Test Class
Narrative: The Test Class is a cornerstone of our application; the PHB says it had better be correct!
As a Starving Developer
I want The Test Class to be bug-free
So that My boss gives me a raise
scenario 0 is given as input
given A new instance of Test
then Doubling 0 should produce 0
scenario -1 is given as input
given A new instance of Test
then doubling -1 should produce -1
scenario 5 is given as input
given A new instance of Test
then doubling 5 should produce 25
If something were wrong with the Class Under Test, it shows up clearly in the execution trace (for interest, note the use of gant as my build tool of choice for this little exercise):
C:\DEVELOPMENT\EB>gant
[easyb] easyb is preparing to process 1 file(s)
[easyb] Running test story story (TestStory.story)
[easyb] FAILURE Scenarios run: 3, Failures: 1, Pending: 0, Time Elapsed: 0.281 sec
[easyb] "doubling -1 should produce -1" -- expected -1 but was 1
[easyb] 3 total behaviors run with 1 failure
[easyb] easyb execution FAILED
Execution halted as behaviors failed
C:\DEVELOPMENT\EB>
It is also clear in the report:
3 scenarios executed, but status is failure! Total failures: 1
Story: test story
Description: Testing the Test Class
Narrative: The Test Class is a cornerstone of our application; the PHB says it had better be correct!
As a Starving Developer
I want The Test Class to be bug-free
So that My boss gives me a raise
scenario 0 is given as input
given A new instance of Test
then Doubling 0 should produce 0
scenario -1 is given as input
given A new instance of Test
then doubling -1 should produce -1 [FAILURE: expected -1 but was 1]
scenario 5 is given as input
given A new instance of Test
then doubling 5 should produce 25
Note that the easyb report is 'live': it isn't stuck away in a Word document somewhere to slowly decay into irrelevance; it clearly shows the result of the tests and will continue to do so. This is particularly important when one is
using continuous integration.
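For the curious, the Gant build driving these runs could look roughly like the sketch below. Be warned that this is an assumption-laden sketch, not a verified build file: the easyb Ant task class name (`org.easyb.ant.BehaviorRunnerTask`), the `easyb.classpath` path reference, and the `stories` directory are all placeholders that should be checked against your easyb distribution.

```groovy
// build.gant -- a minimal sketch; class names and paths are assumptions,
// check them against the easyb version you are actually using.
ant.path(id: 'easyb.classpath') {
    fileset(dir: 'lib', includes: '*.jar')   // hypothetical location of the easyb jars
}

ant.taskdef(name: 'easyb',
            classname: 'org.easyb.ant.BehaviorRunnerTask',
            classpathref: 'easyb.classpath')

target(test: 'Run the easyb stories') {
    easyb(classpathref: 'easyb.classpath', failureProperty: 'easyb.failed') {
        report(location: 'reports/stories.html', format: 'html')
        behaviors(dir: 'stories') {
            include(name: '**/*.story')
        }
    }
    ant.fail(if: 'easyb.failed', message: 'Execution halted as behaviors failed')
}

setDefaultTarget(test)
```

The `failureProperty`/`fail` pairing is what produces the "Execution halted as behaviors failed" behaviour seen in the trace above: the build stops whenever any story fails, which is exactly what you want under continuous integration.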
To recap: I now have a sequence of stories, written for, or perhaps by, my customer, that I, as a developer, am using as the basis for my test regime.
This I like!