The dangers of a passing test

Anyone who's done any automated testing knows that a failing test is a good thing. Why? Because it reveals problems. Those problems aren't necessarily in the code under test; they could be in the test itself, or even in the testing framework.

But a passing test? It might as well be invisible.

Might as well... actually, it's worse than that: a passing test gives the impression that the area of code it tests works.

When a test fails, the tester immediately jumps on the test case with questions: why did it fail? Is the test case working properly? Can I get information from the logs? Is the code actually broken? Passing tests, on the other hand, only get those questions asked when they're first written and, possibly, when they're added to the automation.

The issue is compounded when working with a large suite of regression tests. Once a suite becomes large enough, no one will ever have the time to go through it all and ensure the test cases still test the right areas.

The other problem with passing tests is that they can give the illusion that things are actually tested. Take the following code:

@Test
public void testSomeFeature() {
    doTheTest();
}

private boolean doTheTest() {
    // Precondition: the property should be set before the task runs
    if(!ourSystem.hasSomePropertySet()) {
        fail("Property not set");
    }

    ourSystem.performSomeTask();

    // The task is expected to unset the property
    if(ourSystem.hasSomePropertySet()) {
        fail("Property was not unset");
    }

    return !ourSystem.didError();
}

Spot the problem? It all looks sensible, but the return value from doTheTest is never used. So although the code runs and catches failures where the property isn't set or unset, if the system errors we never know. Worse would be the other way around: catching errors from the system, but missing some of the other behaviour.
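The fix is simple enough: feed the helper's result into an assertion so it can actually fail the test. A minimal sketch, assuming the same hypothetical ourSystem as above and JUnit 4's assertTrue:

import static org.junit.Assert.assertTrue;

@Test
public void testSomeFeature() {
    // Without the assertion, a false return is silently discarded
    assertTrue("System reported an error", doTheTest());
}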

Other problems arise when tests are not idempotent. A test that leaves state behind means the order in which tests run affects the outcome, leading to what some colleagues and I like to call "quantum tests" (tests which only fail when you look at them individually).
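For illustration, here's a minimal sketch of the kind of pair that produces one (the counter field and test names are hypothetical; @FixMethodOrder is there just to make the suite's ordering explicit):

import static org.junit.Assert.assertEquals;

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class QuantumTest {
    // Shared mutable state: the root cause of order-dependent tests
    private static int counter = 0;

    @Test
    public void test1_increment() {
        counter++;
        assertEquals(1, counter);
    }

    @Test
    public void test2_expectsIncrement() {
        // Passes in the full suite, because test1_increment has
        // already run; fails when executed on its own
        assertEquals(1, counter);
    }
}

Run the whole class and everything is green; run test2_expectsIncrement by itself and it fails.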

There's a lot of controversy in the testing world over automated tests. Some believe they provide next to no value; others see the value, but are careful about how much they automate.

Unfortunately, sometimes there's no avoiding a massive regression suite: a product for which backwards compatibility is a necessity, for example.

So what do you do? Well, that's a difficult question, and one I'm still trying to answer. If possible, when there's time, you go back and refactor the tests at least once each release. But often there are other things to be done, and tests stay passing for no good reason.

And that is the danger of a passing test.