TDD is a software development practice that turns software requirements into test cases before their actual implementation. It encourages developers to write tests at the same time as, or even before, the production code. This has the advantage of locating bugs earlier, when they are cheaper to fix than bugs found later. Some communities, such as Smalltalk and eXtreme Programming, refer to this practice as “code a little, test a little” or “continuous integration, relentless testing”.
Code is not done just because its implementation has been written into the codebase. Code can only be considered done after all its tests pass. If a component cannot be tested, it is not modular enough; Decouple# it further to make room for more modular testing.
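A minimal sketch of what such decoupling can look like (the `parse_config_line` and `load_config` names here are hypothetical, for illustration only): mixing pure logic with file I/O makes a routine hard to unit-test, while splitting out the pure part lets it be tested in isolation.

```python
# Hypothetical example: the parsing logic is separated from file I/O,
# so the core can be unit-tested without touching the filesystem.

def parse_config_line(line):
    """Pure logic: trivially testable in isolation."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

def load_config(path):
    """Thin I/O wrapper around the testable core."""
    with open(path) as f:
        return dict(parse_config_line(l) for l in f if l.strip())
```

Only the thin wrapper still needs the real environment; everything else is modular and testable.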
Note: If a tester finds a bug, build a test for that bug. Once a human tester finds a bug, it should be the last time a human tester finds that bug.
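One way to follow this note, sketched with a hypothetical `median` routine: after fixing a reported bug (here, an invented even-length-input bug), pin it down with a test so no human ever has to rediscover it.

```python
import unittest

def median(values):
    """Hypothetical routine that once mishandled even-length input (now fixed)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

class TestMedianRegression(unittest.TestCase):
    def test_even_length_input(self):
        # Pins the previously reported bug: even-length lists must
        # average the two middle elements, not return just one of them.
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

    def test_odd_length_input(self):
        self.assertEqual(median([3, 1, 2]), 2)
```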
Unit Testing# is the foundation of this whole practice. That being said, other testing such as Integration Testing#, Validation and Verification of System Requirements#, Environmental Testing#, Performance Testing#, Usability Testing# and Runtime Diagnostics# should not be underestimated. All test cases should honour their contracts: we check both whether the code meets the contract and whether the contract means what we think it means.
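A small sketch of contract checking, using an assumed `binary_search` routine: the precondition assertion verifies that the caller honours the contract, and the tests verify that the code delivers what the contract promises.

```python
def binary_search(items, target):
    """Contract: `items` must be sorted ascending; returns an index or -1."""
    # Precondition: fail loudly if the caller violates the contract.
    assert all(items[i] <= items[i + 1] for i in range(len(items) - 1)), \
        "contract violated: items must be sorted ascending"
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Whether `-1` (rather than, say, an exception) is the right promise for a missing element is itself part of checking that the contract means what we think it means.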
Both Unit Testing and Integration Testing should be composable; that is, a test can be composed of subtests of subcomponents or subsystems to any depth, so that we can easily select which tests to run without involving the whole codebase. You can add Regression Testing# to enforce this practice.
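One way to get this composability, sketched with Python's standard `unittest` (the component names are made up): each subcomponent keeps its own test case class, and a suite function composes them into a subsystem-level suite that can be run on its own.

```python
import unittest

class ParserUnitTests(unittest.TestCase):
    """Tests for one hypothetical subcomponent."""
    def test_strip(self):
        self.assertEqual("  x ".strip(), "x")

class StorageUnitTests(unittest.TestCase):
    """Tests for another hypothetical subcomponent."""
    def test_roundtrip(self):
        d = {"k": 1}
        self.assertEqual(dict(d.items()), d)

def subsystem_suite():
    """Compose subcomponent suites into one selectable subsystem suite."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(ParserUnitTests))
    suite.addTests(loader.loadTestsFromTestCase(StorageUnitTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(subsystem_suite())
```

Suites can themselves contain suites, so the composition nests to any depth, matching the subsystem structure of the code.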
Run all the tests that don’t require specialised equipment or environment# (Unit Testing# and Integration Testing#) before checking in the code. We should do it often and automatically, which can be achieved by incorporating the testing facility# into the project build system, such as CMake or Makefile. If a full build is necessary, make sure all available tests run during the process.
Hunt et al. recommend that both Validation and Verification of System Requirements# and Usability Testing# be done as early as possible, and that Regression Testing# be incorporated into the build system so each build can be compared against the previous one.
Note: Make sure test facilities are easily accessible by the project team.
Since we have such extensive testing facilities, newcomers can simply look at the tests to learn how to use a particular module or function. They will know its expected inputs and outputs, and can use it accordingly. Even if they fail to understand it, the testing facilities will remind them with failing tests.
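Python's `doctest` module makes this tests-as-documentation idea literal: the expected input and output sit in the docstring where newcomers will read them, and they are executed as tests. A sketch, with a made-up `slugify` function:

```python
def slugify(title):
    """Turn a title into a URL slug.

    The examples below are documentation for newcomers *and* executable tests:

    >>> slugify("Hello World")
    'hello-world'
    >>> slugify("  TDD  Rocks ")
    'tdd-rocks'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # fails loudly if the documented examples drift
```

If the function's behaviour changes without updating the docstring, the failing doctest is exactly the reminder described above.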
The test data for these tests can be collected from the real world (usually user data coming from an existing system, a competitor’s system or a prototype) or created artificially (to satisfy the need for a large amount of data, to stress the boundary conditions, or to exhibit certain statistical properties, such as the runtime of an algorithm#). These data can later be reused in Regression Testing#.
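A sketch of artificial test data (the generator and the `clamp` routine under test are both invented for illustration): boundary cases stress the edges of a range, and a fixed random seed keeps the bulk data reproducible so Regression Testing can replay it.

```python
import random

def boundary_cases(lo, hi):
    """Artificial data stressing the boundary conditions of an integer range."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def random_cases(lo, hi, n, seed=0):
    """Bulk artificial data; the fixed seed makes regression runs reproducible."""
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for _ in range(n)]

def clamp(x, lo, hi):
    """Hypothetical routine under test."""
    return max(lo, min(hi, x))
```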
Although more code coverage can mean a better-tested codebase, full coverage, i.e. 100% code coverage, is practically impossible. Therefore, it is advised to test the states of the program, not the lines of code.
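A sketch of what state-oriented testing means, using a toy stack: instead of ticking off lines for coverage, the assertions check the state the program is left in after each sequence of operations.

```python
class Stack:
    """Toy stack used to illustrate state-based assertions."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def __len__(self):
        return len(self._items)

def check_stack_state():
    """Assert on resulting *states*, not on which lines were executed."""
    s = Stack()
    for i in range(3):
        s.push(i)
    assert len(s) == 3      # state after three pushes
    assert s.pop() == 2     # LIFO order preserved
    assert len(s) == 2      # state after one pop
    return True
```

Every line here could be "covered" by a single trivial run; it is the state assertions that actually establish the behaviour is right.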
To improve this model, the development team can hire or allocate a dedicated test saboteur. Their sole responsibility is to sabotage the testing by deliberately introducing bugs, in a separate copy of the source tree, for the tests to catch.
Note: Even with extensive testing, one cannot avoid drawing a naive assumption# about a routine, resulting in false causalities and coincidental outcomes. Hunt et al. advise: “Don’t assume it, prove it”.