Test Driven Development (TDD)

TDD is a software development practice that turns software requirements into test cases before their actual implementation. It encourages developers to write the tests alongside, or even before, the production code. This has the advantage of locating bugs earlier, when they are cheaper to fix than if they were found later. Some communities such as Smalltalk and eXtreme Programming refer to this practice as "code a little, test a little" or "continuous integration, relentless testing".
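As a minimal sketch of the test-first cycle, assuming a hypothetical `leap_year` function and using Python's built-in `unittest`:

```python
import unittest

# Step 1: write the tests first -- they fail (red) until leap_year exists.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_four_hundred_is_leap(self):
        self.assertTrue(leap_year(2000))

# Step 2: write just enough production code to turn the tests green.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Running `python -m unittest` with only the test class present gives a failing run; adding the implementation afterwards turns it green, completing one TDD cycle.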

The code is not done just because its implementation is written into the codebase; it can be said to be done only after all the tests pass. If the tests can't be written for a component, the component is not modular enough. Decouple# it further to make room for more modular testing.

Note: If a tester finds a bug, build a test for that bug. Once a human tester finds a bug, it should be the last time a human tester finds that bug.
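A sketch of turning a reported bug into a permanent test; the `average` function and its empty-list crash are hypothetical examples:

```python
import unittest

def average(xs):
    # Bug fix: an empty list used to raise ZeroDivisionError.
    if not xs:
        return 0.0
    return sum(xs) / len(xs)

class TestAverageRegression(unittest.TestCase):
    # Pins the bug a human tester found, so no human finds it again.
    def test_empty_list_does_not_crash(self):
        self.assertEqual(average([]), 0.0)

    def test_normal_case(self):
        self.assertEqual(average([1, 2, 3]), 2.0)
```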

Unit Testing# is the foundation of this whole practice. That being said, other testing such as Integration Testing#, Validation and Verification of System Requirements#, Environmental Testing#, Performance Testing#, Usability Testing# and Runtime Diagnostics# should not be underestimated. All test cases should honour their contracts: we check both whether the code meets its contract and whether the contract means what we think it means.

Both Unit Testing and Integration Testing should be composable; that is, a test can be composed of subtests of subcomponents or subsystems to any depth, so that we can easily select which tests to run without involving the whole code base. You can add in Regression Testing# to enforce this practice.
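A sketch of composing suites from subsystem tests with Python's `unittest`; the `TestParser` and `TestStorage` names are made-up placeholders for two subsystems:

```python
import unittest

class TestParser(unittest.TestCase):
    def test_tokenise(self):
        self.assertEqual("a b".split(), ["a", "b"])

class TestStorage(unittest.TestCase):
    def test_round_trip(self):
        self.assertEqual(int(str(42)), 42)

def parser_suite():
    # A subsystem's suite can be run on its own...
    return unittest.TestLoader().loadTestsFromTestCase(TestParser)

def full_suite():
    # ...or composed into a larger tree, to any depth.
    suite = unittest.TestSuite()
    suite.addTest(parser_suite())
    suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestStorage))
    return suite
```

Running `unittest.TextTestRunner().run(parser_suite())` exercises only the parser, while `full_suite()` covers both subsystems, without touching the rest of the code base.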

Run all the tests that don't require specialised equipment or environment# (Unit Testing# and Integration Testing#) before checking in the code. We should do this often and automatically, which can be achieved by incorporating the testing facility# into the project build system, such as CMake or Makefile. If a full build is necessary, make sure all available tests run during the process.

Hunt et al. recommend that both Validation and Verification of System Requirements# and Usability Testing# be done as early as possible, and that Regression Testing# be incorporated into the build system so that each build can be compared to the previous one.

Note: Make sure test facilities are easily accessible to the project team.

Since we have such extensive testing facilities, newcomers can just look at the tests to learn how to use a particular module or function. They will know the expected input and output for it, and use it accordingly. Even if they fail to understand it, the testing facilities will remind them with failing tests.

The test data for these tests can be collected from the real world (usually user data coming from an existing system, a competitor's system or a prototype) or artificially created (to satisfy the need for a large amount of data, to stress the boundary conditions, or to exhibit certain statistical properties such as the runtime of an algorithm#). These data can later be reused in Regression Testing#.
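A sketch of artificially created test data, assuming a hypothetical component that accepts integers in a known range; the fixed seed keeps the bulk data reproducible so it can be replayed during Regression Testing#:

```python
import random

def make_boundary_cases(lo, hi):
    # Artificial data stressing the boundary conditions of [lo, hi].
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def make_bulk_data(n, seed=0):
    # A large artificial data set; the fixed seed makes every run
    # generate exactly the same input, so regression runs are comparable.
    rng = random.Random(seed)
    return [rng.randint(-1000, 1000) for _ in range(n)]

cases = make_boundary_cases(0, 255) + make_bulk_data(10_000)
```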

Although more code coverage can mean a better-tested codebase, full coverage, that is, 100% code coverage, is practically impossible. Therefore, it is advised to test the state of the program, not the lines of code.
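A sketch of why program state matters more than line coverage, using a made-up `Door` state machine: a run can touch every line yet never check the state that matters.

```python
class Door:
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state != "locked":
            self.state = "open"

    def lock(self):
        if self.state == "closed":
            self.state = "locked"

# Calling open() and lock() once each can cover every line, yet never
# verify the interesting state: a locked door must stay locked when opened.
door = Door()
door.lock()
door.open()
assert door.state == "locked"  # a state-based check, not a line-based one
```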

To improve this model, the development team can hire or allocate a dedicated test saboteur, whose sole responsibility is to sabotage the testing by deliberately introducing bugs in a separate source tree for the tests to catch.

Note: Even with extensive testing, one cannot avoid drawing #naive assumptions about a routine, resulting in false causalities and coincidental outcomes. Hunt et al. advise: "Don't assume it, prove it."

Links to this page
  • Validation and Verification of System Requirements

#202206201159 recommends doing both Validation and Verification and 202206201428 as early as possible.

As the name suggests, Validation and Verification of System Requirements is to validate and verify whether the system adheres to the user's needs and the functional requirements of the system. This is quite handy in #202206201159.

  • Usability Testing

Usability Testing aims to test the usability of the software, that is, the user interface and user experience (UI/UX) while using the program, from the end-user's point of view under a real environment. As the testing is done with the user, it puts most of its concerns on human factors rather than relying on synthetic data collected elsewhere, which in some sense is quite similar to 202206201346. #202206201159 recommends doing it as early as possible.

  • Unit Testing

It is #recommended to test the subcomponents first before testing the main module, in which case we'll have a composable testing facility. However, don't test every public method if they are too simple to test, as advised by Fowler.

  • The Pragmatic Programmer

Even though assertions can add overhead to the program, turning them off when building the binary is a bad idea, since it assumes tests alone would find every bug in the codebase, which is not the case.

If there is something that surprises you, reevaluate your assumptions. Don't just "know" your code works; prove it, in this context, with this data, with these boundary conditions. Add a new test# for it, 202207091736# or put in some 202207091744#.

Interview the user who reported the bug to gather a sufficient level of detail about the dysfunction. Furthermore, this could mean that the test assets# don't cover enough of the application. Write thorough tests based on boundary conditions and realistic end-user usage.

Know why the code works. Test# it thoroughly. Don't assume the code will work as intended just because it's written the way it currently is.

  • Test Harness

Test Harness is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behaviour and outputs. It allows #test facility #automation.

  • Software Development Practices
  • Relationship Between Problem Report, Symptoms and Defects

Note: A bug report could mean that the test assets# don't cover enough of the application. Write thorough tests based on boundary conditions and realistic end-user usage#.

  • Performance Testing

Performance Testing is testing to ensure that the program meets its performance requirements under real-world conditions. It includes, but is not limited to, performance testing, stress testing and testing under load. It plays an important part in #202206201159.

  • “It Can’t Happen”

If, for example, you are stumbling across the application codebase, run it, and find something that surprises you, reevaluate your assumptions and be willing to debug it#. Don't just "know" your code works; prove it, in this context, with this data, with these boundary conditions. Add a new test# for it, Crash Program Earlier# or put in some Assertions#. You can go further by adopting Defensive Programming#, enforcing checks on preconditions, postconditions and class invariants via Design by Contract (DBC)#.

  • Environmental Testing

It is important to check the environment the software is going to reside in. Thus, testing external factors such as memory, disk space, CPU bandwidth, wall-clock time, disk bandwidth, network bandwidth, colour palette and video resolution should be done too, according to the #202206201159 principle. This is to know the expected environmental limitations that are going to be imposed on the program.

  • Documentation Guide

Note: If you are unsure about the behaviour of a function, write down your assumptions and test them.

  • Design by Contract (DBC)

Programmers should be strict about what they accept before the beginning of a function or method implementation, and promise as little as possible in return. Such a mindset helps construct a good contract for the function or method to follow. Additionally, contracts must be designed before committing to the code implementation, just like setting up the test facilities.

  • Debugging Guide

Note: Always double-check the test assets, both the new and the old tests.

  • Assertions

Assertions in programming languages such as #c, #cpp and #rust are used as a means of comparing the output of a function or a method of a class to the expected value. They can be incorporated well into the test facility, as in Rust.

#test #oop #functional-programming