No test assertion
Tests written after the fact often fall foul of a common mistake: they were added ‘just for the sake of having them’. These kinds of tests interact with the system but never actually verify whether that interaction was successful or produced the expected result, given the setup. Usually they have no ‘assert’ phase at all, or they assert on some meaningless result (such as an undetermined response from the system).
A test without an assertion has no reason to exist.
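A minimal sketch of the difference, in Python (the `Cart` class and its methods are hypothetical names invented for illustration, not taken from any real system):

```python
class Cart:
    """Minimal stand-in for a hypothetical system under test."""
    def __init__(self):
        self.total = 0.0

    def add_item(self, name, price):
        self.total += price

    def apply_discount(self, fraction):
        self.total *= (1 - fraction)

# A 'test' that exercises the system but verifies nothing:
# it passes even if apply_discount is completely broken.
def test_discount():
    cart = Cart()
    cart.add_item("book", price=20)
    cart.apply_discount(0.25)

# The same scenario with a real assert phase:
def test_discount_reduces_total():
    cart = Cart()
    cart.add_item("book", price=20)
    cart.apply_discount(0.25)
    assert cart.total == 15.0
```

The first test goes green no matter what `apply_discount` does; only the second one can ever fail for the right reason.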
These kinds of tests tell the developer that the work is not done: the system is hiding and exposing the wrong things. The most common argument — ‘but only the tests use it’ — is mostly made by people who write their tests after the fact. Tests added to such already-implemented systems usually have no assertions, do not assert enough, or assert the wrong things. All of this can be avoided if test scenarios and test assertions are considered before the functionality is implemented.
No test input
Many times, the input of the test is so minimal that it can’t possibly verify the written functionality correctly. Avoid hard-coded values at all costs. A test built around a hard-coded value is weaker and has less chance of catching bugs. Remember that a test suite runs many times: if you stick to a single static example, you miss the opportunity to exercise many examples. Randomize all the inputs you do not care about, and only show and set the necessary inputs in your test body. Many bugs and faulty implementations could be avoided just by using randomized input. Use property-based tests in combination with some example-based tests for maximum coverage.
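A sketch of the idea using only Python’s standard library (dedicated property-based testing libraries exist, but the principle is the same; `register_user` and `any_name` are hypothetical names for illustration):

```python
import random
import string

def register_user(name, age):
    """Hypothetical system under test: rejects minors."""
    if age < 18:
        raise ValueError("too young")
    return {"name": name, "age": age}

def any_name(length=10):
    """Generate a throw-away value for an input the test does not care about."""
    return "".join(random.choices(string.ascii_letters, k=length))

def test_register_accepts_adults():
    # Only the age matters in this scenario, so it is the only input
    # set explicitly; the name is deliberately random so the test
    # cannot silently come to depend on a magic value like "John".
    user = register_user(any_name(), age=random.randint(18, 120))
    assert user["age"] >= 18
```

Every run exercises a different name and a different adult age, instead of the same hard-coded pair forever.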
If you use a central place to generate valid/invalid/dirty representations of your input, you can reuse it everywhere (in both unit and integration tests) and have a greater impact on what you are testing. This also immediately forces you to think about test assertions, as you can no longer rely on anything you considered ‘always there’. It fleshes out the ‘real’ purpose of the functionality and shows you which tests are missing: you only have to be willing to listen.
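Such a central place could be as simple as a fixture class with one factory per representation; a sketch, assuming a hypothetical order model (`OrderFixture` and its fields are invented names, not from the article):

```python
import random
import string

def _random_text(length=8):
    return "".join(random.choices(string.ascii_letters, k=length))

class OrderFixture:
    """Hypothetical central generator of test inputs, shared by
    unit and integration tests alike."""

    @staticmethod
    def valid():
        return {
            "id": random.randint(1, 10_000),
            "item": _random_text(),
            "quantity": random.randint(1, 5),
        }

    @staticmethod
    def invalid():
        order = OrderFixture.valid()
        order["quantity"] = 0  # violates the 'at least one item' rule
        return order

    @staticmethod
    def dirty():
        order = OrderFixture.valid()
        order["item"] = f"  {order['item']}\t"  # untrimmed whitespace
        return order
```

A test then states only its intent — `OrderFixture.invalid()` — and every detail it does not mention is randomized by construction.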
No test name
I have seen tests with names like ‘Test’. For some reason, integration tests suffer more from this than unit tests, and there is a simple explanation: when integration tests are written at all, there is often only one. This differs from unit tests, where you are forced to come up with different names (to avoid clashes with the others). That single integration test is usually the leftover of a POC-like experiment that was never cleaned up, so the name remains something as basic as ‘Test’.
Unit tests, however, also often have obscure and incomplete names, which makes it very hard to see at first glance what went wrong. I personally use the ‘three-part’ system for naming tests, where the test name includes what functionality you are testing, what kind of input you are using, and what output you expect. But be careful of becoming too specific: for example, don’t encode the exact exception type you expect. If the system changes and the exception with it, the test name becomes deprecated without you knowing, resulting in an obscure test failure. A too-specific, deprecated test name is just as bad as no name at all.
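The three-part scheme can be sketched like this in Python (`apply_discount` is a hypothetical function invented for the example):

```python
def apply_discount(total, fraction):
    """Hypothetical function under test."""
    if not 0 <= fraction <= 1:
        raise ValueError("fraction out of range")
    return total * (1 - fraction)

# Three parts: functionality + kind of input + expected outcome.
def test_apply_discount_with_valid_fraction_returns_reduced_total():
    assert apply_discount(100, 0.25) == 75.0

def test_apply_discount_with_out_of_range_fraction_fails():
    # Deliberately 'fails', not 'raises_value_error': naming the exact
    # exception type would silently deprecate this name if the
    # implementation ever switched to a different exception.
    try:
        apply_discount(100, 2.0)
        assert False, "expected a failure"
    except ValueError:
        pass
```

When one of these fails, the name alone tells you which functionality broke, under which input, against which expectation.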
Conclusion
These are just the most basic and most common mistakes in testing. There are a lot more out there in the wild: no test logging, the overuse of test fixtures, copy-paste testing, testing third-party code. These mistakes could easily be avoided if more tests were written — and actually used — as that exposes the problems first-hand.
Thanks for reading,
Stijn