Recently I set up a continuous integration system at work and added code coverage to the build process to keep track of which sections of code are missing tests, or are light on tests. Creating it got me wondering: should code coverage be run on all tests, or just unit tests?
I am inexperienced in the matter, and unfortunately I do not work with anybody who possesses experience with continuous integration systems.
To figure out best practices for code coverage, I began searching around. Unfortunately, I found only debates about the usefulness of code coverage and whether it provides any value at all. I am well aware of the pitfalls, and that one should not put much weight on this metric, but I still believe it is valuable in some cases.
After thinking about the use of code coverage, I came to the conclusion that the most appropriate tests to pair it with are unit tests only. The reason is that integration tests, or any other tests less focused than a unit test, will produce tremendous coverage, yet much of the code that was 'covered' was never explicitly tested. Your coverage number then represents not code that is well tested, but merely code that happened to run during the tests, which for an integration test is a very large portion of the codebase.
The way I think of it is that integration tests exercise a large amount of code but verify only that the pieces work together. They do not test edge cases, stress your logic, or probe critical sections; in fact, they likely exercise your logic under only a small portion of the possible scenarios.
Yet all of that code will show up as covered by your tests.
By using code coverage exclusively with my unit tests, the coverage reports have done a good job of showing me the sections of code that need tests (or that are now obsolete and unused).
This still isn’t a silver-bullet approach, though. As many of you clever folks probably noticed, I can have great coverage and poor tests simply by writing a unit test that exercises only one scenario (similar to what an integration test would do).
Yes, this is true, and it is where the developer needs to be disciplined and take responsibility for writing tests for every scenario they can think of at the time of writing.
Those scenarios will change.
When they do, be sure to add or modify tests to meet the new criteria. This is critical to building a high-quality, comprehensive test suite.
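To make the point concrete, here is a minimal sketch (a hypothetical `apply_discount` function of my own invention, not from any particular codebase) of how a single happy-path test can execute 100% of the lines while saying nothing about the scenarios that actually matter:

```python
def apply_discount(price, rate):
    """Return price reduced by rate (0.0-1.0), rounded to cents."""
    discounted = price * (1 - rate)
    return round(discounted, 2)

# One test executes every line, so coverage reports 100%:
def test_happy_path():
    assert apply_discount(100.0, 0.2) == 80.0

# ...but the coverage number is identical with or without the
# scenario tests that actually probe the behavior:
def test_zero_rate():
    assert apply_discount(100.0, 0.0) == 100.0

def test_full_rate():
    assert apply_discount(100.0, 1.0) == 0.0

def test_rounding():
    assert apply_discount(10.0, 1 / 3) == 6.67
```

All four tests together give the same coverage percentage as the first one alone; only the discipline of enumerating scenarios makes the suite meaningful.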
I am not advocating for 100% coverage; that is unreasonable and frankly a waste of time. I believe in reaching for 60-80% coverage (depending on the code base). However, as I just outlined, the coverage percentage alone is only a good metric if you run coverage exclusively on highly focused tests, and are disciplined enough to cover all of the required scenarios for each section of code.
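As one way to wire this up (assuming a Python project measured with coverage.py; the paths, package name, and threshold here are my own illustrative choices, not a prescription), the run can be scoped to the unit-test directory and the build failed below a chosen floor:

```ini
# .coveragerc -- example settings; adjust paths to your project
[run]
source = myapp                        # measure application code, not test code
command_line = -m pytest tests/unit   # hypothetical unit-test-only directory

[report]
fail_under = 70        # fail the build if total coverage drops below 70%
show_missing = True    # list uncovered line numbers per file
```

With something like this in place, integration tests can still run in CI without inflating the coverage number that gates the build.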
So if your organization uses code coverage as a metric to gauge how comprehensive your test suite is, be sure you are collecting it properly; otherwise it will falsely report excellent coverage.
Go forth and test!