Test coverage can be a deceptive metric to base decisions on. Time and time again I have seen large organizations set arbitrary code coverage mandates that don’t necessarily provide them with the value they are after.
Telling every developer in your organization - “You need a minimum of 90% code coverage on all applications” - is a dangerous game that can waste developer time and actually cost you money in the long run.
If you create tests with the primary goal of reaching an arbitrary code coverage metric, then you need to have a frank discussion with your management about priorities.
Instead, you should focus on building tests that deliver value to your organization. If you have a small service with a single responsibility that sits at the core of your business, you want the core business logic of that service covered by genuinely valuable tests that ensure its validity and correctness.
Wasting time writing tests for simple getters and setters within your code, on the other hand, is pointless and only adds to the amount of code your team has to maintain going forward.
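As a purely hypothetical illustration (none of the types or functions below come from the project we work on later in this chapter), compare a test that merely restates a getter with one that pins down real business behavior:

// pricing.go (hypothetical example)
package pricing

// Item is a minimal example type.
type Item struct {
    pricePence int
}

// Price is a simple getter - testing it inflates coverage but adds little value.
func (i Item) Price() int {
    return i.pricePence
}

// TotalWithVAT applies a 20% VAT rate to a net price in pence - the kind
// of core business logic that deserves a proper test.
func TotalWithVAT(netPence int) int {
    return netPence + netPence*20/100
}

// pricing_test.go (hypothetical example)
package pricing

import "testing"

// Low value: this test simply restates the getter.
func TestPrice(t *testing.T) {
    if (Item{pricePence: 999}).Price() != 999 {
        t.Fail()
    }
}

// Higher value: this test would catch a regression in the VAT calculation.
func TestTotalWithVAT(t *testing.T) {
    if got := TotalWithVAT(1000); got != 1200 {
        t.Errorf("expected 1200, got %d", got)
    }
}

Both tests bump the coverage number by roughly the same amount, but only the second one protects something your business actually cares about.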
Why Should You Care About Code Coverage?
Code coverage is a valuable metric, to a degree. It can tell the responsible developer how many of the code paths within a critical function they have covered. If they see paths within these critical functions that aren’t covered by tests, they can then patch up the gaps.
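To make that concrete with another hypothetical example (the Withdraw function and its test below are invented for illustration, not part of the project we build shortly), imagine a critical function with three distinct code paths and a test suite that only exercises two of them; the coverage report will flag the third path as a gap worth patching:

// account.go (hypothetical example)
package account

import "errors"

// Withdraw returns the new balance after withdrawing amount.
// It contains three distinct code paths.
func Withdraw(balance, amount int) (int, error) {
    if amount <= 0 {
        return balance, errors.New("amount must be positive")
    }
    if amount > balance {
        return balance, errors.New("insufficient funds")
    }
    return balance - amount, nil
}

// account_test.go (hypothetical example)
package account

import "testing"

// TestWithdraw exercises the happy path and the insufficient funds path,
// but not the "amount must be positive" path - a coverage report would
// highlight that first branch as untested.
func TestWithdraw(t *testing.T) {
    if got, err := Withdraw(100, 40); err != nil || got != 60 {
        t.Errorf("expected 60, got %d (err: %v)", got, err)
    }
    if _, err := Withdraw(10, 40); err == nil {
        t.Error("expected an insufficient funds error")
    }
}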
Calculating Code Coverage in Go
Now that we’ve covered one of the important anti-patterns of code coverage, let’s dive into our codebase and see how we can calculate our test coverage.
Within the terminal of your project, try running the standard go test command with the ./... notation, appending the --cover flag like so:
$ go test ./... --cover
ok github.com/TutorialEdge/go-testing-bible/calculator 0.167s coverage: 100.0% of statements
As you can see from the output, we’ve managed to cover 100% of the statements in our project - a figure that is less impressive than it sounds, given how tiny the project is.
Let’s now add a couple of new functions to our code to see what happens to that figure:
package calculator

import "math"

// CalculateIsArmstrong takes in a 3 digit number 'n'
// and returns true if it is an Armstrong number.
// Armstrong number example: 371 == 3^3 + 7^3 + 1^3
func CalculateIsArmstrong(n int) bool {
    a := n / 100
    b := n % 100 / 10
    c := n % 10
    return n == int(math.Pow(float64(a), 3)+math.Pow(float64(b), 3)+math.Pow(float64(c), 3))
}

// RandomFunction is a deliberately contrived function that we are
// not going to write tests for, purely so we can see how the
// coverage figure changes.
func RandomFunction(n int) bool {
    if n > 10 {
        return true
    } else {
        return false
    }
}
Running this again will show exactly how much of our code is now covered by tests:
$ go test ./... --cover
ok github.com/TutorialEdge/go-testing-bible/calculator 0.287s coverage: 57.1% of statements
Now, this figure on its own doesn’t tell us what we’ve missed. In more complex applications, having some visual clues as to which lines and methods you have missed is incredibly important.
Thankfully, we can generate a coverage report which can then be displayed in our browser to see exactly what lines of code we have missed:
$ go test ./... -coverprofile=coverage.out
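As an optional aside, go test also accepts a -covermode flag. Generating the profile in count mode records how many times each statement ran, rather than just whether it ran at all, and the HTML report we are about to open will then shade more frequently executed lines more intensely:

$ go test ./... -covermode=count -coverprofile=coverage.out

The default mode, set, is what our plain --cover runs above have been using.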
With this coverage report now generated, we can open it up in the browser by running the following:
$ go tool cover -html=coverage.out
This should now display the code that we have covered in green and the code that we haven’t covered in red.
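If we wanted to patch up one of those red sections, we could add a test for CalculateIsArmstrong. The following is just a sketch of what that test might look like - the test values are my own, and RandomFunction would stay red until it gets a test of its own:

// calculator_test.go (sketch)
package calculator

import "testing"

func TestCalculateIsArmstrong(t *testing.T) {
    tests := []struct {
        name     string
        input    int
        expected bool
    }{
        {"371 is an Armstrong number", 371, true},
        {"153 is an Armstrong number", 153, true},
        {"100 is not an Armstrong number", 100, false},
    }

    for _, test := range tests {
        t.Run(test.name, func(t *testing.T) {
            if got := CalculateIsArmstrong(test.input); got != test.expected {
                t.Errorf("CalculateIsArmstrong(%d) = %v, expected %v", test.input, got, test.expected)
            }
        })
    }
}

Re-running go test ./... --cover, or regenerating the HTML report, should then show the statements inside CalculateIsArmstrong turning green - coverage doing its job by guiding us towards the gaps in our critical code, rather than acting as a target in its own right.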
Conclusion
So, in this chapter, we have “covered” code coverage and how it can be genuinely useful to developers when it comes to testing critical sections of their code. We also looked at some of the anti-patterns that large corporations have adopted in the past, such as mandating that code coverage must exceed an arbitrary threshold.
In the next topic, we are going to look at how you can leverage the testdata directory as a means of storing the config and files that your tests will consume as part of their runs.