Accepting Interfaces and Returning Structs

Elliot Forbes ⏰ 10 Minutes 📅 Apr 27, 2022

👋 Welcome Gophers! In this article, we are going to be covering the concept of accepting interfaces and returning structs and how this can help improve your code to make it more testable as well as maintainable.

Overview

When writing Go applications, one of the key things I like to keep in mind is “how can I make this particular function as testable as possible?”.

For more complex applications, being able to exercise all of the various code-paths within our application can be a bit of a nightmare depending on the way that we architect certain components.

In this article, I’m going to start off by demonstrating an approach that accepts pointers to structs and how this can limit our ability to test things. We’ll then look at how we can improve the code with just a few subtle changes that should enable us to do fancy things like unit testing our code with mocks and fakes.

Passing Pointers

Let’s first take a look at a standard example of a package that doesn’t follow the accept interfaces, return structs mantra.

In this example, we’ll be defining a user package in our application that will require some form of database in order to fetch users and then, after a little bit of business logic, it will then be able to persist any changes made.

package user

import (
    "context"

    "github.com/TutorialEdge/path/to/db"
    "github.com/TutorialEdge/path/to/models"
)
type Service struct {
    Store *db.Database
}

// New - our constructor function that takes in a pointer to a database.
func New(db *db.Database) *Service {
    return &Service{
        Store: db,
    }
}

// UpdateUserPreferences - fetches a user, applies some business logic and persists the changes
func (s *Service) UpdateUserPreferences(ctx context.Context, userID string) (models.User, error) {
    u, err := s.Store.GetUser(ctx, userID)
    if err != nil {
        // handle the error
        return models.User{}, err
    }

    // potentially have some business logic in here that defines what
    // we are allowed to do to a User object

    // persist the changes via our database
    u, err = s.Store.UpdateUser(ctx, u)
    if err != nil {
        // failed to persist the changes, we can then decide
        // how we want to handle this.
        return models.User{}, err
    }

    return u, nil
}

In the constructor func we have defined, you’ll notice that we take in a pointer to a db.Database struct, which we hope will implement the methods that our user package needs in order to function.

We’ll also need a shared models package, or something to that effect, that both the user package and the db package can import in order to gain access to the User struct definition. This is a somewhat necessary evil with this approach, as both packages need this definition; if the db package instead tried to import the user package, we would be presented with a cyclic dependency error when compiling our code:

package models

type User struct {
    ID string
    Email string
}

Testing Limitations

Let’s now consider how we would test the UpdateUserPreferences method that we have defined.

Well, to begin with, we would need to create a new *db.Database struct. This may be fairly easy if the db package defines a constructor func similar to the one we have in our user package above.

However, let’s think about what would happen if that constructor function required a running Postgres database in order to work.

If we wanted to test our UpdateUserPreferences method we would then have to ensure that a locally running Postgres instance is available and that we have set up all the required environment variables we need in order to run our tests.
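
To make that concrete, here is a hypothetical sketch of what the db package’s constructor might look like in this scenario. The DSN parameter, the use of database/sql and the lib/pq driver are all assumptions for illustration, not code from the original article:

package db

import (
    "database/sql"

    // the Postgres driver is an assumption for this example
    _ "github.com/lib/pq"
)

type Database struct {
    conn *sql.DB
}

// New - opens a real connection to Postgres; without a reachable
// instance (and the right connection string), this will fail.
func New(dsn string) (*Database, error) {
    conn, err := sql.Open("postgres", dsn)
    if err != nil {
        return nil, err
    }
    // Ping forces an actual connection attempt
    if err := conn.Ping(); err != nil {
        return nil, err
    }
    return &Database{conn: conn}, nil
}

Any test that needs a *db.Database now needs that Ping to succeed, which is exactly the coupling that gets in our way.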

You may be asking, “that doesn’t sound too bad - I can test that both packages work together with the one test”. That is certainly true, but let’s consider what our approach would look like if we had to synthesize an error response from our db package.

This approach increases the complexity of your tests and this ultimately impacts your ability to thoroughly test all of the important code-paths within your user package.

Defining Interfaces

Let’s take a look at how we can improve on the above code snippet and make it loosely coupled and easier to test.

First, we’ll define the interface that any dependency must implement, otherwise our application will not compile. Whilst we are at it, we can also move our User struct definition into this package:

type Store interface {
    GetUser(ctx context.Context, userID string) (User, error)
    UpdateUser(ctx context.Context, u User) (User, error)
}

type User struct {
    ID string
    Email string
}

Next, we’ll need to update the constructor function for our package:

func New(store Store) *Service {
    return &Service{
        Store: store,
    }
}

The final thing we’ll need to do (if your editor hasn’t already done this for you) is to remove the import of the external db package from the top of our file.

Et Voila! We have now modified our package in an incredibly subtle way, but this approach unlocks our ability to do things like unit-test this package and ensure that, no matter what our store returns, we handle it appropriately.

Let’s see this put together:

package user

import "context"

type Store interface {
    GetUser(ctx context.Context, userID string) (User, error)
    UpdateUser(ctx context.Context, u User) (User, error)
}

type User struct {
    ID string
    Email string
}

type Service struct {
    Store Store
}

func New(store Store) *Service {
    return &Service{
        Store: store,
    }
}

// UpdateUserPreferences - fetches a user, applies some business logic and persists the changes
func (s *Service) UpdateUserPreferences(ctx context.Context, userID string) (User, error) {
    u, err := s.Store.GetUser(ctx, userID)
    if err != nil {
        // handle the error
        return User{}, err
    }

    // potentially have some business logic in here that defines what
    // we are allowed to do to a User object

    // persist the changes via the Store interface
    u, err = s.Store.UpdateUser(ctx, u)
    if err != nil {
        // failed to persist the changes, we can then decide
        // how we want to handle this.
        return User{}, err
    }

    return u, nil
}

Benefit - Loose Coupling

With this new approach, our user package no longer imports or necessarily cares about the db package used in the first example. All this code is focused on now is the business logic surrounding being able to UpdateUserPreferences.

The db package will still need access to the user package in order to understand the shape of the data it needs to return, however we’ve removed the need for that pesky shared models package that we previously relied on.
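
To illustrate what this looks like from the other side, here is a rough sketch of how the db package might now satisfy our Store interface. The import path for the user package is an assumption, and the method bodies are placeholders:

package db

import (
    "context"

    "github.com/TutorialEdge/path/to/user"
)

type Database struct {
    // connection details omitted for brevity
}

// GetUser - satisfies user.Store by returning users in the shape
// the user package has defined.
func (d *Database) GetUser(ctx context.Context, userID string) (user.User, error) {
    // a real implementation would query the database here
    return user.User{}, nil
}

// UpdateUser - satisfies the second method on user.Store.
func (d *Database) UpdateUser(ctx context.Context, u user.User) (user.User, error) {
    // a real implementation would persist the changes here
    return u, nil
}

Note how the dependency direction has flipped: db now imports user, rather than both importing a shared models package.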

As an aside - how nice is it having the User struct definition local to the code that is actually using it? Brilliant right?

Mocking and Faking

Let’s say we didn’t want to have to spin up a database in order to exercise the code within our user package. With this interface-based approach, that becomes possible through the use of mocks and fakes.

We could use tools such as golang/mock in order to generate mocked implementations of our Store interface and then use these mocks within our test.

func TestUpdateUserPreferences(t *testing.T) {
    // Note - this sketch assumes we have generated a mock of our Store interface
    // with golang/mock (mockgen) into a local mocks package.
    ctrl := gomock.NewController(t)
    defer ctrl.Finish()

    mockedStore := mocks.NewMockStore(ctrl)

    // we can use the created mock to instantiate our userSvc and it will compile, as the
    // mockedStore implements all the methods defined within our `Store` interface.
    userSvc := New(mockedStore)

    t.Run("happy path - test user preferences can be updated", func(t *testing.T) {
        expectedUser := User{ID: "1234", Email: "new@email.com"}
        mockedStore.EXPECT().
            GetUser(gomock.Any(), "1234").
            Return(expectedUser, nil)
        mockedStore.EXPECT().
            UpdateUser(gomock.Any(), expectedUser).
            Return(expectedUser, nil)

        // we can then call our method and run assertions that the business logic
        // defined within this method is working as we expect it to
        u, err := userSvc.UpdateUserPreferences(context.Background(), "1234")
        assert.NoError(t, err)
        assert.Equal(t, "new@email.com", u.Email)
    })

    t.Run("sad path - errors can be handled properly", func(t *testing.T) {
        // here we synthesize a failure from our store without needing a real database
        mockedStore.EXPECT().
            GetUser(gomock.Any(), "1234").
            Return(User{}, errors.New("something bad happened"))

        _, err := userSvc.UpdateUserPreferences(context.Background(), "1234")
        assert.Error(t, err)
    })
}

By setting up these expectations, we can very quickly exercise both the happy paths and sad paths within our UpdateUserPreferences method and we do not need a Postgres instance running at all for this.

The benefit of this approach becomes more and more apparent as the underlying code you are trying to test becomes more and more complex.
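
If you would rather avoid code generation altogether, a hand-rolled fake works just as well. Here is a minimal sketch of one; the fakeStore name and its fields are purely illustrative:

// fakeStore is a hand-rolled fake that satisfies the Store interface.
// We can program its behaviour per test without any generated code.
type fakeStore struct {
    user User
    err  error
}

func (f *fakeStore) GetUser(ctx context.Context, userID string) (User, error) {
    return f.user, f.err
}

func (f *fakeStore) UpdateUser(ctx context.Context, u User) (User, error) {
    return u, f.err
}

func TestUpdateUserPreferencesWithFake(t *testing.T) {
    svc := New(&fakeStore{user: User{ID: "1234", Email: "new@email.com"}})

    u, err := svc.UpdateUserPreferences(context.Background(), "1234")
    assert.NoError(t, err)
    assert.Equal(t, "new@email.com", u.Email)
}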

Benefit - Migrating Dependencies

Another benefit of this approach is the ability to easily define and swap out concrete implementations of our Store interface.

Let’s imagine we get a requirement from our company’s product team that we need to move to a different type of backing store for any arbitrary reason.

This approach allows us to implement a second package that handles all the implementation details when it comes to talking to this new database type. We don’t have to update our user package, as it frankly doesn’t care about these details. All it really cares about is whether the new dependency implements the Store interface it has defined.
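
As a rough sketch, a hypothetical dynamo package could satisfy the same Store interface, and the only change would be at the point where we wire everything up. The dynamo package name, its constructor and the import paths here are assumptions for illustration:

package main

import (
    "github.com/TutorialEdge/path/to/dynamo"
    "github.com/TutorialEdge/path/to/user"
)

func main() {
    // previously: store := db.New(...)
    // the new store also satisfies user.Store, so the user package is untouched
    store := dynamo.New()

    userSvc := user.New(store)
    _ = userSvc // wire up the rest of the application here
}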

Benefit - Dependency Agnostic

In the above code snippets, the example demonstrated how we could use this approach for a database dependency. It is worth noting that you can use this same approach throughout the vast majority of your Go app development.

For example, if I needed to talk to a downstream API, I could follow this same approach. I could define a client package that handles all the implementation details needed to talk to this external API, and in my user service I could define another interface, such as APIClient, that specifies all the methods this client package would need to implement.
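
As a quick sketch of what that could look like within the user package - the method name and the Preferences type here are purely illustrative, not part of the original example:

// APIClient - defines what our user package needs from any downstream API
// client; the concrete client package just has to satisfy this interface.
type APIClient interface {
    FetchPreferences(ctx context.Context, userID string) (Preferences, error)
}

// Preferences - illustrative shape of the data coming back from the API.
type Preferences struct {
    MarketingOptIn bool
}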

Using the above approach, you can then employ either mocks or fakes in your unit tests to exercise your user package without actually hitting these downstream APIs, which could prevent you from burning API credits or hitting rate limits.

Caveats

It should be noted that the abstractions we’ve covered in the above code snippets don’t necessarily come “for free”.

Whilst the abstractions we have covered in this article are incredibly useful and allow us to test our code, if you need to squeeze every last ounce of performance out of your code, you may find that you’ll need to cut out these abstractions in certain places.

That said, even when writing applications that work with things like payment processing, I’ve never found the performance benefits of removing these abstractions to be worth the constraints their absence places on my testing approach.

Conclusion

So, in this article, we have covered why the mantra of accepting interfaces and returning structs can be incredibly useful for Go developers wanting to write testable and highly maintainable services.

It should be noted, as with all the articles on my site, that these concepts are personal preferences that I try to follow when and where I can. There will always be exceptional circumstances and edge cases that make this approach impossible.

Note > I follow this same mantra in my new course Building Production Ready Services in Go - 2nd Edition - if you would like to see this in a full Go app, then feel free to subscribe and check out the course!