Three Micro Services Coupling Anti Patterns

Six months ago I joined my first Micro Services team. I was surprised that we had no set-in-stone rule banning inter-service calls. Indeed, one of the fundamental ideas of Micro Services is that each service should be decoupled from the rest of the world so that it can be changed more easily (as long as it still fulfils its Consumer Driven Contracts).

Why did we not follow the rule? Why did we insist on suffering agonising pain? Once the project finished, I had time to reflect on three “anti patterns” where the temptation to make calls between services is great. But fret not: I’ll show you a path to salvation.

The Horizontal Service

The first Use Case is when a Micro Service provides a capability that other Micro Services need. In our case we used an AuthService to identify a user and associate her with a session through an authorisation token. Some services in turn extracted the token from the HTTP request header and interrogated the AuthService as to the token’s existence and validity.

Because the AuthService.CheckToken endpoint interface was simple and almost never changed, the issue of coupling a few services to a Horizontal Service never hit us in production. During development, however, stories around authentication and authorisation proved painful, partly because you had, at the very minimum, to touch the web client, the AuthService, and at least one other Micro Service consuming the AuthService.CheckToken endpoint.

If you are in this situation, make sure you have some conversations on the use of native platform support (like DotNetOpenAuth) to bring this capability directly into your services. Indeed, if a horizontal feature that most services need (e.g. Logging or Model Validation) is supported natively by your toolchain, why roll out a Micro Service that will by nature have high afferent coupling?
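To make the idea concrete, here is a minimal sketch of validating an authorisation token in-process rather than calling AuthService.CheckToken over the wire. All names and the signed-token format are illustrative assumptions, not how our AuthService actually worked:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical in-process token validation: each service ships this code
// (or gets it from its platform) instead of calling AuthService remotely.
public static class LocalTokenValidator
{
    // Assumed token format: "<userId>:<hex HMAC-SHA256 of userId>"
    public static string Issue(string userId, byte[] sharedSecret)
    {
        return userId + ":" + Signature(userId, sharedSecret);
    }

    public static bool IsValid(string token, byte[] sharedSecret)
    {
        var parts = (token ?? "").Split(':');
        if (parts.Length != 2) return false;
        // Recompute the signature locally: no network call on the request path
        return Signature(parts[0], sharedSecret) == parts[1];
    }

    private static string Signature(string userId, byte[] sharedSecret)
    {
        using (var hmac = new HMACSHA256(sharedSecret))
        {
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(userId));
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }
}
```

Each service validates tokens against the shared secret it is deployed with, and the Horizontal Service disappears from the request path. You do, of course, still need a story for secret distribution and token revocation.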

The Aggregator Service

The second Use Case is when you need some data aggregated from different bounded contexts. An example could be a page where the end-user is presented with general customer data (e.g. name, login) alongside some more specific accounting data (e.g. billing address, last bill amount).

The CQRS pattern proposes an approach where ReadModels are built and customised to enable this scenario. But CQRS is a relatively advanced technique, and building read models at a non-prohibitive cost requires some tooling such as an EventStore. Instead, the average developer could consider exposing a new endpoint, an Aggregator Service, that reaches across domains to create a bespoke view of the data.

When I first faced this scenario, I opted for having the web client fabricate the aggregated view by calling several Micro Services, rather than implementing an Aggregator Service endpoint. I really like this approach for several reasons. First, the web client is by its very nature a data aggregator and orchestrator: it knows what data it needs and how to find it. This is what people expect from a web client in a Micro Services world, and it should (and will) be tested accordingly. Second, the decision to make a blocking or non-blocking call is brought closer to the end-user, where there is a better understanding of how much the User Experience will be impacted. In comparison, an Aggregator Service endpoint would have to guess how consumers intend to call and use it: is it ok to lazily or partially load the data?

Of course the drawback of this approach is that it makes the client more bloated and more chatty. This can usually be addressed with good design and tests on the client, and good caching and scaling practices so as to reduce your services’ response times.
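Here is a sketch of the client-side aggregation, written in C# for consistency with the rest of this post even though a real web client would do this in JavaScript. The two fetch functions stand in for hypothetical customer and billing endpoints:

```csharp
using System;
using System.Threading.Tasks;

public class CustomerSummary
{
    public string Customer;
    public string LastBill;
}

// The client is the aggregator: it fires both calls concurrently, merges the
// results, and decides for itself what blocks the page and what does not.
public class CustomerPageLoader
{
    private readonly Func<Task<string>> fetchCustomer; // e.g. GET /customers/{id}
    private readonly Func<Task<string>> fetchLastBill; // e.g. GET /billing/{id}/last

    public CustomerPageLoader(Func<Task<string>> fetchCustomer,
                              Func<Task<string>> fetchLastBill)
    {
        this.fetchCustomer = fetchCustomer;
        this.fetchLastBill = fetchLastBill;
    }

    public async Task<CustomerSummary> LoadAsync()
    {
        // Both requests are in flight at the same time (non-blocking aggregation)
        var customerTask = fetchCustomer();
        var lastBillTask = fetchLastBill();
        await Task.WhenAll(customerTask, lastBillTask);
        return new CustomerSummary
        {
            Customer = customerTask.Result,
            LastBill = lastBillTask.Result
        };
    }
}
```

Because the aggregation lives in the client, degrading gracefully when one of the services is slow (a spinner, a placeholder, a retry) is a local UI decision rather than a guess baked into a server-side endpoint.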

The Workflow Service

The last example is where the downstream effect of calling a Micro Service is the execution of a complex workflow. For instance, when a customer registers, we need to create his user account, associate a session, update the financial backend, and send a welcome email. So, four different actions. Some are asynchronous (financial backend and email), and some synchronous (account and session). So really, what choice do we have but to implement the workflow in some sort of CustomerService?

Similarly, we had a ModerationService that aimed at post-moderating illicit content. For a moderation request, we sometimes had to obfuscate the customer account, delete the profile bio, reset the avatar, and remove the profile gallery images. Here again the ModerationService had to implement the workflow and decide whether to make these calls synchronously or asynchronously.

An execution stack within a Micro Service that mixes synchronous and asynchronous calls to other services is a recipe for some fun games further down the line. The intent is very different between a blocking call that is core to a process, and a fire-and-forget call, which is more of a purposeful side-effect. Indeed, there are two distinct challenges here:

  1. How to implement a Use Case with two sequential and blocking service calls?
  2. How to implement a Use Case with two non-sequential and non-blocking service calls?

My advice would be to break the Workflow Service into two parts:

  1. For the synchronous part, ask yourself the following two questions: Can it be done client-side? Should I merge some services together? Indeed, if two steps of a workflow are so crucial that one cannot happen without the other, then either they belong to the same domain, or an “orchestrating software component” (aka the client) should ensure all steps are successful.
  2. Enable loosely coupled asynchronous communications in your Micro Service infrastructure with a messaging middleware, which can be an MQ Server, an Atom Feed, your own JMS bus, or a bespoke pub/sub message bus. Then, the asynchronous service calls can be replaced with posting to a queue or topic that the downstream services subscribe to.
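The second point can be sketched with a toy in-process pub/sub bus. A real system would use one of the messaging middlewares listed above; the CustomerRegistered event and the two subscribers are made up for illustration:

```csharp
using System;
using System.Collections.Generic;

// Toy message bus standing in for a real messaging middleware
public class MessageBus
{
    private readonly Dictionary<Type, List<Action<object>>> subscribers =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<T>(Action<T> handler)
    {
        if (!subscribers.ContainsKey(typeof(T)))
            subscribers[typeof(T)] = new List<Action<object>>();
        subscribers[typeof(T)].Add(m => handler((T)m));
    }

    public void Publish<T>(T message)
    {
        List<Action<object>> handlers;
        if (subscribers.TryGetValue(typeof(T), out handlers))
            foreach (var handler in handlers) handler(message);
    }
}

// Hypothetical event published when the synchronous part of registration is done
public class CustomerRegistered
{
    public string Login;
}
```

The registration workflow now publishes a single CustomerRegistered event; the financial backend and the email service each subscribe to it and react on their own schedule, so the CustomerService never calls them directly.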

Now that you have met the Horizontal, the Aggregator, and the Workflow Services, make sure to avoid them in your next Micro Services project.

Using the Proxy Pattern for Unit Testing

Understanding an Object Oriented Programming design pattern requires a clear and specific use case to which you would apply it. Here is one for the Proxy Pattern.

Here is what the Proxy Pattern, as described in the Gang of Four book, does:

Allow for object level access control by acting as a pass through entity or a placeholder object.

Although it is one of the simplest of all patterns (along with the Singleton Pattern, perhaps), this description remains nebulous. The simple drawing below attempts to provide a clearer picture. As in real life, the proxy is a middle man, here between an API and some client code.

[Diagram: the client code calls the proxy, which passes the calls through to the API]

The question is: if the proxy does nothing but delegate to the API (unlike the Adapter Pattern or the Facade Pattern), what do I need a proxy for? Should I not just call the API directly?

Let’s look at the case where you have no control over the API, for instance when you use a third-party library, or even an internal library whose source code you do not have access to. A recent example I came across is the use of the C# UrlHelper API in a .NET MVC project. The Action() method below returns a URI as a string, given the action name, the controller name, and the query string parameters as a hash:

string Action(string action, string controller, object routeValues)

Therefore, a call to:

urlHelper.Action("View", "Products", new {store = "BrisbaneCity"})

will return:

http://myapp.mydomain.com/products/view?store=BrisbaneCity

The TrackerController below has an explicit dependency on UrlHelper. The Registration() action method sets the URL of the mobile site back button using the UrlHelper.Action() method.

using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelper UrlHelper { get; set; }

    public TrackerController(UrlHelper urlHelper) 
    {
        UrlHelper = urlHelper;
    }

    [...]

    [HttpGet]
    public ActionResult Registration(Guid? order)
    {
        ViewBag.MobileBackButtonUrl = UrlHelper.Action("Index", "Tracker", new { order });
        [...]
    }
[...]
}

A unit test that checks the MobileBackButtonUrl property of the ViewBag would look like this:

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelper> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelper>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }

    [...]

    [TestMethod]
    public void Registration_SetsTheMobileBackButtonUrl()
    {
        var testUrl = "http://thebackbuttonurl";
        mockUrlHelper.Setup(h => h.Action("Index", "Tracker", It.IsAny<object>())).Returns(testUrl);
        trackerController.Registration(new Guid());
        Assert.AreEqual(testUrl, trackerController.ViewBag.MobileBackButtonUrl);
    }

[...]

}

Unfortunately, because UrlHelper does not have a parameterless constructor, it cannot be mocked with the C# Moq library. Even if it had a parameterless constructor, the Action() method not being virtual would prevent Moq from stubbing it. The bottom line is: it is very hard to unit test a class that has direct dependencies on third-party APIs.

So, let’s use the Proxy Pattern to make our testing easier. The idea is to create a proxy for the UrlHelper class. We’ll call it UrlHelperProxy.

using System;
using System.Web.Mvc;
using System.Web.Routing;

namespace Dominos.OLO.Html5.Controllers
{
    public class UrlHelperProxy
    {
        private readonly UrlHelper helper;

        // Parameterless constructor, so that Moq can instantiate the proxy
        public UrlHelperProxy() {}

        public UrlHelperProxy(UrlHelper helper)
        {
            this.helper = helper;
        }

        // Virtual, so that Moq can stub it
        public virtual string Action(string action, string controller, object hash)
        {
            return helper.Action(action, controller, hash);
        }
    }
}

As you can see, the UrlHelperProxy does nothing fancy but delegate the Action() method call to the UrlHelper. What it does differently is:

  1. It has a parameterless constructor
  2. The Action() method is virtual

Therefore, by changing the TrackerController code to accept injection of a UrlHelperProxy instead of a UrlHelper, we will be able to properly unit test the Registration() method.

using System;
using System.Web.Mvc;

public class TrackerController : Controller
{
    private UrlHelperProxy UrlHelper { get; set; }

    public TrackerController(UrlHelperProxy urlHelper) 
    {
        UrlHelper = urlHelper;
    }
[...]
}

[TestClass]
public class TrackerControllerTest
{
    private Mock<UrlHelperProxy> mockUrlHelper;
    private TrackerController trackerController;

    [TestInitialize]
    public void Initialize()
    {
        mockUrlHelper = new Mock<UrlHelperProxy>();
        trackerController = new TrackerController(mockUrlHelper.Object);
    }
[...]
}

I am always in favour of changing a design to improve testability. An example is the provision of setter injectors for the properties that the unit tests need to alter or stub. It usually pays off by providing a deeper understanding of the code and of the design considerations at hand.
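As a sketch of that setter-injection idea (the PriceCalculator class and its clock are hypothetical, not from the project above):

```csharp
using System;

public class PriceCalculator
{
    // Setter injector added for testability: production code keeps the
    // default system clock, while a unit test swaps in a fixed one.
    public Func<DateTime> Clock { get; set; } = () => DateTime.UtcNow;

    public decimal MondayDiscount()
    {
        // The time dependency is now under the test's control
        return Clock().DayOfWeek == DayOfWeek.Monday ? 0.10m : 0.00m;
    }
}
```

Without the setter, any test of MondayDiscount() would depend on the day it happens to run; with it, the test pins the clock and becomes deterministic.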

This simple example illustrates a well-known and simple Object Oriented design pattern. We made our TrackerController class more testable, and we understand better what the Proxy Pattern can be used for: win-win!