Three Micro Services Coupling Anti-Patterns

Six months ago I joined my first Micro Services team. I was surprised that we had no set-in-stone rule banning inter-service calls. Indeed, one of the fundamental ideas of Micro Services is that each service should be decoupled from the rest of the world so that it can be changed more easily (as long as it still fulfils its Consumer-Driven Contracts).

Why did we not enforce such a rule? Why did we insist on suffering such agonising pain? Once the project finished, I had time to reflect on three “anti-patterns” where the temptation to make calls between services is great. But fret not: I’ll show you a path to salvation.

The Horizontal Service

The first Use Case is when a Micro Service provides a capability that other Micro Services need. In our case we used an AuthService to identify a user and associate her with a session through an authorisation token. Some services in turn extracted the token from the HTTP request header and queried the AuthService to check its existence and validity.

Because the AuthService.CheckToken endpoint interface was simple and almost never changed, the issue of coupling a few services to a Horizontal Service never hit us in production. During development, however, stories around authentication and authorisation proved painful, partly because you had to touch, at the very minimum, the web client, the AuthService, and at least one other Micro Service consuming the AuthService.CheckToken endpoint.

If you are in this situation, make sure you have some conversations about using native platform support (like DotNetOpenAuth) to bring this capability directly into your services. Indeed, if a horizontal feature that most services need (e.g. Logging or Model Validation) is supported natively by your toolchain, why roll out a Micro Service that by nature will have high afferent coupling?
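To illustrate the idea of bringing the capability in-process: if sessions are backed by signed tokens, each service can validate a token locally rather than calling AuthService.CheckToken. Here is a minimal Python sketch, assuming a hypothetical shared HMAC signing key (in a .NET stack, DotNetOpenAuth or Windows Authentication would play this role):

```python
import hashlib
import hmac

# Hypothetical shared signing key; in practice this would come from
# configuration or a secrets store, not a literal in the code.
SECRET_KEY = b"shared-signing-key"

def sign_token(session_id: str) -> str:
    """Issue a token whose validity can later be checked locally."""
    signature = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{signature}"

def check_token(token: str) -> bool:
    """Validate the token in-process -- no round trip to an AuthService."""
    try:
        session_id, signature = token.rsplit(".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

With this in place, a service that receives the token in a request header can verify it without coupling itself to another service at runtime; only the signing key is shared.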

The Aggregator Service

The second Use Case is when you need some data aggregated from different bounded contexts. An example could be a page where the end-user is presented with general customer data (e.g. name, login) alongside some more specific accounting data (e.g. billing address, last bill amount).

The CQRS pattern proposes an approach where ReadModels are built and customised to enable this scenario. But CQRS is a relatively advanced technique, and building read models at a non-prohibitive cost requires some tooling, such as an EventStore. Instead, the average developer could consider exposing a new endpoint, an Aggregator Service, that reaches across domains to create a bespoke view of the data.

When I first faced this scenario, I opted instead for having the web client assemble the aggregated view by calling several Micro Services, rather than implementing an Aggregator Service endpoint. I really like this approach for several reasons. First, the web client is by its very nature a data aggregator and orchestrator. It knows what data it needs and how to find it. This is what people expect from a web client in a Micro Services world, and it should (and will) be tested accordingly. Second, the decision to make a blocking or non-blocking call is brought closer to the end-user, and so made with a better understanding of how much the User Experience will be impacted. In comparison, the Aggregator Service endpoint would have to guess how consumers intend to call and use it: is it OK to lazily or partially load the data?

Of course the drawback of this approach is that it makes the client more bloated and more chatty. This can usually be addressed with good design and tests on the client, and good caching and scaling practices so as to reduce your services’ response times.
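As a sketch of the client-side approach, the Python snippet below assembles the customer page by calling two services concurrently; the service calls are stubbed, and their names and payloads are hypothetical stand-ins for real HTTP requests:

```python
import asyncio

# Stubbed service calls standing in for real HTTP requests to two
# different bounded contexts; names and payloads are illustrative.
async def fetch_customer(customer_id: str) -> dict:
    return {"name": "Ada", "login": "ada"}

async def fetch_accounting(customer_id: str) -> dict:
    return {"billing_address": "10 High St", "last_bill": 42.0}

async def build_customer_page(customer_id: str) -> dict:
    # The client owns the calling strategy: both requests run
    # concurrently, and the page degrades gracefully if the
    # accounting service is unavailable.
    customer_task = asyncio.create_task(fetch_customer(customer_id))
    accounting_task = asyncio.create_task(fetch_accounting(customer_id))
    view = await customer_task
    try:
        view.update(await accounting_task)
    except Exception:
        view["last_bill"] = None  # partial load is the client's choice
    return view

page = asyncio.run(build_customer_page("c-1"))
```

The point is that the blocking/non-blocking trade-off lives in the client, next to the User Experience it affects, instead of being guessed at inside an Aggregator Service.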

The Workflow Service

The last example is when the downstream effect of calling a Micro Service is the execution of a complex workflow. For instance, when a customer registers, we need to create the user account, associate a session, update the financial backend, and send a welcome email. That is four different actions: some asynchronous (financial backend and email), and some synchronous (account and session). So really, what choice do we have but to implement the workflow in some sort of CustomerService?

Similarly, we had a ModerationService that aimed at post-moderating illicit content. For a moderation request, we sometimes had to obfuscate the customer account, delete its profile bio, reset its avatar, and remove its profile gallery images. Here again the ModerationService had to implement the workflow and decide whether to make these calls synchronously or asynchronously.

An execution stack within a Micro Service that mixes synchronous and asynchronous calls to other services is really a recipe for some fun games further down the line. The intent is very different between a blocking call, which is by nature core to a process, and a fire-and-forget call, which is more of a purposeful side effect. Indeed, there are two challenges here:

  1. How to implement a Use Case with two sequential and blocking service calls?
  2. How to implement a Use Case with two non-sequential and non-blocking service calls?

My advice would be to break the Workflow Service into two parts:

  1. For the synchronous part, ask yourself the following two questions: Can it be done client-side? Should I merge some services together? Indeed, if two steps of a workflow are so crucial that one cannot happen without the other, then either they belong to the same domain, or an “orchestrating software component” (aka the client) should ensure all steps are successful.
  2. Enable loosely coupled asynchronous communications in your Micro Service infrastructure with a messaging middleware, which can be an MQ Server, an Atom Feed, your own JMS bus, or a bespoke pub/sub message bus. Then, the asynchronous service calls can be replaced with posting to a queue or topic that the downstream services subscribe to.
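To make the second point concrete, here is a minimal Python sketch in which an in-memory bus stands in for a real MQ server or pub/sub middleware; the topic name, event shape, and handlers are all illustrative:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-memory pub/sub, standing in for an MQ server,
    an Atom feed, or a bespoke message bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

bus = MessageBus()
emails_sent, ledger = [], []

# Downstream services subscribe to the topic; the registering
# service never calls them directly.
bus.subscribe("customer.registered", lambda e: emails_sent.append(e["email"]))
bus.subscribe("customer.registered", lambda e: ledger.append(e["customer_id"]))

def register_customer(customer_id, email):
    # ...synchronous steps happen here: create the account, open a session...
    # Then the asynchronous side effects become a single publish.
    bus.publish("customer.registered", {"customer_id": customer_id, "email": email})

register_customer("c-1", "ada@example.com")
```

The CustomerService no longer knows that an email service or a financial backend exists; new subscribers can be added without touching it.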

Now that you have met the Horizontal, the Aggregator, and the Workflow Services, make sure to avoid them in your next Micro Services project.

6 thoughts on “Three Micro Services Coupling Anti-Patterns”

    • Beth, sure. I would not say it provides “less” coupling, just a different and safer type of coupling. With a library, as the developer in charge of the Micro Service that uses it, you typically control when you want to upgrade (unless you are careless), so the behaviour of the library won’t change without you knowing. On the other hand, if you depend on another Micro Service, then you need a more complex testing and backward-compatibility strategy across services, which can be a nightmare. Don’t get me wrong, shared code between services can be evil (we once had to roll out a LogLibrary upgrade to all 17 of our services), but it is a lesser evil than service coupling in my opinion.

      • Presumably your library calls out to an authentication service under the covers though? Do you not have the same issues with backwards compatibility between the service and the library?

      • In my post I was referring to a native platform capability (e.g. .Net Windows Authentication) or a basic capability (e.g. Logging). If the library you are using is a client to another web service, then it is a different ball game. If you own this web service, then I would argue against the need for a library at all (just make an HTTP call). If this web service is a third party, then I would presume they will maintain backward compatibility on your behalf (e.g. you use client V1_3 for an HTTP/SOAP service), hence you do not have to worry about testing and releasing new versions of this service to all clients.

        I guess one more difficult case is the use of Facebook, Twitter, and Google accounts to log in to the same app. Making one central service to manage all those authentication providers could be seen as a good idea, but it would in fact create a single point of contention, bundling together things that *do not* have to change at the same time. In that case I would let the client app make the call to Facebook, Twitter, or Google if possible. Unfortunately I believe these days they all use OAuth, so I guess it has to be done server-side. In that case I would probably create one service for each provider, and make my service very dumb: just a decorator that adds the correct token and whatever credentials the endpoint needs.

  1. In the aggregator microservice scenario, if the application is client-side based (e.g. a mobile app, a Single Page web application, etc.), then requiring the client to manage all the microservice composition and adaptation may be very expensive due to bandwidth constraints (as opposed to the traditional server-side web application). In that case, I see the aggregator service as a façade layer that can do more than just aggregation, applying additional adaptations as well. REST-based Web APIs have been a solution to provide application-specific tailoring while allowing the domain microservices not to make any compromises for the client apps.

    • Dan,

      Thanks for your comment. I understand where you are coming from: you want to make sure you do not give the client too much to do, particularly for mobile apps. There is a trade-off to be found between complexity server-side and client-side. From my experience, you should not try to anticipate, as a service provider, what kind of composition the consumer will need. Hence, I would prefer to keep the services granular and avoid data aggregation when possible, particularly across domains. A good application of that is the use of hypermedia controls, where the response contains in the header a URI to a resource associated with the returned data. See Martin Fowler’s blog on the Richardson Maturity Model for details.

      Having said that, I know there are use cases where bandwidth is reduced (for instance on sites where wireless connections are provided by local LANs, such as remote mining sites) or hardware is limited (for instance when using inexpensive tablets for NGOs working in the Global South).

      So to come back to aggregation, there are two patterns I have seen used.

      The first one, as you describe, is to have a facade, or edge, or presentation service, whose responsibility is to provide resources in a certain way for a certain client or view. This service calls some domain services, hence is strongly coupled to them, which is dangerous. It could however be deemed acceptable if you keep this practice to presentation services only. But I think this is an open door, and the practice will leak to all services. This pattern really reminds me of DTOs & Transformers in an MVC application. Usually you like them when you first learn about them, then hate them once you have used them for a while, because at the end of the day the transformers do not provide the degree of isolation you need, as they concentrate all the coupling in one class. By the way, I am not sure I understand what you mean by “REST-based Web APIs have been a solution to provide application-specific tailoring”. I would very much appreciate it if you could provide more details.

      Coming back to microservices, a better approach IMHO is to use events. I will soon have a post on the ThoughtWorks Insight pages that talks precisely about that. The idea is to have services post events when the domain has changed, and downstream services subscribe to those events. A downstream service can be a presentation service that builds the data based on the received events. Here the presentation service is a downstream service and does not call any of the domain services, which is much better: the only dependency is on the event store, and no service needs to know the other services exist.
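      As a tiny Python sketch of such a projection (the event shapes here are hypothetical, not from a real system), the presentation service simply folds published events into a local view:

```python
# Hypothetical stream of domain events received by a downstream
# presentation service; it never calls the domain services.
events = [
    {"type": "CustomerRenamed", "id": "c-1", "name": "Ada"},
    {"type": "BillIssued", "id": "c-1", "amount": 42.0},
]

read_model = {}
for event in events:
    view = read_model.setdefault(event["id"], {})
    if event["type"] == "CustomerRenamed":
        view["name"] = event["name"]
    elif event["type"] == "BillIssued":
        view["last_bill"] = event["amount"]
```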

      Thanks again for your comment, Dan.
