A Transition from Monolith to Microservices Part 2: Finding a Solution

By John Holliman, 03.06.2019

This post is part of a 3-part series on Expanse’s transition to a Microservices [1] Architecture built on Java and Spring Boot. In this series, we share the issues we faced with the monolithic system, why we think the Spring Boot-based services will address them, and how we are effecting this change, in the hope that it is useful to other technologists who love to design and think about systems.

This series is best for readers with some working knowledge about building and maintaining web applications, relational databases, SQL, and common interfaces between web applications and databases (e.g. ORMs).

In installment 1 of this series, I discussed the chain of events that led us to transition from a monolithic architecture to a microservices architecture, as well as the primary goals and requirements of this new architecture. In installment 2, I will go into more detail about the requirements for our API gateway and our User/Authorization service, as well as some general advice for startups adopting microservices.

Microservices Architecture for Growing Startups

Breaking apart a monolithic application into smaller sub-components is referred to as the Microservices Architecture pattern [2]. Many large organizations have adopted this solution to deal with the problem of how to distribute API/service development across many people.

A common question related to the Microservices Architecture, and one that has a big impact on whether microservices will solve more problems than they create, is: How large should my microservices be?

There is no hard and fast answer to this question, but there are some general rules of thumb that suggest smaller organizations should err toward larger services:

  • The network is not free
    • RPCs (remote procedure calls) are expensive and slow relative to single-node IPCs (interprocess communications)
    • Additionally, the network is unreliable and services will have to be built to handle and be resilient in the face of failures related to it
  • Distributed transactions are tough
    • Transactions spanning multiple services will require careful thought to provide the necessary consistency guarantees
  • Developing features spanning multiple services is slow relative to implementing features in a single service
  • Managing service versions and coordinating updates across the various clients of a given service is difficult
  • Integration testing becomes more challenging as you increase the number of services a service depends on
  • Distributed tracing and centralized logging are requirements for effective debugging
    • The more microservices that exist in a system the harder it is to reason about the services that fall on the critical path for a given request
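To make the first point above concrete, here is a minimal sketch of the kind of retry wrapper that cross-service calls end up needing once the network sits between callers. The policy here (attempt count, fixed backoff) and the class name are illustrative choices, not a prescription:

```java
import java.util.function.Supplier;

// Minimal retry helper: every RPC can fail transiently, so callers
// need an explicit policy for how often to retry and how long to wait.
public class RetryingClient {
    public static <T> T callWithRetry(Supplier<T> rpc, int maxAttempts, long backoffMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return rpc.get();                 // the remote call
            } catch (RuntimeException e) {
                last = e;                          // remember the failure
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(backoffMillis); // fixed backoff between attempts
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break;                     // give up if interrupted
                    }
                }
            }
        }
        throw last;                                // exhausted all attempts
    }
}
```

None of this ceremony exists for a single-node method call, which is the core of the argument for fewer, larger services early on.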

That said, if we backpedal on microservices too much, we will end up right back where we started, worshipping our monolith.

We decided that a service should be large enough to fully encapsulate a distinct business capability and be small enough to be developed by a single team. If you’re just getting started with microservices, avoid cross-service communication as much as possible to reduce the amount of time you’ll inevitably spend debugging RPC calls.

Your organization should only transition to microservices if it is, or is ready to become, excellent at:

  • Automated testing and deployment
  • Tracing and logging
  • Cross-team communication

We’ll discuss these more in a future post.

The API Gateway

The centerpiece of the Microservices Architecture is the API gateway: the layer that unifies the independent services and presents them as a single logical interface. All requests go to a single endpoint, and the API gateway is responsible for forwarding each request to the correct upstream service. As a unifying layer, the API gateway may also handle authentication/authorization, caching, aggregating results from multiple services, tracing, and serving static assets.
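To illustrate the forwarding responsibility, here is a toy sketch of a gateway route table using longest-prefix matching. Real gateways like Kong also match on hosts, methods, and headers; the paths and upstream URLs below are hypothetical:

```java
import java.util.*;

// Toy illustration of the gateway's core job: mapping request paths
// to upstream services via longest-prefix match.
public class RouteTable {
    private final NavigableMap<String, String> routes = new TreeMap<>();

    public void addRoute(String pathPrefix, String upstreamBaseUrl) {
        routes.put(pathPrefix, upstreamBaseUrl);
    }

    // Iterate prefixes longest-first so /api/users wins over /api.
    public Optional<String> resolve(String path) {
        return routes.descendingMap().entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .findFirst()
                .map(e -> e.getValue() + path);
    }
}
```

Longest-prefix matching is what lets a newly extracted service claim a specific path like /api/users while the monolith keeps serving everything else under /api.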

There are many great API gateways out there. To decide which one to use, we applied the following criteria:

  • Open source
    • Open source projects are great for a lot of reasons
      • The freedom to dive in and fix any issue you may be experiencing is amazing
      • Also, the number of eyeballs on a project is often directly proportional to its quality, especially in terms of performance and security
  • Optional enterprise support
    • Fixing your own issues is great, but having an expert on hand to guide you to a quick solution can be even better
  • Performant
    • API calls from our customers must return in a reasonable amount of time; therefore, they should not be burning clock in a gateway
  • Extensible
    • It should be possible, easy in fact, to create custom middleware in order to extend the functionality of the API gateway
  • User-friendly
    • Satisfying all of the above criteria is great, but if you have to be a member of the Linux Blood Tribe [4] to get it running, deployed, or configured, that's a dealbreaker
  • Supports declarative configuration
    • At Expanse, we are firm believers that infrastructure specifications and configurations should be checked into source control
    • There are many benefits to this which I won't go into here; suffice it to say, it helps greatly to avoid errors

After a good amount of research, we landed on Kong. Kong checks all of the boxes above with the exception of the last one. Kong does not currently support declarative configuration [3], but there are a handful of third-party tools that wrap the Kong Admin API to provide declarative YAML configuration.
The API gateway also gives us the ability to gradually roll out our new Java services. Kong, as with any good API gateway, allows us to exercise extremely fine-grained control over which requests map to which upstream services. This means that as soon as we rewrite even a single API route, we can get it into production by configuring Kong to route traffic for that route to the new service. This satisfies our requirement of avoiding an all-or-nothing rewrite.
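As a sketch of what this route-level control looks like in declarative form (Kong itself did not support this natively at the time of writing; the YAML below is modeled on the format that third-party wrapper tools use, and every name in it is hypothetical):

```yaml
# Hypothetical declarative route config: the monolith keeps the
# catch-all /api prefix while a single rewritten route is peeled
# off to the new Spring Boot service.
services:
  - name: legacy-monolith
    url: http://monolith.internal:8000
    routes:
      - name: catch-all
        paths:
          - /api
  - name: user-service
    url: http://user-service.internal:8080
    routes:
      - name: users
        paths:
          - /api/users
```

Because this file lives in source control, moving a route from the monolith to a new service is a reviewable one-line diff rather than a sequence of Admin API calls.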

The User/Authorization Service

As mentioned in the requirements section above, we wanted to maintain a centralized service to manage user information and permissions. This service would be used by the API gateway to authorize requests and to attach user information, including features and permissions for the user, to the requests before proxying them upstream. The upstream service would then use the attached information to handle the request, potentially making an additional call to the user/authorization service if more information about the user is required.

Installments 1 and 2 have summarized the primary requirements for our microservices architecture and the goals we had for the system. In installment 3, I will discuss the impact of our transition so far.
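As a minimal sketch of the user/authorization flow described in this section, here is how an upstream service might consume the user context the gateway attaches. The header names (X-User-Id, X-User-Permissions) and the comma-separated encoding are assumptions for illustration, not our actual wire format:

```java
import java.util.*;

// Sketch of an upstream service parsing the user context that the
// gateway attached before proxying the request.
public class UserContext {
    public final String userId;
    public final Set<String> permissions;

    public UserContext(String userId, Set<String> permissions) {
        this.userId = userId;
        this.permissions = permissions;
    }

    // Hypothetical header names; a real deployment would pin these
    // down in the gateway/service contract.
    public static UserContext fromHeaders(Map<String, String> headers) {
        String id = headers.getOrDefault("X-User-Id", "anonymous");
        String raw = headers.getOrDefault("X-User-Permissions", "");
        Set<String> perms = new HashSet<>();
        for (String p : raw.split(",")) {
            String trimmed = p.trim();
            if (!trimmed.isEmpty()) perms.add(trimmed);
        }
        return new UserContext(id, perms);
    }

    public boolean can(String permission) {
        return permissions.contains(permission);
    }
}
```

The upside of this design is that most requests never need a second network hop: the gateway resolves the user once, and upstream services make an extra call to the user/authorization service only when the attached context is insufficient.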

Footnotes

[1] I personally prefer the term services to microservices because microservices implies that they must be small. That said, I’ll use the microservices vernacular because it’s what most people are familiar with hearing and has better SEO ¯\_(ツ)_/¯.
[2] Amazon and Netflix are notable companies vocal about running microservices at scale.
[3] They mentioned at the recent Kong Summit that it is on the roadmap and should be coming soon.
[4] This organization doesn’t exist.