ruk·si

API
Microservices

Updated at 2021-03-10 15:38

TL;DR:

  • If unsure, never start with microservices.
  • Start with a monolith and break it later.

Initially plan for 10x traffic, expect a rewrite to reach 100x traffic.

High-level architectural patterns for an application:

  • Monolith: you have one server running your main application alongside a few other monolithic services like a database and a queue
  • Microservices: service-oriented architecture where you build a lot of small APIs to act as external services for other components

A rough guideline: a microservice is something you could rewrite in two weeks; it exposes an API and hides its internals from consumers. Usually REST over HTTP.

Microservices are now a lot more attractive because of the cloud. Easier management of individual servers has made microservice architecture a viable solution for companies of all sizes. The question is: is it a good fit for your company?

The microservice design pattern is not better than the monolithic one; it's a trade-off. The microservices pattern emerged to allow big organizations to distribute work. For smaller teams, using a lot of microservices slows down the release cycle of full product features because of the management overhead. But in bigger companies, where individual developers can take more ownership of individual microservices, the pattern forces teams to define cleaner interfaces between services, which should speed up the release cycle.

Monoliths:

  • add development complexity if not done right on the code level.
  • give you only one repository to track and deploy.
  • work really well for small and medium-sized applications.
  • work well for small teams where everybody needs to understand the whole app.
  • are faster to build than microservices, but worse to maintain.

Microservices:

  • add management complexity without clear guidelines for development, deployment, passing secrets, managing servers, etc.
  • give you multiple code repositories to track and deploy.
  • are less likely to run into dependency conflicts.
  • work really well for huge applications.
  • work well for large organizations where individual teams can take ownership of a service.
  • are slower to build than monolithic apps, but better to maintain.

Splitting a Monolith to Microservices

When to split to microservices:

  • The code base gets too large to be managed by a single owner.
  • The software stack is so vast that no team can understand it from end-to-end.
  • You need to scale up your solution, and microservices allow more fine-grained scaling.

Split your monolithic application into contexts. Move your code into discrete packages. Check dependencies, move things around, check usage. Start from a piece that will change a lot during refactoring or that has very distinct ownership from the rest.

What models exist in each context, e.g. User, Invoice, etc.? Which of these models are shared between which contexts? Sharing internal representations causes tight coupling, so you should only share a limited representation.
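One way to keep that coupling in check, sketched with hypothetical billing-context models: other contexts only ever receive a frozen, limited summary, never the internal Invoice.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    # Internal representation, owned by the billing context.
    id: str
    user_id: str
    line_items: list   # list of (description, price) pairs
    tax_rules: dict    # internal detail that must not leak out

@dataclass(frozen=True)
class InvoiceSummary:
    # Limited representation: the only shape other contexts see.
    id: str
    user_id: str
    total: float

def to_summary(invoice: Invoice) -> InvoiceSummary:
    # Translate at the boundary; internals stay inside the context.
    total = sum(price for _, price in invoice.line_items)
    return InvoiceSummary(invoice.id, invoice.user_id, total)
```

With this split, the billing context can restructure `line_items` or `tax_rules` without breaking any other context.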

When dividing a legacy monolith into microservices, it usually goes like this: during the first 3 months you only get 2 microservices extracted, but after that the pace picks up.

Deployment

Avoid a monolithic build where all services are built at the same time. Each service should be built separately.

Think about how to package the artifact that you will deploy. Preferably it will 1) work with as few dependencies to install as possible and 2) work in as many environments as possible, from local to cloud. Docker is usually the best: you simply install Docker, start the image, and it works across environments.

My preference from best to less-optimal:
    Linux container image (e.g. Docker image)
        > infrastructure specific image (e.g. using Packer to build AWS AMI)
            > operating system specific image (e.g. deb for Debian and Ubuntu)
                > programming language specific package (e.g. egg or pex for Python)
                    > source code with install scripts

Avoid packing configuration into the artifact itself; configuration should come from outside the artifact. Define it in a separate file, define it at install time, or use a separate configuration server.
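The simplest form of external configuration is environment variables read at startup, so the same artifact runs unchanged in local, qa and prod. A minimal sketch; the variable names and defaults are illustrative:

```python
import os

def load_config(env=os.environ):
    # Configuration comes from outside the artifact; defaults only
    # cover local development.
    return {
        "db_url": env.get("APP_DB_URL", "postgres://localhost/dev"),
        "queue_url": env.get("APP_QUEUE_URL", "amqp://localhost"),
        "log_level": env.get("APP_LOG_LEVEL", "INFO"),
    }
```

Passing `env` as a parameter also makes the configuration trivially testable without touching the real environment.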

Deploying a single service per host is the best approach. Many services per host is common but makes monitoring and resource sharing problematic. The main downside of one service per host is management overhead, but proper tooling solves that.

PaaS offerings are good for common web applications but not for much more than that. The more you differ from the common case, the more you need to customize e.g. load-based scaling to fit your needs. Don't host your APIs on Heroku or a similar PaaS; customizing them is impossible, expensive or at the very least tedious.

Write your own build and deploy scripts. I'd recommend using Python with something like Click/Invoke/Fabric. Allow controlling at least: 1) which artifact to deploy, 2) where to deploy and 3) which version to deploy. It has to be this simple: a single command to do everything after the artifact exists. Preferably your CI/CD pipeline would build the actual artifact, but it would simply run this command. These scripts can grow many layers of logic to run tests, verify, notify different channels, etc. as your project grows. Available environments should be defined in some central place, e.g. a YAML file in S3 that the scripts pull.

NAME = name of the project
ENV  = where to deploy e.g. local, qa, prod
VER  = semantic version e.g. 1.2.3

# build --artifact=<NAME> --version=<VER>        # in CI/CD
# publish --artifact=<NAME> --environment=<ENV>  # in CI/CD

deploy --artifact=<NAME> --version=<VER> --environment=<ENV>
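A minimal sketch of that single deploy command, using only the standard library's argparse where the text suggests Click/Invoke/Fabric (the shape is the same either way; all names are illustrative):

```python
import argparse

# In a real setup this list would be pulled from a central place,
# e.g. a YAML file in S3.
ENVIRONMENTS = ("local", "qa", "prod")

def make_parser():
    parser = argparse.ArgumentParser(prog="deploy")
    parser.add_argument("--artifact", required=True)      # which artifact
    parser.add_argument("--version", required=True)       # which version
    parser.add_argument("--environment", required=True,
                        choices=ENVIRONMENTS)             # where to deploy
    return parser

def deploy(argv):
    args = make_parser().parse_args(argv)
    # A real script would fetch the artifact, run checks, notify, etc.
    return f"deploying {args.artifact} {args.version} to {args.environment}"
```

Invalid environments fail fast at the parser thanks to `choices`, before any deploy logic runs.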

Use Terraform or other Infrastructure-as-Code to define your cloud setup. Terraform is pretty clean and slick.

Resilience

Everything will fail; learn to recover. Don't try to make everything foolproof; instead, learn to fail fast and restart the component.

Even AWS only guarantees 99.95% availability for a region,
and that is per region across all of its availability zones.

Slow downstream services are worse than failing ones. Use timeouts, bulkheads (partial shutdown to protect the rest) or a circuit breaker (ignore a slow service for a while after detecting slowness) like Hystrix.

Bulkheads can be implemented simply by using
different connection pools for different downstream services.
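A circuit breaker can be sketched in a few lines. The thresholds and the single half-open retry below are illustrative assumptions, not Hystrix's actual behavior:

```python
import time

class CircuitBreaker:
    # After max_failures consecutive failures, the breaker opens and
    # calls fail fast for reset_after seconds, so one slow or broken
    # downstream service cannot stall everything else.
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: let one request through to probe the service.
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result
```

The injectable `clock` keeps the breaker testable; production code would use the default `time.monotonic`.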

Make all operations idempotent. When working with distributed systems, you should make sure that multiple applications of an operation are fine, or even expected. The way to do this is to add an identifier to each operation, e.g. "add 100 credits for user" vs. "add 100 credits because of reason 456". This ensures multiple workers can coordinate. Note that idempotence applies to the business logic; you still log and monitor each operation separately.
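The operation-identifier trick can be sketched like this; the account shape and operation id are illustrative:

```python
def add_credits(account, amount, operation_id):
    # Applying the same operation twice is a safe no-op, so retries
    # and duplicate deliveries between workers are harmless.
    if operation_id in account["applied_ops"]:
        return account
    account["credits"] += amount
    account["applied_ops"].add(operation_id)
    return account

account = {"credits": 0, "applied_ops": set()}
add_credits(account, 100, "op-456")
add_credits(account, 100, "op-456")  # retry by another worker: no effect
```

A real system would persist the applied operation ids alongside the account, atomically with the balance update.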

Sources