Episode 9: How to Deliver a Kubernetes Microservices Platform

Over the past few episodes we have talked in detail about a number of aspects of delivering an agile platform. We’ve looked at Kubernetes, NoSQL, DevOps and logging & tracing. These are all essential elements in the recipe of an agile microservices platform. In this episode we talk about our approach for delivering the whole thing: a Kubernetes technology platform in the cloud that you can use to rapidly develop microservices-based applications.

Summary

At a high level, our approach is split into 4 areas:

  • Define – discovery and definition of what we’re going to deliver
  • Deliver – delivering the DevOps, infrastructure and shared services
  • Exploit – maximising the ability to deliver apps onto this platform
  • Maintain – ensuring we stay current, and making sure we can adapt to future needs

In this podcast we’re looking at the first two of these. We are going from scratch to the point where you have the technology in place on which you can deliver and operate your microservices.

The latest iteration of the technology platform we are describing is the one we have implemented for Glu (https://www.glu.lu/). It is launching in Q2 2019 and is a full, living example of the tech we are talking about.

Define

In the define phase we have 3 main areas of activity:

  • Technical scope – discovering the main goals and constraints within which we work. This includes things like the preferred cloud vendor, the services we must connect to, and how security will be applied.
  • Technical blueprint – creating a definition that can be shared with everyone involved so that we communicate a common understanding of what is going to be delivered. We already have a template based on what we would deliver, and we make the appropriate adaptations to this blueprint based on what we have learned in discovery.
  • Implementation plan – this is the backlog of tasks that need to be done and a first cut at priorities. We would approach this in an agile fashion; if you’re not following an agile methodology you might choose a different way of project planning.

These activities are not expected to take a long time. We would expect this phase to be 2-4 weeks for a simple implementation and 4-8 weeks when there are particularly new or unique constraints in place. If it is taking longer than that, you should take a hard look at your constraints to make sure you are not over-complicating things.

Deliver

After we’ve defined our blueprint we move swiftly on to implementing it. You can get in touch with us about reusing our out-of-the-box platform, modify it to suit your own needs, or simply follow these steps to create your own from scratch.

  • Cloud accounts – we create the cloud accounts that resources will be deployed into. We can’t start building without this, and it’s the foundation for billing and security. At the very least we’d expect a master “parent” account, and within it a DevOps account plus a separate account for each environment, so that billing is broken down per environment (a minimal account-creation sketch follows this list).
  • DevOps infrastructure – we create scripts to set up the DevOps environment, which then persists for the duration of the project. Unlike typical application DevOps, this is triggered manually and tends to stay in place. In our case we use Spinnaker running on its own dedicated EKS cluster, though you could use other tools such as Jenkins in the same way (a cluster-creation sketch follows this list).
  • Environments – as described in episodes 6 and 7, we create repositories and DevOps processes for each environment. In the podcast we talk about how we treat different environments: for example, we spin up and tear down performance testing environments each time we use them, whereas many of the other environments are created once, kept live and then maintained (see the spin-up/tear-down sketch after this list).
  • Note that we’re talking about each of the environments being based on Kubernetes. We are not trying to replicate a complex on-premises deployment here; that’s not the point. We’re delivering a new, clean microservices platform that lets you build apps quickly using standardised technology.
  • Logging and diagnostics – as we described in episode 1, we then put the logging and diagnostic tooling in place in each of the environments, as it is shared across all of the applications that run on the platform.
  • Data services – we then deploy the common data and messaging services that are also shared across applications. This would typically include NoSQL databases such as MongoDB or ScyllaDB and messaging platforms such as Kafka. We would also apply service mesh technologies such as Istio. A sketch of installing these shared services follows this list.
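
To make the account structure concrete, here is a minimal sketch of scripting member account creation under the parent account with AWS Organizations via boto3. The account names and email addresses are placeholders, and the one-account-per-environment layout is an assumption for illustration; the real structure should follow your own blueprint.

    import time
    import boto3

    # Assumes credentials for the master "parent" account, with AWS
    # Organizations already enabled on it.
    org = boto3.client("organizations")

    # Hypothetical layout: one DevOps account plus one account per environment.
    accounts = {
        "devops": "aws-devops@example.com",
        "dev": "aws-dev@example.com",
        "test": "aws-test@example.com",
        "prod": "aws-prod@example.com",
    }

    for name, email in accounts.items():
        # Account creation is asynchronous; poll the request until it settles.
        request_id = org.create_account(Email=email, AccountName=name)[
            "CreateAccountStatus"]["Id"]
        state = "IN_PROGRESS"
        while state == "IN_PROGRESS":
            time.sleep(10)
            state = org.describe_create_account_status(
                CreateAccountRequestId=request_id)["CreateAccountStatus"]["State"]
        print(name, state)

Because each environment gets its own member account, consolidated billing in the parent account gives you the per-environment cost breakdown described above.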
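
For the DevOps infrastructure, here is a sketch of creating the dedicated cluster with boto3. The cluster name, region, IAM role ARN and subnet IDs are placeholders that must already exist, and in practice you might equally drive this from eksctl or Terraform; the point is that it is scripted once, run manually, and then left in place for Spinnaker (or Jenkins) to live on.

    import boto3

    # Region, role ARN and subnet IDs are placeholders and must already exist.
    eks = boto3.client("eks", region_name="eu-west-1")

    eks.create_cluster(
        name="devops-spinnaker",
        roleArn="arn:aws:iam::111111111111:role/eks-cluster-role",
        resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
    )

    # Wait for the control plane to come up; the DevOps tooling is then
    # installed onto this cluster and stays there for the life of the project.
    eks.get_waiter("cluster_active").wait(name="devops-spinnaker")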
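
The spin-up/tear-down pattern used for performance testing environments can be sketched with the official kubernetes Python client. Treating an environment as a single namespace is a simplification for illustration (it could equally be a whole cluster), and the namespace name is an assumption.

    from kubernetes import client, config

    # Assumes a kubeconfig entry for the target cluster is already in place.
    config.load_kube_config()
    core = client.CoreV1Api()

    def create_perf_env(name="perf-test"):
        # A throwaway namespace for one performance test run.
        ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=name))
        core.create_namespace(ns)

    def tear_down_perf_env(name="perf-test"):
        # Deleting the namespace removes everything deployed into it.
        core.delete_namespace(name=name)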
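
Finally, the shared logging, diagnostics and data services are typically installed once per environment from packaged charts. Here is a sketch of driving Helm from Python; the Bitnami chart names, release names and the data-services namespace are assumptions, and you would substitute whichever charts or operators your blueprint specifies for MongoDB, ScyllaDB, Kafka, the logging stack and Istio.

    import subprocess

    def helm_install(release, chart, namespace):
        # Idempotent install/upgrade of one shared service into its own namespace.
        subprocess.run(
            ["helm", "upgrade", "--install", release, chart,
             "--namespace", namespace, "--create-namespace"],
            check=True,
        )

    # The chart repository and chart names are examples, not a recommendation.
    subprocess.run(
        ["helm", "repo", "add", "bitnami", "https://charts.bitnami.com/bitnami"],
        check=True,
    )
    helm_install("mongodb", "bitnami/mongodb", "data-services")
    helm_install("kafka", "bitnami/kafka", "data-services")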

All this gets us to the point where you have a platform you can hand over to your app teams. You’re ready to cut the ribbon on it and start exploiting the opportunity you have created. Listen in next time for how best to get your app teams delivering and making use of this platform.

