Continuous Deployment with Microservices

Introduction

Over the last two years, I have been leading a microservices architecture team on overseas deliveries, helping customers expand their finance systems. The team comprises three pairs of developers who practice pair programming, and it also supports eleven of the customer's services. The continuous deployment (CD) pipeline is an essential technical practice for the team. In this article, I will share the implementation of CD in a microservices architecture from a practical perspective.

Overview of Microservices

In a microservices architecture, we split the traditional monolith into a system composed of several small services. This architecture allows for better scalability in both business and technical terms. However, implementing microservices successfully is challenging, because it involves restructuring the enterprise's organization as well as adopting new data management models and new deployment and monitoring technologies. To cope with these challenges, CD is widely adopted alongside microservices. This article specifically introduces a Docker-based CD approach.

Challenges for Deployment

In a monolithic architecture, the entire system lives in one code base and is deployed as a single project. Every update is verified through a continuous integration (CI) pipeline, and a continuous deployment (CD) pipeline can then push the system continuously into the production environment. Under these circumstances, the CI/CD pipeline runs for every change introduced into the system. For example, an update to the current system might require a 10-minute unit test run, a 2-hour acceptance test run, 15 minutes of packaging, and a 20-minute deployment; for larger systems, these times can be significantly longer.

When we split a monolithic system into multiple microservices and deploy each service independently, every code change affects only an individual service. We then only need to deploy the changed service by re-running that service's CI/CD pipeline. The total time for a successful deployment breaks down as follows:

  • 1-minute unit test
  • 1-minute integration test
  • 5-minute packaging
  • 5-minute deployment

After the split, we meet decoupling requirements better by minimizing the scope of changes that any single code change introduces. However, once the services are split, considerable effort is needed to deploy each service independently. In microservices scenarios, different services may adopt different technologies as demands dictate, and each service requires its own CD pipeline. Furthermore, every new service needs its logging and monitoring systems configured, which poses a huge challenge for CD.

Continuous Deployment Practices

We usually discuss Continuous Integration (CI) together with Continuous Deployment (CD). The entire CD process starts when code is pushed to the master branch.

We use Docker to absorb the differences between technology stacks, and our DevOps tooling templatizes the deployment, monitoring, and alerting configurations.

Here is a summary of how we use Docker for CD:

  • Build and publish a service as a Docker image
  • Adopt Docker Compose to run tests
  • Use Docker for deployment

The principles used are as follows:

  • Build pipeline as code
  • Infrastructure as code (based on AWS)
  • Shared build scripts

Building and Publishing a Service Using Docker

The procedure for building and publishing a service using Docker is:
1. Build the service with Docker and package it as a Docker image.
2. Publish the Docker image to the Docker registry.
3. Pull the Docker image from the registry for deployment.
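
As a concrete illustration, here is a minimal sketch of these three steps on the command line. The service name (my-service), registry host (registry.example.com), version tag, and port are hypothetical assumptions, not values from the original setup:

    # Step 1: build the service and package it as a Docker image
    docker build -t my-service:latest .

    # Step 2: tag the image and publish it to the registry
    docker tag my-service:latest registry.example.com/my-service:1.0.0
    docker push registry.example.com/my-service:1.0.0

    # Step 3: on the deployment target, pull the image and run it
    docker pull registry.example.com/my-service:1.0.0
    docker run -d -p 8080:8080 registry.example.com/my-service:1.0.0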

Adopting Docker Compose to Run a Test

Docker Compose lets us combine multiple Docker images. We combine the service image and a database image through Docker Compose so that the service can access its data. With that combination in place, we can use Docker Compose to run CI. The example at the link below demonstrates one way to achieve the combination.
https://gist.github.com/lvjian700/7c295e6a596e96526049f831d0eb8b13#file-docker-compose-yml
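
For reference without following the link, here is a minimal sketch of such a compose file. The image names, port, and environment variables are illustrative assumptions, not the exact contents of the linked gist:

    version: '2'
    services:
      app:
        # The service image built and published in the previous step
        image: registry.example.com/my-service:1.0.0
        ports:
          - "8080:8080"
        environment:
          # Hypothetical connection string pointing at the db service below
          DATABASE_URL: postgres://postgres:secret@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:9.6
        environment:
          POSTGRES_PASSWORD: secret

In CI, the test suite can then run against the combined stack, for example with docker-compose run app ./test.sh.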

Build Pipeline as Code

We usually use Jenkins or Bamboo to build the CI/CD pipeline. However, this approach requires a lot of manual configuration every time we create a pipeline, which makes automated CI server configuration very hard to achieve. Build pipeline as code means describing the pipeline itself in code. Such a description is highly readable and reusable, and it lets us configure CI servers easily. This year, the team migrated all of its pipelines from Bamboo to BuildKite, where a pipeline is described with code such as the configuration linked here (https://gist.github.com/lvjian700/7c295e6a596e96526049f831d0eb8b13#file-buildkite-yml).
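
The linked gist contains the team's actual configuration; as a hedged sketch in the same spirit, a BuildKite pipeline.yml for the steps described in this article might look like the following. The step labels and script names mirror the shared scripts introduced later and are assumptions:

    steps:
      - label: "Test"
        command: ./test.sh

      - wait

      - label: "Build and publish Docker image"
        command: ./docker-tag.sh

      - wait

      # Deployment steps run only on the master branch
      - label: "Deploy to test environment"
        command: ./deploy test
        branches: "master"

      # Manual gate: the Deploy-to-Production button mentioned later
      - block: "Deploy to Production"

      - label: "Deploy to production"
        command: ./deploy prod
        branches: "master"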

Infrastructure as Code

If we want to publish an HTTP-based RESTful API service, we need to prepare the following infrastructure for it:

  • Deployable machines
  • Machine IP address and network configurations
  • Hardware monitoring for the machines, such as CPU and memory
  • SLB (Load Balancer)
  • DNS
  • AutoScaling (automatic scaling of services)
  • Splunk for log collection
  • NewRelic for performance monitoring
  • Sentry.io and PagerDuty for alerting

We want to templatize and automate the creation and configuration of this infrastructure: we describe the infrastructure in code, and our DevOps tooling templatizes those descriptions.

Practices:

  • Deploy using AWS cloud servers
  • Describe and create resources using AWS CloudFormation (see the sketch after this list)
  • Keep the resource operation scripts under source control
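
As a minimal, hedged illustration of describing resources in code, rather than the team's actual template, a CloudFormation snippet for an auto-scaled service behind a load balancer might look like this; the AMI ID, instance type, and resource names are placeholder assumptions:

    AWSTemplateFormatVersion: '2010-09-09'
    Description: Minimal sketch of the infrastructure for one microservice
    Resources:
      ServiceLaunchConfig:
        Type: AWS::AutoScaling::LaunchConfiguration
        Properties:
          ImageId: ami-12345678   # placeholder AMI that runs the service's Docker image
          InstanceType: t2.micro
      ServiceAutoScalingGroup:
        Type: AWS::AutoScaling::AutoScalingGroup
        Properties:
          LaunchConfigurationName: !Ref ServiceLaunchConfig
          MinSize: '2'
          MaxSize: '4'
          AvailabilityZones: !GetAZs ''
          LoadBalancerNames:
            - !Ref ServiceLoadBalancer
      ServiceLoadBalancer:
        Type: AWS::ElasticLoadBalancing::LoadBalancer
        Properties:
          AvailabilityZones: !GetAZs ''
          Listeners:
            - LoadBalancerPort: '80'
              InstancePort: '8080'
              Protocol: HTTP

Creating the stack from a script kept under source control is then a single AWS CLI call, for example: aws cloudformation create-stack --stack-name my-service --template-body file://stack.yml.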

The foundational principles behind these practices are:

  • All descriptions of and operations on resources live in Git.
  • All environments use the same deployment procedure.
  • SSH and other manual operations on resources are reserved for environment testing and occasional debugging.

Shared Build Scripts

After building CD pipelines for multiple services, we can distill every pipeline into three steps:
1. Run the test
2. Build and publish the docker image
3. Deploy

Extract a shell script for each of these three steps:
1. test.sh
2. docker-tag.sh
3. deploy <test|prod>

Then create a Git repository for these scripts and reference it from each project as a Git submodule.
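
A hedged sketch of wiring this up, assuming a hypothetical repository URL and the script names above:

    # Add the shared scripts repository as a submodule (URL is hypothetical)
    git submodule add https://github.com/example/build-scripts.git build-scripts
    git commit -m "Add shared build scripts"

    # Each project's pipeline then calls the shared scripts
    ./build-scripts/test.sh
    ./build-scripts/docker-tag.sh
    ./build-scripts/deploy test    # or: ./build-scripts/deploy prod

    # When cloning a project, initialize the submodule as well
    git clone --recursive https://github.com/example/my-service.git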

What Next after CD?

The next step is to make the CD pipeline serve the team's day-to-day workflow.

We have built the CD pipeline; now we need it to exercise its full power in the team's agile development process. Let us walk through that process.

Team Responsibility

  • The team's primary roles are Business Analyst (BA), Developer (Dev), and Tech Lead (TL)
  • The BA is responsible for business analysis and for creating stories on the story wall
  • Devs are responsible for development, QA, and operations (a cross-functional team)
  • The TL is responsible for technical decisions

Workflow

  • A Dev takes a card from the backlog for analysis. After the analysis, the Dev kicks off the story together with the BA and TL, and they confirm the requirements and the technical implementation.
  • The Dev then creates a pull request (PR) in the repository to start the work. Every git push to the PR triggers the PR's pipeline; at this stage, the CI machine runs only the unit and integration tests.
  • Once development is finished, other Devs review the PR. If the review passes, the PR is merged to the master branch. The merge triggers the master pipeline, which automatically deploys the latest code to the test environment.
  • When deployment to the test environment succeeds, the Dev performs QA against that environment.
  • Once QA is complete, the Dev demonstrates the service to the BA and TL for user acceptance testing.
  • If the code passes user acceptance testing, the Dev clicks the Deploy-to-Production button in BuildKite to publish the service.

With this procedure, the team gets fast feedback from the CI/CD pipeline. A highly automated CD pipeline enables the team to release services story by story.

Questions and Answers

Here are some questions and answers that should deepen your understanding of microservices deployment.

Q1: What do you use for managing docker? Do you use K8S or Swarm?
A1: We used neither; in fact, this is the first time I have heard of those two tools. Each service's Docker image runs independently on its own AWS EC2 instance, and management centers on the EC2 instances. For image storage, we run a self-hosted Docker Registry to push and pull Docker images.
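
A self-hosted registry can be stood up with the official registry image; here is a minimal sketch, in which the internal host name is an assumption (a production setup would also put TLS in front of the registry):

    # Run a self-hosted Docker Registry on an internal host
    docker run -d -p 5000:5000 --restart=always --name registry registry:2

    # Push an image to it from a build machine
    docker tag my-service:latest registry.internal:5000/my-service:latest
    docker push registry.internal:5000/my-service:latest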

Q2: How do you perform cross-language service integration?
A2: We have integrated Ruby and Node.js services. Inter-service communication uses the HTTP protocol with JSON as the transmission format; the JSON is based on HAL, one of the hypermedia link implementations. We adopted the Consumer-Driven Contracts approach to manage inter-service conventions, using Pact so that contract tests at the unit level replace cross-service integration tests.

Q3: How do you determine the granularity of services?
A3: This is a very good question, and also a very hard one. I personally think the two most difficult questions about microservices are:
1. Why should we split out a service?
2. How should we split it? (That is, how small should the services be, and how do we define the system boundaries?)
The one-sentence answer to the granularity question is: split using the Domain-Driven Design (DDD) method. The longer answer is that our projects aim to decompose clients' financial ecosystems that have been running for more than ten years, that is, legacy system renovation. In my personal opinion, this scenario is the one best suited to microservices: old systems have well-understood pain points, and the requirements are comparatively stable. Here is a method for splitting:
a) Bring business experts, technical experts, and related stakeholders together to analyze the business scenarios, determine the system's constituents and domain language, and build the business models.
b) Convert the earlier idea of splitting the system by business function into splitting it by data model.
c) Divide each category of data model into multiple sub-models that support the primary data models.
d) Derive service granularity from the data model granularity.
There are also principles for splitting by non-functional requirements, such as splitting reads and writes into two parts when implementing CQRS, and adopting Event Sourcing to solve data state synchronization in distributed systems.

Q4: Why didn't you consider GoCD which is a first-class CI/CD tool?
A4: Good question. For that, I will have to discuss the sales issue with my colleague in charge of GoCD at ThoughtWorks. To be honest, I have not yet had a chance to try GoCD personally.

Q5: We currently find that when we switch from ESB or SOA to microservices, the original in-process calls change to network calls, and one RPC changes to several or a dozen RPCs, with a severe performance loss under the same conditions. How should we solve this problem?
A5: Performance is a frequently asked question. I have not yet encountered a high-concurrency scenario where it became a real problem, but we do have some ways to reduce service dependencies and control the number of network requests.
Essentially, all the services are deployed in the same network, such as within one AWS network. You can think of it as an internal local area network, so the performance cost of HTTP calls is acceptable for the time being.
From the system design perspective, rather than reducing the latency of individual HTTP requests, we reduce the number of HTTP requests sent. In our design, a majority of the data models are immutable, so we cache data between systems to cut down on the number of HTTP requests.

Conclusion

Microservices bring pronounced convenience to business and technical scalability, but they impose a huge challenge on both the organizational and the technical layers. New services keep emerging as the architecture evolves, and CD remains one of the key challenges in the technical layer. The pursuit of thorough automation can free the team from infrastructure work and let it focus on implementing the functionality that harvests business value.
