Contents
- The motivation
- The scenario
- Building the pipelines
- Simulating the entire build and deploy process
- Final thoughts
The motivation
One of the main reasons for using Kubernetes is that it allows us to simplify our delivery through CD pipelines. We still perform builds and tests in the same way as we would for any other project type in Azure DevOps. The main goal of this post, however, is to show how we can build CI/CD pipelines for Kubernetes in Azure DevOps using GitFlow. It won’t cover unit testing, but if you want to play around with it, you can add a unit test project to your solution and define its run in the UI or in the yml file.
Traditionally, CI/CD requires a series of steps to make it practical, especially when dealing with on-prem servers. This doesn’t mean you can’t do CI/CD without Kubernetes. Microsoft introduced Team Foundation Server (TFS) many years ago (it is now called Azure DevOps Server), and it has long been an on-prem DevOps platform supporting the older way of doing continuous integration and continuous delivery: your source code is still version controlled in TFS repos, and you still build pipelines and take their outputs to deploy to the target servers/runtime environments. When it comes to those servers/runtime environments, things like user groups, security and permissions are difficult to deal with. And you don’t have just one environment to work with; more likely, you will have multiple environments or servers running your workload, and this is where Kubernetes can be used to tackle the complexity.
Just a recap of what Kubernetes can do (to find out more, please see my previous post for a brief overview):
Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.
Many of my previous posts discussed scaling and managing workloads in Kubernetes, but they didn’t cover how we can automate the delivery of those workloads.
I won’t talk about what CI and CD are, but rather how these two practices fit together nicely in Kubernetes to help make our lives easier.
The scenario
I have a bunch of workloads that are environment-specific. Some are in production, while others are still in development or UAT. Managing these environments was difficult prior to Kubernetes: I could have one server running applications for UAT and others for production, but the build and deployment process was still duplicated and involved manual configuration.
The great thing about Kubernetes is that we can separate our environments into namespaces, each with its own (or similar) settings and configuration. We also have a centralized control plane to make our changes, so we manage our workloads at that centralized level rather than per environment. Nodes (traditional servers, in on-prem terms) in Kubernetes are short-lived and can be replicated when the workload grows, so there is no point configuring anything at the node level. Configuring security, users, user groups and permissions is performed through the control plane, so if a node dies, those configurations remain. If I had a configuration for UAT on one of my on-prem servers (environments) and that server went down, I would have to start all over again. In Kubernetes, application configuration is stored in an object called a ConfigMap, which lives at the cluster level (scoped to a namespace) rather than on any individual node. I won’t try to explain it in depth as it is not the focus of this post. If you want to find out more, either browse through my site and search for any post specific to Kubernetes or look it up on the web.
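As a rough illustration of that separation (the namespace names match the ones used later in this post; the ConfigMap name and key are hypothetical):
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production
kubectl create configmap app-settings -n development --from-literal=SyncIntervalMinutes=5
Because ConfigMaps live in the cluster rather than on a node, they survive the loss of any individual node.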
What we are going to do here is take an existing application, make some code changes to simulate actual development, commit the changes to an Azure DevOps repo, and define CI/CD pipelines to allow automated build and deployment to the Kubernetes cluster.
Building the pipelines
The overall process looks like the diagram below:
In the old UI approach, CI and CD are separated into their own pipelines: for CI, you would have a build pipeline consisting of different tasks such as build and ACR push; for CD, you would have a release pipeline that takes the Docker image and deploys it to the Kubernetes cluster. Since we are using the yml approach, however, all stages (build, push, deploy) live in a single yml file where we define both the CI and CD pipelines.
The build pipeline (CI)
Looking at the above diagram, the top segment is the build pipeline (a.k.a. the CI pipeline). Here we are leveraging the power of GitFlow and Azure DevOps build pipelines. We take the source code from a specific branch to build the Docker image, and when the build finishes we push it to the Azure Container Registry (ACR). An artifact is then published to our workspace so that the CD pipeline can pick it up and deploy to our Kubernetes cluster.
In general, the CI process involves:
- Defining the image build stages in a Dockerfile. This is container specific, either for Linux or Windows containers. The application I’m building the pipelines for is a .NET Core app that runs in a Windows node, so a Windows container is required.
- Defining the build pipeline in Azure DevOps. You can define your build pipeline either through the classic UI or by using a more Kubernetes-like approach: the .yml files. This is the one I will be using. Note that we are using a Microsoft-hosted agent for our build; you will see that in the azure-pipelines.yml file shortly.
- Establishing a service connection to your ACR so that your build pipeline can push the Docker image to it.
- Publishing an artifact for your build so that the CD pipeline (discussed next) can use it to deploy the image to the cluster.
I have written a few posts about containerizing applications for each type of container. You can find more in the following:
- Dockerizing a full .NET App for Windows Containers running in Kubernetes
- Deploying a .Net Core Worker Service to a Minikube VM
- Windows Authentication for Linux containers running inside Azure Kubernetes Service (AKS)
My local repository already has GitFlow enabled but I just wanted to highlight a few things:
- Commits on the develop branch will trigger a build for a development image, which will eventually get deployed to the development namespace in the cluster. The same process applies to UAT and prod; I don’t think I need to explain this further as the diagram tells it all.
- The .yml file defining the build pipeline relies on each branch to trigger the correct build and tag the Docker image accordingly.
- One ACR repository is enough for all the different versions of the container image, because each build is tagged with a different name (see the example tags right after this list). You should follow the best practices recommended by Microsoft on their web site, which can be found here. For example, if you are building images for Linux containers, the base image should already exist in the ACR repository; subsequent pushes will then only push the changes you made, not the base image itself. Again, that link covers all of this.
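For example, with the tag variable defined in the pipeline below ($(environment)-$(Build.BuildId)), the images pushed to the single ditsalessync repository end up looking something like this (the build IDs are made up):
your-acr.azurecr.io/ditsalessync:development-1201
your-acr.azurecr.io/ditsalessync:staging-1215
your-acr.azurecr.io/ditsalessync:production-1220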
Adding the Dockerfile
If you look back at the diagram, you can see that all changes to the development environment come from the develop branch. In your local repo, make sure you are on the develop branch before adding the Dockerfile and the pipeline definitions.
You can now go ahead and add the Dockerfile. Note that my Dockerfile resides in the application-layer directory, as this is a three-layer application; this is also the directory I set my Docker build context to. The Dockerfile could be moved up to the top-level directory, but I’ll keep it like this for now.
The Dockerfile looks like below:
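In essence, it is a multi-stage Windows-container build. A minimal sketch of the idea (the base image tags and the way BUILD_ENV_ARG is consumed are assumptions; adjust them to your project):
# Build stage: restore and publish with the .NET Core SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: a runtime-only image keeps the final image smaller
FROM mcr.microsoft.com/dotnet/core/runtime:3.1 AS final
ARG BUILD_ENV_ARG=development
# Hand the build argument to the app as its hosting environment (assumption)
ENV DOTNET_ENVIRONMENT=${BUILD_ENV_ARG}
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "DIT.Sales.Sync.WorkerService.dll"]
The BUILD_ENV_ARG argument is what the pipeline passes in via --build-arg, which is how each branch’s build ends up targeting the right environment.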
Adding the build pipeline (using the yml file)
Again, the yml file for the build pipeline lives in the same directory as the Dockerfile (see the directory structure above).
Before we go ahead and define our build pipeline’s stages, we need to establish a service connection to our ACR. Please refer to this link to see how you can add one. Once you have a service connection established, you’ll need its connection ID in your pipeline. Browse this link to find yours:
Take the id of the connection from the JSON returned. Mine is "id": "226b1912-cde1-49da-bb77-9fedbb92cd21".
The final azure-pipelines.yml for the build pipeline:
# Deploy to Azure Kubernetes Service
# Build and push image to Azure Container Registry; Deploy to Azure Kubernetes Service
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
# enable these triggers when AKS Networking is ready
- master
- release/*
- develop

resources:
- repo: self

variables:
  ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
    environment: 'production'
  ${{ if eq(variables['Build.SourceBranchName'], 'develop') }}:
    environment: 'development'
  ${{ if and(ne(variables['Build.SourceBranchName'], 'master'), ne(variables['Build.SourceBranchName'], 'develop')) }}:
    environment: 'staging'
  dockerRegistryServiceConnection: '226b1912-cde1-49da-bb77-9fedbb92cd21' # this service connection is used for all container pushes
  imageRepository: 'ditsalessync'
  containerRegistry: 'your-acr.azurecr.io'
  dockerfilePath: '**/Dockerfile'
  tag: '$(environment)-$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'windows-latest'

stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - bash: echo $(ACR.ImageName)
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        dockerfile: $(dockerfilePath)
        repository: $(imageRepository)
        containerRegistry: $(dockerRegistryServiceConnection)
        buildContext: .
        tags: |
          $(tag)
        arguments: '--build-arg BUILD_ENV_ARG=$(environment)'
    - task: Docker@2
      displayName: Push image to container registry
      inputs:
        command: push
        containerRegistry: |
          $(dockerRegistryServiceConnection)
        repository: $(imageRepository)
        tags: |
          $(tag)
    - upload: DIT.Sales.Sync.WorkerService/Manifests
      artifact: DIT.Sales.Sync.WorkerService/Manifests
    - publish: DIT.Sales.Sync.WorkerService
      artifact: DIT.Sales.Sync.WorkerService
Remember I mentioned that I am using a Microsoft-hosted agent?
The main reason is that it provides a quick way to try things out, but if you need faster builds and caching, you’re better off with a private (self-hosted) agent. You could go ahead and set up your own, but for my project I’ll just leave it as it is.
<span class="token comment"># Agent VM image name</span>
<span class="token key atrule">vmImageName</span><span class="token punctuation">:</span> <span class="token string">'windows-latest'</span>
And the service connection ID:
<span class="token key atrule">dockerRegistryServiceConnection</span><span class="token punctuation">:</span> <span class="token string">'226b1912-cde1-49da-bb77-9fedbb92cd21'</span>
The release pipeline (CD)
Adding the release pipeline (using the existing yml file)
Now that we are done with the build pipeline, let’s go further and define our release pipeline. Before we make changes to the existing azure-pipelines.yml file, let’s create an environment for each of our Kubernetes namespaces.
Please see this link to see how you can create one.
I’ll have an environment called DITSalesSync. Within that environment, I’ll create a resource for each of my development, staging and production namespaces.
Adding the deployment stage
In order to deploy the Docker image to the cluster, we need to create a secret for each namespace we plan to deploy to. A Secret is another type of Kubernetes object, used to store sensitive data such as registry credentials. Here we take the ACR service connection we created earlier and use it in conjunction with the resources we defined in the DITSalesSync environment. Note that I commented out #kubernetesServiceConnection: '' because we don’t need it: we don’t need to establish a service connection to our AKS cluster, as the environment handles this part for us.
I have added a deploy stage to the existing yml. When this runs, it creates an image pull secret (or overwrites it if it already exists) to allow pulling the Docker image from ACR. For the deploy inputs, it takes the secret created in the previous step, the namespace’s name and the artifact we published in the build pipeline.
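For context, the createSecret action is roughly equivalent to running the following kubectl command yourself against each namespace (a sketch; the username/password placeholders would come from your ACR credentials or a service principal):
kubectl create secret docker-registry acr-secret --docker-server=your-acr.azurecr.io --docker-username=<acr-username> --docker-password=<acr-password> -n development
The pipeline simply does this for us using the ACR service connection, once per target namespace.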
The final build and release pipeline looks like below:
# Deploy to Azure Kubernetes Service
# Build and push image to Azure Container Registry; Deploy to Azure Kubernetes Service
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
# enable these triggers when AKS Networking is ready
- master
- release/*
- develop

resources:
- repo: self

variables:
  ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
    environment: 'production'
    #kubernetesServiceConnection: ''
  ${{ if eq(variables['Build.SourceBranchName'], 'develop') }}:
    environment: 'development'
    #kubernetesServiceConnection: '6a92d8c7-7f85-432b-9347-2fe2c3213098' # replace this with production service connection per namespace
  ${{ if and(ne(variables['Build.SourceBranchName'], 'master'), ne(variables['Build.SourceBranchName'], 'develop')) }}:
    environment: 'staging'
    #kubernetesServiceConnection: '6a92d8c7-7f85-432b-9347-2fe2c3213098' # replace this with production service connection per namespace
  dockerRegistryServiceConnection: '226b1912-cde1-49da-bb77-9fedbb92cd21' # this service connection is used for all container pushes
  imageRepository: 'ditsalessync'
  imagePullSecret: 'acr-secret'
  containerRegistry: 'your-acr.azurecr.io'
  dockerfilePath: '**/Dockerfile'
  tag: '$(environment)-$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'windows-latest'

stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - bash: echo $(ACR.ImageName)
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        dockerfile: $(dockerfilePath)
        repository: $(imageRepository)
        containerRegistry: $(dockerRegistryServiceConnection)
        buildContext: .
        tags: |
          $(tag)
        arguments: '--build-arg BUILD_ENV_ARG=$(environment)'
    - task: Docker@2
      displayName: Push image to container registry
      inputs:
        command: push
        containerRegistry: |
          $(dockerRegistryServiceConnection)
        repository: $(imageRepository)
        tags: |
          $(tag)
    - upload: DIT.Sales.Sync.WorkerService/Manifests
      artifact: DIT.Sales.Sync.WorkerService/Manifests
    - publish: DIT.Sales.Sync.WorkerService
      artifact: DIT.Sales.Sync.WorkerService

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: 'DITSalesSync.$(environment)'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              secretType: dockerRegistry
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
              #kubernetesServiceConnection: $(kubernetesServiceConnection)
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              #kubernetesServiceConnection: $(kubernetesServiceConnection)
              namespace: $(environment)
              manifests: |
                $(Pipeline.Workspace)/DIT.Sales.Sync.WorkerService/Manifests/deployment.yml
                $(Pipeline.Workspace)/DIT.Sales.Sync.WorkerService/Manifests/service.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)
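The deployment.yml and service.yml manifests referenced above live in the repo and aren’t reproduced in this post. As a rough sketch, a minimal deployment.yml for this worker could look like the following (labels, replica count and the node selector are assumptions; the image tag and imagePullSecrets are injected at deploy time by the KubernetesManifest task):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ditsalessync
  labels:
    app: ditsalessync
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ditsalessync
  template:
    metadata:
      labels:
        app: ditsalessync
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # the worker targets a Windows node
      containers:
      - name: ditsalessync
        image: your-acr.azurecr.io/ditsalessync   # tag is overridden by the containers input
The service.yml would be a standard Service definition if the workload exposed any ports; for a background worker like this one it can be kept minimal.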
Simulating the entire build and deploy process
To simulate the changes in each environment, I will make some changes on my local branches, e.g. commit a change, push the commit to the remote branch, finish up a release branch, and so on.
This is what I expect to happen:
- Changes on the develop branch will trigger a build for the development environment, the image is pushed to ACR and then gets deployed to the development namespace in AKS.
- When a release branch is created (i.e. when we publish the release branch using GitFlow), it will trigger a build for the staging (UAT) environment, push the image to ACR and then deploy to the staging namespace in AKS.
- When finishing a release branch locally, GitFlow merges the release branch back into the develop and master branches. We’ll then push the local master branch to the remote repo. This triggers a build for the production environment, pushes the image to ACR, and finally deploys to the production namespace in AKS.
To kick things off, make sure we are on the develop branch, simulate a change, commit it, and then push the commit to the remote develop branch.
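In plain git commands, that boils down to something like this (the file you touch and the commit message are just placeholders):
git checkout develop
# ...make a small code change, e.g. edit the worker class...
git add .
git commit -m "Simulate a change for the development build"
git push origin develop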
As you can see, pushing a change to the remote has triggered the build pipeline:
If you click on that run, you can see what it’s doing. This image is built for a Windows container and the base images (SDK, runtime) are quite large, which is a major drawback when using a Microsoft-hosted agent. Yours may be quicker, especially when building a Linux image; mine took around 8 minutes. You will also notice that both the build and deploy stages have been successful.
To check that the pod has been deployed correctly, use kubectl to see that it is running in the desired namespace:
kubectl get pods -n development
NAME                            READY   STATUS    RESTARTS   AGE
ditsalessync-59c64f74c8-pdpqf   1/1     Running   0          15m
D:\Work\DIT.Sales.Sync>
To simulate the staging environment, I’ll start a release branch using GitFlow:
D:\Work\DIT.Sales.Sync>git flow release start v1.1.2
Switched to a new branch 'release/v1.1.2'
Summary of actions:
- A new branch 'release/v1.1.2' was created, based on 'develop'
- You are now on branch 'release/v1.1.2'
Follow-up actions:
- Bump the version number now!
- Start committing last-minute fixes in preparing your release
- When done, run:
git flow release finish 'v1.1.2'
D:\Work\DIT.Sales.Sync>git branch
develop
master
* release/v1.1.2
D:\Work\DIT.Sales.Sync>
To trigger a Docker build and Kubernetes deployment to our staging environment, we’ll just publish the release branch:
git flow release publish v1.1.2
The above git command will create a release branch in our Azure DevOps repo, namely release/v1.1.2.
As a result, a build and deployment to the staging environment was triggered and ran successfully.



To verify that the pod has been successfully deployed to the staging namespace, again we use kubectl:
D:\Work\DIT.Sales.Sync>kubectl get pod -n staging
NAME                            READY   STATUS    RESTARTS   AGE
ditsalessync-78f4b749d5-vggqt   1/1     Running   0          2m53s
D:\Work\DIT.Sales.Sync>
Also check the logs to confirm the .NET Core worker is running in staging:
D:\Work\DIT.Sales.Sync>kubectl logs ditsalessync-78f4b749d5-vggqt -n staging
[09:13:06 DBG] Service [DIT Sales Item Sync Worker] initializing
[09:13:07 DBG] Service [DIT Sales Item Sync Worker] initialized.
[09:13:07 INF] Application started. Press Ctrl+C to shut down.
[09:13:07 INF] Hosting environment: staging
[09:13:07 INF] Content root path: C:\app
[09:13:07 INF] Initialized Scheduler Signaller of type: Quartz.Core.SchedulerSignalerImpl
[09:13:07 INF] Quartz Scheduler v.3.0.7.0 created.
[09:13:07 INF] RAMJobStore initialized.
[09:13:07 INF] Scheduler meta-data: Quartz Scheduler (v3.0.7.0) '9e9bfbd4-057a-4450-b415-802fe28a3bdf' with instanceId 'NON_CLUSTERED'
Scheduler class: 'Quartz.Core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'Quartz.Simpl.DefaultThreadPool' - with 10 threads.
Using job-store 'Quartz.Simpl.RAMJobStore' - which does not support persistence. and is not clustered.
[09:13:07 INF] Quartz scheduler '9e9bfbd4-057a-4450-b415-802fe28a3bdf' initialized
[09:13:07 INF] Quartz scheduler version: 3.0.7.0
[09:13:07 INF] Scheduler 9e9bfbd4-057a-4450-b415-802fe28a3bdf_$_NON_CLUSTERED started.
[INFO][8/18/2020 9:13:08 AM][Thread 0004][akka://SalesProductSyncWorker/deadLetters] Message [JobCreated] from akka://SalesProductSyncWorker/user/QuartzActor to akka://SalesProductSyncWorker/deadLetters was not delivered. [1] dead letters encountered .This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO][8/18/2020 9:13:08 AM][Thread 0004][akka://SalesProductSyncWorker/deadLetters] Message [JobCreated] from akka://SalesProductSyncWorker/user/QuartzActor to akka://SalesProductSyncWorker/deadLetters was not delivered. [2] dead letters encountered .This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
D:\Work\DIT.Sales.Sync>
To simulate the production environment, we first finish the release branch. When this happens, GitFlow will delete the release branch on the remote and merge the local release branch back into the develop and master branches.
D:\Work\DIT.Sales.Sync>git flow release finish v1.1.2
Branches 'master' and 'origin/master' have diverged.
And local branch 'master' is ahead of 'origin/master'.
Branches 'develop' and 'origin/develop' have diverged.
And local branch 'develop' is ahead of 'origin/develop'.
Already on 'master'
Your branch is ahead of 'origin/master' by 14 commits.
(use "git push" to publish your local commits)
hint: Waiting for your editor to close the file...
Switched to branch 'develop'
Your branch is ahead of 'origin/develop' by 3 commits.
(use "git push" to publish your local commits)
Already up to date!
Merge made by the 'recursive' strategy.
remote: We noticed you're using an older version of Git. For the best experience, upgrade to a newer version.
To https://dev.azure.com/diamond-devteam/Project%20Green/_git/DIT.Sales.Sync
- [deleted] release/v1.1.2
Deleted branch release/v1.1.2 (was 7e24289).
Summary of actions:
- Release branch 'release/v1.1.2' has been merged into 'master'
- The release was tagged 'v1.1.2'
- Release tag 'v1.1.2' has been back-merged into 'develop'
- Release branch 'release/v1.1.2' has been locally deleted; it has been remotely deleted from 'origin'
- You are now on branch 'develop'
D:\Work\DIT.Sales.Sync>
Don’t forget to publish your tags to remote:
git push origin --tags
Now that the release branch on the remote repo has been deleted, we can continue to simulate our production build and deploy:
D:\Work\DIT.Sales.Sync>git checkout master
Switched to branch 'master'
Your branch is ahead of 'origin/master' by 14 commits.
(use "git push" to publish your local commits)
D:\Work\DIT.Sales.Sync>
To trigger the pipelines, we’ll just push the local master branch to the remote:
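In other words (assuming the remote is named origin):
git push origin master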



And check if the pod is running in production:
D:\Work\DIT.Sales.Sync>kubectl get pods -n production
NAME                            READY   STATUS    RESTARTS   AGE
ditsalessync-68fbcd6c44-n7kt5   1/1     Running   0          83s
D:\Work\DIT.Sales.Sync>kubectl logs ditsalessync-68fbcd6c44-n7kt5 -n production
[09:48:33 DBG] Service [DIT Sales Item Sync Worker] initializing
[09:48:33 DBG] Service [DIT Sales Item Sync Worker] initialized.
[09:48:33 INF] Application started. Press Ctrl+C to shut down.
[09:48:33 INF] Hosting environment: production
[09:48:33 INF] Content root path: C:\app
[09:48:34 INF] Initialized Scheduler Signaller of type: Quartz.Core.SchedulerSignalerImpl
[09:48:34 INF] Quartz Scheduler v.3.0.7.0 created.
[09:48:34 INF] RAMJobStore initialized.
[09:48:34 INF] Scheduler meta-data: Quartz Scheduler (v3.0.7.0) '224375f2-32b2-4402-99d4-7a1fa05abcdb' with instanceId 'NON_CLUSTERED'
Scheduler class: 'Quartz.Core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'Quartz.Simpl.DefaultThreadPool' - with 10 threads.
Using job-store 'Quartz.Simpl.RAMJobStore' - which does not support persistence. and is not clustered.
[09:48:34 INF] Quartz scheduler '224375f2-32b2-4402-99d4-7a1fa05abcdb' initialized
[09:48:34 INF] Quartz scheduler version: 3.0.7.0
[09:48:34 INF] Scheduler 224375f2-32b2-4402-99d4-7a1fa05abcdb_$_NON_CLUSTERED started.
[INFO][8/18/2020 9:48:34 AM][Thread 0008][akka://SalesProductSyncWorker/deadLetters] Message [JobCreated] from akka://SalesProductSyncWorker/user/QuartzActor to akka://SalesProductSyncWorker/deadLetters was not delivered. [1] dead letters encountered .This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO][8/18/2020 9:48:34 AM][Thread 0008][akka://SalesProductSyncWorker/deadLetters] Message [JobCreated] from akka://SalesProductSyncWorker/user/QuartzActor to akka://SalesProductSyncWorker/deadLetters was not delivered. [2] dead letters encountered .This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
D:\Work\DIT.Sales.Sync>
Final thoughts
You may be aware that this is a rather long post, as dealing with CI/CD is a complicated task. There are a number of steps involved, but once you have played with it a few times you’ll get used to it. I’m sure the benefits of what we have done here will certainly outweigh the drawbacks of the traditional approach. I like to automate things, especially in the world of Kubernetes. I hope you do too.