Windows containers have been around for a few years, but I feel it is now time to run our full .NET Framework apps in Kubernetes for production. The reason I say this is that Kubernetes supports Windows containers running in AKS from version v1.14, and the supported Windows node OS must be Windows Server 1809/Windows Server 2019 or later. Of course, you don’t have to run a full .NET app on a Windows node, but then why would the feature exist if nobody planned to use it? That discussion is beyond the scope of this post, though.
You are probably excited about containers, just like I was when I first heard about them a few years ago. In the past, we would deploy our multi-container apps with Docker Compose to a Docker Swarm and have it manage our containerized apps. This worked, but it also added a lot of complexity. I started looking for new ways to manage my containers, and I came across Kubernetes. I just went… wow! I had a few posts previously talking about Kubernetes, but they were mostly general discussions. This post is going to be a long one, from the planning step all the way to the end. I will explain why I chose this infrastructure as we move along.
Minikube is a way to run Kubernetes locally. It is a tool that runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your computer, whether Linux, Windows, or Mac. It is a quick way to try out Kubernetes and is also useful for testing and development scenarios. Before we dive into the deployment part, let’s highlight some of the alternatives we have. There are a number of different ways we can deploy our containerized apps to a Kubernetes cluster:
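To make the Minikube option concrete, here is a minimal sketch of spinning up a local single-node cluster. The driver name is an assumption; use whichever hypervisor you have installed (hyperv, virtualbox, etc.):

```shell
# Start a single-node Kubernetes cluster inside a local VM
# (the driver is an assumption -- pick the one installed on your machine)
minikube start --driver=hyperv

# Point kubectl at the new cluster and confirm the node is ready
kubectl get nodes
```

Once the node reports `Ready`, you can deploy to it exactly as you would to a full cluster, which is what makes Minikube handy for local testing.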
I’ve been searching for a better solution to monitor our on-prem applications. I know these apps do their job just fine, but I don’t like the way they stop working without my knowledge. One day a message came from the sales team asking, “Hey, I won a quote but I didn’t see it in Navision.” I then had a look on the server and realized that the service had somehow stopped working. So I restarted the service and it went back to normal. This is just a simple example of how we manage our apps manually. I’d like to have them automated. It’s time to enter Kubernetes.
Carrying on from the previous post about deploying containerized apps to AKS, this post addresses some of the remaining issues and how we are going to solve them. My main goal is to allow the app to access on-prem resources using Windows Authentication from Linux containers, just like we would normally do with our apps running on an intranet network. I briefly mentioned our approach to achieving this using an Azure VNet. In case you haven’t seen that post, here is the link.
If you ended up choosing this option to log in to your Kubernetes dashboard but don’t know how, this post will help you out with that.
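As a rough sketch of the token-based login, you can read the bearer token out of a service account’s secret with kubectl. The `dashboard-admin` service account name here is hypothetical; substitute whatever account you granted dashboard access to:

```shell
# Find the secret belonging to the (hypothetical) dashboard-admin service
# account and print its details, including the bearer token
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
```

Copy the `token:` value from the output and paste it into the Token field on the dashboard sign-in page.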
Deploying containerized apps to an Azure Kubernetes Service (AKS) cluster using the default settings in Azure isn’t that tough, as the tools handle most of the hard work for us. I recently ran into a scenario where I needed to connect my AKS cluster back to on-prem resources. So I went ahead and created an AKS cluster via the Azure portal. In the creation step, I chose an Azure VNet, and this automatically set my cluster to use Azure Container Networking Interface (CNI).
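The same VNet-backed setup can also be done from the Azure CLI instead of the portal. This is a minimal sketch; the resource group, VNet, subnet, and cluster names are all placeholders, and the address ranges are assumptions you would adapt to your own network plan:

```shell
# Create a VNet with a dedicated subnet for the AKS nodes
# (names and address ranges are hypothetical examples)
az network vnet create \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name aks-subnet \
  --subnet-prefixes 10.240.0.0/16

# Create the AKS cluster with the Azure CNI network plugin,
# placing its nodes in the subnet created above
az aks create \
  --resource-group my-rg \
  --name my-aks \
  --network-plugin azure \
  --vnet-subnet-id $(az network vnet subnet show \
      --resource-group my-rg \
      --vnet-name my-vnet \
      --name aks-subnet \
      --query id -o tsv)
```

Choosing the Azure CNI plugin is what gives each pod an IP address from the VNet, which in turn is what makes routing back to on-prem resources possible.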