Minikube is a way to run Kubernetes locally. It is a tool that runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your computer, whether you run Linux, Windows or macOS. It is a quick way to try out Kubernetes and is also useful for testing and development scenarios. Before we dive into the deployment part, let’s highlight some of the alternatives we have. There are a number of different ways we can deploy our containerized apps to a Kubernetes cluster:
- If you are on Windows, you can use the native Kubernetes support that comes with Docker Desktop for Windows.
- You can deploy to a Minikube VM, which will be demonstrated in this post.
- You can also use a cloud provider service such as Microsoft Azure Kubernetes Service (AKS) or Google Kubernetes Engine (GKE) to host your cluster. Please refer to this post if you are interested in deploying to AKS.
Please note: running containers inside Minikube involves one or both of the following:
- Pulling images from a remote registry
- Pulling images from a local registry
The first option is much simpler than the second, but in some cases you may want to use a local registry. For instance, you may not have a cloud subscription to host a private registry, or you may not want to push your images to a public registry such as Docker Hub.
This post shows how you can deploy a .NET Core Worker Service to Minikube running on Windows 10 Enterprise.
Prerequisites
- OS: Windows 10 Pro, version 10.0.18363, build 18363 or higher. I am using Windows 10 Enterprise version 10.0.18363, build 18363
- Docker Desktop for Windows version 2.2.0.4 with Kubernetes support preinstalled (version 1.15.5). More information on the Docker website
- Minikube for Windows. Follow this article to install it if you don’t have it already
- Helm 3 for Windows (Tiller has been removed in Helm 3). More at the Helm website
- .NET Core CLI, from Microsoft .NET Core. Choose the .NET Core SDK 3.1
- Visual Studio Code to edit our Helm files
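A quick way to confirm the tools above are installed and on your PATH:
dotnet --version
docker --version
minikube version
kubectl version --client
helm version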
Suppose we want to run a background task inside a console application. If you come from a Windows background, you probably know that one of the classic solutions is to host that console app inside a Windows Service. Because our containers are Linux-based, however, no such thing as a Windows Service exists there. .NET Core 3.0 added a new template called Worker Service. It is just a type of console app, but it is built on the same generic host as ASP.NET Core, so it shares features such as dependency injection, logging, and loading configuration from appsettings.json, appsettings.{EnvironmentName}.json, environment variables and so on.
In order to achieve what we achieved with Windows Services in the past, we need a way to host our background task, and the .NET Core Worker Service is exactly that.
Create the Worker Service
Let’s open up a PowerShell prompt and create the worker service.
dotnet new worker -lang C# -n WorkerServiceDemo
cd WorkerServiceDemo
You should see a Worker.cs class generated with the following content:
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace WorkerServiceDemo
{
    public class Worker : BackgroundService
    {
        private readonly ILogger<Worker> _logger;

        public Worker(ILogger<Worker> logger)
        {
            _logger = logger;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
                await Task.Delay(1000, stoppingToken);
            }
        }
    }
}
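For completeness, the template also generates a Program.cs that registers the worker with the .NET generic host; this is where the configuration, logging and dependency injection plumbing mentioned earlier comes from:
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace WorkerServiceDemo
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        // Host.CreateDefaultBuilder loads appsettings.json and environment
        // variables and sets up logging, mirroring ASP.NET Core's defaults.
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureServices((hostContext, services) =>
                {
                    services.AddHostedService<Worker>();
                });
    }
}
Before containerizing, you can sanity-check the worker locally with dotnet run; it should log a line every second, and Ctrl+C stops it.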
Dockerize the app
Add a Dockerfile to the project directory and copy and paste the following. (You can have this generated automatically by the Docker extension for VS Code.)
# Runtime-only image used for the final stage
FROM mcr.microsoft.com/dotnet/core/runtime:3.1 AS base
WORKDIR /app

# SDK image used to restore, build and publish the app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["WorkerServiceDemo.csproj", "./"]
RUN dotnet restore "./WorkerServiceDemo.csproj"
COPY . .
RUN dotnet build "WorkerServiceDemo.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WorkerServiceDemo.csproj" -c Release -o /app/publish

# Final image contains only the runtime and the published output
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WorkerServiceDemo.dll"]
Don’t build the image yet; we will do that in a moment.
Configure our Minikube
In order to build images directly against the Docker daemon inside Minikube, we have to do a few things.
First, let’s make sure our cluster’s current-context is set to minikube and check that its node is ready (you get one node when you install the VM):
kubectl config view
kubectl get nodes
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
NAME STATUS ROLES AGE VERSION
m01 Ready master 12d v1.18.0
Second, make sure Docker Desktop is running.
Finally, point the docker CLI in the current prompt at the Docker daemon inside Minikube:
minikube docker-env
& minikube -p minikube docker-env | Invoke-Expression
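The variables set by docker-env only apply to the current session. To confirm that this prompt now talks to the daemon inside Minikube, you can ask Docker for the daemon’s host name, which should report minikube:
docker info --format "{{.Name}}"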
Next, create a private registry on Minikube that we can push our images to. Grab kube-registry.yaml from this gist on GitHub.
Then execute:
kubectl create -f kube-registry.yaml
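Assuming the gist deploys the registry into the kube-system namespace, as the common kube-registry manifests do, you can check that the registry pod came up:
kubectl get pods -n kube-system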
List all the images in this VM with docker images. You may notice the list is very different if you execute the command from a separate prompt. That is because the environment variables set by docker-env only apply to the current session: in this prompt we are connected to the Docker daemon inside Minikube, whereas a fresh prompt simply executes against Docker Desktop for Windows.
Now let’s build our Docker image. This time the image is created inside Minikube’s Docker daemon.
docker build --rm --pull -f "Dockerfile" -t "workerservicedemo:latest" .
Verify that the image has been created with docker images:
REPOSITORY          TAG      IMAGE ID       CREATED              SIZE
workerservicedemo   latest   3e7790a37241   About a minute ago   191MB
<none>              <none>   6609f8e7e2c4   About a minute ago   707MB
Create our Helm chart
mkdir charts
cd charts
helm create workerservicedemo
code .\workerservicedemo\
Delete the unnecessary files and folders that were generated (e.g. the tests folder, service.yaml, ingress.yaml). We don’t need service.yaml or ingress.yaml because this is just a console app; it doesn’t accept any inbound traffic.
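After the cleanup, the chart should look roughly like this (the exact files vary slightly between Helm versions):
workerservicedemo/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── _helpers.tpl
    ├── NOTES.txt
    ├── serviceaccount.yaml
    └── deployment.yaml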
Modify values.yaml as follows. Pay attention to the image section: Minikube will look for an image with that repository and tag, and pullPolicy: IfNotPresent lets it use the image we just built locally instead of trying to pull it from a remote registry.
values.yaml:
# Default values for workerservicedemo.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: workerservicedemo
  tag: latest
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

# Pod/container security contexts referenced by deployment.yaml
podSecurityContext: {}

securityContext: {}

probes:
  enabled: false

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
  hosts:
    - host: chart-example.local
      paths: []
  tls: []

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "workerservicedemo.fullname" . }}
  labels:
    {{- include "workerservicedemo.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "workerservicedemo.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "workerservicedemo.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "workerservicedemo.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          {{- if .Values.probes.enabled }}
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
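Before installing, it is worth rendering the chart locally to catch templating mistakes early. From the charts directory:
helm lint workerservicedemo
helm template workerservicedemo-release workerservicedemo
helm lint flags chart issues, and helm template prints the manifests that would be submitted to the cluster.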
Deploy the app
Make sure your prompt is still connected to the Docker daemon inside Minikube, as described in the Configure our Minikube section.
Make sure you are still under \WorkerServiceDemo\charts>
Run the following command:
helm install workerservicedemo-release workerservicedemo
LAST DEPLOYED: Tue Apr 7 16:15:08 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=workerservicedemo,app.kubernetes.io/instance=workerservicedemo-release" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:80
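You can also confirm that Helm recorded the release:
helm list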
To verify that the required Kubernetes objects have been created by Helm, fire up the Kubernetes dashboard; you should see the pod up and running.
minikube dashboard
And if you check the pod through the dashboard, you’ll see the app’s logs being written to the console.
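If you prefer the command line to the dashboard, the same logs are available through kubectl. Assuming the chart’s default fullname helper, the Deployment is named after the release, so you can tail the logs with:
kubectl get pods
kubectl logs -f deployment/workerservicedemo-release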