In our previous article, we learned what Kubernetes-based event-driven autoscaling (KEDA) is and how it can help us scale our applications. As part of that effort, we built a small .NET Core 3.0 Worker which processes messages from a Service Bus queue, and used a ScaledObject to scale our order processor via KEDA.
Today, we are happy to contribute this sample to the KEDA organization! Let's take a closer look so you can run it yourself!
For the sake of keeping this walkthrough short, I will not go into depth on building the .NET Core Worker itself. If you are interested in learning about that, you can find all the sources on GitHub.
Before we start…
This article requires you to have the following tools & services:
- Azure CLI
- Azure Subscription
- .NET Core 3.0 Preview 5
- Kubernetes cluster with KEDA installed
Creating an Azure Service Bus Queue
We will start by creating a new Azure Service Bus namespace:
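A minimal sketch with the Azure CLI; the resource group `keda-demo`, the location, and the namespace name `keda-order-demo` are placeholders I picked for this walkthrough, so substitute your own:

```shell
# Resource group and namespace names below are placeholders
az group create --name keda-demo --location westeurope
az servicebus namespace create --name keda-order-demo --resource-group keda-demo --sku Basic
```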
After that, we create an `orders` queue in our namespace:
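Continuing with the placeholder names from above:

```shell
# Create the 'orders' queue in the namespace we just created
az servicebus queue create --name orders --namespace-name keda-order-demo --resource-group keda-demo
```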
We need to be able to connect to our queue, so we create a new authorization rule. In this case, we will assign `Manage` permissions, given that this is a requirement for KEDA.
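A sketch of that rule; the rule name `order-consumer` is a placeholder, and note that the Azure CLI requires `Manage` to be combined with `Listen` and `Send`:

```shell
az servicebus queue authorization-rule create --name order-consumer \
  --queue-name orders --namespace-name keda-order-demo --resource-group keda-demo \
  --rights Manage Listen Send
```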
Once the authorization rule is created, we can list the connection string as follows:
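```shell
# Returns a JSON payload that includes primaryConnectionString
az servicebus queue authorization-rule keys list --name order-consumer \
  --queue-name orders --namespace-name keda-order-demo --resource-group keda-demo
```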
Create a base64 representation of `primaryConnectionString`:
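```shell
# -n avoids encoding a trailing newline into the secret value
echo -n "<your-primaryConnectionString>" | base64
```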
Create a secret to deploy in Kubernetes that contains our connection string:
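A sketch of such a secret; the secret name `order-secrets` and the key `servicebus-connectionstring` are names I picked here, so feel free to change them (just keep them in sync with the deployment below):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: order-secrets # placeholder name
  labels:
    app: order-processor
type: Opaque
data:
  # Paste the base64-encoded connection string from the previous step
  servicebus-connectionstring: <base64-encoded-connection-string>
```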
This secret will be used by our order processor and KEDA to connect to the queue.
Save the secret declaration in `deploy/deploy-secret.yaml`.
Deploying our Service Bus secret in Kubernetes
We will start by creating a new Kubernetes namespace to run our order processor in:
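The namespace name `keda-dotnet-sample` is just the one we use in this walkthrough:

```shell
kubectl create namespace keda-dotnet-sample
```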
Before we can connect to our queue, we need to deploy the secret which contains the Service Bus connection string to the queue.
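Apply the declaration we saved earlier:

```shell
kubectl apply -f deploy/deploy-secret.yaml --namespace keda-dotnet-sample
```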
Once created, you should be able to retrieve the secret:
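For example:

```shell
kubectl get secrets --namespace keda-dotnet-sample
```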
Deploying our order processor pod
We are ready to go! We will start by creating a Kubernetes deployment.
The deployment will schedule a pod running our order processor based on the `tomkerkhove/keda-sample-dotnet-worker-servicebus-queue` Docker image.
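A sketch of the deployment manifest, wired up to the secret name and key we picked above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-processor
  labels:
    app: order-processor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-processor
  template:
    metadata:
      labels:
        app: order-processor
    spec:
      containers:
      - name: order-processor
        image: tomkerkhove/keda-sample-dotnet-worker-servicebus-queue
        env:
        - name: KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING
          valueFrom:
            secretKeyRef:
              name: order-secrets # matches the secret we deployed
              key: servicebus-connectionstring
```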
As you can see, it passes a `KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING` environment variable which contains the value of the secret we've just deployed. Kubernetes will automatically decode it and pass in the raw connection string.
Save the deployment declaration and deploy it:
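Assuming you saved it as `deploy/deploy-app.yaml` (the file name is up to you):

```shell
kubectl apply -f deploy/deploy-app.yaml --namespace keda-dotnet-sample
```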
Once created, you will see that our deployment shows up with one pod:
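```shell
kubectl get deployments --namespace keda-dotnet-sample
```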
Defining how we want to autoscale with a ScaledObject
Now that our app is running we can start automatically scaling it!
By deploying a `ScaledObject` you tell KEDA what deployment you want to scale and how:
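A sketch of such a `ScaledObject`, assuming the KEDA v1 API (`keda.k8s.io/v1alpha1`) that was current when this article was written; newer KEDA 2.x releases use `keda.sh/v1alpha1` and `scaleTargetRef.name` instead:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler # placeholder name
  labels:
    deploymentName: order-processor
spec:
  scaleTargetRef:
    deploymentName: order-processor
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: orders
      queueLength: '5'
      # Name of the environment variable on the target deployment
      # that holds the connection string
      connection: KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING
```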
In our case, we define that we want to use the `azure-servicebus` scale trigger and what our criteria are. For our scenario, we'd like to scale out if there are 5 or more messages in the `orders` queue, with a maximum of 10 concurrent replicas, which is defined via `maxReplicaCount`.
KEDA will use the `KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING` environment variable on our `order-processor` Kubernetes Deployment to connect to Azure Service Bus. This allows us to avoid duplicating configuration.
Note – If we were to use a sidecar, we would need to define `containerName` to point at the container which contains this environment variable.
Save the `ScaledObject` declaration and deploy it:
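Assuming you saved it as `deploy/deploy-autoscaling.yaml`:

```shell
kubectl apply -f deploy/deploy-autoscaling.yaml --namespace keda-dotnet-sample
```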
Once the `ScaledObject` is deployed, you'll notice that we don't have any pods running anymore:
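```shell
# The deployment should now report 0/0 ready replicas
kubectl get deployments --namespace keda-dotnet-sample
```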
This is because our queue is empty and KEDA scaled it down until there is work to do.
In that case, let’s generate some!
Publishing messages to the queue
The following job will send messages to the "orders" queue that the order processor is listening on. As the queue builds up, KEDA will help the horizontal pod autoscaler add more and more pods until the queue is drained. The order generator will allow you to specify how many messages you want to queue.
First, you should clone the project:
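Assuming the sample lives under the KEDA organization on GitHub, as mentioned above:

```shell
git clone https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
cd sample-dotnet-worker-servicebus-queue
```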
Configure the connection string in the tool via your favorite text editor, in this case via Visual Studio Code:
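The project path below is illustrative; adjust it to wherever the order generator lives in the repository:

```shell
code src/Keda.Samples.Dotnet.OrderGenerator/Program.cs
```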
Next, you can run the order generator via the CLI:
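Again, the project path is illustrative:

```shell
dotnet run --project src/Keda.Samples.Dotnet.OrderGenerator
```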
Now that the messages are generated, you’ll see that KEDA starts automatically scaling out your deployment:
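You can watch this happen live:

```shell
# Watch pods being added as KEDA scales the deployment out
kubectl get pods --namespace keda-dotnet-sample --watch
```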
Eventually, we will have 10 pods running, processing messages in parallel:
You can look at the logs for a given processor as follows:
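```shell
# Substitute a pod name from 'kubectl get pods'
kubectl logs <order-processor-pod-name> --namespace keda-dotnet-sample
```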
Once all the messages have been processed, KEDA will scale the deployment back down to 0 pod instances.
Time to clean up
Don’t forget to clean up your resources!
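For example, with the placeholder names used throughout this walkthrough:

```shell
kubectl delete namespace keda-dotnet-sample
az group delete --name keda-demo --yes
```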
Conclusion
We have easily deployed a .NET Core 3.0 Worker on Kubernetes that processes messages from Service Bus. Once we deployed a `ScaledObject` for our Kubernetes deployment, it started scaling the pods out and in according to the queue depth.
We could very easily plug in autoscaling for our existing application without making any changes!
Thanks for reading,
Tom.