Azure Kubernetes Metric Adapter

TLDR: The Azure Kubernetes Metric Adapter is an experimental component that enables you to scale your application deployment pods running on any Kubernetes cluster using the Horizontal Pod Autoscaler (HPA) with External Metrics from Azure Resources (such as Service Bus Queues) and Custom Metrics stored in Application Insights.

Check out a video showing how scaling works with the adapter, deploy the adapter, or learn by going through the walkthrough.

I currently work for an awesome team at Microsoft called CSE (Commercial Software Engineering), where we work with customers to help them solve their challenging problems. One of the goals of my specific team inside CSE is to identify repeatable patterns our customers face. It is a challenging but rewarding role where I get to work on some of the most interesting and cutting-edge technology. Check out this awesome video that talks about how my team operates.

While working with customers at a recent engagement, I recognized a repeating pattern within the monitoring solutions we were implementing on Azure Kubernetes Service (AKS). We had 5 customers in the same room and 3 of them wanted to scale on custom metrics being generated by their applications.

Why do we need the Azure Kubernetes Metric Adapter?

One of the customers was using Prometheus, so we started to look at the Kubernetes Prometheus metric adapter, which solves the problem of scaling on custom metrics when you are using Prometheus in your cluster. The Prometheus adapter uses the custom metrics API to scale instead of Heapster. You can learn more about the direction Kubernetes is moving with custom metrics here.

Two of the other customers were not using Prometheus; instead they were using Azure services such as Azure Monitor, Log Analytics, and Application Insights. At the engagement, one of the customers started to implement their own custom scaling solution. This seemed a bit repetitive, as the other customers were not going to be able to reuse that solution. And so the Azure Kubernetes Metric Adapter was created.

What is the Azure Kubernetes Metric Adapter?

The Azure Kubernetes Metric Adapter enables you to scale your application deployment pods running on AKS (or any Kubernetes cluster) using the Horizontal Pod Autoscaler (HPA) with External Metrics from Azure Resources (such as Service Bus Queues) and Custom Metrics stored in Application Insights.

That is a bit of a mouthful, so let’s take a look at what the solution looks like when deployed onto your cluster:

azure kubernetes metric adapter deployment architecture

The Azure Metric Adapter is deployed onto your cluster and wired up to the Horizontal Pod Autoscaler (HPA). The HPA checks in periodically with the adapter to get the custom metric defined by you. The adapter in turn calls an Azure endpoint to retrieve the metric and gives it back to the HPA. The HPA then evaluates the value and compares it to the target value you have configured for a given deployment. Based on an algorithm, the HPA will either leave your deployment alone, scale up the pods, or scale them down.

As you can see, there is no custom code needed to scale with custom or external metrics when using the Adapter. You deploy the Adapter, configure an HPA, and the rest of the scaling is taken care of by Kubernetes.
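To understand when the adapter-supplied metric actually changes the replica count, it helps to know the HPA's core formula: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetValue), clamped to the min/max bounds (the real controller adds a tolerance band on top of this). A minimal sketch in shell, using illustrative numbers (a queue depth of 26988 against a target of 30):

```shell
# HPA core formula: desired = ceil(currentReplicas * currentValue / targetValue),
# then clamped to [minReplicas, maxReplicas]; the numbers here are illustrative
current_replicas=4
metric_value=26988   # queue messages reported by the adapter
target=30            # targetValue from the HPA spec
min_replicas=1
max_replicas=10

# integer ceiling division
desired=$(( (current_replicas * metric_value + target - 1) / target ))
if [ "$desired" -gt "$max_replicas" ]; then desired=$max_replicas; fi
if [ "$desired" -lt "$min_replicas" ]; then desired=$min_replicas; fi
echo "desired replicas: $desired"   # prints "desired replicas: 10"
```

With a backlog that large the formula saturates immediately, which is why a deep queue drives a deployment straight to its maxReplicas ceiling.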

When can you use it?

It is available now as an experiment (alpha state - don’t run it in production). You should try it out in your development and test environments and give feedback on GitHub issues about what works, what doesn’t, and any features you want to see.

How can you use it?

There are two main scenarios that have been addressed first, and you can see a step-by-step walkthrough for each, though you can scale on any Application Insights metric or Azure Monitor metric.

If you prefer to see it in action, check out this video:


First you deploy it to your cluster (requires some authorization setup on your cluster):

kubectl apply -f

Next you create an HPA and define a few values. An example would be:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: queuemessages
      metricSelector:
        matchLabels:
          metricName: Messages
          resourceGroup: sb-external-example
          resourceName: sb-external-ns
          resourceProviderNamespace: Microsoft.Servicebus
          resourceType: namespaces
          aggregation: Total
          filter: EntityName_eq_externalq
      targetValue: 30

And deploy the HPA and watch your deployment scale:

kubectl apply -f hpa.yaml

kubectl get hpa consumer-scaler -w
NAME              REFERENCE             TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
consumer-scaler   Deployment/consumer   0/30       1         10        1          1h
consumer-scaler   Deployment/consumer   27278/30   1         10        1          1h
consumer-scaler   Deployment/consumer   26988/30   1         10        4          1h
consumer-scaler   Deployment/consumer   26988/30   1         10        4          1h
consumer-scaler   Deployment/consumer   26702/30   1         10        4          1h
consumer-scaler   Deployment/consumer   26702/30   1         10        4          1h
consumer-scaler   Deployment/consumer   25808/30   1         10        4          1h
consumer-scaler   Deployment/consumer   25808/30   1         10        4          1h
consumer-scaler   Deployment/consumer   24784/30   1         10        8          1h
consumer-scaler   Deployment/consumer   24784/30   1         10        8          1h
consumer-scaler   Deployment/consumer   23775/30   1         10        8          1h
consumer-scaler   Deployment/consumer   22065/30   1         10        8          1h
consumer-scaler   Deployment/consumer   22065/30   1         10        8          1h
consumer-scaler   Deployment/consumer   20059/30   1         10        8          1h
consumer-scaler   Deployment/consumer   20059/30   1         10        10         1h

And that’s it to enable autoscaling on an external metric. Check out the samples for more examples.
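Under the covers the adapter serves these values through the external metrics API, so you can sanity-check what the HPA sees with `kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queuemessages"`. Here is a sketch of pulling the value out of that response; the JSON shape follows the v1beta1 ExternalMetricValueList type, and the value shown is illustrative:

```shell
# illustrative ExternalMetricValueList response (the value is made up)
response='{"kind":"ExternalMetricValueList","apiVersion":"external.metrics.k8s.io/v1beta1","items":[{"metricName":"queuemessages","value":"26988"}]}'

# pull the metric value out with standard tools (jq would also work)
value=$(echo "$response" | sed -n 's/.*"value":"\([0-9]*\)".*/\1/p')
echo "$value"   # prints 26988
```

If that raw call returns an error instead of a value list, the HPA will not be able to scale, which makes it a useful first debugging step.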

Wrapping up

I hope you enjoy the Metric Adapter and can use it to scale your deployments automatically so you have more time to sip coffee, tea, or just read books. Please be sure to report any bugs, feature requests, or challenges you might have with it.

And if you really like it, tweet about it, star the repo, and drop a thank-you in the issues.

Checking the AKS acs-engine version number for debugging

Azure Kubernetes Service (AKS) uses the acs-engine project behind the scenes. acs-engine is used as a place to prototype, experiment, and bake features before they make it into AKS.

I was recently working on a project where we were using AKS shortly after it went General Availability (GA). We saw strange behavior on our test cluster related to provisioning volume mounts and load balancers that we could not reproduce with newly created clusters. We checked the version numbers of Kubernetes/code/images but could not find any difference between the clusters.

We finally found that there was a difference between the acs-engine versions of the clusters. This happened because the customer had created the cluster before the GA date. Recreating the cluster (and therefore getting the latest changes from acs-engine) fixed many of the inconsistencies we were seeing in the cluster.

To check an AKS cluster’s acs-engine version number:

# find the generated resource group name
az group list -o table
Name                       Location    Status
-------------------------  ----------  ---------
MC_vnet_kvnet_eastus       eastus      Succeeded
vnet                       eastus      Succeeded

# find a node name for that group
az vm list -g MC_vnet_kvnet_eastus -o table
Name                      ResourceGroup         Location    Zones
------------------------  --------------------  ----------  -------
aks-agentpool-8593-0  MC_vnet_kvnet_eastus  eastus
aks-agentpool-8593-1  MC_vnet_kvnet_eastus  eastus
aks-agentpool-8593-2  MC_vnet_kvnet_eastus  eastus

# list the tags to see the acs-version number
az vm show -n aks-agentpool-8593-0 -g MC_vnet_kvnet_eastus --query tags
# output
{
  "acsengineVersion": "0.16.2",
  "creationSource": "aks-aks-agentpool-8593-0",
  "orchestrator": "Kubernetes:1.9.6",
  "poolName": "agentpool",
  "resourceNameSuffix": "8593"
}
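If you only want the version number, the az CLI can filter the tags directly with a JMESPath query such as az vm show -n aks-agentpool-8593-0 -g MC_vnet_kvnet_eastus --query tags.acsengineVersion -o tsv. As a sketch, here is the same extraction done offline with standard tools (sample JSON taken from the output above):

```shell
# sample tags JSON, as returned by az vm show above
tags='{"acsengineVersion": "0.16.2", "creationSource": "aks-aks-agentpool-8593-0", "orchestrator": "Kubernetes:1.9.6"}'

# extract the acsengineVersion value
version=$(echo "$tags" | sed -n 's/.*"acsengineVersion": "\([^"]*\)".*/\1/p')
echo "$version"   # prints 0.16.2
```

Running the same extraction against each cluster makes the version diff obvious at a glance.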

Being able to compare the version numbers helped pinpoint the issue, but the bigger lesson learned is to always recreate your Azure resources after a product goes GA. There are a lot of changes, fixes, and releases in the weeks leading up to a product release in Azure, and the best way to make sure you’re running the latest software is to create the resource after the GA event.

Windows Containers Cheat Sheet

I have been using Windows containers a lot in the last month, and the other day I was asked how to do something. I don’t remember anything; I use a combination of GitHub, OneNote, and Bingle (Bing/Google) for that, so of course I started looking for examples in the various GitHub repos I’ve used and written. Turns out this is not very efficient.

Instead, I am going to create this living document as a Windows Container Cheat Sheet (this blog is on GitHub so you can submit a PR if I missed anything you think is useful). It will serve as a quick reference for myself but hopefully can help beginners get a lay of the land.

This first section has general links about Windows containers; jump to the dev resources if you’re already familiar.

General Info

Where to find

The first place you should know about is the Official Windows Container Docs and info on licensing and pricing.

Windows Container Flavors

There are two flavors of Windows Containers:

  • Windows ServerCore - Use for legacy applications (lift and shift). Includes the full .NET Framework and can run IIS. Large container size (10+ GBs)
  • Windows Nano Server - Use for cloud-first applications. Small size (100s of MBs)

Windows Container Versions

To increase the speed of improvements and releases, the team had to make breaking changes between versions. This means you have to match the host machine version to the container version. If you upgrade your host machine, you can still run older versions of containers in Hyper-V isolation mode.

Read more about Windows Container version compatibility.

There are two release channels:

  • Long Term Support Channel (ltsc) - supported for 5 years from release
  • Semi-Annual Channel (sac) - supported for 18 months from release

The current versions are:

  • Windows Server 2016 (ltsc)
  • Windows Server 1709 (sac)
  • Windows Server 1803 (sac)
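To make the support windows concrete, you can do the date arithmetic with GNU date. The sketch below assumes Windows Server 1709 shipped on 2017-10-17 (verify against the official lifecycle page); adding the 18-month sac window gives the end-of-support date:

```shell
release="2017-10-17"   # assumed release date for Windows Server 1709
# sac images are supported for 18 months from release
end=$(date -d "$release +18 months" +%Y-%m-%d)
echo "$end"
```

That lands in April 2019, so a 1709-based image should have a migration plan well before then.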

Note: if you are running nanoserver, it only has Semi-Annual Channel (sac) releases.

When using the containers it is always a good idea to explicitly tag images to a version; an example is below (choose the latest from the tags on servercore and nanoserver):

# for an image with a specific patch in 1709
FROM microsoft/nanoserver:1709_KB4043961

# for an image with a specific patch in 2016
FROM microsoft/nanoserver:10.0.14393.1770

Development Resources and Tips

There are all sorts of tricks and tips that you can use. For example, you should check out:

Download files

There are several ways to download files. Soon you will be able to use curl.

RUN Invoke-WebRequest -UseBasicParsing  -Uri $url -OutFile ''; 

Enable TLS 1.2

If you get the error message (currently on any request to GitHub): Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel.

RUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

Extract Files

Soon you will be able to use tar.

RUN Expand-Archive -DestinationPath C:\temp\;

Run Executable (installer)

RUN Start-Process your-executable.exe -ArgumentList '--parameter', 'value' -NoNewWindow -Wait;

Set Environment variable

RUN setx /M ENV_VARIABLE value; 

Use Chocolatey as a package provider in PowerShell

RUN Install-PackageProvider -Name chocolatey -RequiredVersion -Force; \
    Install-Package -Name webdeploy -RequiredVersion 3.6.0 -Force;

Use escape character to chain commands

# escape=`
FROM microsoft/windowsservercore

RUN Write-Host 'Line 1.'; `
    Write-Host 'Line 2';

Debug .NET Framework app in Container

Instructions at

Enable Web Auth in IIS

This also demonstrates how to set web.config files in

FROM microsoft/aspnet:4.7.1-windowsservercore-1709

RUN powershell.exe Add-WindowsFeature Web-Windows-Auth
RUN powershell.exe -NoProfile -Command `
  Set-WebConfigurationProperty -filter /system.WebServer/security/authentication/AnonymousAuthentication -name enabled -value false -PSPath IIS:\ ; `
  Set-WebConfigurationProperty -filter /system.webServer/security/authentication/windowsAuthentication -name enabled -value true -PSPath IIS:\ 

Give IIS access to folder for logging

RUN icacls C:/inetpub/wwwroot/App_Data /grant IIS_IUSRS:f /T

Install MSI silently

RUN Start-Process msiexec.exe -ArgumentList '-i', 'installer.msi', '/quiet', '/passive' -NoNewWindow -Wait;

PowerShell Core in 1709

The nanoserver image with PowerShell Core installed:

FROM microsoft/powershell:6.0.1-nanoserver-1709

Use MultiStage Builds

Given that nanoserver doesn’t have the full .NET Framework and 1709 doesn’t ship with PowerShell, you can leverage multistage builds to do fancier things (like use PowerShell) and then ship a smaller container:

FROM microsoft/windowsservercore:1709 as builder

RUN Write-Host 'Use Powershell to download and install';

## ship a smaller container
FROM microsoft/nanoserver:1709

COPY --from=builder /app /app

CMD ["yourapp.exe"]


Set up a full pipeline in Visual Studio Team Services for Windows Containers.

Debugging inside a container (During dev)

A list of commands to run to see various state of your container. There is no UI, so here are a few commands to get you started.

List processes and services running in the container

Get-Process
Get-Service

Get Event Log

# this shows source as 'Docker' but you can change it to 'Application' or a custom source
Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time
# can also store in variable to see message detail
$el = Get-EventLog -LogName Application -Source Docker | Sort-Object Time

Networking information

Figure out open ports and assigned IP addresses.

netstat -a

General Troubleshooting

There are some great tips on how to find logs and debug issues you might run into at

Creating a Secure Service Fabric Cluster in two commands

Creating a secure Service Fabric cluster in Azure has become easier. Currently, if you use the Azure CLI, you can do it in only two commands. Note that you may wish to change the location or operating system parameters.

az group create --name <resource-group-name> --location eastus

az sf cluster create \
  --resource-group <resource-group-name> \
  --location eastus \
  --certificate-output-folder . \
  --certificate-password <password> \
  --certificate-subject-name <clustername> \
  --cluster-name <cluster-name> \
  --cluster-size 5 \
  --os WindowsServer2016DatacenterwithContainers \
  --vault-name <keyvault-name> \
  --vault-resource-group <resource-group-name> \
  --vm-password <vm-password> \
  --vm-user-name azureadmin

Using an ARM Template

It is possible to create the cluster with a specified ARM template using --template-file and --parameter-file. Working with Noel, I found that in the ARM parameter template (parameters.json) you need to provide entries with blank values for certificateThumbprint, sourceVaultValue, and certificateUrlValue. This will create the certificate and drop it on your machine as well.

An example is:

az sf cluster create \
  -g <rg-name> \
  -l eastus \
  --template-file template.json \
  --parameter-file parameters.json \
  --vault-name <vault-name> \
  --certificate-subject-name <clustername> \
  --certificate-password <password> \
  --certificate-output-folder .


The command above creates the cluster and drops the pfx and pem files for the certificate into your current folder. You can connect to your cluster through the Service Fabric Explorer or through the Service Fabric CLI (sfctl).

Connect using the Service Fabric Explorer

To install the cert so you can connect to the Service Fabric explorer:

powershell.exe Import-PfxCertificate -FilePath .\<yourpfx>.pfx -CertStoreLocation 'Cert:\CurrentUser\My\'

Browse to your Service Fabric Explorer and when prompted select your certificate:

select correct certificate for service fabric explorer

Connect using the Service Fabric Cli (sfctl)

To connect with sfctl run the following command:

sfctl cluster select --endpoint https://<yourclustername> --pem .\<yourpem>.pem --ca .\<yourpem>.pem

## get nodes
sfctl node list

Your certs

If you review the command we ran to create the cluster, you will notice that it also creates an Azure Key Vault. If you ever need to get your private keys, you can head back to the Key Vault resource, either via the CLI or the portal.

Thanks to Vy Ta, here is a quick example:

az keyvault secret download --vault-name <vault-name> -n <key-name> -f pfx-out.pfx

# if you want the pem
openssl pkcs12 -in pfx-out.pfx -out pem-out.pem -nodes

Running Kubernetes Minikube on Windows 10 with WSL

Sometimes you want to be able to deploy and develop applications locally without having to spin up an entire cluster. Setting up Minikube on Windows 10 hasn’t been the easiest thing to do, but with the help of a colleague, Noel Bundick, and GitHub issues, I got it working this week. This post is for me in the future when I can’t remember how I did it :-).

Install Minikube

This part is pretty easy if you use Chocolatey (not using Chocolatey? Check out why you should). Alternatively, you can download it and add it to your path.

choco install minikube

Create a Virtual Switch in Hyper-V

This is the extra step you need to do to get Hyper-V to work with Minikube. Open a PowerShell prompt and type:

# get list of network adapters to attach to
Get-NetAdapter
Name                      InterfaceDescription                    ifIndex Status          LinkSpeed
----                      --------------------                    ------- ------          ---------
vEthernet (minikube)      Hyper-V Virtual Ethernet Adapter #3          62 Up              400 Mbps
Network Bridge            Microsoft Network Adapter Multiplexo...      46 Up              400 Mbps
vEthernet (nat)           Hyper-V Virtual Ethernet Adapter #2          12 Up              10 Gbps
vEthernet (Default Swi... Hyper-V Virtual Ethernet Adapter             13 Up              10 Gbps
Bluetooth Network Conn... Bluetooth Device (Personal Area Netw...      23 Disconnected    3 Mbps
Ethernet                  Intel(R) Ethernet Connection (4) I21...       9 Disconnected    0 bps
Wi-Fi                     Intel(R) Dual Band Wireless-AC 8265          14 Up              400 Mbps

#Create the switch
New-VMSwitch -name minikube  -NetAdapterName <your-network-adapter-name> -AllowManagementOS $true  

Create minikube

From the PowerShell prompt in Windows, create minikube using the switch you just created:

minikube start --vm-driver hyperv --hyperv-virtual-switch minikube

Minikube adds the configuration to your .kube/config file upon successful creation, so you should be able to connect to minikube from the PowerShell prompt using kubectl, if you have it installed on Windows:

kubectl get nodes

NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    18h       v1.8.0

Using WSL to talk to minikube

I mostly use WSL as my command prompt in Windows these days, which means I have kubectl, helm, and my other tools all installed there. Since we just installed Minikube on Windows, the .kube/config file was created on the Windows side at C:\Users\<username>\.kube\config. To get kubectl to work, we will need to add the configuration to our .kube/config in WSL at /home/<bash-user-name>/.kube.

Note: the following might vary depending on your existing .kube/config file and setup. Check out sharing cluster access in the Kubernetes docs for more info and alternative ways to configure it.

To see the values created for your Windows environment, run kubectl config view from your PowerShell prompt. Use those values for the minikube entries below.

From your WSL terminal add the minikube context info:

kubectl config set-cluster minikube --server=https://<minikubeip>:<port> --certificate-authority=/mnt/c/Users/<windows-user-name>/.minikube/ca.crt
kubectl config set-credentials minikube --client-certificate=/mnt/c/Users/<windows-user-name>/.minikube/client.crt --client-key=/mnt/c/Users/<windows-user-name>/.minikube/client.key
kubectl config set-context minikube --cluster=minikube --user=minikube

This points the context at the cert files Minikube created on Windows. To verify you have set the values correctly, view the config in WSL (if you have other contexts it might look slightly different):

kubectl config view

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /mnt/c/Users/<windows-user-name>/.minikube/ca.crt
    server: https://<minikubeip>:<port>
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /mnt/c/Users/<windows-user-name>/.minikube/client.crt
    client-key: /mnt/c/Users/<windows-user-name>/.minikube/client.key

Now set your current context to minikube and try connecting to your minikube instance:

kubectl config use-context minikube

kubectl get nodes

NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    20h       v1.8.0


I can use kubectl as I would with any other cluster, but I have found that I can’t run the minikube commands from WSL. I have to go back to my Windows prompt to run commands like minikube service <servicename> --url.
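One workaround is to wrap the Windows-side binary in a shell function so WSL forwards minikube calls to it. The path below is an assumption based on a Chocolatey install; point it at wherever minikube.exe actually lives on your machine:

```shell
# forward minikube calls from WSL to the Windows binary
# (the Chocolatey install path is an assumption - adjust for your machine)
minikube() {
  /mnt/c/ProgramData/chocolatey/bin/minikube.exe "$@"
}
```

With that in your ~/.bashrc, commands like minikube service <servicename> --url should work from the WSL prompt as well.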