Windows Containers on Windows 10 without Docker (using Containerd)

I have been working on making containerd work well on Windows with Kubernetes. I needed to do some local dev on containerd, so I started to configure my local machine. I found lots of information here and there but nothing comprehensive, so I wrote down my steps.

Note that there are a few limitations with containerd (so you might not want to fully uninstall Docker). For instance, containerd doesn’t support building containers, and you won’t be able to use the native Linux containers functionality (though there is some early support for LCOW).

Let’s get started!

Get Containerd

Get and install containerd:

curl.exe -LO https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-windows-amd64.tar.gz

#install
tar xvf containerd-1.4.4-windows-amd64.tar.gz
mkdir -force "C:\Program Files\containerd"
mv ./bin/* "C:\Program Files\containerd"

"C:\Program Files\containerd\containerd.exe" config default | Out-File "C:\Program Files\containerd\config.toml" -Encoding ascii

Next, for performance, tell Windows Defender to ignore the containerd process, then register and start the service:

Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"

& "$Env:ProgramFiles\containerd\containerd.exe" --register-service
Start-Service containerd
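
You can confirm the service registered and started correctly with PowerShell's built-in service cmdlet:

Get-Service containerd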

Setting up the network

Unlike Docker, containerd doesn’t attach pods to a network directly. It uses a CNI (Container Networking Interface) plugin to set up the networking. We will use the Windows nat plugin for our local dev environment. Note this is not likely the plugin you would want to use in a Kubernetes cluster; for that you should look at Calico, Antrea, or the many cloud-specific ones that are available. For our dev environment NAT will work just fine.

Create the folders for the CNI binaries and configuration. The path C:\Program Files\containerd\cni\bin is the default location where containerd looks for plugins.

mkdir -force "C:\Program Files\containerd\cni\bin"
mkdir -force "C:\Program Files\containerd\cni\conf"

Get the nat binaries:

curl.exe -LO https://github.com/microsoft/windows-container-networking/releases/download/v0.2.0/windows-container-networking-cni-amd64-v0.2.0.zip
Expand-Archive windows-container-networking-cni-amd64-v0.2.0.zip -DestinationPath "C:\Program Files\containerd\cni\bin" -Force

Create a nat network. For the nat plugin, the network must have the name nat:

curl.exe -LO https://raw.githubusercontent.com/microsoft/SDN/master/Kubernetes/windows/hns.psm1
ipmo ./hns.psm1

$subnet="10.0.0.0/16" 
$gateway="10.0.0.1"
New-HNSNetwork -Type Nat -AddressPrefix $subnet -Gateway $gateway -Name "nat"
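
To verify the network was created, you can query HNS with the module we just imported (we will use the same command again later to inspect the pod's endpoint):

Get-HnsNetwork | ? Name -Like "nat" | select Name, Type, Subnets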

Set up the containerd network config using the same gateway and subnet.

@"
{
    "cniVersion": "0.2.0",
    "name": "nat",
    "type": "nat",
    "master": "Ethernet",
    "ipam": {
        "subnet": "$subnet",
        "routes": [
            {
                "gateway": "$gateway"
            }
        ]
    },
    "capabilities": {
        "portMappings": true,
        "dns": true
    }
}
"@ | Set-Content "C:\Program Files\containerd\cni\conf\0-containerd-nat.conf" -Force

Creating containers

There are two main ways to interact with containerd: ctr and crictl. ctr is great for simple testing, while crictl drives containerd through the CRI API, the same way Kubernetes does.

CTR

ctr was already installed when you pulled the binaries and installed containerd. It has a similar interface to docker, but it does vary some since you are working at a different level.

There is a bug in ctr where multi-arch images are not supported on Windows (there is a PR to fix it), so we will use the specific image tag that matches our operating system. I am running Windows 10 20H2 (build number 19042) so I will use that tag for the image.

# find your version
cmd /c ver
Microsoft Windows [Version 10.0.19042.867]

cd "C:\Program Files\containerd\"
./ctr.exe pull mcr.microsoft.com/windows/nanoserver:20H2
./ctr.exe run --rm mcr.microsoft.com/windows/nanoserver:20H2 test cmd /c echo hello
hello

Create a pod via crictl

Using the CRI API can be a little cumbersome; it requires doing each step independently.

Get crictl and install it to a directory:

curl.exe -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-windows-amd64.tar.gz                          
tar xvf crictl-v1.20.0-windows-amd64.tar.gz

Set up defaults for crictl so it connects to containerd’s default named pipe and the endpoint doesn’t need to be set on every crictl call.

@"
runtime-endpoint: npipe://./pipe/containerd-containerd
image-endpoint: npipe://./pipe/containerd-containerd
timeout: 10
#debug: true
"@ | Set-Content "crictl.yaml" -Force

Pull an image:

./crictl pull k8s.gcr.io/pause:3.4.1

Create the sandbox and the container, then run it (told you it’s a bit cumbersome).
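
The commands below reference a pod.json and a container.json, which aren’t shown in the steps above. Here is a minimal sketch of what they might look like, following the CRI sandbox and container config formats; the names, uid, and log paths are placeholder values. A pod.json:

{
    "metadata": {
        "name": "test-sandbox",
        "namespace": "default",
        "attempt": 1,
        "uid": "hdishd83djaidwnduwk28bcsb"
    },
    "log_directory": "C:\\crilogs",
    "windows": {}
}

And a container.json that reuses the nanoserver image from earlier (note you would need to pull it with crictl first, since ctr and crictl use different containerd namespaces):

{
    "metadata": {
        "name": "test-container"
    },
    "image": {
        "image": "mcr.microsoft.com/windows/nanoserver:20H2"
    },
    "command": ["cmd", "/c", "ping", "-t", "127.0.0.1"],
    "log_path": "test-container.log",
    "windows": {}
}

With those files in place: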

$POD_ID=(./crictl runp .\pod.json)
$CONTAINER_ID=(./crictl create $POD_ID .\container.json .\pod.json)
./crictl start $CONTAINER_ID

Finally, exec in and check that the pod has an IP address from the network we created earlier.

First, let’s take a look at the network and endpoint that were created for the pod:

get-hnsnetwork | ? Name -Like "nat" | select name, id, subnets

Name ID                                   Subnets
---- --                                   -------
nat  D95007EB-27C6-4EED-87FC-402187447637 {@{AdditionalParams=; AddressPrefix=10.0.0.0/16; Flags=0; GatewayAddress=1...

Get-HnsEndpoint | ? virtualnetworkname -like nat | select name, id, ipaddress, gatewayaddress

Name                                                                 ID                                   IPAddress    GatewayAddress
----                                                                 --                                   ---------    --------------
92c0776a2fa06e2ce0a074b350146ed9e97bb2dafdb5bf0b8f8c55c8d4020f00_nat 50f2bd95-90c2-4234-8d82-ef45c841dfb4 10.0.113.113 10.0.0.1

Now exec into the container and look at the network config:

./crictl exec -i -t $CONTAINER_ID cmd

C:\>ipconfig
Windows IP Configuration
Ethernet adapter vEthernet (92c0776a2fa06e2ce0a074b350146ed9e97bb2dafdb5bf0b8f8c55c8d4020f00_nat):

Connection-specific DNS Suffix  . : home
Link-local IPv6 Address . . . . . : fe80::3034:e49c:68b4:99ac%88
IPv4 Address. . . . . . . . . . . : 10.0.113.113 
Subnet Mask . . . . . . . . . . . : 255.255.0.0  
Default Gateway . . . . . . . . . : 10.0.0.1 

You will note the endpoint name matches the Ethernet adapter name, and the IP address and default gateway match as well.

Debugging CNI and Kubelet in Kubernetes

I have been working on adding IPv6 support to the Azure Cluster API provider (CAPZ) over the last few weeks. I ran into some trouble configuring Calico as the Container Networking Interface (CNI) for the solution. There are some very old (from 2017) and still popular (updated 13 days ago at the time of writing) issues on GitHub where folks are looking for ways to fix the issue.

This is obviously something that is hard to debug: the error message is vague, there is a wide variety of CNIs available, and a wide range of issues could be causing it. The original issues do seem to have been root caused, but others (me included) still run into the problem regularly. In my case, I was already running the latest version of my CNI (and just upgrading to the latest isn’t really understanding the root cause), and I understood why removing environment variables wasn’t going to help. I figured I had simply configured something wrong, but what was it?

Debugging CNI issues with Kubelet

The dreaded error message from Kubelet is:

kubelet: E0714 12:45:30.541001 7263 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message: network plugin is not ready: cni config uninitialized

Unfortunately this is not very descriptive. Sometimes kubelet has other error messages that give you pointers to what is wrong, but in my case there was nothing in the logs. The CNI itself wasn’t working, but there weren’t any error logs that I could find.

After a bit of digging I found cnitool, a developer tool for executing CNIs manually. It allows you to attach an interface to an already created network namespace via a CNI plugin you provide. After building the tool, it is pretty straightforward to execute it with the CNI plugin and configuration installed on the Kubernetes node.

It assumes the CNI config is installed to /etc/cni/net.d and the CNI binaries to /opt/cni/bin, which is where they are typically installed if you use kubeadm:

# create a test network namespace
sudo ip netns add testingns

# run with the default config location; the network name argument must match the "name" in your CNI config
sudo CNI_PATH=/opt/cni/bin cnitool add uniquename /var/run/netns/testingns
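
Once the add succeeds, you can inspect the interface that was created inside the namespace and then clean up (cnitool also supports del):

# check the interface and IP that were assigned
sudo ip netns exec testingns ip addr

# tear down the attachment and the namespace
sudo CNI_PATH=/opt/cni/bin cnitool del uniquename /var/run/netns/testingns
sudo ip netns delete testingns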

Discovering the bug

In my case it was a simple misconfiguration. Can you spot it?

Important: be careful with the Calico spec below; it is likely not what you are expecting. At the time of writing, Calico VXLAN doesn’t support IPv6, which is one of the ways to run Calico on Azure. Another way is to use UDRs and host-local IPAM, which requires --configure-cloud-routes in the cloud provider, and that is what is happening below. It also has the misconfiguration in it.

---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "none"

  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  # https://docs.projectcalico.org/reference/cni-plugin/configuration#using-host-local-ipam
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": 1500,
          "ipam": {
              "type": "host-local",
              "subnet": "usePodCidr",
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

Yes, it is the comma after "usePodCidr" that gave the cryptic error message network plugin is not ready: cni config uninitialized. Fun times with JSON embedded in YAML :-) After removing it, everything behaved as expected.
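
For reference, the corrected ipam block with the trailing comma removed:

          "ipam": {
              "type": "host-local",
              "subnet": "usePodCidr"
          },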

Next steps

I am going to take a look at the k8s/CNI code to figure out if it’s possible to bubble up a better error message, as this is obviously a pain point for folks.

Deploying AKS with least privileged service principal

When deploying an Azure Kubernetes Service cluster you are required to use a service principal. This service principal is used by the Kubernetes Azure Cloud Provider to do many different activities in Azure, such as provisioning IP addresses, creating storage disks, and more. If you do not specify a service principal at creation time for an AKS cluster, one is created behind the scenes.

In many scenarios, the resources your cluster needs to interact with will be outside the auto-generated node resource group that AKS creates. For instance, if you are using static IP addresses for some of your services, the IP addresses might live in a resource group outside the auto-generated node resource group. Another common example is attaching to an existing vnet that is already provisioned.

Assigning proper permissions

It would be simple to give Contributor rights across your whole subscription, or even individual resource groups, and these scenarios would work, but this is not a good practice. Instead we should assign the specific, least privileged rights to a given service principal.

In the example of attaching IP addresses, we may need the Azure Cloud Provider to be able to attach, but not delete, IP addresses because a different team controls the addresses. We definitely don’t want the service principal to be able to delete any other resources in the resource group.

There is a good list of the Kubernetes v1.11 permissions required here. This list shows the permissions for creating a least privileged service principal (note that it might change as Kubernetes versions change, so use it as a general guide). Using this we can assign just enough rights to the service principal to interact with the resources outside the node group.

Sample Walk through

I have created a full walkthrough sample of creating the service principal upfront and assigning it just enough rights to access the IP addresses and vnet. Here are the highlights of the important parts:

First, create a service principal with no permissions:

clientsecret=$(az ad sp create-for-rbac --skip-assignment -n $spname -o json | jq -r .password)

Using a custom Role Definition, I am able to give the service principal only the read and join access it needs on the IPs:

{
    "Name": "AKS IP Role",
    "IsCustom": true,
    "Description": "Needed for attach of ip address to load balancer created in K8s cluster.",
    "Actions": [
        "Microsoft.Network/publicIPAddresses/join/action",
        "Microsoft.Network/publicIPAddresses/read"
    ],
    "NotActions": [],
    "DataActions": [],
    "NotDataActions": [],
    "AssignableScopes": [
        "/subscriptions/yoursub"
    ]
}

Using that definition, we can create the Role Definition and assign it to the service principal, which gives it IP read access on the resource group:

spid=$(az ad sp show --id "http://$spname" -o json | jq -r .objectId)

iprole=$(az role definition create --role-definition ./role-definitions/aks-reader.json)
az role assignment create --role "AKS IP Role" --resource-group ip-resource-group --assignee-object-id $spid

Then you can deploy your cluster using the service principal:

az aks create \
    --resource-group aks-rg \
    --name aks-cluster \
    --service-principal $spid \
    --client-secret $clientsecret

This will allow the service principal to access the IP addresses in the resource group outside the node resource group.

note: to access an IP address in a group outside the cluster you will need to provide an annotation on the Kubernetes Service definition (service.beta.kubernetes.io/azure-load-balancer-resource-group). See the example.
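
As a sketch, a Service using a static IP from that outside resource group might look like the following; the IP and names here are placeholders, and the annotation is what points the Azure cloud provider at the right resource group:

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: ip-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: 52.224.0.100
  ports:
  - port: 80
  selector:
    app: web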

Check out the sample for a full walkthrough.

Using the Go Delve Debugger from the command line

Yesterday I had to revert to the command line to debug a program I was working on (long story as to why). I was initially apprehensive about debugging from the command line, but once I started I found the experience quite nice. In fact, I would describe debugging from the command line as “Vim for debugging”. Best part: I found the bug I was looking for!

I was working in Go, so I used the awesome tool Delve, which is what the VS Code Go extension uses. I didn’t find a simple tutorial on using it from the command line, so I’ve created this to help me remember next time I need to fall back to the command line.

Install Delve (dlv)

The following steps assume you have set up your GOPATH.

go get -u github.com/derekparker/delve/cmd/dlv

Now you should be able to run (if not make sure you have your GOPATH configured properly):

dlv version
Delve Debugger
Version: 1.1.0
Build: $Id: 1990ba12450cab9425a2ae62e6ab988725023d5c $

Start Delve on a project

Delve can be run in many different ways, but the easiest is to use dlv debug. This takes your current Go package, builds it, then runs it. You can specify command parameters by using -- <application parameters> after the delve parameters you pass.
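
For example, a hypothetical invocation that passes a flag through to the program being debugged (the flag itself is made up here) would look like:

dlv debug ./hello -- -myflag=value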

Get the example Go app and then start delve using the hello package:

$ go get github.com/golang/example/...
$ cd $GOPATH/src/github.com/golang/example/

# run delve
$ dlv debug ./hello
Type 'help' for list of commands.
(dlv)

You have now built the app and entered the debugging experience. Easy enough! If you are like me, you are wondering what’s next. With so many commands, how do I start to do anything useful?

There are tons of commands you can run at this point, but I like to start with just getting into the application to see where I am and get started on the right foot. So let’s set a breakpoint (b is short for break):

(dlv) b main.main
Breakpoint 1 set at 0x49c3a8 for main.main() ./hello/hello.go:25
(dlv)

Here we can see that we set a breakpoint, and even in which file. Now execute the program until we hit it (c is short for continue):

(dlv) c
> main.main() ./hello/hello.go:25 (hits goroutine(1):1 total:1) (PC: 0x49c3a8)
    20:         "fmt"
    21:
    22:         "github.com/golang/example/stringutil"
    23: )
    24:
=>  25: func main() {
    26:         fmt.Println(stringutil.Reverse("!selpmaxe oG ,olleH"))
    27: }
(dlv)

Whoa! Now we hit the breakpoint and can even see the code. Cool! This is fun, let’s keep going by stepping over the code (n is short for next):

(dlv) n
> main.main() ./hello/hello.go:26 (PC: 0x49c3bf)
    21:
    22:         "github.com/golang/example/stringutil"
    23: )
    24:
    25: func main() {
=>  26:         fmt.Println(stringutil.Reverse("!selpmaxe oG ,olleH"))
    27: }

Nice, now we can see that we have stepped to the next line of code. There isn’t much to this code, but we can see that we have a package called stringutil that we are using to reverse the string. Let’s step into that function (s is short for step):

(dlv) s
> github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:21 (PC: 0x49c19b)
    16:
    17: // Package stringutil contains utility functions for working with strings.
    18: package stringutil
    19:
    20: // Reverse returns its argument string reversed rune-wise left to right.
=>  21: func Reverse(s string) string {
    22:         r := []rune(s)
    23:         for i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 {
    24:                 r[i], r[j] = r[j], r[i]
    25:         }
    26:         return string(r)
(dlv)

Now we are inside another package and function. Let’s inspect the variable that was passed in (p is short for print):

(dlv) p s
"!selpmaxe oG ,olleH"
(dlv)

Yup, that’s the string value we were expecting! Next let’s set another breakpoint at line 24; if we hit continue we should end up there:

(dlv) b 24
Breakpoint 2 set at 0x49c24e for github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:24
(dlv) c
> github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:24 (hits goroutine(1):1 total:1) (PC: 0x49c24e)
    19:
    20: // Reverse returns its argument string reversed rune-wise left to right.
    21: func Reverse(s string) string {
    22:         r := []rune(s)
    23:         for i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 {
=>  24:                 r[i], r[j] = r[j], r[i]
    25:         }
    26:         return string(r)
    27: }
(dlv)

Sweet, but what are the values of i, j, and even r? Let’s inspect the locals:

(dlv) locals
r = []int32 len: 19, cap: 32, [...]
i = 0
j = 18

Now we could do some tweaking of the local values and see what happens:

(dlv) set i = 1
(dlv) locals
r = []int32 len: 19, cap: 32, [...]
i = 1
j = 18

Nice, that changed the value, but it will likely mess up our application. Let’s start over (r is short for restart) just in case:

(dlv) r
Process restarted with PID 9429

Sweet but let’s take a look at the breakpoints we have set, and remove the main entry point breakpoint:

(dlv) bp
Breakpoint unrecovered-panic at 0x42a7f0 for runtime.startpanic() /usr/local/go/src/runtime/panic.go:591 (0)
        print runtime.curg._panic.arg
Breakpoint 1 at 0x49c3a8 for main.main() ./hello/hello.go:25 (0)
Breakpoint 2 at 0x49c24e for github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:24 (0)
(dlv) clearall main.main
Breakpoint 1 cleared at 0x49c3a8 for main.main() ./hello/hello.go:25
(dlv)

Alright, if you’re like me you’re feeling pretty good, but are you ready to get fancy? We know the name of the function we want to jump to, so let’s search for the function name, set a breakpoint right where we want it, then execute to it:

(dlv) funcs Reverse
github.com/golang/example/stringutil.Reverse
(dlv) b stringutil.Reverse
Breakpoint 3 set at 0x49c19b for github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:21
(dlv) c
> github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:21 (hits goroutine(1):1 total:1) (PC: 0x49c19b)
    16:
    17: // Package stringutil contains utility functions for working with strings.
    18: package stringutil
    19:
    20: // Reverse returns its argument string reversed rune-wise left to right.
=>  21: func Reverse(s string) string {
    22:         r := []rune(s)
    23:         for i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 {
    24:                 r[i], r[j] = r[j], r[i]
    25:         }
    26:         return string(r)
(dlv)

Finally we might want to evaluate the checks on the for loop:

(dlv) c
> github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:24 (hits goroutine(1):1 total:1) (PC: 0x49c24e)
    19:
    20: // Reverse returns its argument string reversed rune-wise left to right.
    21: func Reverse(s string) string {
    22:         r := []rune(s)
    23:         for i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 {
=>  24:                 r[i], r[j] = r[j], r[i]
    25:         }
    26:         return string(r)
    27: }
(dlv) p len(r)-1
18
(dlv)

Everything looks fine, so I guess my work is done here:

(dlv) clearall reverse.go:24
Breakpoint 2 cleared at 0x49c24e for github.com/golang/example/stringutil.Reverse() ./stringutil/reverse.go:24
(dlv) c
Hello, Go examples!
Process 10956 has exited with status 0
(dlv) exit
Process 10956 has exited with status 0

That just touched the surface of what you can do, but I hope it made you more comfortable with debugging on the command line. I think you can see how fast and efficient you can be. Good luck on your bug hunting!

Azure Kubernetes Metric Adapter

TLDR; The Azure Kubernetes Metric Adapter is an experimental component that enables you to scale your application deployment pods running on any Kubernetes cluster using the Horizontal Pod Autoscaler (HPA) with External Metrics from Azure Resources (such as Service Bus Queues) and Custom Metrics stored in Application Insights.

Check out a video showing how scaling works with the adapter, deploy the adapter, or learn by going through the walkthrough.

I currently work for an awesome team at Microsoft called CSE (Commercial Software Engineering), where we work with customers to help them solve their challenging problems. One of the goals of my specific team inside CSE is to identify repeatable patterns our customers face. It is a challenging, but rewarding role where I get to work on some of the most interesting and cutting edge technology. Check out this awesome video that talks about how my team operates.

While working with customers at a recent engagement, I recognized a repeating pattern in the monitoring solutions we were implementing on Azure Kubernetes Service (AKS). We had 5 customers in the same room and 3 of them wanted to scale on custom metrics being generated by their applications. And so the Azure Kubernetes Metric Adapter was created.

Why do we need the Azure Kubernetes Metric Adapter

One of the customers was using Prometheus, so we started to look at the Kubernetes Prometheus metric adapter, which solves the problem of scaling on custom metrics when you are using Prometheus in your cluster. The Prometheus adapter uses the custom metrics API to scale instead of Heapster. You can learn more about the direction Kubernetes is moving with custom metrics here.

Two of the other customers were not using Prometheus; instead they were using Azure services such as Azure Monitor, Log Analytics, and Application Insights. At the engagement, one of the customers started to implement their own custom scaling solution. This seemed a bit repetitive, as the other customers were not going to be able to reuse that solution. And so the Azure Kubernetes Metric Adapter was created.

What is the Azure Kubernetes Metric Adapter

The Azure Kubernetes Metric Adapter enables you to scale your application deployment pods running on AKS (or any Kubernetes cluster) using the Horizontal Pod Autoscaler (HPA) with External Metrics from Azure Resources (such as Service Bus Queues) and Custom Metrics stored in Application Insights.

That is a bit of a mouthful, so let’s take a look at what the solution looks like when deployed onto your cluster:

[Image: Azure Kubernetes Metric Adapter deployment architecture]

The Azure Metric Adapter is deployed onto your cluster and wired up to the Horizontal Pod Autoscaler (HPA). The HPA checks in periodically with the adapter to get the custom metric defined by you. The adapter in turn calls an Azure endpoint to retrieve the metric and gives it back to the HPA. The HPA then evaluates the value, comparing it to the target value you have configured for a given deployment. Based on an algorithm, the HPA will either leave your deployment alone, scale the pods up, or scale them down.

As you can see, there is no custom code needed to scale with custom or external metrics when using the adapter. You deploy the adapter, configure an HPA, and the rest of the scaling is taken care of by Kubernetes.

When can you use it?

It is available now as an experiment (alpha state; don’t run it in production). You should try it out in your development and test environments and give feedback via GitHub issues on what works, what doesn’t, and any features you want to see.

How can you use it?

There are two main scenarios that have been addressed first, and you can see a step-by-step walkthrough for each, though you can scale on any Application Insights metric or Azure Monitor metric.

If you prefer to see it in action, check out this video:

Deploying

First you deploy it to your cluster (this requires some authorization setup on your cluster):

kubectl apply -f https://raw.githubusercontent.com/Azure/azure-k8s-metrics-adapter/master/deploy/adapter.yaml

Next you create an HPA and define a few values. An example would be:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
 name: consumer-scaler
spec:
 scaleTargetRef:
   apiVersion: extensions/v1beta1
   kind: Deployment
   name: consumer
 minReplicas: 1
 maxReplicas: 10
 metrics:
  - type: External
    external:
      metricName: queuemessages
      metricSelector:
        matchLabels:
          metricName: Messages
          resourceGroup: sb-external-example
          resourceName: sb-external-ns
          resourceProviderNamespace: Microsoft.Servicebus
          resourceType: namespaces
          aggregation: Total
          filter: EntityName_eq_externalq
      targetValue: 30

And deploy the HPA and watch your deployment scale:

kubectl apply -f hpa.yaml

kubectl get hpa consumer-scaler -w
NAME              REFERENCE             TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
consumer-scaler   Deployment/consumer   0/30       1         10        1          1h
consumer-scaler   Deployment/consumer   27278/30   1         10        1          1h
consumer-scaler   Deployment/consumer   26988/30   1         10        4          1h
consumer-scaler   Deployment/consumer   26988/30   1         10        4          1h
consumer-scaler   Deployment/consumer   26702/30   1         10        4          1h
consumer-scaler   Deployment/consumer   26702/30   1         10        4          1h
consumer-scaler   Deployment/consumer   25808/30   1         10        4          1h
consumer-scaler   Deployment/consumer   25808/30   1         10        4          1h
consumer-scaler   Deployment/consumer   24784/30   1         10        8          1h
consumer-scaler   Deployment/consumer   24784/30   1         10        8          1h
consumer-scaler   Deployment/consumer   23775/30   1         10        8          1h
consumer-scaler   Deployment/consumer   22065/30   1         10        8          1h
consumer-scaler   Deployment/consumer   22065/30   1         10        8          1h
consumer-scaler   Deployment/consumer   20059/30   1         10        8          1h
consumer-scaler   Deployment/consumer   20059/30   1         10        10         1h

And that’s it, you have enabled autoscaling on an external metric. Check out the samples for more examples.

Wrapping up

Hope you enjoy the Metric Adapter and can use it to scale your deployments automatically so you have more time to sip coffee or tea, or just read books. Please be sure to report any bugs, feature requests, or challenges you have with it.

And if you really like it, tweet about it, star the repo, and drop a thank you in the issues.