Checking the AKS acs-engine version number for debugging

Azure Kubernetes Service (AKS) uses the acs-engine project behind the scenes. Acs-engine is used as a place to prototype, experiment, and bake features before they make it into AKS.

I was recently working on a project where we were using AKS shortly after it reached General Availability (GA). We saw strange behavior on our test cluster related to provisioning volume mounts and load balancers that we could not reproduce with newly created clusters. We checked the version numbers on Kubernetes/code/images but could not find any differences between the clusters.

We finally found that there was a difference between the acs-engine versions of the clusters. This happened because the customer had created the cluster before the GA date. Recreating the cluster (and therefore getting the latest changes from acs-engine) fixed many of the inconsistencies we were seeing in the problematic cluster.

To check an AKS cluster's acs-engine version number:

# find the generated resource group name
az group list -o table
Name                       Location    Status
-------------------------  ----------  ---------
MC_vnet_kvnet_eastus       eastus      Succeeded
vnet                       eastus      Succeeded

# find a node name for that group
az vm list -g MC_vnet_kvnet_eastus -o table
Name                      ResourceGroup         Location    Zones
------------------------  --------------------  ----------  -------
aks-agentpool-8593-0  MC_vnet_kvnet_eastus  eastus
aks-agentpool-8593-1  MC_vnet_kvnet_eastus  eastus
aks-agentpool-8593-2  MC_vnet_kvnet_eastus  eastus

# list the tags to see the acs-engine version number
az vm show -n aks-agentpool-8593-0 -g MC_vnet_kvnet_eastus --query tags
# output
{
  "acsengineVersion": "0.16.2", 
  "creationSource": "aks-aks-agentpool-8593-0",
  "orchestrator": "Kubernetes:1.9.6",
  "poolName": "agentpool",
  "resourceNameSuffix": "8593"
}
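
To compare the version across every node at once, you can query the acsengineVersion tag for all VMs in the node resource group (a sketch reusing the generated resource group name from above):

# list the acs-engine version tag for every node in the group
az vm list -g MC_vnet_kvnet_eastus --query "[].{name:name, acsengineVersion:tags.acsengineVersion}" -o table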

Being able to compare the version numbers helped pinpoint the issue, but the bigger lesson learned is to always recreate your Azure resources after a product goes GA. There are a lot of changes, fixes, and releases that happen in the weeks leading up to a product release in Azure, and the best way to make sure you're running the latest software is to create the resource after the GA event.

Windows Containers Cheat Sheet

I have been using Windows containers a lot over the last month, and the other day I was asked how to do something. I don't memorize much; I rely on a combination of GitHub, OneNote, and Bingle (Bing/Google), so of course I started digging for examples in the various GitHub repos I've used and written. It turns out this is not very efficient.

Instead, I am going to create this living document as a Windows Container Cheat Sheet (this blog is on GitHub, so you can submit a PR if I missed anything you think is useful). It will serve as a quick reference for myself but hopefully can also help beginners get the lay of the land.

This first section has general links about Windows Containers; jump to the dev resources if you're already familiar.

General Info

Where to find

The first place you should know about is the Official Windows Container Docs and info on licensing and pricing.

Windows Container Flavors

There are two flavors of Windows Containers (example pull commands follow this list):

  • Windows Server Core - Use for legacy applications (lift and shift). Includes the full .NET Framework and can run IIS. Large container size (10+ GB).
  • Windows Nano Server - Use for cloud-first applications. Small image size (hundreds of MB).
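
To pull the corresponding base images, the repositories look like this (the same microsoft/windowsservercore and microsoft/nanoserver repositories used in the examples later in this post):

docker pull microsoft/windowsservercore
docker pull microsoft/nanoserver:1709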

Windows Container Versions

To increase the speed of improvements and releases, the team had to make breaking changes between versions. This means you have to match the host machine version to the container version. If you upgrade your host machine, you can run older versions of containers in Hyper-V isolation mode.

Read more about Windows Container version compatibility.
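
For example, on an up-to-date host you can run a container built from an older base image under Hyper-V isolation (a sketch; the tag is just an example of an older 2016-era build):

docker run --rm --isolation=hyperv microsoft/nanoserver:10.0.14393.1770 cmd /c ver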

There are two release channels:

  • Long Term Support Channel (ltsc) - supported for 5 years from release
  • Semi-Annual Channel (sac) - supported for 18 months from release

The current versions are:

  • Windows Server 2016 (ltsc)
  • Windows Server 1709 (sac)
  • Windows Server 1803 (sac)

Note: if you are running nanoserver, it is only released on the Semi-Annual Channel (sac)

When using these containers it is always a good idea to explicitly pin the images to a version tag; examples are below (choose the latest from the tags on servercore and nanoserver):

# for an image with a specific patch in 1709
FROM microsoft/nanoserver:1709_KB4043961

# for an image with a specific patch in 2016
FROM microsoft/nanoserver:10.0.14393.1770

Development Resources and Tips

There are all sorts of tricks and tips that you can use. For example, you should check out the following:

Download files

There are several ways to download files. Soon you will be able to use curl.

RUN Invoke-WebRequest -UseBasicParsing  -Uri $url -OutFile 'outfile.zip'; 

Enable TLS 1.2

If you get the error message Invoke-WebRequest : The request was aborted: Could not create SSL/TLS secure channel. (currently this happens on any request to GitHub), enable TLS 1.2 first:

RUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

Extract Files

Soon you will be able to use tar.

RUN Expand-Archive outfile.zip -DestinationPath C:\temp\;

Run Executable (installer)

RUN Start-Process your-executable.exe -ArgumentList '--parameter', 'value' -NoNewWindow -Wait;

Set Environment variable

RUN setx /M ENV_VARIABLE value; 

Use Chocolatey as a package provider in PowerShell

RUN Install-PackageProvider -Name chocolatey -RequiredVersion 2.8.5.130 -Force; \
    Install-Package -Name webdeploy -RequiredVersion 3.6.0 -Force;

Use escape character to chain commands

# escape=`
FROM microsoft/windowsservercore

RUN Write-Host 'Line 1.'; `
    Write-Host 'Line 2';

Debug .NET Framework app in Container

Instructions at https://www.richard-banks.org/2017/02/debug-net-in-windows-container.html.

Enable Web Auth in IIS

This also demonstrates how to set web.config values in ASP.NET.

FROM microsoft/aspnet:4.7.1-windowsservercore-1709

RUN powershell.exe Add-WindowsFeature Web-Windows-Auth
RUN powershell.exe -NoProfile -Command `
  Set-WebConfigurationProperty -filter /system.WebServer/security/authentication/AnonymousAuthentication -name enabled -value false -PSPath IIS:\ ; `
  Set-WebConfigurationProperty -filter /system.webServer/security/authentication/windowsAuthentication -name enabled -value true -PSPath IIS:\ 
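
To try the image out, here is a minimal build-and-run sketch (the image and container names are just examples; the aspnet base image listens on port 80):

docker build -t webauth-sample .
docker run -d -p 8080:80 --name webauth webauth-sample

Then browse to the mapped port (on older Windows hosts you may need to use the container's IP address instead of localhost).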

Give IIS access to folder for logging

RUN icacls C:/inetpub/wwwroot/App_Data /grant IIS_IUSRS:f /T

Install MSI silently

RUN Start-Process msiexec.exe -ArgumentList '-i', 'installer.msi', '/quiet', '/passive' -NoNewWindow -Wait;

PowerShell Core in 1709

The nanoserver image with PowerShell Core installed:

FROM microsoft/powershell:6.0.1-nanoserver-1709

Use Multi-Stage Builds

Nano Server doesn't have the full .NET Framework, and the 1709 image doesn't ship with PowerShell, but you can leverage multi-stage builds to do fancier things (like use PowerShell during the build) and then ship a smaller container:

FROM microsoft/windowsservercore:1709 as builder

RUN Write-Host 'Use Powershell to download and install';

## ship a smaller container
FROM microsoft/nanoserver:1709

COPY --from=builder /app /app

CMD ["yourapp.exe"]

VSTS Build CI/CD

Set up a full pipeline in Visual Studio Team Services for Windows Containers.

Debugging inside a container (During dev)

There is no UI inside a container, so here are a few commands to get you started inspecting its state.
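
To run these, first open an interactive PowerShell session inside the running container (the container name is an example, and the image needs to include PowerShell, e.g. a servercore-based image):

docker exec -it mycontainer powershell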

List processes and services running in the container

Get-Service
Get-Process

Get Event Log

# this shows events with the source 'Docker' but you can change it to 'Application' or a custom source
Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time
 
# can also store in variable to see message detail
$el = Get-EventLog -LogName Application -Source Docker | Sort-Object Time
$el[0].Message

Networking information

Figuring out open ports and assigned IP addresses.

netstat -a
ipconfig

General Troubleshooting

There are some great tips on how to find logs and debug issues you might run into at https://docs.microsoft.com/en-us/virtualization/windowscontainers/troubleshooting.
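
Before digging into the docs, the container's own logs from the host are often the quickest signal (standard Docker CLI):

docker ps -a
docker logs <container-id>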

Creating a Secure Service Fabric Cluster in two commands

Creating a secure Service Fabric cluster in Azure has become easier. Currently, if you use the Azure CLI, you can do it in only two commands. Note that you may wish to change the location or operating system parameters.

az group create --name <resource-group-name> --location eastus

az sf cluster create \
  --resource-group <resource-group-name> \
  --location eastus \
  --certificate-output-folder . \
  --certificate-password <password> \
  --certificate-subject-name <clustername>.eastus.cloudapp.azure.com \
  --cluster-name <cluster-name> \
  --cluster-size 5 \
  --os WindowsServer2016DatacenterwithContainers \
  --vault-name <keyvault-name> \
  --vault-resource-group <resource-group-name> \
  --vm-password <vm-password> \
  --vm-user-name azureadmin

Using an ARM Template

It is possible to create the cluster with a specified ARM template using --template-file and --parameter-file. Working with Noel, I found that in the ARM parameter template (parameters.json) you need to provide entries with blank values for certificateThumbprint, sourceVaultValue, and certificateUrlValue. This will create the certificate and drop it on your machine as well.

An example is:

az sf cluster create \
  -g <rg-name> \
  -l eastus \
  --template-file template.json \
  --parameter-file parameters.json \
  --vault-name <vault-name> \
  --certificate-subject-name <clustername>.eastus.cloudapp.azure.com \
  --certificate-password <password> \
  --certificate-output-folder .

Connect

The command above creates the cluster and drops the pfx and pem files for the certificate into your current folder. You can connect to your cluster through the Service Fabric Explorer or through the Service Fabric CLI (sfctl).

Connect using the Service Fabric Explorer

To install the cert so you can connect to the Service Fabric explorer:

powershell.exe Import-PfxCertificate -FilePath .\<yourpfx>.pfx -CertStoreLocation 'Cert:\CurrentUser\My\'

Browse to your Service Fabric Explorer (https://yourclustername.eastus.cloudapp.azure.com:19080/Explorer) and when prompted select your certificate:

[Screenshot: selecting the correct certificate for the Service Fabric Explorer]

Connect using the Service Fabric CLI (sfctl)

To connect with sfctl run the following command:

sfctl cluster select --endpoint https://<yourclustername>.eastus.cloudapp.azure.com:19080 --pem .\<yourpem>.pem --ca .\<yourpem>.pem

## get nodes
sfctl node list

Your certs

If you review the command we ran to create the cluster, you will notice that it also creates an Azure Key Vault. If you ever need to get your private keys, you can head back to the Key Vault resource, either via the CLI or the portal.
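
If you don't remember the name of the certificate secret, you can list the vault's secrets first (standard Key Vault CLI, reusing the vault name placeholder from the create command):

az keyvault secret list --vault-name <vault-name> -o table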

Thanks to Vy Ta, here is a quick example:

az keyvault secret download --vault-name <vault-name> -n <key-name> -f pfx-out.pfx

# if you want the pem
openssl pkcs12 -in pfx-out.pfx -out pem-out.pem -nodes
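
To double-check that you downloaded the right certificate, you can inspect the pem with standard openssl (nothing Service Fabric specific here):

openssl x509 -in pem-out.pem -noout -subject -dates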

Running Kubernetes Minikube on Windows 10 with WSL

Sometimes you want to be able to deploy and develop applications locally without having to spin up an entire cluster. Setting up Minikube on Windows 10 hasn't been the easiest thing to do, but with the help of a colleague, Noel Bundick, and GitHub issues, I got it working this week. This post is for me in the future when I can't remember how I did it :-).

Install Minikube

This part is pretty easy if you use Chocolatey (not using Chocolatey? Check out why you should). Alternatively, you can download it and add it to your path.

choco install minikube
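
Later steps also use kubectl from the Windows side; if you don't already have it installed, Chocolatey can handle that too (kubernetes-cli is the package name published on Chocolatey):

choco install kubernetes-cli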

Create a Virtual Switch in Hyper-V

This is the extra step you need to do to get Hyper-V to work with minikube. Open a PowerShell prompt and type:

# get list of network adapter to attach to
Get-NetAdapter  

#output
Name                      InterfaceDescription                    ifIndex Status          LinkSpeed
----                      --------------------                    ------- ------          ---------
vEthernet (minikube)      Hyper-V Virtual Ethernet Adapter #3          62 Up              400 Mbps
Network Bridge            Microsoft Network Adapter Multiplexo...      46 Up              400 Mbps
vEthernet (nat)           Hyper-V Virtual Ethernet Adapter #2          12 Up              10 Gbps
vEthernet (Default Swi... Hyper-V Virtual Ethernet Adapter             13 Up              10 Gbps
Bluetooth Network Conn... Bluetooth Device (Personal Area Netw...      23 Disconnected    3 Mbps
Ethernet                  Intel(R) Ethernet Connection (4) I21...       9 Disconnected    0 bps
Wi-Fi                     Intel(R) Dual Band Wireless-AC 8265          14 Up              400 Mbps

#Create the switch
New-VMSwitch -name minikube  -NetAdapterName <your-network-adapter-name> -AllowManagementOS $true  

Create minikube

From the PowerShell prompt in Windows, create minikube using the switch you just created:

minikube start --vm-driver hyperv --hyperv-virtual-switch minikube

Minikube adds the configuration to your .kube/config file upon successful creation, so you should be able to connect to minikube from the PowerShell prompt using kubectl if you have it installed on Windows:

kubectl get nodes

#output
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    18h       v1.8.0

Using WSL to talk to minikube

I mostly use WSL as my command prompt in Windows these days, which means I have kubectl, helm, and my other tools all installed there. Since we just installed minikube on Windows, the .kube/config file was created on the Windows side at C:\Users\<username>\.kube\config. To get kubectl to work we will need to add the configuration to our .kube/config on WSL at /home/<bash-user-name>/.kube/config.

Note: the following might vary depending on your existing .kube/config file and set up. Check out sharing cluster access on kubernetes docs for more info and alternative ways to configure.

To see the values created for your Windows environment, run kubectl config view from your PowerShell prompt. Use those values for the minikube entries below.

From your WSL terminal add the minikube context info:

kubectl config set-cluster minikube --server=https://<minikubeip>:port --certificate-authority=/mnt/c/Users/<windows-user-name>/.minikube/ca.crt
kubectl config set-credentials minikube --client-certificate=/mnt/c/Users/<windows-user-name>/.minikube/client.crt --client-key=/mnt/c/Users/<windows-user-name>/.minikube/client.key
kubectl config set-context minikube --cluster=minikube --user=minikube

This points the context at the cert files minikube created on Windows. To verify you have set the values correctly, view the context in WSL (if you have other contexts it might look slightly different):

kubectl config view

#output
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /mnt/c/Users/<windows-user-name>/.minikube/ca.crt
    server: https://<minikubeip>:port
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /mnt/c/Users/<windows-user-name>/.minikube/client.crt
    client-key: /mnt/c/Users/<windows-user-name>/.minikube/client.key

Now set your current context to minikube and try connecting to your minikube instance:

kubectl config use-context minikube

kubectl get nodes

#output
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    20h       v1.8.0

Limitations

I can use kubectl as I would with any other cluster, but I have found that I can't run the minikube commands from WSL. I have to go back to my Windows prompt to run commands like minikube service <servicename> --url.

Validating and Registering an Azure Event Grid WebHook in Node.js

The accompanying source code can be found at https://github.com/jsturtevant/azure-event-grid-nodejs.

It is possible to register your own webhook endpoint with Azure Event Grid. To do so you need to pass the Event Grid validation process, which happens when you first subscribe your endpoint to an Event Grid Topic.

At subscription time, Event Grid will make an HTTP POST request to your endpoint with a header value of Aeg-Event-Type: SubscriptionValidation. Inside the request there will be a validation code that you need to echo back to the service. A sample request looks like:

[{
  "id": "2d1781af-3a4c-4d7c-bd0c-e34b19da4e66",
  "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "subject": "",
  "data": {
    "validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"
  },
  "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
  "eventTime": "2017-08-06T22:09:30.740323Z"
}]

And the response expected is:

{
  "validationResponse": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"
}

You can read about all the details of Event Grid security and authentication.

Note: The endpoint must be https. To debug your function locally, you can use ngrok as described in this post on Locally debugging an Azure Function Triggered by Event Grid. The general concept of using ngrok can be used even though we are not using Functions.

Handling the Response in Node.js

Handling the response in Node.js is fairly straightforward: check the header type and event type, then return a 200 status with the validation body. Here is an example in Express.js:

const express = require('express')
const bodyParser = require('body-parser')

const app = express()
app.use(bodyParser.json()) // parse JSON bodies so req.body is populated

app.post('/event', (req, res) => {
    var header = req.get("Aeg-Event-Type");
    if(header && header === 'SubscriptionValidation'){
         var event = req.body[0]
         var isValidationEvent = event && event.data && 
                                 event.data.validationCode &&
                                 event.eventType && event.eventType == 'Microsoft.EventGrid.SubscriptionValidationEvent'
         if(isValidationEvent){
             return res.send({
                "validationResponse": event.data.validationCode
            })
         }
    }

    // Do something on other event types 
    console.log(req.body)
    res.send(req.body)
})

app.listen(3000, () => console.log('Example app listening on port 3000!'))
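
Before wiring up Event Grid, you can exercise the handler locally with curl (a sketch: the validation code is made up, and it assumes the app is listening on port 3000 as in the output further down):

curl -X POST http://localhost:3000/event \
  -H "Content-Type: application/json" \
  -H "Aeg-Event-Type: SubscriptionValidation" \
  -d '[{"id": "1", "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent", "data": {"validationCode": "test-code"}}]'

If the handler is working you should get back {"validationResponse": "test-code"}.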

Testing it out

Create a topic:

az group create --name eventgridrg --location westus2
az eventgrid topic create --name nodejs -l westus2 -g eventgridrg

Set up ngrok in a separate terminal (optionally tag on --log "stdout" --log-level "debug" if running ngrok from WSL)

./ngrok http 3000 #optional --log "stdout" --log-level "debug"

Register your ngrok https endpoint with Event Grid:

az eventgrid topic event-subscription create --name expressapp \
          --endpoint https://994a01e1.ngrok.io/event \
          -g eventgridrg \
          --topic-name nodejs

The registration should succeed with "provisioningState": "Succeeded" in the response because the endpoint echoes back the validation code. Once it is finished registering, send a request and get the response:

# get endpoint and key
endpoint=$(az eventgrid topic show --name nodejs -g eventgridrg --query "endpoint" --output tsv)
key=$(az eventgrid topic key list --name nodejs -g eventgridrg --query "key1" --output tsv)

# use the example event from docs 
body=$(eval echo "'$(curl https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/event-grid/customevent.json)'")

# post the event
curl -X POST -H "aeg-sas-key: $key" -d "$body" $endpoint

In your terminal where the app is running you should see the log output of the custom event:

node index.js

#output
Example app listening on port 3000!
[ { id: '10107',
    eventType: 'recordInserted',
    subject: 'myapp/vehicles/motorcycles',
    eventTime: '2017-12-01T20:28:59+00:00',
    data: { make: 'Ducati', model: 'Monster' },
    topic: '/SUBSCRIPTIONS/B9D9436A-0C07-4FE8-B779-xxxxxxxxxxx/RESOURCEGROUPS/EVENTGRIDRG/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/NODEJS' } ]