Deploying a Service Fabric Guest Executable with Configuration File

In Service Fabric, when your guest executable relies on a configuration file, the file must be placed in the same folder as the executable. Examples of applications that need a configuration file are nginx, or Traefik run with a static configuration file.

The folder structure would look like:

|-- ApplicationPackageRoot
    |-- GuestService
        |-- Code
            |-- guest.exe
            |-- configuration.yml
        |-- Config
            |-- Settings.xml
        |-- Data
        |-- ServiceManifest.xml
    |-- ApplicationManifest.xml

You may need to pass the location of the configuration file via the Arguments tag in the ServiceManifest.xml. Additionally, you need to set the WorkingFolder tag to CodeBase so the executable can find the configuration file when it starts. The valid options for the WorkingFolder parameter are specified in the docs. An example configuration of the CodePackage section of the ServiceManifest.xml file is:

<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>traefik_windows-amd64.exe</Program>
      <Arguments>-c traefik.toml</Arguments>
      <WorkingFolder>CodeBase</WorkingFolder>
      <ConsoleRedirection FileRetentionCount="5" FileMaxSizeInKb="2048"/> 
    </ExeHost>
  </EntryPoint>
</CodePackage>

Note that the ConsoleRedirection tag also enables redirection of stdout and stderr, which is helpful for debugging.

Using the Visual Studio Team Services Agent Docker Images

There are two ways to create Visual Studio Team Services (VSTS) agents: Hosted and Private. Creating your own private agent for VSTS has some advantages, such as being able to install the specific software you need for your builds. Another advantage that becomes important, particularly when you start to build Docker images, is the ability to do incremental builds. Incremental builds let you keep the source files, and in the case of Docker the image layers, on the machine between builds.

If you choose the VSTS Hosted Linux machine, each time you kick off a new build you will have to re-download the Docker images to the agent, because the agent is destroyed after the build is complete. If you use your own private agent, the machine is not destroyed, so the Docker image layers will be cached, builds can be very fast, and you will minimize your network traffic as well.

Note: This article ended up a bit long; if you are simply looking for how to run VSTS agents using Docker, jump to the end where there is one command to run.

Installing VSTS agent

VSTS provides you with an agent that is meant to get you started building your own agent quickly. You can download it from your agents page in VSTS. After it is installed, you can customize the software available on the machine. VSTS has done a good job at making this pretty straightforward, but if you read the documentation you can see there are quite a few steps you have to take to get it configured properly.

Since we are using Docker, why not use a Docker container as our agent? It turns out that VSTS provides a Docker image just for this purpose, and it takes only one command to configure and run an agent. You can find the source and the documentation on the Microsoft VSTS Docker Hub account.

There are many different agent images to choose from depending on your scenario (TFS, standard tools, or Docker), but the one we are going to use is the image with Docker installed on it.
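For example, you can pull the Docker-enabled image ahead of time so the first build does not have to wait for the download. This is just a sketch; the tag shown here matches the one used later in this post, and you should check the Docker Hub page for the tag you actually need:

# pull the VSTS agent image that ships with Docker support
docker pull microsoft/vsts-agent:ubuntu-16.04-docker-17.03.0-ce-standard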

Using the Docker VSTS Agent

All the way at the bottom of the page there is a description of how to get started with the agent images that support Docker. One thing to note is that it uses the host instance of Docker to provide the Docker support:

# note this doesn't work if you are building source with docker 
# containers from inside the agent
docker run \
  -e VSTS_ACCOUNT=<name> \
  -e VSTS_TOKEN=<pat> \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -it microsoft/vsts-agent:ubuntu-16.04-docker-1.11.2

This works for some things, but if you are using Docker containers to build your source (and you should be considering it… it is even going to be supported soon through multi-stage Docker builds), you may end up with errors like:

2017-04-14T01:38:00.2586250Z unit-tests_1     | MSBUILD : error MSB1009: Project file does not exist.

or

2017-04-14T01:38:03.0135350Z Service 'build-image' failed to build: lstat obj/Docker/publish/: no such file or directory

You get these errors because the source code is downloaded into the VSTS agent Docker container. When you run a Docker container from inside the VSTS agent, the new container runs in the context of the host machine, because the VSTS agent uses the Docker daemon on the host (that is what the line -v /var/run/docker.sock:/var/run/docker.sock does), and the host doesn't have your files. If this hurts your brain, you are in good company.

Agent that supports using Containers to build source

Finally, we come to the part you are most interested in. How do we actually set up one of these agents and avoid the above errors?

There is an environment variable called VSTS_WORK that specifies where the agent should do its work. We can change the location of that directory and volume mount it so that when a Docker container runs on the host, it still has access to the files.

To create an agent that is capable of using docker in this way:

docker run -e VSTS_ACCOUNT=<youraccountname> \
  -e VSTS_TOKEN=<your-account-Private-access-token> \
  -e VSTS_WORK=/var/vsts \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/vsts:/var/vsts -d \
  microsoft/vsts-agent:ubuntu-16.04-docker-17.03.0-ce-standard

The important option here is -e VSTS_WORK=/var/vsts, which tells the agent to do all of its work in the /var/vsts folder. Volume mounting that folder with -v /var/vsts:/var/vsts then enables you to run Docker containers inside the VSTS agent and still see all the files.
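As a quick sanity check you can confirm the agent container started and registered with your account. This is a minimal sketch; the container id below is a placeholder for whatever docker ps reports on your machine:

# list running containers and inspect the agent's output
docker ps
docker logs <container-id>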

Note: you should change the image tag above to the most recent one, or to the specific version you need.

Conclusion

Using the VSTS Docker agents enables you to create and register agents with minimal configuration. If you are using Docker containers inside the VSTS Docker agent to build source files, you will need to add the VSTS_WORK environment variable to get it to build properly.

Accessing Docker Swarm Secrets from ASP.NET Core

Docker Swarm introduced Secrets in version 1.13, which enables you to share secrets across the cluster securely and only with the containers that need access to them. The secrets are encrypted in transit and at rest, which makes them a great way to distribute connection strings, passwords, certificates, or any other sensitive information.

I will leave it to the official documentation to describe exactly how all this works, but when you give a service access to a secret you essentially give it access to an in-memory file. This means your application needs to know how to read the secret from that file in order to use it.

This article will walk through the basics of reading that file from an ASP.NET Core application. The basic steps would be the same for ASP.NET 4.6 or any other language.

Note: ASP.NET Core 2.0 (still in preview as of this writing) will have this functionality as a Configuration Provider, which you can simply add as a NuGet package and wire up.

Create a Secret

We will start by creating a secret on our swarm. Log into a manager node on your swarm and add a secret by running the following command:

echo 'yoursupersecretpassword' | docker secret create secret_password -

You can confirm that the secret was created by running:

$ docker secret ls
ID                          NAME                CREATED             UPDATED
8gv5uzszlgtzk5lkpoi87ehql   secret_password     1 second ago        1 second ago

Notice that you are not able to list the value of the secret. Any container that you explicitly give access to this secret will be able to read the value as a string at the path:

/run/secrets/secret_password
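If you want to see this for yourself, you can read the file from inside a container that has been granted the secret. This is just a sketch; the container id below is a placeholder for the id reported by docker ps on the node running the task:

# print the secret from inside a container that was granted access to it
docker exec <container-id> cat /run/secrets/secret_password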

Reading Secrets

Your next step is to read the secret inside your ASP.NET Core application. Before Docker Secrets were introduced, you had to pass secrets along to your service by other means, such as environment variables or mounted configuration files.

Modifying your ASP.NET Core to read Docker Swarm Secrets

During development, you may not want to read the value from a Swarm Secret for simplicity's sake. To accommodate this you can check whether a Swarm Secret is available and, if not, read the ASP.NET configuration value instead. This enables you to load the value from any of the other configuration providers as a fallback. The code is fairly straightforward:

// requires the System.IO, Microsoft.Extensions.FileProviders and
// Microsoft.Extensions.Configuration namespaces
public string GetSecretOrEnvVar(string key)
{
  const string DOCKER_SECRET_PATH = "/run/secrets/";
  if (Directory.Exists(DOCKER_SECRET_PATH))
  {
    IFileProvider provider = new PhysicalFileProvider(DOCKER_SECRET_PATH);
    IFileInfo fileInfo = provider.GetFileInfo(key);
    if (fileInfo.Exists)
    {
      using (var stream = fileInfo.CreateReadStream())
      using (var streamReader = new StreamReader(stream))
      {
        return streamReader.ReadToEnd();
      }
    }
  }
  
  return Configuration.GetValue<string>(key);
}

This checks whether the secrets directory exists and, if it does, checks for a file with the given key. Finally, if that file exists, it reads and returns the value.

You can use this function any time after the configuration has been loaded from the other providers. You will likely call it somewhere in your Startup.cs, but it could be anywhere you have access to the Configuration object. Note that depending on your setup you may want to tweak the function to not fall back on the Configuration object. As always, consider your environment and adjust as needed.

Create a service that has access to the Secret

The final step is to create a service in the Swarm that has access to the value:

docker service create --name="aspnetcore" --secret="secret_password" myaspnetcoreapp

or in a Compose file of version 3.1 or higher (only the relevant parts are included in this example):

version: '3.1'

services:
  aspnetcoreapp:
    image: myaspnetcoreapp
    secrets:
      - secret_password

secrets:
  secret_password:
    external: true
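If you go the Compose route, you can deploy it to the swarm as a stack. This is only a sketch; the stack name and compose file name below are placeholders for your own:

# deploy the compose file to the swarm; the service gets access to the secret
docker stack deploy -c docker-compose.yml mystack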

Creating an ASP.NET Core Configuration Provider

I am going to leave it to the reader to create an ASP.NET Core Configuration Provider (or wait until it is a NuGet package supplied with ASP.NET Core 2.0). If you decide to turn the code into a configuration provider (or use ASP.NET Core 2.0's), you could simply use it in your Startup.cs, and it might look something like:

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables()
        .AddDockerSecrets();
    Configuration = builder.Build();
}

IMPORTANT: this is just an example; the actual implementation might look slightly different.

Conclusion

Docker Secrets are a good way to keep your sensitive information available only to the services that need access to it. You can modify your application code in a fairly straightforward way that also maintains backward compatibility. You can learn more about managing secrets in Docker's documentation.

Deploying a Python Website to Azure with Docker

This post will cover how to deploy a Python website to Azure Web Apps for Linux using Docker. The release of Web Apps for Linux makes it easier to deploy almost any application to Azure Web Apps.

Before Web Apps for Linux there needed to be explicit support for a language in Azure, or you had to create a complicated Kudu extension to enable a language. For many languages it also meant you needed to use the Windows-specific distribution, which often had issues with 3rd party libraries not supporting Windows (although this is getting better now).

This tutorial will walk you through how to create your first Python Flask website, but if you already have an existing Python website (Flask, Django, or any other framework) you can skip to the step where you add a Dockerfile. I assume you have Python and Docker installed.

This post is a bit long but the steps are quite simple:

  1. Create Python Website (optional if you have an existing app)
  2. Add a Dockerfile
  3. Build your docker image
  4. Publish your docker image to a repository
  5. Create and Configure Azure Web App for Linux using your custom docker image

If you're looking for complete samples, here is the Flask example from this tutorial and a Django example.

Create your Python Website

I will use Flask in this example but you could use Django or any other python web framework you would like.

Create files

First, from the command line create a new directory for your application, then move into the directory:

mkdir flaskwebapp
cd flaskwebapp

Next, create two files: app.py and requirements.txt

touch app.py
touch requirements.txt

Add Python Code

Open app.py in your favorite editor (I use VS Code) and paste in the following code:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from an Azure Web App running on Linux!"

if __name__ == "__main__":
    app.run(host='0.0.0.0')

This creates a simple Flask site with one route that displays a simple hello world message. You can learn more about Flask at the Flask Website.

Note: app.run() passes an argument called host with a value of 0.0.0.0. This is important for working with Docker. You can learn more about it in this Stack Overflow answer.

Add your requirements

Open requirements.txt and add the following:

Flask

This will be used by pip to install the Flask library.
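If your project grows to more dependencies, one common approach is to pin exact versions by generating the file from your environment. This is only a sketch; run it from the project directory after installing your packages:

# write the currently installed packages and their versions to requirements.txt
pip freeze > requirements.txt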

Run your site

Finally, you can make sure your site runs by running the following from the command line:

pip install -r requirements.txt
python app.py

If you navigate to localhost:5000 you will see your new website. Congrats, you just created your first Flask app!

Add a Dockerfile

The first step to getting your app ready to run on Azure Web App for Linux using Docker is to add a Dockerfile. When I was creating this tutorial I found a great plug-in for Visual Studio Code that makes working with Docker easy from VS Code. If you're using VS Code I would recommend checking it out.

In the root directory of your Python app (again, it can be any framework you like, but if you are following along with the tutorial we are using Flask) create a file called Dockerfile:

touch Dockerfile

Open this file and add the following:

FROM python:3.6

RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r requirements.txt

EXPOSE 5000
CMD ["python", "/code/app.py"]

This Dockerfile uses the official Python base image. It then creates a folder called /code and copies all the files from the current directory into the Docker image. Next, it runs pip, which installs all the library dependencies from the requirements file (in the case of this tutorial that would just be Flask). Finally, it exposes the port and runs the command that launches the website.

The Dockerfile above exposes port 5000 and calls the command python /code/app.py, which works great for our Flask app, but you may need to tweak the last two lines to get it to work for another framework. For instance, if you wanted to run a Django app you would need to replace those two lines with:

EXPOSE 8000
CMD ["python", "/code/manage.py", "runserver", "0.0.0.0:8000"]

You can find the specific command to launch your app in your framework documentation.

Note: With Django we are again binding the app to the 0.0.0.0 IP address so that it can be exposed to the outside world through Docker. The binding just happens in a slightly different place than it does in the case of Flask: in the run command rather than in the application code.

Build your Docker Image

To get your app ready for deployment on Azure you need to build your Docker image from your Dockerfile. Run the following from a command prompt in the folder where you created your Dockerfile:

docker build -f Dockerfile -t flaskwebapp:latest .

This will download the Python base docker image (if not already local), perform the actions in your Dockerfile (copy the files/run pip/etc) and then tag the new docker image as flaskwebapp.

To actually run and test your new Docker image, run the newly created image and you should get output like the following:

docker run -p 5000:5000 --rm flaskwebapp
#should get output similar to this:
  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

You can now navigate to localhost:5000 and you should see your Flask App running inside a docker container.
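If you prefer the command line, a quick request against the running container should return the message from app.py. This assumes the container from the previous step is still running on port 5000:

curl http://localhost:5000/
# Hello from an Azure Web App running on Linux!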

Publish your Docker Image

The next step is to publish your Docker image to a location from which Azure Web Apps can pull it. If your app is open source then you can host it in the public Docker Hub, which is what we will do for this tutorial. Otherwise, there are many options for hosting a private Docker image, such as Docker Hub or Azure Container Registry.

To publish it to Docker Hub make sure you have already signed up for an account.

First, find the image id by running docker images and looking for the flaskwebapp image id:

[Screenshot: output of docker images showing the flaskwebapp image id]

Next, you can tag the existing image with your Docker Hub username, using the following command with the id you just located (important: replace the id and user name below with yours):

docker tag e2b89cd22d2c jsturtevant/flaskwebapp

Now that you have tagged your image, you need to log in to Docker Hub and push the image:

docker login #follow prompts for login
docker push jsturtevant/flaskwebapp

You should now see the image in your Docker Hub repository:

[Screenshot: Docker Hub repository after pushing the image]

Create and Configure your Azure Web App

Inside the Azure portal click the Add (+) button in the top left corner and search for Web App for Linux:

[Screenshot: creating a new Web App for Linux in the Azure portal]

Next, click Create and fill in the app name, subscription, resource group, and App Service plan as in the image below.

When you get to the Configure Container section, click on it to open a new blade. You then have the option to use a preconfigured image, Docker Hub, or a private registry. In this example we will choose Docker Hub and fill in the image name with the Docker image we just pushed in the previous step (in my case jsturtevant/flaskwebapp).

Click Ok and then Create. Azure will then create the Web App for Linux using your custom image!

[Screenshot: the new Azure Web App for Linux]

We are not completely finished, though, because we need to configure Azure to use port 5000, which we specified in our Dockerfile.

Once the Web App is published, we can navigate to the Web App dashboard by clicking on All Resources, selecting the Web App we just created, and then clicking on Application settings.

Once you are in the Application Settings blade you can add an App Setting key of PORT with a value of 5000. Click Save at the top.

Note: If you need to set environment variables for your application this is also where you set them. Azure will take care of passing the environment variables through to your docker container.
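If you prefer to script this step, the same app setting can also be set with the Azure CLI. This is just a sketch; the resource group and app name below are placeholders for your own values:

# set the PORT app setting so Azure routes traffic to port 5000 in the container
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings PORT=5000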

[Screenshot: the Application settings blade with the PORT setting]

Now you can navigate to your new Azure Web App (http://yourappname.azurewebsites.net/) and see your project!

Conclusion

In this tutorial, we created a new Flask web app, added a Dockerfile, and then deployed the built image to an Azure Web App for Linux. Although this tutorial used Flask, you could easily make a few changes for your favorite Python framework, and the same basic principles work for any language (Node.js/Go/Elixir/etc.).

You can see a complete Flask example from this tutorial and a Django example here.

Debugging ARM Templates with Custom Scripts

A while back I was working on a one-click Azure Deploy button for the Hoodie sample application. If you have not seen Hoodie before you should check the project out. It is an open source project with a complete backend that allows front end developers to build amazing applications, and it has a great community.

ARM Template Debugging Issues

I ran into an issue during development and testing of the ARM template that powers the one-click deploy. The template was deploying the VM successfully, but the custom Linux script was not completing properly.

After several attempts someone pointed out that I had made a silly mistake by including an interactive prompt in the docker command. This caused one command in the script to time out without actually failing the entire ARM deployment, leaving me with no way to determine what went wrong. Or so I thought.

ARM Template Logs

After debugging for longer than I want to admit, I learned that ARM template extensions store logs on the VM. Knowing this would have cut my debug time down considerably. You can find all the logs for ARM deployments at:

/var/log/azure

In my case the log I was interested in for my custom script was located at:

/var/log/azure/Microsoft.OSTCExtensions.CustomScriptForLinux/1.5.2.0/extension.log
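To inspect it, SSH into the VM and read the log. This is only a sketch; the extension version folder (1.5.2.0 here) may differ on your VM:

# show the last 50 lines of the custom script extension log
sudo tail -n 50 /var/log/azure/Microsoft.OSTCExtensions.CustomScriptForLinux/1.5.2.0/extension.log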

Hope that helps someone avoid spending hours trying different configurations over and over because of a silly interactive prompt.