Running Azure's az command-line tool from a docker container; or, the "credentials in a container" trick

azure docker container cli

Every cloud platform provider has their own CLI tool they want you to install, all with different dependencies and install methods.

Some want you to install their tools using Snap; some want you to run install scripts as root that add files not managed by your OS’s package management system;1 some want you to click through a license where you agree to be bound by the laws of Germany. Occasionally, you might find one who actually packages their tool for at least a few different OS distributions, or even provides a statically-linked binary plus the source code it was built from. Really though, the best CLI is one you don’t have to install at all.

I’m surprised (and annoyed, of course) that more cloud providers don’t provide web-based shell access to their tools. But when they don’t, the next best option might be to install the tool in a Docker container. (In any case, working in a webssh session gets a bit wearisome after a while, somewhere around the tenth time you accidentally hit ctrl-r and refresh the page instead of searching your command history. Installing a tool on your own machine does have some benefits.)

Rather than having your development environment cluttered with dozens or even hundreds of dependencies you neither want nor need, why not confine each provider’s tools to a single Docker container? Then you only have to clutter your development environment with a single baroque, Rube Goldberg–esque, kudzu-pervasive framework.2

Taking Azure’s az tool as an example, you can do it like so:

  1. Pull the Docker image (based on Alpine Linux) for the az CLI tool:3 4

    docker pull mcr.microsoft.com/azure-cli
  2. Create a Docker volume which will hold the contents of the /root directory, which is where our credentials are stored:

    docker run -v /root --name azconfig mcr.microsoft.com/azure-cli

We now have a stopped Docker container called azconfig, containing an anonymous volume5 which contains the contents of the /root directory. We can use the volume by supplying --volumes-from azconfig as an argument to the docker run command.

(Note that we don’t supply --rm as an argument this first time round, since we want to actually keep the container and its attached volume.)
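If you want to see what that actually created, a quick sanity check is to inspect the stopped container (this assumes docker is available and the azconfig container from the previous step exists):

```shell
# Print the randomly-generated name of the anonymous volume attached to
# /root in the azconfig container (guarded so it's a no-op without docker).
FORMAT='{{ (index .Mounts 0).Name }}'
if command -v docker >/dev/null 2>&1; then
  docker inspect --format "$FORMAT" azconfig || echo 'no azconfig container yet'
fi
```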

  3. Spin up a new container which mounts the volume from our azconfig one:

    docker run --rm -it --volumes-from azconfig mcr.microsoft.com/azure-cli

This time we do use the --rm argument: we don’t care if this container disappears, the credentials will be stored in the volume.

  4. Inside the container, we can run

    az login

    to log on. The tool displays a message like

    To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXXXX to authenticate.

    where the “XXXXXXXXX” is some 9-character code Microsoft uses to identify the device you’re logging in from. Follow the instructions, and voilà – we’re in.

Now we can run whatever commands we like inside the running container – try typing az account list, just to check that the tool is working:

$ az account list
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "<REDACTED>",
    "id": "<REDACTED>",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Free Trial",
    "state": "Enabled",
    "tenantId": "<REDACTED>",
    "user": {
      "name": "",
      "type": "user"
    }
  }
]
And regardless of what we do with the new containers – keep them running, stop them, throw them away – we can just run up a new one to access our existing credentials and avoid having to log in again.
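To make this feel less like typing incantations, you could wrap the whole invocation in a shell alias (a sketch, assuming the azconfig container from the steps above and the image from Microsoft’s registry):

```shell
# Make the containerised tool feel locally installed: `az login`,
# `az account list` etc. now run inside a throwaway container that
# mounts the credentials volume from azconfig.
AZ_IMAGE='mcr.microsoft.com/azure-cli'
alias az="docker run --rm -it --volumes-from azconfig $AZ_IMAGE az"
```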

There. Don’t you feel better about life already? If only I’d done something like this earlier, I could’ve avoided writing this post.

The same sort of method should work perfectly well for other command-line tools like Amazon’s, Heroku’s, Travis CI’s, and so on. Google Cloud SDK and Amazon’s command-line tools are the only ones I’ve noticed where the provider actually tells users about the method described here as an alternative to installing a godawful bunch of packages just to manage something in the cloud. But I like to think that if enough of us join together, in a spirit of good-will and positivity, and yell at cloud providers to fix their shit up, then maybe someday the others will too.
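As a sketch of how this looks for Amazon’s tool, which publishes an official amazon/aws-cli image: here a named volume is used instead of --volumes-from, which is purely a stylistic choice – the effect is the same.

```shell
# Keep AWS credentials in a named volume mounted where the CLI expects
# them (~/.aws is /root/.aws inside the image). Guarded so the snippet
# does nothing where docker or a terminal is unavailable.
AWS_IMAGE='amazon/aws-cli'
if command -v docker >/dev/null 2>&1 && [ -t 0 ]; then
  # First run: credentials end up in the awsconfig volume
  docker run --rm -it -v awsconfig:/root/.aws "$AWS_IMAGE" configure
  # Later runs reuse the stored credentials without reconfiguring
  docker run --rm -it -v awsconfig:/root/.aws "$AWS_IMAGE" s3 ls
fi
```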

  1. Though to give Amazon credit, they do at least tell you where their install script will be creating files. So it would be pretty easy to package the resulting files up into whatever package is native to your distribution (e.g. .deb or .rpm files). ↩︎

  2. Well, two, if your OS also uses systemd. ↩︎

  3. Microsoft hosts the az CLI docker images on its own registry, the Microsoft Container Registry, located at mcr.microsoft.com, so when pulling from it we have to use a fully-qualified reference to the image we want. If we wanted to explore the tags available for the azure-cli image, we could run
    curl -L https://mcr.microsoft.com/v2/azure-cli/tags/list
    This blog post suggests it should also be possible to explore the various repositories in the MCR registry from within the very nifty Visual Studio Code IDE, but I couldn’t get that to work, and Microsoft doesn’t seem to want you to. ↩︎

  4. Interestingly, if you look at the headers of the response from mcr.microsoft.com using curl -v, the Server header is openresty, suggesting Microsoft’s registry is powered by the open source OpenResty variant of Nginx, which extends Nginx with the ability to run code written in Lua via LuaJIT. Or uses it as a reverse proxy, anyway. How things have changed since the days of “Linux is a cancer” (or, “communism”) and “Embrace, extend, extinguish”! Hopefully. ↩︎

  5. In the tradition of IT people giving things terrible names, anonymous volumes are not actually anonymous (though they are volumes). They are just given a randomly-generated name when created – something euphonious and pleasant to read like 71bc263a17ab4233d9d966c42bdb060c026ce6531c00fa5a7b7329834fe01914, for instance. ↩︎