Docker for Windows: Deploying a docker-compose app to local Kubernetes

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

Note: This post assumes familiarity with Docker and a working Docker for Windows environment (perhaps from reading the previous post), and at least a basic understanding of what Kubernetes is. It does not assume any experience with Kubernetes.

Introduction

Recently, Docker announced that Docker for Windows has started bundling an integrated Kubernetes, so that we can more easily experiment locally with deploying to Kubernetes, in much the same way we might deploy to a production Kubernetes environment.

In this post, I’m going to begin with the end in mind and start with the short answers that resolved my initial confusion about running docker-compose apps on local Kubernetes.

Then, I’ll go into that confusion in a bit more detail, in case others struggle like I did and a Google search leads them here.

Keys to running docker-compose apps on local Kubernetes

 

The docker-compose “getting started” example

I mentioned in my previous post on Docker and docker-compose that I used the basic python app from the docker-compose “Getting Started” documentation as my first stab at compose.

The keys to getting this successfully deployed to local Kubernetes were in modifying docker-compose.yml like so:

  1. change the version to at least 3.3
  2. remove volumes
  3. add an image line for the “web” service, giving it any name you want (doesn’t matter)
  4. run docker-compose build before deploying to Kubernetes.

The working version of docker-compose.yml looked like this:
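Roughly, assuming the standard python app from the “Getting Started” guide, it comes out like this (the image name here is made up, as noted above):

version: '3.3'
services:
  web:
    build: .
    # any image name works; it just has to be present
    image: compose-getting-started-web
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"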

The following 3 commands built the image, deployed the stack to Kubernetes, and confirmed it was running:

docker-compose build

docker stack deploy --compose-file .\docker-compose.yml web

kubectl get services

All told, it looked like this when working successfully:

The Docker for Windows / Kubernetes documentation example app

In addition, I could not initially get the “example app” from the Docker for Windows / Kubernetes documentation running. At first, this was because the docker-compose file referenced in that documentation was never meant to stand alone, but the documentation neither mentions that nor points to the actual code that accompanies it. Once I found that code, it still failed; the fix for that problem turned out to already exist in a pull request on the repo. So let’s get it working:

First, here’s the sample app that accompanies that documentation. I git cloned it locally and then followed the steps in the README, namely docker-compose build and docker stack deploy ..., just like above. This took quite a while because, this being Maven, it needed to download half the internet.

And then… fail, with something like:

Fortunately, an issue had been filed against the repo, with a fix, and it was easy to apply. As described in the PR, changing words/Dockerfile to pin alpine, i.e. changing FROM alpine:edge to FROM alpine:3.7, fixed it. After a successful docker-compose build, it worked as described and the app was available at http://localhost:8080.
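The change itself is a one-liner in words/Dockerfile:

# words/Dockerfile: pin the base image instead of tracking edge
# before: FROM alpine:edge
FROM alpine:3.7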

 

My initial confusion with a basic docker-compose app

As mentioned above, I want to document the initial errors that I hit when trying to deploy that “getting started” docker-compose app onto local Kubernetes.

But first: Kubernetes aside, following along with the “getting started” example and running it with docker-compose up worked fine. I then added the volumes line so that I could get F5/refresh while working on the code, and that worked fine too. When finished, the docker-compose.yml file looked like this:

docker-compose.yml
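A sketch of that file, assuming the standard version 3 file from the “Getting Started” guide with the volumes line added:

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    # added for F5/refresh while working on the code
    volumes:
      - .:/code
  redis:
    image: "redis:alpine"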

And that’s the file I started with when trying to deploy to local Kubernetes.

Here’s the first error I got:

It was, to me, inscrutable: parse error: services.web.volumes.0 must be a string made no sense at all.

On a whim, I commented out the volumes bits, re-deployed, and got this new nearly identical error:

parse error: services.web.ports.0 must be a string or number was also not helpful.

At this point, I was pretty well stuck. I don’t recall exactly how I arrived here, but somehow — perhaps by looking at different docker-compose files on the web? — I noticed that my docker-compose.yml had a version of 3, and others had 3.x. So I tried changing to: version: '3.3'. Success… kind of. I didn’t get errors, but I didn’t have a running web container, either:

Now we’re getting somewhere. But notice that only the redis service is running, and I knew I needed a web service published on port 5000.

And this is where I got stuck. I posted the question to Stack Overflow, and tweeted it, and in short order had the final puzzle piece: I needed to specify an image for the web service.

Goin’ back to Windows: Docker

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

Note: Running Docker on Windows requires Windows 10 Pro. The necessary virtualization features are not available on Windows 10 Home.

Table of Contents

This post is juuuust long enough that it probably helps to know what I’ll cover:

  1. My hopes and expectations for working with Docker on Windows from within WSL
  2. Docker for Windows in Action (i.e., screenshots of it all working)
  3. Enabling virtualization
  4. Installing Docker for Windows
  5. Options for a Docker client in WSL (by far the longest part of this post)
  6. docker-compose in WSL
  7. VSCode Integration

Let’s get started.

Beginning with the end in mind

As I mentioned in the first post in this series, my motivation for moving back to Windows was that I needed a new laptop. It needed to be a solid developer laptop, and nowadays, that means being able to work with Docker.

After a bit of research, I knew:

  • I’d need Windows 10 Pro, since the required virtualization features aren’t available on Windows 10 Home
  • Docker should work from within Windows Subsystem for Linux (WSL), though I wasn’t sure of the particulars

Ultimately, I wanted the following:

  • All docker functionality — running public docker images, building and running my own images, etc — from within WSL
  • All docker-compose functionality from within WSL
  • Easy to keep all docker software up to date and not stuck on old versions
  • Practically seamless use of Docker… at least as easy as on Linux, and certainly better than the docker-machine rigamarole I had become accustomed to on Mac.
  • In short: a first-class Docker developer experience

I’m pleased to report that so far, all those expectations have been met. Docker on Windows has been, for me, a joy.

Docker on Windows in Action

Here are some screenshots of Docker on Windows in action:

Running Docker from within WSL
Running Docker from PowerShell
Building a Docker image from within WSL
Running Docker Compose from within WSL
It’s a modern version of stable Docker; switching to Edge is pretty simple, as well

 

As a bonus, here’s VS Code Integration:

Docker VS Code Integration

Enabling Virtualization

Docker for Windows requires virtualization to be enabled, which probably isn’t the case out of the box (it wasn’t for me, at any rate).

On this new laptop, I followed these instructions, which worked perfectly. On an older PC, they didn’t work, and I needed to figure out how to get into the BIOS a different way (Shift-F2 or Shift-F8 at startup, IIRC).

Docker for Windows

Installing Docker for Windows was trivial with chocolatey:

choco install docker-for-windows

Obviously you can install with normal old download-and-double-click-the-exe, as well. Once installed, you can turn all manner of knobs if you need to:

Docker for Windows configuration

In general, I’ve found the docs to be fantastic, including the troubleshooting docs.

Docker in WSL

During my research I found 3 separate ways to run a Docker client from within WSL that connects to the Docker for Windows daemon:

  1. Use the Windows Docker client
  2. Use the Linux Docker client over TCP without TLS
  3. Use the Linux Docker client with a “relay” between WSL and Windows

Here’s a quick rundown of trade-offs I’ve seen so far for each of these 3 approaches:

Use the Windows Docker Client

Jessie Frazelle explained the seeming magic of WSL internals in this excellent post. Bottom line: you can simply run the Windows docker.exe (which comes bundled with Docker for Windows) from within WSL, and it works really well.

Here’s an ugly example that uses the full path to docker.exe, just to demonstrate:

Running the Windows Docker client from within WSL

If you wanted to stick with this option, you’d want to symlink docker to that /mnt/c/Program Files/.../docker.exe path so that you can simply run docker. You’d do that with something like:

sudo ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe' /usr/local/bin/docker

Pros of this approach:

  • Easy to do and works great
  • Always using a version of the docker and docker-compose clients that match the daemon
  • No need to maintain/update docker or docker-compose software in WSL
  • Surprisingly (to me), does not set 777 permissions on any files added from a Windows filesystem into the docker image (more on that in a bit)

Cons:

  • There are bound to be differences between Windows Docker and Linux Docker clients, though I haven’t found any meaningful ones yet
  • As mentioned in the “relay” blog post below, using the Windows Docker client means its behavior probably won’t match the Linux Docker client man pages
  • Perhaps you have a desire/need/use case for always using the Linux client; for example, maybe you want to guarantee that the behavior you see locally is the same as in your Linux-based CI/CD system (such as Jenkins)

Use the Linux Docker client over TCP without TLS

The next two options use the Linux Docker client rather than the client that ships with Docker for Windows.

If you intend to use the Linux Docker client, do not YOLO apt install docker.io. Follow the documented Linux client install instructions.
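For Ubuntu inside WSL, those documented steps look roughly like this (a sketch; check the install docs for the current package list and repo line):

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce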

OK, so: for this “TCP without TLS” option, Nick Janetakis has a great blog post on how to use the Linux Docker client from within WSL using the Docker for Windows daemon, and I won’t attempt to recreate any of those instructions here.

Aside from installing docker-ce from within WSL, it’s just a 2-step affair that you only need to do once:

  1. Check a checkbox in the Docker for Windows config screen
  2. Add an environment variable export to your WSL ~/.bashrc file (see below)
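For reference, a sketch of that export line, assuming the daemon is exposed on its default tcp://localhost:2375:

# in ~/.bashrc
export DOCKER_HOST=tcp://localhost:2375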

One small note: when I did this, I did need to kill Docker for Windows and restart it after checking the checkbox, because the initial checking seemed to put it into a weird state. No idea whether that’s just a fluke.

Pros of this approach:

  • Easy to do, appears to work well
  • Using the actual Docker Linux client; behavior should match man pages and other usage of a Linux Docker client, such as within a Linux-based CI/CD system (e.g. Jenkins)

Cons:

  • That scary “makes you vulnerable to remote code execution attacks. use with caution” language that accompanies the checkbox you check. I really do not know how exploitable this threat vector is… I am not a CISO, lawyer, doctor, rocket surgeon, etc.
  • Need to maintain/keep updated Linux Docker client software in addition to the Docker for Windows software

Personally, that first con raises my hackles enough that I won’t use it, especially since the third option, up next, was easy to get going and hasn’t been a nuisance in practice.

Use the Linux Docker client with a “relay” between WSL and Windows

A third option — the one I actually started with — is to use the Linux Docker client but without that “TCP without TLS” checkbox. In this approach, you set up a relay between WSL and the Docker for Windows daemon.

In short, in addition to installing the docker-ce Linux Docker client, it involves:

  1. One-time creation of the docker-relay binary
  2. When working with the Linux Docker client, starting that relay

In addition, I did update my /etc/sudoers file so that I wouldn’t be prompted for a password every time I ran the relay.
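A sketch of what that sudoers entry can look like; the username and the relay’s install path are assumptions, so adjust both (and edit with visudo):

# allow running the relay without a password prompt
marc ALL=(root) NOPASSWD: /usr/local/bin/docker-relay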

Pros of this approach:

  • Easy (ish) to do; works great
  • Using the actual Linux Docker client; behavior should match man pages and other usage of a Linux Docker client, such as within a Linux-based CI/CD system (e.g. Jenkins)
  • Doesn’t suffer from the potential security vulnerability of the TCP without TLS approach, above

Cons:

  • Have to remember to start the relay when working with the Linux Docker client (or auto-start it somehow when opening WSL)
  • Need to maintain/keep updated Linux Docker client software in addition to the Docker for Windows software

I’ve been using this option for a few months and it’s worked fine. Remembering to start the relay hasn’t been a nuisance in practice.

A note on file permissions

Caveat: This might not matter to you at all!

I mentioned above that when doing docker build using the Windows Docker client, files added from a Windows filesystem to the docker image do not get 777 permissions. In addition, the Docker client issues a warning about file system permissions (more details here). Which raises the question: “Why on earth would you suspect that they would get 777 permissions?”

The answer is that when you docker build from within WSL using the Linux client, any files you add do get 777 permissions.

For example, I keep all my development projects on the Windows filesystem, starting at c:\dev\projects. And from within WSL, I access them from /c/dev/projects. Yes, that means even from within WSL, I’m working on a Windows filesystem for all dev projects. If you list those files, you’ll see that everything gets world permissions (i.e. 777).

And when you build an image from the Linux client, if your stuff is on the Windows file system, any files that get added will by default retain those world permissions. Here are two examples, one after building with the Linux client, and one after building with the Windows client. The image’s entrypoint runs ls -la /entrypoint.sh so that you can easily see the file permissions I’m talking about.
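A minimal sketch of such an image (the base image and file name are my assumptions; the entrypoint is the part that matters):

# Dockerfile: the entrypoint just lists its own permissions
FROM alpine:3.7
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["ls", "-la", "/entrypoint.sh"]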

After building with Linux client:

After building with Windows client:

This might not affect you if you’re building images whose Dockerfile is on the Linux file system within WSL. It might not matter to you at all. Or you may choose to just update your Dockerfile to explicitly set permissions on any files/directories that get added to the docker image.
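For that last option, a single line in the Dockerfile after the COPY does it:

# reset the world-writable bits picked up from the Windows filesystem
RUN chmod 755 /entrypoint.sh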

Docker Compose

All of the options above for having the Docker client communicate with the Docker for Windows daemon apply to docker-compose.

If you choose to stick with using the Windows clients, you’d just want to symlink the Windows docker-compose.exe to docker-compose, similar to the docker.exe symlink shown above.
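Following the pattern of the docker.exe symlink above, something like this should do it (the exact path is an assumption, mirroring where docker.exe lives):

sudo ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker-compose.exe' /usr/local/bin/docker-compose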

And if you choose to go with the Linux client, be sure to follow the documented instructions for installing docker-compose.

VSCode Integration

For VSCode integration:

  1. Install the Docker extension (ctrl-shift-x, search “docker”, Install)
  2. Optionally, plug in your Docker Hub credentials if you want to browse images you’ve pushed to Docker Hub from within VS Code

Here’s that image again from above. Note the GUI panels on the left that list images and containers, and note the terminal integration underneath the editors.

Docker VS Code Integration

This is interesting to me: regardless of which Docker client option you go with for interacting with the Docker for Windows daemon, VSCode uses the Windows client for its GUI integrations, such as listing images and containers. For interactions with those items (right-clicking an image and running it, attaching to a running container, and so on), however, it uses whatever shell you have configured VSCode to use by default. So in the example above, note that I have configured it to use Bash (via WSL). Consequently, interacting with those images and containers from that configured shell will use whichever Docker client option you chose from the options above.

Wrapping up

When I embarked upon this Goin’-back-to-Windows experiment, I knew that Docker would be a kind of bellwether for me. If it worked how I hoped it would, then I figured this experiment would most likely be a pretty big success overall. And if it was janky and felt second-class, then most likely I’d end up ditching the experiment and dual-booting a Linux distro onto this new laptop.

I am, so far, very happy with the Docker experience on Windows.

Goin’ back to Windows: multiple terminal windows with ConEmu

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

Back when I was a full-time software developer, working on a Windows machine, I rarely needed cmd. I’d write batch files, sure, but I could launch those with Launchy or AutoHotKey or a toolbar mouse click.  Having a cmd window open all day just wasn’t a thing, for me. The only thing I might need a shell for was subversion or git, but most likely I used file system integration (i.e. point/click… boo, I know) or whatever IDE I was using at the time.

When I switched to Mac, and eventually Linux, having a shell running all day long was just how things worked. Right now, on my work laptop (MacBook Pro), I have half a dozen+ iTerm2 tabs open.

When you have a powerful shell, you use it, a lot.

Multiple terminals on Windows

WSL made a powerful Linux shell on Windows a reality. But as of this writing, the Ubuntu bash shell opens in a single window only. Sure, you can open multiple, separate shells, but that’s like web browsing before tabbed browsers. No thanks.

ConEmu makes it as simple as iTerm2 or Terminal to have multiple shells on Windows. It’s even easy to have multiple different terminals within a single ConEmu window. In my experience, the combination of WSL, supplemented with ConEmu, has finally made Windows stop feeling like a second-class development environment.

Check it out:

Just like Terminal (Linux) or iTerm2 (Mac), you can use the keyboard to create new tabs, cycle through tabs, and the like. apt install tmux and you can tmux, too.

ConEmu is highly customizable, though I tend to keep things default and just add keyboard shortcuts. My current setup is that an Ubuntu bash shell is the default shell, activated by the default win-w, and I’ve assigned win-p to Powershell. Here’s how to do that:

Since I’ve moved back to Windows, the combination of WSL for a Linux experience, and ConEmu for managing multiple terminal windows, has been a delight.

If you use Chocolatey, install it with choco install conemu, and off you go. Otherwise, download it at https://conemu.github.io/

Next post: Docker on Windows!

Goin’ back to Windows: Launchy

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

A very long time ago, before I had ever used Mac or Linux for personal computing, someone had convinced me of the value of a “launcher”: a program, usually invoked via alt-space, that would pop up a box and help you find stuff on your computer, launch programs / scripts, do quickie things like calculations, and otherwise keep your hands on the keyboard and off of the mouse.

At that time, the only game in town for Windows was Launchy.

When I started using a Mac for work, I tried out Spotlight, which is the default Mac launcher, and it felt OK but not even on par with Launchy. I quickly discovered Quicksilver and have stuck with it.

When I moved to Linux a few years ago, I started using Kupfer, though I don’t recall why. It worked just fine, but I was a n00b and had I known about GNOME-Do, I probably would have used that.

When I moved back to Windows, one of the first things I did was look for the current state of launchers on Windows. And, to my surprise, it seems that Launchy is still a favorite. Here’s what it looks like, exactly the same as it did in 2009:

Why not just the win key?

The win key is fine as an application launcher. It’s easy, fast, and just works.

What I like about Launchy, though, is that it also makes it easier to navigate the file system quickly. For example, let’s say I keep all my code in c:\dev\projects. To get there natively, I could hit the win key and type c:\dev\projects, or open up Explorer and point-and-click to it.

But with Launchy, it’s as easy as

This is possible because Launchy lets you configure where it looks for stuff. In the case above, I configured Launchy to catalog files and folders in a certain location:

Finally, Launchy includes a catalog of plugins and comes with some useful defaults. For example, I often need to do simple-ish calculations, and Launchy makes that really easy thanks to the Calc plugin:

Conclusion

This is all certainly not life-changing, earth-shattering stuff. But I spend a lot of time on a computer, and pointing-and-clicking all day long is inefficient and unenjoyable. I like tiny time-saving, joy-boosting things, and a launcher like Launchy serves nicely.

Next post: Multiple terminal windows with ConEmu

 

Lambda: using AWS SAM Local to test Lambda functions locally on Windows

This is part of a series on moving from desktop Linux back to Windows.

The first post is here. The previous post is here.

AWS SAM Local is a “CLI tool for local development and testing of Serverless applications.” It uses Docker to simulate a Lambda-like experience. The docs explain well how to get started, and the GitHub repo has lots of samples as well. As of this writing, it supports Python, Java, .NET, and Node.js.

This is a quick post to show how to use it from Windows Subsystem for Linux (WSL) with Docker for Windows.

Installing on Windows

The instructions recommend installing with npm. That didn’t work for me, giving “file not found” errors. I’m not sure if this is a problem with npm inside of WSL, a problem with the current installer, or what.

I did get it installed by using the next option, which is go get github.com/awslabs/aws-sam-local. I then aliased that to sam: alias sam='aws-sam-local'

Running on Windows

Out of the box, using Docker for Windows as the Docker daemon, the sam command itself worked fine, but actually invoking a function did not work for me when using the docker client on Ubuntu within WSL. With a simple python hello-world example, I’d get this:

marc@ulysses:sam-local-play$ sam local invoke "HelloWorldFunction" -e event.json
2018/01/20 08:59:18 Successfully parsed template.yml
2018/01/20 08:59:18 Connected to Docker 1.35
...
Unable to import module 'main': No module named 'main'
END RequestId: c9edd13a-000e-49bd-a4a7-a8a23258a03b
REPORT RequestId: c9edd13a-000e-49bd-a4a7-a8a23258a03b Duration: 1 ms Billed Duration: 0 ms Memory Size: 0 MB Max Memory Used: 19 MB
{"errorMessage": "Unable to import module 'main'"}

In the instructions below, I’ll tie together several GitHub issues and a gist from three separate GH users.

First, in this GH issue comment, GH user Kivol shows a two-fold solution:

1. bind-mount /mnt/c to /c and then, within Ubuntu, make sure you’re working from /c (from this comment by aseering on another GH issue)

2. pass --docker-volume-basedir to the sam local invoke command

Here’s how to do all that (again, mostly copying from a few GH issues and adding some color commentary):

# bind-mount /c. this mount lasts as long as your current terminal session; instructions below for how to make this persistent if this works for you
$ sudo mkdir /c
$ sudo mount --bind /mnt/c /c

# now you can use /c instead of /mnt/c
$ cd /c/path/to/project
$ sam local invoke --docker-volume-basedir $(pwd -P) --event event.json "HelloWorldFunction"

And voila, it worked. I get this beautiful output:

2018/01/20 12:16:08 Successfully parsed template.yml
2018/01/20 12:16:08 Connected to Docker 1.35
2018/01/20 12:16:08 Fetching lambci/lambda:python3.6 image for python3.6 runtime...
python3.6: Pulling from lambci/lambda
Digest: sha256:0682e157b34e18cf182b2aaffb501971c7a0c08c785f337629122b7de34e3945
Status: Image is up to date for lambci/lambda:python3.6
2018/01/20 12:16:09 Invoking main.handler (python3.6)
2018/01/20 12:16:09 Mounting /c/dev/projects/lambda-projects/sam-local-play as /var/task:ro inside runtime container
START RequestId: d7a9dcb0-751e-445d-9566-a025f4e804b0 Version: $LATEST
Loading function
value1 = value1
value2 = value2
value3 = value3
END RequestId: d7a9dcb0-751e-445d-9566-a025f4e804b0
REPORT RequestId: d7a9dcb0-751e-445d-9566-a025f4e804b0 Duration: 4 ms Billed Duration: 0 ms Memory Size: 0 MB Max Memory Used: 19 MB

Note: that cd /c/... is really important! If you do all the above but stay in your shell at /mnt/c — like I did 🙂 — it’s still not going to work.

Big thanks to GitHub users Kivol and aseering for putting this together!

Persistently mounting /c

If the above worked for you, then you’ll probably also want to persistently mount /c so that you don’t have to redo it every time you want to use sam from within Linux / WSL.

If you’ve got a Linux background, you’re thinking: just mount it in /etc/fstab. But as of now, anyway, WSL isn’t loading /etc/fstab entries when you open a new shell (i.e. you’d have to run mount -a every time), at least according to comments in this MSDN post and this WSL issue.

Fortunately, linked in those comments, sgtoj has a gist that sets all this up nicely. I saved this locally, ran it once, and now /c is mounted for all new Linux sessions.
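If you’d rather see the idea than run the gist, a sketch of the equivalent in ~/.bashrc (assumes you’re OK with a possible sudo prompt on new shells, or have a NOPASSWD sudoers rule for mount):

# re-create the /c bind mount for each new WSL session, if it isn't there
if ! mountpoint -q /c 2>/dev/null; then
    sudo mkdir -p /c
    sudo mount --bind /mnt/c /c
fi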

Really?

Coming from doing all personal development on Linux for the past 3 or so years, these kinds of hacks are disappointing. So far in this experiment with going back to Windows, these hacks have been few, and so far for me have all been related to wanting to use a docker client inside of Linux / WSL. More on that in a future post.

Suffice to say: yeah, it’s hacky, but it’s not that bad. Annoying, sure, but certainly not enough to sully the overall experience so far in moving back to Windows. I can live with this one.

A note on aws-sam-local and Powershell

After initially failing to get aws-sam-local running successfully within WSL, I figured I’d try it all on the Windows side of the house. I ran into problems there, too. First, using go get to install it, I got path-too-long errors. WTF. That led me to choco install nodejs and use the npm install route. That did work, and sam invoke worked fine.

So far, this is the first time that working in WSL was kind of a shit-show. Thankfully smart people figured out how to get it working correctly. I am always grateful when I find answers in GitHub comments along the lines of “maybe sharing this will help.”

I really did not want to have to use Powershell for this. Not because I don’t like Powershell, but mostly because it kind of pierces the veil of the single development experience I’m trying to achieve, I guess. It’s a bit of a context switch to do most of the work in WSL and then, for this one thing, have to pop over to Powershell, which also means maintaining duplicate installs of software in Windows land and WSL land. A small thing, for sure, but I’d like to avoid it if possible.

Next post: Launchy