Docker Security – Part 1 (Overview)

There is a general perception that Containers, especially Docker Containers, are insecure. It is true that Containers are not as secure as VMs, since all Containers on a single machine share the same kernel, and compromising one Container can lead to a host-level compromise or the compromise of other Containers. However, there are many ways to harden Containers, and the Docker team has put in a lot of effort to make Docker Containers secure. Docker release 1.10 introduced new security features like seccomp profiles, user namespaces, and an authorization plugin that further enhance Docker security.

In this four part blog series on Docker security, I will cover the following:

  • The first part will give an overview of Docker security and its different components.
  • The second part will focus on Docker engine security and associated Linux kernel capabilities.
  • The third part will focus on secure access to Docker engine.
  • The fourth part will focus on Container image security.

To better understand Docker security, I have classified it into the following categories:


Docker Engine security:

The Docker engine does the heavy lifting of running and managing Containers. It uses Linux kernel features like Namespaces and Cgroups to provide basic isolation across Containers. More advanced isolation can be achieved using kernel features like Capabilities, Seccomp, and SELinux/AppArmor. The granularity of control increases as we move from Capabilities to Seccomp to SELinux/AppArmor. Docker exposes these kernel capabilities either at the Docker daemon level or at the individual Container level.
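As a concrete sketch of the per-Container controls mentioned above (the image name and profile path are placeholders, not a recommendation for a particular setup):

```shell
# Drop all capabilities, then add back only the one nginx needs
# to bind to privileged port 80
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx

# Apply a custom seccomp profile to a Container
# (/path/to/profile.json is a placeholder)
docker run -d --security-opt seccomp=/path/to/profile.json nginx
```

The first form is usually preferable to adding capabilities one by one: start from nothing and grant only what the workload demonstrably needs.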

Docker engine secure access:

The Docker client can access the Docker engine locally using a Unix socket or remotely over HTTP. The typical use case is to run the Docker client remotely. In this scenario, HTTPS with TLS should be used so that confidentiality, integrity and authentication are ensured. Using the authorization plugin capability added in Docker 1.10, access to the Docker engine API can be controlled based on the user ID and the policy associated with that user ID. For example, we can say that user A can create and modify Containers while user B can only view Containers in read-only mode.
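A sketch of what such a TLS setup can look like, assuming the CA, server and client certificates have already been generated (the file names and hostname below are placeholders; in the Docker 1.10 era the daemon was started with `docker daemon`):

```shell
# Daemon side: listen on TCP and require TLS client certificates
docker daemon --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H=0.0.0.0:2376

# Client side: connect over TLS, presenting a client certificate
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=myhost.example.com:2376 ps
```

With `--tlsverify` the daemon rejects any client whose certificate was not signed by the given CA, so authentication works in both directions.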

Container image security:

Container images are stored either in a private repository or a public repository. Docker provides the following options for storing Container images:

  • Docker hub – This is a public registry service provided by Docker.
  • Docker registry – This is an open source project that users can use to host their own registry.
  • Docker trusted registry – This is Docker’s commercial implementation of Docker registry, and it provides role-based user authentication along with LDAP directory service integration.
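The open source registry from the list above can be tried out with a single command; a minimal local sketch (image names are examples) looks like this:

```shell
# Run the open source registry locally on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# Tag an image for the local registry and push it there
docker tag nginx localhost:5000/mynginx
docker push localhost:5000/mynginx
```

A registry run this way is unauthenticated and unencrypted, so for anything beyond local experiments it should be put behind TLS as described below.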

Irrespective of the registry approach used, Container images need to be signed so that the user downloading an image is assured that it comes from a reliable source and has not been modified in transit. Docker recently also added support for hardware keys using YubiKey. Docker registry access should also be secured through TLS.
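Image signing and verification is exposed through Docker Content Trust; a minimal sketch of enabling it on the client side:

```shell
# Enable Docker Content Trust for this shell session:
# pushes create signatures, pulls verify them
export DOCKER_CONTENT_TRUST=1

# With content trust enabled, this pull succeeds only if the
# tag has a valid signature from the image publisher
docker pull nginx:latest
```

With the variable unset, the same pull would succeed regardless of signatures, which is why enabling content trust by default is worth considering for production hosts.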

A Container image is typically created using a Dockerfile, with a base image and a set of software installed on top of the base image as layers. Containers can have security vulnerabilities either because of the base image or because of the software installed on top of it. Docker is working on a project called Nautilus that security-scans Containers and lists their vulnerabilities. Nautilus works by comparing each Container image layer against a vulnerability repository to identify security holes.

Best practices:

Docker, along with the Linux kernel, provides various ways to secure Containers and the base OS. Besides the features described above, the following are a few other best practices that can be followed:

  • Have a separate Container for each micro-service.
  • Don’t run an ssh daemon inside a Container; “docker exec” can be used to get a shell inside a Container.
  • Keep Container images small.
  • Run applications as non-root. If root is needed, run as root only for the limited operations that require it.
  • Keep the host OS secure with regular updates. Using a Container-optimized OS is an option here, since these provide automatic pushed updates.
  • Store root keys and passphrases in a safe place. Docker has plans to manage keys with UCP (Universal Control Plane).
  • Use Docker official images. These images are curated by Docker so that the highest quality and security are maintained.
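The non-root practice from the list above can also be enforced at run time, even for images that don't set a user themselves; a small sketch (the uid/gid and image are arbitrary examples):

```shell
# Force the Container's main process to run as an unprivileged
# uid:gid instead of whatever the image defaults to
docker run --rm --user 1000:1000 busybox id
```

The `id` output inside the Container then shows uid 1000 rather than root, so a process escaping the Container would not hold root privileges.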

Running application as non-root:

If a Container runs as non-root, the impact of a compromise of that Container is minimal. In the Dockerfile of the official nginx Container, no USER directive is used, so the Container runs as root. One of the reasons the nginx Container runs as root is so that it can write its logs under the /var directory.
Following is the ID inside the nginx Container:

$ docker exec -ti mynginx sh
# id
uid=0(root) gid=0(root) groups=0(root)

In the Dockerfile of the official jenkins Container, by contrast, the USER directive is used with the “jenkins” user. This makes the jenkins Container more secure.
Following is the ID inside the jenkins Container:

$ docker exec -ti 47b1bfbd5ca5 sh
$ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
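The same pattern can be applied in your own images. A minimal Dockerfile sketch (base image, user name and command are illustrative, not taken from the official jenkins Dockerfile):

```dockerfile
FROM debian:jessie

# Create an unprivileged system user and group for the application
RUN groupadd -r app && useradd -r -g app app

# Everything from here on, including the running Container,
# executes as the unprivileged "app" user
USER app

CMD ["myapp"]
```

Any files the application must write to (logs, spool directories) need to be chown'd to that user in an earlier RUN step, which is exactly the detail that keeps images like nginx running as root.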

Over the next posts in this series, I will cover the internals of the different Docker security features. For Container-based production deployments, a good security scheme for the host OS as well as the Containers is critical, and it should be planned from the initial deployment.


