Linux Docker base images

Recently, someone asked me how to create a Docker base image for a Linux variant that they are creating. In this blog, I will cover what a Linux base image is and how to create new base images.

Linux Docker base image

Following is a sample Dockerfile to create a Python webserver Docker container image.

FROM ubuntu:14.04

# Update the sources list
RUN apt-get update

# Install Python and pip
RUN apt-get install -y python2.7 python-pip

# Install app dependencies
RUN pip install Flask==0.9.0

# Bundle app source
COPY simpleapp.py /src/simpleapp.py

EXPOSE 8000
CMD ["python", "/src/simpleapp.py", "-p", "8000"]

In the above example, we have given “FROM ubuntu:14.04” in the first line; ubuntu:14.04 is the base image used here. One common question I get asked is whether this means that the container contains everything present in an Ubuntu VM. That is not true. What the ubuntu container image contains is the packages, libraries and tools associated with Ubuntu, along with the root filesystem. To give a size comparison, the Ubuntu container image is around 180 MB, while an Ubuntu VM image is around 1 GB.
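
The image can be built and run with commands like the following (a minimal sketch; the image name “simpleapp” is just an example):

# Build the image from the directory containing the Dockerfile and simpleapp.py
docker build -t simpleapp .

# Run the container, publishing port 8000 on the host
docker run -d -p 8000:8000 simpleapp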

Docker Hub contains base container images for all major distributions, including Ubuntu, RHEL, CentOS and Debian. It is always better to use the official images, as they carry the latest security patches. The official images are maintained by the distribution vendors, who work closely with Docker.

Creating a Linux Docker container base image

The Docker source repository provides a script, “mkimage.sh”, that can be used to create base images for different Linux variants. When Docker is installed on Ubuntu 14.04, “mkimage.sh” is available in “/usr/share/docker-ce/contrib/”. Following output shows the “help” for “mkimage.sh”.

$ ./mkimage.sh --help
usage: mkimage.sh [-d dir] [-t tag] [--compression algo| --no-compression] script [script-args]
   ie: mkimage.sh -t someuser/debian debootstrap --variant=minbase jessie
       mkimage.sh -t someuser/ubuntu debootstrap --include=ubuntu-minimal --components=main,universe trusty
       mkimage.sh -t someuser/busybox busybox-static
       mkimage.sh -t someuser/centos:5 rinse --distribution centos-5
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4
       mkimage.sh -t someuser/mageia:4 mageia-urpmi --version=4 --mirror=http://somemirror/
       mkimage.sh -t someuser/solaris solaris

Each Linux distribution provides a helper script to create the base filesystem. For example, “debootstrap” is used for Debian/Ubuntu variants and “rinse” is used for CentOS variants. The helper script installs the necessary packages and sets up the root filesystem, and “mkimage.sh” imports the resulting root filesystem into a Docker container image. If you are creating a new flavor of Linux and it is based on one of the major distributions, you can extend the corresponding helper script. Otherwise, you can create a new helper script based on the current examples. It would be good to contribute any new helper script back to “mkimage” in the Docker repository.

Following is an example of creating a Debian Jessie base image with mkimage.sh:

sudo ./mkimage.sh -t smakam/debian:jessie debootstrap jessie

The above command creates a new container image, “smakam/debian:jessie”, pulling the necessary packages from the Debian repository.
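
Once the command completes, the new base image shows up locally and can be used like any other image. A quick check (assuming the image name above):

# List the newly created base image
docker images smakam/debian

# Start a container from the base image and verify the distribution
docker run --rm smakam/debian:jessie cat /etc/os-release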


Dockercon 2017 – My experiences

Dockercon 2017 was the first Docker global conference that I attended. The conference was hosted in Austin, Texas. It was a memorable experience, and I had a lot of fun attending. In this blog, I will share some of my experiences from Dockercon 2017, covering the important announcements, the keynote demos, cool hacks, the sessions I attended, the security workshop conducted by me and the Docker team, and my key takeaways.

Key announcements

Following were the key announcements made as part of the keynote sessions:

  • Moby open source project – Moby is a framework to assemble specialized container systems. Docker is one of the container systems assembled from Moby, and users can create other container systems. For example, one of the demos in the keynote used Moby to build a container system that runs Kubernetes on Mac. Moby is an effort to keep the Docker open source projects and the Docker product separate.
  • Linuxkit – Linuxkit is a toolkit for building custom, minimal and immutable Linux distributions. It is used by Microsoft to run Linux containers on Windows. Linuxkit is one of the components of Moby and allows us to build a bootable container system that can run either on bare metal or in the cloud. A minimal Linuxkit config is sketched after this list.
  • IBM is running Docker on their PowerPC and Z systems.
  • Oracle’s enterprise database is available in the Docker Store and can be tried for free for personal use.
  • I am glad to mention the Cisco announcements: Cisco and Docker are partnering on the Modernizing Traditional Applications (MTA) program, and Contiv 1.0 is now GA.
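
As a taste of what Linuxkit input looks like, following is a minimal sketch of a Linuxkit YAML config; the image tags here are illustrative, and the real examples in the linuxkit repository pin exact versions. The build tool ("moby build" at launch, later "linuxkit build") turns a config like this into a bootable image.

# Kernel and init images that form the bootable base (tags are illustrative)
kernel:
  image: linuxkit/kernel:4.9.39
  cmdline: "console=tty0 console=ttyS0"
init:
  - linuxkit/init:v0.1
  - linuxkit/runc:v0.1
# One-shot containers that run in order at boot
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:v0.1
# Long-running system services, each in its own container
services:
  - name: getty
    image: linuxkit/getty:v0.1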

Keynote demos

Live demos are a key part of Dockercon. These demos were done as part of the keynote sessions from Solomon and Ben.

  • Multi-stage Docker builds to reduce Docker image size (a sample Dockerfile is sketched after this list), and desktop-to-cloud integration for moving applications across Swarms.
  • Deploying an application with multiple services securely on a Docker Swarm cluster. The application was deployed with Docker Compose, using TLS and secrets.
  • Secure supply chain using Docker Datacenter (DDC), security scanning and Docker secrets.
  • Deploying 3rd-party VM applications in containers using image2docker and Docker Datacenter. image2docker migrates VMs to containers, which is helpful for migrating legacy applications.
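
Following is a minimal sketch of a multi-stage Dockerfile of the kind shown in the keynote; the Go application, file names and image tags are illustrative:

# Build stage: compile a static Go binary
FROM golang:1.8 AS build
WORKDIR /src
COPY app.go .
RUN CGO_ENABLED=0 go build -o /app app.go

# Final stage: copy only the binary, so the shipped image stays small
FROM alpine:3.6
COPY --from=build /app /app
CMD ["/app"]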

Hacks

Following 2 hacks were done by Docker captains. These were selected from the many hacks submitted for Dockercon.

  1. PWD – Play with Docker
    PWD is a great tool for running Docker containers from the browser, without having to install Docker. It is great for workshops and is also a good tool for Docker beginners. For more details on PWD, please refer to my earlier blog here.
  2. FaaS – This is a framework for building serverless functions on Docker Swarm. The demo, with its integration with the Alexa voice service, was a cool one.

Sessions attended

Following are the sessions that I attended over the Dockercon week:

  • Cilium: Network and application security using BPF and XDP
    • Berkeley Packet Filter (BPF) and eXpress Data Path (XDP) run in the Linux kernel.
    • Learnt about use cases where BPF enforces policy at the network layer inside the Linux kernel.
    • Cilium can be used as a Docker networking plugin.
    • XDP extends BPF to network drivers, which makes packet filtering even faster. Facebook says that XDP switches packets 10 times faster.
  • Solving the storage problem for cloud-native applications – Portworx
    • Portworx is a container storage solution trying to solve the big problem of persistent container storage. This is a complex problem, and there are many players trying to address it.
  • Scaling app defense with intent-based security – Twistlock
    • This session went into the details of the Twistlock container security platform, which can create dynamic security policies automatically.
  • Docker networking: from application plane to data plane
    • Covered Docker networking from its beginnings to the present, including tools to debug common Docker networking issues.
  • Infinit’s next generation key value store
    • Covered how Infinit’s solution is unique, distributed and scalable. An object store and a file system can sit on top of the key-value store; this is targeted for the 4th quarter of this year.
  • Journey to Docker production: evolving your infrastructure and processes
    • Talk from Docker Captain Bret Fisher, explaining the production considerations for small and big Docker clusters.
  • Container performance analysis – Netflix
    • Netflix tools to debug container performance. Covered tools like Netflix Vector, Titus and flame graphs.
  • From ARM to Z: multi-platform Docker swarm
    • Cross-platform containers using the manifest tool. The same container image name can be used across multiple platforms, so developers don’t need to remember platform details.
  • Building a secure app with Docker
    • Best practices to be followed for building secure applications

I am eagerly waiting to watch the recordings of the other sessions.

Security workshop

I conducted the Docker security workshop along with Nigel, Nass and Matt. Nigel is a Docker captain, and Nass and Matt are from the Docker team. Around 50 folks attended the session. It was a 3-hour session with presentations and labs on different Docker security topics, including Swarm mode, content trust, security scanning, networking, secrets and Linux container security features. The labs were done on the AWS cloud. The session was interactive, and we got interesting questions from the audience. The labs and the slides are posted here.
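
As a small taste of one of the lab topics, content trust can be turned on with a single environment variable (a minimal sketch; the image is just an example):

# Enable Docker Content Trust so that pulls and pushes verify image signatures
export DOCKER_CONTENT_TRUST=1

# This pull now succeeds only if the tag has valid signed metadata
docker pull ubuntu:14.04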

What I enjoyed the most

  • Meeting in person the folks with whom I have interacted over email and Slack
  • Keynote demos
  • Interacting with other Docker captains. Docker has an amazing Captains group, and I am privileged to be part of it. From Bangalore, 2 other Docker captains, Neependra and Ajeet, also attended the conference. The following picture was taken at the Captains summit.

(Picture: Docker captains at the Captains summit)

  • Captains discussion with Solomon Hykes.
  • Presenting and interacting with folks in the Docker security workshop
  • Seeing the overall Docker excitement among attendees
  • Talking to companies at their booths and understanding the container ecosystem
  • The after-conference party every day…


Comparing Docker deployment options in public cloud

A few weeks back, I gave a presentation at the Container Conference in Bangalore, comparing the different solutions available to deploy Docker in the public cloud.

Slides are available here. I have also put the necessary steps, along with a short video for each of the options, on the github page here.

Abstract of the talk:

Containers provide portability for applications across private and public clouds. Since there are many options to deploy Docker containers in the public cloud, customers get confused in the decision-making process. I will compare Docker Machine, Docker Cloud, Docker Datacenter, Docker for AWS/Azure/Google Cloud, AWS ECS, Google Container Engine and Azure Container Service. A sample multi-container application will be deployed using the different options. The deployment differences, including the technical internals of each option, will be covered. At the end of the session, the user will be able to choose the right Docker deployment option for their use case.
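
As a quick taste of the simplest of these options, Docker Machine can provision a Docker host in the public cloud with a couple of commands (a minimal sketch; the region and host name are illustrative, and AWS credentials are assumed to be set in the environment):

# Provision an EC2 instance and install Docker on it
docker-machine create --driver amazonec2 --amazonec2-region us-west-2 aws-docker-host

# Point the local Docker client at the new host and verify
eval $(docker-machine env aws-docker-host)
docker info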

Note:

  • I have focused mainly on Docker-centric options in the comparison.
  • There are a few CaaS platforms, like Tectonic and Rancher, that I have not included since I did not get a chance to try them.
  • Since all the solutions are under active development, some of these gaps will get covered in the future.

Comparing Docker compose versions

In this blog, I have captured some of my learnings on Docker Compose files and how they differ between versions. Docker Compose is a tool for defining and running multi-container Docker applications. I have used the famous multi-container voting application to illustrate the differences between Compose versions.

Following are some questions that I have tried to answer in this blog:

  • What is the difference between Compose versions 1, 2 and 3? (a quick sketch follows this list)
  • What is the difference between compose, stack and dab formats?
  • What are different ways to run compose files with different compose versions?
  • How does “docker stack deploy” really work?
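
To make the version differences concrete before diving in, following is the same trivial service written in v1 and v3 formats (a minimal sketch; the service name and image are illustrative):

# Compose v1: no version key, services sit at the top level
web:
  image: nginx:1.13
  ports:
    - "80:80"

# Compose v3: explicit version, services under a "services" key,
# and swarm-specific options under "deploy"
version: "3"
services:
  web:
    image: nginx:1.13
    ports:
      - "80:80"
    deploy:
      replicas: 2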

Compose versions:

Following table captures the main differences between Compose versions:

Continue reading Comparing Docker compose versions

Kubernetes CRI and Minikube

Kubernetes CRI (Container Runtime Interface) was introduced in experimental mode in the Kubernetes 1.5 release. CRI introduces a common container runtime layer that allows the Kubernetes orchestrator to work with multiple container runtimes like Docker, rkt, runc and Hypernetes, and it makes it easy to plug a new container runtime into Kubernetes. The Minikube project simplifies Kubernetes installation for development and testing purposes: both the Kubernetes master and worker components run in a single VM, which makes it easy for developers and users of Kubernetes to try it out. In this blog, I will cover the basics of Minikube usage, an overview of CRI and the steps to try out CRI with Minikube.

Minikube

Kubernetes is composed of multiple components, and beginners normally get overwhelmed by the installation steps. It also helps to have a lightweight Kubernetes environment for development and testing purposes. Minikube packs all the Kubernetes components into a single VM that runs on the local laptop; both master and worker functionality are combined in that single VM.
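
Typical Minikube usage looks like the following (a minimal sketch; output and flags vary by version):

# Start a single-node Kubernetes cluster inside a local VM
minikube start

# kubectl is pointed at the Minikube cluster; verify the node
kubectl get nodes

# Open the Kubernetes dashboard in a browser
minikube dashboard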

Following are some major features present in Minikube:

Continue reading Kubernetes CRI and Minikube

Docker 1.13 Experimental features

Docker 1.13 was released last week. Some of the significant new features include Compose support to deploy Swarm mode services, backward compatibility between Docker client and server versions, “docker system” commands to manage the Docker host, and a restructured Docker CLI. In addition to these major features, Docker introduced a bunch of experimental features in the 1.13 release. In every release, Docker introduces a few new experimental features: features that are not yet ready for production use. Docker puts them out in experimental mode to collect feedback from users and make modifications before the feature is officially released in a later version. In this blog, I will cover the experimental features introduced in Docker 1.13.

Following are the regular features introduced in Docker 1.13:

  • Deploying a Docker stack on a Swarm cluster with Docker Compose.
  • Docker CLI backward compatibility with the Docker daemon. This allows a newer Docker CLI to talk to older Docker daemons.
  • New Docker CLI groupings like “docker container” and “docker image” that collect related commands under a sub-keyword.
  • Docker system details using “docker system” – this helps in maintaining the Docker host, cleaning it up and getting container usage details (see the sketch after this list).
  • Docker secret management.
  • “docker build” with a compress option for slow connections.
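
Following is a minimal sketch of the new “docker system” commands mentioned above:

# Show disk usage by images, containers and volumes
docker system df

# Remove stopped containers, unused networks and dangling images
docker system prune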

Following are the 5 features introduced in experimental mode in Docker 1.13:

  • An experimental daemon flag to enable experimental features, instead of having a separate experimental build.
  • A “docker service logs” command to view the logs of a Docker service in Swarm mode (see the sketch after this list).
  • An option to squash image layers into the base image after successful builds.
  • Checkpoint and restore support for containers.
  • Metrics (Prometheus) output for basic container, image, and daemon operations.
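
Two of these can be tried as follows once experimental mode is on (a minimal sketch; the service and image names are illustrative):

# View the logs of a swarm service (experimental in 1.13)
docker service logs web

# Squash the newly built layers into a single layer (experimental in 1.13)
docker build --squash -t myapp .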

Experimental Daemon flag

Docker released experimental features prior to the 1.13 release as well. In earlier releases, users needed to download a separate experimental build to try out experimental features. To avoid the unnecessary overhead of maintaining different builds, Docker introduced an experimental flag, or option, to the Docker daemon so that users can start the daemon with or without experimental features. With the Docker 1.13 release, the experimental flag is itself in experimental mode.

By default, the experimental flag is turned off. To see whether experimental features are enabled, check the Docker version output.
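
Following is a minimal sketch of turning the flag on and checking it:

# Start the Docker daemon with experimental features enabled
sudo dockerd --experimental &

# Check whether the daemon is running in experimental mode
docker version -f '{{.Server.Experimental}}'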

Continue reading Docker 1.13 Experimental features

Hybrid cloud recent solutions from Microsoft and VMware – 2 different ends of the hybrid cloud spectrum

Public clouds have grown tremendously over the last few years, and there are very few companies who do not use the public cloud at this point. Even traditional enterprises with in-house data centers have some presence in the public cloud. I was looking at the details of Amazon’s re:Invent conference, and I was amazed by the number of new services and enhancements announced this year. It is very difficult for private clouds to keep pace with the new features of the public cloud, and there is no doubt that public clouds will overtake private clouds in the long term. Still, private clouds are widely deployed, and there will be enough use cases for deploying them for quite some time: regulated industries, compute needed in remote locations without access to the public cloud, and specialized requirements that public clouds cannot meet. For some enterprises, a private cloud also makes more sense from a cost perspective. Having a hybrid cloud option is a safe bet for most companies, as it provides the best of both worlds.

I saw 2 recent announcements in hybrid cloud that captured my attention. One is Azure Stack, which allows running the Azure stack in a private cloud. The other is VMware Cloud on AWS, which allows running the entire VMware stack in the AWS public cloud. I see these two services as 2 ends of the hybrid cloud spectrum: in one case, public cloud infrastructure software is made to run on a private cloud (Azure Stack), and in the other, private cloud infrastructure software is made to run on a public cloud (VMware Cloud on AWS). In this blog, I have tried to capture more details on these 2 services.

Hybrid cloud

There are predominantly 2 options currently to run a private cloud. One option is to use vendor-based cloud management software along with hardware from the same vendor.

Continue reading Hybrid cloud recent solutions from Microsoft and VMware – 2 different ends of the hybrid cloud spectrum