Category Archives: Virtualbox

Experimental Docker with Docker machine

The Docker experimental channel is used to release experimental Docker features so that users can try them out and provide feedback. It is safer to run experimental Docker in a test environment than to upgrade Docker on the main development machine, and the preferred approach is to use docker-machine to create a VM running experimental Docker. In this blog, I will describe the approach I use to create a docker-machine VM with experimental Docker. For the basics of Docker machine, please refer to my blog on Docker machine.

Following are the steps needed to build the experimental boot2docker ISO and copy it to the docker-machine default location:

git clone https://github.com/boot2docker/boot2docker.git
cd boot2docker
docker build -t my-boot2docker-img -f Dockerfile.experimental .
docker run --rm my-boot2docker-img > boot2docker.iso
mv boot2docker.iso ~/.docker/machine/cache/boot2docker.iso

We need to specify the Docker experimental release location in Dockerfile.experimental. In this case, it is https://experimental.docker.com/builds/Linux/x86_64/docker-latest.
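
For reference, the download step inside Dockerfile.experimental looks roughly like this; the exact line varies across boot2docker revisions, so treat it as a sketch:

# fetch the experimental Docker binary into the ISO root filesystem
RUN curl -fL -o $ROOTFS/usr/local/bin/docker https://experimental.docker.com/builds/Linux/x86_64/docker-latest && \
    chmod +x $ROOTFS/usr/local/bin/docker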

The following command will start a Docker machine in Virtualbox with experimental Docker:

docker-machine create -d virtualbox exp
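
Once the machine is up, the Docker client can be pointed at it:

# export DOCKER_HOST and the TLS variables for the "exp" machine
eval $(docker-machine env exp)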

Following is the experimental Docker version running on my host:

$ docker --version
Docker version 1.11.0-dev, build 6c2f438, experimental

Installing custom software in the boot2docker image:
I had a recent use case where I needed boot2docker to have ipvsadm installed, and boot2docker does not come with a package manager. In addition to installing ipvsadm, I had to copy a few libraries. Following is the boot2docker.experimental file that I used for this use case:

FROM boot2docker/boot2docker
MAINTAINER Sreenivas Makam ""

#DESCRIPTION use the latest experimental build of Docker

# install ipvsadm in the build container, then copy the binary and its library dependencies into the ISO root filesystem
RUN apt-get update && apt-get install -y ipvsadm
RUN cp /sbin/ipvsadm $ROOTFS/sbin/
RUN cp /lib/x86_64-linux-gnu/libnl-genl-3.so.200 $ROOTFS/lib/libnl-genl-3.so.200
RUN cp /lib/x86_64-linux-gnu/libnl-3.so.200 $ROOTFS/lib/libnl-3.so.200
RUN cp /lib/x86_64-linux-gnu/libpopt.so.0 $ROOTFS/lib/libpopt.so.0

#get the latest experimental docker
RUN cd $ROOTFS/usr/local/bin && \
    curl -fL -O https://experimental.docker.com/builds/Linux/x86_64/docker-1.12.0-rc4.tgz && \
    tar -xvzf docker-1.12.0-rc4.tgz --strip-components=1 && \
    chmod +x $ROOTFS/usr/local/bin/docker* && \
    rm docker-1.12.0-rc4.tgz

RUN echo "" >> $ROOTFS/etc/motd
RUN echo "  WARNING: this is an experimental.docker.com build, not a release." >> $ROOTFS/etc/motd
RUN echo "" >> $ROOTFS/etc/motd

RUN /make_iso.sh
CMD ["cat", "boot2docker.iso"]

Issue faced:
I was not able to use a custom boot2docker image with docker-machine version 0.8.0-rc1, as I could not find an option to prevent docker-machine from downloading the latest boot2docker image. I have opened an issue here. The only workaround I found was to copy the custom boot2docker image to ~/.docker/machine/cache/, disconnect from the internet, and then create the docker-machine host.
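
In other words, the workaround looks like this:

# stage the custom ISO where docker-machine expects it
cp boot2docker.iso ~/.docker/machine/cache/boot2docker.iso
# disconnect from the internet at this point so that docker-machine cannot fetch a newer ISO
docker-machine create -d virtualbox exp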


Baremetal cloud using Packet

Typical open source demo applications come packaged as a Vagrant application that starts a bunch of VMs and does automatic provisioning. I have a Windows machine with Virtualbox and VMWare player installed. Since Virtualbox does not support nested virtualization with 64-bit VMs (more details can be found in my previous blogs on Virtualbox and VMWare player), I use VMWare player to try out demo applications that need 64-bit VMs; the demo applications typically run on Linux 64-bit VMs, so running them on Windows with Virtualbox is ruled out. I was recently trying the Mantl project for deploying distributed microservices and found that it was very slow to run in VMWare player with nested virtualization. I then tried to run the application in AWS, but AWS does not support nested virtualization (more details can be found here). Next I tried Google cloud; even though Google cloud allows creating nested VMs, hardware virtualization is disabled inside the guest VMs, and this prevents running 64-bit VMs inside Google cloud VMs. After I ran out of these options, I stumbled upon the possibility of using baremetal cloud. I used baremetal cloud from Packet, and it worked great for the use case mentioned above. Though this is not a typical use case, I was very happy with the performance and the possibilities this provides. In this blog, I will share the use cases for baremetal cloud and my experiences with using Packet's service.
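
A quick way to check whether a given machine or guest VM exposes hardware virtualization (and can therefore host 64-bit nested VMs) is to look for the vmx/svm CPU flags:

# prints a non-zero count if Intel VT-x (vmx) or AMD-V (svm) is visible
egrep -c '(vmx|svm)' /proc/cpuinfo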

Bare metal cloud use case

Typical cloud providers like Amazon, Google, Digitalocean and Microsoft rent out VMs as part of their compute offering. These VMs run on top of a hypervisor, and though the user is guaranteed a specific level of performance, the VMs share resources with other VMs running on the same host machine. With bare metal cloud, the provider rents out physical machines that are not shared with anyone else. Providers offer different bare metal configurations, the user chooses one based on their performance needs, and pricing is based on the performance the bare metal server provides. Following are some advantages that bare metal cloud provides:

Continue reading Baremetal cloud using Packet

Hashicorp Atlas workflow with Vagrant, Packer and Terraform

I have used and loved Vagrant for a long time, and I recently used Consul; I was very impressed by both of these Devops tools. After watching some of the Hashiconf videos, I learnt that Hashicorp has an ecosystem of tools addressing Devops needs, and that these tools can be chained together to create a complete application delivery platform from development to production. Atlas is Hashicorp's product that combines its open source tools into a platform, and it has a commercial version as well. In this blog, I will cover a development-to-production workflow for a LAMP application stack using Atlas, Vagrant, Packer and Terraform.

Overview of Vagrant, Packer, Terraform and Atlas

Vagrant

Vagrant provides a repeatable VM development environment. Vagrant integrates well with major hypervisors like Virtualbox, VMWare and HyperV. A "Vagrantfile" describes the VM settings as well as the initial bootstrap provisioning that needs to be done on the VM. Vagrant also integrates well with provisioning tools like Chef, Puppet and Ansible to describe the provisioning. Simply by doing "vagrant up", the complete VM environment is exactly reproduced, and the typical "it works on my machine but not on yours" problem goes away.
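
A minimal workflow, using the publicly available hashicorp/precise64 box as an example:

vagrant init hashicorp/precise64   # generate a Vagrantfile for the named box
vagrant up                         # boot and provision the VM
vagrant ssh                        # log into the running VM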

Packer

Packer is a tool for creating machine images for providers like Virtualbox, VMWare, AWS and Google cloud. The Packer configuration is described in a JSON file, and images for multiple providers can be built in parallel. The typical workflow is for the developer to create the development environment in Vagrant and, once it becomes stable, build the production image with Packer. Since the provisioning is baked into the image, deployment of production images becomes much faster. The following link describes how Vagrant and Packer fit together.
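
The command-line workflow is short; template.json below is a placeholder for your own template:

packer validate template.json   # sanity-check the template before building
packer build template.json      # build images for all builders defined in the template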

Continue reading Hashicorp Atlas workflow with Vagrant, Packer and Terraform

Connecting VMs between Virtualbox and VMWare Player

I had written blogs earlier on using Virtualbox and VMWare Player. I recently had a need to connect VMs running on Virtualbox and VMWare player on my Windows laptop. I found the procedure mentioned in this link very useful. There are 2 options.

  1. Use the bridged mechanism. Create a networking interface with a bridged adapter on Virtualbox and map it to one of the physical adapters. Create a networking interface with a bridged adapter on VMWare player and map it to the same physical adapter as Virtualbox. In this option, the connection between the 2 hosts is established through the external router, and it is possible that the router blocks this communication. Even after clearing the firewall rules on the router, I was not able to get the Virtualbox VM to talk to the VMWare player VM with this approach. I was able to get Virtualbox VMs using bridged adapters to talk to each other, and the same with VMWare player. I suspect there is some filtering enforced by either Virtualbox or VMWare, but I have not been able to find the reason.
  2. Use the host-only adapter mechanism. Create a host-only adapter on Virtualbox, then create a networking interface on the VM and map it to that host-only adapter. The host-only adapter can be configured with or without DHCP. Open "vmnetcfg" to bridge one of the VMWare adapters to the host-only adapter created with Virtualbox; I have captured the details of "vmnetcfg" in my VMWare player blog. On the VM running under VMWare player, create a networking interface and map it to the custom VMNetx interface that was bridged to the Virtualbox adapter. After this, the 2 VMs will be able to talk to each other over the host-only network. This option does not depend on the external router and its firewall. A command-line sketch of the Virtualbox side follows this list.
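
For reference, the Virtualbox host-only adapter can also be created from the command line; the adapter name below is the Windows default and may differ on your machine:

VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0
# optionally disable the built-in DHCP server if you prefer static addressing
VBoxManage dhcpserver modify --ifname "VirtualBox Host-Only Ethernet Adapter" --disable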


Virtualbox and VM management

In one of my previous posts, I had indicated that it is difficult to resize a VM hard disk after the VM has been created. My friend pointed me to a few links on how to do this, and it turned out to be pretty straightforward.
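
For example, a dynamically allocated VDI disk can be grown with VBoxManage; the disk file name below is illustrative, and the partition inside the guest still has to be expanded separately:

# grow the virtual disk image to 20 GB (the size argument is in MB)
VBoxManage modifyhd ubuntu-dev.vdi --resize 20480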

Also, I learnt that when we try different applications and configurations in a VM, it is very easy for the VM OS to get into a bad state, so it is necessary to take snapshots, backups or clones and restore them to be on the safe side.
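
Snapshots, for instance, can be taken and restored from the command line; the VM name below is illustrative:

# save the current state of the VM under a named snapshot
VBoxManage snapshot "ubuntu-dev" take "before-experiment" --description "known good state"
# roll the (powered-off) VM back to that snapshot later
VBoxManage snapshot "ubuntu-dev" restore "before-experiment"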

In this blog, I will cover some of the common tasks needed from a VM management perspective, like VM backup and restore and resizing the VM hard disk. I will describe this in the context of Virtualbox, but a lot of it applies to other VM hosting software like VMWare player. I will also cover some things that I have learnt about Virtualbox along the way.

Continue reading Virtualbox and VM management