Baremetal cloud using Packet

Typical open source demo applications come packaged as Vagrant applications that start a bunch of VMs and do automatic provisioning. I have a Windows machine with VirtualBox and VMware Player installed. Since VirtualBox does not support nested virtualization with 64-bit VMs (more details can be found in my previous blogs on VirtualBox and VMware Player), I use VMware Player to try out demo applications that need 64-bit VMs. The demo applications typically run on Linux, so on Windows they have to run inside VMs, which rules out VirtualBox for the 64-bit cases. I was recently trying out the Mantl project for deploying distributed microservices, and I found that it was very slow to run in VMware Player with nested virtualization. I tried to run the application in AWS and found that AWS does not support nested virtualization (more details can be found here). Then I tried Google Cloud. Even though Google Cloud supports nested virtualization, hardware virtualization is disabled on the guest VMs, and this prevents running 64-bit VMs inside Google Cloud VMs.

After I ran out of these options, I stumbled upon the possibility of using bare metal cloud. I used the bare metal cloud from Packet, and it worked great for the use case mentioned above. Though this is not a typical use case, I was very happy with the performance and the possibilities this provides. In this blog, I will share the use cases for bare metal cloud and my experiences with the Packet service.

Bare metal cloud use cases

Typical cloud providers like Amazon, Google, DigitalOcean, and Microsoft rent out VMs as part of their compute offerings. These VMs run on top of a hypervisor. Though the user is guaranteed a specific level of performance, the VMs share the resources of the host machine with other VMs running on it. With bare metal cloud, the provider rents out machines that are not shared with anyone else. Providers offer different bare metal configurations, the user chooses one based on their performance needs, and the pricing is based on the performance of the chosen server. Following are some advantages that bare metal cloud provides:

  • The user is guaranteed a dedicated host that is not shared with anyone.
  • Application performance is better since there is no virtualization layer in the middle.
  • In a regular cloud offering, a security compromise in one VM can impact other VMs on the same host. Bare metal cloud is better from a security perspective since there is no resource sharing.
  • For pure container-based solutions, bare metal cloud works better. Rackspace is exploring this with its Carina product, which deploys containers directly on Rackspace bare metal servers and manages them using native Docker tools.
  • For some applications, it works out better from a licensing perspective.

Bare metal cloud providers

Other than big players like IBM and Rackspace, there are also startups like Packet and Scaleway that provide bare metal cloud. Scaleway provides SSD cloud servers.

Bare metal cloud from Packet

Packet provides a bare metal cloud service. Following are some characteristics of the service:

  • Bare metal servers can be rented on an hourly basis.
  • There are only four server configurations to choose from. The low-end Type 0 server comes with 4 cores, 8 GB RAM, and an 80 GB SSD. The high-end Type 2 server comes with 24 cores, 256 GB RAM, and 2.8 TB of SSD storage. Type 0 costs $0.05 per hour and Type 2 costs $1 per hour.
  • Block storage and load balancing are available as add-on services.
  • Currently, the supported operating systems are CentOS, Debian, Ubuntu, and CoreOS.
  • The Packet API provides RESTful, programmatic access to the Packet ecosystem (see the sketch after this list).
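
As a minimal sketch of what an API call looks like, the following lists the projects in an account. The https://api.packet.net endpoint and the X-Auth-Token header are what Packet's API documentation describes as of this writing; PACKET_API_TOKEN is a placeholder for an API token generated in the portal:

# List the projects in the account (a read-only call).
# Replace PACKET_API_TOKEN with a token created in the Packet portal.
curl -s -H "X-Auth-Token: PACKET_API_TOKEN" \
     -H "Content-Type: application/json" \
     https://api.packet.net/projects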

Following are the steps to use the Packet service:

  • Create a user account. Credit card details need to be supplied, but the card is charged only when servers are deployed.
  • Create a project first, and create servers within that project. Servers within a project can communicate with each other without opening up ports. Projects can also be used for collaboration purposes.
  • Create an SSH key locally using the “ssh-keygen” tool and upload the public key to the Packet service. The public keys get embedded in the servers created, and the user can then use the corresponding private key to SSH into a node (see the example after this list).
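
The key setup is standard OpenSSH usage. A quick sketch follows; the key file name is an arbitrary choice, and the login user can vary by OS image (root is typical for a freshly provisioned server, but verify for your image):

# Generate a key pair locally; the file name is an arbitrary choice
ssh-keygen -t rsa -b 4096 -f ~/.ssh/packet_rsa

# Print the public key and paste it into the Packet portal under SSH keys
cat ~/.ssh/packet_rsa.pub

# Once a server is up, log in with the matching private key
ssh -i ~/.ssh/packet_rsa root@<server-ip>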

My experiences with Packet

It was a very smooth experience for me. I tried creating a Type 0 server with Ubuntu for my use case. It took between 8 and 10 minutes for the server to be created and become usable. This is longer than getting a compute instance in AWS or Google Cloud, since spawning a new VM is much faster than provisioning a physical server. Since mine was a demo use case, I did not want to leave the server running all the time and keep getting charged. To overcome this, I would need to save the server data in block storage and restore it later. I have not tried this, but according to Packet it should be possible.

Following is a screenshot of a Type 0 server with Ubuntu in one of Packet’s four data centers. This server is running inside the “freelance” project.

[Image: packet1]

I tried creating a 3-node CoreOS cluster, with each CoreOS node running on a Type 0 Packet bare metal server. Packet supports the latest Stable, Beta, and Alpha releases of CoreOS. We can supply user-data that sets up the base configuration of each CoreOS node. Since CoreOS is typically used to run container-based services, the extra virtualization overhead can be avoided entirely by running CoreOS directly on bare metal servers.

The following user-data should typically work; it starts the etcd2 and fleet services by default:

#cloud-config

coreos:
  etcd2:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
    # specify the initial size of your cluster with ?size=X
    discovery: https://discovery.etcd.io/a77cc9fb2b1c380a340a58b70e35f27d
    # multi-region and multi-cloud deployments need to use $public_ipv4
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
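
To generate a fresh discovery token for the discovery field above, one request to the public discovery service is enough; size=3 matches the 3-node cluster:

# Returns a unique discovery URL to paste into the user-data
curl "https://discovery.etcd.io/new?size=3"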

We need to generate the discovery token as shown above and update it in the user-data. With the above configuration, the etcd2 service was not starting up for me. After discussing with the Packet team, it looks like there is a bug in the current CoreOS installation on Packet. To work around this problem, they suggested that I use the following user-data. This is a temporary solution until they fix the default CoreOS installation.

#cloud-config

coreos:
  etcd2:
    # specify the initial size of your cluster with ?size=X
    discovery: https://discovery.etcd.io/ea4707d7e727e6c9be78623cfa6c4e9b
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      drop-ins:
        - name: 10-after-phone-home.conf
          content: |
            [Unit]
            Requires=oem-phone-home.service
            After=oem-phone-home.service
      command: start
    - name: fleet.service
      command: start

The following image shows a summary of the 3 CoreOS nodes running on Packet bare metal:

[Image: packet2]

The following images show the successful etcd cluster and fleet cluster:

[Image: packet3]

[Image: packet4]
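
For reference, the same checks can be done from a shell on any of the nodes; the etcdctl and fleetctl tools ship with CoreOS:

# Each etcd member should report as healthy
etcdctl cluster-health

# All 3 CoreOS machines should show up in the fleet cluster
fleetctl list-machines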
