Anthos is a hybrid/multi-cloud platform from GCP. Anthos allows customers to build their application once and run it in GCP or in any other private or public cloud. It unifies the control, management, and data planes when running a container-based application across on-premises environments and multiple clouds. Anthos was launched at last year's NEXT '18 conference and was made generally available recently. VMware integration is available now; integration with other clouds is planned on the roadmap. One of the components of Anthos is called "Migrate for Anthos", which allows direct migration of VMs into containers running on GKE. This blog will focus on "Migrate for Anthos": I will cover the need for it, the platform architecture, and a walkthrough of moving a simple application from a GCE VM into a GKE container. Please note that "Migrate for Anthos" is in beta now and is not yet ready for production.
Need for “Migrate for Anthos”
Modern application development typically uses microservices and containers to improve an application's agility. Containers, Docker, and Kubernetes provide the benefits of agility and portability to applications. It is easy to build a greenfield application using microservices and containers, but what should we do with applications that already exist as monoliths? Enterprises typically spend a lot of effort modernizing their applications, which can mean a long journey for many of them. What if we had an automatic way to convert VMs to containers? Does this sound like magic? Yes, "Migrate for Anthos" (earlier called V2K) does quite a bit of magic underneath to automatically convert VMs to containers.
The following diagram shows the different approaches that enterprises take in their modernization and cloud journey. The X-axis spans classic to cloud-native applications; the Y-axis spans on-premises to cloud.
Migrate and Modernize:
In this approach, we first do a lift-and-shift migration of the VMs to the cloud and then modernize the application into containers. Velostrata is GCP's lift-and-shift VM migration tool.
Modernize and Migrate:
In this approach, we first modernize the application on-premises and then migrate the modernized application to the cloud. If the on-premises application is modernized using Docker and Kubernetes, it can be migrated easily to GKE.
Migrate for Anthos:
Both of the above are two-step approaches. With "Migrate for Anthos", migration and modernization happen in the same step. Modernization is not fully complete in this approach: even though the VM is migrated to containers, the monolithic application is not broken down into microservices.
You might be wondering why we should migrate to containers if the monolithic application is not converted to microservices. There are some basic advantages to containerizing a monolithic application, including portability, better bin-packing, and integration with other container services like Istio. As a next step, the monolithic container application can be broken down into microservices; there are roadmap items in "Migrate for Anthos" that will facilitate this.
For some legacy applications, it might not make sense to break them down into microservices, and with this approach they can live as a single monolithic container for a long time. In a typical VM environment, we need to worry about patching, security, networking, monitoring, logging, and other infrastructure concerns; after migrating to containers, these come out of the box with GKE and Kubernetes. This is another advantage of "Migrate for Anthos".
“Migrate for Anthos” Architecture
"Migrate for Anthos" converts source VMs into system containers running in GKE. System containers, as compared to application containers, run multiple processes and applications in a single container. Initial support in "Migrate for Anthos" covers VMware VMs or GCE VMs as the source. The following changes are made to convert a VM into a container:
- The VM operating system is adapted to the kernel supported by GKE.
- VM system disks are mounted inside the container using a persistent volume (PV) and a StatefulSet.
- Networking, logging, and monitoring use GKE constructs.
- Applications that run inside the VM via systemd scripts run in the container's user space.
- During the initial migration phase, storage is streamed to the container using CSI. The storage can then be migrated to any storage class supported by GKE.
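As a rough sketch of what this means in Kubernetes terms, a migrated VM ends up described by objects along these lines. All names, sizes, and the image reference here are hypothetical placeholders of mine; the real manifest is generated by the migration tooling, not written by hand.

```yaml
# Hypothetical shape of the generated objects, not actual tool output.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-vm-disk        # claim for the streamed VM disk
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/my-project/system-container   # placeholder image
        volumeMounts:
        - name: vm-disk
          mountPath: /mnt/vm-root   # VM filesystem served from the PV
      volumes:
      - name: vm-disk
        persistentVolumeClaim:
          claimName: myapp-vm-disk
```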
The following are the components of "Migrate for Anthos":
- "Migrate for Compute Engine" (formerly Velostrata) – the Velostrata team has enhanced the VM migration tool to also convert VMs to containers before migrating them. The fundamentals of Velostrata, including its agentless and streaming technologies, remain the same for "Migrate for Anthos". The Velostrata manager and cloud extensions need to be installed in the GCP environment to perform the migration. Because Velostrata uses streaming technology, the complete VM storage does not need to be migrated before the container can run in GKE, which speeds up the entire migration process.
- GKE cluster – "Migrate for Anthos" runs in the GKE cluster as application containers and can be installed from the GKE marketplace.
- Source VM – the source VM can be in GCE or in a VMware environment. In the VMware case, a "Migrate for Anthos" component needs to be installed in VMware as well.
The following picture shows the different components in the VM and how they will look once migrated.
The second column in the picture shows what exists today when a VM is migrated to a GKE container. Currently, the only option when capacity is reached is vertical scaling. The yellow components leverage Kubernetes and the green components run inside containers. The third column shows what the future could look like, with multiple containers and horizontal pod autoscaling.
“Migrate for Anthos” hands-on
I migrated a GCE VM to a container running in GKE using "Migrate for Anthos". The GCE VM had a base Debian OS with the nginx web server installed.
The following is a summary of the steps for the migration:
- Create service account for Velostrata manager and cloud extension.
- Install Velostrata manager from marketplace with the service accounts created in previous step.
- Create cloud extension from Velostrata manager.
- Create GKE cluster.
- Install “Migrate for Anthos” from GKE marketplace on the GKE cluster created in previous step.
- Create source VM in GCE and install needed application in the source VM.
- Create a YAML configuration file (persistent volume, persistent volume claim, StatefulSet) from the source VM.
- Stop source VM.
- Apply the YAML configuration on top of the GKE cluster.
- Create Kubernetes service configuration files to expose the container services.
Service account creation:
I created service accounts for the Velostrata manager and cloud extension using the steps listed here. I used the single-project configuration example.
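As a hedged sketch, the service account setup boils down to commands of this shape. The account name, project ID, and role below are placeholders of mine; the exact accounts and IAM roles to grant are listed in the linked guide.

```shell
# Create a service account for the Velostrata manager (names are placeholders)
gcloud iam service-accounts create velostrata-manager \
  --display-name "Velostrata manager"

# Grant it a role on the project; the guide lists the exact roles required
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:velostrata-manager@my-project.iam.gserviceaccount.com" \
  --role roles/compute.admin
```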
Velostrata manager installation:
I used the steps listed here to install the Velostrata manager from the marketplace and to do the initial configuration. The Velostrata manager provides the management interface where all migrations can be managed. I used the "default" network for my setup. Remember the API password; it is needed in later steps.
Create cloud extension:
I used the steps here to install the cloud extension from the Velostrata manager. The cloud extension takes care of storage caching in GCP.
Create GKE cluster:
I used the steps here to create the GKE cluster. The GKE nodes and the source VM need to be in the same zone. Because of this restriction, it is better to create a regional cluster so that we have a GKE node in every zone. When I first tried the migration, I got an error like the one below:
```
Events:
  Type     Reason             Age                  From                Message
  ----     ------             ----                 ----                -------
  Normal   NotTriggerScaleUp  1m (x300 over 51m)   cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added)
  Warning  FailedScheduling   1m (x70 over 51m)    default-scheduler   0/9 nodes are available: 9 node(s) had volume node affinity conflict.
```
Based on a discussion with the Velostrata engineering team, I understood that the pod could not be scheduled because none of the GKE nodes were in the same zone as the source VM. In my case, I created a regional cluster in us-central1, but it created nodes in only 3 of the 4 zones available in us-central1. My source VM unfortunately resided in the 4th zone, where no GKE node was present. This looks like a bug in GKE regional cluster creation, where GKE nodes are not created in all zones. After I recreated the source VM in one of the zones where GKE nodes were present, the problem was resolved.
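One way to sidestep this issue, assuming current `gcloud` flag names, is to pin the node locations explicitly when creating the regional cluster so that nodes land in every zone where source VMs might live. The cluster name below is a placeholder of mine.

```shell
# Regional cluster with nodes pinned to specific zones of us-central1
gcloud container clusters create migration-cluster \
  --region us-central1 \
  --node-locations us-central1-a,us-central1-b,us-central1-c,us-central1-f
```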
Install "Migrate for Anthos":
I used the steps here to install "Migrate for Anthos" in the GKE cluster. During installation, we need to specify the Velostrata manager IP address and the cloud extension name that we created in the previous steps.
Create source VM:
I created a Debian VM and installed the nginx web server:
```shell
sudo apt-get update
sudo apt-get install -y nginx
sudo service nginx start
sudo sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' \
  /var/www/html/index.nginx-debian.html
```
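For reference, the source VM itself can be created with something along these lines; the instance name and zone match the ones I use in the migration command later, while the image family is simply my choice, not a requirement of the tool.

```shell
# Create the source VM in the zone covered by a GKE node
gcloud compute instances create webserver \
  --zone us-central1-b \
  --image-family debian-9 \
  --image-project debian-cloud
```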
Create YAML configuration from source VM:
I used the steps here. This is the command I used to create the Kubernetes configuration. The configuration contains the details to create a persistent volume (PV), a persistent volume claim (PVC), and a StatefulSet.
```shell
python3 /google/migrate/anthos/gce-to-gke/clone_vm_disks.py \
  -p sreemakam-anthos  `# Your GCP project name` \
  -z us-central1-b     `# GCP zone that hosts the VM, for example us-central1-a` \
  -i webserver         `# Name of the VM. For example, myapp-vm` \
  -A webserver         `# Name of workload that will be launched in GKE` \
  -o webserver.yaml    `# Filename of resulting YAML configuration`
```
Apply YAML configuration:
Before applying the YAML config, we need to stop the source VM; this ensures a consistent snapshot. I used the following command, as described in this link, to create the persistent volume (PV), persistent volume claim (PVC), and StatefulSet. The volume uses a GCE persistent disk.
```shell
kubectl apply -f webserver.yaml
```
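After applying the configuration, the usual kubectl commands can confirm that the StatefulSet, pod, and volume objects came up; the exact output shapes will vary, and the pod name below assumes the StatefulSet is named `webserver` as in my YAML.

```shell
# Check the migrated workload and its storage
kubectl get statefulsets
kubectl get pods
kubectl get pv,pvc

# Inspect logs of the first (and only) replica
kubectl logs webserver-0
```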
Create Kubernetes service configuration:
To expose the container service running on port 80, we can create the Kubernetes service below.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: webserver
spec:
  type: LoadBalancer
  selector:
    app: webserver
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
```
After the service is applied, it creates a load balancer with an external IP address through which we can access the nginx web service.
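A quick way to verify, sketched here under the assumption that the service is named `webserver` as above, is to pull the external IP from the service status and curl it:

```shell
# Wait for the load balancer IP to be provisioned, then fetch it
EXTERNAL_IP=$(kubectl get service webserver \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# The migrated nginx should answer on port 80
curl "http://${EXTERNAL_IP}/"
```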
The above example shows the migration of a simple VM to a container. The link here describes how to migrate a two-tier application involving an application and a database. Examples of applications that can be migrated include web applications, middleware frameworks, and any application built on Linux. The supported operating systems are mentioned here.
I want to convey special thanks to Alon Pildus from “Migrate for Anthos” team who helped to review and suggest improvements to this blog.