Kubernetes and Google Container Engine

In this blog, I will cover the Google Container Engine service, which I recently tried out.

Prerequisites:

  • A Google Cloud account.
  • The Google Cloud SDK installed.
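
For the account setup, a minimal sketch (the project ID and zone below are placeholders for your own values) would be:

# authenticate the SDK and set the default project and zone
gcloud auth login
gcloud config set project <your-project-id>
gcloud config set compute/zone us-central1-b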

Google Container Engine is not available in the default gcloud SDK installation. To use the Container Engine service, we need to update the preview component:

$ gcloud components update preview

I followed the two examples mentioned in the Container Engine documentation.

WordPress application:

In this example, we create a cluster with a single master and a single worker node. We create a pod running the WordPress container in the cluster and expose WordPress to the external world. Since there is only one pod, we don't create a service.

Following are the commands used:

# create cluster
gcloud preview container clusters create hello-world --num-nodes 1 --machine-type g1-small
# set cluster
gcloud config set container/cluster hello-world
# create wordpress pod
gcloud preview container kubectl create -f wordpress.json
# open up firewall
gcloud compute firewall-rules create hello-world-node-80 --allow tcp:80 --target-tags k8s-hello-world-node

At this point, we will be able to access the WordPress application using the host IP reported by:

gcloud preview container kubectl get pod wordpress
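
Once the pod shows a host IP, a quick way to verify that WordPress is serving is a plain HTTP request (the IP below is just a placeholder for the one reported by the command above):

# replace <host-ip> with the IP reported by the get pod command
curl http://<host-ip>/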

Let's ssh to the master and slave nodes and look at the services running:
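
The cluster nodes are regular Compute Engine VMs, so they can be reached with gcloud compute ssh (the instance names below match the shell prompts shown later; a --zone flag may be needed depending on your gcloud defaults):

gcloud compute ssh k8s-hello-world-master
gcloud compute ssh k8s-hello-world-node-1
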
Master:

$ ps -eaf|grep kube
root      2286     1  0 14:20 ?        00:00:01 /usr/local/bin/kubelet -manifest_url=http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/google-container-manifest -config=/etc/kubernetes/manifests
998       5147     1  0 14:22 ?        00:00:31 /usr/local/bin/kube-apiserver --address=127.0.0.1 --etcd_servers=http://10.240.238.193:4001 --cloud_provider=gce --allow_privileged=False --portal_net=10.51.240.0/20 --tls_cert_file=/srv/kubernetes/server.cert --tls_private_key_file=/srv/kubernetes/server.key --secure_port=6443 --token_auth_file=/srv/kubernetes/known_tokens.csv --v=2
997       5253     1  0 14:22 ?        00:00:16 /usr/local/bin/kube-controller-manager --master=127.0.0.1:8080 --machines=k8s-hello-world-node-1.c.eighth-keyword-474.internal --minion_regexp='k8s-hello-world-node.*' --cloud_provider=gce --sync_nodes=true --v=2
996       5340     1  0 14:22 ?        00:00:01 /usr/local/bin/kube-scheduler --master=127.0.0.1:8080 --master=127.0.0.1:8080 --v=2
root      6273     1  0 14:22 ?        00:00:00 /bin/bash /etc/init.d/kube-addons start

smakam14@k8s-hello-world-master:~$ ps -eaf|grep etc
root      2286     1  0 14:20 ?        00:00:01 /usr/local/bin/kubelet -manifest_url=http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/google-container-manifest -config=/etc/kubernetes/manifests
998       5147     1  0 14:22 ?        00:00:31 /usr/local/bin/kube-apiserver --address=127.0.0.1 --etcd_servers=http://10.240.238.193:4001 --cloud_provider=gce --allow_privileged=False --portal_net=10.51.240.0/20 --tls_cert_file=/srv/kubernetes/server.cert --tls_private_key_file=/srv/kubernetes/server.key --secure_port=6443 --token_auth_file=/srv/kubernetes/known_tokens.csv --v=2
root      5623     1  0 14:22 ?        00:00:00 /usr/bin/monit -c /etc/monit/monitrc
root      6273     1  0 14:22 ?        00:00:00 /bin/bash /etc/init.d/kube-addons start
etcd     14706     1  0 15:36 ?        00:00:00 /usr/local/bin/etcd -addr 10.240.238.193:4001 -bind-addr 10.240.238.193:4001 -data-dir /var/etcd/data -initial-advertise-peer-urls http://k8s-hello-world-master:2380 -name k8s-hello-world-master -initial-cluster k8s-hello-world-master=http://k8s-hello-world-master:2380

Above, we see the kubelet, the API server, the scheduler, the controller manager, and etcd running in the master.

Slave:

$ sudo docker ps
CONTAINER ID        IMAGE                              COMMAND                CREATED             STATUS              PORTS                    NAMES
89e1675801e0        tutum/wordpress:latest             "/run.sh"              About an hour ago   Up About an hour                             k8s_wordpress.ca69f5d6_wordpress.default.api_8350b10c-cb1f-11e4-a154-42010af0eec1_21a7a040                                 

$ ps -eaf|grep kube
root      4826     1  0 14:22 ?        00:00:07 /usr/local/bin/kubelet --api_servers=https://10.240.238.193:6443 --auth_path=/var/lib/kubelet/kubernetes_auth --address=0.0.0.0 --config=/etc/kubernetes/manifests --allow_privileged=False --v=2 --cluster_dns=10.51.240.10 --cluster_domain=kubernetes.local
root      4984     1  0 14:22 ?        00:00:06 /usr/local/bin/kube-proxy --master=http://10.240.238.193:7080 --v=2
root      5380  4649  0 14:22 ?        00:00:00 /kube2sky -domain=kubernetes.local
root      5471  4649  0 14:22 ?        00:00:00 /skydns -machines=http://localhost:4001 -addr=0.0.0.0:53 -domain=kubernetes.local.

smakam14@k8s-hello-world-node-1:~$ ps -eaf|grep etc
root      4826     1  0 14:22 ?        00:00:07 /usr/local/bin/kubelet --api_servers=https://10.240.238.193:6443 --auth_path=/var/lib/kubelet/kubernetes_auth --address=0.0.0.0 --config=/etc/kubernetes/manifests --allow_privileged=False --v=2 --cluster_dns=10.51.240.10 --cluster_domain=kubernetes.local
root      5322  4649  0 14:22 ?        00:00:02 /etcd /etcd -bind-addr=127.0.0.1 -peer-bind-addr=127.0.0.1

Above, we see the WordPress container running in the slave node. We also see the kubelet agent and kube-proxy running.

Guestbook application:

Following are the steps:

  • Create a guestbook cluster (by default, 1 master and 3 worker nodes of the default machine type).
  • Create a redis master pod and then expose it as a service.
  • Create a redis worker replication controller with 2 replicas and then expose it as a service.
  • Create a webserver replication controller and service.
  • Expose the webserver to the external world by opening up the firewall.

The first step is downloading the necessary json files from the guestbook example in the Container Engine documentation and setting CONFIG_DIR to point at them.
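
As a minimal sketch (the directory path is just an example), this could look like:

# download the guestbook json files into a local directory and point CONFIG_DIR at it
export CONFIG_DIR=~/guestbook-config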

# start cluster
gcloud preview container clusters create guestbook

# start up redis master pod
gcloud preview container kubectl create -f $CONFIG_DIR/redis-master-pod.json
# start up redis service which will direct to redis pod
gcloud preview container kubectl create -f $CONFIG_DIR/redis-master-service.json

# Create worker threads replica pod with 2 replicas
gcloud preview container kubectl create -f $CONFIG_DIR/redis-worker-controller.json
# Create worker service
gcloud preview container kubectl create -f $CONFIG_DIR/redis-worker-service.json

# Create webserver pod with 3 replicas
gcloud preview container kubectl create -f $CONFIG_DIR/guestbook-controller.json
# Create web service
gcloud preview container kubectl create -f $CONFIG_DIR/guestbook-service.json

# open up the firewall
gcloud compute firewall-rules create guestbook-node-3000 --allow=tcp:3000 \
    --target-tags k8s-guestbook-node

At this point, we will be able to access the guestbook service on port 3000 using the external IP of any of the worker nodes.
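
To check that everything came up and to find the node external IPs, the following can be used (the kubectl subcommands follow the same gcloud preview container wrapper used earlier):

# check pod and service status
gcloud preview container kubectl get pods
gcloud preview container kubectl get services
# list the cluster VMs along with their external IPs
gcloud compute instances list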

To clean up:

# delete cluster
gcloud preview container clusters delete hello-world
gcloud preview container clusters delete guestbook
# delete firewall rule
gcloud compute firewall-rules delete hello-world-node-80
gcloud compute firewall-rules delete guestbook-node-3000

The Google Container Engine service has a lot of similarities to the AWS Container Service. As can be seen, the clustering complexity gets completely hidden, and application management becomes easier with this service.
