Ingress into GKE Cluster

In this blog, I will talk about the different options for getting traffic from the external world into a GKE cluster. The options described are:

  • Network load balancer (NLB)
  • HTTP load balancer with Ingress
  • HTTP load balancer with network endpoint groups (NEG)
  • NGINX ingress controller
  • Istio ingress gateway

For each of the above options, I will deploy a simple helloworld service with 2 versions and show access from the outside world to the 2 versions of the application. The code is available in my GitHub project here; you can clone it and play with it if needed. At the end, I will also discuss choosing the right option based on your requirements.
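As a quick preview of the first option, a Kubernetes Service of type LoadBalancer is all GKE needs to provision a network load balancer. The sketch below is illustrative only; the deployment name, labels, and ports are assumptions and may not match the manifests in the GitHub project.

# Expose one version of the helloworld deployment through a network load balancer (NLB).
# On GKE, a Service of type LoadBalancer provisions the NLB automatically.
# Names, labels, and ports below are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: helloworld-v1
spec:
  type: LoadBalancer
  selector:
    app: helloworld
    version: v1
  ports:
  - port: 80
    targetPort: 8080
EOF

# The NLB's external IP appears once provisioning completes.
kubectl get service helloworld-v1 --watch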

Prerequisites

Continue reading Ingress into GKE Cluster

GKE with VPN – Networking options

While working on a recent hybrid GCP plus on-premises customer architecture, we needed to connect a GKE cluster running in GCP to an on-premises service through a VPN. There were a few unique requirements, such as exposing only a small IP range to the on-premises network and having full control over the IP addresses exposed. In this blog, I will talk about the different approaches possible from a networking perspective when connecting a GKE cluster to an on-premises service. The following options are covered in this blog:

  • Flexible pod addressing scheme
  • Connecting using a NAT service running on a VM
  • Using IP masquerading at the GKE node level

I did not explore the Cloud NAT managed service, as it works only with private clusters and does not work through a VPN. I have used VPC-native clusters since that has become the default networking scheme and is more straightforward to use than route-based clusters. For more information on VPC-native clusters and IP aliasing, please refer to my earlier blog series here.
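To give a flavor of the IP masquerading option up front: on GKE, the ip-masq-agent is configured through a ConfigMap in kube-system, and traffic to any destination not listed under nonMasqueradeCIDRs is source-NATed to the node IP. The sketch below assumes the ip-masq-agent daemonset is running in the cluster; the CIDR value is a placeholder.

# Traffic to the on-premises range gets masqueraded to the node IP because the
# on-premises CIDR is deliberately left out of nonMasqueradeCIDRs.
# The CIDR below is a placeholder; adjust it to your pod/VPC ranges.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.0.0.0/8
    resyncInterval: 60s
EOF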

Requirements

Continue reading GKE with VPN – Networking options

Migrate for Anthos

Anthos is a hybrid/multi-cloud platform from GCP. Anthos allows customers to build their application once and run it in GCP or in any other private or public cloud. Anthos unifies the control, management, and data planes when running a container-based application across on-premises and multiple clouds. Anthos was launched at last year’s NEXT18 conference and was made generally available recently. VMware integration is available now; integration with other clouds is planned on the roadmap. One of the components of Anthos, “Migrate for Anthos”, allows direct migration of VMs into containers running on GKE. This blog will focus on “Migrate for Anthos”. I will cover the need for “Migrate for Anthos”, its platform architecture, and the steps to move a simple application from a GCP VM into a GKE container. Please note that “Migrate for Anthos” is in beta now and is not ready for production.

Need for “Migrate for Anthos”

Continue reading Migrate for Anthos

VPC native GKE clusters – Container native LB

This blog is the last in the series on VPC-native GKE clusters. In this blog, I will cover network endpoint groups (NEG) and container-native load balancing. For the first part on GKE IP addressing, please refer here, and for the second part on VPC-native clusters, please refer here.

Container load balancing and network endpoint groups (NEG)

The following diagram shows how default container load balancing works (without NEG). This is applicable to both the HTTP and network load balancers.
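As a preview of where container-native load balancing comes in: it is enabled per Service through the NEG annotation, after which the HTTP(S) load balancer created by an Ingress targets pod IPs directly instead of going through node ports. A rough sketch, with names, labels, and ports as illustrative assumptions:

# Annotating the Service creates network endpoint groups (NEGs), so the
# Ingress-created HTTP(S) load balancer sends traffic straight to pod IPs.
# Names, labels, and ports are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: helloworld
  ports:
  - port: 80
    targetPort: 8080
EOF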

Continue reading VPC native GKE clusters – Container native LB

VPC native GKE clusters – IP aliasing

This blog is the second in the series on VPC-native GKE clusters. In this blog, I will cover an overview of IP aliasing, the options for creating alias IPs, and creating VPC-native clusters using alias IPs. For the first blog on GKE IP addressing, please refer here.

Overview
IP aliasing allows a single VM to have multiple internal IP addresses. This is not the same as having multiple interfaces, each with a different IP address. Each of the additional internal IP addresses can be allocated to a different service running in the VM. When the node is running containers, alias IPs can be allocated to the container pods. The GCP VPC network is aware of alias IPs, so routing is taken care of by the VPC. Alias IPs have significant advantages with GKE containers: in addition to the node IP, there are pod and service IPs to manage, and IP aliasing makes these addresses native to the VPC, allowing tight integration with GCP services.
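For reference, creating a VPC-native cluster boils down to the --enable-ip-alias flag at cluster creation time. The command below is a minimal sketch; the cluster name, zone, and CIDR ranges are illustrative assumptions, not values used in this series.

# Create a VPC-native (alias IP) cluster with explicit pod and service ranges.
# Cluster name, zone, and CIDRs are illustrative.
gcloud container clusters create demo-vpc-native \
  --zone us-central1-a \
  --enable-ip-alias \
  --cluster-ipv4-cidr 10.8.0.0/14 \
  --services-ipv4-cidr 10.12.0.0/20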

Alias IP Advantages

Continue reading VPC native GKE clusters – IP aliasing

VPC native GKE clusters – IP address management

I wrote this blog after a long gap of close to 7 months. Many reasons, including a busy work schedule, some health issues in the middle, and a little bit of laziness, contributed to this. I hope to be a more active blogger going forward.

In this blog series, I will cover the following topics:

  • GKE default IP address management
  • VPC-native clusters and IP aliasing
  • Container-native load balancing with network endpoint groups (NEG)

The first blog in this series will talk about GKE default IP address management.

Following are the Kubernetes abstractions that need IP addresses:

  • Node IP address – assigned to individual nodes. The node IP address is allocated from the VPC subnet range.
  • Pod IP address – assigned to individual pods. All containers within a single pod share the same IP address.
  • Service IP address – assigned to individual services.

By default, a "/14" range gets allocated as the cluster IP range, and pod and service IP addresses come out of this pool. A "/24" block from the cluster IP range gets assigned to each individual node and is used for pod IP allocation. A "/20" block from the cluster IP range gets assigned to Kubernetes services. The user has the option to choose the cluster IP range when creating the cluster.
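One quick way to see the per-node "/24" allocation is to look at the pod CIDR recorded on each node object; a small check, assuming kubectl is pointed at the cluster:

# Each node in a default (non-VPC-native) cluster shows a /24 pod range
# carved out of the cluster IP range.
kubectl get nodes -o custom-columns=NODE:.metadata.name,POD_CIDR:.spec.podCIDR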

To illustrate some of the above points, I have created a 3-node Kubernetes cluster with IP aliasing disabled. By default, VPC-native clusters (IP aliasing enabled) are disabled and have to be enabled manually. In a future GKE release, VPC-native clusters will become the default mechanism.

Cluster output:

Continue reading VPC native GKE clusters – IP address management

My Data engineer certification notes

This blog is more of a set of quick-reference notes than a real blog post. I shared this with a few Google internal folks, and since they found it useful, I thought I would share these notes on the blog as well.

Please note that the Data Engineer exam format changed a little from March 29, 2019. I took the exam in January 2019, so some of the notes below might not apply. The two big changes are that there are no case studies in the new format and that certain new data and ML products like Composer, AutoML, and Kubeflow have been introduced.

There are many references on preparation steps for the GCP Data Engineer exam. Why am I writing one more? I find that reading through an individual’s experience always gives me some new insight. I am hoping this write-up will help someone new preparing for the exam.

I cleared the exam in the last week of January 2019. I come from an infrastructure background; data engineering was not my area of strength, and it took me around 8 weeks of preparation. The 8 weeks was not dedicated time but a few hours split over an 8-week period. I have been with the Google Cloud team for over a year, and I found that practical experience is important for the exam.

The exam is multiple choice, with 50 questions to be answered in a 2-hour time frame. There are also questions asked about the 2 case studies. For my preparation, I used the data engineer modules of the Coursera training and Linux Academy. It is good to review and break down the case studies beforehand to avoid spending time reading the case study text during the exam. Qwiklabs and the labs associated with Coursera help quite a bit. I have added the links in the references.

The certification tests both theoretical and practical knowledge of GCP products and technologies. My suggestion is to take the exam after getting some practical experience with GCP. Another thing I noticed in the exam is that for some questions, more than one answer can seem appropriate, and it takes thorough analysis before answering. One tip is to mark these questions for review and spend time on them after answering all the other questions.

Continue reading My Data engineer certification notes

Devops with Kubernetes

I did the following presentation, “Devops with Kubernetes”, at the Kubernetes Sri Lanka inaugural meetup earlier this week. Kubernetes is currently one of the most popular open source projects in the IT industry. Kubernetes abstractions, design patterns, integrations, and extensions make it very elegant for DevOps. The slides delve a little deeper into these topics.

NEXT 100 Webinar – Top 3 reasons why you should run your enterprise workloads on GKE

I presented the webinar “Top 3 reasons why you should run your enterprise workloads on GKE” at the NEXT 100 CIO forum earlier this week. Businesses are increasingly moving to containers and Kubernetes to simplify and speed up their application development and deployment. The slides and demo cover the top reasons why Google Kubernetes Engine (GKE) is one of the best container management platforms for enterprises to deploy their containerized workloads.

Following are the slides and recording:

Recording link


Container Conference Presentation

This week, I did a presentation at Container Conference, Bangalore. The conference was well conducted and was attended by 400+ quality attendees. I enjoyed some of the sessions and also had fun talking to attendees. The topic I presented was “Deep dive into Kubernetes Networking”. Other than covering Kubernetes networking basics, I also touched on network policy, the Istio service mesh, hybrid cloud, and best practices.

Slides:

Recording:

Demo code and Instructions:

Github link

Recording of the Istio section of the demo (this recording was not made at the conference):

As always, feedback is welcome.

I was out of blogging action for the last 9 months as I was settling into my new job at Google and also had to take care of some personal stuff. Things are getting a little clearer now, and I am hoping to start blogging again soon…