This link has the slides that I presented as part of a lightning talk at Devops Days India, 2016. In the slides, I have tried to capture how automation in the networking area is evolving. I attended the first day of the conference, and it had a pretty decent collection of talks in the Devops area.
I have used and loved Vagrant for a long time, and I recently used Consul and was very impressed by both these Devops tools. Recently, I saw some of the HashiConf videos and learnt that HashiCorp has an ecosystem of tools addressing Devops needs, and that these tools can be chained together to create a complete application delivery platform from development to production. Atlas is HashiCorp's product that combines its open source tools into a single platform; it has a commercial version as well. In this blog, I will cover a development-to-production workflow for a LAMP application stack using Atlas, Vagrant, Packer and Terraform.
Overview of Vagrant, Packer, Terraform and Atlas
Vagrant provides a repeatable VM development environment. Vagrant integrates well with major hypervisors like VirtualBox, VMware and Hyper-V. A "Vagrantfile" describes the VM settings as well as the initial bootstrap provisioning that needs to be done on the VM. Vagrant also integrates well with provisioning tools like Chef, Puppet and Ansible to describe the provisioning. Simply by doing "vagrant up", the complete VM environment is reproduced exactly, and typical problems like "it works on your machine but not on mine" go away.
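As an illustration, a minimal Vagrantfile might look like the sketch below. The box name, port mapping and inline shell provisioner are placeholder assumptions, not from any specific project:

```ruby
# Minimal Vagrantfile sketch; box name, ports and the provisioning
# command are placeholders chosen for illustration.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                # base box to build the VM from
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                               # per-VM provider settings
  end
  # Initial bootstrap provisioning; this could equally point at a
  # Chef, Puppet or Ansible provisioner.
  config.vm.provision "shell",
    inline: "apt-get update && apt-get install -y apache2"
end
```

With this file in place, "vagrant up" creates the VM, forwards the port and runs the provisioner, giving every developer the same environment.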
Packer is a tool to create machine images for providers like VirtualBox, VMware, AWS and Google Cloud. The Packer configuration is described in a JSON file, and images for multiple providers can be created in parallel. The typical workflow is for the developer to create the development environment in Vagrant; once it becomes stable, the production image can be built with Packer. Since the provisioning is baked into the image, deployment of production images becomes much faster. The following link describes how Vagrant and Packer fit together.
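A minimal Packer template for this kind of workflow might look like the sketch below. The region, source AMI, image name and package list are placeholder assumptions; with more entries in "builders", Packer would build the images in parallel:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-XXXXXXXX",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "lamp-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y apache2 mysql-server php5"
      ]
    }
  ]
}
```

Running "packer build" on this template provisions a temporary instance, bakes the LAMP packages into the image and registers the resulting AMI, so production deployments skip the provisioning step entirely.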
For the last 6 months, I have been blogging very little since I was busy writing a book on CoreOS. The book is available for pre-ordering now from the publisher website, Amazon US and Amazon India. Discounts are available till Feb 29th from the publisher, and the discount code is available here. The tentative publishing date is end of Feb/early March.
Why did I write the book, other than the fact that I could make some money out of it?
I started blogging around the later part of 2013. The first open source project that I tried was Opendaylight. It was very exciting to create virtual networks using Mininet and manage them with Openflow and Opendaylight. In early 2014, I focused on cloud technologies like AWS, Google Cloud and Openstack and learnt how public and private clouds revolutionized the way we consume resources. Around August 2014, I started spending time on Devops and learnt how tools like Ansible eased the management pain not just in server infrastructure but also in networking infrastructure. From December 2014, I started spending time on Docker and CoreOS. The first Docker version I used was 1.3.1 (now it's 1.9) and the first CoreOS version I used was 500.x (now it's 933.x). Even though Microservices and Containers existed before, Docker made Containers easier to use, which in turn made Microservices popular. CoreOS pretty much invented the Cluster OS, or Container-optimized OS, category and provided an OS and tools that ease Container deployment. By writing this book, I felt that I could connect the dots between the SDN, Cloud and Devops technologies that I focused on earlier with Docker and CoreOS.
What are some questions the book tries to answer?
What is a Container-Optimized OS?
Why are CoreOS and Containers needed?
How to deploy microservices and distributed applications?
How to set up, maintain and update a CoreOS cluster?
What roles do the key CoreOS services etcd, fleet and systemd play?
How do Docker and CoreOS manage networking and storage?
What role do standards play in Containers?
Why are there multiple Container runtimes like Docker and rkt?
How do Kubernetes and Swarm orchestrate Container deployment?
How are Openstack, AWS ECS, Google Container Engine and Tectonic related to CoreOS and Docker?
How to debug Containers and CoreOS?
What are the production considerations when deploying microservices?
What did I learn?
That there is a lot more to learn...
Open source is very powerful and it aids innovation and collaboration
Trying hands-on is more important than reading through software manuals
You can learn a lot from conference video recordings, slides and blogs
Writing a book needs a lot more patience than writing a blog
It is good to do something different part-time from the daily office job to keep life interesting
If you read the book, or even a few chapters of it, please feel free to provide your feedback.
Thanks to the CoreOS, Docker and Container communities for the amazing technologies they have developed. Big thanks to the Open source community for making software easily accessible to everyone.
In this blog, I will give an overview of Continuous Integration (CI) and Continuous Deployment (CD) and cover a few CI/CD use cases with Docker, Jenkins and Tutum. Docker provides the Container runtime and tools around Containers to create a Container platform. Jenkins is a CI/CD application to build, test and deploy applications. Tutum is a SaaS Container platform that can be used to build, deploy and manage Docker Containers; I covered an overview of Tutum in a previous blog. These applications work well with each other, and the use cases in this blog will illustrate that.
The traditional approach of releasing software has the following problems:
- Software release cycles were spaced far apart, which caused new features to take longer to reach customers.
- The majority of the process of getting software from the development stage to production was manual.
- Given the different deployment scenarios, it was difficult to guarantee that the software worked in all environments and configurations.
Containers have tried to mitigate some of the problems above. With a microservices and Container approach, the application behaves the same way in the development and production stages. Process automation and appropriate testing are still needed for Container-based environments.
Continuous Integration refers to the process of automatically building an executable or a Container image after the developer has done unit testing and committed the code.
Continuous Delivery refers to the process of taking the developer-built image, setting up a staging environment to test the image and deploying it successfully to production.
The following diagram shows the different stages of the CI/CD cycle.
Following are some notes on the above diagram:
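The CI/CD flow described above could be captured in a Jenkins declarative pipeline along these lines. This is only a sketch: the image name, registry and test command are placeholder assumptions, not from the original post:

```groovy
// Sketch of a Jenkins declarative pipeline for a Docker CI/CD flow.
// "myregistry/myapp" and "./run_tests.sh" are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // CI: build the Container image on every commit
                sh 'docker build -t myregistry/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // Staging: run the image's test suite before release
                sh 'docker run --rm myregistry/myapp:${BUILD_NUMBER} ./run_tests.sh'
            }
        }
        stage('Push') {
            steps {
                // CD: publish the tested image for production deployment
                sh 'docker push myregistry/myapp:${BUILD_NUMBER}'
            }
        }
    }
}
```

Each commit thus produces a uniquely tagged image that has passed the same tests it will run with in production.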
In my earlier blogs, I covered the basics of Netconf and Yang and how to use Netconf to configure Cisco devices. Recently, I came across the Python ncclient library, which simplifies the configuration and monitoring of networking devices that support Netconf. Using the ncclient library, we can programmatically configure and monitor devices using Netconf. I also found out that the Cisco Openstack Neutron plugin uses the ncclient library to program Nexus switches.
I have used a Cisco Nexus 3k switch and a Cisco VIRL NXOS switch for the examples in this blog.
In my earlier blog on configuring Cisco Nexus devices using Netconf, I covered the following Netconf requests:
- “get” request using filter to display configuration.
- “edit-config” request to change configuration.
- “exec-command” to execute raw CLI requests.
In this blog, I will cover the same tests using the Python ncclient library. Even though the examples below are run from the Python interactive shell, they can be executed as a Python program as well.
The first step is to import the ncclient library and create a connection:
In this blog, I will cover the steps I took to connect a Cisco NXOS VIRL switch instance to an Arista vEOS switch instance. We can connect any Cisco switch simulated in VIRL; I just picked the NXOS switch type. CML/VIRL supports the majority of Cisco switches as VMs, as well as a few external switches from Juniper and Vyatta. External virtual or physical switches can be connected to Cisco switches running inside VIRL with a bit of VM networking magic. I just think it is cool to connect virtual devices, try out real-time network configurations and see how the devices respond.
- Install CML/VIRL using the procedure here.
- Install vEOS using the procedure here.
- I used VMware Player to run VIRL and vEOS. Connecting across VirtualBox and VMware Player is a little painful.
Following is the network I created:
In this blog, I will cover the steps to get NXAPI working with the NXOS image in VIRL. For more details on CML/VIRL, please refer to my earlier blog series. Running NXAPI with the VIRL image makes it easy to write automation scripts without needing a physical switch.
Earlier, I had installed the VIRL February release (0.9.17), which included VIRL STD 0.10.13.11. To run NXAPI, which is supported from NXOS version 7.2.0, VIRL needs to be upgraded to the latest version. I tried running NXOS 7.2.0 in VIRL STD 0.10.13.11; even though I was able to enable "feature nxapi", I was not able to configure a management IP and connect to it from outside.
VIRL's latest April release (0.9.242) has the following components:
- VM Maestro 1.2.2 Build Dev-211
- VIRL STD 0.10.14.20
There are two approaches to upgrade VIRL:
- Full upgrade, which upgrades both VIRL and the OS-related components.
- Quick upgrade, which upgrades only VIRL. Based on the VIRL 0.9.242 upgrade note here, it is fine to do a quick upgrade for users running VIRL STD 0.10.13.11. I am not sure if VIRL 0.9.242 has been released for folks outside Cisco yet.