ARTH — Task 16 👨🏻💻
Case Study of Kubernetes
The Borg System
The Borg System was introduced by Google around 2003–2004. It began as a small-scale project, initially built in collaboration with a new version of Google’s search engine, and grew into Borg, a large-scale internal cluster management system.
The Omega
After Borg’s era, Google introduced the Omega cluster management system, a flexible, scalable scheduler for large compute clusters.
Kubernetes
In mid-2014, Google introduced Kubernetes as an open-source successor to Borg. Later on, Microsoft, Red Hat, IBM, and Docker joined hands with Kubernetes.
Kube v1.0
On 21 July 2015, Kubernetes v1.0 was released, and a historic moment took place: Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF). The rise of Kubernetes started attracting companies: on November 19, 2015, Deis, OpenShift, Huawei, and Gondor joined hands with Kubernetes. On November 9, Google introduced Kubernetes 1.1, which brought major performance upgrades, improved tooling, and new features that made applications even easier to build and deploy. The inaugural community KubeCon 2015 was held in San Francisco.
Beta Version of Kubernetes 1.10
On March 2, 2018, the first beta version of Kubernetes 1.10 was announced. Users could test production-ready versions of Kubelet TLS Bootstrapping and API aggregation. Around the same time, DigitalOcean dove into Kubernetes and announced a new hosted Kubernetes product: DigitalOcean Kubernetes would provide the container management and orchestration platform as a free service on top of its existing cloud compute and storage options.
Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) became generally available. With AKS, users can deploy and manage their production Kubernetes apps with the confidence that Azure’s engineers provide constant monitoring, operations, and support for customers’ fully managed Kubernetes clusters.
Features of Kubernetes
Storage Orchestration
Kubernetes automatically mounts the storage system of your choice. It can be local storage, storage from a public cloud provider such as GCP or AWS, or a network storage system such as NFS.
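As a sketch of how this looks in practice, an application can request storage with a PersistentVolumeClaim and mount it in a Pod; Kubernetes then binds the claim to a suitable volume behind the scenes. All names, images, and sizes below are illustrative, not from a real deployment:

```yaml
# Hypothetical claim for 1 GiB of storage; the cluster's storage class
# decides whether this is backed by a local disk, a cloud disk, or NFS.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# A Pod that mounts the claimed volume at a path inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

The Pod never names a specific disk or NFS server; swapping the backing storage is a cluster-level decision, not an application change.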
Self Healing
Whenever Kubernetes detects that one of your containers has failed, it restarts that container on its own. If a node itself fails, Kubernetes reschedules the containers that were running on the failed node onto other nodes in the cluster, assuming the cluster has spare nodes available.
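A minimal sketch of self-healing in manifest form: a Deployment keeps three replicas alive, and a liveness probe tells the kubelet when to restart an unhealthy container. The names and probe settings here are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # Kubernetes keeps 3 Pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          livenessProbe:      # restart the container if this check fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

If a Pod or its node dies, the Deployment controller notices the replica count has dropped below 3 and creates a replacement elsewhere in the cluster.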
Horizontal Scaling
Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
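The automatic, CPU-based case can be expressed with a HorizontalPodAutoscaler; this fragment assumes a Deployment named `web` already exists, and the thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:             # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add replicas above 80% average CPU
```

The manual equivalent is a single command such as `kubectl scale deployment web --replicas=5`.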
Automated Rollouts & Rollbacks
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you.
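The rollout pace is controlled by the Deployment’s update strategy. As an illustrative fragment (the specific limits are assumptions, not defaults the article states):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica down during the rollout
      maxSurge: 1         # at most one extra replica created temporarily
```

With these limits, Kubernetes replaces Pods one at a time, and a failed rollout can be reverted with `kubectl rollout undo deployment/web` (assuming a Deployment named `web`).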
Service Discovery & Load Balancing
Kubernetes assigns Pods their own IP addresses and gives a single DNS name to a set of Pods that together perform a logical operation. It also load-balances traffic across them, so you don’t have to worry about load distribution.
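A Service object is what provides that stable DNS name and load balancing. In this illustrative sketch, the Service name and label are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web          # reachable inside the cluster as the DNS name "web"
spec:
  selector:
    app: web         # traffic is load-balanced across all Pods with this label
  ports:
    - port: 80       # port clients connect to
      targetPort: 80 # port the Pods listen on
```

Pods come and go with their own IPs, but clients only ever talk to the Service’s stable name and address.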
Use Cases of Kubernetes
HUAWEI
Huawei began moving the internal IT department’s applications to run on Kubernetes. So far, about 30 percent of these applications have been transferred to cloud native. “By the end of 2016, Huawei’s internal IT department managed more than 4,000 nodes with tens of thousands of containers using a Kubernetes-based Platform as a Service (PaaS) solution,” says Hou. “The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold.”
PEARSON
“To transform our infrastructure, we had to think beyond simply enabling automated provisioning,” says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. “We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way.” The team chose Docker container technology and Kubernetes orchestration “because of its flexibility, ease of management and the way it would improve our engineers’ productivity.” With the platform, there have been substantial improvements in productivity and speed of delivery. “In some cases, we’ve gone from nine months to provision physical assets in a data center to just a few minutes to provision and get a new idea in front of a customer,” says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team. Jackson estimates they’ve achieved 15–20% developer productivity savings.
PINTEREST
The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes. “By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins,” says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. “We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per day compared to the previous static cluster.”