Native Kubernetes Applications on Best-in-Class GKE with Zero Commands

Most companies use Docker or Linux containers to containerize their applications. But when an application runs at massive scale, whether on a basic 10-node setup or across more than 100 nodes, load balancing the traffic and ensuring high availability require a container orchestration platform such as Kubernetes or Docker Swarm.


Google Kubernetes Engine provides a one-stop shop for creating your own Kubernetes environment, on which you can deploy all of the containers and pods that you wish without having to worry about managing Kubernetes masters and capacity.

Google Kubernetes Engine:

Google Kubernetes Engine provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The environment GKE provides consists of multiple machines (specifically, Google Compute Engine instances) grouped together to form a cluster.


GKE clusters are powered by the Kubernetes open source cluster management system. Kubernetes provides the mechanisms through which you interact with your cluster. You use Kubernetes commands and resources to deploy and manage your applications, perform administration tasks and set policies, and monitor the health of your deployed workloads.
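
Interacting with the cluster typically happens through the standard kubectl CLI. Below is a minimal sketch; the deployment name is illustrative and the image is Google's public hello-app sample, not part of this article's workloads:

    # create a deployment from a container image
    kubectl create deployment hello-web --image=gcr.io/google-samples/hello-app:1.0

    # expose it behind a cloud load balancer
    kubectl expose deployment hello-web --type=LoadBalancer --port=80 --target-port=8080

    # check the health of the deployed workload
    kubectl get pods
    kubectl rollout status deployment/hello-web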


Kubernetes draws on design principles such as automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates, and more.
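
As an illustration of liveness probes and rolling updates, the manifest sketch below (save it as hello-deployment.yaml and run kubectl apply -f hello-deployment.yaml) extends the hello-web example above; the probe path, thresholds, and update strategy are assumptions you would tune for your own application:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web                # illustrative name
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate          # replace pods gradually during updates
        rollingUpdate:
          maxUnavailable: 1
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
          - name: hello-web
            image: gcr.io/google-samples/hello-app:1.0
            ports:
            - containerPort: 8080
            livenessProbe:           # restart the container if this check fails
              httpGet:
                path: /
                port: 8080
              initialDelaySeconds: 5
              periodSeconds: 10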


Why GKE?

Have you ever wondered why, when many cloud providers such as AWS and Azure already offer managed Kubernetes clusters, you would move specifically to Google's managed Kubernetes? Here are a few points that answer that question:


Deploy a Wide Variety of Applications:

Google Kubernetes Engine empowers rapid application development and iteration by making it easy to deploy, update, and manage your applications and services. GKE isn't only for stateless applications: you can also attach persistent storage and even run a database in your cluster. Rather than being tied to a single vendor's application stack, GKE deploys a wide variety of applications. Users simply describe the compute, memory, and storage resources their application containers require, and Kubernetes Engine provisions and manages the underlying cloud resources automatically. Support for hardware accelerators makes it easy to run Machine Learning, General Purpose GPU, High-Performance Computing, and other workloads that benefit from specialized hardware.
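
As a sketch of how those resource requirements are described, the pod spec below requests CPU and memory and, optionally, a GPU; the names, amounts, and image are placeholders, and the nvidia.com/gpu line assumes a GPU-enabled node pool:

    apiVersion: v1
    kind: Pod
    metadata:
      name: training-job                      # illustrative name
    spec:
      containers:
      - name: trainer
        image: gcr.io/PROJECT_ID/trainer:v1   # placeholder image
        resources:
          requests:
            cpu: "500m"                       # half a vCPU
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
            nvidia.com/gpu: 1                 # only on nodes with GPUs attached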


Managed Built-in Istio:

Istio on GKE is an add-on for GKE that lets you quickly create a cluster with all the components you need to create and run an Istio service mesh, in a single step. Once installed, your Istio control plane components are automatically kept up-to-date, with no need for you to worry about upgrading to new versions. You can also use the add-on to install Istio on an existing cluster. Istio on GKE lets you easily manage the installation and upgrade of Istio as part of the GKE cluster life cycle, automatically upgrading your system to the most recent GKE-supported version of Istio with optimal control plane settings for most needs. You can find instructions for installing open source Istio on GKE in Installing Istio on a GKE cluster.
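
At the time of writing, the add-on could be enabled when creating a cluster roughly as follows; the cluster name and zone are placeholders, and the exact flags may differ across gcloud versions:

    gcloud beta container clusters create istio-demo \
        --zone us-central1-a \
        --addons=Istio --istio-config=auth=MTLS_PERMISSIVE

    # verify the managed control-plane components
    kubectl get pods -n istio-system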

Logging and Monitoring:

You can configure Prometheus to automatically detect apps in GKE by setting the Kubernetes SD scrape config. This config enables Prometheus to query the Kubernetes API to discover new possible scrape targets without additional configuration. However, you can configure multiple scrape-target discovery mechanisms. After configuring Prometheus with information about your GKE cluster, you need to add an annotation to your Kubernetes resource definitions so that Prometheus will begin scraping your services, pods, or ingresses.
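
A minimal sketch of such a scrape configuration is shown below; the job name and the prometheus.io/scrape annotation follow a common community convention rather than a GKE requirement:

    # prometheus.yml fragment: discover pods via the Kubernetes API
    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # keep only pods annotated prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"

On the workload side, adding prometheus.io/scrape: "true" to the annotations of a pod template (or service) is then enough for Prometheus to begin scraping it.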


Operate Seamlessly with High Availability:

Control your working environment from the built-in Kubernetes Engine dashboard in the Google Cloud console. Use routine health checks to detect and replace hung or crashed applications inside your deployments. Container replication strategies, monitoring, and automated repairs help ensure that your services are highly available and offer a seamless experience to your users. When node auto-repair is enabled and a node fails a health check, Kubernetes Engine initiates a repair process for that node. Google Site Reliability Engineers (SREs) constantly monitor your cluster and its compute, networking, and storage resources so you don't have to, giving you back time to focus on your applications.
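
Node auto-repair can be switched on per node pool with a single gcloud command; the pool, cluster, and zone names below are placeholders:

    gcloud container node-pools update default-pool \
        --cluster my-cluster --zone us-central1-a \
        --enable-autorepair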


Deploying Rancher on a Kubernetes Cluster:

Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads. Rancher was built to manage Kubernetes everywhere it runs. You can use Rancher to create clusters in a hosted Kubernetes provider, such as Google GKE: Rancher sends a request to the hosted provider using the provider's API, and the provider then provisions and hosts the cluster for the user. When the cluster finishes building, the user can manage it from the Rancher UI, alongside clusters they have provisioned on-premises or with an infrastructure provider, all from the same interface.
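
One common way to deploy Rancher itself onto a Kubernetes cluster is through its Helm chart; the sketch below assumes Helm and cert-manager are already installed, and the hostname is a placeholder you would point at your cluster's ingress:

    helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
    kubectl create namespace cattle-system

    helm install rancher rancher-latest/rancher \
        --namespace cattle-system \
        --set hostname=rancher.example.com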


Scale Effortlessly to Meet Demand:

Google Kubernetes Engine autoscaling lets you go from a single node to thousands of nodes as user demand for your services grows, keeping them available when it matters most. It is just as important to scale back during quiet periods to save money, or to schedule low-priority batch jobs to soak up the spare cycles. Kubernetes Engine helps you get the most out of your resource pool.
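
Both layers of scaling can be enabled with a couple of commands; the cluster, pool, and deployment names and the thresholds below are placeholders:

    # node level: let the cluster autoscaler grow and shrink the node pool
    gcloud container clusters update my-cluster --zone us-central1-a \
        --enable-autoscaling --min-nodes 1 --max-nodes 10 --node-pool default-pool

    # pod level: add a Horizontal Pod Autoscaler to an existing deployment
    kubectl autoscale deployment hello-web --cpu-percent=60 --min=1 --max=20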


Move Freely between On-premises and Clouds:

Google Kubernetes Engine runs Certified Kubernetes, ensuring portability across clouds and on-premises. There is no vendor lock-in, which means you're free to take your applications out of Kubernetes Engine and run them anywhere Kubernetes is supported, including on your own on-premises servers. You can tailor integrations such as monitoring, logging, and CI/CD using Google Cloud Platform (GCP) and third-party solutions in the ecosystem.


Run Securely on Google's Network:

Connect to and isolate clusters no matter where you are with fine-grained network policies using Google Cloud's global Virtual Private Cloud (VPC). Use public services behind a single global anycast IP address for seamless load balancing. Protect against DoS and other types of edge attacks on your containers.
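
Those fine-grained policies are expressed as Kubernetes NetworkPolicy objects; the sketch below, which assumes network policy enforcement is enabled on the cluster, allows only pods labelled app: frontend to reach pods labelled app: backend (apply it with kubectl apply -f policy.yaml):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend      # illustrative name
    spec:
      podSelector:
        matchLabels:
          app: backend
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend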


Pricing:

Google Cloud Platform is a clear winner when it comes to the cost of services. As you can see in the image below, a 2-vCPU, 8 GB RAM instance on GCP is priced at $50 per month, whereas the equivalent AWS instance is priced at $69 per month, a saving of roughly 27% on the same instance. You can save even more because billing for AWS is done on a per-hour basis, whereas Google Cloud Platform bills on a per-second basis. Moreover, Google offers additional discounts for long-term usage, and there are no upfront costs.


For more detailed information on pricing, see GCP documentation page here. You can use the Google Cloud Pricing Calculator to estimate costs.


Cluster Creation:

A cluster consists of at least one master node and multiple worker nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster. A cluster can be created either by running commands from the host system or through the GKE dashboard.
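
A minimal command-line sketch of cluster creation follows; the cluster name, zone, and machine type are placeholders:

    gcloud container clusters create demo-cluster \
        --zone us-central1-a --num-nodes 3 --machine-type n1-standard-2

    # fetch credentials so kubectl talks to the new cluster
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a
    kubectl get nodes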

Google Container Registry:

GCP provides a secure, single repository for all of your container images, called Google Container Registry (GCR). Its main image-management operations are listing images in a repository, adding tags, deleting tags, copying images to a new repository, and deleting images. To push a local image to Container Registry, you first tag it with the registry name and then push it.
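
A typical tag-and-push sequence looks roughly like this; PROJECT_ID and the image names are placeholders:

    # let the local Docker client authenticate against gcr.io
    gcloud auth configure-docker

    docker tag my-app:latest gcr.io/PROJECT_ID/my-app:v1
    docker push gcr.io/PROJECT_ID/my-app:v1

    # confirm the image is now in the registry
    gcloud container images list --repository=gcr.io/PROJECT_ID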


Normally, a user has to follow a series of commands or instructions to install an application from a cloud marketplace onto a server, and may run into difficulties understanding the steps and moving forward. Google Kubernetes Engine (GKE) simplifies this by installing marketplace applications with a single click. Together with the elasticity that containers bring, this has earned GKE a strong place in the cloud landscape.


The example screenshot below shows a one-click deployment of an application from the marketplace.


Summary:

Google App Engine is a flexible platform-as-a-service for building scalable web applications and mobile and IoT backends; it lets you focus on your code, freeing you from the operational details of deployment and infrastructure management. App Engine provides built-in services and APIs common to most applications, such as NoSQL data stores, Memcached, and a user authentication API, and it scales your application automatically in response to the amount of traffic it receives, so you only pay for the resources you use. Google Compute Engine provides highly customizable virtual machines with best-of-breed features, friendly pay-for-what-you-use pricing, and the option to deploy your code directly or via containers. Google Kubernetes Engine lets you use fully managed Kubernetes clusters to deploy, manage, and orchestrate containers at scale. GKE is a powerful cluster manager and orchestration system for running your Docker containers: Kubernetes Engine schedules your containers into the cluster, keeps them healthy, and manages them automatically based on requirements you define (such as CPU and memory).


For more detailed cloud-native information, please refer to Yobitel Communications.
