
Kubernetes ReplicaSet

We need multiple replicas of containers running at a time because:
  • Redundancy: fault-tolerant system
  • Scale: more requests can be served
  • Sharding: computation can be handled in a parallel manner
Multiple copies of pods can be created manually, but it’s a tedious process. We need a way to define and manage a replicated set of pods as a single entity. This is what a ReplicaSet does: it ensures the right types and number of pods are running correctly. Pods managed by a ReplicaSet are automatically rescheduled during a node failure or network partition.
When defining a ReplicaSet, we need (a minimal manifest sketch follows this list):
  • specification of the pods we want to create
  • desired number of replicas
  • a way of finding the pods controlled by the ReplicaSet
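A minimal sketch of such a manifest, assuming an nginx container; the name nginx-replicaset and the label app=nginx are illustrative, not from the original notes:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset       # unique within its namespace
spec:
  replicas: 3                  # desired number of replicas
  selector:
    matchLabels:
      app: nginx               # label query used to find the pods it manages
  template:                    # specification of the pods to create
    metadata:
      labels:
        app: nginx             # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.25

Applying this file with kubectl apply -f creates the ReplicaSet, which in turn creates three pods.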
Reconciliation Loops
This is the main concept behind how ReplicaSets work and it is fundamental to the design and implementation of Kubernetes. Here we deal with two states:
  • desired state is the state you want - the desired number of replicas
  • the current state is the observed state at the present moment - the number of pods presently running
  • The reconciliation loop runs constantly to check if there is a mismatch between the current and the desired state of the system.
  • If it finds a mismatch, then it takes the required actions to match the current state with what’s desired.
  • For example, in the case of replicating pods, it’ll decide whether to scale the number of pods up or down based on what’s specified in the ReplicaSet’s YAML. If there are 2 pods and we require 3, it’ll create a new pod (illustrated below).
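The loop’s effect is visible from kubectl: the DESIRED and CURRENT columns of kubectl get rs are exactly the two states being reconciled. A rough illustration, assuming the nginx-replicaset sketched above (output abbreviated):

kubectl get rs nginx-replicaset
# NAME               DESIRED   CURRENT   READY
# nginx-replicaset   3         2         2       <- mismatch: the controller creates one more pod
# ...a moment later the same command shows...
# nginx-replicaset   3         3         3       <- current state matches desired state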
Benefits:
  • goal-driven
  • self-healing
  • can be expressed in few lines of code
Relating Pods and ReplicaSets
  • pods and ReplicaSets are loosely coupled
  • ReplicaSets don’t own the pods they create
  • they use label queries to identify which set of pods they’re managing
This decoupling supports:
Adopting Existing Containers:
If we want to replicate an existing pod and if the ReplicaSet owned the pod, then we’d have to delete the pod and re-create it through a ReplicaSet. This would lead to downtime. But since they’re decoupled, the ReplicaSet can simply “adopt” the existing pod.
Quarantining Containers
If a pod is misbehaving and we want to investigate what’s wrong, we can isolate it by changing its labels instead of killing it. This will dissociate it from the ReplicaSet and/or service and consequently the controller will notice that a pod is missing and create a new one. The bad pod is available to developers for debugging.
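A hedged example of quarantining, assuming the pods carry the app=nginx label from the sketch above and that the bad pod is named nginx-replicaset-abc12 (an illustrative name):

# change the label so it no longer matches the ReplicaSet's selector
kubectl label pod nginx-replicaset-abc12 app=nginx-debug --overwrite
# the ReplicaSet now sees one pod missing and creates a replacement;
# the quarantined pod keeps running and can be inspected with kubectl logs / kubectl exec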
Designing with ReplicaSets
  • represent a single, scalable microservice in your app
  • every pod created is homogeneous and interchangeable
  • designed for stateless services
ReplicaSet spec
  • must have a unique name (unique within its namespace)
  • spec section that contains:
    • number of replicas
    • pod template
Pod Templates
  • pods are created using the pod template in the spec section
  • the ReplicaSet controller creates & submits the pod manifest to the API server directly
Labels
  • ReplicaSets use labels to filter the pods they’re tracking and responsible for
  • When a ReplicaSet is created, it queries the API server for the list of pods, filters it by labels. It adds/deletes pods based on what’s returned.
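The same label query can be run by hand; a sketch assuming the app=nginx label used above:

kubectl get pods -l app=nginx
# lists exactly the pods the ReplicaSet considers its own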
Scaling ReplicaSets
Imperative Scaling
kubectl scale replicaset replica-set-name --replicas=4
Don’t forget to update any text-file configurations you have so that they match the value set imperatively.
Declarative Scaling
Change the replicas field in the config file via version control and then apply it to the cluster.
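A hedged sketch, assuming the manifest above is saved as nginx-replicaset.yaml (an illustrative filename):

# edit the file: change replicas: 3 to replicas: 4, commit it to version control, then
kubectl apply -f nginx-replicaset.yaml
# the reconciliation loop notices desired (4) > current (3) and creates one more pod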
Autoscaling a ReplicaSet
K8s has a mechanism called horizontal pod autoscaling (HPA). It is called that because k8s differentiates between:
  • horizontal scaling: create additional replicas of a pod
  • vertical scaling: adding resources (CPU, memory) to a particular pod
HPA relies on a pod known as heapster running in your cluster to work correctly. This pod keeps track of metrics and provides an API that the HPA consumes when making scaling decisions.
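A hedged example of autoscaling on CPU, reusing the illustrative nginx-replicaset name:

kubectl autoscale rs nginx-replicaset --min=2 --max=5 --cpu-percent=80
# keeps the ReplicaSet between 2 and 5 replicas, targeting 80% CPU utilisation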
Note: there is no direct link between the HPA and the ReplicaSet object. Even so, it’s a bad idea to combine imperative/declarative replica management with autoscaling - it can lead to unexpected behaviour.
Deleting ReplicaSets
Deleting a ReplicaSet also deletes all the pods it created and managed. To delete only the ReplicaSet object and not the pods, use --cascade=false.
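Both forms, again using the illustrative name:

kubectl delete rs nginx-replicaset                   # deletes the ReplicaSet and its pods
kubectl delete rs nginx-replicaset --cascade=false   # deletes only the ReplicaSet; the pods keep running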
