What are the different deployment strategies available with a Kubernetes Deployment?

  • Recreate
  • RollingUpdate

What is the Recreate strategy?

In the Recreate strategy:

  • All Pods of the earlier version are destroyed first.
  • New Pods are then created with the new version.
  • Problem: there is a window during which the application is down and cannot serve any requests.
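
Below is a minimal sketch of how this would be declared in a Deployment spec; only the strategy stanza is shown, with the surrounding fields elided (a full definition file appears later in this post):

spec:
  replicas: 3
  strategy:
    type: Recreate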

What is the RollingUpdate strategy?

In the RollingUpdate strategy:

  • Pods are replaced one by one: while one Pod of the earlier version is brought down, a Pod of the new version is brought up.
  • This is the default strategy.
  • There is never a time when the application is unavailable.
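
The pace of a rolling update can be tuned with the maxUnavailable and maxSurge fields under strategy.rollingUpdate (these are standard Deployment API fields; the values below are only illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod may be unavailable below the desired replica count during the update
      maxSurge: 1         # at most 1 Pod may be created above the desired replica count during the update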

How to specify which strategy to use?

You can specify the strategy in your deployment definition file under spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: think-app-deployment
  labels:
    app: think-app
    env: eat
spec:
  replicas: 3
  strategy:
    type: RollingUpdate    # <=== or Recreate
  selector:
    matchLabels:
      type: load-balancer
  template:
    metadata:
      name: think-app-pod
      labels:
        app: think-app
        type: load-balancer
    spec:
      containers:
      - name: thinkscholar-nginx-container
        image: nginx

Note: If you don’t specify a strategy, it defaults to RollingUpdate.
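
Once the definition file is saved, you can create or update the deployment with kubectl apply (the file name here is just a placeholder):

kubectl apply -f think-app-deployment.yaml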

Can I check which strategy an already-running deployment is using?

If you run the describe command on the deployment:

  • You will see StrategyType in the output.
  • If you look carefully at the events, you can also figure out which strategy was used.

kubectl describe deployment think-app-deployment
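
Alternatively, you can read the strategy type straight from the deployment spec with jsonpath:

kubectl get deployment think-app-deployment -o jsonpath='{.spec.strategy.type}'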

Output for a Recreate-type deployment:

Name:               think-app-deployment
Namespace:          default
CreationTimestamp:  Tue, 15 Mar 2016 14:48:04 -0700
Labels:             app=think-app
Selector:           app=think-app
Replicas:           3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:  app=think-app
  Containers:
   nginx:
    Image:        nginx:1.161
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:     think-app-deployment-1564180365 (3/3 replicas created)
NewReplicaSet:      think-app-deployment-3066724191 (1/1 replicas created)
Events:
  FirstSeen LastSeen    Count   From                    SubObjectPath   Type        Reason              Message
  --------- --------    -----   ----                    -------------   --------    ------              -------
  1m        1m          1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set think-app-deployment-2035384211 to 3
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set think-app-deployment-2035384211 to 0
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set think-app-deployment-1564180365 to 3

You will notice that when the Recreate strategy was used, the events indicate that the old replica set was scaled down to zero first, and only then was the new replica set scaled up to three.

Output for a RollingUpdate-type deployment:

Name:               think-app-deployment
Namespace:          default
CreationTimestamp:  Tue, 15 Mar 2016 14:48:04 -0700
Labels:             app=think-app
Selector:           app=think-app
Replicas:           3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType:       RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=think-app
  Containers:
   nginx:
    Image:        nginx:1.161
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:     think-app-deployment-1564180365 (3/3 replicas created)
NewReplicaSet:      think-app-deployment-3066724191 (1/1 replicas created)
Events:
  FirstSeen LastSeen    Count   From                    SubObjectPath   Type        Reason              Message
  --------- --------    -----   ----                    -------------   --------    ------              -------
  1m        1m          1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set think-app-deployment-2035384211 to 3
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set think-app-deployment-1564180365 to 1
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set think-app-deployment-2035384211 to 2
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set think-app-deployment-1564180365 to 2
  21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set think-app-deployment-2035384211 to 1
  21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set think-app-deployment-1564180365 to 3
  13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set think-app-deployment-2035384211 to 0
  13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set think-app-deployment-3066724191 to 1

You will notice that when the RollingUpdate strategy was used, the old replica set was scaled down one Pod at a time while the new replica set was simultaneously scaled up one Pod at a time.
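
To see these events yourself, you can trigger a rolling update by changing the Pod template, for example with kubectl set image, and then watch its progress (the new image tag below is just illustrative; the container name must match the one in the Pod template):

kubectl set image deployment/think-app-deployment nginx=nginx:1.17
kubectl rollout status deployment/think-app-deployment

If the new version misbehaves, kubectl rollout undo deployment/think-app-deployment rolls back to the previous replica set.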


