What is a Rollout?

When you deploy a new version of your application, that deployment is also called a rollout of the new version.

Kubernetes provides the kubectl rollout command to help you manage a Deployment, using subcommands such as "kubectl rollout undo deployment/abc".

Using the rollout command you can check a deployment's history and status, roll it back, or pause and resume it.
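
For quick reference, these are the most common rollout subcommands, shown here against the deployment name used throughout this post:

# Show progress of the current rollout
kubectl rollout status deployment/think-app-deployment

# Show the revision history
kubectl rollout history deployment/think-app-deployment

# Roll back to the previous revision
kubectl rollout undo deployment/think-app-deployment

# Pause and later resume a rollout
kubectl rollout pause deployment/think-app-deployment
kubectl rollout resume deployment/think-app-deployment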

How to check deployment status?

Command:

kubectl rollout status deployment/think-app-deployment

Output:

Waiting for rollout to finish: 0 of 3 updated replicas are available...
Waiting for rollout to finish: 0 of 3 updated replicas are available...
Waiting for rollout to finish: 1 of 3 updated replicas are available...
Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "think-app-deployment" successfully rolled out

What is a revision in a Deployment?

Each rollout of a Deployment is assigned a revision number, and when you deploy the next version under the same name, the revision number is incremented.

Revision numbers help you keep track of deployments and roll back if required.

So when you deploy think-app-deployment for the first time, revision number 1 is assigned to that rollout.

Later, when a new version of the same application is deployed, a new rollout is triggered and a new revision, revision 2, is created.

First deployment:  kubectl create -f deployment-definition.yml -> revision number is 1

Second deployment (same name, updated definition):  kubectl apply -f deployment-definition.yml -> revision number is 2
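
The post does not show deployment-definition.yml itself; a minimal manifest consistent with the outputs shown later (3 replicas, app=think-app label, nginx:1.16.1 container on port 80) could look like this sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: think-app-deployment
  labels:
    app: think-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: think-app
  template:
    metadata:
      labels:
        app: think-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1      # image seen in the revision details below
        ports:
        - containerPort: 80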

How to check deployment history?

Command:

kubectl rollout history deployment/think-app-deployment

Output:

deployments "think-app-deployment"
REVISION    CHANGE-CAUSE
1           <none>
2           kubectl apply --filename=deployment-def.yaml --record=true

How is the CHANGE-CAUSE column filled?

If you use the --record parameter when running a deployment command, that command is recorded as the change cause.

kubectl create -f deployment-def.yaml --record
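
A typical way to trigger the second revision is to update the container image; with --record the command lands in the CHANGE-CAUSE column. The image tag below is only an illustration, and on newer kubectl versions, where --record is deprecated, you can set the change-cause annotation yourself instead:

kubectl set image deployment/think-app-deployment nginx=nginx:1.17.0 --record

# Alternative: record the change cause explicitly via the annotation
kubectl annotate deployment/think-app-deployment kubernetes.io/change-cause="update nginx to 1.17.0"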

How do I check the rollout history of a specific revision?

To see the details of a specific revision, use the --revision flag:

kubectl rollout history deployment/think-app-deployment --revision=2

Output:

deployments "think-app-deployment" revision 2
  Labels: app=think-app
          pod-template-hash=1159050644
  Annotations:  kubernetes.io/change-cause=kubectl apply --filename=deployment-def.yaml --record=true
   Containers:
    nginx:
     Image: nginx:1.16.1
     Port:  80/TCP
      QoS Tier:
         cpu:      BestEffort
         memory:   BestEffort
     Environment Variables: <none>
   No volumes.

What happens to Pods and ReplicaSets during a deployment?

When you deploy for the first time, a new ReplicaSet is created and the Pods inside that ReplicaSet come up.

What is important to know is what happens when you deploy a second time:

  • The old ReplicaSet is not destroyed; instead, a new ReplicaSet is created first
  • Pods are destroyed in the old ReplicaSet one by one
  • At the same time, new Pods are brought up in the new ReplicaSet (see the strategy sketch below)
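
This one-Pod-at-a-time replacement is governed by the Deployment's rolling update strategy. The snippet below shows the relevant fields with their usual defaults; it is a sketch, not part of the post's original manifest:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # how many extra Pods may exist above the desired count
      maxUnavailable: 25%    # how many Pods may be unavailable during the update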

You can see this when you run the command kubectl get replicasets:

  • Old ReplicaSet with 0 Pods
  • New ReplicaSet with 3 Pods

kubectl get replicasets
NAME                            DESIRED   CURRENT   READY     AGE
think-app-deployment-985342     0         0         0         13m
think-app-deployment-989911     3         3         3         12m
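
The old ReplicaSet is kept around (scaled down to 0) precisely so that a rollback is possible. How many old ReplicaSets are retained is controlled by spec.revisionHistoryLimit, which defaults to 10; shown below as a sketch:

spec:
  revisionHistoryLimit: 10   # number of old ReplicaSets kept for rollback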

How do I roll back a deployment?

Command:

kubectl rollout undo deployment/think-app-deployment

This rolls back to the previous revision.

How do I roll back to a specific revision?

Specify the revision number to roll back to using the --to-revision flag:

kubectl rollout undo deployment/think-app-deployment --to-revision=1

What happens to the revision number during a rollback? Does it go back to the previous value?

No, the revision number is incremented during a rollback as well: the revision you roll back to is re-created as a new, higher revision (for example, rolling back from revision 2 to revision 1 replaces revision 1 with a new revision 3 in the history).

You can verify this by checking the history again:

kubectl rollout history deployment/think-app-deployment

What happens to the ReplicaSets during a rollback?

Just as Pods are killed in the old ReplicaSet and brought up in the new one during a deployment, during a rollback Pods are destroyed in the new ReplicaSet and brought back up in the old ReplicaSet.

So before rollback:

kubectl get replicasets
NAME                            DESIRED   CURRENT   READY     AGE
think-app-deployment-985342     0         0         0         13m
think-app-deployment-989911     3         3         3         12m
 

After rollback:

kubectl get replicasets
NAME                            DESIRED   CURRENT   READY     AGE
think-app-deployment-985342     3         3         3         14m
think-app-deployment-989911     0         0         0         13m
