What is the traditional way to control access to shared variables in Java?


The traditional way to control access to shared fields is to use synchronization. With synchronization, once a thread acquires the lock, it gets exclusive access to the synchronized block of code.
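As a minimal sketch of this (class and field names are illustrative), only one thread at a time can execute a block guarded by a given lock object:

```java
public class SharedState {
    private final Object lock = new Object();
    private int value;

    public void update(int newValue) {
        synchronized (lock) { // only one thread at a time can hold this lock
            value = newValue;
        }
    }

    public int read() {
        synchronized (lock) { // reading under the same lock guarantees visibility
            return value;
        }
    }
}
```

Any other thread that reaches a `synchronized (lock)` block while the lock is held simply blocks until the lock is released.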

What are the shortcomings of the traditional way to control access?

  • There is no way for a thread to check whether the lock it needs is already held by another thread and build logic based on that information.
  • A thread cannot limit how long it will wait to acquire a lock. If the thread holding the lock is delayed for any reason, such as a page fault or a CPU scheduling delay, the other threads waiting for that lock may keep waiting for a long time. This scenario has its worst effect when the blocked thread is a high-priority task (a hazard known as priority inversion).
  • There is always some probability of deadlock.
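For contrast, `java.util.concurrent.locks.ReentrantLock` addresses the first two points: `tryLock()` lets a thread test whether the lock is free, and `tryLock(timeout, unit)` bounds the wait. A minimal sketch (class, field, and timeout values are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int shared;

    // Returns true if the update happened, false if the lock
    // could not be acquired within the time limit.
    public boolean tryUpdate(int newValue) throws InterruptedException {
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                shared = newValue;
                return true;
            } finally {
                lock.unlock(); // always release in a finally block
            }
        }
        return false; // lock was busy; the caller can do something else
    }

    public int get() {
        lock.lock();
        try {
            return shared;
        } finally {
            lock.unlock();
        }
    }
}
```

This does not eliminate the shortcomings of locking, but it lets the thread react instead of blocking indefinitely.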

Locks are a fairly “heavyweight” mechanism for managing a simple operation such as incrementing a counter.

Won’t using a volatile variable take care of synchronization issues?

Well, if we are thinking in this direction, then we are confused about the correct usage of volatile variables.


Volatile variables are used to store shared state, and a single update operation, such as writing a value to a volatile variable, is instantly visible to all threads.

But incrementing a volatile variable involves three operations (read, modify, write), and to make this increment atomic we must depend on synchronization or compare-and-swap (CAS).

Volatile variables don’t provide any way for a sequence of operations to be atomic.
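A small sketch makes the lost update concrete (class name and iteration counts are illustrative): two threads each increment a volatile counter 100,000 times, and because `count++` is not atomic, the final value is usually less than 200,000:

```java
public class VolatileCounterDemo {
    // volatile guarantees visibility of writes, but count++ is still
    // three separate steps: read, add one, write back
    private static volatile int count;

    static int run() throws InterruptedException {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // another thread can interleave between read and write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        // Usually prints a value below 200000 because increments are lost
        System.out.println("count = " + run());
    }
}
```

Each lost increment happens when both threads read the same value, both add one, and both write back the same result.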


How do we implement a simple counter using traditional locking?

public class Counter {
     private int counter;

     public synchronized int getCounter() { return counter; }
     public synchronized int incrementCounter() { return ++counter; }
     public synchronized int decrementCounter() { return --counter; }
}

The incrementCounter() and decrementCounter() methods are synchronized. Hence, if one thread is performing an increment or decrement operation and another thread wants to enter either method, it must wait for the lock to become available.

The Counter class in the above example works reliably in the presence of little or no contention.

However, under heavy contention, performance will suffer dramatically, as the JVM spends more time dealing with scheduling threads and managing contention and queues of waiting threads and less time doing real work, like incrementing counters.


Enough of synchronization theory, what is Compare and Swap?

Most modern processors include support for multiprocessing. This support also includes instructions for updating shared variables in a way that can either detect or prevent concurrent access from other processors.


The most common approach taken by current processors is to implement a primitive called compare-and-swap, or CAS.

A CAS operation includes three operands — a memory location (V), the expected old value (A), and a new value (B).

The processor will atomically update the location to the new value if the value that is there matches the expected old value, otherwise it will do nothing. In either case, it returns the value that was at that location prior to the CAS instruction.

CAS effectively says “I think location V should have the value A; if it does, put B in it, otherwise, don’t change it but tell me what value is there now.”

Instructions like CAS allow an algorithm to execute a read-modify-write sequence without fear of another thread modifying the variable in the meantime, because if another thread did modify the variable, the CAS would detect it (and fail) and the algorithm could retry the operation.
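The JDK exposes this primitive through `java.util.concurrent.atomic`; `AtomicInteger.compareAndSet` returns a boolean rather than the old value, but the retry pattern is the same. A sketch (class and method names are illustrative) of a read-modify-write done with a CAS retry loop:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryDemo {
    private static final AtomicInteger value = new AtomicInteger(0);

    static void reset(int v) {
        value.set(v);
    }

    // Double the current value using a CAS retry loop.
    static int doubleValue() {
        while (true) {
            int current = value.get();   // read
            int next = current * 2;      // modify
            if (value.compareAndSet(current, next)) { // write only if unchanged
                return next;
            }
            // Another thread changed the value in between; retry.
        }
    }

    public static void main(String[] args) {
        reset(21);
        System.out.println(doubleValue()); // prints 42
    }
}
```

If another thread wins the race, `compareAndSet` fails without modifying anything, and the loop simply re-reads and tries again.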

Example code simulating the behavior of compare-and-swap using the synchronized keyword:

public class SimulatedCAS {
      private int counter;

      public synchronized int getCounter() { return counter; }

      public synchronized int compareAndSwap(int expectedValue, int newValue) {
          int oldCounter = counter;
          if (counter == expectedValue)
              counter = newValue;
          return oldCounter;
      }
}

Concurrent algorithms based on CAS are called lock-free (non-blocking), because threads do not ever have to wait for a lock (sometimes called a mutex or critical section).


If the CAS fails, the caller can retry the CAS operation or take other action as it sees fit.

Example code showing the counter class rewritten to use CAS instead of locking:

public class CasCounter {
     private SimulatedCAS counter = new SimulatedCAS();

     public int getCounter() {
         return counter.getCounter();
     }

     public int incrementCounter() {
         int oldValue = counter.getCounter();
         while (counter.compareAndSwap(oldValue, oldValue + 1) != oldValue)
             oldValue = counter.getCounter();
         return oldValue + 1;
     }
}
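In real code there is no need to simulate CAS at all: `java.util.concurrent.atomic.AtomicInteger` provides a lock-free counter directly, with `incrementAndGet()` performing the CAS retry loop internally. A sketch of the Counter class rewritten on top of it (class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger counter = new AtomicInteger(0);

    public int getCounter()       { return counter.get(); }
    public int incrementCounter() { return counter.incrementAndGet(); }
    public int decrementCounter() { return counter.decrementAndGet(); }
}
```

No method is synchronized, yet increments and decrements are atomic, because each one is a hardware CAS with retry.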

Rakesh Kalra

Hello, I am Rakesh Kalra. I have more than 15 years of experience working on IT projects of varied complexity and at different levels of roles. I have tried starting three startups of my own; though none of them succeeded, I gained a great deal of knowledge about business, customers, and the digital world. I love to travel, spend time with my family, and read self-development books.

