volatile is a Java keyword that provides two guarantees for a variable:
- Visibility --- a write by one thread is visible to subsequent reads of that variable by all other threads (no stale CPU-cache copies)
- Ordering --- prevents instruction reordering around the volatile read/write (acts as a memory barrier)
The Problem Without volatile
Each CPU core has its own L1/L2 cache. Without volatile, a thread may read a stale cached copy:
Thread A (Core 0) Thread B (Core 1)
┌──────────────┐ ┌──────────────┐
│ L1 Cache │ │ L1 Cache │
│ flag = false │ │ flag = false │ ← stale!
└──────┬───────┘ └──────┬───────┘
│ │
▼ ▼
┌─────────────────────────────────────────────┐
│ Main Memory │
│ flag = true ← Thread A wrote │
└─────────────────────────────────────────────┘
Thread A sets flag = true, but Thread B's core still sees false from its local cache.
With volatile
```java
volatile boolean flag = false;
```
Thread A writes flag = true
│
▼
Store barrier (flush CPU cache → main memory)
│
▼
Main Memory: flag = true
│
▼
Load barrier (invalidate other cores' cached copies)
│
▼
Thread B reads flag → true (forced to read from main memory)
Conceptually, every volatile write goes straight to main memory and every volatile read comes straight from main memory --- no stale cache. (On real hardware the JVM achieves this with memory barriers and the cache-coherence protocol rather than literal main-memory round trips.)
Happens-Before Guarantee
volatile establishes a happens-before relationship: everything before a volatile write is visible to any thread that subsequently reads that volatile variable.
```java
// Thread A
x = 42;               // (1) non-volatile write
volatile_flag = true; // (2) volatile write --- flushes (1) too

// Thread B
if (volatile_flag) {           // (3) volatile read --- sees (2)
    System.out.println(x);     // (4) guaranteed to see 42, not 0
}
```
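The happens-before edge above can be exercised in a small runnable sketch (the class and field names here are illustrative, not from any library):

```java
// Demonstrates the happens-before edge: a plain field published via a volatile flag.
class HappensBeforeDemo {
    static int x = 0;                     // non-volatile payload
    static volatile boolean flag = false; // volatile guard

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            x = 42;      // (1) plain write
            flag = true; // (2) volatile write publishes (1)
        });
        Thread reader = new Thread(() -> {
            while (!flag) { }          // (3) spin on the volatile read
            System.out.println(x);     // (4) guaranteed to print 42
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Without `volatile` on `flag`, the reader's spin loop might never terminate, because the JIT may hoist the read of a plain field out of the loop.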
This is why ConcurrentHashMap.get() works without locks --- the Node<K,V>[] table field is volatile and individual slots are read with volatile semantics, so writes to the table (including newly linked nodes) are visible to all reading threads.
volatile vs synchronized vs Atomic*
| Feature | volatile | synchronized | AtomicInteger etc. |
|---|---|---|---|
| Visibility | ✅ | ✅ | ✅ |
| Atomicity (single read/write) | ✅ | ✅ | ✅ |
| Atomicity (compound ops like i++) | ❌ | ✅ | ✅ (CAS) |
| Mutual exclusion | ❌ | ✅ | ❌ |
| Performance | Fastest | Slowest | Middle |
Key point: volatile only guarantees atomic single reads/writes. i++ is actually read → increment → write (3 steps), so volatile int i does NOT make i++ thread-safe.
```java
volatile int count = 0;

// Thread A and B both do:
count++; // NOT atomic! Read count (0) → increment (1) → write (1).
         // Both threads may read 0 and both write 1 → lost update.

// Fix 1: mutual exclusion
synchronized (lock) { count++; }

// Fix 2: CAS loop, lock-free
AtomicInteger atomicCount = new AtomicInteger();
atomicCount.incrementAndGet();
```
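A runnable contrast of the racy `volatile int` counter with its `AtomicInteger` fix (class name and iteration count are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Contrasts volatile int ++ (racy) with AtomicInteger.incrementAndGet() (correct).
class CounterDemo {
    static volatile int racy = 0;
    static final AtomicInteger safe = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                racy++;                 // read-modify-write: updates can be lost
                safe.incrementAndGet(); // CAS loop: never loses an update
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("atomic = " + safe.get());             // always 200000
        System.out.println("racy   = " + racy + " (often less)"); // nondeterministic
    }
}
```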
When to Use volatile
Use volatile when:
├── One thread writes, others only read (flag pattern)
├── Simple state flag (boolean running/stopped)
├── Publishing an immutable object reference
│ └── volatile Node[] table ← ConcurrentHashMap does this
└── Double-checked locking (singleton pattern)
Don't use volatile when:
├── Multiple threads write to the same variable
├── Compound operations (check-then-act, read-modify-write)
└── Need mutual exclusion
Common Patterns
1. Stop Flag
```java
class Worker implements Runnable {
    private volatile boolean running = true;

    public void stop() { running = false; }   // Thread A

    public void run() {
        while (running) {                     // Thread B --- sees update immediately
            doWork();
        }
    }
}
```
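A self-contained harness for the stop-flag pattern (the harness and sleep duration are my additions; `doWork()` is replaced by a busy spin so the example compiles on its own):

```java
// Runnable version of the stop-flag pattern: main starts the worker, then stops it.
class StopFlagDemo {
    static class Worker implements Runnable {
        private volatile boolean running = true;

        void stop() { running = false; }      // called from another thread

        public void run() {
            while (running) { /* spin: volatile read each pass sees stop() */ }
            System.out.println("stopped cleanly");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Worker worker = new Worker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(50);  // let the worker spin briefly
        worker.stop();     // volatile write: the worker's loop observes it and exits
        t.join();          // without volatile, this join could hang forever
    }
}
```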
2. ConcurrentHashMap's table Field
```java
// From ConcurrentHashMap source (simplified)
transient volatile Node<K,V>[] table;

// get() reads the table without any lock --- volatile guarantees visibility
public V get(Object key) {
    Node<K,V>[] tab = table; // volatile read --- always sees the latest array reference
    // ... traverse nodes (individual slots are also read with volatile semantics)
}
```
3. Double-Checked Locking (Singleton)
```java
class Singleton {
    private static volatile Singleton instance; // volatile prevents exposing a partially constructed object

    static Singleton getInstance() {
        if (instance == null) {                  // fast path --- no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // double-check under lock
                    instance = new Singleton();  // volatile ensures full construction is visible
                }
            }
        }
        return instance;
    }
}
```
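A quick concurrent sanity check of the pattern --- every thread hammering `getInstance()` must receive the same object (the harness, thread count, and class name `DclDemo` are illustrative):

```java
import java.util.concurrent.*;

// Calls getInstance() from many threads; every caller must get the same object.
class DclDemo {
    static class Singleton {
        private static volatile Singleton instance;

        static Singleton getInstance() {
            if (instance == null) {
                synchronized (Singleton.class) {
                    if (instance == null) instance = new Singleton();
                }
            }
            return instance;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        CompletionService<Singleton> cs = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < 8; i++) cs.submit(Singleton::getInstance);

        Singleton first = cs.take().get();
        boolean allSame = true;
        for (int i = 1; i < 8; i++) allSame &= (cs.take().get() == first);
        pool.shutdown();

        System.out.println(allSame ? "one instance" : "BUG: multiple instances");
    }
}
```

Note that a test like this can only catch the *second-instance* race; the partial-construction reordering bug described below is far rarer and generally not reproducible on demand.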
Deep Dive: Why Double-Checked Locking Needs volatile
Object Construction Is Not Atomic
instance = new Singleton() looks like one statement, but the JVM breaks it into 3 steps:
instance = new Singleton();
// JVM bytecode equivalent:
1. memory = allocate() // Allocate memory for the object
2. constructObject(memory) // Call constructor, initialize fields
3. instance = memory // Assign reference to the variable
Instruction Reordering
The JVM and CPU are allowed to reorder steps 2 and 3 for performance. Within a single thread the result is identical, but across threads it breaks:
Intended order: Reordered (legal!):
1. Allocate memory 1. Allocate memory
2. Run constructor 3. Assign reference ← BEFORE constructor!
3. Assign reference 2. Run constructor
The Bug Without volatile
```java
class Singleton {
    private int value;
    private static Singleton instance; // NOT volatile --- BUG

    Singleton() { this.value = 42; }
}
```
Dangerous timeline:
Thread A Thread B
──────── ────────
Check 1: instance == null ✓
Acquire lock
Check 2: instance == null ✓
1. Allocate memory @ 0xABC
3. instance = 0xABC ← REORDERED!
┌─────────────────────────┐
│ instance is now non-null │
│ but constructor hasn't │
│ run --- value is still 0 │
└─────────────────────────┘
Check 1: instance == null?
→ NO! instance is 0xABC (non-null)
→ Skip synchronized block entirely
→ return instance
→ instance.value == 0 ← BUG!
2. Constructor runs (value = 42)
Release lock
Thread B sees a non-null instance (step 3 ran before step 2), skips the lock, and uses an object whose constructor hasn't finished.
Memory Layout During the Bug
Heap Memory when Thread B reads instance:
┌─────────────────────────────┐
│ Singleton object @ 0xABC │
│ ┌─────────────────────────┐ │
│ │ value = 0 (default!) │ │ ← constructor hasn't run yet
│ │ (should be 42) │ │
│ └─────────────────────────┘ │
└─────────────────────────────┘
instance variable: 0xABC ← points to allocated but uninitialized object
How volatile Fixes It
```java
private static volatile Singleton instance;
```
volatile inserts a memory barrier that prevents reordering:
With volatile, the JVM MUST execute in this order:
1. memory = allocate() // Allocate
2. constructObject(memory) // Constructor MUST complete
── StoreStore barrier ── // volatile write barrier
3. instance = memory // Only THEN assign reference
── StoreLoad barrier ── // flush to main memory
The volatile write at step 3 guarantees everything before it (including the constructor) is fully visible to any thread that reads instance.
Thread A Thread B
──────── ────────
1. Allocate memory
2. Constructor runs (value = 42)
── memory barrier ──
3. instance = memory (volatile write)
Check 1: instance == null?
→ NO! (volatile read)
→ return instance
→ instance.value == 42 ✓ CORRECT
Why This Only Matters for Double-Checked Locking
In a simple synchronized singleton (no double-check), this bug can't happen because every thread goes through the lock, and synchronized itself provides a full memory barrier:
```java
// Safe without volatile --- but slower (every call acquires the lock)
static synchronized Singleton getInstance() {
    if (instance == null) {
        instance = new Singleton();
    }
    return instance;
}
```
The double-checked pattern optimizes by avoiding the lock on every call. But the first if (instance == null) check happens outside the lock --- that's where Thread B can see the reordered, partially-constructed object. volatile closes that gap.
Reordering Inside synchronized
A common question: can the JVM reorder instructions inside a synchronized block?
Yes --- reordering CAN happen inside synchronized. But it's invisible to any thread that acquires the same lock.
How synchronized Works
synchronized provides two guarantees:
- Mutual exclusion --- only one thread executes the block at a time
- Memory visibility --- on lock release, all writes are flushed; on lock acquire, all cached values are invalidated
Thread A Thread B
──────── ────────
synchronized (lock) { // waiting for lock...
x = 1; // (1) //
y = 2; // (2) //
z = x + y; // (3) //
} //
// lock released ──────────────────── // lock acquired
synchronized (lock) {
// sees x=1, y=2, z=3
// guaranteed by memory barrier
}
Inside Thread A's block, the JVM may reorder (1) and (2) --- but Thread B will never know, because it can't enter the block until Thread A exits. The lock release/acquire creates a full memory barrier that makes all writes visible in program order.
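The release/acquire barrier can be demonstrated with a shared counter guarded by one lock --- every increment from both threads is visible and none is lost (class name and iteration count are mine):

```java
// Two threads increment a shared counter under the same lock. The lock's
// release/acquire memory barriers make every increment visible in order,
// and mutual exclusion makes the read-modify-write atomic.
class SynchronizedCounterDemo {
    private int count = 0;
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) { count++; } // atomic under the lock
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounterDemo c = new SynchronizedCounterDemo();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        synchronized (c.lock) {          // acquire the same lock to read safely
            System.out.println(c.count); // always 200000
        }
    }
}
```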
Why Double-Checked Locking Breaks
The problem is the outer if check happens without the lock:
```java
if (instance == null) {                  // ← NO LOCK --- no memory barrier
    synchronized (Singleton.class) {
        if (instance == null) {
            instance = new Singleton();  // reordering inside here...
        }
    }                                    // lock release flushes writes
}
return instance;                         // ← Thread B reads here WITHOUT a lock
```
Thread A (inside synchronized) Thread B (OUTSIDE synchronized)
────────────────────────────── ─────────────────────────────────
1. Allocate memory
3. instance = memory ← reordered!
(still inside lock, legal) if (instance == null)?
→ NO! (sees non-null reference)
→ return instance ← NO LOCK
→ instance.value == 0 ← BUG!
2. Constructor runs (value = 42)
lock release (memory barrier) // Thread B never hits this barrier
Thread B bypasses the synchronized block entirely, so it never gets the memory barrier that would make the reordering invisible.
The Key Insight
| | Reordering inside? | Visible to threads WITH the same lock? | Visible to threads WITHOUT the lock? |
|---|---|---|---|
| synchronized | ✅ Yes | ✅ Safe | ❌ Unsafe |
| volatile | ❌ No | ✅ Safe | ✅ Safe |
- synchronized makes reordering invisible to threads that use the same lock
- volatile prevents reordering entirely --- safe for all threads, locked or not
- Double-checked locking needs volatile because the outer check is unsynchronized
Safe Patterns
```java
// Pattern 1: Full synchronization (safe, slower)
// Every thread acquires the lock → memory barrier → sees correct state
static synchronized Singleton getInstance() {
    if (instance == null) instance = new Singleton();
    return instance;
}

// Pattern 2: volatile + double-check (safe, faster)
private static volatile Singleton instance;

static Singleton getInstance() {
    if (instance == null) {                      // volatile read --- no reordering
        synchronized (Singleton.class) {
            if (instance == null)
                instance = new Singleton();      // volatile write --- barrier
        }
    }
    return instance;
}

// Pattern 3: Initialization-on-demand holder (safe, no volatile needed)
class Singleton {
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }
    static Singleton getInstance() {
        return Holder.INSTANCE; // class loading guarantees visibility
    }
}
```
Pattern 3 is often the cleanest --- the JVM's class loading mechanism provides the memory barrier for free.
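The holder idiom's laziness and safety can be observed directly --- the nested class (and thus the singleton) is not initialized until the first `getInstance()` call, and JLS class-initialization locking guards that first load across threads (the demo class and print statements are mine):

```java
// Shows that Holder is not initialized until getInstance() is first called,
// and that every subsequent call returns the same cached instance.
class HolderDemo {
    static class Singleton {
        private Singleton() { System.out.println("constructed"); }

        private static class Holder {
            static final Singleton INSTANCE = new Singleton();
        }

        static Singleton getInstance() { return Holder.INSTANCE; }
    }

    public static void main(String[] args) {
        System.out.println("before first call"); // nothing constructed yet
        Singleton a = Singleton.getInstance();   // triggers Holder class init
        Singleton b = Singleton.getInstance();   // same cached instance
        System.out.println(a == b);              // true
    }
}
```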
Connection to Other Steering Docs
- concurrenthashmap-internals.md --- volatile Node<K,V>[] table enables lock-free get() reads
- guava-best-practices.md --- Guava's LoadingCache uses volatile reads in its fast path (segment.getLiveEntry(key))