JDK7 ConcurrentHashMap principle

ConcurrentHashMap in JDK 7 (Segment-Based)

For the Java 8+ version (Node[] + per-bin locking), see concurrenthashmap-internals.md.

Architecture

ConcurrentHashMap
  └── Segment[16]                          (default concurrencyLevel = 16)
        ├── Segment[0] extends ReentrantLock
        │     └── volatile HashEntry[] table
        │           ├── [0] → Entry → Entry → null    (linked list only)
        │           ├── [1] → null
        │           └── [n] → Entry → null
        ├── Segment[1] extends ReentrantLock
        │     └── volatile HashEntry[] table
        │           └── ...
        └── Segment[15]
              └── volatile HashEntry[] table
                    └── ...

Two-level hashing: upper bits select the Segment, lower bits select the bin within that Segment.

Key Classes

Segment

static final class Segment<K,V> extends ReentrantLock {
    volatile int count;                    // number of entries in this segment
    int modCount;                          // structural modification count
    int threshold;                         // resize threshold for this segment
    volatile HashEntry<K,V>[] table;       // the hash table for this segment
    final float loadFactor;
}

Each Segment is an independent mini-HashMap with its own lock, count, and resize threshold.

HashEntry

static final class HashEntry<K,V> {
    final int hash;
    final K key;
    volatile V value;              // volatile for lock-free reads
    volatile HashEntry<K,V> next;  // volatile for lock-free reads (was final in JDK 5/6)
}

key and hash are final --- once created, a HashEntry's identity never changes. Only value and next can be updated.

get() --- Lock-Free

get(key)
     |
     |--- hash = hash(key.hashCode())
     |--- segmentIndex = (hash >>> segmentShift) & segmentMask
     |--- segment = segments[segmentIndex]          // volatile read
     |--- tab = segment.table                       // volatile read
     |--- e = tab[(tab.length - 1) & hash]          // volatile read
     |
     |--- while (e != null) {                       // linked list traversal
     |        if (e.hash == hash && key.equals(e.key))
     |            return e.value;                    // volatile read
     |        e = e.next;                            // volatile read
     |    }
     |--- return null

No lock acquired. Relies entirely on volatile reads of value and next.
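
The read path above can be sketched with a hypothetical MiniSegment class (not the JDK source); AtomicReferenceArray stands in for the UNSAFE.getObjectVolatile() reads the JDK performs on HashEntry[]:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

class MiniSegment<K, V> {
    static final class HashEntry<K, V> {
        final int hash;
        final K key;
        volatile V value;
        volatile HashEntry<K, V> next;
        HashEntry(int hash, K key, V value, HashEntry<K, V> next) {
            this.hash = hash; this.key = key; this.value = value; this.next = next;
        }
    }

    final AtomicReferenceArray<HashEntry<K, V>> table = new AtomicReferenceArray<>(8);

    // lock-free read path: only volatile reads, no lock acquired
    V get(K key, int hash) {
        HashEntry<K, V> e = table.get((table.length() - 1) & hash); // volatile read of bin head
        while (e != null) {                                         // linked list traversal
            if (e.hash == hash && key.equals(e.key))
                return e.value;                                     // volatile read
            e = e.next;                                             // volatile read
        }
        return null;
    }

    // head-prepend insert, used only to set up the demo
    // (the real put() also takes the segment's ReentrantLock first)
    void putForDemo(K key, V value, int hash) {
        int i = (table.length() - 1) & hash;
        table.set(i, new HashEntry<>(hash, key, value, table.get(i)));
    }
}
```

Even when two keys collide in the same bin, the traversal stays lock-free; only writers contend for the segment lock.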

put() --- Segment-Level Lock

put(key, value)
     |
     |--- hash = hash(key.hashCode())
     |--- segment = segmentFor(hash)
     |
     |--- segment.lock()                    // ReentrantLock on this segment only
     |    |
     |    |--- tab = segment.table
     |    |--- index = (tab.length - 1) & hash
     |    |--- first = tab[index]
     |    |
     |    |--- traverse linked list:
     |    |    for (e = first; e != null; e = e.next)
     |    |        if (e.hash == hash && key.equals(e.key))
     |    |            oldValue = e.value
     |    |            e.value = value      // update existing
     |    |            return oldValue
     |    |
     |    |--- [key not found → insert]
     |    |    tab[index] = new HashEntry(hash, key, value, first)
     |    |    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     |    |    HEAD PREPEND (new node.next = old first)
     |    |
     |    |--- count++
     |    |--- [count > threshold?] → rehash() (per-segment resize)
     |    |
     |--- segment.unlock()
     |
     |--- return null (new key)

Key difference from Java 8+: new nodes are HEAD-PREPENDED (not tail-appended). This is safe because get() reads tab[index] via volatile --- it sees either the old head or the new head, both consistent.
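
A toy illustration of why the prepend is publication-safe (PrependDemo and its Node class are illustrative, not JDK code): the new node points at the old head, so one volatile store of the bin reference publishes the whole chain atomically.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class PrependDemo {
    static final class Node {
        final String key;
        final Node next;   // the already-published suffix is never mutated
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    static final AtomicReference<Node> bin = new AtomicReference<>();

    // head prepend: build the new node first, then publish it with a
    // single volatile store of the bin head
    static void put(String key) {
        bin.set(new Node(key, bin.get()));
    }

    // reader: one volatile read of the head sees either the old chain or
    // the new chain, never a partially linked one
    static List<String> snapshot() {
        List<String> keys = new ArrayList<>();
        for (Node e = bin.get(); e != null; e = e.next) keys.add(e.key);
        return keys;
    }

    public static void main(String[] args) {
        put("A"); put("B"); put("C"); put("D");
        System.out.println(snapshot());   // [D, C, B, A], newest first
    }
}
```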

Resize --- Per-Segment

Each Segment resizes independently. Only the Segment being resized is locked.

rehash() --- called inside segment.lock()
     |
     |--- oldTable = segment.table
     |--- newTable = new HashEntry[oldTable.length * 2]
     |
     |--- for each bin in oldTable:
     |    |--- e = oldTable[i]
     |    |--- if (e == null) → skip
     |    |--- if (e.next == null) → newTable[e.hash & newMask] = e  (reuse single node)
     |    |--- else:
     |    |    |--- find lastRun (tail of same-destination nodes) → REUSE
     |    |    |--- create NEW nodes for everything before lastRun
     |    |    |    (same optimization as Java 8+)
     |
     |--- segment.table = newTable          // volatile write

Other Segments are completely unaffected --- no cooperative transfer, no ForwardingNode.
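
The lastRun step can be sketched for a single bin (Node and rehashBin are hypothetical names, not the JDK source): the trailing run of nodes that all map to the same new bin is reused as-is, and only the nodes before it are cloned.

```java
public class LastRunDemo {
    static final class Node {
        final int hash; final String key; final Node next;
        Node(int hash, String key, Node next) { this.hash = hash; this.key = key; this.next = next; }
    }

    // splits one non-empty old bin (head e) into newTable, whose doubled
    // capacity gives newMask = newTable.length - 1
    static void rehashBin(Node e, Node[] newTable, int newMask) {
        int lastIdx = e.hash & newMask;
        Node lastRun = e;
        for (Node q = e.next; q != null; q = q.next) {   // find where the trailing
            int idx = q.hash & newMask;                  // same-destination run starts
            if (idx != lastIdx) { lastIdx = idx; lastRun = q; }
        }
        newTable[lastIdx] = lastRun;                     // REUSE the whole tail run
        for (Node p = e; p != lastRun; p = p.next) {     // clone everything before it,
            int idx = p.hash & newMask;                  // prepending into the new bins
            newTable[idx] = new Node(p.hash, p.key, newTable[idx]);
        }
    }

    public static void main(String[] args) {
        // old chain A(h=1) → B(h=5) → C(h=1) → D(h=5) → E(h=5), newMask = 7
        Node E = new Node(5, "E", null), D = new Node(5, "D", E);
        Node C = new Node(1, "C", D), B = new Node(5, "B", C), A = new Node(1, "A", B);
        Node[] newTable = new Node[8];
        rehashBin(A, newTable, 7);
        System.out.println(newTable[5].next == D);  // true: the D→E tail run is reused
    }
}
```

If the entire chain maps to one new bin, lastRun stays at the head and nothing is cloned at all.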

Segment[0]: resizing (locked)     → only threads hitting Segment[0] are blocked
Segment[1]: normal operation      → fully concurrent
Segment[2]: normal operation      → fully concurrent
...
Segment[15]: normal operation     → fully concurrent

remove() --- Segment-Level Lock

remove(key)
     |
     |--- segment.lock()
     |    |
     |    |--- traverse linked list, tracking pred
     |    |--- [found?]
     |    |    |--- [pred == null] → tab[index] = e.next   (removed node was the head)
     |    |    |--- [else]         → pred.next = e.next    (volatile write, unlink in place)
     |    |    |--- count--
     |    |    |--- modCount++
     |    |
     |--- segment.unlock()

In JDK 5/6, HashEntry.next was final, so remove() had to rebuild the chain: it created new nodes for the prefix before the removed node and reused the suffix after it. JDK 7 made next volatile, so the node is simply unlinked in place under the segment lock.

size() --- Optimistic Passes, Locked Fallback

size()
     |
     |--- Pass 1 & 2: Try without locks (optimistic)
     |    |--- sum all segment.count values
     |    |--- sum all segment.modCount values
     |    |--- if modCount sum unchanged between two passes → return count sum
     |    |    (no structural modifications happened → count is accurate)
     |
     |--- Pass 3: Lock ALL segments (pessimistic fallback)
     |    |--- lock every segment
     |    |--- sum all counts
     |    |--- unlock every segment
     |    |--- return exact count

The optimistic approach avoids locking in the common case (no concurrent modifications during the two reads).

Concurrency Level

new ConcurrentHashMap<>(initialCapacity, loadFactor, concurrencyLevel);
//                                                    ^^^^^^^^^^^^^^^^
//                                                    default = 16

concurrencyLevel determines the number of Segments. It's rounded up to the nearest power of 2. Maximum theoretical concurrency = number of Segments (16 by default).

concurrencyLevel = 16 → 16 Segments → max 16 concurrent writers
concurrencyLevel = 1  → 1 Segment  → effectively a synchronized HashMap
concurrencyLevel = 64 → 64 Segments → max 64 concurrent writers (more memory)

Once created, the number of Segments is FIXED --- it never changes. Only the HashEntry[] table within each Segment grows.
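
The rounding to a power of two follows the shift-and-count loop in the JDK 7 constructor (ssize and sshift are the JDK's own variable names; sizeFor is a hypothetical helper for the demo):

```java
public class SegmentSizing {
    // returns { ssize (segment count), segmentShift, segmentMask }
    static int[] sizeFor(int concurrencyLevel) {
        int sshift = 0;
        int ssize = 1;                      // next power of two >= concurrencyLevel
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        return new int[] { ssize, 32 - sshift, ssize - 1 };
    }

    public static void main(String[] args) {
        int[] r = sizeFor(20);
        // 20 rounds up to 32 segments, segmentShift=27, segmentMask=31
        System.out.println(r[0] + " segments, shift=" + r[1] + ", mask=" + r[2]);
    }
}
```

So asking for 20 segments silently costs you 32; concurrencyLevel is best chosen as a power of two up front.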

Head Prepend vs Tail Append

JDK 7 prepends new nodes at the head. JDK 8+ appends at the tail.

JDK 7 put("D"):
  Before: tab[i] → A → B → C
  After:  tab[i] → D → A → B → C    (head prepend)

JDK 8+ put("D"):
  Before: tab[i] → A → B → C
  After:  tab[i] → A → B → C → D    (tail append)

Why JDK 7 prepends:

  • In the original design (JDK 5/6), HashEntry.next was final --- appending by mutating tail.next was impossible
  • JDK 7 kept the head prepend even after making next volatile
  • The only option: create a new head node with next = oldFirst, then swap tab[index]
  • This guaranteed readers always saw a complete, immutable chain --- no partial state
  • Note: put() still traverses the full list O(n) to check for duplicate keys before inserting
  • The prepend itself is O(1), but the full put() is O(n) due to the duplicate check

Why JDK 8+ changed to append:

  • Must traverse anyway to check for duplicate keys
  • Already at the tail when no match found --- append is free
  • Keeps nodes within a bin in insertion (arrival) order

Limitations (Why JDK 8 Replaced This Design)

| Limitation | Impact |
| --- | --- |
| Fixed segment count | Can't adapt to workload; too few = contention, too many = memory waste |
| Linked list only | O(n) worst case per bin (hash collision attacks) |
| Per-segment resize | Each segment resizes alone; no cooperative multi-thread transfer |
| No computeIfAbsent | Must use the error-prone putIfAbsent + get pattern |
| size() may lock all | Pessimistic fallback locks the entire map |
| Memory overhead | Segment object + ReentrantLock per segment |

Connection to Other Docs

  • concurrenthashmap-internals.md --- Java 8+ version (Node[], per-bin CAS + synchronized, TreeBin)
  • concurrenthashmap-reference.md --- Java 7 vs 8+ comparison table
  • java-thread-internals.md --- ReentrantLock details

size() Deep Dive

modCount --- Structural Modification Counter

Each Segment has a modCount that tracks structural changes (entries added/removed):

static final class Segment<K,V> extends ReentrantLock {
    int modCount;          // NOT volatile --- written under lock, read without lock
    volatile int count;    // number of entries
}

| Operation | modCount incremented? | Why |
| --- | --- | --- |
| put(newKey, v) | Yes | New entry added (structural change) |
| put(existingKey, v2) | No | Just a value update, no structural change |
| remove(key) | Yes | Entry removed (structural change) |
| rehash() | No | Same entries, different layout |

modCount is not volatile but is still visible to readers because the count volatile write that follows provides the memory barrier:

Writer: modCount++ → count = newCount (volatile write, flushes modCount too)
Reader: count = seg.count (volatile read) → modCount = seg.modCount (piggybacks, sees latest)

size() Algorithm --- Optimistic Passes, Then Lock All

static final int RETRIES_BEFORE_LOCK = 2;

public int size() {
    long sum;
    long last = 0;

    for (int retries = -1; ; retries++) {

        // Pessimistic fallback: lock all segments, take an exact sum, unlock
        if (retries == RETRIES_BEFORE_LOCK) {
            for (Segment<K,V> seg : segments) seg.lock();
            try {
                sum = 0;
                for (Segment<K,V> seg : segments)
                    sum += seg.count;
                return (int) sum;        // exact: nothing can change under lock
            } finally {
                for (Segment<K,V> seg : segments) seg.unlock();
            }
        }

        // Optimistic pass: no locks
        sum = 0;
        long modSum = 0;
        for (Segment<K,V> seg : segments) {
            sum += seg.count;        // volatile read
            modSum += seg.modCount;  // piggybacks on the volatile read of count
        }

        // Stable? modCount sum unchanged since the previous pass
        if (modSum == last)
            return (int) sum;
        last = modSum;
    }
}

(Simplified; the real JDK 7 source structures this as one loop with a try/finally around it, but the retry-then-lock logic is the same.)

Execution Flow

retries=-1 (Pass 1, no locks):
  sum = 5+3+7+2 = 17
  modSum = 10+4+8+1 = 23
  last(0) != modSum(23) → retry
  last = 23

retries=0 (Pass 2, no locks):
  sum = 5+3+7+2 = 17
  modSum = 10+4+8+1 = 23
  last(23) == modSum(23) → STABLE → return 17 ✓

--- OR if put() happened between passes ---

retries=0 (Pass 2, no locks):
  sum = 5+4+7+2 = 18          ← Segment[1] got a new entry
  modSum = 10+5+8+1 = 24      ← Segment[1].modCount incremented
  last(23) != modSum(24) → retry
  last = 24

retries=1 (Pass 3, no locks):
  sum = 5+4+7+2 = 18
  modSum = 10+5+8+1 = 24
  last(24) == modSum(24) → STABLE → return 18 ✓

--- OR, if the map keeps changing, the optimistic passes are exhausted ---

retries=2 (Pass 4, ALL SEGMENTS LOCKED):
  lock all 16 segments         ← no one can modify anything
  sum = 5+4+7+2 = 18           ← exact under the locks
  return 18 ✓, unlock all

Why This Works

  • If modSum is identical across two consecutive passes, no put(newKey)/remove() happened in between
  • The count values read in the second pass are consistent with each other
  • Avoids locking in the common case (low-contention reads)
  • Falls back to locking all segments only after 2 failed optimistic attempts

Segment Selection --- Upper Bits

Segment index uses the UPPER bits of the hash, bin index uses the LOWER bits:

int segmentShift = 28;    // 32 - 4 (for 16 segments)
int segmentMask  = 15;    // 16 - 1

int segmentIndex = (hash >>> segmentShift) & segmentMask;  // upper 4 bits
int binIndex     = (tab.length - 1) & hash;                // lower bits
hash (32 bits):
  SSSS_XXXX_XXXX_XXXX_XXXX_XXXX_XXXX_BBBB
  ^^^^                                ^^^^
  segment (fixed, upper 4 bits)       bin (variable, depends on tab.length)

Upper and lower bits are independent --- resizing a segment's table (changing bin count) never affects which segment a key belongs to. The bin mask is computed at read time from segment.table.length, not stored globally.

Segment[3] resizes 8 → 16 bins:
  Segment index: unchanged (upper bits)
  Bin index: recomputed with new mask (tab.length-1)
  
  Before: hash & 0x7 = 5  → tab[5]
  After:  hash & 0xF = 13 → tab[13]
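
The independence of the two indexes can be checked directly (IndexDemo is illustrative; the hash value is an arbitrary example whose upper 4 bits are 0xA and lower 4 bits are 0xD):

```java
public class IndexDemo {
    static final int SEGMENT_SHIFT = 28;  // 32 - 4, for 16 segments
    static final int SEGMENT_MASK  = 15;  // 16 - 1

    // segment index: upper bits only, independent of any table length
    static int segmentIndex(int hash) {
        return (hash >>> SEGMENT_SHIFT) & SEGMENT_MASK;
    }

    // bin index: lower bits, recomputed from the table length at read time
    static int binIndex(int hash, int tabLength) {
        return (tabLength - 1) & hash;
    }

    public static void main(String[] args) {
        int hash = 0xA000_000D;
        System.out.println("segment:  " + segmentIndex(hash));   // 10, resize-invariant
        System.out.println("bin @ 8:  " + binIndex(hash, 8));    // 5
        System.out.println("bin @ 16: " + binIndex(hash, 16));   // 13
    }
}
```

Doubling the table from 8 to 16 bins moves the key from bin 5 to bin 13, but the segment index stays 10, matching the diagram above.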