Outline
1. CopyOnWriteArrayList: a thread-safe array-based list
2. ConcurrentLinkedQueue: a thread-safe linked queue
3. Overview of blocking queues in concurrent programming
4. The blocking queues provided by JUC
5. How LinkedBlockingQueue is implemented
6. A cluster synchronization mechanism built on two queues
1. CopyOnWriteArrayList: a thread-safe array-based list
(1) Initialization of CopyOnWriteArrayList
(2) Add/remove/update operations built on a lock + copy-on-write
(3) Why copy-on-write: reads take no lock and do not use Unsafe to read array elements
(4) Iteration over the array uses a snapshot mechanism
(5) The core idea: trade weak consistency for higher read concurrency
(6) Summary of copy-on-write
(1) Initialization of CopyOnWriteArrayList
The thread-safe counterpart of HashMap is ConcurrentHashMap.
The thread-safe counterpart of ArrayList is CopyOnWriteArrayList.
The thread-safe counterpart of LinkedList is ConcurrentLinkedQueue.
As its constructor shows, CopyOnWriteArrayList is backed by an Object array.
This array field is declared volatile, which guarantees visibility across threads: as soon as one thread replaces the array, the other threads immediately see the latest reference.
//A thread-safe variant of java.util.ArrayList in which all mutative operations
//(add, set, and so on) are implemented by making a fresh copy of the underlying array.
public class CopyOnWriteArrayList<E> implements List<E>, RandomAccess, Cloneable, java.io.Serializable {
...
//The lock protecting all mutators
final transient ReentrantLock lock = new ReentrantLock();
//The array, accessed only via getArray/setArray.
private transient volatile Object[] array;
//Creates an empty list.
public CopyOnWriteArrayList() {
setArray(new Object[0]);
}
//Sets the array.
final void setArray(Object[] a) {
array = a;
}
...
}
(2) Add/remove/update operations built on a lock + copy-on-write
I. An exclusive lock serializes write-write access to the array
Each CopyOnWriteArrayList holds an Object array plus a ReentrantLock. Every add/remove/update must acquire the lock first, so only one thread can mutate the array at a time, which makes concurrent mutations of the array thread-safe. Note that the lock must be acquired before getArray() is called.
Because every write acquires this exclusive lock, CopyOnWriteArrayList's write concurrency is poor. ConcurrentHashMap, by contrast, achieves high write concurrency through CAS updates plus fine-grained per-bucket locking.
II. Copy-on-write resolves read-write contention on the array
Copy-on-write means a write never modifies the current array in place. The current array's contents are first copied into a new array, the write is performed on the new array, and the new array is then assigned to the array field. The old array, no longer referenced by array, soon becomes eligible for garbage collection.
The copying is done with System.arraycopy() and Arrays.copyOf(). As Arrays.copyOf(elements, len + 1) shows, the new array is exactly one element larger than the original.
CopyOnWriteArrayList therefore never needs the resizing logic of ArrayList, which allocates a fixed-size array up front and grows it once a threshold is reached.
III. Summary
To make write-write access to the array safe, CopyOnWriteArrayList uses a lock.
To make read-write access to the array safe, it uses copy-on-write.
Together these guarantee both write-write and read-write safety for the array under concurrency.
//A thread-safe variant of java.util.ArrayList in which all mutative operations
//(add, set, and so on) are implemented by making a fresh copy of the underlying array.
public class CopyOnWriteArrayList<E> implements List<E>, RandomAccess, Cloneable, java.io.Serializable {
...
//The lock protecting all mutators
final transient ReentrantLock lock = new ReentrantLock();
//The array, accessed only via getArray/setArray.
private transient volatile Object[] array;
//Creates an empty list.
public CopyOnWriteArrayList() {
setArray(new Object[0]);
}
//Sets the array.
final void setArray(Object[] a) {
array = a;
}
//Gets the array. Non-private so as to also be accessible from CopyOnWriteArraySet class.
final Object[] getArray() {
return array;
}
//Add: Appends the specified element to the end of this list.
public boolean add(E e) {
final ReentrantLock lock = this.lock;
lock.lock();
try {
Object[] elements = getArray();
int len = elements.length;
Object[] newElements = Arrays.copyOf(elements, len + 1);
newElements[len] = e;
setArray(newElements);
return true;
} finally {
lock.unlock();
}
}
//Remove: Removes the element at the specified position in this list.
//Shifts any subsequent elements to the left (subtracts one from their indices).
//Returns the element that was removed from the list.
public E remove(int index) {
final ReentrantLock lock = this.lock;
lock.lock();
try {
Object[] elements = getArray();
int len = elements.length;
E oldValue = get(elements, index);
int numMoved = len - index - 1;
if (numMoved == 0) {
setArray(Arrays.copyOf(elements, len - 1));
} else {
//First create a new array of size len-1, one smaller than the original array
Object[] newElements = new Object[len - 1];
//Copy index elements from the original array, starting at 0, into the new array starting at position 0
System.arraycopy(elements, 0, newElements, 0, index);
//Copy numMoved elements from the original array, starting at index+1, into the new array starting at position index
System.arraycopy(elements, index + 1, newElements, index, numMoved);
setArray(newElements);
}
return oldValue;
} finally {
lock.unlock();
}
}
//Update: Replaces the element at the specified position in this list with the specified element.
public E set(int index, E element) {
final ReentrantLock lock = this.lock;
lock.lock();
try {
Object[] elements = getArray();
E oldValue = get(elements, index);
if (oldValue != element) {
int len = elements.length;
Object[] newElements = Arrays.copyOf(elements, len);
newElements[index] = element;
setArray(newElements);
} else {
//Not quite a no-op; ensures volatile write semantics
setArray(elements);
}
return oldValue;
} finally {
lock.unlock();
}
}
...
}
(3) Why copy-on-write: reads take no lock and do not use Unsafe to read array elements
Copy-on-write is used for mutations precisely so that get() needs no lock: a get() simply reads the array field and returns the element at the given index.
Because a write first updates a fresh copy of the original array, any number of readers running at the same time still see the original array, so reads and writes never conflict.
Moreover, the volatile array field is assigned only after the new array is fully populated, so a plain read of the volatile field is enough for a reader to observe the latest array.
Without copy-on-write, even if a writer updated an element inside the array referenced by array, later readers would only be guaranteed visibility of the volatile array reference itself, not of the elements inside the array. As long as the reference had not changed, a reader could still see stale elements, unless the elements were read with Unsafe.getObjectVolatile().
public class CopyOnWriteArrayList<E> implements List<E>, RandomAccess, Cloneable, java.io.Serializable {
...
//The array, accessed only via getArray/setArray.
private transient volatile Object[] array;
//Gets the array. Non-private so as to also be accessible from CopyOnWriteArraySet class.
final Object[] getArray() {
return array;
}
public E get(int index) {
//First obtain the array via getArray(), then locate the element at the given position via get()
return get(getArray(), index);
}
private E get(Object[] a, int index) {
return (E) a[index];
}
...
}
(4) Iteration over the array uses a snapshot mechanism
The Iterator of CopyOnWriteArrayList holds a snapshot array that points to whatever array the list referenced at the moment the iterator was created.
Iteration therefore traverses the snapshot; even if another thread replaces array in the meantime, an iteration already in progress is unaffected.
public class CopyOnWriteArrayListDemo {
static List<String> list = new CopyOnWriteArrayList<String>();
public static void main(String[] args) {
list.add("k");
System.out.println(list);
Iterator<String> iterator = list.iterator();
while (iterator.hasNext()) {
System.out.println(iterator.next());
}
}
}
public class CopyOnWriteArrayList<E> implements List<E>, RandomAccess, Cloneable, java.io.Serializable {
...
public Iterator<E> iterator() {
return new COWIterator<E>(getArray(), 0);
}
...
static final class COWIterator<E> implements ListIterator<E> {
private final Object[] snapshot;
private int cursor;
private COWIterator(Object[] elements, int initialCursor) {
cursor = initialCursor;
snapshot = elements;
}
...
}
...
}
(5) The core idea: trade weak consistency for higher read concurrency
The core idea of CopyOnWriteArrayList is to trade weak consistency for much better read-write concurrency.
The biggest drawback of its copy-on-write design is exactly this weak (eventual) consistency.
Suppose several threads read and write the array concurrently: a writer has finished populating the new array but has not yet assigned it to array. At that instant every reader (via get or iteration) still sees the old array's data, so readers and the writer observe different data at the same moment. This is the consistency problem inherent to copy-on-write: the data is only weakly (eventually) consistent.
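This snapshot behavior is easy to observe in a minimal, single-threaded sketch (the class and method names here are illustrative, not part of the JDK): an iterator created before a write keeps traversing the old array, sees none of the later changes, and never throws ConcurrentModificationException.

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowSnapshotDemo {
    //Returns how many elements an iterator created BEFORE the write sees.
    public static int snapshotSizeSeenByIterator() {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        Iterator<String> it = list.iterator(); //snapshot of the 1-element array
        list.add("b"); //copy-on-write replaces array; the snapshot is unaffected
        int seen = 0;
        while (it.hasNext()) { //no ConcurrentModificationException here
            it.next();
            seen++;
        }
        return seen; //1: the iterator still sees the old array; list.size() is 2
    }

    public static void main(String[] args) {
        System.out.println(snapshotSizeSeenByIterator());
    }
}
```

An iterator created after the second add would see both elements; the stale view of the older iterator is exactly the weak consistency described above.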
(6) Summary of copy-on-write
I. Advantages
Reads do not block reads, reads do not block writes, but writes block writes: only one thread may write at a time, while any number of threads may read concurrently.
II. Drawbacks
It trades space for time: during a write, memory briefly holds a complete duplicate of the array, which is costly. The copy is what lets heavy read traffic proceed without blocking on writes, but for a large array the footprint may be several times the array's size. In addition, statistics computed from an array copy may be inconsistent with the latest data.
III. When to use it
It suits read-heavy, write-light workloads in which heavy read traffic must not be slowed down by writes and statistics need not be real-time.
2. ConcurrentLinkedQueue: a thread-safe linked queue
(1) Introduction to ConcurrentLinkedQueue
(2) The constructor of ConcurrentLinkedQueue
(3) The offer() method of ConcurrentLinkedQueue
(4) The poll() method of ConcurrentLinkedQueue
(5) The peek() method of ConcurrentLinkedQueue
(6) The size() method of ConcurrentLinkedQueue
(1) Introduction to ConcurrentLinkedQueue
ConcurrentLinkedQueue is a thread-safe, non-blocking, unbounded linked queue.
It relies on CAS to keep concurrent operations on the queue safe.
The queue orders its nodes FIFO: every added element is appended at the tail, and retrieval always returns the element at the head.
The thread-safe counterpart of HashMap is ConcurrentHashMap.
The thread-safe counterpart of ArrayList is CopyOnWriteArrayList.
The thread-safe counterpart of LinkedList is ConcurrentLinkedQueue.
(2) The constructor of ConcurrentLinkedQueue
ConcurrentLinkedQueue is built on a linked list whose nodes are instances of its inner Node class.
The constructor initializes the head and the tail to the same Node whose item is null.
Each Node points to its successor through a next pointer, forming a singly linked list, while the queue's head and tail fields point to the two ends of the list.
public class ConcurrentLinkedQueueDemo {
public static void main(String[] args) {
ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();
queue.offer("张三");//enqueue at the tail
queue.offer("李四");//enqueue at the tail
queue.offer("王五");//enqueue at the tail
System.out.println(queue.peek());//return the head element without removing it
System.out.println(queue.poll());//return and remove the head element
System.out.println(queue.peek());//return the head element without removing it
}
}
//An unbounded thread-safe queue based on linked nodes.
//This queue orders elements FIFO (first-in-first-out).
//The head of the queue is that element that has been on the queue the longest time.
//The tail of the queue is that element that has been on the queue the shortest time.
//New elements are inserted at the tail of the queue,
//and the queue retrieval operations obtain elements at the head of the queue.
//A ConcurrentLinkedQueue is an appropriate choice when many threads will share access to a common collection.
//Like most other concurrent collection implementations, this class does not permit the use of null elements.
public class ConcurrentLinkedQueue<E> extends AbstractQueue<E> implements Queue<E>, java.io.Serializable {
...
private transient volatile Node<E> head;
private transient volatile Node<E> tail;
//Constructor: initializes the head and tail of the linked queue to the same Node whose item is null
//Creates a ConcurrentLinkedQueue that is initially empty.
public ConcurrentLinkedQueue() {
head = tail = new Node<E>(null);
}
private static class Node<E> {
volatile E item;
volatile Node<E> next;
private static final sun.misc.Unsafe UNSAFE;
private static final long itemOffset;
private static final long nextOffset;
static {
try {
UNSAFE = sun.misc.Unsafe.getUnsafe();
Class<?> k = Node.class;
itemOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("item"));
nextOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("next"));
} catch (Exception e) {
throw new Error(e);
}
}
Node(E item) {
UNSAFE.putObject(this, itemOffset, item);
}
boolean casItem(E cmp, E val) {
return UNSAFE.compareAndSwapObject(this, itemOffset, cmp, val);
}
void lazySetNext(Node<E> val) {
UNSAFE.putOrderedObject(this, nextOffset, val);
}
boolean casNext(Node<E> cmp, Node<E> val) {
return UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
}
}
...
}
(3) The offer() method of ConcurrentLinkedQueue
The key line is "p.casNext(null, newNode)": it switches p's next pointer from null to the new node, and the CAS guarantees that only one thread can perform this switch at a time.
Note: the tail pointer is not updated on every insertion but only every other node. This halves the number of CAS instructions executed and thus lowers their performance cost.
After the 1st element is inserted, tail points to the second-to-last node.
After the 2nd element is inserted, tail points to the last node.
After the 3rd element is inserted, tail points to the second-to-last node.
After the 4th element is inserted, tail points to the last node.
//An unbounded thread-safe queue based on linked nodes.
public class ConcurrentLinkedQueue<E> extends AbstractQueue<E> implements Queue<E>, java.io.Serializable {
...
private transient volatile Node<E> head;
private transient volatile Node<E> tail;
private static final sun.misc.Unsafe UNSAFE;
private static final long headOffset;
private static final long tailOffset;
static {
try {
UNSAFE = sun.misc.Unsafe.getUnsafe();
Class<?> k = ConcurrentLinkedQueue.class;
headOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("head"));
tailOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("tail"));
} catch (Exception e) {
throw new Error(e);
}
}
//Constructor: initializes the head and tail of the linked queue to the same Node whose item is null
//Creates a ConcurrentLinkedQueue that is initially empty.
public ConcurrentLinkedQueue() {
head = tail = new Node<E>(null);
}
public boolean offer(E e) {
checkNotNull(e);
final Node<E> newNode = new Node<E>(e);
//Inserting the 1st element: tail and head both point to the initial null node, p points to it too, and q is that node's next;
//q is clearly null, so p.casNext sets p.next to the 1st element; here p == t, and tail.next is now the 1st element;
//since p == t, offer returns true; head and tail still point to the initial null node, i.e. tail points to the second-to-last node;
//inserting the 2nd element: q is now the 1st element and no longer null; p points to tail, whose next is the 1st element, so p != q;
//since p and t are still equal, q is assigned to p, i.e. p now points to the 1st element, and a new loop iteration starts;
//in the new iteration, q (the 1st element's next) is null, so casNext is executed on the 1st element,
//linking the 2nd element as the 1st element's next; afterwards p != t, so casTail sets the 2nd element as tail;
//inserting the 3rd element works like the 1st, so tail again points to the second-to-last node;
//inserting the 4th element works like the 2nd, so tail again points to the last node;
for (Node<E> t = tail, p = t;;) {
Node<E> q = p.next;//p is the tail node, q is the node after the tail node
if (q == null) {
//path taken when inserting the 1st element
if (p.casNext(null, newNode)) {//link the new node as the successor of the tail node
if (p != t) {//CAS-update the tail pointer only every other node
casTail(t, newNode);
}
return true;
}
} else if (p == q) {
p = (t != (t = tail)) ? t : head;
} else {
//path taken when inserting the 2nd element
p = (p != t && t != (t = tail)) ? t : q;
}
}
}
private boolean casTail(Node<E> cmp, Node<E> val) {
return UNSAFE.compareAndSwapObject(this, tailOffset, cmp, val);
}
...
}
(4) The poll() method of ConcurrentLinkedQueue
poll() dequeues the head node of the linked queue.
Note: the head pointer is likewise not updated on every removal but only every other node, which reduces the number of CAS instructions executed and thus lowers their performance cost.
//An unbounded thread-safe queue based on linked nodes.
public class ConcurrentLinkedQueue<E> extends AbstractQueue<E> implements Queue<E>, java.io.Serializable {
...
private transient volatile Node<E> head;
private transient volatile Node<E> tail;
private static final sun.misc.Unsafe UNSAFE;
private static final long headOffset;
private static final long tailOffset;
static {
try {
UNSAFE = sun.misc.Unsafe.getUnsafe();
Class<?> k = ConcurrentLinkedQueue.class;
headOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("head"));
tailOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("tail"));
} catch (Exception e) {
throw new Error(e);
}
}
//Constructor: initializes the head and tail of the linked queue to the same Node whose item is null
//Creates a ConcurrentLinkedQueue that is initially empty.
public ConcurrentLinkedQueue() {
head = tail = new Node<E>(null);
}
public E poll() {
restartFromHead:
for (;;) {
for (Node<E> h = head, p = h, q;;) {
E item = p.item;
if (item != null && p.casItem(item, null)) {
if (p != h) {//CAS-update the head pointer only every other node
updateHead(h, ((q = p.next) != null) ? q : p);
}
return item;
} else if ((q = p.next) == null) {
updateHead(h, p);
return null;
} else if (p == q) {
continue restartFromHead;
} else {
p = q;
}
}
}
}
final void updateHead(Node<E> h, Node<E> p) {
if (h != p && casHead(h, p)) {
h.lazySetNext(h);
}
}
private boolean casHead(Node<E> cmp, Node<E> val) {
return UNSAFE.compareAndSwapObject(this, headOffset, cmp, val);
}
...
}
(5) The peek() method of ConcurrentLinkedQueue
peek() returns the head element of the queue without removing it.
//An unbounded thread-safe queue based on linked nodes.
public class ConcurrentLinkedQueue<E> extends AbstractQueue<E> implements Queue<E>, java.io.Serializable {
...
private transient volatile Node<E> head;
public E peek() {
restartFromHead:
for (;;) {
for (Node<E> h = head, p = h, q;;) {
E item = p.item;
if (item != null || (q = p.next) == null) {
updateHead(h, p);
return item;
} else if (p == q) {
continue restartFromHead;
} else {
p = q;
}
}
}
}
final void updateHead(Node<E> h, Node<E> p) {
if (h != p && casHead(h, p)) {
h.lazySetNext(h);
}
}
private boolean casHead(Node<E> cmp, Node<E> val) {
return UNSAFE.compareAndSwapObject(this, headOffset, cmp, val);
}
...
}
(6) The size() method of ConcurrentLinkedQueue
size() returns the number of elements in the linked queue. It takes no lock; it simply traverses every node of the queue starting from the head.
//An unbounded thread-safe queue based on linked nodes.
public class ConcurrentLinkedQueue<E> extends AbstractQueue<E> implements Queue<E>, java.io.Serializable {
...
public int size() {
int count = 0;
for (Node<E> p = first(); p != null; p = succ(p)) {
if (p.item != null) {
if (++count == Integer.MAX_VALUE) {
break;
}
}
}
return count;
}
//Returns the first live (non-deleted) node on list, or null if none.
//This is yet another variant of poll/peek; here returning the first node, not element.
//We could make peek() a wrapper around first(), but that would cost an extra volatile read of item,
//and the need to add a retry loop to deal with the possibility of losing a race to a concurrent poll().
Node<E> first() {
restartFromHead:
for (;;) {
for (Node<E> h = head, p = h, q;;) {
boolean hasItem = (p.item != null);
if (hasItem || (q = p.next) == null) {
updateHead(h, p);
return hasItem ? p : null;
} else if (p == q) {
continue restartFromHead;
} else {
p = q;
}
}
}
}
//Returns the successor of p, or the head node if p.next has been linked to self,
//which will only be true if traversing with a stale pointer that is now off the list.
final Node<E> succ(Node<E> p) {
Node<E> next = p.next;
return (p == next) ? head : next;
}
...
}
What happens if a thread enqueues or dequeues while size() is traversing?
Suppose the traversal starts at the head and is halfway through. If a thread enqueues at the tail, the traversal will see the newly added element in time: enqueueing sets the tail node's next pointer to the new node, and writing next is a volatile write, so it is visible to the traversal. If a thread dequeues from the head, however, the traversal cannot tell that an element has already left the queue.
In summary, the thread-safe collections ConcurrentHashMap, CopyOnWriteArrayList, and ConcurrentLinkedQueue all sacrifice the consistency of their statistics to optimize performance under concurrency. To keep concurrent writes fast they rely heavily on lock-free CAS operations, and many read operations, such as the common size(), take no lock at all. When using these collections, therefore, be prepared for statistics that may be inconsistent under concurrency.
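A small sketch of this weakly consistent size() (the class and method names are illustrative): while a writer thread is still enqueueing, a size() probe is only an approximation; once all writers have finished and no other thread mutates the queue, size() settles on the exact count.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqSizeDemo {
    //A writer thread enqueues `total` elements while the main thread probes
    //size(); the mid-flight probe is only an approximation, because size()
    //traverses the nodes without any lock, but after the writer has finished
    //size() returns the exact count.
    public static int finalSize(int total) {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        Thread writer = new Thread(() -> {
            for (int i = 0; i < total; i++) {
                queue.offer(i);
            }
        });
        writer.start();
        int probe = queue.size(); //may be any value between 0 and total
        try {
            writer.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return queue.size(); //stable: no writer is running any more
    }

    public static void main(String[] args) {
        System.out.println(finalSize(10_000));
    }
}
```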
3. Overview of blocking queues in concurrent programming
(1) What is a blocking queue
(2) The methods a blocking queue provides
(3) Applications of blocking queues
(1) What is a blocking queue
A queue is a linear structure that allows removal only at one end and insertion only at the other; the end that accepts insertions is called the tail, and the end that allows removals is called the head.
A blocking queue adds two operations on top of an ordinary queue:
I. Blocking insertion
When the queue is full, a thread that keeps trying to add data is blocked until an element is released from the queue.
II. Blocking removal
When the queue is empty, a thread trying to take an element is blocked until a new element is added.
A blocking queue in effect implements a producer/consumer model: producers add data to the queue and consumers take data from it; a full queue blocks producers and an empty queue blocks consumers.
The elements of a blocking queue may be stored in an array, a linked list, or another structure. How many elements a queue can hold depends on its capacity, so blocking queues come in bounded and unbounded variants.
A bounded queue has a fixed capacity; an unbounded queue does not. In practice even an unbounded queue has a size limit, but the limit is so large that the queue can be treated as unbounded.
Note: in an unbounded queue there is, in theory, no "full" state, so insertion never blocks.
Blocking queues are used in many places, such as thread pools and ZooKeeper, and are the usual way to implement a producer/consumer model.
(2) The methods a blocking queue provides
A blocking queue supports insertion, removal, and examination, and these behave differently depending on whether the queue is full or empty.
I. Throw an exception
Adding an element with add(e) throws an exception when the queue is full.
Removing an element with remove(e) throws an exception when the queue is empty.
II. Return a special value
offer(e) returns true or false to report whether the element was enqueued.
poll() dequeues and returns an element from the head, or null if the queue is empty.
III. Block indefinitely
When the queue is full, put(e) blocks the inserting thread until the queue has room or the thread is interrupted.
When the queue is empty, take() blocks the removing thread until the queue becomes non-empty.
IV. Time out
The timed variants simply add a bounded wait to offer() and poll().
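The four method families can be exercised deterministically on a capacity-1 ArrayBlockingQueue. A minimal sketch (the demo class is illustrative, not from the original):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueApiDemo {
    public static String run() {
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        StringBuilder log = new StringBuilder();
        q.add("x"); //succeeds, the queue is now full
        try {
            q.add("y"); //full -> add() throws IllegalStateException
        } catch (IllegalStateException e) {
            log.append("add-throws;");
        }
        log.append("offer=").append(q.offer("y")).append(";"); //special value: false
        try {
            //timed variant: waits up to 50ms for space, then gives up and returns false
            log.append("timedOffer=").append(q.offer("y", 50, TimeUnit.MILLISECONDS)).append(";");
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        log.append("poll=").append(q.poll()).append(";"); //returns "x"
        log.append("pollEmpty=").append(q.poll()); //special value: null
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

put() and take() are omitted here because on a full (or empty) queue they would block this single thread forever; they are exercised in the producer/consumer sketches below only with a second thread present.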
(3) Applications of blocking queues
A blocking queue can be thought of as a thread-level message queue,
while message-oriented middleware can be thought of as a process-level message queue.
A blocking queue can therefore buffer requests issued by threads and so smooth out traffic peaks.
4. The blocking queues provided by JUC
(1) ArrayBlockingQueue: an array-based blocking queue
(2) LinkedBlockingQueue: a linked-list-based blocking queue
(3) PriorityBlockingQueue: a priority blocking queue
(4) DelayQueue: a delayed blocking queue
(5) SynchronousQueue: a blocking queue with no storage
(6) LinkedTransferQueue: a hybrid blocking queue
(7) LinkedBlockingDeque: a double-ended blocking queue
(1) ArrayBlockingQueue: an array-based blocking queue
ArrayBlockingQueue is a blocking queue backed by an array. Its constructors let you specify the array length, fair or non-fair locking, and an initial collection of elements.
ArrayBlockingQueue uses a ReentrantLock to resolve thread contention and Condition objects to block and wake threads.
//A bounded BlockingQueue backed by an array.
//This queue orders elements FIFO (first-in-first-out).
//The head of the queue is that element that has been on the queue the longest time.
//The tail of the queue is that element that has been on the queue the shortest time.
//New elements are inserted at the tail of the queue,
//and the queue retrieval operations obtain elements at the head of the queue.
public class ArrayBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
//The queued items
final Object[] items;
//items index for next take, poll, peek or remove
int takeIndex;
//items index for next put, offer, or add
int putIndex;
//Number of elements in the queue
int count;
//Main lock guarding all access
final ReentrantLock lock;
//Condition for waiting takes
private final Condition notEmpty;
//Condition for waiting puts
private final Condition notFull;
//Creates an ArrayBlockingQueue with the given (fixed) capacity and default access policy.
public ArrayBlockingQueue(int capacity) {
this(capacity, false);
}
//Creates an ArrayBlockingQueue with the given (fixed) capacity and the specified access policy.
public ArrayBlockingQueue(int capacity, boolean fair) {
if (capacity <= 0) {
throw new IllegalArgumentException();
}
this.items = new Object[capacity];
lock = new ReentrantLock(fair);
notEmpty = lock.newCondition();
notFull = lock.newCondition();
}
//Inserts the specified element at the tail of this queue,
//waiting for space to become available if the queue is full.
public void put(E e) throws InterruptedException {
checkNotNull(e);
final ReentrantLock lock = this.lock;
lock.lockInterruptibly();
try {
while (count == items.length) {
notFull.await();
}
enqueue(e);
} finally {
lock.unlock();
}
}
//Inserts element at current put position, advances, and signals.
//Call only when holding lock.
private void enqueue(E x) {
final Object[] items = this.items;
items[putIndex] = x;
if (++putIndex == items.length) {
putIndex = 0;
}
count++;
notEmpty.signal();
}
public E take() throws InterruptedException {
final ReentrantLock lock = this.lock;
lock.lockInterruptibly();
try {
while (count == 0) {
notEmpty.await();
}
return dequeue();
} finally {
lock.unlock();
}
}
//Returns the number of elements in this queue.
public int size() {
final ReentrantLock lock = this.lock;
lock.lock();
try {
return count;
} finally {
lock.unlock();
}
}
...
}
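The put()/take() pair above can be sketched as a small producer/consumer demo (the class and method names are illustrative): the producer inserts far more elements than the capacity allows, so it repeatedly blocks on notFull until the consumer's take() frees a slot and signals it.

```java
import java.util.concurrent.ArrayBlockingQueue;

public class PutTakeDemo {
    //One producer put()s more elements than the capacity, so it must block
    //until the consumer take()s; the notFull/notEmpty Conditions do the waking.
    public static int consumeAll(int capacity, int total) {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(capacity);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < total; i++) {
                    q.put(i); //blocks (notFull.await) while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int consumed = 0;
        try {
            for (int i = 0; i < total; i++) {
                q.take(); //blocks (notEmpty.await) while the queue is empty
                consumed++;
            }
            producer.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return consumed;
    }

    public static void main(String[] args) {
        System.out.println(consumeAll(2, 100));
    }
}
```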
(2) LinkedBlockingQueue: a linked-list-based blocking queue
LinkedBlockingQueue is a blocking queue backed by a linked list. Its capacity is optional and defaults to Integer.MAX_VALUE; because that default is so large, LinkedBlockingQueue is commonly called an unbounded queue.
//An optionally-bounded BlockingQueue based on linked nodes.
//This queue orders elements FIFO (first-in-first-out).
//The head of the queue is that element that has been on the queue the longest time.
//The tail of the queue is that element that has been on the queue the shortest time.
//New elements are inserted at the tail of the queue,
//and the queue retrieval operations obtain elements at the head of the queue.
//Linked queues typically have higher throughput than array-based queues
//but less predictable performance in most concurrent applications.
public class LinkedBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
...
//The capacity bound, or Integer.MAX_VALUE if none
private final int capacity;
//Current number of elements
private final AtomicInteger count = new AtomicInteger();
//Head of linked list.
transient Node<E> head;
//Tail of linked list.
private transient Node<E> last;
//Lock held by take, poll, etc
private final ReentrantLock takeLock = new ReentrantLock();
//Lock held by put, offer, etc
private final ReentrantLock putLock = new ReentrantLock();
//Wait queue for waiting takes
private final Condition notEmpty = takeLock.newCondition();
//Wait queue for waiting puts
private final Condition notFull = putLock.newCondition();
//Creates a LinkedBlockingQueue with a capacity of Integer#MAX_VALUE.
public LinkedBlockingQueue() {
this(Integer.MAX_VALUE);
}
//Creates a LinkedBlockingQueue with the given (fixed) capacity.
public LinkedBlockingQueue(int capacity) {
if (capacity <= 0) throw new IllegalArgumentException();
this.capacity = capacity;
last = head = new Node<E>(null);
}
//Inserts the specified element at the tail of this queue,
//waiting if necessary for space to become available.
public void put(E e) throws InterruptedException {
if (e == null) throw new NullPointerException();
int c = -1;
Node<E> node = new Node<E>(e);
final ReentrantLock putLock = this.putLock;
final AtomicInteger count = this.count;
putLock.lockInterruptibly();
try {
while (count.get() == capacity) {
notFull.await();
}
enqueue(node);
c = count.getAndIncrement();
if (c + 1 < capacity) {
notFull.signal();
}
} finally {
putLock.unlock();
}
if (c == 0) {
signalNotEmpty();
}
}
//Links node at end of queue.
private void enqueue(Node<E> node) {
last = last.next = node;
}
//Signals a waiting take. Called only from put/offer (which do not otherwise ordinarily lock takeLock.)
private void signalNotEmpty() {
final ReentrantLock takeLock = this.takeLock;
takeLock.lock();
try {
notEmpty.signal();
} finally {
takeLock.unlock();
}
}
public E take() throws InterruptedException {
E x;
int c = -1;
final AtomicInteger count = this.count;
final ReentrantLock takeLock = this.takeLock;
takeLock.lockInterruptibly();
try {
while (count.get() == 0) {
notEmpty.await();
}
x = dequeue();
c = count.getAndDecrement();
if (c > 1) {
notEmpty.signal();
}
} finally {
takeLock.unlock();
}
if (c == capacity) {
signalNotFull();
}
return x;
}
private E dequeue() {
Node<E> h = head;
Node<E> first = h.next;
h.next = h; // help GC
head = first;
E x = first.item;
first.item = null;
return x;
}
public int size() {
return count.get();
}
...
}
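A brief usage sketch of the bounded variant (the class name is illustrative): unlike put(), the untimed offer() never blocks, so the capacity limit shows up as a false return value, and size() is served from the AtomicInteger count shown above rather than from a traversal.

```java
import java.util.concurrent.LinkedBlockingQueue;

public class LbqDemo {
    public static String run() {
        //bounded variant: capacity 2 instead of the default Integer.MAX_VALUE
        LinkedBlockingQueue<String> q = new LinkedBlockingQueue<>(2);
        StringBuilder log = new StringBuilder();
        log.append(q.offer("a")).append(";"); //true
        log.append(q.offer("b")).append(";"); //true, the queue is now full
        log.append(q.offer("c")).append(";"); //false: capacity reached, no blocking
        log.append(q.poll()).append(";");     //"a" (FIFO order)
        log.append(q.size());                 //1, read from the AtomicInteger count
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```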
(3) PriorityBlockingQueue: a priority blocking queue
PriorityBlockingQueue is an unbounded blocking queue that supports custom element priorities. By default elements are ordered by their natural ascending order; the priority rule can be customized by implementing the elements' compareTo() method or by supplying a Comparator.
PriorityBlockingQueue is backed by an array that grows automatically. As an application, message middleware with priority support can be built on a priority blocking queue.
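A minimal sketch with a supplied Comparator (the class name and the length-based ordering are illustrative): regardless of insertion order, poll() always removes the element the comparator ranks smallest.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PbqDemo {
    //Orders strings by length, shortest first; insertion order is irrelevant.
    public static String drain() {
        PriorityBlockingQueue<String> q =
                new PriorityBlockingQueue<>(11, Comparator.comparingInt(String::length));
        q.offer("ccc");
        q.offer("a");
        q.offer("bb");
        StringBuilder out = new StringBuilder();
        while (!q.isEmpty()) {
            out.append(q.poll()); //always removes the "smallest" element of the heap
        }
        return out.toString(); //"abbccc": shortest strings come out first
    }

    public static void main(String[] args) {
        System.out.println(drain());
    }
}
```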
//An unbounded BlockingQueue that uses the same ordering rules as class PriorityQueue and supplies blocking retrieval operations.
//While this queue is logically unbounded, attempted additions may fail due to resource exhaustion (causing OutOfMemoryError).
//This class does not permit null elements.
public class PriorityBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
//Priority queue represented as a balanced binary heap: the two children of queue[n] are queue[2*n+1] and queue[2*(n+1)].
//The priority queue is ordered by comparator, or by the elements' natural ordering,
//if comparator is null: For each node n in the heap and each descendant d of n, n <= d.
//The element with the lowest value is in queue[0], assuming the queue is nonempty.
private transient Object[] queue;
//The number of elements in the priority queue.
private transient int size;
//The comparator, or null if priority queue uses elements' natural ordering.
private transient Comparator<? super E> comparator;
//Lock used for all public operations
private final ReentrantLock lock;
//Condition for blocking when empty
private final Condition notEmpty;
//Spinlock for allocation, acquired via CAS.
private transient volatile int allocationSpinLock;
//Creates a PriorityBlockingQueue with the default initial capacity (11) that
//orders its elements according to their Comparable natural ordering.
public PriorityBlockingQueue() {
this(DEFAULT_INITIAL_CAPACITY, null);
}
//Creates a PriorityBlockingQueue with the specified initial capacity that
//orders its elements according to their Comparable natural ordering.
public PriorityBlockingQueue(int initialCapacity) {
this(initialCapacity, null);
}
//Creates a PriorityBlockingQueue with the specified initial capacity that orders its elements according to the specified comparator.
public PriorityBlockingQueue(int initialCapacity, Comparator<? super E> comparator) {
if (initialCapacity < 1) {
throw new IllegalArgumentException();
}
this.lock = new ReentrantLock();
this.notEmpty = lock.newCondition();
this.comparator = comparator;
this.queue = new Object[initialCapacity];
}
//Inserts the specified element into this priority queue.
//As the queue is unbounded, this method will never block.
public void put(E e) {
offer(e); // never need to block
}
//Inserts the specified element into this priority queue.
//As the queue is unbounded, this method will never return false.
public boolean offer(E e) {
if (e == null) {
throw new NullPointerException();
}
final ReentrantLock lock = this.lock;
lock.lock();
int n, cap;
Object[] array;
while ((n = size) >= (cap = (array = queue).length)) {
tryGrow(array, cap);
}
try {
Comparator<? super E> cmp = comparator;
if (cmp == null) {
siftUpComparable(n, e, array);
} else {
siftUpUsingComparator(n, e, array, cmp);
}
size = n + 1;
notEmpty.signal();
} finally {
lock.unlock();
}
return true;
}
//Tries to grow array to accommodate at least one more element (but normally expand by about 50%),
//giving up (allowing retry) on contention (which we expect to be rare). Call only while holding lock.
private void tryGrow(Object[] array, int oldCap) {
lock.unlock(); // must release and then re-acquire main lock
Object[] newArray = null;
if (allocationSpinLock == 0 && UNSAFE.compareAndSwapInt(this, allocationSpinLockOffset, 0, 1)) {
try {
int newCap = oldCap + ((oldCap < 64) ? (oldCap + 2) : (oldCap >> 1));
if (newCap - MAX_ARRAY_SIZE > 0) {
int minCap = oldCap + 1;
if (minCap < 0 || minCap > MAX_ARRAY_SIZE) {
throw new OutOfMemoryError();
}
newCap = MAX_ARRAY_SIZE;
}
if (newCap > oldCap && queue == array) {
newArray = new Object[newCap];
}
} finally {
allocationSpinLock = 0;
}
}
if (newArray == null) {// back off if another thread is allocating
Thread.yield();
}
lock.lock();
if (newArray != null && queue == array) {
queue = newArray;
System.arraycopy(array, 0, newArray, 0, oldCap);
}
}
private static <T> void siftUpComparable(int k, T x, Object[] array) {
Comparable<? super T> key = (Comparable<? super T>) x;
while (k > 0) {
int parent = (k - 1) >>> 1;
Object e = array[parent];
if (key.compareTo((T) e) >= 0) {
break;
}
array[k] = e;
k = parent;
}
array[k] = key;
}
private static <T> void siftUpUsingComparator(int k, T x, Object[] array, Comparator<? super T> cmp) {
while(k > 0) {
int parent = (k - 1) >>> 1;
Object e = array[parent];
if (cmp.compare(x, (T) e) >= 0) {
break;
}
array[k] = e;
k = parent;
}
array[k] = x;
}
public E take() throws InterruptedException {
final ReentrantLock lock = this.lock;
lock.lockInterruptibly();
E result;
try {
while ( (result = dequeue()) == null) {
notEmpty.await();
}
} finally {
lock.unlock();
}
return result;
}
public int size() {
final ReentrantLock lock = this.lock;
lock.lock();
try {
return size;
} finally {
lock.unlock();
}
}
...
}
(4) DelayQueue: a delayed blocking queue
DelayQueue is an unbounded blocking queue that supports delayed retrieval of elements; it is built on top of the priority queue PriorityQueue.
Elements inserted into a DelayQueue are ordered by their custom delay, i.e. the elements are sorted by expiration time, and an element can be taken only once its delay has dropped to zero or below.
Typical applications of DelayQueue:
I. Automatically cancelling an order whose payment has timed out
II. Automatically discarding a task that has timed out
III. Implementing message middleware
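A minimal sketch of delayed retrieval (the Task class, its names, and the delays are illustrative): each element implements Delayed, and take() blocks until the head element's delay has expired, so elements come out in expiration order rather than insertion order.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    //A task that becomes available `delayMillis` after its creation.
    static class Task implements Delayed {
        final String name;
        final long expireAt; //absolute expiry time in nanoseconds

        Task(String name, long delayMillis) {
            this.name = name;
            this.expireAt = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        @Override public long getDelay(TimeUnit unit) {
            return unit.convert(expireAt - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static String takeInOrder() {
        DelayQueue<Task> q = new DelayQueue<>();
        q.offer(new Task("late", 120));  //expires second
        q.offer(new Task("early", 30)); //expires first despite being added later
        try {
            //take() blocks until each head element's delay has elapsed
            return q.take().name + "," + q.take().name;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(takeInOrder());
    }
}
```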
//An unbounded BlockingQueue of Delayed elements, in which an element can only be taken when its delay has expired.
//The head of the queue is that Delayed element whose delay expired furthest in the past.
//If no delay has expired there is no head and poll will return null.
//Expiration occurs when an element's getDelay(TimeUnit.NANOSECONDS) method returns a value less than or equal to zero.
//Even though unexpired elements cannot be removed using take or poll, they are otherwise treated as normal elements.
//For example, the size method returns the count of both expired and unexpired elements.
//This queue does not permit null elements.
public class DelayQueue<E extends Delayed> extends AbstractQueue<E> implements BlockingQueue<E> {
private final transient ReentrantLock lock = new ReentrantLock();
private final PriorityQueue<E> q = new PriorityQueue<E>();
//Thread designated to wait for the element at the head of the queue.
//When a thread becomes the leader, it waits only for the next delay to elapse, but other threads await indefinitely.
//The leader thread must signal some other thread before returning from take() or poll(...),
//unless some other thread becomes leader in the interim.
//Whenever the head of the queue is replaced with an element with an earlier expiration time,
//the leader field is invalidated by being reset to null, and some waiting thread,
//but not necessarily the current leader, is signalled.
//So waiting threads must be prepared to acquire and lose leadership while waiting.
private Thread leader = null;
//Condition signalled when a newer element becomes available at the head of the queue or a new thread may need to become leader.
private final Condition available = lock.newCondition();
//Creates a new {@code DelayQueue} that is initially empty.
public DelayQueue() {
}
//Inserts the specified element into this delay queue.
//As the queue is unbounded this method will never block.
public void put(E e) {
offer(e);
}
//Inserts the specified element into this delay queue.
public boolean offer(E e) {
final ReentrantLock lock = this.lock;
lock.lock();
try {
q.offer(e);
if (q.peek() == e) {
leader = null;
available.signal();
}
return true;
} finally {
lock.unlock();
}
}
//Retrieves and removes the head of this queue,
//waiting if necessary until an element with an expired delay is available on this queue.
public E take() throws InterruptedException {
final ReentrantLock lock = this.lock;
lock.lockInterruptibly();
try {
for (;;) {
E first = q.peek();
if (first == null) {
available.await();
} else {
long delay = first.getDelay(NANOSECONDS);
if (delay <= 0) {
return q.poll();
}
first = null; // don't retain ref while waiting
if (leader != null) {
available.await();
} else {
Thread thisThread = Thread.currentThread();
leader = thisThread;
try {
available.awaitNanos(delay);
} finally {
if (leader == thisThread) {
leader = null;
}
}
}
}
}
} finally {
if (leader == null && q.peek() != null) {
available.signal();
}
lock.unlock();
}
}
...
}
public class PriorityQueue<E> extends AbstractQueue<E> implements java.io.Serializable {
//Priority queue represented as a balanced binary heap: the two children of queue[n] are queue[2*n+1] and queue[2*(n+1)].
//The priority queue is ordered by comparator, or by the elements' natural ordering, if comparator is null:
//For each node n in the heap and each descendant d of n, n <= d.
//The element with the lowest value is in queue[0], assuming the queue is nonempty.
transient Object[] queue;
//The number of elements in the priority queue.
private int size = 0;
//The comparator, or null if priority queue uses elements' natural ordering.
private final Comparator<? super E> comparator;
public E peek() {
return (size == 0) ? null : (E) queue[0];
}
//Inserts the specified element into this priority queue.
public boolean offer(E e) {
if (e == null) {
throw new NullPointerException();
}
modCount++;
int i = size;
if (i >= queue.length) {
grow(i + 1);
}
size = i + 1;
if (i == 0) {
queue[0] = e;
} else {
siftUp(i, e);
}
return true;
}
//Increases the capacity of the array.
private void grow(int minCapacity) {
int oldCapacity = queue.length;
// Double size if small; else grow by 50%
int newCapacity = oldCapacity + ((oldCapacity < 64) ? (oldCapacity + 2) : (oldCapacity >> 1));
// overflow-conscious code
if (newCapacity - MAX_ARRAY_SIZE > 0) {
newCapacity = hugeCapacity(minCapacity);
}
queue = Arrays.copyOf(queue, newCapacity);
}
private void siftUp(int k, E x) {
if (comparator != null) {
siftUpUsingComparator(k, x);
} else {
siftUpComparable(k, x);
}
}
@SuppressWarnings("unchecked")
private void siftUpComparable(int k, E x) {
Comparable<? super E> key = (Comparable<? super E>) x;
while (k > 0) {
int parent = (k - 1) >>> 1;
Object e = queue[parent];
if (key.compareTo((E) e) >= 0) {
break;
}
queue[k] = e;
k = parent;
}
queue[k] = key;
}
@SuppressWarnings("unchecked")
private void siftUpUsingComparator(int k, E x) {
while (k > 0) {
int parent = (k - 1) >>> 1;
Object e = queue[parent];
if (comparator.compare(x, (E) e) >= 0) {
break;
}
queue[k] = e;
k = parent;
}
queue[k] = x;
}
...
}
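The siftUp() machinery above is what makes poll() always return the smallest remaining element. A small usage sketch (the helper method name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    //offer() sifts each element up the binary heap, preserving the invariant
    //parent <= children, so poll() always removes queue[0], the minimum
    public static List<Integer> drainSorted(int... values) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        for (int v : values) {
            heap.offer(v); //siftUp runs here
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            out.add(heap.poll()); //elements come out in ascending order
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(drainSorted(5, 1, 4, 2, 3)); // [1, 2, 3, 4, 5]
    }
}
```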
(5)SynchronousQueue, a blocking queue with no storage structure
SynchronousQueue has no internal container to store data. When a producer adds an element and no consumer is there to take it, the producer blocks; when a consumer tries to take an element and no producer has added one, the consumer blocks as well.
The essence of SynchronousQueue is that it exploits this zero-capacity property to achieve immediate communication between a producer thread and a consumer thread, which makes it particularly suitable for handing data between two threads in real time.
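This hand-off can be sketched as follows (the class and method names are illustrative): put() blocks until a consumer calls take(), so the two threads meet at the exchange point.

```java
import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    //hands one value from a producer thread to the calling (consumer) thread;
    //put() blocks until take() is called, and vice versa
    public static String handOff(String message) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();
        Thread producer = new Thread(() -> {
            try {
                queue.put(message); //blocks until a consumer takes the element
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String received = queue.take(); //blocks until the producer puts
        producer.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handOff("hello"));
    }
}
```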
Thread pools implement the producer/consumer model on top of blocking queues. When a task is submitted to a thread pool, it is first placed into a blocking queue, and worker threads in the pool then take tasks from that queue and process them.
Executors.newCachedThreadPool() is built on SynchronousQueue and returns a thread pool that caches threads. If the pool grows larger than what the current tasks require, idle threads are reclaimed flexibly; when the number of tasks grows, the pool keeps creating new worker threads to handle them.
public class Executors {
...
//Creates a thread pool that creates new threads as needed,
//but will reuse previously constructed threads when they are available.
//These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks.
//Calls to execute will reuse previously constructed threads if available.
//If no existing thread is available, a new thread will be created and added to the pool.
//Threads that have not been used for sixty seconds are terminated and removed from the cache.
//Thus, a pool that remains idle for long enough will not consume any resources.
//Note that pools with similar properties but different details (for example, timeout parameters)
//may be created using ThreadPoolExecutor constructors.
public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
}
...
}
(6)LinkedTransferQueue, a combination of blocking queue and TransferQueue
LinkedTransferQueue is an unbounded blocking TransferQueue based on a linked list.
A blocking queue blocks producer or consumer threads according to the state of the queue; a TransferQueue additionally lets a producer, after producing an element, wait until a consumer has actually consumed it before returning.
LinkedTransferQueue combines TransferQueue and LinkedBlockingQueue. Since SynchronousQueue is itself implemented internally on top of a transfer mechanism, LinkedTransferQueue can be seen as a SynchronousQueue with blocking-queue capacity added.
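The difference between plain queue behavior and transfer behavior can be sketched like this (names are illustrative): add() returns immediately, while transfer() blocks the producer until a consumer has received the element.

```java
import java.util.concurrent.LinkedTransferQueue;

public class TransferQueueDemo {
    //demonstrates the two modes of LinkedTransferQueue
    public static String demo() throws InterruptedException {
        LinkedTransferQueue<String> queue = new LinkedTransferQueue<>();
        queue.add("queued"); //returns immediately, element sits in the queue
        Thread producer = new Thread(() -> {
            try {
                queue.transfer("handed-off"); //blocks until taken by a consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String first = queue.take();  //FIFO: the element added with add()
        String second = queue.take(); //the transferred element; unblocks the producer
        producer.join();
        return first + "," + second;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // queued,handed-off
    }
}
```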
(7)LinkedBlockingDeque, a double-ended blocking queue
LinkedBlockingDeque is a double-ended blocking queue based on a linked list. Elements can be inserted and removed at both ends, which can cut lock contention under concurrency roughly in half.
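A small sketch of the two-ended operations (names are illustrative):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class DequeDemo {
    //elements can be inserted and removed at both the head and the tail
    public static String demo() throws InterruptedException {
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>(4);
        deque.putLast("b");  //deque: [b]
        deque.putFirst("a"); //head insertion, deque: [a, b]
        deque.putLast("c");  //tail insertion, deque: [a, b, c]
        return deque.takeFirst() + deque.takeLast() + deque.takeFirst();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // acb
    }
}
```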
5.How LinkedBlockingQueue Is Implemented
(1)Design analysis of blocking queues
(2)The bounded queue LinkedBlockingQueue
(3)LinkedBlockingQueue's put() method
(4)LinkedBlockingQueue's take() method
(5)LinkedBlockingQueue splits lock responsibilities across two locks
(6)LinkedBlockingQueue's size() method and iteration
(7)Comparing the linked-list-based LinkedBlockingQueue with the array-based ArrayBlockingQueue
(1)Design analysis of blocking queues
The defining behavior of a blocking queue is: if the queue is empty, consumer threads are blocked; if the queue is full, producer threads are blocked.
Implementing this behavior raises two questions: how do we block and wake a thread when a particular condition holds, and what container should store the queue's data?
Blocking and waking threads can be done with wait/notify or with Condition. The queue's data can be stored in an array or a linked list.
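Those design decisions can be sketched as a minimal bounded blocking queue. This is an illustrative simplification with a single lock and two Conditions, not the JDK implementation:

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

//a minimal bounded blocking queue: one lock, two Conditions, linked-list storage
public class SimpleBlockingQueue<E> {
    private final Queue<E> items = new LinkedList<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  //producers wait here
    private final Condition notEmpty = lock.newCondition(); //consumers wait here

    public SimpleBlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await(); //queue full: block the producer, release the lock
            }
            items.offer(e);
            notEmpty.signal(); //wake one blocked consumer
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await(); //queue empty: block the consumer, release the lock
            }
            E e = items.poll();
            notFull.signal(); //wake one blocked producer
            return e;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleBlockingQueue<Integer> queue = new SimpleBlockingQueue<>(2);
        queue.put(1);
        queue.put(2);
        System.out.println(queue.take() + "," + queue.take()); // 1,2
    }
}
```

Note the `while` loops around await(): a woken thread must re-check the condition, since another thread may have changed the queue between the signal and the wake-up.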
(2)The bounded queue LinkedBlockingQueue
一.A concurrency-safe unbounded queue
For example ConcurrentLinkedQueue, which has no bound and no size limit. It is simply a singly linked list into which data can be added without restriction. Continuously adding data to an unbounded queue can eventually exhaust memory.
二.A concurrency-safe bounded queue
For example LinkedBlockingQueue, which is bounded and has a size limit. It is also a singly linked list; once the limit is reached, adding to the queue blocks. A bounded queue therefore caps the size of the in-memory queue and keeps it from growing without limit until it blows up memory.
(3)LinkedBlockingQueue's put() method
put() adds an element, blocking the adding thread when the queue is full.
First, the element to add is wrapped in a Node object, which represents one node of the linked list.
Then ReentrantLock.lockInterruptibly() is used to acquire an interruptible lock. Locking both guarantees the safety of adding data to the queue and keeps the queue length from exceeding its capacity.
Next, enqueue() stores the wrapped Node object at the tail of the linked list, and an AtomicInteger is incremented to track the number of elements currently in the blocking queue.
Finally, the AtomicInteger is read to decide whether the queue has reached its capacity.
Note the important field notFull: it is a Condition object used to block and wake producer threads. If the number of elements equals the maximum capacity, notFull.await() is called to block the producer thread; if the number of elements is still below the maximum capacity, notFull.signal() is called to wake a producer thread.
public class LinkedBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
...
private final int capacity;//maximum capacity of the queue, Integer.MAX_VALUE by default
private final AtomicInteger count = new AtomicInteger();//number of elements in the queue
transient Node<E> head;//head node of the linked list
private transient Node<E> last;//tail node of the linked list
//two locks separate put() from take(), improving concurrency
private final ReentrantLock takeLock = new ReentrantLock();//lock for dequeue
private final ReentrantLock putLock = new ReentrantLock();//lock for enqueue
//two Conditions, used to block and wake dequeuing threads and enqueuing threads respectively
private final Condition notEmpty = takeLock.newCondition();//condition queue for take()
private final Condition notFull = putLock.newCondition();//condition queue for put()
//Creates a LinkedBlockingQueue with a capacity of Integer#MAX_VALUE.
public LinkedBlockingQueue() {
this(Integer.MAX_VALUE);
}
//Creates a LinkedBlockingQueue with the given (fixed) capacity.
public LinkedBlockingQueue(int capacity) {
if (capacity <= 0) throw new IllegalArgumentException();
this.capacity = capacity;
last = head = new Node<E>(null);
}
//Inserts the specified element at the tail of this queue,
//waiting if necessary for space to become available.
public void put(E e) throws InterruptedException {
if (e == null) throw new NullPointerException();
int c = -1;
//wrap the element to add in a Node object
Node<E> node = new Node<E>(e);
final ReentrantLock putLock = this.putLock;
final AtomicInteger count = this.count;//current number of elements in the queue
putLock.lockInterruptibly();//acquire an interruptible lock
try {
//note the important field notFull: it is a Condition object used to block and wake producer threads
//if the current number of elements equals the maximum capacity, notFull.await() blocks the producer thread
while (count.get() == capacity) {
notFull.await();//block the current thread and release the lock
}
//link the wrapped Node object into the list
enqueue(node);
//increment the element count via the AtomicInteger; the old value is used below to decide whether to signal
c = count.getAndIncrement();
//if the queue still has spare capacity after this insert, wake another waiting producer
if (c + 1 < capacity) {
notFull.signal();
}
} finally {
putLock.unlock();//release the lock
}
if (c == 0) {
signalNotEmpty();
}
}
//Links node at end of queue.
private void enqueue(Node<E> node) {
//node first becomes the next of the current last,
//then last is advanced to its next (i.e. node)
last = last.next = node;
}
//Signals a waiting take. Called only from put/offer (which do not otherwise ordinarily lock takeLock.)
private void signalNotEmpty() {
final ReentrantLock takeLock = this.takeLock;
takeLock.lock();
try {
notEmpty.signal();
} finally {
takeLock.unlock();
}
}
...
}
(4)LinkedBlockingQueue's take() method
take() retrieves an element, blocking the retrieving thread when the queue is empty.
First, ReentrantLock.lockInterruptibly() acquires an interruptible lock.
Then, if the element count is 0, notEmpty.await() blocks the consumer thread.
Otherwise dequeue() removes an element from the head of the linked list, and the AtomicInteger element count is decremented.
Finally, if the queue held more than one element before the removal, notEmpty.signal() is called to wake another blocked consumer thread.
public class LinkedBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
...
private final int capacity;//maximum capacity of the queue, Integer.MAX_VALUE by default
private final AtomicInteger count = new AtomicInteger();//number of elements in the queue
transient Node<E> head;//head node of the linked list
private transient Node<E> last;//tail node of the linked list
//two locks separate put() from take(), improving concurrency
private final ReentrantLock takeLock = new ReentrantLock();//lock for dequeue
private final ReentrantLock putLock = new ReentrantLock();//lock for enqueue
//two Conditions, used to block and wake dequeuing threads and enqueuing threads respectively
private final Condition notEmpty = takeLock.newCondition();//condition queue for take()
private final Condition notFull = putLock.newCondition();//condition queue for put()
//Creates a LinkedBlockingQueue with a capacity of Integer#MAX_VALUE.
public LinkedBlockingQueue() {
this(Integer.MAX_VALUE);
}
//Creates a LinkedBlockingQueue with the given (fixed) capacity.
public LinkedBlockingQueue(int capacity) {
if (capacity <= 0) throw new IllegalArgumentException();
this.capacity = capacity;
last = head = new Node<E>(null);
}
public E take() throws InterruptedException {
E x;
int c = -1;
final AtomicInteger count = this.count;
final ReentrantLock takeLock = this.takeLock;
//acquire an interruptible lock
takeLock.lockInterruptibly();
try {
//check whether the element count is 0
while (count.get() == 0) {
notEmpty.await();//block the current thread and release the lock
}
//call dequeue() to remove an element from the linked list
x = dequeue();
//decrement the element count via the AtomicInteger
c = count.getAndDecrement();
//if the queue held more than one element before this removal,
//call notEmpty.signal() to wake another blocked consumer thread
if (c > 1) {
notEmpty.signal();
}
} finally {
takeLock.unlock();
}
if (c == capacity) {
signalNotFull();
}
return x;
}
//first take the head node of the list,
//then take the head's next node, first,
//then unlink the old head, read first's data and clear it, and make first the new head,
//finally return the data that was stored in first
private E dequeue() {
Node<E> h = head;//h points to head
Node<E> first = h.next;//first points to h's next
h.next = h;// help GC
head = first;
E x = first.item;
first.item = null;
return x;
}
...
}
(5)LinkedBlockingQueue splits lock responsibilities across two locks
Two exclusive locks improve concurrency because dequeue and enqueue use different locks. Under concurrent access, a dequeue and an enqueue can then execute at the same time without lock contention.
This is one idea of lock optimization: split a single lock by function, and use different locks to control contention for the different functions, thereby improving performance.
(6)LinkedBlockingQueue's size() method and iteration
一.size() does not return a 100% accurate result either
LinkedBlockingQueue's size() obtains the element count from an AtomicInteger.
Compared with ConcurrentLinkedQueue, which counts by traversing the queue, this is far more accurate.
Compared with CopyOnWriteArrayList, which counts by traversing an old snapshot array, it is also far more accurate.
Compared with ConcurrentHashMap, which counts via segmented CAS counters, the accuracy is about the same.
Note, however, that LinkedBlockingQueue still cannot report a 100% accurate element count. It would only be exact if the entire queue were locked, with no enqueue or dequeue allowed while size() runs, because the AtomicInteger is only incremented or decremented after an enqueue or dequeue completes.
二.Iteration locks the whole queue by acquiring both locks
Traversing a LinkedBlockingQueue locks the entire queue directly, i.e. both locks are acquired first.
public class LinkedBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
...
private final AtomicInteger count = new AtomicInteger();//number of elements in the queue
transient Node<E> head;//head node of the linked list
private transient Node<E> last;//tail node of the linked list
//two locks separate put() from take(), improving concurrency
private final ReentrantLock takeLock = new ReentrantLock();//lock for dequeue
private final ReentrantLock putLock = new ReentrantLock();//lock for enqueue
public int size() {
return count.get();
}
public Iterator<E> iterator() {
return new Itr();
}
private class Itr implements Iterator<E> {
private Node<E> current;
private Node<E> lastRet;
private E currentElement;
Itr() {
fullyLock();
try {
current = head.next;
if (current != null) {
currentElement = current.item;
}
} finally {
fullyUnlock();
}
}
...
}
void fullyLock() {
putLock.lock();
takeLock.lock();
}
void fullyUnlock() {
takeLock.unlock();
putLock.unlock();
}
...
}
(7)Comparing the linked-list-based LinkedBlockingQueue with the array-based ArrayBlockingQueue
一.LinkedBlockingQueue is a bounded blocking queue based on a linked list; ArrayBlockingQueue is a bounded blocking queue based on an array
二.The overall implementation principle of ArrayBlockingQueue is the same as that of LinkedBlockingQueue
三.LinkedBlockingQueue uses two exclusive locks, one for the dequeue path and one for the enqueue path
四.ArrayBlockingQueue uses a single lock over the whole array, so its enqueues and dequeues cannot proceed at the same time
五.ArrayBlockingQueue's size() acquires the exclusive lock directly
六.Both ArrayBlockingQueue and LinkedBlockingQueue lock the data when iterating
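Since both classes implement the BlockingQueue contract, their bounded behavior can be exercised the same way (the helper method below is illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    //fills a capacity-2 queue, then tries to add a third element;
    //offer() with a timeout fails instead of blocking forever when the queue is full
    public static boolean offerThirdElement(BlockingQueue<String> queue) throws InterruptedException {
        queue.put("a");
        queue.put("b");
        return queue.offer("c", 10, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(offerThirdElement(new ArrayBlockingQueue<>(2)));  // false
        System.out.println(offerThirdElement(new LinkedBlockingQueue<>(2))); // false
    }
}
```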
6.A Cluster Synchronization Mechanism Built on Two Queues
(1)What a service registry cluster needs to provide
(2)The cluster synchronization mechanism based on two queues
(3)Implementing the first queue with ConcurrentLinkedQueue
(4)Implementing the second queue with LinkedBlockingQueue
(5)The concrete implementation of the cluster synchronization mechanism
(1)What a service registry cluster needs to provide
When a service instance sends a register, deregister, or heartbeat request to any one service registry instance, that registry instance must synchronize the information to the other registry instances, so that the in-memory registries of all registry instances hold consistent data.
(2)The cluster synchronization mechanism based on two queues
When a registry instance receives a request from service instance A, it first stores A's request information in its local in-memory registry and writes it into the first in-memory queue; at that point the handling of A's request can complete and return.
Next, a background thread on that registry instance consumes the first in-memory queue, packs the consumed data into a batch, and writes the batch into the second in-memory queue.
Finally, another background thread on that registry instance consumes the second in-memory queue and synchronizes each batch it consumes to the other registry instances.
(3)Implementing the first queue with ConcurrentLinkedQueue
There are two kinds of queues to choose from:
One is the unbounded ConcurrentLinkedQueue, implemented with CAS, whose concurrency is very high.
The other is the bounded LinkedBlockingQueue, implemented with two locks, whose concurrency is moderate.
LinkedBlockingQueue's default capacity is Integer.MAX_VALUE, so by default it can be treated as unbounded; it can also be given a normal capacity so that enqueues block when it fills, preventing memory exhaustion.
When a registry instance receives its various requests, it first puts the request information into the first queue, so the first queue sees highly concurrent writes, which makes LinkedBlockingQueue a poor fit.
Because LinkedBlockingQueue is a blocking queue, if it filled up, the registry instance's request-handling threads would block. Its write concurrency is also not especially high, since every write must acquire an exclusive lock. So the unbounded ConcurrentLinkedQueue is the better choice for the first queue.
(4)Implementing the second queue with LinkedBlockingQueue
When consuming the first in-memory queue, the data can be batched by time: for example, every 500ms all data consumed so far is packed into one batch message, which is then put into the second in-memory queue. The consumer of the second queue then only needs to synchronize batch messages to the other cluster instances.
Since only a few background threads enqueue into and dequeue from the second queue, the bounded LinkedBlockingQueue can be used to implement it.
We also need to estimate a suitable capacity for this bounded LinkedBlockingQueue. Suppose each request to be synchronized to the other cluster instances has 6 fields and occupies about 30 bytes, and each batch message contains 100 requests on average, i.e. about 3000 bytes = 3KB. Then 1000 batch messages occupy only 3000KB = 3MB, so a capacity of 1000 is reasonable for the second in-memory queue.
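The capacity estimate above can be written out as a small calculation (the figures are the assumptions stated in the text):

```java
public class QueueSizing {
    //back-of-the-envelope estimate: total bytes occupied by a full second queue
    public static long totalBytes(int bytesPerRequest, int requestsPerBatch, int queueCapacity) {
        long bytesPerBatch = (long) bytesPerRequest * requestsPerBatch;
        return bytesPerBatch * queueCapacity;
    }

    public static void main(String[] args) {
        //6 fields ≈ 30 bytes per request, 100 requests per batch, capacity 1000
        long total = totalBytes(30, 100, 1000);
        System.out.println(total / 1000 + "KB"); // 3000KB ≈ 3MB
    }
}
```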
(5)The concrete implementation of the cluster synchronization mechanism
//cluster replication component
public class PeersReplicator {
//interval at which replication batches are generated: 500ms
private static final long PEERS_REPLICATE_BATCH_INTERVAL = 500;
private static final PeersReplicator instance = new PeersReplicator();
private PeersReplicator() {
//start the thread that receives requests and packs them into batches
AcceptorBatchThread acceptorBatchThread = new AcceptorBatchThread();
acceptorBatchThread.setDaemon(true);
acceptorBatchThread.start();
//start the thread that sends batches to the rest of the cluster
PeersReplicateThread peersReplicateThread = new PeersReplicateThread();
peersReplicateThread.setDaemon(true);
peersReplicateThread.start();
}
public static PeersReplicator getInstance() {
return instance;
}
//first in-memory queue: absorbs highly concurrent service requests, so it sees highly concurrent writes; unbounded
private ConcurrentLinkedQueue<AbstractRequest> acceptorQueue = new ConcurrentLinkedQueue<AbstractRequest>();
//second in-memory queue: bounded, used to replicate batch messages to the other cluster instances
private LinkedBlockingQueue<PeersReplicateBatch> replicateQueue = new LinkedBlockingQueue<PeersReplicateBatch>(10000);
//replicate a service registration request
public void replicateRegister(RegisterRequest request) {
request.setType(AbstractRequest.REGISTER_REQUEST);
//put the request message into the first in-memory queue
acceptorQueue.offer(request);
}
//replicate a service deregistration request
public void replicateCancel(CancelRequest request) {
request.setType(AbstractRequest.CANCEL_REQUEST);
//put the request message into the first in-memory queue
acceptorQueue.offer(request);
}
//replicate a heartbeat request
public void replicateHeartbeat(HeartbeatRequest request) {
request.setType(AbstractRequest.HEARTBEAT_REQUEST);
//put the request message into the first in-memory queue
acceptorQueue.offer(request);
}
//background thread that receives data and packs it into batches
class AcceptorBatchThread extends Thread {
long latestBatchGeneration = System.currentTimeMillis();
//the batch being accumulated must live across loop iterations,
//otherwise the requests collected so far would be lost on every pass through the loop
PeersReplicateBatch batch = new PeersReplicateBatch();
@Override
public void run() {
while(true) {
try {
//generate a batch every 500ms
long now = System.currentTimeMillis();
if (now - latestBatchGeneration >= PEERS_REPLICATE_BATCH_INTERVAL) {//the 500ms interval has elapsed
//put the batch message into the second in-memory queue
replicateQueue.offer(batch);
//update latestBatchGeneration
latestBatchGeneration = System.currentTimeMillis();
//reset the batch
batch = new PeersReplicateBatch();
} else {//the 500ms interval has not elapsed yet
//fetch data from the first queue and accumulate it into the batch destined for the second queue
AbstractRequest request = acceptorQueue.poll();
if (request != null) {
batch.add(request);
} else {
Thread.sleep(100);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
//cluster replication thread
class PeersReplicateThread extends Thread {
@Override
public void run() {
while(true) {
try {
PeersReplicateBatch batch = replicateQueue.take();
if (batch != null) {
//iterate over the addresses of the other register-servers
//and send each register-server an http request to replicate the batch
System.out.println("sending requests to the other register-servers to replicate the batch......");
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
}
//batch message used for batched cluster replication
public class PeersReplicateBatch {
private List<AbstractRequest> requests = new ArrayList<AbstractRequest>();
public void add(AbstractRequest request) {
this.requests.add(request);
}
public List<AbstractRequest> getRequests() {
return requests;
}
public void setRequests(List<AbstractRequest> requests) {
this.requests = requests;
}
}
//receives and handles the requests sent by register-clients
public class RegisterServerController {
//service registry
private ServiceRegistry registry = ServiceRegistry.getInstance();
//cache of the service registry
private ServiceRegistryCache registryCache = ServiceRegistryCache.getInstance();
//cluster replication component
private PeersReplicator peersReplicator = PeersReplicator.getInstance();
//service registration
public RegisterResponse register(RegisterRequest registerRequest) {
RegisterResponse registerResponse = new RegisterResponse();
try {
//add this service instance to the registry
ServiceInstance serviceInstance = new ServiceInstance();
serviceInstance.setHostname(registerRequest.getHostname());
serviceInstance.setIp(registerRequest.getIp());
serviceInstance.setPort(registerRequest.getPort());
serviceInstance.setServiceInstanceId(registerRequest.getServiceInstanceId());
serviceInstance.setServiceName(registerRequest.getServiceName());
registry.register(serviceInstance);
//update the self-protection thresholds
synchronized (SelfProtectionPolicy.class) {
SelfProtectionPolicy selfProtectionPolicy = SelfProtectionPolicy.getInstance();
selfProtectionPolicy.setExpectedHeartbeatRate(selfProtectionPolicy.getExpectedHeartbeatRate() + 2);
selfProtectionPolicy.setExpectedHeartbeatThreshold((long)(selfProtectionPolicy.getExpectedHeartbeatRate() * 0.85));
}
//invalidate the registry cache
registryCache.invalidate();
//replicate to the cluster
peersReplicator.replicateRegister(registerRequest);
registerResponse.setStatus(RegisterResponse.SUCCESS);
} catch (Exception e) {
e.printStackTrace();
registerResponse.setStatus(RegisterResponse.FAILURE);
}
return registerResponse;
}
//service deregistration
public void cancel(CancelRequest cancelRequest) {
//remove the instance from the registry
registry.remove(cancelRequest.getServiceName(), cancelRequest.getServiceInstanceId());
//update the self-protection thresholds
synchronized (SelfProtectionPolicy.class) {
SelfProtectionPolicy selfProtectionPolicy = SelfProtectionPolicy.getInstance();
selfProtectionPolicy.setExpectedHeartbeatRate(selfProtectionPolicy.getExpectedHeartbeatRate() - 2);
selfProtectionPolicy.setExpectedHeartbeatThreshold((long)(selfProtectionPolicy.getExpectedHeartbeatRate() * 0.85));
}
//invalidate the registry cache
registryCache.invalidate();
//replicate to the cluster
peersReplicator.replicateCancel(cancelRequest);
}
//heartbeat
public HeartbeatResponse heartbeat(HeartbeatRequest heartbeatRequest) {
HeartbeatResponse heartbeatResponse = new HeartbeatResponse();
try {
//look up the service instance
ServiceInstance serviceInstance = registry.getServiceInstance(heartbeatRequest.getServiceName(), heartbeatRequest.getServiceInstanceId());
if (serviceInstance != null) {
serviceInstance.renew();
}
//record the number of heartbeats per minute
HeartbeatCounter heartbeatMessuredRate = HeartbeatCounter.getInstance();
heartbeatMessuredRate.increment();
//replicate to the cluster
peersReplicator.replicateHeartbeat(heartbeatRequest);
heartbeatResponse.setStatus(HeartbeatResponse.SUCCESS);
} catch (Exception e) {
e.printStackTrace();
heartbeatResponse.setStatus(HeartbeatResponse.FAILURE);
}
return heartbeatResponse;
}
//apply a replicated batch of requests
public void replicateBatch(PeersReplicateBatch batch) {
for (AbstractRequest request : batch.getRequests()) {
if (request.getType().equals(AbstractRequest.REGISTER_REQUEST)) {
register((RegisterRequest) request);
} else if (request.getType().equals(AbstractRequest.CANCEL_REQUEST)) {
cancel((CancelRequest) request);
} else if (request.getType().equals(AbstractRequest.HEARTBEAT_REQUEST)) {
heartbeat((HeartbeatRequest) request);
}
}
}
//fetch the full registry
public Applications fetchFullRegistry() {
return (Applications) registryCache.get(CacheKey.FULL_SERVICE_REGISTRY);
}
//fetch the delta registry
public DeltaRegistry fetchDeltaRegistry() {
return (DeltaRegistry) registryCache.get(CacheKey.DELTA_SERVICE_REGISTRY);
}
}