Java Concurrency: ConcurrentHashMap (JDK 1.7) Explained

2021/4/18 22:25:25


I recently read through some of the ConcurrentHashMap source code and found that the JDK 1.7 and JDK 1.8 implementations differ quite a lot, so let's start with how JDK 1.7 implements it.

Hashing

First, a quick note on what hashing is (there are plenty of introductions online): a hash function is a compressive mapping that turns an input into a fixed-size value. The most common example in Java is Object.hashCode(), a value computed from the object by a fixed algorithm.
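As a tiny illustration (the class name HashDemo is just made up for this post), the same input always hashes to the same fixed-size int, and unrelated inputs can collide, which is why hash tables need bucket chains:

    public class HashDemo {
        public static void main(String[] args) {
            // String overrides hashCode() with a fixed algorithm, so the result is stable
            System.out.println("hello".hashCode()); // 99162322
            System.out.println("hello".hashCode()); // 99162322 again: same input, same hash
            // Arbitrary inputs are compressed into the int range, so collisions are possible
            System.out.println(Integer.valueOf(42).hashCode()); // 42 (an Integer hashes to its value)
        }
    }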

Data Structures

ConcurrentHashMap is mainly built from an array of Segment<K,V> objects, each of which holds linked lists of HashEntry<K,V> nodes.

Let's look at HashEntry<K,V> first; it is a plain singly linked list node:

    static final class HashEntry<K,V> {
        final int hash;               // the key's spread hash
        final K key;                  // the key
        volatile V value;             // the value; volatile so reads need no lock
        volatile HashEntry<K,V> next; // next node in the singly linked bucket chain
        HashEntry(int hash, K key, V value, HashEntry<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }
        //......
    }

 Next, the Segment<K,V> structure; its core is a HashEntry<K,V>[] array:

static final class Segment<K,V> extends ReentrantLock implements Serializable {
    // The per-segment hash table of HashEntry buckets
    transient volatile HashEntry<K,V>[] table;
     /**
      * The load factor for the hash table.  Even though this value
      * is same for all segments, it is replicated to avoid needing
      * links to outer object.
      * @serial
     */
    // Load factor: the segment's table is rehashed once its entry count exceeds capacity * loadFactor
    // (the Segment array itself is never resized)
    final float loadFactor;
    /**
     * The table is rehashed when its size exceeds this threshold.
     * (The value of this field is always <tt>(int)(capacity *
     * loadFactor)</tt>.)
     */
    // Rehash threshold, always (int)(capacity * loadFactor); exceeding it triggers a resize of this segment's table
    transient int threshold;

    Segment(float lf, int threshold, HashEntry<K,V>[] tab) {
        this.loadFactor = lf;
        this.threshold = threshold;
        this.table = tab;
    }

    //.....
}
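The reason Segment extends ReentrantLock is lock striping: each segment guards only its own table, so updates that land in different segments never block each other. The following is only a conceptual sketch of that idea, not the real ConcurrentHashMap code (all names here are made up for illustration):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantLock;

    // Conceptual sketch of lock striping: one lock plus one small map per stripe.
    public class StripedMapSketch<K, V> {
        private static final int STRIPES = 16; // must be a power of two
        private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
        @SuppressWarnings("unchecked")
        private final Map<K, V>[] tables = new Map[STRIPES];

        public StripedMapSketch() {
            for (int i = 0; i < STRIPES; i++) {
                locks[i] = new ReentrantLock();
                tables[i] = new HashMap<>();
            }
        }

        private int stripeFor(Object key) {
            int h = key.hashCode();
            return (h ^ (h >>> 16)) & (STRIPES - 1); // spread, then mask to a stripe index
        }

        public V put(K key, V value) {
            int i = stripeFor(key);
            locks[i].lock(); // only this stripe is locked; the other 15 stay writable
            try {
                return tables[i].put(key, value);
            } finally {
                locks[i].unlock();
            }
        }

        public V get(Object key) {
            int i = stripeFor(key);
            locks[i].lock(); // simplified: the real JDK 1.7 get() avoids locking via volatile reads
            try {
                return tables[i].get(key);
            } finally {
                locks[i].unlock();
            }
        }
    }

In the real class the stripes are the Segment objects themselves, and the read path avoids locking by relying on the volatile fields shown above.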

Now let's look at the ConcurrentHashMap constructor and the related fields:

    /**
     * The default initial capacity for this table,
     * used when not otherwise specified in a constructor.
     */
    // Default initial capacity of the whole map
    static final int DEFAULT_INITIAL_CAPACITY = 16;

    /**
     * The default load factor for this table, used when not
     * otherwise specified in a constructor.
     */
    // Default load factor, used to derive each segment's rehash threshold
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The default concurrency level for this table, used when not
     * otherwise specified in a constructor.
     */
    // Default concurrency level: the expected number of concurrently updating threads; it determines the number of segments
    static final int DEFAULT_CONCURRENCY_LEVEL = 16;    

    final Segment<K,V>[] segments; // the segment array; its length is fixed after construction

    // Maximum number of segments, capped at 1 << 16 (65536)
    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
    // Maximum capacity, capped at 1 << 30
    static final int MAXIMUM_CAPACITY = 1 << 30;

    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (concurrencyLevel > MAX_SEGMENTS)
            concurrencyLevel = MAX_SEGMENTS;
        // Find power-of-two sizes best matching arguments
        int sshift = 0;
        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        this.segmentShift = 32 - sshift;
        this.segmentMask = ssize - 1;
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        while (cap < c)
            cap <<= 1;
        // create segments and segments[0]
        Segment<K,V> s0 =
            new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                             (HashEntry<K,V>[])new HashEntry[cap]);
        Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
        UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
        this.segments = ss;
    }

As the constructor shows, it allocates a Segment array (16 slots with the default concurrencyLevel), builds s0 and publishes it into slot 0 with an ordered write; the HashEntry array inside s0 has a default length of 2 (MIN_SEGMENT_TABLE_CAPACITY). The remaining segments are created lazily on first use.
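To make the sizing math concrete, here is a standalone sketch (SizingDemo is a made-up name, not JDK code) that reproduces the constructor's power-of-two computations for the default arguments (16, 0.75f, 16); in JDK 1.7, MIN_SEGMENT_TABLE_CAPACITY is 2:

    public class SizingDemo {
        public static void main(String[] args) {
            int initialCapacity = 16;
            float loadFactor = 0.75f;
            int concurrencyLevel = 16;
            int minSegmentTableCapacity = 2; // MIN_SEGMENT_TABLE_CAPACITY in JDK 1.7

            // Round concurrencyLevel up to a power of two: that is the number of segments
            int sshift = 0, ssize = 1;
            while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; }
            int segmentShift = 32 - sshift;
            int segmentMask = ssize - 1;

            // Spread initialCapacity across the segments, rounding each table up to a power of two
            int c = initialCapacity / ssize;
            if (c * ssize < initialCapacity) ++c;
            int cap = minSegmentTableCapacity;
            while (cap < c) cap <<= 1;

            System.out.println("segments      = " + ssize);                    // 16
            System.out.println("segmentShift  = " + segmentShift);             // 28
            System.out.println("segmentMask   = " + segmentMask);              // 15
            System.out.println("table per seg = " + cap);                      // 2
            System.out.println("seg threshold = " + (int) (cap * loadFactor)); // 1
        }
    }

So with the defaults, each of the 16 segments starts with a 2-slot HashEntry array and a rehash threshold of 1.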

Now let's look at the put() method we use all the time. Its source code is shown below.

First, the key's hash is computed with a fixed spreading algorithm (hash()), and the segment index j is taken from the high bits of that hash. put() then reads segments[j]; if that segment has not been created yet, ensureSegment(j) lazily creates it, using segments[0] as a prototype, so the new segment's HashEntry array has the same length as segment 0's table and its threshold is table.length * loadFactor. Note that this does not grow the Segment array: the Segment array is fixed at construction time, and only the HashEntry array inside a segment is ever resized (by rehash(), seen in Segment.put()). Once the segment exists, its put() method is called with four parameters; the last one distinguishes put() from putIfAbsent(). If the key already exists, put() overwrites the old value while putIfAbsent() leaves it alone, and both return the previous value. Segment.put() is walked through further down.

    @SuppressWarnings("unchecked")
    public V put(K key, V value) {
        Segment<K,V> s;
        if (value == null)
            throw new NullPointerException();
        int hash = hash(key);
        int j = (hash >>> segmentShift) & segmentMask;
        if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
             (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
            s = ensureSegment(j);
        return s.put(key, hash, value, false);
    }
    private int hash(Object k) {
        int h = hashSeed;

        if ((0 != h) && (k instanceof String)) {
            return sun.misc.Hashing.stringHash32((String) k);
        }

        h ^= k.hashCode();

        // Spread bits to regularize both segment and index locations,
        // using variant of single-word Wang/Jenkins hash.
        h += (h <<  15) ^ 0xffffcd7d;
        h ^= (h >>> 10);
        h += (h <<   3);
        h ^= (h >>>  6);
        h += (h <<   2) + (h << 14);
        return h ^ (h >>> 16);
    }
    // Lazily create (not resize) the Segment at index k, using segment 0 as the prototype for table size and load factor
    private Segment<K,V> ensureSegment(int k) {
        final Segment<K,V>[] ss = this.segments;
        long u = (k << SSHIFT) + SBASE; // raw offset
        Segment<K,V> seg;
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
            Segment<K,V> proto = ss[0]; // use segment 0 as prototype
            int cap = proto.table.length;
            float lf = proto.loadFactor;
            int threshold = (int)(cap * lf);
            HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
            if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                == null) { // recheck
                Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
                while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                       == null) {
                    if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                        break;
                }
            }
        }
        return seg;
    }
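To see how put() routes a key, the sketch below (IndexDemo is a made-up name and the hash value is arbitrary) derives both indexes the way put() and Segment.put() do, using the default segmentShift = 28, segmentMask = 15, and per-segment table length 2 from the constructor demo above:

    public class IndexDemo {
        public static void main(String[] args) {
            int segmentShift = 28;  // 32 - sshift for the default concurrencyLevel of 16
            int segmentMask = 15;   // ssize - 1
            int tableLength = 2;    // default per-segment HashEntry[] length

            int hash = 0x6B8B4567;  // an arbitrary, already-spread hash value

            // High bits of the hash select the segment, low bits select the bucket inside it
            int segmentIndex = (hash >>> segmentShift) & segmentMask;
            int bucketIndex = (tableLength - 1) & hash;

            System.out.println("segment index = " + segmentIndex); // 6 for this hash
            System.out.println("bucket index  = " + bucketIndex);  // 1 for this hash
        }
    }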

    // Segment.put(): the first step is to acquire this segment's lock (Segment extends ReentrantLock)
    final V put(K key, int hash, V value, boolean onlyIfAbsent) {
            HashEntry<K,V> node = tryLock() ? null :
                scanAndLockForPut(key, hash, value);
            V oldValue;
            try {
                HashEntry<K,V>[] tab = table;
                int index = (tab.length - 1) & hash;
                HashEntry<K,V> first = entryAt(tab, index);
                for (HashEntry<K,V> e = first;;) { // walk the bucket's chain
                    if (e != null) {
                        K k;
                        if ((k = e.key) == key ||
                            (e.hash == hash && key.equals(k))) {
                            oldValue = e.value; // key found: remember the old value to return
                            if (!onlyIfAbsent) {
                                e.value = value; // putIfAbsent() (onlyIfAbsent == true) skips this overwrite
                                ++modCount;
                            }
                            break;
                        }
                        e = e.next;
                    }
                    else {
                        // key not found in this bucket: reuse the node pre-built by scanAndLockForPut(),
                        // or create a new HashEntry linked in front of the current head
                        if (node != null)
                            node.setNext(first);
                        else
                            node = new HashEntry<K,V>(hash, key, value, first);
                        int c = count + 1;
                        if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                            rehash(node);
                        else
                            setEntryAt(tab, index, node);
                        ++modCount;
                        count = c;
                        oldValue = null;
                        break;
                    }
                }
            } finally {
                unlock(); // always release the segment's lock
            }
            return oldValue;
    }
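The put()/putIfAbsent() difference described above is easy to check with a short usage example:

    import java.util.concurrent.ConcurrentHashMap;

    public class PutDemo {
        public static void main(String[] args) {
            ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

            System.out.println(map.put("k", "v1"));         // null: there was no previous value
            System.out.println(map.put("k", "v2"));         // "v1": existing value overwritten, old one returned
            System.out.println(map.putIfAbsent("k", "v3")); // "v2": key already present, not overwritten
            System.out.println(map.get("k"));               // "v2"
        }
    }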

Next, get(). It is relatively simple: after locating the segment and then the table slot, it scans the linked list in that slot and either finds the entry or returns null. How does it see reasonably fresh data without locking? HashEntry declares value and next as volatile, and get() reads the segment and the bucket head with volatile reads (getObjectVolatile), which gives visibility of completed writes. However, get() and containsKey() take no lock, so they are only weakly consistent: if another thread is modifying the chain at the same moment, the result reflects some recent state of the map rather than necessarily the very latest one.

    public V get(Object key) {
        Segment<K,V> s; // manually integrate access methods to reduce overhead
        HashEntry<K,V>[] tab;
        int h = hash(key);
        long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
        if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
            (tab = s.table) != null) {
            for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                     (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
                 e != null; e = e.next) {
                K k;
                if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                    return e.value;
            }
        }
        return null;
    }
    // containsKey() mirrors get(), but only reports whether the key is present
    public boolean containsKey(Object key) {
        Segment<K,V> s; // same as get() except no need for volatile value read
        HashEntry<K,V>[] tab;
        int h = hash(key);
        long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
        if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
            (tab = s.table) != null) {
            for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                     (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
                 e != null; e = e.next) {
                K k;
                if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                    return true;
            }
        }
        return false;
    }
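A minimal sketch of the weak consistency described above (the class name is made up and the exact counts vary from run to run): the reader never blocks and never throws, but it may simply not see entries the writer has not finished publishing yet.

    import java.util.concurrent.ConcurrentHashMap;

    public class WeaklyConsistentReadDemo {
        public static void main(String[] args) throws InterruptedException {
            ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();

            Thread writer = new Thread(() -> {
                for (int i = 0; i < 1_000_000; i++) {
                    map.put(i, i); // keeps mutating the map while the reader runs
                }
            });

            Thread reader = new Thread(() -> {
                int hits = 0;
                for (int i = 0; i < 1_000_000; i++) {
                    // No lock, no ConcurrentModificationException; may miss very recent writes
                    if (map.get(i) != null) {
                        hits++;
                    }
                }
                System.out.println("reader observed " + hits + " of 1000000 keys");
            });

            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }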

size() first tries to count without locking: it sums every segment's count and modCount, and if two consecutive unlocked passes produce the same modCount sum, that size is returned. If the sums keep changing after RETRIES_BEFORE_LOCK extra attempts, it locks every segment and counts once more under the locks.

public int size() {
        // Try a few times to get accurate count. On failure due to
        // continuous async changes in table, resort to locking.
        final Segment<K,V>[] segments = this.segments;
        int size;
        boolean overflow; // true if size overflows 32 bits
        long sum;         // sum of modCounts
        long last = 0L;   // previous sum
        int retries = -1; // first iteration isn't retry
        try {
            for (;;) {
                // after RETRIES_BEFORE_LOCK failed unlocked passes, lock every segment
                if (retries++ == RETRIES_BEFORE_LOCK) {
                    for (int j = 0; j < segments.length; ++j)
                        ensureSegment(j).lock(); // force creation
                }
                sum = 0L;
                size = 0;
                overflow = false;
                // sum each segment's count and modCount
                for (int j = 0; j < segments.length; ++j) {
                    Segment<K,V> seg = segmentAt(segments, j);
                    if (seg != null) {
                        sum += seg.modCount;
                        int c = seg.count;
                        if (c < 0 || (size += c) < 0)
                            overflow = true;
                    }
                }
                if (sum == last)
                    break;
                last = sum;
            }
        } finally {
            if (retries > RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    segmentAt(segments, j).unlock();
            }
        }
        return overflow ? Integer.MAX_VALUE : size;
    }
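The strategy in size() (optimistic unlocked passes, then fall back to locking everything) is worth seeing in isolation. Below is a simplified standalone sketch of the same control flow over hypothetical per-stripe counters; it is not the JDK code, and it omits the overflow handling:

    import java.util.concurrent.locks.ReentrantLock;

    public class OptimisticSizeSketch {
        static final int RETRIES_BEFORE_LOCK = 2; // same value the JDK 1.7 class uses

        // Hypothetical stand-in for a Segment: a count, a modCount, and a lock.
        static class Stripe {
            final ReentrantLock lock = new ReentrantLock();
            volatile int count;    // number of elements in this stripe
            volatile int modCount; // bumped on every structural change
        }

        static int size(Stripe[] stripes) {
            int size;
            long sum, last = 0L;
            int retries = -1; // the first pass is not counted as a retry
            try {
                for (;;) {
                    if (retries++ == RETRIES_BEFORE_LOCK) {
                        for (Stripe s : stripes) s.lock.lock(); // give up and lock everything
                    }
                    sum = 0L;
                    size = 0;
                    for (Stripe s : stripes) {
                        sum += s.modCount;
                        size += s.count;
                    }
                    if (sum == last) // two consecutive passes saw the same modCount sum
                        break;
                    last = sum;
                }
            } finally {
                if (retries > RETRIES_BEFORE_LOCK) {
                    for (Stripe s : stripes) s.lock.unlock();
                }
            }
            return size;
        }
    }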

I'll stop here rather than cover the remaining methods; next time I'll dig into the JDK 1.8 ConcurrentHashMap source. Apologies if this write-up is a bit rough.


