Notes on Reading the HashMap (JDK 1.8) Source Code
Published: 2019-06-20


Copyright notice: This is an original article by the author and may not be reproduced without permission. https://blog.csdn.net/weixin_40254498/article/details/81780244

HashMap

HashMap is a hash-table based implementation of the Map interface. It provides all of the optional map operations and permits null values and the null key. (Apart from being unsynchronized and permitting nulls, HashMap is roughly equivalent to Hashtable.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time. Assuming the hash function disperses the elements properly among the buckets, this implementation provides constant-time performance for the basic operations (get and put). Iteration over the collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). So if iteration performance matters, don't set the initial capacity too high (or the load factor too low).
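A small usage sketch of the points above: one null key and any number of null values are accepted, and iteration order is not guaranteed. The class name and sample entries below are made up for illustration.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch: HashMap accepts one null key and null values,
// and makes no guarantee about iteration order.
public class HashMapBasicsDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "value-for-null-key"); // exactly one null key is allowed
        map.put("k1", null);                 // null values are allowed too
        map.put("k2", "v2");
        // Iteration order follows the hash-based bucket layout, not insertion order
        map.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}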


Data structure

Let's first take a look at HashMap's data structure.

(Figure: HashMap's data structure)
Roughly as the figure shows: the table field is the array, and each array slot is called a bucket. Entries that hash to the same slot are chained together in a linked list, and once a bucket grows past the threshold the list is converted to a red-black tree, mainly to keep lookups efficient.
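For reference, each bucket stores entries of the nested Node class; this is an abridged view of what it looks like in the JDK 1.8 source (accessor methods omitted):

// Abridged from java.util.HashMap (JDK 1.8): the singly-linked entry stored in each bucket.
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;   // cached hash of the key, reused when resizing and comparing
    final K key;
    V value;
    Node<K,V> next;   // next entry chained in the same bucket, or null at the tail

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
    // getKey/getValue/setValue/hashCode/equals omitted
}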

Constructors

/**
 * Constructs an empty HashMap with the specified initial
 * capacity and load factor.
 *
 * @param  initialCapacity the initial capacity
 * @param  loadFactor      the load factor
 * @throws IllegalArgumentException if the initial capacity is negative
 *         or the load factor is nonpositive
 */
public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);
    this.loadFactor = loadFactor;
    this.threshold = tableSizeFor(initialCapacity);
}

/**
 * Constructs an empty HashMap with the specified initial
 * capacity and the default load factor (0.75).
 *
 * @param  initialCapacity the initial capacity.
 * @throws IllegalArgumentException if the initial capacity is negative.
 */
public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}

/**
 * Constructs an empty HashMap with the default initial capacity
 * (16) and the default load factor (0.75).
 */
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}

/**
 * Constructs a new HashMap with the same mappings as the
 * specified Map.  The HashMap is created with
 * default load factor (0.75) and an initial capacity sufficient to
 * hold the mappings in the specified Map.
 *
 * @param   m the map whose mappings are to be placed in this map
 * @throws  NullPointerException if the specified map is null
 */
public HashMap(Map<? extends K, ? extends V> m) {
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    putMapEntries(m, false);
}
The most important pieces here are the initial capacity and the default load factor, static final float DEFAULT_LOAD_FACTOR = 0.75f;. Once the number of entries exceeds current table length * load factor, resize() runs. Think of the HashMap as a bucket being filled with water: the capacity is the size of the bucket, and the load factor controls how full the bucket may get before it expands. With a load factor of 0.75 the bucket may only be filled to 3/4; past that point it automatically grows. So the number of entries the table can hold before resizing = capacity * load factor.
/**
 * Computes the table size from the capacity you pass in; the capacity you
 * request is not necessarily the one that is actually used.
 * Why must the capacity be a power of two? The code below explains it.
 * Returns a power of two size for the given target capacity.
 */
static final int tableSizeFor(int cap) {
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}

// MAXIMUM_CAPACITY = 1 << 30;
The result is always the smallest power of two that is not less than the value you pass in; for example 15 -> 16, 29 -> 32, 44 -> 64.
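Those examples can be verified with a minimal standalone sketch that copies the same rounding logic; the class name TableSizeForDemo is made up for illustration.

// Minimal sketch: a standalone copy of HashMap's power-of-two rounding.
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same bit-smearing trick as tableSizeFor: copy the highest set bit into every
    // lower position, then add 1 to reach the next power of two.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        for (int cap : new int[] {1, 15, 16, 29, 44}) {
            System.out.println(cap + " -> " + tableSizeFor(cap)); // 1, 16, 16, 32, 64
        }
    }
}

Subtracting 1 first is what makes an input that is already a power of two (such as 16) map to itself instead of to the next power.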

Key methods

hash()

/**
 * Computes key.hashCode() and spreads (XORs) higher bits of hash
 * to lower.  Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.)  So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading. Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int hash(Object key) {
    int h;
    // Roughly speaking, this XOR folds the high 16 bits into the low 16 bits
    // so they still influence the bucket index; the low 16 bits are preserved
    // as well (XOR-ing with the same value again would restore them).
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
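A quick sketch of why this spreading matters when the index is taken as hash & (n - 1); the two hash codes and the table size below are made-up values for illustration.

// Minimal sketch: hash codes that differ only above the mask collide without spreading.
public class HashSpreadDemo {
    static int spread(int h) {
        return h ^ (h >>> 16); // same transform as HashMap.hash applies to non-null keys
    }

    public static void main(String[] args) {
        int n = 16;            // table size (a power of two), so the mask is n - 1 = 0b1111
        int h1 = 0x10000;      // the two hashes differ only in bits above the low 4 bits
        int h2 = 0x20000;
        System.out.println((h1 & (n - 1)) + " vs " + (h2 & (n - 1)));                 // 0 vs 0: collide
        System.out.println((spread(h1) & (n - 1)) + " vs " + (spread(h2) & (n - 1))); // 1 vs 2: spread apart
    }
}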

public V put(K key, V value) {}

/**
 * Associates the specified value with the specified key in this map.
 * If the map previously contained a mapping for the key, the old
 * value is replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with key, or
 *         null if there was no mapping for key.
 *         (A null return can also indicate that the map
 *         previously associated null with key.)
 */
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

/**
 * Implements Map.put and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @param value the value to put
 * @param onlyIfAbsent if true, don't change existing value
 * @param evict if false, the table is in creation mode.
 * @return previous value, or null if none
 */
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // If the table has not been initialized or has length 0, resize (allocate) it first
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    // Work out which table slot the entry goes into.
    // This is another reason the table size must be a power of two:
    // with n the table length, (n - 1) & hash acts as a mask.
    // For example n = 16 (10000), n - 1 = 15 (01111), so
    // h & (n - 1) == h % n, the index can never go out of bounds,
    // and the bitwise AND is faster than %.
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    // The slot already has an entry: decide between linked list and red-black tree
    else {
        Node<K,V> e; K k;
        // First check whether the key matches the head node
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // Or the bin is already a red-black tree
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // Otherwise it has to be a linked list
        else {
            for (int binCount = 0; ; ++binCount) {
                // Append the new node at the tail
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    // If the list reaches the treeify threshold (8), convert it to a red-black tree
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                // An identical key already exists: break out
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                // Keep traversing
                p = e;
            }
        }
        // An existing node for this key was found
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            // onlyIfAbsent is a parameter; put passes false, so the value is replaced
            if (!onlyIfAbsent || oldValue == null)
                // Replace the old value with the new one
                e.value = value;
            afterNodeAccess(e);
            // Return the old value
            return oldValue;
        }
    }
    ++modCount;
    // If the size exceeds the threshold, resize
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
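A short usage note that follows from putVal's onlyIfAbsent parameter: put passes false, while putIfAbsent (also routed through putVal in JDK 1.8) passes true. The class name below is made up for illustration.

import java.util.HashMap;

// Minimal sketch: put replaces and returns the old value; putIfAbsent keeps the existing one.
public class PutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));          // null: no previous mapping
        System.out.println(map.put("a", 2));          // 1: old value returned, value replaced
        System.out.println(map.putIfAbsent("a", 3));  // 2: existing value kept (onlyIfAbsent = true)
        System.out.println(map.get("a"));             // 2
    }
}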

public V get(Object key) {}

Compared with put, get is fairly simple. Compared with JDK 1.7: 1.7 used an array of buckets + linked lists, while 1.8 uses an array of buckets + linked lists that switch to a red-black tree once a bucket exceeds the threshold (8), which improves worst-case lookups in a large bucket from O(n) to O(log n).
/**
 * Returns the value to which the specified key is mapped,
 * or {@code null} if this map contains no mapping for the key.
 *
 * <p>More formally, if this map contains a mapping from a key
 * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
 * key.equals(k))}, then this method returns {@code v}; otherwise
 * it returns {@code null}.  (There can be at most one such mapping.)
 *
 * <p>A return value of {@code null} does not necessarily
 * indicate that the map contains no mapping for the key; it's also
 * possible that the map explicitly maps the key to {@code null}.
 * The {@link #containsKey containsKey} operation may be used to
 * distinguish these two cases.
 *
 * @see #put(Object, Object)
 */
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

/**
 * Implements Map.get and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @return the node, or null if none
 */
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    // The table is initialized, its length is > 0, and the slot for this hash is not empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        // If the first node matches, return it
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        // Is there anything after the first node?
        if ((e = first.next) != null) {
            // Is the bin a red-black tree?
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            // Not a tree, so it must be a linked list
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
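As the Javadoc above points out, get alone cannot distinguish a missing key from a key explicitly mapped to null; a short sketch (class name made up for illustration):

import java.util.HashMap;

// Minimal sketch: get returns null both for a missing key and for a key mapped to null;
// containsKey tells the two cases apart.
public class GetDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put("present", null);
        System.out.println(map.get("present"));          // null (key exists, value is null)
        System.out.println(map.get("missing"));          // null (key does not exist)
        System.out.println(map.containsKey("present"));  // true
        System.out.println(map.containsKey("missing"));  // false
    }
}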

resize()

This is HashMap's resize (expansion) method.

/**
 * Initializes or doubles table size.  If null, allocates in
 * accord with initial capacity target held in field threshold.
 * Otherwise, because we are using power-of-two expansion, the
 * elements from each bin must either stay at same index, or move
 * with a power of two offset in the new table.
 *
 * @return the table
 */
final Node<K,V>[] resize() {
    // Keep a reference to the old table
    Node<K,V>[] oldTab = table;
    // Old capacity
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    // Old threshold (the size at which a resize is triggered)
    int oldThr = threshold;
    int newCap, newThr = 0;
    // The old table already has a capacity
    if (oldCap > 0) {
        // The old capacity has already reached the maximum
        if (oldCap >= MAXIMUM_CAPACITY) {
            // Set the threshold to Integer.MAX_VALUE and stop growing
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        // Double the capacity with a left shift, which is more efficient
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    // The old threshold is greater than 0
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    // oldCap == 0 and oldThr == 0: use the defaults (e.g. the map was created with
    // new HashMap() and the first insert calls resize, which lands here)
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    // The new threshold is still 0: compute it from the new capacity and load factor
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    // Allocate the new table
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    // The old table had been initialized: move its entries over
    if (oldTab != null) {
        // Copy the elements, redistributing them by hash
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                // Only one node in this bucket: place it directly at its new index
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                // The bucket is a red-black tree: split it
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                // Otherwise it is a linked list
                else { // preserve order
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        // Bit (hash & oldCap) decides whether the node stays at
                        // index j (the "lo" list) or moves to index j + oldCap (the "hi" list)
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
After all these steps, the process looks roughly like the figure below.

(Figure: illustration of the resize process and element redistribution)
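The split rule (stay at index j when (hash & oldCap) == 0, otherwise move to j + oldCap) can be checked with a small sketch; the hash values below are made up so that they all share bucket 5 in a 16-slot table.

// Minimal sketch: when the table doubles, each node in a bucket either keeps its index j
// or moves to j + oldCap, decided by the single bit (hash & oldCap).
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int[] hashes = {5, 21, 37, 53}; // all land in bucket 5 while the table size is 16
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            int newIndex = h & (newCap - 1);
            String rule = ((h & oldCap) == 0) ? "stays at j" : "moves to j + oldCap";
            System.out.println("hash=" + h + "  old index=" + oldIndex
                    + "  new index=" + newIndex + "  (" + rule + ")");
        }
    }
}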

END

HashMap uses many clever techniques, with heavy use of bit operations that keep the structure efficient and robust. I find something new every time I read the source.
