HashMap in Java Explained, Part 2
Last updated:
"The Implementation, Advantages and Disadvantages of HashMap"
Personal impression: a fairly detailed explanation, with a good comparison of the characteristics.
[http://blog.csdn.net/tlycherry/article/details/8991530]
HashMap is a data structure we use all the time. It comes up constantly at work, and interviewers never tire of it either; while job hunting, have you ever been asked "what is the difference between HashTable and HashMap?"
This article looks at HashMap from several angles: the underlying principle, the JDK source code, and how to use it in real projects.
1. What is a HashMap
The JDK documentation describes it as: "Hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) This class makes no guarantees as to the order of the map."
That tells us roughly the following:
HashMap is an implementation of Map, so its elements are K-V (key, value) pairs.
The elements inside a HashMap are unordered.
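A small sketch of the null-handling difference mentioned in the documentation quote above (the class name NullKeyDemo is made up for illustration):
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<String, String>();
        hashMap.put(null, "value for the null key"); // HashMap allows one null key
        hashMap.put("k", null);                      // and any number of null values
        System.out.println(hashMap.get(null));       // prints: value for the null key

        Map<String, String> hashtable = new Hashtable<String, String>();
        try {
            hashtable.put(null, "x");                // Hashtable rejects null keys (and null values)
        } catch (NullPointerException expected) {
            System.out.println("Hashtable does not accept null");
        }
    }
}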
2. How the JDK implements a HashMap
HashMap lives in the java.util package; most of the classes we use day to day come from this package or one of its subpackages.
The class definition in the JDK:
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable
It implements the Map interface.
We typically use a HashMap like this:
Map<Integer,String> maps=new HashMap<Integer,String>();
maps.put(1, "a");
maps.put(2, "b");
The code above creates a HashMap and puts two entries into it. Note that primitive types cannot be used as the K and V type arguments.
If you write it like this, it will not compile:
Map<int,double> maps=new HashMap<int,double>();
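A sketch of the corrected declaration: use the wrapper classes as type arguments and let autoboxing handle the conversion.
// Generic type arguments must be reference types, so use Integer/Double instead of int/double.
Map<Integer, Double> maps = new HashMap<Integer, Double>();
maps.put(1, 2.5);            // 1 is autoboxed to Integer, 2.5 to Double
double d = maps.get(1);      // unboxed back to a primitive double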
The example above is simple, but do you know how it works internally?
Let's look at HashMap's no-argument constructor:
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    threshold = (int)(DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR);
    table = new Entry[DEFAULT_INITIAL_CAPACITY];
    init();
}
Everyone knows HashMap is a variable-length data structure, and after seeing the constructor above you may find it less magical than expected.
DEFAULT_LOAD_FACTOR      // the default load factor, 0.75 if you do not specify one
DEFAULT_INITIAL_CAPACITY // the default initial capacity, 16
threshold                // the resize threshold, computed from the load factor and the capacity
So we now know that calling the no-argument constructor gives us a backing array with a capacity of 16.
An array has a fixed length, so how do you represent variable-length data with a fixed-length array? The answer: when it fills up, allocate a longer one and move everything over.
Now let's see how the put method is implemented:
public V put(K key, V value) {
    if (key == null)                      // null keys are allowed: one difference from Hashtable
        return putForNullKey(value);
    int hash = hash(key.hashCode());      // compute the hash from the key's hashCode
    int i = indexFor(hash, table.length); // map the hash to an array (bucket) index
    for (Entry<K,V> e = table[i]; e != null; e = e.next) { // if the key already exists, replace its value
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;                           // structural-modification counter (used by fail-fast iterators)
    addEntry(hash, key, value, i);        // no existing entry: add a new one to the bucket
    return null;
}
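Seen from the caller's side, the replace-or-add logic means put returns the previous value for the key, or null if there was none. A quick sketch:
Map<Integer, String> maps = new HashMap<Integer, String>();
System.out.println(maps.put(1, "a")); // null: key 1 had no previous value
System.out.println(maps.put(1, "b")); // a: the old value is replaced and returned
System.out.println(maps.get(1));      // b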
The put method above is barely fifteen lines, and with the comments added it is not hard to follow. A careful reader may notice, though, that nothing in it grows the table, so how does the map keep accepting entries once they outnumber the current capacity? The answer is in addEntry:
void addEntry(int hash, K key, V value, int bucketIndex) {
    Entry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new Entry<K,V>(hash, key, value, e); // the new entry becomes the head of the bucket's chain
    if (size++ >= threshold)
        resize(2 * table.length);
}
This shows that once the current size has reached the threshold, the table is grown to twice its length. How does the growth work?
void resize(int newCapacity) {
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return;
    }
    Entry[] newTable = new Entry[newCapacity];
    transfer(newTable);
    table = newTable;
    threshold = (int)(newCapacity * loadFactor);
}
It allocates a new array, transfers the old entries into it, and recomputes the threshold. How are the entries transferred?
void transfer(Entry[] newTable) {
    Entry[] src = table;
    int newCapacity = newTable.length;
    for (int j = 0; j < src.length; j++) {
        Entry<K,V> e = src[j];
        if (e != null) {
            src[j] = null;
            do {
                Entry<K,V> next = e.next;
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            } while (e != null);
        }
    }
}
Every entry's bucket index is recomputed from its hash and the new capacity. Yes, a resize really is that much work.
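The index computation itself is cheap, though: as the full source at the end of this article shows, indexFor is just a bit mask, which is also why the capacity must be a power of two. A sketch (class and variable names are made up) showing how a bucket index can change when the capacity doubles:
public class IndexForDemo {
    // Same computation as the JDK's indexFor: because the capacity is a power of two,
    // (length - 1) is a bit mask that keeps only the low bits of the hash.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int h1 = 45, h2 = 29;                 // two hash values that collide in a 16-bucket table
        System.out.println(indexFor(h1, 16)); // 13
        System.out.println(indexFor(h2, 16)); // 13 -> same bucket, chained together
        System.out.println(indexFor(h1, 32)); // 13 -> stays put after the table doubles
        System.out.println(indexFor(h2, 32)); // 29 -> moves to a different bucket after the resize
    }
}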
At this point we know what the source code does when a HashMap is created and when entries are added to it.
3. Do you really understand hashCode?
HashMap computes its hash values from the key's hashCode; within a bucket it then relies on equals to find the exact key, which is why hashCode and equals should always be overridden consistently.
So what exactly is hashCode?
If we follow the JDK source all the way down to Object, all the JDK gives us is this native method:
public native int hashCode();
From the declaration we only learn that hashCode is an int.
That raises more questions. First, how could it guarantee that different objects get different hashCode values?
And since hashCode is an int, it can represent at most on the order of Integer.MAX_VALUE distinct values, so what happens once there are more objects than that?
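The short answer is that hashCode is not unique: different objects can and do share a hash code, and HashMap copes with that through the bucket chains and equals checks we saw above. A tiny demonstration using standard String behavior (the class name HashCodeDemo is made up):
public class HashCodeDemo {
    public static void main(String[] args) {
        // Different objects may legally share a hash code: "Aa" and "BB" both hash to 2112.
        System.out.println("Aa".hashCode());   // 2112
        System.out.println("BB".hashCode());   // 2112
        System.out.println("Aa".equals("BB")); // false -> equals() is what tells them apart
    }
}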
A fuller answer would have to start from the operating system and how the JVM lays out objects.
//TODO to be filled in later, time permitting
4. Advantages and disadvantages of HashMap
Advantages: extremely fast lookups. If someone asks which data structure offers O(1) average-time lookup, the answer is a hash map.
It stores a variable-length amount of data (compared with a plain array).
Disadvantages: every operation pays for an extra hash computation.
Used carelessly, it can waste a noticeable amount of memory.
5. How to use HashMap more efficiently
Adding
We saw earlier that when adding an entry, if the current number of entries has reached the HashMap's threshold, the backing array is doubled and the bucket position of every element is recomputed.
So if we want to fill a HashMap with 1000 elements using the defaults, how much extra work does that cause?
When the size passes 16 * 0.75 = 12, about 12 entries have to be rehashed.
When it passes 32 * 0.75 = 24, about 24 more entries have to be rehashed.
……
When it passes 1024 * 0.75 = 768, about 768 more entries have to be rehashed.
So over all the resizes we recompute positions roughly 12 + 24 + 48 + … + 768 extra times.
Therefore, if we know the expected size in advance, it is strongly recommended to specify the initial capacity ourselves, like this:
Map<Integer,String> maps=new HashMap<Integer,String>(1000);
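One subtlety worth noting (my observation, based on the constructor shown earlier): the threshold is capacity × loadFactor, so new HashMap<Integer,String>(1000) rounds the capacity up to 1024 but sets the threshold to 768, which means one resize will typically still happen before the 1000th entry. A sketch that accounts for the load factor:
// Sketch: size the map so that expectedSize stays below capacity * loadFactor.
// 0.75f is the default load factor; the capacity is rounded up to a power of two internally.
int expectedSize = 1000;
int initialCapacity = (int) (expectedSize / 0.75f) + 1; // 1334 -> internal capacity 2048, threshold 1536
Map<Integer, String> maps = new HashMap<Integer, String>(initialCapacity);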
Removing
The JDK removes entries as follows:
public V remove(Object key) {
    Entry<K,V> e = removeEntryForKey(key);
    return (e == null ? null : e.value);
}

final Entry<K,V> removeEntryForKey(Object key) {
    int hash = (key == null) ? 0 : hash(key.hashCode());
    int i = indexFor(hash, table.length);
    Entry<K,V> prev = table[i];
    Entry<K,V> e = prev;

    while (e != null) {
        Entry<K,V> next = e.next;
        Object k;
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k)))) {
            modCount++;
            size--;
            if (prev == e)
                table[i] = next;
            else
                prev.next = next;
            e.recordRemoval(this);
            return e;
        }
        prev = e;
        e = next;
    }

    return e;
}
From the code above we can see that removal never shrinks the backing array. So if your HashMap once held 1000 elements, then even after you have deleted all but one of them, the array in memory still has more than 1000 slots, of which more than 999 are empty. (More than 1000 because, after resizing, the actual capacity is larger than 1000.)
Therefore, if a project has a very large HashMap that has since shrunk to only a few entries, it is worth copying what is left into a new, smaller map.
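A sketch of that "copy into a smaller map" approach (variable names are illustrative); HashMap's copy constructor, shown in the full source at the end, sizes the new table from the current number of entries:
// bigMap once held ~1000 entries but now holds only a handful; its bucket array
// is still far larger than needed. Copy the survivors into a right-sized map and drop the old one.
Map<Integer, String> bigMap = new HashMap<Integer, String>(2048);
// ... entries added and then mostly removed ...
Map<Integer, String> smallMap = new HashMap<Integer, String>(bigMap); // sized from bigMap.size()
bigMap = null; // allow the oversized table to be garbage collected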
6. HashMap and synchronization
What problems show up when a HashMap is used from multiple threads?
We know HashMap is not thread-safe (another difference between HashMap and HashTable), so what do we do if we want to use it in a multithreaded environment?
You might say: isn't that what HashTable is for? Let's try it:
public class MyThread extends Thread { // worker thread
    private Map<Integer, String> maps; // the map shared by all threads

    public MyThread(Map<Integer, String> maps) {
        this.maps = maps;
    }

    @Override
    public void run() {
        int delNumber = (int) (Math.random() * 10000); // a randomly chosen key to delete
        op(delNumber);
    }

    public void op(int delNumber) {
        Iterator<Map.Entry<Integer, String>> t = maps.entrySet().iterator();
        while (t.hasNext()) {
            Map.Entry<Integer, String> entry = t.next();
            int key = entry.getKey();
            if (key == delNumber) { // if this is the key we want to delete, remove it
                maps.remove(key);
                break;
            }
        }
    }
}

public class HashMapTest {
    public static void main(String[] args) {
        testSync();
    }

    public static void testSync() {
        Map<Integer, String> maps = new Hashtable<Integer, String>(10000);
        // Map<Integer, String> maps = new HashMap<Integer, String>(10000);
        // Map<Integer, String> maps = new ConcurrentHashMap<Integer, String>(10000);
        for (int i = 0; i < 10000; i++) {
            maps.put(i, "a");
        }
        for (int i = 0; i < 10; i++) {
            new MyThread(maps).start();
        }
    }
}
Running it with Hashtable, it does not take long before errors like the following appear:
Exception in thread "Thread-6" java.util.ConcurrentModificationException
at java.util.Hashtable$Enumerator.next(Hashtable.java:1031)
at cn.tang.demos.hashmap.MyThread.op(MyThread.java:22)
at cn.tang.demos.hashmap.MyThread.run(MyThread.java:16)
Exception in thread "Thread-4" java.util.ConcurrentModificationException
at java.util.Hashtable$Enumerator.next(Hashtable.java:1031)
at cn.tang.demos.hashmap.MyThread.op(MyThread.java:22)
at cn.tang.demos.hashmap.MyThread.run(MyThread.java:16)
Exception in thread "Thread-2" java.util.ConcurrentModificationException
at java.util.Hashtable$Enumerator.next(Hashtable.java:1031)
at cn.tang.demos.hashmap.MyThread.op(MyThread.java:22)
at cn.tang.demos.hashmap.MyThread.run(MyThread.java:16)
Exception in thread "Thread-1" java.util.ConcurrentModificationException
at java.util.Hashtable$Enumerator.next(Hashtable.java:1031)
at cn.tang.demos.hashmap.MyThread.op(MyThread.java:22)
at cn.tang.demos.hashmap.MyThread.run(MyThread.java:16)
Exception in thread "Thread-8" java.util.ConcurrentModificationException
at java.util.Hashtable$Enumerator.next(Hashtable.java:1031)
at cn.tang.demos.hashmap.MyThread.op(MyThread.java:22)
at cn.tang.demos.hashmap.MyThread.run(MyThread.java:16)
Exception in thread "Thread-9" java.util.ConcurrentModificationException
at java.util.Hashtable$Enumerator.next(Hashtable.java:1031)
at cn.tang.demos.hashmap.MyThread.op(MyThread.java:22)
at cn.tang.demos.hashmap.MyThread.run(MyThread.java:16)
Exception in thread "Thread-5" java.util.ConcurrentModificationException
at java.util.Hashtable$Enumerator.next(Hashtable.java:1031)
at cn.tang.demos.hashmap.MyThread.op(MyThread.java:22)
at cn.tang.demos.hashmap.MyThread.run(MyThread.java:16)
Wasn't Hashtable supposed to be thread-safe? Why does this happen? Keep reading the source:
public T next() {
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();
    return nextElement();
}
The exception is thrown whenever the modification counter no longer matches the count the iterator expected. Mapping that onto our test: thread 1 starts iterating over, say, 100 entries and removes one, expecting 99 to remain; but in the meantime thread 2 has also removed one, so only 98 are actually left. Thread 1 has no idea until its next call to next(), when the counters no longer match; the iterator cannot repair the inconsistency on its own, so rather than risk doing worse damage it gives up.
And so you get a ConcurrentModificationException.
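In fact the fail-fast check has nothing to do with threads as such: any structural modification made behind the iterator's back trips it. A minimal single-threaded sketch (the class name FailFastDemo is made up) that reproduces the same exception on the JDK versions discussed here:
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<Integer, String> maps = new HashMap<Integer, String>();
        for (int i = 0; i < 10; i++) {
            maps.put(i, "a");
        }
        Iterator<Integer> it = maps.keySet().iterator();
        while (it.hasNext()) {
            int key = it.next(); // throws ConcurrentModificationException on the pass
                                 // after maps.remove() has bumped modCount
            if (key == 5) {
                maps.remove(key); // structural modification behind the iterator's back
                // it.remove() would be the safe alternative: it keeps expectedModCount in sync
            }
        }
    }
}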
So be careful: Hashtable and Vector are not absolutely thread-safe either. Their individual methods are synchronized, but a compound operation such as iterating while removing is not, so we need to synchronize on maps ourselves:
public void op(int delNumber) {
    synchronized (maps) {
        Iterator<Map.Entry<Integer, String>> t = maps.entrySet().iterator();
        while (t.hasNext()) {
            Map.Entry<Integer, String> entry = t.next();
            int key = entry.getKey();
            if (key == delNumber) { // if this is the key we want to delete, remove it
                maps.remove(key);
                break;
            }
        }
    }
}
With synchronized (maps) added, the problem goes away, and it would even if the underlying map were a plain HashMap.
Actually, ever since JDK 1.5 there has been ConcurrentHashMap, which you can use from multiple threads for this kind of workload without adding any synchronization of your own: its iterators are weakly consistent and never throw ConcurrentModificationException (though compound check-then-act operations still need care).
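A small sketch of the same workload on a ConcurrentHashMap (the class name ConcurrentMapDemo is made up); the removals run concurrently with iteration and no exception is thrown:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        final Map<Integer, String> maps = new ConcurrentHashMap<Integer, String>();
        for (int i = 0; i < 10000; i++) {
            maps.put(i, "a");
        }
        // start a few threads that remove random keys, as in the Hashtable test above
        for (int t = 0; t < 10; t++) {
            new Thread(new Runnable() {
                public void run() {
                    maps.remove((int) (Math.random() * 10000));
                }
            }).start();
        }
        // ConcurrentHashMap's iterators are weakly consistent: they tolerate the
        // concurrent removals above and never throw ConcurrentModificationException.
        int seen = 0;
        for (Map.Entry<Integer, String> entry : maps.entrySet()) {
            seen++;
        }
        System.out.println("iterated " + seen + " entries without an exception");
    }
}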
JDK source code
For reference, the full source of java.util.HashMap from the JDK version used in this article:
1 | package java.util; import java.io.*; public class HashMap<K,V> extends AbstractMap<K,V> implements Map<K,V>, Cloneable, Serializable { /** * The default initial capacity - MUST be a power of two. */ static final int DEFAULT_INITIAL_CAPACITY = 16; /** * The maximum capacity, used if a higher value is implicitly specified * by either of the constructors with arguments. * MUST be a power of two <= 1<<30. */ static final int MAXIMUM_CAPACITY = 1 << 30; /** * The load factor used when none specified in constructor. */ static final float DEFAULT_LOAD_FACTOR = 0.75f; /** * The table, resized as necessary. Length MUST Always be a power of two. */ transient Entry<K,V>[] table; /** * The number of key-value mappings contained in this map. */ transient int size; /** * The next size value at which to resize (capacity * load factor). * @serial */ int threshold; /** * The load factor for the hash table. * * @serial */ final float loadFactor; /** * The number of times this HashMap has been structurally modified * Structural modifications are those that change the number of mappings in * the HashMap or otherwise modify its internal structure (e.g., * rehash). This field is used to make iterators on Collection-views of * the HashMap fail-fast. (See ConcurrentModificationException). */ transient int modCount; /** * The default threshold of map capacity above which alternative hashing is * used for String keys. Alternative hashing reduces the incidence of * collisions due to weak hash code calculation for String keys. * <p/> * This value may be overridden by defining the system property * {@code jdk.map.althashing.threshold}. A property value of {@code 1} * forces alternative hashing to be used at all times whereas * {@code -1} value ensures that alternative hashing is never used. */ static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE; /** * holds values which can't be initialized until after VM is booted. */ private static class Holder { // Unsafe mechanics /** * Unsafe utilities */ static final sun.misc.Unsafe UNSAFE; /** * Offset of "final" hashSeed field we must set in readObject() method. */ static final long HASHSEED_OFFSET; /** * Table capacity above which to switch to use alternative hashing. */ static final int ALTERNATIVE_HASHING_THRESHOLD; static { String altThreshold = java.security.AccessController.doPrivileged( new sun.security.action.GetPropertyAction( "jdk.map.althashing.threshold")); int threshold; try { threshold = (null != altThreshold) ? Integer.parseInt(altThreshold) : ALTERNATIVE_HASHING_THRESHOLD_DEFAULT; // disable alternative hashing if -1 if (threshold == -1) { threshold = Integer.MAX_VALUE; } if (threshold < 0) { throw new IllegalArgumentException("value must be positive integer."); } } catch(IllegalArgumentException failed) { throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed); } ALTERNATIVE_HASHING_THRESHOLD = threshold; try { UNSAFE = sun.misc.Unsafe.getUnsafe(); HASHSEED_OFFSET = UNSAFE.objectFieldOffset( HashMap.class.getDeclaredField("hashSeed")); } catch (NoSuchFieldException | SecurityException e) { throw new Error("Failed to record hashSeed offset", e); } } } /** * If {@code true} then perform alternative hashing of String keys to reduce * the incidence of collisions due to weak hash code calculation. */ transient boolean useAltHashing; /** * A randomizing value associated with this instance that is applied to * hash code of keys to make hash collisions harder to find. 
*/ transient final int hashSeed = sun.misc.Hashing.randomHashSeed(this); /** * Constructs an empty <tt>HashMap</tt> with the specified initial * capacity and load factor. * * @param initialCapacity the initial capacity * @param loadFactor the load factor * @throws IllegalArgumentException if the initial capacity is negative * or the load factor is nonpositive */ public HashMap(int initialCapacity, float loadFactor) { if (initialCapacity < 0) throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity); if (initialCapacity > MAXIMUM_CAPACITY) initialCapacity = MAXIMUM_CAPACITY; if (loadFactor <= 0 || Float.isNaN(loadFactor)) throw new IllegalArgumentException("Illegal load factor: " + loadFactor); // Find a power of 2 >= initialCapacity int capacity = 1; while (capacity < initialCapacity) capacity <<= 1; this.loadFactor = loadFactor; threshold = (int)Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1); table = new Entry[capacity]; useAltHashing = sun.misc.VM.isBooted() && (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD); init(); } /** * Constructs an empty <tt>HashMap</tt> with the specified initial * capacity and the default load factor (0.75). * * @param initialCapacity the initial capacity. * @throws IllegalArgumentException if the initial capacity is negative. */ public HashMap(int initialCapacity) { this(initialCapacity, DEFAULT_LOAD_FACTOR); } /** * Constructs an empty <tt>HashMap</tt> with the default initial capacity * (16) and the default load factor (0.75). */ public HashMap() { this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR); } /** * Constructs a new <tt>HashMap</tt> with the same mappings as the * specified <tt>Map</tt>. The <tt>HashMap</tt> is created with * default load factor (0.75) and an initial capacity sufficient to * hold the mappings in the specified <tt>Map</tt>. * * @param m the map whose mappings are to be placed in this map * @throws NullPointerException if the specified map is null */ public HashMap(Map<? extends K, ? extends V> m) { this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1, DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR); putAllForCreate(m); } // internal utilities /** * Initialization hook for subclasses. This method is called * in all constructors and pseudo-constructors (clone, readObject) * after HashMap has been initialized but before any entries have * been inserted. (In the absence of this method, readObject would * require explicit knowledge of subclasses.) */ void init() { } /** * Retrieve object hash code and applies a supplemental hash function to the * result hash, which defends against poor quality hash functions. This is * critical because HashMap uses power-of-two length hash tables, that * otherwise encounter collisions for hashCodes that do not differ * in lower bits. Note: Null keys always map to hash 0, thus index 0. */ final int hash(Object k) { int h = 0; if (useAltHashing) { if (k instanceof String) { return sun.misc.Hashing.stringHash32((String) k); } h = hashSeed; } h ^= k.hashCode(); // This function ensures that hashCodes that differ only by // constant multiples at each bit position have a bounded // number of collisions (approximately 8 at default load factor). h ^= (h >>> 20) ^ (h >>> 12); return h ^ (h >>> 7) ^ (h >>> 4); } /** * Returns index for hash code h. */ static int indexFor(int h, int length) { return h & (length-1); } /** * Returns the number of key-value mappings in this map. 
* * @return the number of key-value mappings in this map */ public int size() { return size; } /** * Returns <tt>true</tt> if this map contains no key-value mappings. * * @return <tt>true</tt> if this map contains no key-value mappings */ public boolean isEmpty() { return size == 0; } /** * Returns the value to which the specified key is mapped, * or {@code null} if this map contains no mapping for the key. * * <p>More formally, if this map contains a mapping from a key * {@code k} to a value {@code v} such that {@code (key==null ? k==null : * key.equals(k))}, then this method returns {@code v}; otherwise * it returns {@code null}. (There can be at most one such mapping.) * * <p>A return value of {@code null} does not <i>necessarily</i> * indicate that the map contains no mapping for the key; it's also * possible that the map explicitly maps the key to {@code null}. * The {@link #containsKey containsKey} operation may be used to * distinguish these two cases. * * @see #put(Object, Object) */ public V get(Object key) { if (key == null) return getForNullKey(); Entry<K,V> entry = getEntry(key); return null == entry ? null : entry.getValue(); } /** * Offloaded version of get() to look up null keys. Null keys map * to index 0. This null case is split out into separate methods * for the sake of performance in the two most commonly used * operations (get and put), but incorporated with conditionals in * others. */ private V getForNullKey() { for (Entry<K,V> e = table[0]; e != null; e = e.next) { if (e.key == null) return e.value; } return null; } /** * Returns <tt>true</tt> if this map contains a mapping for the * specified key. * * @param key The key whose presence in this map is to be tested * @return <tt>true</tt> if this map contains a mapping for the specified * key. */ public boolean containsKey(Object key) { return getEntry(key) != null; } /** * Returns the entry associated with the specified key in the * HashMap. Returns null if the HashMap contains no mapping * for the key. */ final Entry<K,V> getEntry(Object key) { int hash = (key == null) ? 0 : hash(key); for (Entry<K,V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) { Object k; if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) return e; } return null; } /** * Associates the specified value with the specified key in this map. * If the map previously contained a mapping for the key, the old * value is replaced. * * @param key key with which the specified value is to be associated * @param value value to be associated with the specified key * @return the previous value associated with <tt>key</tt>, or * <tt>null</tt> if there was no mapping for <tt>key</tt>. * (A <tt>null</tt> return can also indicate that the map * previously associated <tt>null</tt> with <tt>key</tt>.) 
*/ public V put(K key, V value) { if (key == null) return putForNullKey(value); int hash = hash(key); int i = indexFor(hash, table.length); for (Entry<K,V> e = table[i]; e != null; e = e.next) { Object k; if (e.hash == hash && ((k = e.key) == key || key.equals(k))) { V oldValue = e.value; e.value = value; e.recordAccess(this); return oldValue; } } modCount++; addEntry(hash, key, value, i); return null; } /** * Offloaded version of put for null keys */ private V putForNullKey(V value) { for (Entry<K,V> e = table[0]; e != null; e = e.next) { if (e.key == null) { V oldValue = e.value; e.value = value; e.recordAccess(this); return oldValue; } } modCount++; addEntry(0, null, value, 0); return null; } /** * This method is used instead of put by constructors and * pseudoconstructors (clone, readObject). It does not resize the table, * check for comodification, etc. It calls createEntry rather than * addEntry. */ private void putForCreate(K key, V value) { int hash = null == key ? 0 : hash(key); int i = indexFor(hash, table.length); /** * Look for preexisting entry for key. This will never happen for * clone or deserialize. It will only happen for construction if the * input Map is a sorted map whose ordering is inconsistent w/ equals. */ for (Entry<K,V> e = table[i]; e != null; e = e.next) { Object k; if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) { e.value = value; return; } } createEntry(hash, key, value, i); } private void putAllForCreate(Map<? extends K, ? extends V> m) { for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) putForCreate(e.getKey(), e.getValue()); } /** * Rehashes the contents of this map into a new array with a * larger capacity. This method is called automatically when the * number of keys in this map reaches its threshold. * * If current capacity is MAXIMUM_CAPACITY, this method does not * resize the map, but sets threshold to Integer.MAX_VALUE. * This has the effect of preventing future calls. * * @param newCapacity the new capacity, MUST be a power of two; * must be greater than current capacity unless current * capacity is MAXIMUM_CAPACITY (in which case value * is irrelevant). */ void resize(int newCapacity) { Entry[] oldTable = table; int oldCapacity = oldTable.length; if (oldCapacity == MAXIMUM_CAPACITY) { threshold = Integer.MAX_VALUE; return; } Entry[] newTable = new Entry[newCapacity]; boolean oldAltHashing = useAltHashing; useAltHashing |= sun.misc.VM.isBooted() && (newCapacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD); boolean rehash = oldAltHashing ^ useAltHashing; transfer(newTable, rehash); table = newTable; threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1); } /** * Transfers all entries from current table to newTable. */ void transfer(Entry[] newTable, boolean rehash) { int newCapacity = newTable.length; for (Entry<K,V> e : table) { while(null != e) { Entry<K,V> next = e.next; if (rehash) { e.hash = null == e.key ? 0 : hash(e.key); } int i = indexFor(e.hash, newCapacity); e.next = newTable[i]; newTable[i] = e; e = next; } } } /** * Copies all of the mappings from the specified map to this map. * These mappings will replace any mappings that this map had for * any of the keys currently in the specified map. * * @param m mappings to be stored in this map * @throws NullPointerException if the specified map is null */ public void putAll(Map<? extends K, ? 
extends V> m) { int numKeysToBeAdded = m.size(); if (numKeysToBeAdded == 0) return; /* * Expand the map if the map if the number of mappings to be added * is greater than or equal to threshold. This is conservative; the * obvious condition is (m.size() + size) >= threshold, but this * condition could result in a map with twice the appropriate capacity, * if the keys to be added overlap with the keys already in this map. * By using the conservative calculation, we subject ourself * to at most one extra resize. */ if (numKeysToBeAdded > threshold) { int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1); if (targetCapacity > MAXIMUM_CAPACITY) targetCapacity = MAXIMUM_CAPACITY; int newCapacity = table.length; while (newCapacity < targetCapacity) newCapacity <<= 1; if (newCapacity > table.length) resize(newCapacity); } for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) put(e.getKey(), e.getValue()); } /** * Removes the mapping for the specified key from this map if present. * * @param key key whose mapping is to be removed from the map * @return the previous value associated with <tt>key</tt>, or * <tt>null</tt> if there was no mapping for <tt>key</tt>. * (A <tt>null</tt> return can also indicate that the map * previously associated <tt>null</tt> with <tt>key</tt>.) */ public V remove(Object key) { Entry<K,V> e = removeEntryForKey(key); return (e == null ? null : e.value); } /** * Removes and returns the entry associated with the specified key * in the HashMap. Returns null if the HashMap contains no mapping * for this key. */ final Entry<K,V> removeEntryForKey(Object key) { int hash = (key == null) ? 0 : hash(key); int i = indexFor(hash, table.length); Entry<K,V> prev = table[i]; Entry<K,V> e = prev; while (e != null) { Entry<K,V> next = e.next; Object k; if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) { modCount++; size--; if (prev == e) table[i] = next; else prev.next = next; e.recordRemoval(this); return e; } prev = e; e = next; } return e; } /** * Special version of remove for EntrySet using {@code Map.Entry.equals()} * for matching. */ final Entry<K,V> removeMapping(Object o) { if (!(o instanceof Map.Entry)) return null; Map.Entry<K,V> entry = (Map.Entry<K,V>) o; Object key = entry.getKey(); int hash = (key == null) ? 0 : hash(key); int i = indexFor(hash, table.length); Entry<K,V> prev = table[i]; Entry<K,V> e = prev; while (e != null) { Entry<K,V> next = e.next; if (e.hash == hash && e.equals(entry)) { modCount++; size--; if (prev == e) table[i] = next; else prev.next = next; e.recordRemoval(this); return e; } prev = e; e = next; } return e; } /** * Removes all of the mappings from this map. * The map will be empty after this call returns. */ public void clear() { modCount++; Entry[] tab = table; for (int i = 0; i < tab.length; i++) tab[i] = null; size = 0; } /** * Returns <tt>true</tt> if this map maps one or more keys to the * specified value. 
* * @param value value whose presence in this map is to be tested * @return <tt>true</tt> if this map maps one or more keys to the * specified value */ public boolean containsValue(Object value) { if (value == null) return containsNullValue(); Entry[] tab = table; for (int i = 0; i < tab.length ; i++) for (Entry e = tab[i] ; e != null ; e = e.next) if (value.equals(e.value)) return true; return false; } /** * Special-case code for containsValue with null argument */ private boolean containsNullValue() { Entry[] tab = table; for (int i = 0; i < tab.length ; i++) for (Entry e = tab[i] ; e != null ; e = e.next) if (e.value == null) return true; return false; } /** * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and * values themselves are not cloned. * * @return a shallow copy of this map */ public Object clone() { HashMap<K,V> result = null; try { result = (HashMap<K,V>)super.clone(); } catch (CloneNotSupportedException e) { // assert false; } result.table = new Entry[table.length]; result.entrySet = null; result.modCount = 0; result.size = 0; result.init(); result.putAllForCreate(this); return result; } static class Entry<K,V> implements Map.Entry<K,V> { final K key; V value; Entry<K,V> next; int hash; /** * Creates new entry. */ Entry(int h, K k, V v, Entry<K,V> n) { value = v; next = n; key = k; hash = h; } public final K getKey() { return key; } public final V getValue() { return value; } public final V setValue(V newValue) { V oldValue = value; value = newValue; return oldValue; } public final boolean equals(Object o) { if (!(o instanceof Map.Entry)) return false; Map.Entry e = (Map.Entry)o; Object k1 = getKey(); Object k2 = e.getKey(); if (k1 == k2 || (k1 != null && k1.equals(k2))) { Object v1 = getValue(); Object v2 = e.getValue(); if (v1 == v2 || (v1 != null && v1.equals(v2))) return true; } return false; } public final int hashCode() { return (key==null ? 0 : key.hashCode()) ^ (value==null ? 0 : value.hashCode()); } public final String toString() { return getKey() + "=" + getValue(); } /** * This method is invoked whenever the value in an entry is * overwritten by an invocation of put(k,v) for a key k that's already * in the HashMap. */ void recordAccess(HashMap<K,V> m) { } /** * This method is invoked whenever the entry is * removed from the table. */ void recordRemoval(HashMap<K,V> m) { } } /** * Adds a new entry with the specified key, value and hash code to * the specified bucket. It is the responsibility of this * method to resize the table if appropriate. * * Subclass overrides this to alter the behavior of put method. */ void addEntry(int hash, K key, V value, int bucketIndex) { if ((size >= threshold) && (null != table[bucketIndex])) { resize(2 * table.length); hash = (null != key) ? hash(key) : 0; bucketIndex = indexFor(hash, table.length); } createEntry(hash, key, value, bucketIndex); } /** * Like addEntry except that this version is used when creating entries * as part of Map construction or "pseudo-construction" (cloning, * deserialization). This version needn't worry about resizing the table. * * Subclass overrides this to alter the behavior of HashMap(Map), * clone, and readObject. 
*/ void createEntry(int hash, K key, V value, int bucketIndex) { Entry<K,V> e = table[bucketIndex]; table[bucketIndex] = new Entry<>(hash, key, value, e); size++; } private abstract class HashIterator<E> implements Iterator<E> { Entry<K,V> next; // next entry to return int expectedModCount; // For fast-fail int index; // current slot Entry<K,V> current; // current entry HashIterator() { expectedModCount = modCount; if (size > 0) { // advance to first entry Entry[] t = table; while (index < t.length && (next = t[index++]) == null) ; } } public final boolean hasNext() { return next != null; } final Entry<K,V> nextEntry() { if (modCount != expectedModCount) throw new ConcurrentModificationException(); Entry<K,V> e = next; if (e == null) throw new NoSuchElementException(); if ((next = e.next) == null) { Entry[] t = table; while (index < t.length && (next = t[index++]) == null) ; } current = e; return e; } public void remove() { if (current == null) throw new IllegalStateException(); if (modCount != expectedModCount) throw new ConcurrentModificationException(); Object k = current.key; current = null; HashMap.this.removeEntryForKey(k); expectedModCount = modCount; } } private final class ValueIterator extends HashIterator<V> { public V next() { return nextEntry().value; } } private final class KeyIterator extends HashIterator<K> { public K next() { return nextEntry().getKey(); } } private final class EntryIterator extends HashIterator<Map.Entry<K,V>> { public Map.Entry<K,V> next() { return nextEntry(); } } // Subclass overrides these to alter behavior of views' iterator() method Iterator<K> newKeyIterator() { return new KeyIterator(); } Iterator<V> newValueIterator() { return new ValueIterator(); } Iterator<Map.Entry<K,V>> newEntryIterator() { return new EntryIterator(); } // Views private transient Set<Map.Entry<K,V>> entrySet = null; /** * Returns a {@link Set} view of the keys contained in this map. * The set is backed by the map, so changes to the map are * reflected in the set, and vice-versa. If the map is modified * while an iteration over the set is in progress (except through * the iterator's own <tt>remove</tt> operation), the results of * the iteration are undefined. The set supports element removal, * which removes the corresponding mapping from the map, via the * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>, * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt> * operations. It does not support the <tt>add</tt> or <tt>addAll</tt> * operations. */ public Set<K> keySet() { Set<K> ks = keySet; return (ks != null ? ks : (keySet = new KeySet())); } private final class KeySet extends AbstractSet<K> { public Iterator<K> iterator() { return newKeyIterator(); } public int size() { return size; } public boolean contains(Object o) { return containsKey(o); } public boolean remove(Object o) { return HashMap.this.removeEntryForKey(o) != null; } public void clear() { HashMap.this.clear(); } } /** * Returns a {@link Collection} view of the values contained in this map. * The collection is backed by the map, so changes to the map are * reflected in the collection, and vice-versa. If the map is * modified while an iteration over the collection is in progress * (except through the iterator's own <tt>remove</tt> operation), * the results of the iteration are undefined. 
The collection * supports element removal, which removes the corresponding * mapping from the map, via the <tt>Iterator.remove</tt>, * <tt>Collection.remove</tt>, <tt>removeAll</tt>, * <tt>retainAll</tt> and <tt>clear</tt> operations. It does not * support the <tt>add</tt> or <tt>addAll</tt> operations. */ public Collection<V> values() { Collection<V> vs = values; return (vs != null ? vs : (values = new Values())); } private final class Values extends AbstractCollection<V> { public Iterator<V> iterator() { return newValueIterator(); } public int size() { return size; } public boolean contains(Object o) { return containsValue(o); } public void clear() { HashMap.this.clear(); } } /** * Returns a {@link Set} view of the mappings contained in this map. * The set is backed by the map, so changes to the map are * reflected in the set, and vice-versa. If the map is modified * while an iteration over the set is in progress (except through * the iterator's own <tt>remove</tt> operation, or through the * <tt>setValue</tt> operation on a map entry returned by the * iterator) the results of the iteration are undefined. The set * supports element removal, which removes the corresponding * mapping from the map, via the <tt>Iterator.remove</tt>, * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and * <tt>clear</tt> operations. It does not support the * <tt>add</tt> or <tt>addAll</tt> operations. * * @return a set view of the mappings contained in this map */ public Set<Map.Entry<K,V>> entrySet() { return entrySet0(); } private Set<Map.Entry<K,V>> entrySet0() { Set<Map.Entry<K,V>> es = entrySet; return es != null ? es : (entrySet = new EntrySet()); } private final class EntrySet extends AbstractSet<Map.Entry<K,V>> { public Iterator<Map.Entry<K,V>> iterator() { return newEntryIterator(); } public boolean contains(Object o) { if (!(o instanceof Map.Entry)) return false; Map.Entry<K,V> e = (Map.Entry<K,V>) o; Entry<K,V> candidate = getEntry(e.getKey()); return candidate != null && candidate.equals(e); } public boolean remove(Object o) { return removeMapping(o) != null; } public int size() { return size; } public void clear() { HashMap.this.clear(); } } /** * Save the state of the <tt>HashMap</tt> instance to a stream (i.e., * serialize it). * * @serialData The <i>capacity</i> of the HashMap (the length of the * bucket array) is emitted (int), followed by the * <i>size</i> (an int, the number of key-value * mappings), followed by the key (Object) and value (Object) * for each key-value mapping. The key-value mappings are * emitted in no particular order. */ private void writeObject(java.io.ObjectOutputStream s) throws IOException { Iterator<Map.Entry<K,V>> i = (size > 0) ? entrySet0().iterator() : null; // Write out the threshold, loadfactor, and any hidden stuff s.defaultWriteObject(); // Write out number of buckets s.writeInt(table.length); // Write out size (number of Mappings) s.writeInt(size); // Write out keys and values (alternating) if (size > 0) { for(Map.Entry<K,V> e : entrySet0()) { s.writeObject(e.getKey()); s.writeObject(e.getValue()); } } } private static final long serialVersionUID = 362498820763181265L; /** * Reconstitute the {@code HashMap} instance from a stream (i.e., * deserialize it). 
*/ private void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException { // Read in the threshold (ignored), loadfactor, and any hidden stuff s.defaultReadObject(); if (loadFactor <= 0 || Float.isNaN(loadFactor)) throw new InvalidObjectException("Illegal load factor: " + loadFactor); // set hashSeed (can only happen after VM boot) Holder.UNSAFE.putIntVolatile(this, Holder.HASHSEED_OFFSET, sun.misc.Hashing.randomHashSeed(this)); // Read in number of buckets and allocate the bucket array; s.readInt(); // ignored // Read number of mappings int mappings = s.readInt(); if (mappings < 0) throw new InvalidObjectException("Illegal mappings count: " + mappings); int initialCapacity = (int) Math.min( // capacity chosen by number of mappings // and desired load (if >= 0.25) mappings * Math.min(1 / loadFactor, 4.0f), // we have limits... HashMap.MAXIMUM_CAPACITY); int capacity = 1; // find smallest power of two which holds all mappings while (capacity < initialCapacity) { capacity <<= 1; } table = new Entry[capacity]; threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1); useAltHashing = sun.misc.VM.isBooted() && (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD); init(); // Give subclass a chance to do its thing. // Read the keys and values, and put the mappings in the HashMap for (int i=0; i<mappings; i++) { K key = (K) s.readObject(); V value = (V) s.readObject(); putForCreate(key, value); } } // These methods are used when serializing HashSets int capacity() { return table.length; } float loadFactor() { return loadFactor; } } |