Original article. Please credit the source when reposting: https://www.cnblogs.com/V1haoge/p/10755431.html

HashSet is a hash-based Set implementation; under the hood it is simply a HashMap whose values are all a fixed constant.
Because HashMap stores its entries in no particular order, HashSet is likewise unordered. HashSet permits null, but at most one null element, because duplicate elements are not allowed.
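These rules can be observed directly from the return value of add, which reports whether the element was actually inserted (a small demo, any class name here is just for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetBasics {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        System.out.println(set.add("a"));   // true: first insertion
        System.out.println(set.add("a"));   // false: duplicate rejected
        System.out.println(set.add(null));  // true: one null is allowed
        System.out.println(set.add(null));  // false: null is deduplicated too
        System.out.println(set.size());     // 2
    }
}
```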

    public class HashSet<E> extends AbstractSet<E>
        implements Set<E>, Cloneable, java.io.Serializable
    {
        //...
        private transient HashMap<E,Object> map;
        private static final Object PRESENT = new Object();
        //...
    }

The map field above is the HashMap that backs HashSet; every HashSet operation is forwarded to this map.
PRESENT is the fixed dummy value stored with every key in the backing HashMap, because HashSet only cares about the keys.

    public class HashSet<E> extends AbstractSet<E>
        implements Set<E>, Cloneable, java.io.Serializable {
        //...
        public HashSet() {
            map = new HashMap<>();
        }
        public HashSet(Collection<? extends E> c) {
            map = new HashMap<>(Math.max((int) (c.size()/.75f) + 1, 16));
            addAll(c);
        }
        public HashSet(int initialCapacity, float loadFactor) {
            map = new HashMap<>(initialCapacity, loadFactor);
        }
        public HashSet(int initialCapacity) {
            map = new HashMap<>(initialCapacity);
        }
        HashSet(int initialCapacity, float loadFactor, boolean dummy) {
            map = new LinkedHashMap<>(initialCapacity, loadFactor);
        }
        //...
    }

Clearly, every HashSet constructor ultimately does the same thing: create the backing HashMap. Note that the collection constructor sizes the map as max(c.size()/0.75 + 1, 16), so the incoming elements fit without an immediate resize.
The last (package-private) constructor creates a LinkedHashMap instead; it exists for LinkedHashSet. LinkedHashMap is still a HashMap, since it extends HashMap, but it adds predictable iteration order (insertion order or access order).
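The ordering difference between the two backing maps is easy to see by iterating a LinkedHashSet versus a plain HashSet (a small demo; the class name is just for illustration):

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class OrderDemo {
    public static void main(String[] args) {
        // LinkedHashSet is backed by a LinkedHashMap, so it keeps insertion order
        Set<String> linked = new LinkedHashSet<>();
        linked.add("banana");
        linked.add("apple");
        linked.add("cherry");
        System.out.println(linked); // [banana, apple, cherry]

        // A plain HashSet iterates in an order determined by hashing,
        // which is unrelated to insertion order and may vary
        Set<String> plain = new HashSet<>(linked);
        System.out.println(plain);
    }
}
```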

    public class HashSet<E> extends AbstractSet<E>
        implements Set<E>, Cloneable, java.io.Serializable {
        //...
        public Iterator<E> iterator() {
            return map.keySet().iterator();
        }
        public int size() {
            return map.size();
        }
        public boolean isEmpty() {
            return map.isEmpty();
        }
        public boolean contains(Object o) {
            return map.containsKey(o);
        }
        public boolean add(E e) {
            return map.put(e, PRESENT)==null;
        }
        public boolean remove(Object o) {
            return map.remove(o)==PRESENT;
        }
        public void clear() {
            map.clear();
        }
        //...
    }

All of the basic operations above are delegated to the corresponding methods of the backing HashMap. For example, add returns true only when put returns null, i.e. when the key was not already present.
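The delegation pattern can be condensed into a minimal sketch: a set type whose every operation forwards to a HashMap holding a shared dummy value. This is a simplified illustration of the idea, not JDK code; TinySet is a name invented here.

```java
import java.util.HashMap;

// Simplified sketch of HashSet's design: a backing HashMap whose
// values are all the same dummy object, so only the keys matter.
public class TinySet<E> {
    private static final Object PRESENT = new Object();
    private final HashMap<E, Object> map = new HashMap<>();

    // put returns the previous value, so null means the key was new
    public boolean add(E e)           { return map.put(e, PRESENT) == null; }
    public boolean remove(Object o)   { return map.remove(o) == PRESENT; }
    public boolean contains(Object o) { return map.containsKey(o); }
    public int size()                 { return map.size(); }
}
```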

When a HashSet instance is serialized, the map field is skipped by default serialization because it is declared transient. Instead, HashSet implements writeObject and readObject to serialize its elements by hand. See the source:

    // writeObject is invoked when a HashSet is serialized
    public class HashSet<E> extends AbstractSet<E>
        implements Set<E>, Cloneable, java.io.Serializable
    {
        //...
        private transient HashMap<E,Object> map;
        private static final Object PRESENT = new Object();

        private void writeObject(java.io.ObjectOutputStream s)
            throws java.io.IOException {
            // Write out any hidden serialization magic
            // (writes the non-static, non-transient fields to the stream)
            s.defaultWriteObject();

            // Write out the backing HashMap's capacity and load factor
            s.writeInt(map.capacity());
            s.writeFloat(map.loadFactor());

            // Write out the element count
            s.writeInt(map.size());

            // Finally, write out all elements in the proper order
            for (E e : map.keySet())
                s.writeObject(e);
        }
        //...
    }
    // readObject is invoked when a HashSet is deserialized
    public class HashSet<E> extends AbstractSet<E>
        implements Set<E>, Cloneable, java.io.Serializable
    {
        //...
        private transient HashMap<E,Object> map;
        private static final Object PRESENT = new Object();

        private void readObject(java.io.ObjectInputStream s)
            throws java.io.IOException, ClassNotFoundException {
            // Read in any hidden serialization magic
            // (reads this class's non-static, non-transient fields from the stream)
            s.defaultReadObject();

            // Read capacity and verify non-negative.
            int capacity = s.readInt();
            if (capacity < 0) {
                throw new InvalidObjectException("Illegal capacity: " +
                                                 capacity);
            }

            // Read load factor and verify positive and non NaN.
            float loadFactor = s.readFloat();
            if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
                throw new InvalidObjectException("Illegal load factor: " +
                                                 loadFactor);
            }

            // Read size and verify non-negative.
            int size = s.readInt();
            if (size < 0) {
                throw new InvalidObjectException("Illegal size: " +
                                                 size);
            }

            // Set the capacity according to the size and load factor ensuring that
            // the HashMap is at least 25% full but clamping to maximum capacity.
            capacity = (int) Math.min(size * Math.min(1 / loadFactor, 4.0f),
                                      HashMap.MAXIMUM_CAPACITY);

            // Constructing the backing map will lazily create an array when the first element is
            // added, so check it before construction. Call HashMap.tableSizeFor to compute the
            // actual allocation size. Check Map.Entry[].class since it's the nearest public type to
            // what is actually created.
            SharedSecrets.getJavaOISAccess()
                         .checkArray(s, Map.Entry[].class, HashMap.tableSizeFor(capacity));

            // Create the backing HashMap (or LinkedHashMap for a LinkedHashSet)
            map = (((HashSet<?>)this) instanceof LinkedHashSet ?
                   new LinkedHashMap<E,Object>(capacity, loadFactor) :
                   new HashMap<E,Object>(capacity, loadFactor));

            // Read the stored elements and put each one into the new HashMap
            for (int i=0; i<size; i++) {
                @SuppressWarnings("unchecked")
                E e = (E) s.readObject();
                map.put(e, PRESENT);
            }
        }
        //...
    }

In short, HashSet is implemented entirely on top of HashMap.
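The custom writeObject/readObject pair is exercised transparently by a normal serialization round trip. The sketch below (class and method names are invented for this demo) serializes a HashSet to a byte array and reads it back; equality of the copy confirms the elements survived even though the map field itself was never written:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SerializeDemo {
    // Serialize a set to bytes, then deserialize it back
    static Set<String> roundTrip(Set<String> original) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original); // triggers HashSet.writeObject
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            Set<String> copy = (Set<String>) in.readObject(); // triggers readObject
            return copy;
        }
    }

    public static void main(String[] args) throws Exception {
        Set<String> original = new HashSet<>(Arrays.asList("a", "b"));
        Set<String> copy = roundTrip(original);
        System.out.println(copy.equals(original)); // true
    }
}
```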
