First, let's start from a shared premise: caches are used to make a system faster.

Nowadays distributed caches are so capable that, most of the time, we hardly think about local caches at all.

And in high-concurrency scenarios, if we simply push everything into a NoSQL-style cache such as Redis, that seems fine too, right?

But there is a problem worth considering: Redis is fast, yet it still has its own limits. If we store everything in it, then under high concurrency the application's throughput ends up capped by the cache's throughput.

So, in some scenarios, we can keep part of the data in a local cache and avoid pushing every single request through to Redis.

This article treats the local cache as a second-level cache in front of the remote one, not as the primary caching tool on its own.

Using a local cache, however, raises a few concerns:

  1. Cache consistency;
  2. Concurrency safety;

  By cache consistency we mean whether the local cache stays in line with the data in cache middleware such as Redis. If the inconsistency goes beyond what the business can accept, the extra speed is meaningless.

  By concurrency safety we mean thread safety when the local cache is accessed concurrently; if entries get corrupted or mixed up, that is a serious problem.

What do we gain from a local cache?

  1. It removes the network I/O of calling the remote cache, so access is naturally faster;
  2. It reduces the number of concurrent requests hitting the remote cache, so the system as a whole can handle more concurrency;

Where does a local cache apply?

  1. Single-node deployments, obviously (not worth discussing);
  2. Read-heavy, write-light workloads (which is the classic caching scenario anyway);
  3. Workloads that can tolerate a window of cache inconsistency (with a local cache, different machines in the cluster will inevitably hold different data for a while);
  4. Workloads that issue a very large volume of cache requests (if they all went straight to Redis, Redis would be under huge pressure and application latency would suffer);

  So if your system matches these scenarios, it is worth thinking about how a local cache could improve response times.

If we had to implement this two-level caching ourselves, I don't think it would be that hard; there are only two problems to solve:

  1. A cache expiration policy;
  2. Cache safety under concurrency;

The simplest, most direct approach is a thread that refreshes the cache on a schedule and deletes entries once their time is up; the safety problem can be handled with synchronized or the lock utilities in the java.util.concurrent package. A rough sketch of that naive idea follows.
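For illustration only, here is a minimal sketch of that naive approach, assuming a map plus a scheduled sweeper; the class and method names are made up, and it leans on ConcurrentHashMap (rather than explicit synchronized blocks) for thread safety:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NaiveLocalCache {

    private static class Entry {
        final String value;
        final long expireAtMillis;

        Entry(String value, long expireAtMillis) {
            this.value = value;
            this.expireAtMillis = expireAtMillis;
        }
    }

    private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();

    public NaiveLocalCache(long sweepIntervalSeconds) {
        // The "timed refresh" thread: periodically drop entries whose time is up.
        ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();
        sweeper.scheduleAtFixedRate(this::sweep,
                sweepIntervalSeconds, sweepIntervalSeconds, TimeUnit.SECONDS);
    }

    public void put(String key, String value, long ttlMillis) {
        map.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    public String get(String key) {
        Entry e = map.get(key);
        if (e == null || e.expireAtMillis < System.currentTimeMillis()) {
            map.remove(key);   // also drop an expired entry lazily on read
            return null;       // caller falls back to redis / the database
        }
        return e.value;
    }

    private void sweep() {
        long now = System.currentTimeMillis();
        map.entrySet().removeIf(en -> en.getValue().expireAtMillis < now);
    }
}

Even this toy version hints at the real difficulties: loading on a miss, avoiding duplicate loads under concurrency, and bounding memory are all left out.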

But doing this properly may turn out to be far from simple, and it is not what we want to dwell on here.

  Instead, let's look at how guava solves these problems, and use its approach to sharpen our own thinking.

How do we use guava as our second-level cache?

  1. First, add the guava dependency:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>18.0</version>
</dependency>

  2. Create the guava cache.

  First, the guava cache instance should be shared globally; otherwise it defeats the purpose of caching.

  Second, parameters such as expiration should be configurable.

  For example:

@Component
@Slf4j
public class LocalEnhancedCacheHolder {

    @Value("${guava.cache.max.size}")
    private Integer maxCacheSize;

    @Value("${guava.cache.timeout}")
    private Integer guavaCacheTimeout;

    /**
     * String-typed cache, k -> v; only strings are stored, to keep things simple
     */
    private LoadingCache<String, String> stringDbCacheContainer;

    /**
     * Hash-typed cache; demonstrates using a composite key as the guava cache key
     */
    private LoadingCache<HashDbItemEntry, byte[]> hashDbCacheContainer;

    @Resource
    private RedisTemplate redisTemplate;

    /**
     * Marker for an empty string value
     */
    public static final String EMPTY_VALUE_STRING = "";

    /**
     * Marker for an empty byte[] value
     */
    public static final byte[] EMPTY_VALUE_BYTES = new byte[0];

    @PostConstruct
    public void init() {
        Integer dbCount = 2;
        stringDbCacheContainer = CacheBuilder.newBuilder()
                .expireAfterWrite(guavaCacheTimeout, TimeUnit.SECONDS)
                .maximumSize(maxCacheSize)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) throws Exception {
                        log.info("[cache] loading value from redis: {}", key);
                        String value = redisTemplate.get(key);
                        return StringUtils.defaultIfBlank(value, EMPTY_VALUE_STRING);
                    }
                });

        hashDbCacheContainer = CacheBuilder.newBuilder()
                .expireAfterWrite(guavaCacheTimeout, TimeUnit.SECONDS)
                .maximumSize(maxCacheSize / dbCount)
                .build(new CacheLoader<HashDbItemEntry, byte[]>() {
                    @Override
                    public byte[] load(HashDbItemEntry keyHolder) throws Exception {
                        log.info("[cache] loading value from redis: {}", keyHolder);
                        byte[] valueBytes = redisTemplate.hgetValue(
                                keyHolder.getBucketKey(), keyHolder.getSlotKey());
                        if (valueBytes == null) {
                            valueBytes = EMPTY_VALUE_BYTES;
                        }
                        return valueBytes;
                    }
                });
    }

    /**
     * Get a value from the k-v cache
     *
     * @param key the key
     * @return the cached value, or null if there is none
     */
    public String getCache(String key) {
        try {
            return stringDbCacheContainer.get(key);
        } catch (ExecutionException e) {
            log.error("[cache] failed to get cache: {}, ex:{}", key, e);
            throw new RuntimeException(e);
        }
    }

    /**
     * Put a value into the cache; for now this only writes through to redis
     *
     * @param key   cache key
     * @param value cache value
     */
    public void putCache(String key, String value) {
        redisTemplate.set(key, value, 0L);
    }

    /**
     * Put a value into the cache with a timeout; for now this only writes through to redis
     *
     * @param key     cache key
     * @param value   cache value
     * @param timeout timeout in seconds
     */
    public void putCache(String key, String value, Long timeout) {
        redisTemplate.set(key, value, timeout);
    }

    /**
     * Remove a single k-v cache entry
     *
     * @param key cache key
     */
    public void removeCache(String key) {
        redisTemplate.remove(key);
    }

    /**
     * Remove k-v cache entries in batch
     *
     * @param keyList cache keys, removed via a pipeline for better performance
     */
    public void removeCache(Collection<String> keyList) {
        redisTemplate.remove(keyList);
    }

    /**
     * Get a value from the hash cache
     *
     * @param bucketKey bucket key, pointing to a group of k -> v entries
     * @param slotKey   slot key, pointing to the concrete cached value
     * @return the cached value
     */
    public byte[] getCacheFromHash(String bucketKey, String slotKey) {
        HashDbItemEntry entry = new HashDbItemEntry(bucketKey, slotKey);
        try {
            return hashDbCacheContainer.get(entry);
        } catch (ExecutionException e) {
            log.error("[cache] failed to get cache: {}, ex:{}", entry, e);
            throw new RuntimeException(e);
        }
    }

    /**
     * Composite key for hash-structured data
     *
     * value is not stored for now; the object is only used for lookups
     */
    class HashDbItemEntry {
        private String bucketKey;
        private String slotKey;
        private Object value;

        public HashDbItemEntry(String bucketKey, String slotKey) {
            this.bucketKey = bucketKey;
            this.slotKey = slotKey;
        }

        public String getBucketKey() {
            return bucketKey;
        }

        public String getSlotKey() {
            return slotKey;
        }

        public Object getValue() {
            return value;
        }

        // equals & hashCode must be overridden, otherwise cached entries can never be reused
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            HashDbItemEntry that = (HashDbItemEntry) o;
            return Objects.equals(bucketKey, that.bucketKey) &&
                    Objects.equals(slotKey, that.slotKey) &&
                    Objects.equals(value, that.value);
        }

        @Override
        public int hashCode() {
            return Objects.hash(bucketKey, slotKey, value);
        }

        @Override
        public String toString() {
            return "HashDbItemEntry{" +
                    "bucketKey='" + bucketKey + '\'' +
                    ", slotKey='" + slotKey + '\'' +
                    ", value=" + value +
                    '}';
        }
    }
}

The example above shows two caches: a simple string -> string cache, and a (string, string) -> byte[] cache. Either way, the point is that the cache key and value can take more than one shape.

  Let's walk through the simple string -> string case.

stringDbCacheContainer = CacheBuilder.newBuilder()
        .expireAfterWrite(guavaCacheTimeout, TimeUnit.SECONDS)
        .maximumSize(maxCacheSize)
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                log.info("[cache] loading value from redis: {}", key);
                String value = redisTemplate.get(key);
                return StringUtils.defaultIfBlank(value, EMPTY_VALUE_STRING);
            }
        });

As shown above, we created a cache container with a maximum capacity of maxCacheSize; each key expires guavaCacheTimeout seconds after it is written, and once a key is missing or expired its value is loaded from redisTemplate.

  With that, a complete two-level cache component is done, and you can use it in your project as-is. Simple, isn't it? A short usage sketch follows.
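As a usage sketch only: the service below, with made-up names and an assumed Spring context, shows how reads go through the local guava cache first while writes go straight through to redis, accepting up to guavaCacheTimeout seconds of staleness on each node:

@Service
public class ProductQueryService {

    @Resource
    private LocalEnhancedCacheHolder cacheHolder;

    /**
     * Reads hit the local guava cache first; only a local miss or an expired
     * entry falls through to redis via the CacheLoader shown above.
     */
    public String getProductJson(String productId) {
        String cached = cacheHolder.getCache("product:" + productId);
        // the loader stores "" when redis has no value, so map that back to null
        return (cached == null || cached.isEmpty()) ? null : cached;
    }

    /**
     * Writes go straight to redis; the local copy on each node simply expires
     * within guavaCacheTimeout seconds, which is the inconsistency window we accept.
     */
    public void saveProductJson(String productId, String json) {
        cacheHolder.putCache("product:" + productId, json, 3600L);
    }
}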

 

Digging deeper: how does guava's two-level caching actually work?

  1. How is a CacheBuilder created?

@GwtCompatible(emulated = true)
public final class CacheBuilder<K, V> {
  private static final int DEFAULT_INITIAL_CAPACITY = 16;
  private static final int DEFAULT_CONCURRENCY_LEVEL = 4;
  private static final int DEFAULT_EXPIRATION_NANOS = 0;
  private static final int DEFAULT_REFRESH_NANOS = 0;

  static final Supplier<? extends StatsCounter> NULL_STATS_COUNTER = Suppliers.ofInstance(
      new StatsCounter() {
        @Override
        public void recordHits(int count) {}

        @Override
        public void recordMisses(int count) {}

        @Override
        public void recordLoadSuccess(long loadTime) {}

        @Override
        public void recordLoadException(long loadTime) {}

        @Override
        public void recordEviction() {}

        @Override
        public CacheStats snapshot() {
          return EMPTY_STATS;
        }
      });
  static final CacheStats EMPTY_STATS = new CacheStats(0, 0, 0, 0, 0, 0);

  static final Supplier<StatsCounter> CACHE_STATS_COUNTER =
      new Supplier<StatsCounter>() {
        @Override
        public StatsCounter get() {
          return new SimpleStatsCounter();
        }
      };

  enum NullListener implements RemovalListener<Object, Object> {
    INSTANCE;

    @Override
    public void onRemoval(RemovalNotification<Object, Object> notification) {}
  }

  enum OneWeigher implements Weigher<Object, Object> {
    INSTANCE;

    @Override
    public int weigh(Object key, Object value) {
      return 1;
    }
  }

  static final Ticker NULL_TICKER = new Ticker() {
    @Override
    public long read() {
      return 0;
    }
  };

  private static final Logger logger = Logger.getLogger(CacheBuilder.class.getName());

  static final int UNSET_INT = -1;

  boolean strictParsing = true;

  int initialCapacity = UNSET_INT;
  int concurrencyLevel = UNSET_INT;
  long maximumSize = UNSET_INT;
  long maximumWeight = UNSET_INT;
  Weigher<? super K, ? super V> weigher;

  Strength keyStrength;
  Strength valueStrength;

  long expireAfterWriteNanos = UNSET_INT;
  long expireAfterAccessNanos = UNSET_INT;
  long refreshNanos = UNSET_INT;

  Equivalence<Object> keyEquivalence;
  Equivalence<Object> valueEquivalence;

  RemovalListener<? super K, ? super V> removalListener;
  Ticker ticker;

  Supplier<? extends StatsCounter> statsCounterSupplier = NULL_STATS_COUNTER;

  // TODO(fry): make constructor private and update tests to use newBuilder
  CacheBuilder() {}

  /**
   * Constructs a new {@code CacheBuilder} instance with default settings, including strong keys,
   * strong values, and no automatic eviction of any kind.
   */
  public static CacheBuilder<Object, Object> newBuilder() {
    return new CacheBuilder<Object, Object>();
  }

  /**
   * Sets the minimum total size for the internal hash tables. For example, if the initial capacity
   * is {@code 60}, and the concurrency level is {@code 8}, then eight segments are created, each
   * having a hash table of size eight. Providing a large enough estimate at construction time
   * avoids the need for expensive resizing operations later, but setting this value unnecessarily
   * high wastes memory.
   *
   * @throws IllegalArgumentException if {@code initialCapacity} is negative
   * @throws IllegalStateException if an initial capacity was already set
   */
  public CacheBuilder<K, V> initialCapacity(int initialCapacity) {
    checkState(this.initialCapacity == UNSET_INT, "initial capacity was already set to %s",
        this.initialCapacity);
    checkArgument(initialCapacity >= 0);
    this.initialCapacity = initialCapacity;
    return this;
  }

  int getInitialCapacity() {
    return (initialCapacity == UNSET_INT) ? DEFAULT_INITIAL_CAPACITY : initialCapacity;
  }

  /**
   * Guides the allowed concurrency among update operations. Used as a hint for internal sizing. The
   * table is internally partitioned to try to permit the indicated number of concurrent updates
   * without contention. Because assignment of entries to these partitions is not necessarily
   * uniform, the actual concurrency observed may vary. Ideally, you should choose a value to
   * accommodate as many threads as will ever concurrently modify the table. Using a significantly
   * higher value than you need can waste space and time, and a significantly lower value can lead
   * to thread contention. But overestimates and underestimates within an order of magnitude do not
   * usually have much noticeable impact. A value of one permits only one thread to modify the cache
   * at a time, but since read operations and cache loading computations can proceed concurrently,
   * this still yields higher concurrency than full synchronization.
   *
   * <p> Defaults to 4. <b>Note:</b>The default may change in the future. If you care about this
   * value, you should always choose it explicitly.
   *
   * <p>The current implementation uses the concurrency level to create a fixed number of hashtable
   * segments, each governed by its own write lock. The segment lock is taken once for each explicit
   * write, and twice for each cache loading computation (once prior to loading the new value,
   * and once after loading completes). Much internal cache management is performed at the segment
   * granularity. For example, access queues and write queues are kept per segment when they are
   * required by the selected eviction algorithm. As such, when writing unit tests it is not
   * uncommon to specify {@code concurrencyLevel(1)} in order to achieve more deterministic eviction
   * behavior.
   *
   * <p>Note that future implementations may abandon segment locking in favor of more advanced
   * concurrency controls.
   *
   * @throws IllegalArgumentException if {@code concurrencyLevel} is nonpositive
   * @throws IllegalStateException if a concurrency level was already set
   */
  public CacheBuilder<K, V> concurrencyLevel(int concurrencyLevel) {
    checkState(this.concurrencyLevel == UNSET_INT, "concurrency level was already set to %s",
        this.concurrencyLevel);
    checkArgument(concurrencyLevel > 0);
    this.concurrencyLevel = concurrencyLevel;
    return this;
  }

  int getConcurrencyLevel() {
    return (concurrencyLevel == UNSET_INT) ? DEFAULT_CONCURRENCY_LEVEL : concurrencyLevel;
  }

  /**
   * Specifies the maximum number of entries the cache may contain. Note that the cache <b>may evict
   * an entry before this limit is exceeded</b>. As the cache size grows close to the maximum, the
   * cache evicts entries that are less likely to be used again. For example, the cache may evict an
   * entry because it hasn't been used recently or very often.
   *
   * <p>When {@code size} is zero, elements will be evicted immediately after being loaded into the
   * cache. This can be useful in testing, or to disable caching temporarily without a code change.
   *
   * <p>This feature cannot be used in conjunction with {@link #maximumWeight}.
   *
   * @param size the maximum size of the cache
   * @throws IllegalArgumentException if {@code size} is negative
   * @throws IllegalStateException if a maximum size or weight was already set
   */
  public CacheBuilder<K, V> maximumSize(long size) {
    checkState(this.maximumSize == UNSET_INT, "maximum size was already set to %s",
        this.maximumSize);
    checkState(this.maximumWeight == UNSET_INT, "maximum weight was already set to %s",
        this.maximumWeight);
    checkState(this.weigher == null, "maximum size can not be combined with weigher");
    checkArgument(size >= 0, "maximum size must not be negative");
    this.maximumSize = size;
    return this;
  }

  /**
   * Specifies the maximum weight of entries the cache may contain. Weight is determined using the
   * {@link Weigher} specified with {@link #weigher}, and use of this method requires a
   * corresponding call to {@link #weigher} prior to calling {@link #build}.
   *
   * <p>Note that the cache <b>may evict an entry before this limit is exceeded</b>. As the cache
   * size grows close to the maximum, the cache evicts entries that are less likely to be used
   * again. For example, the cache may evict an entry because it hasn't been used recently or very
   * often.
   *
   * <p>When {@code weight} is zero, elements will be evicted immediately after being loaded into
   * cache. This can be useful in testing, or to disable caching temporarily without a code
   * change.
   *
   * <p>Note that weight is only used to determine whether the cache is over capacity; it has no
   * effect on selecting which entry should be evicted next.
   *
   * <p>This feature cannot be used in conjunction with {@link #maximumSize}.
   *
   * @param weight the maximum total weight of entries the cache may contain
   * @throws IllegalArgumentException if {@code weight} is negative
   * @throws IllegalStateException if a maximum weight or size was already set
   * @since 11.0
   */
  @GwtIncompatible("To be supported")
  public CacheBuilder<K, V> maximumWeight(long weight) {
    checkState(this.maximumWeight == UNSET_INT, "maximum weight was already set to %s",
        this.maximumWeight);
    checkState(this.maximumSize == UNSET_INT, "maximum size was already set to %s",
        this.maximumSize);
    this.maximumWeight = weight;
    checkArgument(weight >= 0, "maximum weight must not be negative");
    return this;
  }

  /**
   * Specifies the weigher to use in determining the weight of entries. Entry weight is taken
   * into consideration by {@link #maximumWeight(long)} when determining which entries to evict, and
   * use of this method requires a corresponding call to {@link #maximumWeight(long)} prior to
   * calling {@link #build}. Weights are measured and recorded when entries are inserted into the
   * cache, and are thus effectively static during the lifetime of a cache entry.
   *
   * <p>When the weight of an entry is zero it will not be considered for size-based eviction
   * (though it still may be evicted by other means).
   *
   * <p><b>Important note:</b> Instead of returning <em>this</em> as a {@code CacheBuilder}
   * instance, this method returns {@code CacheBuilder<K1, V1>}. From this point on, either the
   * original reference or the returned reference may be used to complete configuration and build
   * the cache, but only the "generic" one is type-safe. That is, it will properly prevent you from
   * building caches whose key or value types are incompatible with the types accepted by the
   * weigher already provided; the {@code CacheBuilder} type cannot do this. For best results,
   * simply use the standard method-chaining idiom, as illustrated in the documentation at top,
   * configuring a {@code CacheBuilder} and building your {@link Cache} all in a single statement.
   *
   * <p><b>Warning:</b> if you ignore the above advice, and use this {@code CacheBuilder} to build
   * a cache whose key or value type is incompatible with the weigher, you will likely experience
   * a {@link ClassCastException} at some <i>undefined</i> point in the future.
   *
   * @param weigher the weigher to use in calculating the weight of cache entries
   * @throws IllegalArgumentException if {@code size} is negative
   * @throws IllegalStateException if a maximum size was already set
   * @since 11.0
   */
  @GwtIncompatible("To be supported")
  public <K1 extends K, V1 extends V> CacheBuilder<K1, V1> weigher(
      Weigher<? super K1, ? super V1> weigher) {
    checkState(this.weigher == null);
    if (strictParsing) {
      checkState(this.maximumSize == UNSET_INT, "weigher can not be combined with maximum size",
          this.maximumSize);
    }

    // safely limiting the kinds of caches this can produce
    @SuppressWarnings("unchecked")
    CacheBuilder<K1, V1> me = (CacheBuilder<K1, V1>) this;
    me.weigher = checkNotNull(weigher);
    return me;
  }

  // Make a safe contravariant cast now so we don't have to do it over and over.
  @SuppressWarnings("unchecked")
  <K1 extends K, V1 extends V> Weigher<K1, V1> getWeigher() {
    return (Weigher<K1, V1>) MoreObjects.firstNonNull(weigher, OneWeigher.INSTANCE);
  }

  /**
   * Specifies that each entry should be automatically removed from the cache once a fixed duration
   * has elapsed after the entry's creation, or the most recent replacement of its value.
   *
   * <p>When {@code duration} is zero, this method hands off to
   * {@link #maximumSize(long) maximumSize}{@code (0)}, ignoring any otherwise-specificed maximum
   * size or weight. This can be useful in testing, or to disable caching temporarily without a code
   * change.
   *
   * <p>Expired entries may be counted in {@link Cache#size}, but will never be visible to read or
   * write operations. Expired entries are cleaned up as part of the routine maintenance described
   * in the class javadoc.
   *
   * @param duration the length of time after an entry is created that it should be automatically
   *     removed
   * @param unit the unit that {@code duration} is expressed in
   * @throws IllegalArgumentException if {@code duration} is negative
   * @throws IllegalStateException if the time to live or time to idle was already set
   */
  public CacheBuilder<K, V> expireAfterWrite(long duration, TimeUnit unit) {
    checkState(expireAfterWriteNanos == UNSET_INT, "expireAfterWrite was already set to %s ns",
        expireAfterWriteNanos);
    checkArgument(duration >= 0, "duration cannot be negative: %s %s", duration, unit);
    this.expireAfterWriteNanos = unit.toNanos(duration);
    return this;
  }

  long getExpireAfterWriteNanos() {
    return (expireAfterWriteNanos == UNSET_INT) ? DEFAULT_EXPIRATION_NANOS : expireAfterWriteNanos;
  }

  /**
   * Specifies that each entry should be automatically removed from the cache once a fixed duration
   * has elapsed after the entry's creation, the most recent replacement of its value, or its last
   * access. Access time is reset by all cache read and write operations (including
   * {@code Cache.asMap().get(Object)} and {@code Cache.asMap().put(K, V)}), but not by operations
   * on the collection-views of {@link Cache#asMap}.
   *
   * <p>When {@code duration} is zero, this method hands off to
   * {@link #maximumSize(long) maximumSize}{@code (0)}, ignoring any otherwise-specificed maximum
   * size or weight. This can be useful in testing, or to disable caching temporarily without a code
   * change.
   *
   * <p>Expired entries may be counted in {@link Cache#size}, but will never be visible to read or
   * write operations. Expired entries are cleaned up as part of the routine maintenance described
   * in the class javadoc.
   *
   * @param duration the length of time after an entry is last accessed that it should be
   *     automatically removed
   * @param unit the unit that {@code duration} is expressed in
   * @throws IllegalArgumentException if {@code duration} is negative
   * @throws IllegalStateException if the time to idle or time to live was already set
   */
  public CacheBuilder<K, V> expireAfterAccess(long duration, TimeUnit unit) {
    checkState(expireAfterAccessNanos == UNSET_INT, "expireAfterAccess was already set to %s ns",
        expireAfterAccessNanos);
    checkArgument(duration >= 0, "duration cannot be negative: %s %s", duration, unit);
    this.expireAfterAccessNanos = unit.toNanos(duration);
    return this;
  }

  long getExpireAfterAccessNanos() {
    return (expireAfterAccessNanos == UNSET_INT)
        ? DEFAULT_EXPIRATION_NANOS : expireAfterAccessNanos;
  }

  /**
   * Specifies that active entries are eligible for automatic refresh once a fixed duration has
   * elapsed after the entry's creation, or the most recent replacement of its value. The semantics
   * of refreshes are specified in {@link LoadingCache#refresh}, and are performed by calling
   * {@link CacheLoader#reload}.
   *
   * <p>As the default implementation of {@link CacheLoader#reload} is synchronous, it is
   * recommended that users of this method override {@link CacheLoader#reload} with an asynchronous
   * implementation; otherwise refreshes will be performed during unrelated cache read and write
   * operations.
   *
   * <p>Currently automatic refreshes are performed when the first stale request for an entry
   * occurs. The request triggering refresh will make a blocking call to {@link CacheLoader#reload}
   * and immediately return the new value if the returned future is complete, and the old value
   * otherwise.
   *
   * <p><b>Note:</b> <i>all exceptions thrown during refresh will be logged and then swallowed</i>.
   *
   * @param duration the length of time after an entry is created that it should be considered
   *     stale, and thus eligible for refresh
   * @param unit the unit that {@code duration} is expressed in
   * @throws IllegalArgumentException if {@code duration} is negative
   * @throws IllegalStateException if the refresh interval was already set
   * @since 11.0
   */
  @Beta
  @GwtIncompatible("To be supported (synchronously).")
  public CacheBuilder<K, V> refreshAfterWrite(long duration, TimeUnit unit) {
    checkNotNull(unit);
    checkState(refreshNanos == UNSET_INT, "refresh was already set to %s ns", refreshNanos);
    checkArgument(duration > 0, "duration must be positive: %s %s", duration, unit);
    this.refreshNanos = unit.toNanos(duration);
    return this;
  }

  long getRefreshNanos() {
    return (refreshNanos == UNSET_INT) ? DEFAULT_REFRESH_NANOS : refreshNanos;
  }

  /**
   * Specifies a nanosecond-precision time source for use in determining when entries should be
   * expired. By default, {@link System#nanoTime} is used.
   *
   * <p>The primary intent of this method is to facilitate testing of caches which have been
   * configured with {@link #expireAfterWrite} or {@link #expireAfterAccess}.
   *
   * @throws IllegalStateException if a ticker was already set
   */
  public CacheBuilder<K, V> ticker(Ticker ticker) {
    checkState(this.ticker == null);
    this.ticker = checkNotNull(ticker);
    return this;
  }

  Ticker getTicker(boolean recordsTime) {
    if (ticker != null) {
      return ticker;
    }
    return recordsTime ? Ticker.systemTicker() : NULL_TICKER;
  }

  /**
   * Specifies a listener instance that caches should notify each time an entry is removed for any
   * {@linkplain RemovalCause reason}. Each cache created by this builder will invoke this listener
   * as part of the routine maintenance described in the class documentation above.
   *
   * <p><b>Warning:</b> after invoking this method, do not continue to use <i>this</i> cache
   * builder reference; instead use the reference this method <i>returns</i>. At runtime, these
   * point to the same instance, but only the returned reference has the correct generic type
   * information so as to ensure type safety. For best results, use the standard method-chaining
   * idiom illustrated in the class documentation above, configuring a builder and building your
   * cache in a single statement. Failure to heed this advice can result in a {@link
   * ClassCastException} being thrown by a cache operation at some <i>undefined</i> point in the
   * future.
   *
   * <p><b>Warning:</b> any exception thrown by {@code listener} will <i>not</i> be propagated to
   * the {@code Cache} user, only logged via a {@link Logger}.
   *
   * @return the cache builder reference that should be used instead of {@code this} for any
   *     remaining configuration and cache building
   * @throws IllegalStateException if a removal listener was already set
   */
  @CheckReturnValue
  public <K1 extends K, V1 extends V> CacheBuilder<K1, V1> removalListener(
      RemovalListener<? super K1, ? super V1> listener) {
    checkState(this.removalListener == null);

    // safely limiting the kinds of caches this can produce
    @SuppressWarnings("unchecked")
    CacheBuilder<K1, V1> me = (CacheBuilder<K1, V1>) this;
    me.removalListener = checkNotNull(listener);
    return me;
  }

  // Make a safe contravariant cast now so we don't have to do it over and over.
  @SuppressWarnings("unchecked")
  <K1 extends K, V1 extends V> RemovalListener<K1, V1> getRemovalListener() {
    return (RemovalListener<K1, V1>)
        MoreObjects.firstNonNull(removalListener, NullListener.INSTANCE);
  }

  /**
   * Enable the accumulation of {@link CacheStats} during the operation of the cache. Without this
   * {@link Cache#stats} will return zero for all statistics. Note that recording stats requires
   * bookkeeping to be performed with each operation, and thus imposes a performance penalty on
   * cache operation.
   *
   * @since 12.0 (previously, stats collection was automatic)
   */
  public CacheBuilder<K, V> recordStats() {
    statsCounterSupplier = CACHE_STATS_COUNTER;
    return this;
  }

  boolean isRecordingStats() {
    return statsCounterSupplier == CACHE_STATS_COUNTER;
  }

  Supplier<? extends StatsCounter> getStatsCounterSupplier() {
    return statsCounterSupplier;
  }

  /**
   * Builds a cache, which either returns an already-loaded value for a given key or atomically
   * computes or retrieves it using the supplied {@code CacheLoader}. If another thread is currently
   * loading the value for this key, simply waits for that thread to finish and returns its
   * loaded value. Note that multiple threads can concurrently load values for distinct keys.
   *
   * <p>This method does not alter the state of this {@code CacheBuilder} instance, so it can be
   * invoked again to create multiple independent caches.
   *
   * @param loader the cache loader used to obtain new values
   * @return a cache having the requested features
   */
  public <K1 extends K, V1 extends V> LoadingCache<K1, V1> build(
      CacheLoader<? super K1, V1> loader) {
    checkWeightWithWeigher();
    return new LocalCache.LocalLoadingCache<K1, V1>(this, loader);
  }

  /**
   * Builds a cache which does not automatically load values when keys are requested.
   *
   * <p>Consider {@link #build(CacheLoader)} instead, if it is feasible to implement a
   * {@code CacheLoader}.
   *
   * <p>This method does not alter the state of this {@code CacheBuilder} instance, so it can be
   * invoked again to create multiple independent caches.
   *
   * @return a cache having the requested features
   * @since 11.0
   */
  public <K1 extends K, V1 extends V> Cache<K1, V1> build() {
    checkWeightWithWeigher();
    checkNonLoadingCache();
    return new LocalCache.LocalManualCache<K1, V1>(this);
  }
}

As shown above, the builder pattern is used to create the LoadingCache<K, V>, with parameters such as the maximum size and the expiration time. A small sketch of these builder knobs follows.
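To make the knobs above concrete, here is a small self-contained sketch; the values are arbitrary and the loader body is just a stand-in for the redis lookup used earlier in this article:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import java.util.concurrent.TimeUnit;

public class BuilderDemo {
    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(4)                       // number of internal segments (lock striping)
                .initialCapacity(16)
                .maximumSize(1000)                         // size-based eviction
                .expireAfterWrite(60, TimeUnit.SECONDS)    // time-based expiration
                .recordStats()                             // enable hit/miss counters
                .removalListener((RemovalListener<String, String>) n ->
                        System.out.println("removed " + n.getKey() + " because " + n.getCause()))
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // in this article's setup, this is where the redis lookup would go
                        return "value-of-" + key;
                    }
                });

        System.out.println(cache.get("k1"));   // miss -> load() -> cached
        System.out.println(cache.get("k1"));   // hit, served locally
        System.out.println(cache.stats());     // hitCount=1, missCount=1, ...
    }
}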

  2. How do we read a value from the guava cache?

  It is really just a get call: stringDbCacheContainer.get(key);

// com.google.common.cache.LocalCache
// LoadingCache methods
@Override
public V get(K key) throws ExecutionException {
    // Two possible sources: the value already in the cache, or a call to load() to fetch it
    return localCache.getOrLoad(key);
}

// com.google.common.cache.LocalCache
V getOrLoad(K key) throws ExecutionException {
    return get(key, defaultLoader);
}

V get(K key, CacheLoader<? super K, V> loader) throws ExecutionException {
    int hash = hash(checkNotNull(key));
    // Remember ConcurrentHashMap? First locate the segment, then the entry
    return segmentFor(hash).get(key, hash, loader);
}

Segment<K, V> segmentFor(int hash) {
    // TODO(fry): Lazily create segments?
    return segments[(hash >>> segmentShift) & segmentMask];
}

// The core read logic lives in this get
// loading
V get(K key, int hash, CacheLoader<? super K, V> loader) throws ExecutionException {
    checkNotNull(key);
    checkNotNull(loader);
    try {
        if (count != 0) { // read-volatile
            // don't call getLiveEntry, which would ignore loading values
            ReferenceEntry<K, V> e = getEntry(key, hash);
            if (e != null) {
                // If an entry exists, use the ticker to decide whether it has expired and,
                // if not, return it directly; the expiration logic itself is covered later
                long now = map.ticker.read();
                V value = getLiveValue(e, now);
                if (value != null) {
                    recordRead(e, now);
                    statsCounter.recordHits(1);
                    return scheduleRefresh(e, key, hash, value, now, loader);
                }
                ValueReference<K, V> valueReference = e.getValueReference();
                if (valueReference.isLoading()) {
                    return waitForLoadingValue(e, key, valueReference);
                }
            }
        }
        // On a first load, or after expiration, enter the loading path (important)
        // at this point e is either null or expired;
        return lockedGetOrLoad(key, hash, loader);
    } catch (ExecutionException ee) {
        Throwable cause = ee.getCause();
        if (cause instanceof Error) {
            throw new ExecutionError((Error) cause);
        } else if (cause instanceof RuntimeException) {
            throw new UncheckedExecutionException(cause);
        }
        throw ee;
    } finally {
        postReadCleanup();
    }
}

// static class Segment<K, V> extends ReentrantLock
// Segment extends ReentrantLock, so LocalCache's locking is built on ReentrantLock
V lockedGetOrLoad(K key, int hash, CacheLoader<? super K, V> loader)
        throws ExecutionException {
    ReferenceEntry<K, V> e;
    ValueReference<K, V> valueReference = null;
    LoadingValueReference<K, V> loadingValueReference = null;
    boolean createNewEntry = true;
    lock();
    try {
        // re-read ticker once inside the lock
        long now = map.ticker.read();
        // Before writing a new value, purge expired entries first
        preWriteCleanup(now);
        int newCount = this.count - 1;
        AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
        int index = hash & (table.length() - 1);
        ReferenceEntry<K, V> first = table.get(index);
        // Walk the bucket's linked list in case of hash collisions
        for (e = first; e != null; e = e.getNext()) {
            K entryKey = e.getKey();
            if (e.getHash() == hash && entryKey != null
                    && map.keyEquivalence.equivalent(key, entryKey)) {
                valueReference = e.getValueReference();
                if (valueReference.isLoading()) {
                    createNewEntry = false;
                } else {
                    V value = valueReference.get();
                    if (value == null) {
                        enqueueNotification(entryKey, hash, valueReference, RemovalCause.COLLECTED);
                    } else if (map.isExpired(e, now)) {
                        // This is a duplicate check, as preWriteCleanup already purged expired
                        // entries, but let's accomodate an incorrect expiration queue.
                        enqueueNotification(entryKey, hash, valueReference, RemovalCause.EXPIRED);
                    } else {
                        recordLockedRead(e, now);
                        statsCounter.recordHits(1);
                        // we were concurrent with loading; don't consider refresh
                        return value;
                    }
                    // immediately reuse invalid entries
                    writeQueue.remove(e);
                    accessQueue.remove(e);
                    this.count = newCount; // write-volatile
                }
                break;
            }
        }
        // On first load, create the Entry first, then enter the load() path
        if (createNewEntry) {
            loadingValueReference = new LoadingValueReference<K, V>();
            if (e == null) {
                e = newEntry(key, hash, first);
                e.setValueReference(loadingValueReference);
                table.set(index, e);
            } else {
                e.setValueReference(loadingValueReference);
            }
        }
    } finally {
        unlock();
        postWriteCleanup();
    }
    if (createNewEntry) {
        try {
            // Synchronizes on the entry to allow failing fast when a recursive load is
            // detected. This may be circumvented when an entry is copied, but will fail fast most
            // of the time.
            // Load the source value synchronously via the loader
            synchronized (e) {
                return loadSync(key, hash, loadingValueReference, loader);
            }
        } finally {
            // Record the miss; the default counter is a no-op
            statsCounter.recordMisses(1);
        }
    } else {
        // The entry already exists. Wait for loading.
        // If another thread is already loading this key, just wait for its result;
        // the implementation boils down to a Future.get()
        return waitForLoadingValue(e, key, valueReference);
    }
}

// Load the source value
// at most one of loadSync.loadAsync may be called for any given LoadingValueReference
V loadSync(K key, int hash, LoadingValueReference<K, V> loadingValueReference,
        CacheLoader<? super K, V> loader) throws ExecutionException {
    // loadingValueReference holds the callback reference used to load the source value
    ListenableFuture<V> loadingFuture = loadingValueReference.loadFuture(key, loader);
    // Store the result back into the cache so the next read can use it
    return getAndRecordStats(key, hash, loadingValueReference, loadingFuture);
}

// Load the value from the loader
public ListenableFuture<V> loadFuture(K key, CacheLoader<? super K, V> loader) {
    stopwatch.start();
    V previousValue = oldValue.get();
    try {
        // If there was no previous value, just load and return
        if (previousValue == null) {
            V newValue = loader.load(key);
            return set(newValue) ? futureValue : Futures.immediateFuture(newValue);
        }
        // Otherwise this is typically a reload of an entry that has not expired;
        // if reload() returns null, return immediately.
        // reload() must be overridden for an asynchronous implementation.
        ListenableFuture<V> newValue = loader.reload(key, previousValue);
        if (newValue == null) {
            return Futures.immediateFuture(null);
        }
        // To avoid a race, make sure the refreshed value is set into loadingValueReference
        // *before* returning newValue from the cache query.
        return Futures.transform(newValue, new Function<V, V>() {
            @Override
            public V apply(V newValue) {
                LoadingValueReference.this.set(newValue);
                return newValue;
            }
        });
    } catch (Throwable t) {
        if (t instanceof InterruptedException) {
            Thread.currentThread().interrupt();
        }
        return setException(t) ? futureValue : fullyFailedFuture(t);
    }
}

// com.google.common.util.concurrent.Uninterruptibles
/**
 * Waits uninterruptibly for {@code newValue} to be loaded, and then records loading stats.
 */
V getAndRecordStats(K key, int hash, LoadingValueReference<K, V> loadingValueReference,
        ListenableFuture<V> newValue) throws ExecutionException {
    V value = null;
    try {
        // Wait synchronously for the load result; note the value must not be null,
        // otherwise an exception is thrown (presumably to guard against cache-penetration issues)
        value = getUninterruptibly(newValue);
        if (value == null) {
            throw new InvalidCacheLoadException("CacheLoader returned null for key " + key + ".");
        }
        // Record the successful load; this is an extension point, a no-op by default
        statsCounter.recordLoadSuccess(loadingValueReference.elapsedNanos());
        // Finally store the value into the cache container and return it
        // (this is where hashing pays off)
        storeLoadedValue(key, hash, loadingValueReference, value);
        return value;
    } finally {
        if (value == null) {
            statsCounter.recordLoadException(loadingValueReference.elapsedNanos());
            removeLoadingValue(key, hash, loadingValueReference);
        }
    }
}

/**
 * Invokes {@code future.}{@link Future#get() get()} uninterruptibly.
 * To get uninterruptibility and remove checked exceptions, see
 * {@link Futures#getUnchecked}.
 *
 * <p>If instead, you wish to treat {@link InterruptedException} uniformly
 * with other exceptions, see {@link Futures#get(Future, Class) Futures.get}
 * or {@link Futures#makeChecked}.
 *
 * @throws ExecutionException if the computation threw an exception
 * @throws CancellationException if the computation was cancelled
 */
public static <V> V getUninterruptibly(Future<V> future)
        throws ExecutionException {
    boolean interrupted = false;
    try {
        while (true) {
            try {
                return future.get();
            } catch (InterruptedException e) {
                interrupted = true;
            }
        }
    } finally {
        if (interrupted) {
            Thread.currentThread().interrupt();
        }
    }
}

That is the whole path for reading a cached value. To sum up:

  1. Hash the key to locate a segment, then try to read the value straight from that segment's table;
  2. If nothing is found, or the entry has expired, call the client-supplied load() method to fetch the source data;
  3. Store the result back into the segment's table, so the local cache takes effect;
  4. Record the hit statistics and the read count;

3. How is expiration handled?

  We already caught a glimpse of this while reading get().

  Two things to confirm: 1. Is there an asynchronous background thread that cleans up expired data? 2. While cleanup runs, what happens to the existing data?

  In fact, guava does its cleanup right before loading data, not in a background thread.

// com.google.common.cache.LocalCache
// static class Segment<K, V> extends ReentrantLock
// Segment extends ReentrantLock, so LocalCache's locking is built on ReentrantLock
V lockedGetOrLoad(K key, int hash, CacheLoader<? super K, V> loader)
        throws ExecutionException {
    ReferenceEntry<K, V> e;
    ValueReference<K, V> valueReference = null;
    LoadingValueReference<K, V> loadingValueReference = null;
    boolean createNewEntry = true;
    lock();
    try {
        // re-read ticker once inside the lock
        long now = map.ticker.read();
        // Before writing a new value, purge expired entries first
        preWriteCleanup(now);
        int newCount = this.count - 1;
        AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
        int index = hash & (table.length() - 1);
        ReferenceEntry<K, V> first = table.get(index);
        // Walk the bucket's linked list in case of hash collisions
        for (e = first; e != null; e = e.getNext()) {
            K entryKey = e.getKey();
            if (e.getHash() == hash && entryKey != null
                    && map.keyEquivalence.equivalent(key, entryKey)) {
                valueReference = e.getValueReference();
                if (valueReference.isLoading()) {
                    createNewEntry = false;
                } else {
                    V value = valueReference.get();
                    if (value == null) {
                        enqueueNotification(entryKey, hash, valueReference, RemovalCause.COLLECTED);
                    } else if (map.isExpired(e, now)) {
                        // This is a duplicate check, as preWriteCleanup already purged expired
                        // entries, but let's accomodate an incorrect expiration queue.
                        enqueueNotification(entryKey, hash, valueReference, RemovalCause.EXPIRED);
                    } else {
                        recordLockedRead(e, now);
                        statsCounter.recordHits(1);
                        // we were concurrent with loading; don't consider refresh
                        return value;
                    }
                    // immediately reuse invalid entries
                    writeQueue.remove(e);
                    accessQueue.remove(e);
                    this.count = newCount; // write-volatile
                }
                break;
            }
        }
        // On first load, create the Entry first, then enter the load() path
        if (createNewEntry) {
            loadingValueReference = new LoadingValueReference<K, V>();
            if (e == null) {
                e = newEntry(key, hash, first);
                e.setValueReference(loadingValueReference);
                table.set(index, e);
            } else {
                e.setValueReference(loadingValueReference);
            }
        }
    } finally {
        unlock();
        postWriteCleanup();
    }
    if (createNewEntry) {
        try {
            // Synchronizes on the entry to allow failing fast when a recursive load is
            // detected. This may be circumvented when an entry is copied, but will fail fast most
            // of the time.
            // Load the source value synchronously via the loader
            synchronized (e) {
                return loadSync(key, hash, loadingValueReference, loader);
            }
        } finally {
            // Record the miss; the default counter is a no-op
            statsCounter.recordMisses(1);
        }
    } else {
        // The entry already exists. Wait for loading.
        return waitForLoadingValue(e, key, valueReference);
    }
}

// Now let's look closely at how preWriteCleanup(now) purges expired data
/**
 * Performs routine cleanup prior to executing a write. This should be called every time a
 * write thread acquires the segment lock, immediately after acquiring the lock.
 *
 * <p>Post-condition: expireEntries has been run.
 */
@GuardedBy("this")
void preWriteCleanup(long now) {
    runLockedCleanup(now);
}

void runLockedCleanup(long now) {
    // Make sure the lock is held while cleaning up
    if (tryLock()) {
        try {
            // If weak/soft reference types are in use, drain their queues first
            drainReferenceQueues();
            // Purge expired data, based on time
            expireEntries(now); // calls drainRecencyQueue
            // Reset the read counter
            readCount.set(0);
        } finally {
            unlock();
        }
    }
}

/**
 * Drain the key and value reference queues, cleaning up internal entries containing garbage
 * collected keys or values.
 */
@GuardedBy("this")
void drainReferenceQueues() {
    if (map.usesKeyReferences()) {
        drainKeyReferenceQueue();
    }
    if (map.usesValueReferences()) {
        drainValueReferenceQueue();
    }
}

@GuardedBy("this")
void expireEntries(long now) {
    // Bring the recency (recent-access) queue up to date
    drainRecencyQueue();
    ReferenceEntry<K, V> e;
    // Take elements from the head of each queue and remove them if they have expired
    // write queue timed out: remove; access queue timed out: remove
    while ((e = writeQueue.peek()) != null && map.isExpired(e, now)) {
        if (!removeEntry(e, e.getHash(), RemovalCause.EXPIRED)) {
            throw new AssertionError();
        }
    }
    while ((e = accessQueue.peek()) != null && map.isExpired(e, now)) {
        if (!removeEntry(e, e.getHash(), RemovalCause.EXPIRED)) {
            throw new AssertionError();
        }
    }
}

@Override
public ReferenceEntry<K, V> peek() {
    ReferenceEntry<K, V> next = head.getNextInAccessQueue();
    return (next == head) ? null : next;
}

// Remove an entry for the given cause, e.g. an expired entry
@GuardedBy("this")
boolean removeEntry(ReferenceEntry<K, V> entry, int hash, RemovalCause cause) {
    int newCount = this.count - 1;
    AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
    int index = hash & (table.length() - 1);
    ReferenceEntry<K, V> first = table.get(index);
    for (ReferenceEntry<K, V> e = first; e != null; e = e.getNext()) {
        if (e == entry) {
            ++modCount;
            // Delegate to removeValueFromChain to remove the concrete entry
            ReferenceEntry<K, V> newFirst = removeValueFromChain(
                    first, e, e.getKey(), hash, e.getValueReference(), cause);
            newCount = this.count - 1;
            table.set(index, newFirst);
            this.count = newCount; // write-volatile
            return true;
        }
    }
    return false;
}

@GuardedBy("this")
@Nullable
ReferenceEntry<K, V> removeValueFromChain(ReferenceEntry<K, V> first,
        ReferenceEntry<K, V> entry, @Nullable K key, int hash, ValueReference<K, V> valueReference,
        RemovalCause cause) {
    enqueueNotification(key, hash, valueReference, cause);
    // Remove the entry from both queues
    writeQueue.remove(entry);
    accessQueue.remove(entry);
    if (valueReference.isLoading()) {
        valueReference.notifyNewValue(null);
        return first;
    } else {
        return removeEntryFromChain(first, entry);
    }
}

@GuardedBy("this")
@Nullable
ReferenceEntry<K, V> removeEntryFromChain(ReferenceEntry<K, V> first,
        ReferenceEntry<K, V> entry) {
    int newCount = count;
    // In the common case, simply return the chain starting at entry's next element;
    // when first != entry, the nodes before entry are copied one by one onto the new chain
    ReferenceEntry<K, V> newFirst = entry.getNext();
    for (ReferenceEntry<K, V> e = first; e != entry; e = e.getNext()) {
        // Copy each preceding node so that it points at newFirst, effectively reversing that part of the chain
        ReferenceEntry<K, V> next = copyEntry(e, newFirst);
        if (next != null) {
            newFirst = next;
        } else {
            removeCollectedEntry(e);
            newCount--;
        }
    }
    this.count = newCount;
    return newFirst;
}

At this point we have seen the complete expiration flow for a key. To sum up (a small demonstration follows the list):

  1. Cleanup is triggered as part of reads and writes, not by a dedicated background thread;
  2. Updates are made thread-safe with a ReentrantLock (one per segment);
  3. The read counter is reset and the entry count is decremented;
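To observe this lazy behaviour from the outside, here is a small sketch using CacheBuilder.ticker() (shown in the builder source above) with a manually advanced ticker; the demo class itself is made up, and the size() observation is only a "may", since expired entries are guaranteed to be invisible but not to be removed immediately:

import com.google.common.base.Ticker;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class LazyExpirationDemo {

    /** A ticker we can advance by hand, so the demo needs no real waiting. */
    static class ManualTicker extends Ticker {
        private long nanos = 0;

        @Override
        public long read() {
            return nanos;
        }

        void advanceSeconds(long seconds) {
            nanos += TimeUnit.SECONDS.toNanos(seconds);
        }
    }

    public static void main(String[] args) {
        ManualTicker ticker = new ManualTicker();
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(10, TimeUnit.SECONDS)
                .ticker(ticker)
                .build();

        cache.put("k", "v");
        ticker.advanceSeconds(11);                         // the entry is now logically expired

        // No background thread has removed it: size() may still count the expired entry.
        System.out.println("size before access: " + cache.size());

        // A read sees the entry as expired and returns null...
        System.out.println("get after expiry: " + cache.getIfPresent("k"));

        // ...and maintenance piggy-backed on accesses (or an explicit cleanUp) removes it.
        cache.cleanUp();
        System.out.println("size after cleanup: " + cache.size());
    }
}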

  4. How do we actively put a value into the cache?

  This works just like put on an ordinary map; a quick look is enough.

// com.google.common.cache.LocalCache$LocalManualCache
@Override
public void put(K key, V value) {
    localCache.put(key, value);
}

// com.google.common.cache.LocalCache
@Override
public V put(K key, V value) {
    checkNotNull(key);
    checkNotNull(value);
    int hash = hash(key);
    return segmentFor(hash).put(key, hash, value, false);
}

// com.google.common.cache.LocalCache$Segment
@Nullable
V put(K key, int hash, V value, boolean onlyIfAbsent) {
    lock();
    try {
        long now = map.ticker.read();
        preWriteCleanup(now);
        int newCount = this.count + 1;
        if (newCount > this.threshold) { // ensure capacity
            expand();
            newCount = this.count + 1;
        }
        AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
        int index = hash & (table.length() - 1);
        ReferenceEntry<K, V> first = table.get(index);
        // Look for an existing entry.
        for (ReferenceEntry<K, V> e = first; e != null; e = e.getNext()) {
            K entryKey = e.getKey();
            if (e.getHash() == hash && entryKey != null
                    && map.keyEquivalence.equivalent(key, entryKey)) {
                // We found an existing entry.
                ValueReference<K, V> valueReference = e.getValueReference();
                V entryValue = valueReference.get();
                if (entryValue == null) {
                    ++modCount;
                    if (valueReference.isActive()) {
                        enqueueNotification(key, hash, valueReference, RemovalCause.COLLECTED);
                        setValue(e, key, value, now);
                        newCount = this.count; // count remains unchanged
                    } else {
                        setValue(e, key, value, now);
                        newCount = this.count + 1;
                    }
                    this.count = newCount; // write-volatile
                    evictEntries();
                    return null;
                } else if (onlyIfAbsent) {
                    // Mimic
                    // "if (!map.containsKey(key)) ...
                    // else return map.get(key);
                    recordLockedRead(e, now);
                    return entryValue;
                } else {
                    // clobber existing entry, count remains unchanged
                    ++modCount;
                    enqueueNotification(key, hash, valueReference, RemovalCause.REPLACED);
                    setValue(e, key, value, now);
                    evictEntries();
                    return entryValue;
                }
            }
        }
        // Create a new entry.
        ++modCount;
        ReferenceEntry<K, V> newEntry = newEntry(key, hash, first);
        setValue(newEntry, key, value, now);
        table.set(index, newEntry);
        newCount = this.count + 1;
        this.count = newCount; // write-volatile
        evictEntries();
        return null;
    } finally {
        unlock();
        postWriteCleanup();
    }
}

And with that, the guava-based two-level cache is done. Nothing terribly mysterious after all!

The usual closing line: be grateful to those who torment you!

Copyright notice: this is an original article by yougewe, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when republishing.
Original link: https://www.cnblogs.com/yougewe/p/10892173.html