Cache Consistency via Facebook's Lease Mechanism: Integrating and Refactoring an Open-Source Implementation


1. Preface

I recently came across a few articles and open-source projects tackling the cache-consistency problem and dug into them. The usual fix used to be "delayed double delete", which is not reliable; this lease-based approach feels a lot more solid. I read the source of both implementations, and the overall idea is essentially the same; they only differ in how they integrate: one uses Jedis, the other Lettuce. Out of respect for both authors, here are the articles and projects the inspiration came from:
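For context, here is a minimal sketch of the "delayed double delete" pattern mentioned above, with an in-process map standing in for Redis (all names here are hypothetical, just to illustrate the shape of the pattern): evict the key, write the database, then evict again after a guessed delay.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of "delayed double delete" (hypothetical names; an in-process map
 * stands in for Redis). The second delete runs after a guessed delay, hoping
 * to catch any stale value that a concurrent reader repopulated between the
 * first delete and the DB write becoming visible.
 */
class DelayedDoubleDelete {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    static void update(String key, Runnable dbWrite, long delayMillis) {
        cache.remove(key);   // 1st delete: evict before writing
        dbWrite.run();       // persist the new value to the database
        // 2nd delete: if the delay is guessed wrong, a stale value survives
        scheduler.schedule(() -> cache.remove(key), delayMillis, TimeUnit.MILLISECONDS);
    }
}
```

The weakness is the hard-coded delay: if a slow read repopulates the cache after the second delete has already fired, the stale value sticks around until TTL expiry, which is exactly the gap the lease mechanism is designed to close.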

1.1 Links

https://juejin.cn/post/7447033901657096202
https://juejin.cn/post/7440021417506979866#heading-2
https://github.com/HAibiiin/system-design-codebase
https://gitee.com/sh_wangwanbao/surfing-facebook-cache

Having read both codebases, the underlying ideas are the same, but they are not equally convenient to use. system-design-codebase is over-engineered: it is built on Jedis and adds a "two-phase" mechanism that looks impressive but adds little value, making it awkward to use. surfing-facebook-cache is built on Lettuce and clearly took the Go open-source implementation as its reference (you can taste the Go style in its code); it beats system-design-codebase in both implementation and usability, which is what motivated integrating it.

1.2 The dtm-labs rockscache project

https://github.com/dtm-labs/rockscache
https://www.dtm.pub/app/cache.html
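Before the integration details, here is a tiny in-process sketch of the core idea behind the lease mechanism (all names are hypothetical; the real rockscache client keeps the value, lock owner, and lock expiry in a Redis hash and manipulates them atomically with Lua scripts): tagAsDeleted only marks the value stale instead of deleting it, and on the next fetch one caller takes the lease and reloads while readers are still served the old value.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/**
 * In-process sketch of the lease idea (hypothetical names; the real client
 * stores value/lock/expiry in a Redis hash and updates them via Lua scripts).
 */
class LeaseCacheSketch {
    private static final class Entry {
        volatile String value;
        volatile boolean deleted; // set by tagAsDeleted: stale, not gone
    }

    private final ConcurrentHashMap<String, Entry> store = new ConcurrentHashMap<>();

    /** Weak-consistency fetch: a stale hit returns the old value while the
     *  lease holder reloads; here the reload is done inline for simplicity. */
    public String fetch(String key, Supplier<String> loader) {
        Entry e = store.computeIfAbsent(key, k -> new Entry());
        if (e.value != null && !e.deleted) {
            return e.value; // clean hit: no lease needed
        }
        synchronized (e) {  // the lease: only one caller may reload
            if (e.value != null && !e.deleted) {
                return e.value; // another caller refreshed it while we waited
            }
            String old = e.value;
            String fresh = loader.get(); // reload from the source of truth
            e.value = fresh;
            e.deleted = false;
            return old != null ? old : fresh; // stale-while-revalidate
        }
    }

    /** Mark the key stale instead of deleting it, so readers keep a value. */
    public void tagAsDeleted(String key) {
        Entry e = store.get(key);
        if (e != null) e.deleted = true;
    }
}
```

The point of marking instead of deleting is that during the reload window readers never stampede the database: they either get the stale value (weak mode) or wait on the lease (strong mode).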

2. Integration and Refactoring

2.1 Optimization points

The implementation has been integrated into my open-source project biz-ratelimiter-redissonlock-manualctrltrans-spring-boot-start, with the calls rewritten on top of Redisson. Redisson was chosen because it provides some advanced features, and Lua-script caching is enabled to improve execution efficiency. Value deserialization has been extended with both Jackson and Fastjson support, so it can be selected and configured flexibly as needed. The RocksCacheClient is deliberately not registered as a Spring-managed bean: it holds internal state, and each call site may need different options. Instead, cache the clients in a ConcurrentHashMap to keep the number of new objects down; for example, an API endpoint can use its method name as the cache key. Each call site uses its own client rather than sharing a single global one, which is both safe and flexible.

2.2 Project dependency

<!-- gitee -->
<dependency>
    <groupId>io.gitee.bigbigfeifei</groupId>
    <artifactId>biz-ratelimiter-redissonlock-manualctrltrans-spring-boot-start</artifactId>
    <version>2.2</version>
</dependency>
<!-- github -->
<dependency>
    <groupId>io.github.bigbigfeifei</groupId>
    <artifactId>biz-ratelimiter-redissonlock-manualctrltrans-spring-boot-start</artifactId>
    <version>2.2</version>
</dependency>

2.3 Usage guide

2.3.1 Enable the component

Add the following annotation to your application's startup class:

@EnableZlfBizRateLimiter

Then complete the YAML configuration and you are ready to go.

2.3.2 Test class

package org.example.controller;

import com.zlf.annotation.BizIdempotentManualCtrlTransLimiterAnno;
import com.zlf.cache.RocksCacheClient;
import com.zlf.cache.RocksCacheOptions;
import com.zlf.config.RedissonLockAutoConfiguration;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.example.User;
import org.example.service.impl.LockService;
import org.redisson.api.RedissonClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

@Slf4j
@RestController
@RequestMapping("rocksCache")
public class RocksCacheController {

    @Resource(name = RedissonLockAutoConfiguration.REDISSON_LOCK_BEAN_NAME)
    private RedissonClient redissonClient;

    private final ConcurrentHashMap<String, RocksCacheClient> rocksCacheClientMaps = new ConcurrentHashMap<>();

    @Autowired
    private LockService lockService;

    /**
     * 获取RocksCacheClient
     *
     * @param rocksCacheClientKey
     * @param options
     * @return
     */
    private RocksCacheClient getRocksCacheClient(final String rocksCacheClientKey, final RocksCacheOptions options) {
        return rocksCacheClientMaps.computeIfAbsent(rocksCacheClientKey, key -> {
            if (Objects.isNull(options)) {
                return new RocksCacheClient(redissonClient, RocksCacheOptions.defaultOptions());
            }
            RocksCacheClient rocksCacheClient = new RocksCacheClient(redissonClient, options);
            return rocksCacheClient;
        });
    }

    @GetMapping("testLock")
    public void testLock() {
        log.info("====testLock.start=====");
        //this.lock("lock");
        lockService.lock("lock1"); // the correct way to take the distributed lock; to use interface rate limiting, the interface must be implemented
        log.info("====testLock.end=====");
    }

    /**
     * The aspect does NOT fire here: wrong usage.
     * Self-invocation: when a method calls another method of the same class,
     * the call bypasses the Spring proxy, so the aspect never runs.
     *
     * @param key lock key
     */
    @BizIdempotentManualCtrlTransLimiterAnno(isOpenRedissonLock = true)
    public void lock(String key) {
        log.info("=====lock===key:{}====", key);
    }

    @GetMapping("simpleTest")
    public void simpleTest() {
        // Create cache client with default options
        RocksCacheClient cacheClient = this.getRocksCacheClient("simpleTest", null);
        try {
            // Example 1: Basic fetch
            log.info("=== Example 1: Basic Fetch ===");
            String key1 = "user:1001";
            String result1 = cacheClient.fetch(key1, Duration.ofSeconds(300), () -> {
                log.info("Loading data from database for key: {}", key1);
                simulateDbQuery(100);
                return "User{id=1001, name='Alice'}";
            });
            log.info("First fetch result: {}", result1);
            String v = cacheClient.rawGet(key1);
            log.info("rawGet v: {}", v);
            // Second fetch should hit cache
            String result2 = cacheClient.fetch(key1, Duration.ofSeconds(300), () -> {
                log.info("This should not be called (cache hit)");
                return "Should not see this";
            });
            log.info("Second fetch result (from cache): {}", result2);

            // Example 2: Update and delete cache
            log.info("=== Example 2: Update and Delete Cache ===");
            String key2 = "product:2001";
            String product = cacheClient.fetch(key2, Duration.ofSeconds(300), () -> {
                log.info("Loading product from database");
                simulateDbQuery(100);
                return "Product{id=2001, name='Laptop', price=5999}";
            });
            log.info("Product loaded: {}", product);

            // Simulate database update
            log.info("Updating product in database...");
            simulateDbQuery(50);

            // Tag cache as deleted
            cacheClient.tagAsDeleted(key2);
            log.info("Cache tagged as deleted");

            // Fetch again (weak consistency: returns old value + async update)
            String updatedProduct = cacheClient.fetch(key2, Duration.ofSeconds(300), () -> {
                log.info("Loading updated product from database");
                simulateDbQuery(100);
                return "Product{id=2001, name='Laptop Pro', price=6999}";
            });
            log.info("Product after update: {}", updatedProduct);

            // Wait for async update
            Thread.sleep(300);

            // Fetch again to see new value
            String finalProduct = cacheClient.fetch(key2, Duration.ofSeconds(300), () -> {
                log.info("This should not be called");
                return "Should not see this";
            });
            log.info("Product after async update: {}", finalProduct);

            // Example 3: Empty value caching (anti-penetration)
            log.info("=== Example 3: Empty Value Caching ===");
            String key3 = "user:9999";
            String nonExistUser = cacheClient.fetch(key3, Duration.ofSeconds(300), () -> {
                log.info("Querying non-existent user from database");
                simulateDbQuery(100);
                return ""; // Empty result
            });
            log.info("Non-existent user result: '{}'", nonExistUser);

            // Second query should hit cache (not query DB again)
            String nonExistUser2 = cacheClient.fetch(key3, Duration.ofSeconds(300), () -> {
                log.info("This should not be called (empty value cached)");
                return "Should not see this";
            });
            log.info("Second query result: '{}'", nonExistUser2);

            // Example 4: Batch operations
            log.info("=== Example 4: Batch Operations ===");
            String[] batchKeys = {"order:1", "order:2", "order:3"};
            Map<Integer, String> batchResult = cacheClient.fetchBatch(batchKeys, Duration.ofSeconds(300), indices -> {
                log.info("Batch loading orders for indices: {}", java.util.Arrays.toString(indices));
                simulateDbQuery(150);
                Map<Integer, String> data = new java.util.HashMap<>();
                for (int idx : indices) {
                    data.put(idx, "Order{id=" + (idx + 1) + ", total=" + (100 * (idx + 1)) + "}");
                }
                return data;
            });
            log.info("Batch fetch result: {}", batchResult);

            log.info("=== All examples completed successfully! ===");

        } catch (Exception e) {
            log.error("Error occurred", e);
        }
    }

    @GetMapping("basicFetch")
    public void basicFetch() {
        String key = "test:basic";
        String expected = "value1";

        RocksCacheClient cacheClient = this.getRocksCacheClient("basicFetch", null);

        String result = cacheClient.fetch(key, Duration.ofSeconds(60), () -> expected);

        log.info("result isEqualTo expected:{}", Objects.equals(result, expected));
        // Verify data is cached
        String cached = cacheClient.rawGet(key);
        log.info("cached isEqualTo expected:{}", Objects.equals(cached, expected));
    }

    @GetMapping("fetchCacheHit")
    public void fetchCacheHit() {
        String key = "test:hit";
        String expected = "cached-value";

        RocksCacheClient client = this.getRocksCacheClient("fetchCacheHit", null);
        // Pre-populate cache
        client.rawSet(key, expected, Duration.ofSeconds(60));

        AtomicInteger callCount = new AtomicInteger(0);
        String result = client.fetch(key, Duration.ofSeconds(60), () -> {
            callCount.incrementAndGet();
            return "should-not-be-called";
        });
        log.info("result isEqualTo expected : {}", Objects.equals(result, expected));
        log.info("callCount isEqualTo expected : {}", Objects.equals(callCount.get(), 0));
    }

    @GetMapping("weakConsistency")
    public void weakConsistency() throws Exception {
        String key = "test:weak";
        String expected = "value1";

        RocksCacheClient client = this.getRocksCacheClient("weakConsistency", null);
        // Start first request that takes 200ms
        CompletableFuture<String> future1 = CompletableFuture.supplyAsync(() ->
                client.fetch(key, Duration.ofSeconds(60), () -> {
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                    return expected;
                })
        );

        // Wait a bit and start second request
        Thread.sleep(50);
        CompletableFuture<String> future2 = CompletableFuture.supplyAsync(() ->
                client.fetch(key, Duration.ofSeconds(60), () -> {
                    throw new RuntimeException("Should not be called due to singleflight");
                })
        );

        String result1 = future1.get(1, TimeUnit.SECONDS);
        String result2 = future2.get(1, TimeUnit.SECONDS);
        log.info("result1 isEqualTo expected : {}", Objects.equals(result1, expected));
        log.info("result2 isEqualTo expected : {}", Objects.equals(result2, expected));
    }

    @GetMapping("strongConsistency")
    public void strongConsistency() throws Exception {

        RocksCacheOptions strongOptions = RocksCacheOptions.builder()
                .strongConsistency(true)
                .build();

        RocksCacheClient client = this.getRocksCacheClient("strongConsistency", strongOptions);

        String key = "test:strong";
        String expected = "value1";

        // First fetch
        String result = client.fetch(key, Duration.ofSeconds(60), () -> expected);
        log.info("result isEqualTo expected : {}", Objects.equals(result, expected));

        // Tag as deleted
        client.tagAsDeleted(key);

        // Wait a bit to ensure deletion is processed
        Thread.sleep(100);

        // Fetch again should get new data (strong consistency)
        String newValue = "value2";
        long startTime = System.currentTimeMillis();
        String result2 = client.fetch(key, Duration.ofSeconds(60), () -> {
            try {
                Thread.sleep(150);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return newValue;
        });
        long elapsed = System.currentTimeMillis() - startTime;

        log.info("result2 isEqualTo newValue : {}", Objects.equals(result2, newValue));
        log.info("elapsed >= 150 : {}", elapsed >= 150); // Should wait for the new fetch
    }

    @GetMapping("tagAsDeleted")
    public void tagAsDeleted() throws Exception {
        String key = "test:delete";
        String value1 = "value1";
        String value2 = "value2";

        RocksCacheClient client = this.getRocksCacheClient("tagAsDeleted", null);

        // First fetch
        String result = client.fetch(key, Duration.ofSeconds(60), () -> value1);

        log.info("result isEqualTo value1 : {}", Objects.equals(result, value1));

        // Tag as deleted
        client.tagAsDeleted(key);

        // In weak consistency mode, should return old value immediately
        // but trigger async update
        String result2 = client.fetch(key, Duration.ofSeconds(60), () -> value2);

        log.info("result2 isEqualTo value1 : {}", Objects.equals(result2, value1));

        // Wait for async update
        Thread.sleep(300);

        // Now should get new value
        String result3 = client.fetch(key, Duration.ofSeconds(60), () -> "should-not-call");

        log.info("result3 isEqualTo value2 : {}", Objects.equals(result3, value2));
    }

    @GetMapping("emptyValueCaching")
    public void emptyValueCaching() {
        String key = "test:empty";

        AtomicInteger callCount = new AtomicInteger(0);

        RocksCacheClient client = this.getRocksCacheClient("emptyValueCaching", null);

        // First call returns empty
        String result1 = client.fetch(key, Duration.ofSeconds(60), () -> {
            callCount.incrementAndGet();
            return "";
        });

        log.info("result1 isEmpty : {}", StringUtils.isEmpty(result1));
        log.info("callCount isEqualTo 1 : {}", Objects.equals(callCount.get(), 1));

        // Second call should hit cache (not call loader again)
        String result2 = client.fetch(key, Duration.ofSeconds(60), () -> {
            callCount.incrementAndGet();
            return "should-not-be-called";
        });

        log.info("result2 isEmpty : {}", StringUtils.isEmpty(result2));
        log.info("callCount isEqualTo 1 : {}", Objects.equals(callCount.get(), 1)); // Should still be 1
    }

    @GetMapping("cacheReadDisabled")
    public void cacheReadDisabled() {
        RocksCacheOptions options = RocksCacheOptions.builder()
                .disableCacheRead(true)
                .build();

        RocksCacheClient client = this.getRocksCacheClient("cacheReadDisabled", options);

        String key = "test:downgrade";

        AtomicInteger callCount = new AtomicInteger(0);

        // First call
        client.fetch(key, Duration.ofSeconds(60), () -> {
            callCount.incrementAndGet();
            return "value1";
        });

        // Second call should also call loader (cache is disabled)
        client.fetch(key, Duration.ofSeconds(60), () -> {
            callCount.incrementAndGet();
            return "value2";
        });

        log.info("callCount isEqualTo 2 : {}", Objects.equals(callCount.get(), 2));
    }

    @GetMapping("typedData")
    public void typedData() {
        String key = "test:typed";

        User user = new User("Alice", 25);

        RocksCacheClient client = this.getRocksCacheClient("typedData", null);

        User result = client.fetch(key, Duration.ofSeconds(60), () -> user, User.class);

        log.info("result isNotNull :{}", Objects.nonNull(result));
        log.info("result.name isEqualTo Alice:{}", Objects.equals(result.getName(), "Alice"));
        log.info("result.age isEqualTo 25 :{}", Objects.equals(result.getAge(), 25));

        // Fetch again (should hit cache)
        User result2 = client.fetch(key, Duration.ofSeconds(60), () -> {
            throw new RuntimeException("Should not be called");
        }, User.class);

        log.info("result2.name isEqualTo Alice:{}", Objects.equals(result2.getName(), "Alice"));
    }

    @GetMapping("rawOperations")
    public void rawOperations() {
        String key = "test:raw";
        String value = "raw-value";

        RocksCacheClient client = this.getRocksCacheClient("rawOperations", null);
        // Raw set
        client.rawSet(key, value, Duration.ofSeconds(60));

        // Raw get
        String result = client.rawGet(key);
        log.info("result isEqualTo value:{}", Objects.equals(result, value));
    }


    @GetMapping("batchFetchAllMiss")
    public void batchFetchAllMiss() {
        String[] keys = {"batch:key1", "batch:key2", "batch:key3"};

        RocksCacheClient client = this.getRocksCacheClient("batchFetchAllMiss", null);

        Map<Integer, String> result = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            Map<Integer, String> data = new HashMap<>();
            for (int idx : indices) {
                data.put(idx, "value" + idx);
            }
            return data;
        });

        log.info("result hasSize(3): {}", result.size() == 3);
        log.info("result0 isEqualTo value0: {}", Objects.equals(result.get(0), "value0"));
        log.info("result1 isEqualTo value1: {}", Objects.equals(result.get(1), "value1"));
        log.info("result2 isEqualTo value2: {}", Objects.equals(result.get(2), "value2"));

        // Verify all cached
        log.info("rawGet.keys0 isEqualTo value0: {}", Objects.equals(client.rawGet(keys[0]), "value0"));
        log.info("rawGet.keys1 isEqualTo value1: {}", Objects.equals(client.rawGet(keys[1]), "value1"));
        log.info("rawGet.keys2 isEqualTo value2: {}", Objects.equals(client.rawGet(keys[2]), "value2"));
    }

    @GetMapping("batchFetchPartialHit")
    public void batchFetchPartialHit() {
        String[] keys = {"batch:p1", "batch:p2", "batch:p3"};

        RocksCacheClient client = this.getRocksCacheClient("batchFetchPartialHit", null);

        // Pre-populate some keys
        client.rawSet(keys[0], "cached0", Duration.ofSeconds(60));
        client.rawSet(keys[2], "cached2", Duration.ofSeconds(60));

        AtomicInteger callCount = new AtomicInteger(0);

        Map<Integer, String> result = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            Map<Integer, String> data = new HashMap<>();
            for (int idx : indices) {
                callCount.incrementAndGet();
                data.put(idx, "fetched" + idx);
            }
            return data;
        });

        log.info("result hasSize(3): {}", result.size() == 3);
        log.info("result0 isEqualTo cached0: {}", Objects.equals(result.get(0), "cached0"));
        log.info("result1 isEqualTo fetched1: {}", Objects.equals(result.get(1), "fetched1"));
        log.info("result2 isEqualTo cached2: {}", Objects.equals(result.get(2), "cached2"));

        // Should only fetch index 1
        log.info("callCount isEqualTo 1: {}", Objects.equals(callCount.get(), 1));
    }

    @GetMapping("batchFetchAllHit")
    public void batchFetchAllHit() {
        String[] keys = {"batch:h1", "batch:h2", "batch:h3"};

        RocksCacheClient client = this.getRocksCacheClient("batchFetchAllHit", null);
        // Pre-populate all keys
        for (int i = 0; i < keys.length; i++) {
            client.rawSet(keys[i], "cached" + i, Duration.ofSeconds(60));
        }

        AtomicInteger callCount = new AtomicInteger(0);

        Map<Integer, String> result = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            callCount.incrementAndGet();
            return new HashMap<>();
        });

        log.info("result hasSize(3): {}", result.size() == 3);
        log.info("result0 isEqualTo cached0: {}", Objects.equals(result.get(0), "cached0"));
        log.info("result1 isEqualTo cached1: {}", Objects.equals(result.get(1), "cached1"));
        log.info("result2 isEqualTo cached2: {}", Objects.equals(result.get(2), "cached2"));
        // Should not call loader at all
        log.info("callCount isEqualTo 0: {}", Objects.equals(callCount.get(), 0));
    }

    @GetMapping("batchTagAsDeleted")
    public void batchTagAsDeleted() throws Exception {
        String[] keys = {"batch:del1", "batch:del2", "batch:del3"};

        RocksCacheClient client = this.getRocksCacheClient("batchTagAsDeleted", null);

        // First fetch
        Map<Integer, String> result1 = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            Map<Integer, String> data = new HashMap<>();
            for (int idx : indices) {
                data.put(idx, "value" + idx);
            }
            return data;
        });

        log.info("result1 hasSize(3): {}", result1.size() == 3);
        // Tag as deleted
        client.tagAsDeletedBatch(keys);

        // In weak consistency mode, should return old values
        Map<Integer, String> result2 = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            Map<Integer, String> data = new HashMap<>();
            for (int idx : indices) {
                data.put(idx, "new" + idx);
            }
            return data;
        });

        // Should return old values
        log.info("result2-0 isEqualTo value0: {}", Objects.equals(result2.get(0), "value0"));
        log.info("result2-1 isEqualTo value1: {}", Objects.equals(result2.get(1), "value1"));
        log.info("result2-2 isEqualTo value2: {}", Objects.equals(result2.get(2), "value2"));
        // Wait for async update
        Thread.sleep(300);

        // Now should get new values
        Map<Integer, String> result3 = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            throw new RuntimeException("Should not be called");
        });

        log.info("result3-0 isEqualTo new0: {}", Objects.equals(result3.get(0), "new0"));
        log.info("result3-1 isEqualTo new1: {}", Objects.equals(result3.get(1), "new1"));
        log.info("result3-2 isEqualTo new2: {}", Objects.equals(result3.get(2), "new2"));
    }

    @GetMapping("batchWithEmptyValues")
    public void batchWithEmptyValues() {
        String[] keys = {"batch:empty1", "batch:empty2", "batch:empty3"};

        RocksCacheClient client = this.getRocksCacheClient("batchWithEmptyValues", null);

        Map<Integer, String> result = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            Map<Integer, String> data = new HashMap<>();
            data.put(0, "value0");
            data.put(1, "");  // Empty value
            data.put(2, "value2");
            return data;
        });

        log.info("result hasSize(3): {}", result.size() == 3);
        log.info("result0 isEqualTo value0: {}", Objects.equals(result.get(0), "value0"));
        log.info("result1 isEmpty: {}", StringUtils.isEmpty(result.get(1)));
        log.info("result2 isEqualTo value2: {}", Objects.equals(result.get(2), "value2"));
        // Verify empty value is cached
        log.info("rawGet isEmpty : {}", StringUtils.isEmpty(client.rawGet(keys[1])));
    }

    @GetMapping("batchStrongConsistency")
    public void batchStrongConsistency() throws Exception {
        RocksCacheOptions strongOptions = RocksCacheOptions.builder()
                .strongConsistency(true)
                .build();

        RocksCacheClient client = this.getRocksCacheClient("batchStrongConsistency", strongOptions);

        String[] keys = {"batch:strong1", "batch:strong2"};

        // First fetch
        Map<Integer, String> result1 = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            Map<Integer, String> data = new HashMap<>();
            for (int idx : indices) {
                data.put(idx, "v1-" + idx);
            }
            return data;
        });

        log.info("result1-0 isEqualTo v1-0: {}", Objects.equals(result1.get(0), "v1-0"));
        log.info("result1-1 isEqualTo v1-1: {}", Objects.equals(result1.get(1), "v1-1"));
        // Tag as deleted
        client.tagAsDeletedBatch(keys);

        Thread.sleep(100);

        // In strong consistency mode, should fetch new values
        long startTime = System.currentTimeMillis();
        Map<Integer, String> result2 = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            try {
                Thread.sleep(150);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            Map<Integer, String> data = new HashMap<>();
            for (int idx : indices) {
                data.put(idx, "v2-" + idx);
            }
            return data;
        });
        long elapsed = System.currentTimeMillis() - startTime;

        log.info("result2-0 isEqualTo v2-0: {}", Objects.equals(result2.get(0), "v2-0"));
        log.info("result2-1 isEqualTo v2-1: {}", Objects.equals(result2.get(1), "v2-1"));
        log.info("elapsed >= 150 ms: {}", elapsed >= 150);
    }

    @GetMapping("batchWithMissingIndices")
    public void batchWithMissingIndices() {
        String[] keys = {"batch:miss1", "batch:miss2", "batch:miss3"};

        RocksCacheClient client = this.getRocksCacheClient("batchWithMissingIndices", null);

        // Loader only returns some indices
        Map<Integer, String> result = client.fetchBatch(keys, Duration.ofSeconds(60), indices -> {
            Map<Integer, String> data = new HashMap<>();
            data.put(0, "value0");
            // Index 1 is missing
            data.put(2, "value2");
            return data;
        });

        log.info("result hasSize(3): {}", result.size() == 3);
        log.info("result0 isEqualTo value0: {}", Objects.equals(result.get(0), "value0"));
        log.info("result1 isEmpty: {}", StringUtils.isEmpty(result.get(1)));  // Missing index should be empty
        log.info("result2 isEqualTo value2: {}", Objects.equals(result.get(2), "value2"));
    }

    private static void simulateDbQuery(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

}

All of the test cases above have been stepped through in the debugger; the approach holds up well.

3. Summary

This work only streamlines the integration and the calling style: simple, convenient, and efficient rather than flashy and impractical. Simplicity wins. That wraps up this share; I hope it gives you some inspiration and help. If it did, please give it a like, a comment, and a follow!
