Middleware: Redis Cluster -- Bulk Query/Delete Optimization and Exception Fixes

Tuesday, July 28, 2020

The optimization and the exception fixes took me two or three days; in the end a colleague and I solved them together.

When you are stuck, talking the problem through with a teammate really does get the ideas flowing.

Exceptions

SocketException / "read timed out"

Small data volumes in the test environment worked fine, but the error showed up constantly in production.

The problem was in the connection-initialization settings: the connection-validation flags need to be enabled, and the timeout and retry count set via `new JedisCluster(nodes, 0, 3, jedisPoolConfig)` (a timeout of 0 disables the socket timeout; 3 is the max attempts).

The relevant settings:

jedisPoolConfig.setMaxWaitMillis(600*1000); // 600 seconds

//validate each borrowed/returned/idle connection with validateObject
jedisPoolConfig.setTestOnBorrow(true);
jedisPoolConfig.setTestWhileIdle(true);
jedisPoolConfig.setTestOnReturn(true);

//no auth password set
jedisCluster = new JedisCluster(nodes,0, 3, jedisPoolConfig);

The full initialization code:

private boolean initRedisCon() {

    if(jedisCluster==null){
        synchronized (HhHeadService.class) {
            if(jedisCluster==null){
                try {
                    Set<HostAndPort> nodes = new LinkedHashSet<>();

                    // redis_cluter: 192.168.2.63:2111,192.168.2.63:2112,192.168.2.63:3111,192.168.2.63:3112,192.168.2.63:4111,192.168.2.63:4112
                    String[] split = redis_cluter.split(",");

                    //usually the replica (slaveof) IP:port entries are listed here rather than the masters
                    for (String s : split) {
                        System.out.println("s = " + s);
                        String[] split1 = s.split(":");
                        nodes.add(new HostAndPort(split1[0], Integer.valueOf(split1[1])));//test environment
                    }

                    // Jedis pool configuration
                    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
                    // max idle connections, default 8
                    jedisPoolConfig.setMaxIdle(100);
                    // max total connections, default 8
                    jedisPoolConfig.setMaxTotal(500);
                    // min idle connections, default 0
                    jedisPoolConfig.setMinIdle(0);
                    // max wait (ms) for a connection when the pool is exhausted (BlockWhenExhausted);
                    // throws on timeout; negative means block indefinitely, default -1
                    jedisPoolConfig.setMaxWaitMillis(600*1000); // 600 seconds

                    //validate each borrowed/returned/idle connection with validateObject
                    jedisPoolConfig.setTestOnBorrow(true);
                    jedisPoolConfig.setTestWhileIdle(true);
                    jedisPoolConfig.setTestOnReturn(true);

                    //no auth password set; timeout 0 = no socket timeout, 3 = max attempts
                    jedisCluster = new JedisCluster(nodes,0, 3, jedisPoolConfig);

                    // jedisCluster = new JedisCluster(nodes, jedisPoolConfig);

                    //with an auth password:
                    //JedisCluster jedis = new JedisCluster(nodes,5000,3000,10,{auth_password}, new JedisPoolConfig());

                    // System.out.println(jedisCluster.get("mykey"));

                } catch (Exception e) {
                    e.printStackTrace();
                }
                //note: no finally block closing jedisCluster here -- the cluster connection must stay open for later use

                if (jedisCluster == null) {
                    return false;
                }
            }
        }
    }
    return true;
}

Timing Optimization

Background

Environment: master/replica cluster with 3 master/replica pairs; each master holds roughly 3 million business keys, with string values.

Task: read all matching key-value pairs from Redis, assemble them into a list of entities, and push the data out. The job runs every 10 minutes and repeats the whole cycle, sending in batches of 20k records until everything has been pushed once.

Original implementation: pattern-match to get all keys, then iterate them with a counter and GET each key individually, sending once every 20k records.

Symptom: each batch of 20k keys took 2-3 s to fetch, and each batch of 20k took another 2-3 s to push.

Optimization

Key fetching: use a pipeline to cut the per-command connection and round-trip overhead (much as a connection pool amortizes connection setup).

Push step: the cost sits in the business logic and is hard to reduce directly, so raise concurrency with multiple threads to shrink the effective per-batch time.
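The pipeline idea can be sketched in a few lines (a minimal sketch, assuming a single reachable node at a hypothetical `127.0.0.1:6379`; against a cluster you would do this per node, as the full implementation later in this post does): queue the GETs locally, flush them with one `sync()`, then read the buffered `Response` objects.

```java
import java.util.LinkedHashMap;
import java.util.Map;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class PipelineSketch {
    public static void main(String[] args) {
        // Hypothetical node address -- replace with a real node of your cluster
        try (Jedis jedis = new Jedis("127.0.0.1", 6379);
             Pipeline pipe = jedis.pipelined()) {

            // Queue all GETs; nothing has been sent or read from the socket yet
            Map<String, Response<String>> pending = new LinkedHashMap<>();
            for (String key : new String[]{"k1", "k2", "k3"}) {
                pending.put(key, pipe.get(key));
            }

            // One flush + one read of all replies replaces N round trips
            pipe.sync();

            pending.forEach((k, r) -> System.out.println(k + " = " + r.get()));
        }
    }
}
```

The win is entirely in round trips: with plain `get` per key, each of the 20k reads pays one network round trip; pipelined, the whole batch pays roughly one.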

Results

Fetching all ~3 million entries from a single node now takes 3-5 s.

The push stage runs with 3 concurrent workers.

The main code, before and after:

Before

public void headRedisTask0708(String redisSign,String pattern) throws InterruptedException {

    log.info(redisSign+"begin task ");

    if(!initRedisCon()){
        log.info(redisSign+"failed to connect to redis cluster--");
        return ;
    }

    String startTime = LocalDateTime.now().toString();
    String todayYMS = "answer_"+LocalDate.now().getYear()+"_"+LocalDate.now().getMonthValue();

    //fetch the keys
    Set<String> lbs_domain_phone_S = redisKeys(pattern);

    AtomicReference<Integer> sendCounts= new AtomicReference<>(0);//send count
    AtomicReference<Integer> sendNumCounts= new AtomicReference<>(0);//count of phone numbers sent

    if(lbs_domain_phone_S ==null|| lbs_domain_phone_S.size()==0){
        log.info(redisSign+"err-->startTime:"+startTime+"\t now"+ LocalDateTime.now()+"\tredis-->domain_lac_ci result set is empty");
        return;
    }

    log.info(redisSign+"start-->startTime:"+startTime+"\t now"+ LocalDateTime.now()+"\tkeys:"+ lbs_domain_phone_S.size()+"start-->domain_lac_ci counts--"+ lbs_domain_phone_S.size());

    List<List<String>> dataLists= new LinkedList<>();//one location record per phone number

    AtomicReference<Integer> telCounts= new AtomicReference<>(0);//phone-number counter, capped at eachRedisNum
    AtomicReference<Long> startTimeL1 = new AtomicReference<>(System.currentTimeMillis());

    JedisCluster finalJedisCluster = jedisCluster;
    lbs_domain_phone_S.forEach(lbs_domain_phone->{

        if(StringUtils.isBlank(lbs_domain_phone)){
            return;
        }

        String[] lbsDomainPhoneArr = lbs_domain_phone.split("_");
        if(lbsDomainPhoneArr==null||lbsDomainPhoneArr.length!=3||StringUtils.isBlank(lbsDomainPhoneArr[2])){
            System.out.println(redisSign+"-->lbs_domain_phone not suit-->"+lbs_domain_phone);
            return;
        }

        String domainLacCi = finalJedisCluster.get(lbs_domain_phone);
        String[] domainLacCiArr = domainLacCi.split("_");

        Map<String, String> titudes = getTitudes(domainLacCi);
        String longtitude = titudes==null? "":titudes.get("longtitude")==null? "":titudes.get("longtitude");//longitude
        String latitude = titudes==null? "":titudes.get("latitude")==null? "":titudes.get("latitude");//latitude

        //same field order as the avro record
        String[] oneData=new String[6];
        oneData[0]=lbsDomainPhoneArr[2];//user_number

        oneData[1]=domainLacCiArr[1];//lac
        oneData[2]=domainLacCiArr[2];//ci
        oneData[3]=longtitude;
        oneData[4]=latitude;
        oneData[5]="3".equals(domainLacCiArr[0].trim())?"0":domainLacCiArr[0];//domain

        dataLists.add(Arrays.asList(oneData));
        telCounts.set(telCounts.get()+1);

        if(telCounts.get()>eachRedisNum){

            long startTimeL2 = System.currentTimeMillis();
            sendDate0708(redisSign,todayYMS, dataLists);//push

            long startTimeL3 = System.currentTimeMillis();
            String timeUseStr="\tredisHttpTime(ms):"+(startTimeL2-startTimeL1.get())+"--"+(startTimeL3-startTimeL2);
            startTimeL1.set(System.currentTimeMillis());

            log.info(redisSign+"send-->startTime:"+startTime+"\t now"+ LocalDateTime.now()+"\tkeys:"+ lbs_domain_phone_S.size()+"\tsendCounts:"+sendCounts.get()+"\ttelCounts:"+telCounts.get()+timeUseStr);
            sendCounts.updateAndGet(v -> v + 1);
            sendNumCounts.updateAndGet(v -> v + telCounts.get());
            telCounts.set(0);
            dataLists.clear();
        }

    });

    if(dataLists.size()>0){
        sendDate0708(redisSign,todayYMS,dataLists);

        sendCounts.updateAndGet(v -> v + 1);
        log.info(redisSign+"send-->startTime:"+startTime+"\t now"+ LocalDateTime.now()+"\tkeys:"+ lbs_domain_phone_S.size()+"\tsendCounts:"+sendCounts.get()+"\ttelCounts:"+telCounts.get());
    }

    log.info(redisSign+"end-->startTime:"+startTime+"\t now"+ LocalDateTime.now()+"\tsendCounts:"+sendCounts.get()+"\t sendNumCounts--"+sendNumCounts.get());
}

After

public void headRedisTask0720(String redisSign,String pattern) throws Exception {

    String startTime = LocalDateTime.now().toString();
    log.info("redisTaskbegin--"+startTime);

    if(!initRedisCon()){
        log.info(redisSign+"failed to connect to redis cluster--");
        return ;
    }

    String todayYMS = "answer_"+LocalDate.now().getYear()+"_"+LocalDate.now().getMonthValue();
    AtomicInteger sendCounts= new AtomicInteger();//send count
    List<Thread> allThreadList=new ArrayList<>();

    //get the connection pool of every cluster node
    Map<String, JedisPool> nodesMap = jedisCluster.getClusterNodes();

    // JedisClusterPipeline pipe = JedisClusterPipeline.pipelined(jedisCluster);//pipe

    for(String nodeStr : nodesMap.keySet()){

        AtomicReference<Long> startTimeL1 = new AtomicReference<>(System.currentTimeMillis());
        AtomicInteger oneSendCounts= new AtomicInteger();//per-node send count

        List<Map<String,Response<String>>> allDateMapList=new LinkedList<>();
        JedisPool pool = nodesMap.get(nodeStr);
        try(
                Jedis jedis = pool.getResource();
                Pipeline pipe = jedis.pipelined();//pipe
                ){
            if (!jedis.info("replication").contains("role:slave")) {
                synchronized (HhHeadService.class){

                    // Pipeline pipe = jedis.pipelined();

                    Set<String> lbs_domain_phone_S = jedis.keys(pattern);
                    Map<String,Response<String>> dateMap = new HashedMap();

                    if(lbs_domain_phone_S ==null|| lbs_domain_phone_S.size()==0){
                        log.info(redisSign+" err-->startTime:"+startTime+"\t now"+ LocalDateTime.now()+"redis-->domain_lac_ci result set is empty");
                        continue;
                    }

                    //iterate the keys, queueing GETs on the pipeline
                    for(String lbs_domain_phone:lbs_domain_phone_S) {
                        dateMap.put(lbs_domain_phone, pipe.get(lbs_domain_phone));

                        if (dateMap.size() > eachRedisNum) {
                            allDateMapList.add(dateMap);
                            dateMap=new HashedMap();
                        }
                    }
                    if(dateMap.size()>0){
                        allDateMapList.add(dateMap);
                    }
                    pipe.sync();

                    long redisUseTime = System.currentTimeMillis()-startTimeL1.get();
                    log.info(nodeStr+"-startAt-"+startTime+"\t"+pattern+"\tgetKeys:"+ lbs_domain_phone_S.size()+"\tredisUseTime(ms):"+redisUseTime);
                }

            }else{
                log.info(nodeStr+" redis is a slave, skipping--");
                continue;
            }

        } catch(Exception e){
            log.error(nodeStr+" error while pushing data "+pattern, e);
            continue;
        }
        // no finally needed -- try-with-resources closes jedis and pipe


        /*
        push the data
         */
        int times=5;//number of threads
        int threadNum=allDateMapList.size()/times;
        for(int i=0;i<times;i++){
            int start = i*threadNum;
            int end = (i+1) *threadNum;
            if(i==times-1) end=allDateMapList.size();//last slice (subList's end index is exclusive)
            List<Map<String,Response<String>>> subDateMapList=allDateMapList.subList(start,end);

            int finalI = i;
            Thread oneThread=new Thread(()->{
                try{
                    for(Map<String,Response<String>> oneDateMap:subDateMapList) {

                        long startTimeL2 = System.currentTimeMillis();
                        String keyName="head_redis_"+DateUtils.getStrCurDateTime(DateUtils.YYYYMMDDHH)+"_"+LocalDateTime.now().getMinute()/30;

                        initAndSend(redisSign,oneDateMap,todayYMS);//push the data

                        // TimeUnit.MILLISECONDS.sleep(new Random().nextInt(3000));

                        synchronized (HhHeadService.class){//counting
                            log.info(nodeStr +"-Thread"+ finalI +"---batch "+ oneSendCounts.getAndIncrement()+
                                    "/"+sendCounts.getAndIncrement()+"\tuseTime(ms):"+(System.currentTimeMillis()-startTimeL2));
                            jedisCluster.incrBy(keyName ,oneDateMap.size());
                        }
                    }
                }catch (Exception e){
                    e.printStackTrace();
                }
            });
            oneThread.start();
            allThreadList.add(oneThread);
        }

        for(Thread oneThread:allThreadList){
            oneThread.join();//wait for all node threads to finish
        }
        allThreadList.clear();
        log.info(nodeStr+"---over eachTelNum:"+eachRedisNum+"\tsendCounts:"+oneSendCounts.get());

    }

    // for(Thread oneThread:allThreadList){
    //     oneThread.join();//wait for all to finish
    // }

    log.info("allover"+redisSign+"send-->startTime:"+startTime+"\t--"+pattern+"\teachTelNum:"+eachRedisNum+"\tsendCounts:"+sendCounts.get());
}

Multithreaded business simulation

    @Test
    public void testThread1() throws InterruptedException {

        List<Map<String,String>> allDateMapList=new LinkedList<>();
        int index=0;
        for(int i=0;i<31;i++){
            Map<String,String> dateMap=new HashedMap();

            for(int j=0;j<2;j++){
                dateMap.put(index+"",index+"");
                index++;
            }
            allDateMapList.add(dateMap);
        }

        int times=10;
        int threadNum=allDateMapList.size()/times;
        List<Thread> allThreadList=new ArrayList<>();
        AtomicInteger send= new AtomicInteger();


        for(int i=0;i<times;i++){
            int start = i*threadNum;
            int end = (i+1) *threadNum;
            if(i==times-1) end=allDateMapList.size();//last slice (subList's end index is exclusive)

            List<Map<String,String>> subDateMapList=allDateMapList.subList(start,end);
            String keyName="head_redis_"+DateUtils.getStrCurDateTime(DateUtils.YYYYMMDDHH)+"_"+LocalDateTime.now().getMinute()/30;

            Thread a=new Thread(()->{
                try{
                    for(Map<String,String> oneDateMap:subDateMapList) {

                        long startTimeL2 = System.currentTimeMillis();
                        initAndSend("redisSign",oneDateMap,"todayYMS");//push-data business logic
                        //TimeUnit.SECONDS.sleep(new Random().nextInt(6));//simulate work
                        TimeUnit.MILLISECONDS.sleep(new Random().nextInt(3000));
                        
                        synchronized (HhHeadService.class){//counting
                            System.out.println(Thread.currentThread().getName()+"\tthisTelCounts:"+oneDateMap.size()+
                                    "\tsendUseTime(ms):"+(System.currentTimeMillis()-startTimeL2)+"\t"+(send.getAndIncrement()));
                        }
                    }
                }catch (Exception e){
                    e.printStackTrace();
                }
            });
            a.start();
            allThreadList.add(a);
        }

        for(Thread a:allThreadList){
            a.join();
        }
        System.out.println("over0---");
    }

Thread pool

    @Test
    public void testThread(){

        List<Map<String,String>> allDateMapList=new LinkedList<>();
        int index=0;
        for(int i=0;i<6;i++){
            Map<String,String> dateMap=new HashedMap();

            for(int j=0;j<3;j++){
                dateMap.put(index+"",index+"");
                index++;
            }
            allDateMapList.add(dateMap);
        }

        ExecutorService executorService = Executors.newFixedThreadPool(3);
        try{
            for(Map<String,String> oneDateMap:allDateMapList) {

                executorService.execute(()->{
                    String keyName="head_redis_"+DateUtils.getStrCurDateTime(DateUtils.YYYYMMDDHH)+"_"+LocalDateTime.now().getMinute()/30;
                    long startTimeL2 = System.currentTimeMillis();

                    // initAndSend("redisSign",oneDateMap,"todayYMS");//push the data

                    synchronized (HhHeadService.class){//counting
                        System.out.println(Thread.currentThread().getName()+"\t--thisTelCounts:"+oneDateMap.size()+
                                "\tsendUseTime(ms):"+(System.currentTimeMillis()-startTimeL2));
                    }
                });
            }
        }finally {
            if(executorService!=null) executorService.shutdown();//note: shutdown() does not wait for queued tasks
        }

        log.info("over--------");

    }
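One caveat with the thread-pool version above: `shutdown()` only stops new submissions and does not wait for queued tasks, so the final log line can print before any of the pushes have run. A self-contained sketch of the usual fix, `shutdown()` followed by `awaitTermination()` (plain Java, no Redis involved; `runBatches` is an illustrative helper, not from the original code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolShutdownSketch {

    // Submit n dummy "push" tasks and wait for all of them to complete
    static int runBatches(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        AtomicInteger done = new AtomicInteger();

        for (int i = 0; i < n; i++) {
            pool.execute(done::incrementAndGet); // stand-in for initAndSend(...)
        }

        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // block until queued tasks finish

        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("done=" + runBatches(6)); // prints done=6
    }
}
```

After `awaitTermination` returns normally, every submitted batch has completed, so the closing log line really does mean "all data pushed".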

Bulk delete

When generating test data, insert it through the cluster client so the hash slots take effect; otherwise bulk deletion runs into the problems below:

Deleting via jedisCluster: JedisClusterException: No way to dispatch this command to Redis Cluster because keys have different slots.

Deleting via jedis: JedisDataException: CROSSSLOT Keys in request don't hash to the same slot

Deleting via pipeline: JedisDataException: CROSSSLOT Keys in request don't hash to the same slot
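These errors come from cluster key routing: every key maps to one of 16384 slots via CRC16, and a multi-key command such as a batched DEL is only legal when all its keys share one slot. A hash tag, the `{...}` part of a key, restricts hashing to the braced substring, which is the standard way to force related keys into the same slot. Below is a self-contained sketch of the slot calculation (reimplemented for illustration; real code should use the client's own slot logic, e.g. Jedis's `JedisClusterCRC16`):

```java
public class SlotSketch {

    // CRC16-XMODEM (polynomial 0x1021), the variant Redis Cluster uses for key hashing
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Slot = CRC16(effective key) mod 16384; a non-empty {...} hash tag
    // restricts hashing to the braced substring
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes()) & 16383;
    }

    public static void main(String[] args) {
        // Same hash tag -> same slot -> a multi-key DEL on them is allowed
        System.out.println(slot("{user1}:a") == slot("{user1}:b")); // prints true
        // Plain keys usually land in different slots -> CROSSSLOT on multi-key DEL
        System.out.println(slot("user1:a") + " vs " + slot("user2:b"));
    }
}
```

If the keys cannot share a tag, deleting them one at a time through `jedisCluster.del(key)` (or per-node, one pipeline per slot group) avoids CROSSSLOT, at the cost of more round trips.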

