Recently, a server hosting one of our self-managed Redis instances kept triggering alerts for high memory usage. It's a fairly modest machine (2C8G) that we're planning to retire soon. Before retiring it, the existing data has to be migrated, covering both the full dataset and incremental writes, to make sure nothing is lost. A bit of searching turned up RedisShake, a tool developed by Alibaba that's said to work very well, so let's give it a try.

Before doing this for real, let's run through it once in a test environment. The setup:
Source: 192.168.28.142
Target: 192.168.147.128

Step 1:
Download the release with wget:
wget https://github.com/alibaba/RedisShake/releases/download/release-v2.0.2-20200506/redis-shake-v2.0.2.tar.gz
Step 2:
Extract the archive and take a look at what's inside:

  [root@dev ~]# cd /opt/redis-shake/
  [root@dev redis-shake]# ls
  redis-shake-v2.0.2.tar.gz
  [root@dev redis-shake]# tar -zxvf redis-shake-v2.0.2.tar.gz
  redis-shake-v2.0.2/
  redis-shake-v2.0.2/redis-shake.darwin
  redis-shake-v2.0.2/redis-shake.windows
  redis-shake-v2.0.2/redis-shake.conf
  redis-shake-v2.0.2/ChangeLog
  redis-shake-v2.0.2/stop.sh
  redis-shake-v2.0.2/start.sh
  redis-shake-v2.0.2/hypervisor
  redis-shake-v2.0.2/redis-shake.linux
  [root@dev redis-shake]# ls
  redis-shake-v2.0.2  redis-shake-v2.0.2.tar.gz
  [root@dev redis-shake]# cd redis-shake-v2.0.2
  [root@dev redis-shake-v2.0.2]# ls
  ChangeLog  hypervisor  redis-shake.conf  redis-shake.darwin  redis-shake.linux  redis-shake.windows  start.sh  stop.sh

Step 3:
Edit the configuration file redis-shake.conf.
Log output:

  # log file; if not set, logs go to stdout (e.g. /var/log/redis-shake.log)
  log.file = /opt/redis-shake/redis-shake.log

Source connection settings:

  # ip:port
  # the source address can be the following:
  # 1. single db address. for "standalone" type.
  # 2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
  # 3. cluster that has several db nodes split by semicolon(;). for "cluster" type. e.g., 10.1.1.1:20331;10.1.1.2:20441.
  # 4. proxy address(used in "rump" mode only). for "proxy" type.
  # Source redis address. For sentinel or open-source cluster mode, the format is
  # "master name:role to pull from (master or slave)@sentinel address". For other cluster
  # architectures such as codis, twemproxy, or aliyun proxy, list the db addresses of all masters or slaves.
  source.address = 192.168.28.142:6379
  # password of db/proxy. even if type is sentinel.
  source.password_raw = xxxxxxx

Target settings:

  # ip:port
  # the target address can be the following:
  # 1. single db address. for "standalone" type.
  # 2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
  # 3. cluster that has several db nodes split by semicolon(;). for "cluster" type.
  # 4. proxy address. for "proxy" type.
  target.address = 192.168.147.128:6379
  # password of db/proxy. even if type is sentinel.
  target.password_raw = xxxxxx
  # auth type, don't modify it
  target.auth_type = auth
  # all the data will be written into this db. < 0 means disable.
  target.db = -1
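Putting the pieces above together, a minimal redis-shake.conf for a plain standalone-to-standalone sync looks roughly like this (the addresses come from this article's test environment, the passwords are placeholders, and every option not shown keeps its default):

```ini
# Minimal sketch of redis-shake.conf for a standalone-to-standalone sync
source.type = standalone
source.address = 192.168.28.142:6379
source.password_raw = <source-password>
target.type = standalone
target.address = 192.168.147.128:6379
target.password_raw = <target-password>
target.auth_type = auth
# -1 keeps the source DB numbering on the target
target.db = -1
log.file = /opt/redis-shake/redis-shake.log
```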

Step 4:
./start.sh redis-shake.conf sync
Check the log file:

  2020/05/15 09:00:29 [INFO] DbSyncer[0] starts syncing data from 192.168.28.142:6379 to [192.168.147.128:6379] with http[9321], enableResumeFromBreakPoint[false], slot boundary[-1, -1]
  2020/05/15 09:00:29 [INFO] DbSyncer[0] psync connect '192.168.28.142:6379' with auth type[auth] OK!
  2020/05/15 09:00:29 [INFO] DbSyncer[0] psync send listening port[9320] OK!
  2020/05/15 09:00:29 [INFO] DbSyncer[0] try to send 'psync' command: run-id[?], offset[-1]
  2020/05/15 09:00:29 [INFO] Event:FullSyncStart Id:redis-shake
  2020/05/15 09:00:29 [INFO] DbSyncer[0] psync runid = 0a08aa75b91f8724014e056cd2c3068eebf81ec4, offset = 126, fullsync
  2020/05/15 09:00:30 [INFO] DbSyncer[0] +
  2020/05/15 09:00:30 [INFO] DbSyncer[0] rdb file size = 45173748
  2020/05/15 09:00:30 [INFO] Aux information key:redis-ver value:4.0.10
  2020/05/15 09:00:30 [INFO] Aux information key:redis-bits value:64
  2020/05/15 09:00:30 [INFO] Aux information key:ctime value:1589521609
  2020/05/15 09:00:30 [INFO] Aux information key:used-mem value:66304824
  2020/05/15 09:00:30 [INFO] Aux information key:repl-stream-db value:0
  2020/05/15 09:00:30 [INFO] Aux information key:repl-id value:0a08aa75b91f8724014e056cd2c3068eebf81ec4
  2020/05/15 09:00:30 [INFO] Aux information key:repl-offset value:126
  2020/05/15 09:00:30 [INFO] Aux information key:aof-preamble value:0
  2020/05/15 09:00:30 [INFO] db_size:8 expire_size:0
  2020/05/15 09:00:31 [INFO] DbSyncer[0] total = 43.08MB - 10.87MB [ 25%] entry=0 filter=4
  2020/05/15 09:00:32 [INFO] DbSyncer[0] total = 43.08MB - 21.78MB [ 50%] entry=0 filter=5
  2020/05/15 09:00:33 [INFO] DbSyncer[0] total = 43.08MB - 32.64MB [ 75%] entry=0 filter=5
  2020/05/15 09:00:34 [INFO] DbSyncer[0] total = 43.08MB - 42.92MB [ 99%] entry=0 filter=6
  2020/05/15 09:00:34 [INFO] db_size:1 expire_size:0
  2020/05/15 09:00:34 [INFO] db_size:48 expire_size:12
  2020/05/15 09:00:34 [INFO] db_size:533 expire_size:468
  2020/05/15 09:00:34 [INFO] DbSyncer[0] total

Checking the synced data (see the screenshot below), every database came across. Very nice.

But what if you only want to sync one particular database?
A quick look through the configuration file and the official docs shows that only a small tweak is needed:

  Parameter            Description
  target.db            The logical database on the target Redis that migrated data is written to. For example, to migrate all data into DB10 on the target, set this to 10. When set to -1, database numbers are preserved: source DB0 goes to target DB0, DB1 to DB1, and so on.
  filter.db.whitelist  Only the listed DBs pass through. For example, 0;5;10 lets db0, db5, and db10 through; everything else is filtered out.
So, for example, to sync only DB10 on the source to DB10 on the target, change the configuration as follows:

  target.db = 10
  filter.db.whitelist = 10
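To make the interaction between these two options concrete, here is a small Python sketch of the routing logic (assumed behavior modeled on the table above, not RedisShake's actual source code):

```python
# Sketch of how filter.db.whitelist and target.db route databases during sync.

def parse_whitelist(raw: str) -> set[int]:
    """Parse a filter.db.whitelist value such as "0;5;10" into DB numbers."""
    raw = raw.strip()
    if not raw:
        return set()  # empty whitelist: everything passes
    return {int(db) for db in raw.split(";")}

def db_passes(source_db: int, whitelist: set[int]) -> bool:
    """A source DB passes when no whitelist is set, or when it is listed."""
    return not whitelist or source_db in whitelist

def route_db(source_db: int, target_db: int) -> int:
    """target.db = -1 keeps the source DB number; >= 0 forces a single DB."""
    return source_db if target_db < 0 else target_db

# With target.db = 10 and filter.db.whitelist = 10, only source DB10
# passes, and it lands in target DB10:
wl = parse_whitelist("10")
print([(db, route_db(db, 10)) for db in range(16) if db_passes(db, wl)])
# → [(10, 10)]
```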

Re-run the Step 4 command. The result is shown below. Done.

One more configuration option worth calling out:

  Parameter   Description
  key_exists  What to do when a key exists on both source and target. rewrite means the source overwrites the target; none means the process exits immediately on a conflict; ignore means the target's key is kept and the source key is skipped. This option has no effect in rump mode.
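To make the three policies concrete, here is a small sketch of the conflict-handling semantics (assumed behavior based on the table above, not RedisShake's actual code), with a plain dict standing in for the target Redis:

```python
# Sketch of key_exists conflict handling; a dict stands in for the target Redis.

def sync_key(target: dict, key: str, value, policy: str) -> None:
    if key not in target:
        target[key] = value    # no conflict: always write
    elif policy == "rewrite":
        target[key] = value    # source overwrites target
    elif policy == "ignore":
        pass                   # keep the target's existing value
    elif policy == "none":
        # abort the whole sync on the first duplicate key
        raise SystemExit(f"duplicate key {key!r}, exiting")

tgt = {"user:1": "old"}
sync_key(tgt, "user:1", "new", "rewrite")
print(tgt)  # → {'user:1': 'new'}
```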
This covers only single-node-to-single-node sync. For clusters and other scenarios, consult the official documentation and test for yourself.


Copyright notice: This is an original article by chinaWu, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://www.cnblogs.com/chinaWu/p/12968429.html