RBD Recovery
1. This is a recovery test performed against a healthy, running cluster:
#md5sum CentOS-6.2-x86_64-bin-DVD2.iso
b0b03502875490417c9f8cb9fe8ce6d6  /root/CentOS-6.2-x86_64-bin-DVD2.iso
#rbd import ./CentOS-6.2-x86_64-bin-DVD2.iso rbd/myimg --image-format 2

#rbd info rbd/myimg    # the catch with this method: you need the block-name prefix (you can also look it up in the rbd_directory object)
rbd image 'myimg':
        size 1256 MB in 315 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.cb4bf238e1f29
        format: 2
        features: layering
        flags:
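Format-2 data objects are named after the block_name_prefix plus the object index as 16 hex digits (rbd_data.cb4bf238e1f29.0000000000000000, rbd_data.cb4bf238e1f29.0000000000000001, ...), and with order 22 each object covers 4 MiB of the image, so indices 0 through 314 account for the 315 objects reported above. A minimal sketch of the suffix-to-offset mapping (the example suffix is hypothetical; bash arithmetic is assumed for the hex parse):

obj_size=4194304                               # order 22 => 4 MiB objects
name=rbd_data.cb4bf238e1f29.000000000000013a   # hypothetical example: the last object, index 314
idx=$(( 0x${name##*.} ))                       # strip everything up to the last dot, parse as hex
echo "object $idx starts at byte $(( idx * obj_size ))"   # 314 * 4194304 = 1317011456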
#/etc/init.d/ceph -a stop    # stop the whole cluster
#mkdir recover; cd recover; wget -O rbd_restore https://raw.githubusercontent.com/smmoore/ceph/master/rbd_restore.sh    # download the recovery tool
#for block in $(find /var/lib/ceph/osd/ -type f -name '*.cb4bf238e1f29.*'); do cp $block . ; done    # collect one copy of every object file
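On disk, FileStore escapes the underscore in the object name and appends a placement suffix, so the files actually look like rbd\udata.cb4bf238e1f29.<hex index>__head_<hash>__0; matching on the prefix alone, as above, picks them all up. Before rebuilding, it is worth checking that every object index was collected, because a missing file silently becomes a zero-filled hole in the output. A sketch using the numbers from rbd info above (on a sparse image a never-written object is legitimately absent, but an imported ISO should have all of them):

prefix=cb4bf238e1f29
rbd_size=1317967872
obj_size=4194304
last=$(( (rbd_size - 1) / obj_size ))   # = 314, i.e. objects 0..314
i=0
while [ "$i" -le "$last" ]; do
  hex=$(printf '%016x' "$i")
  ls *"$prefix.$hex"* >/dev/null 2>&1 || echo "missing object index $i"
  i=$(( i + 1 ))
done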

#sh rbd_restore myimg "*data.cb4bf238e1f29*" 1317967872    # arg 1: the output file name; arg 2: the block-name prefix; arg 3: the rbd size in bytes
#ll myimg
-rw-r--r-- 1 root root 1317967872 Jun 28 15:12 myimg
#md5sum myimg
b0b03502875490417c9f8cb9fe8ce6d6  myimg
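The checksum matches the original ISO, so the rebuilt file is a byte-for-byte copy of the image. Once the cluster is healthy again you could, for example, import it back (a sketch; the destination image name here is arbitrary):

#rbd import ./myimg rbd/myimg-restored --image-format 2
#rbd info rbd/myimg-restored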
Collecting the objects on a single node with get_ob.sh (this has limitations):
id=${1}
mkdir $id
for i in `find /var/lib/ceph/osd/ceph-*/current -name "*$id*"`; do cp $i $id'/'; done
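The limitation: with a replicated pool, a single node usually holds only a subset of the objects, so this script has to be run on every OSD node and the results merged. A hedged sketch of one way to do the merge from a single machine (the hostnames are hypothetical, passwordless ssh is assumed, and the remote paths are single-quoted because the FileStore file names contain literal backslashes):

id=${1}
mkdir -p "$id"
for host in osd-node1 osd-node2 osd-node3; do
  ssh "$host" "find /var/lib/ceph/osd/ceph-*/current -type f -name '*$id*'" |
  while read -r f; do
    out="$id/$(basename "$f")"
    # replicas of an object share a file name, so fetch each name only once
    [ -e "$out" ] || ssh -n "$host" "cat '$f'" > "$out"
  done
done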
The rbd_restore.sh script:
#!/bin/sh
#
# AUTHORS
#   Shawn Moore <smmoore@catawba.edu>
#   Rodney Rymer <rrr@catawba.edu>
#
#
# REQUIREMENTS
#   GNU Awk (gawk)
#
#
# NOTES
# This utility assumes one copy of all object files needed to construct the rbd
# is located in the present working directory at the time of execution.
# For example all the rb.0.1032.5e69c215.* files.
#
# When listing the "RBD_SIZE_IN_BYTES", be sure you list the full potential size,
# not just what it appears to be. If you do not know the true size of the rbd,
# you can input a size in bytes that you know is larger than the disk could be
# and it will be a large sparse file with un-partitioned space at the end of the
# disk. In our tests, this doesn't occupy any more space/objects in the cluster
# but the rbd could be resized from within the rbd (VM) to grow. Once you bring
# it up and are able to find the true size, you can resize with "rbd resize ..".
#
# To obtain needed utility input information if not already known run:
#   rbd info RBD
#
# To find needed files we run the following command on all nodes that might have
# copies of the rbd objects:
#   find /${CEPH} -type f -name rb.0.1032.5e69c215.*
# Then copy the files to a single location from all nodes. If using btrfs be
# sure to pay attention to the btrfs snapshots that ceph takes on its own.
# You may want the "current" or one of the "snaps".
#
# We are actually taking our own btrfs snapshots cluster osd wide at the same
# time with parallel ssh and then using the "btrfs subvolume find-new" command to
# merge them all together for disaster recovery, and also outside of ceph rbd
# versioning.
#
# Hopefully once the btrfs send/recv functionality is stable we can switch to it.
#
#
# This utility works for us but may not for you. Always test with non-critical
# data first.
#

# Rados object size
obj_size=4194304

# DD bs value
rebuild_block_size=512

rbd="${1}"
base="${2}"
dir=$base'/'
rbd_size="${3}"
if [ "${1}" = "-h" -o "${1}" = "--help" -o "${rbd}" = "" -o "${base}" = "" -o "${rbd_size}" = "" ]; then
  echo "USAGE: $(echo ${0} | awk -F/ '{print $NF}') RESTORE_RBD BLOCK_PREFIX RBD_SIZE_IN_BYTES"
  exit 1
fi
#base_files=$(ls -1 ${base}.* 2>/dev/null | wc -l | awk '{print $1}')
#if [ ${base_files} -lt 1 ]; then
#  echo "COULD NOT FIND FILES FOR ${base} IN $(pwd)"
#  exit
#fi

# Create full size sparse image. Could use truncate, but wanted
# as few required files as possible, and dd was a must.
dd if=/dev/zero of=${rbd} bs=1 count=0 seek=${rbd_size} 2>/dev/null

for file_name in $(ls ./$dir | grep data); do
  seek_loc=$(echo ${file_name} | awk -F_ '{print $1}' | awk -v os=${obj_size} -v rs=${rebuild_block_size} -F. '{print os*strtonum("0x" $NF)/rs}')
  dd conv=notrunc if=$dir${file_name} of=${rbd} seek=${seek_loc} bs=${rebuild_block_size}
done
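The heart of the script is the seek computation: the first awk strips the FileStore __head suffix, the second parses the hex object index with gawk's strtonum, and the result is converted into a dd seek offset in 512-byte blocks. A worked example with the numbers from this post (the __head hash is made up; gawk is required):

# seek = obj_size * index / block_size; for index 0x13a = 314:
#   4194304 * 314 / 512 = 2572288 blocks, i.e. byte offset 1317011456
printf '%s\n' 'rbd\udata.cb4bf238e1f29.000000000000013a__head_6B2CECC5__0' |
  awk -F_ '{print $1}' |
  awk -v os=4194304 -v rs=512 -F. '{print os*strtonum("0x" $NF)/rs}'   # prints 2572288

Note that, as included here, the script reads the object files from a directory named by its second argument (the directory that get_ob.sh creates), rather than globbing ${base}.* in the current working directory, which is what the commented-out sanity check (left over from the original) still assumes.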