ceph (luminous) BlueStore: moving the db and wal devices to a new SSD (without changing size)

Introduction

As the business grows, each OSD accumulates a large amount of data, so destroying and recreating an OSD just to swap its db or wal device would trigger heavy data migration.
This post shows how to replace only the db or wal device instead, avoiding that migration. You might need this when moving to a faster SSD, or when other partitions on the shared device have failed while the db and wal partitions themselves are still intact. One hard constraint: the new db or wal device must be exactly the same size as the old one; it may not grow or shrink.
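
Before starting, it helps to confirm how the OSD currently maps to its block, db, and wal devices. ceph-volume prints this mapping, including the LV tags edited below (a read-only check; output omitted here):

## list the lvm-based OSDs on this host with their block/db/wal devices and lv tags
[root@test-1 ~]# ceph-volume lvm list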

The steps are as follows:

  1. Set the noout flag and stop the OSD in question.
[root@test-1 ~]# ceph osd set noout
noout is set
[root@test-1 ~]# systemctl stop ceph-osd@1
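
Since the move is only valid when the new partitions are exactly the same size as the old ones, it is cheap to verify that before touching anything else. A minimal check, assuming /dev/vdh3 and /dev/vdh4 are the new wal and db partitions used below:

## byte counts must match pairwise: old wal vs new wal, old db vs new db
[root@test-1 ~]# blockdev --getsize64 /dev/vdf3
[root@test-1 ~]# blockdev --getsize64 /dev/vdh3
[root@test-1 ~]# blockdev --getsize64 /dev/vdf4
[root@test-1 ~]# blockdev --getsize64 /dev/vdh4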

  2. Find the LV backing the OSD and modify the LV tags on the data device.
[root@test-1 tool]# ll /var/lib/ceph/osd/ceph-1/
total 48
-rw-r--r-- 1 ceph ceph 402 Oct 15 14:05 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Oct 15 14:05 block -> /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
lrwxrwxrwx 1 ceph ceph   9 Oct 15 14:05 block.db -> /dev/vdf4
lrwxrwxrwx 1 ceph ceph   9 Oct 15 14:05 block.wal -> /dev/vdf3
-rw-r--r-- 1 ceph ceph   2 Oct 15 14:05 bluefs
-rw-r--r-- 1 ceph ceph  37 Oct 15 14:05 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Oct 15 14:05 fsid
-rw------- 1 ceph ceph  55 Oct 15 14:05 keyring
-rw-r--r-- 1 ceph ceph   8 Oct 15 14:05 kv_backend
-rw-r--r-- 1 ceph ceph  21 Oct 15 14:05 magic
-rw-r--r-- 1 ceph ceph   4 Oct 15 14:05 mkfs_done
-rw-r--r-- 1 ceph ceph  41 Oct 15 14:05 osd_key
-rw-r--r-- 1 ceph ceph   6 Oct 15 14:05 ready
-rw-r--r-- 1 ceph ceph  10 Oct 15 14:05 type
-rw-r--r-- 1 ceph ceph   2 Oct 15 14:05 whoami
## inspect the LV tags on the data device
[root@test-1 tool]# lvs  --separator=';' -o lv_tags /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 
  LV Tags
  ceph.block_device=/dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5,ceph.block_uuid=fvIZR9-G6Pd-o3BR-Vir2-imEH-e952-sIED0E,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=acc6dc6a-79cd-45dc-bf1f-83a576eb8039,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/vdf4,ceph.db_uuid=5fdf11bf-7a3d-4e05-bf68-a03e8360c2b8,ceph.encrypted=0,ceph.osd_fsid=a4b0d600-eed7-4dc6-b20e-6f5dab561be5,ceph.osd_id=1,ceph.type=block,ceph.vdo=0,ceph.wal_device=/dev/vdf3,ceph.wal_uuid=d82d9bb0-ffda-451b-95e1-a16b4baec697
## remove the ceph.db_device tag
[root@test-1 tool]# lvchange --deltag ceph.db_device=/dev/vdf4 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## remove the ceph.db_uuid tag
[root@test-1 tool]# lvchange --deltag ceph.db_uuid=5fdf11bf-7a3d-4e05-bf68-a03e8360c2b8 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## remove the ceph.wal_device tag
[root@test-1 tool]# lvchange --deltag ceph.wal_device=/dev/vdf3 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## remove the ceph.wal_uuid tag
[root@test-1 tool]# lvchange --deltag ceph.wal_uuid=d82d9bb0-ffda-451b-95e1-a16b4baec697 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## add tags for the new db and wal devices plus their uuids; the partition uuids can be found under /dev/disk/by-partuuid/ (a lookup sketch follows this step)
[root@test-1 tool]# lvchange --addtag ceph.db_device=/dev/vdh4 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
[root@test-1 tool]# lvchange --addtag ceph.wal_device=/dev/vdh3 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
[root@test-1 tool]# lvchange --addtag ceph.wal_uuid=74b93324-49fb-426e-9fc0-9fc4d5db9286 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
[root@test-1 tool]# lvchange --addtag ceph.db_uuid=d6de0e5b-f935-46d2-94b0-762b196028de /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 
  Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
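
As noted in the comment above, the uuid values for --addtag come from the new partitions themselves. Two ways to look them up (the blkid form is an alternative to browsing the symlink directory; both are stock tools):

## each symlink under by-partuuid is named for the uuid of the partition it points to
[root@test-1 tool]# ls -l /dev/disk/by-partuuid/
## or query one partition directly
[root@test-1 tool]# blkid -s PARTUUID -o value /dev/vdh4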

  3. Copy the data from the old db and wal devices onto the new ones.
[root@test-1 tool]# dd if=/dev/vdf4 of=/dev/vdh4 bs=4M
7680+0 records in
7680+0 records out
32212254720 bytes (32 GB) copied, 219.139 s, 147 MB/s
[root@test-1 tool]# dd if=/dev/vdf3 of=/dev/vdh3 bs=4M
7680+0 records in
7680+0 records out
32212254720 bytes (32 GB) copied, 431.513 s, 74.6 MB/s
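
Optionally, confirm the copies are byte-for-byte identical before re-activating; cmp prints nothing and exits 0 on a match, though it does re-read both partitions in full:

[root@test-1 tool]# cmp /dev/vdf3 /dev/vdh3
[root@test-1 tool]# cmp /dev/vdf4 /dev/vdh4
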
  4. Unmount the old OSD directory and re-activate the OSD.
[root@test-1 tool]# umount /var/lib/ceph/osd/ceph-1/
[root@test-1 tool]# ceph-volume lvm activate 1 a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 --path /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 /var/lib/ceph/osd/ceph-1/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Running command: chown -R ceph:ceph /dev/dm-1
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/vdh4 /var/lib/ceph/osd/ceph-1/block.db
Running command: chown -R ceph:ceph /dev/vdh4
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block.db
Running command: chown -R ceph:ceph /dev/vdh4
Running command: ln -snf /dev/vdh3 /var/lib/ceph/osd/ceph-1/block.wal
Running command: chown -R ceph:ceph /dev/vdh3
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block.wal
Running command: chown -R ceph:ceph /dev/vdh3
Running command: systemctl enable ceph-volume@lvm-1-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Running command: systemctl start ceph-osd@1
--> ceph-volume lvm activate successful for osd ID: 1
[root@test-1 tool]# ll /var/lib/ceph/osd/ceph-1/
total 24
lrwxrwxrwx 1 ceph ceph 93 Oct 15 15:59 block -> /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
lrwxrwxrwx 1 ceph ceph  9 Oct 15 15:59 block.db -> /dev/vdh4
lrwxrwxrwx 1 ceph ceph  9 Oct 15 15:59 block.wal -> /dev/vdh3
-rw------- 1 ceph ceph 37 Oct 15 15:59 ceph_fsid
-rw------- 1 ceph ceph 37 Oct 15 15:59 fsid
-rw------- 1 ceph ceph 55 Oct 15 15:59 keyring
-rw------- 1 ceph ceph  6 Oct 15 15:59 ready
-rw------- 1 ceph ceph 10 Oct 15 15:59 type
-rw------- 1 ceph ceph  2 Oct 15 15:59 whoami
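
Once the OSD reports up and in again, clear the noout flag that was set at the start:

[root@test-1 tool]# ceph osd unset noout
noout is unset
## confirm the osd is back and the cluster is healthy
[root@test-1 tool]# ceph -s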

At this point the db and wal move is complete. To stress it once more: the replacement db and wal devices must be exactly the same size as the originals.


