Removing an OSD / OSD Node from Ceph Running in Docker
Env.
OS : RHEL 9 ~ / Ubuntu 20.04 ~
Ceph : Octopus ~
From Octopus onward, Ceph is deployed and run as Docker containers.
* The versions listed above are the ones I have actually operated.
Prep.
Restrict unnecessary data movement so the running ceph-cluster is not affected.
root@mon0:~# ceph osd set nobackfill
nobackfill is set
root@mon0:~# ceph osd set noout
noout is set
root@mon0:~# ceph osd set norecover
norecover is set
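The three flags above can also be set in one loop, and the same pattern with `unset` lifts them again after maintenance. A dry-run sketch using `echo`; drop the `echo` on a live cluster:

```shell
# Dry-run: print the commands instead of running them.
# Remove 'echo' on a live cluster; swap 'set' for 'unset' afterwards.
for flag in nobackfill noout norecover; do
  echo ceph osd set "$flag"
done
```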
Stop the OSD services on the node being removed ([ID] is the cluster FSID):
root@OSD-node:~# systemctl stop ceph-[ID]@osd.[N].service
Repeat for every OSD being removed, on each OSD node involved.
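When several OSDs on the same node come out at once, the stop commands can be generated in a loop. A dry-run sketch; the FSID and OSD numbers below are placeholders:

```shell
# FSID (from 'ceph fsid') and OSD ids are hypothetical placeholders.
# Remove 'echo' to actually stop the units.
FSID="4f0be998-0000-0000-0000-000000000000"
for n in 3 4 5; do
  echo systemctl stop "ceph-${FSID}@osd.${n}.service"
done
```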
Main.
Remove the OSD from the ceph-cluster
root@mon0:~# ceph osd out [ID]
root@mon0:~# ceph osd crush remove osd.[ID]
root@mon0:~# ceph auth del osd.[ID]
root@mon0:~# ceph osd down [ID]
root@mon0:~# ceph osd rm [ID]
# When removing the OSD node itself as well
root@mon0:~# ceph osd crush remove [OSD-node]
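When draining several OSDs, the five removal commands above can be scripted per OSD ID. A dry-run sketch with placeholder IDs; remove the `echo` to execute:

```shell
# Dry-run: print the removal sequence for each OSD id (3 4 5 are placeholders).
for id in 3 4 5; do
  echo ceph osd out "$id"
  echo ceph osd crush remove "osd.$id"
  echo ceph auth del "osd.$id"
  echo ceph osd down "$id"
  echo ceph osd rm "$id"
done
```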
Check which OSDs are mapped on the OSD node
root@OSD-node:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda ceph-59e28fbd-b681-4213-9f70-b7f4703e6b68 lvm2 a-- <1.82t 0
/dev/sdb ceph-1d2eeeaa-3de9-4cce-9bd4-e3a68c17a5cc lvm2 a-- <1.82t 0
/dev/sdc ceph-45b9e8c1-38b7-40c4-b8f6-ef1286124318 lvm2 a-- <1.82t 0
/dev/sdd ceph-7da1afdc-06d4-42b7-a291-a434d63aed4c lvm2 a-- <1.82t 0
/dev/sde ceph-15786099-1337-4537-a8ba-a4bf2ca923ed lvm2 a-- <1.82t 0
/dev/sdf ceph-edff68f9-efac-45b8-af75-84887096409c lvm2 a-- <1.82t 0
/dev/sdg ceph-d8e237a5-bb07-4971-b07d-1f06e9a37269 lvm2 a-- <1.82t 0
/dev/sdh ceph-84c4a67f-a683-4c76-a5cd-f8a032b72a3f lvm2 a-- <1.82t 0
root@OSD-node:~# dmsetup status
ceph--15786099--1337--4537--a8ba--a4bf2ca923ed-osd--block--aaff7f61--7a15--4ab6--a5d1--073f8a39cb13: 0 3905937408 linear
ceph--1d2eeeaa--3de9--4cce--9bd4--e3a68c17a5cc-osd--block--ea87d3c7--2b69--4102--bd6c--3123a9565e2c: 0 3905937408 linear
ceph--45b9e8c1--38b7--40c4--b8f6--ef1286124318-osd--block--e0e56e12--f16d--433d--afc2--6a7c4c1c35eb: 0 3905937408 linear
ceph--59e28fbd--b681--4213--9f70--b7f4703e6b68-osd--block--b680ed90--5fa8--4824--9f3f--b56b83805c77: 0 3905937408 linear
ceph--7da1afdc--06d4--42b7--a291--a434d63aed4c-osd--block--6fc6ddff--266c--4fe0--b8b4--644de00f7d62: 0 3905937408 linear
ceph--84c4a67f--a683--4c76--a5cd--f8a032b72a3f-osd--block--df3f5a86--78be--44cb--977d--412c7f564504: 0 3905937408 linear
ceph--d8e237a5--bb07--4971--b07d--1f06e9a37269-osd--block--e43f40f8--af08--4fb3--bd7c--22a152401d97: 0 3905937408 linear
ceph--edff68f9--efac--45b8--af75--84887096409c-osd--block--6ab899c4--dba7--4832--87b5--12fa8bac4aa8: 0 3905937408 linear
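The `dmsetup` names look different from the `pvs` VG names only because device-mapper escapes each `-` inside a VG or LV name by doubling it, then joins VG and LV with a single `-`. The mapping can be reproduced with a simple substitution (VG name taken from the `pvs` output above):

```shell
# device-mapper doubles every '-' inside the VG name (and LV name),
# so the VG from 'pvs' maps onto the prefix seen in 'dmsetup status'.
vg="ceph-59e28fbd-b681-4213-9f70-b7f4703e6b68"
echo "${vg//-/--}"
# prints: ceph--59e28fbd--b681--4213--9f70--b7f4703e6b68
```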
Remove the OSDs to be deleted from among those identified above
root@OSD-node:~# pvremove [/dev/sdn] --force --force -y
root@OSD-node:~# dmsetup remove [ceph--]
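Per disk, the cleanup pairs one PV with one device-mapper entry; the `dmsetup` name can be rebuilt from the VG/LV names by doubling their internal dashes. A dry-run sketch using the `/dev/sda` entry from the output above; remove the `echo` to execute:

```shell
# Dry-run cleanup for one OSD disk (names taken from the example output).
dev="/dev/sda"
vg="ceph-59e28fbd-b681-4213-9f70-b7f4703e6b68"
lv="osd-block-b680ed90-5fa8-4824-9f3f-b56b83805c77"
echo pvremove "$dev" --force --force -y
echo dmsetup remove "${vg//-/--}-${lv//-/--}"
```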
Verifying.
Check the ceph-cluster health, then lift the data-movement restrictions.
root@mon0:~# ceph health
HEALTH_OK
root@mon0:~# ceph osd unset nobackfill
nobackfill is unset
root@mon0:~# ceph osd unset noout
noout is unset
root@mon0:~# ceph osd unset norecover
norecover is unset