Symptom..
A quick check of the cluster status with ceph -s
root@mon0:~# ceph -s
cluster:
id: [fsid]
health: HEALTH_WARN
4 hosts fail cephadm check
Check the symptom in detail with ceph health detail
root@mon0:~# ceph health detail
HEALTH_WARN 4 hosts fail cephadm check
[WRN] CEPHADM_HOST_CHECK_FAILED: 4 hosts fail cephadm check
host osd2 ([osd2_ip]) failed check: Can't communicate with remote host `[osd2_ip]`, possibly because python3 is not installed there. [Errno 111] Connect call failed ('[osd2_ip]', 22)
host osd0 ([osd0_ip]) failed check: Can't communicate with remote host `[osd0_ip]`, possibly because python3 is not installed there. [Errno 111] Connect call failed ('[osd0_ip]', 22)
host osd1 ([osd1_ip]) failed check: Can't communicate with remote host `[osd1_ip]`, possibly because python3 is not installed there. [Errno 111] Connect call failed ('[osd1_ip]', 22)
host mon0 ([mon0_ip]) failed check: Can't communicate with remote host `[mon0_ip]`, possibly because python3 is not installed there. [Errno 111] Connect call failed ('[mon0_ip]', 22)
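The "Errno 111" (connection refused) on port 22 can be reproduced from the admin node; a minimal sketch, assuming nc (netcat) is installed and the host names resolve:
# Port 22 should be refused, while the new port [PORT] should answer.
for h in mon0 osd0 osd1 osd2; do
    nc -zv "$h" 22        # expected: connection refused (matches Errno 111 above)
    nc -zv "$h" [PORT]    # expected: succeeded
done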
Cause..
For security reasons, the SSH port on the Ceph hosts was changed from port 22 to port [PORT].
Because communication and daemon management between hosts in a cephadm-managed cluster is SSH-based, changing the default port causes this symptom.
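To double-check the cause on an individual host, the port sshd actually listens on can be verified directly; a quick sketch, assuming the standard config location and that ss is available:
grep -i '^Port' /etc/ssh/sshd_config   # should show: Port [PORT]
ss -tlnp | grep sshd                   # sshd should be listening on [PORT], not 22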
Troubleshooting..
Check the current SSH config used by cephadm
root@mon0:~# ceph cephadm get-ssh-config
Host *
User root
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
ConnectTimeout=30
Edit the config, following the format confirmed above
root@mon0:~# ceph cephadm get-ssh-config > /root/ssh_config
root@mon0:~# vi /root/ssh_config
Host *
User root
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
ConnectTimeout=30
Port [PORT]
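The same edit can also be scripted instead of opening vi; a minimal sketch, assuming /root/ssh_config is the working copy and [PORT] is replaced with the actual port:
ceph cephadm get-ssh-config > /root/ssh_config
echo "Port [PORT]" >> /root/ssh_config   # append the custom SSH port under the Host * block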
Apply the edited file
root@mon0:~# ceph cephadm set-ssh-config -i /root/ssh_config
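Reading the config back should now show the Port line, confirming that the orchestrator stored the change (a quick sanity check, not part of the original steps):
root@mon0:~# ceph cephadm get-ssh-config | grep Port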
Verifying..
Run check-host with the updated config
root@mon0:~# ceph cephadm check-host mon0
mon0 (None) ok
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit systemd-timesyncd.service is enabled and running
Hostname "mon0" matches what is expected.
Host looks OK
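The same check can be repeated for every host in one go; a minimal sketch using the host names from the health output above:
for h in mon0 osd0 osd1 osd2; do
    ceph cephadm check-host "$h"   # each host should end with "Host looks OK"
done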
Once "Host looks OK" appears, check again with ceph -s
root@mon0:~# ceph -s
cluster:
id: [fsid]
health: HEALTH_OK
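As an extra check (not shown in the original output), the orchestrator's host list should no longer mark any host as offline:
root@mon0:~# ceph orch host ls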