https://docs.ceph.com/en/reef/rados/configuration/bluestore-config-ref/
How Ceph storage devices are defined.
One or two additional devices can be attached to the main HDD/SSD storage (the primary device).
The DB device should be sized at roughly 1%-4% of the primary device. The ratio apparently depends on the workload (CephFS, RBD, RGW); for CephFS it seems to be 4%. That said, there appear to be various competing "theories" on this.
The DB device and the WAL device may also be placed on a single device.
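As a rough sizing sketch (the SSD device and VG/LV names below are hypothetical, not from this cluster): for a 1 TB data HDD, 4% works out to roughly 40 GB of block.db, which could be carved out of a shared SSD with LVM before the OSD is created.
vgcreate ceph-db /dev/nvme0n1 <-- SSD that will hold the block.db (and WAL) volumes
lvcreate -n db-sdb -L 40G ceph-db <-- roughly 4% of a 1 TB HDD; later passed as db_devices= when the OSD is created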
The so-called storage usage limits (quotas).
https://docs.ceph.com/en/latest/cephfs/quota/
With xfs/ext4 you can cap usage across the whole filesystem per uid/gid/project, but Ceph cannot do that; it only seems to support capping capacity and file counts under a specific directory.
To put a 10 TiB limit on /emfs/user/ABC, apply it with "setfattr -n ceph.quota.max_bytes -v 10Ti /emfs/user/ABC".
To remove it, set the size to 0: "setfattr -n ceph.quota.max_bytes -v 0 /emfs/user/ABC"
For file-count limits, apply ceph.quota.max_files instead of ceph.quota.max_bytes.
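To confirm that a quota actually took effect, the attribute can simply be read back (reusing the example directory above):
getfattr -n ceph.quota.max_bytes /emfs/user/ABC <-- shows the byte limit currently set
getfattr -n ceph.quota.max_files /emfs/user/ABC <-- shows the file-count limit, if one is set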
You will likely run into various warnings, but in general the dashboard shows how to deal with them.
Warnings caused by daemons that are not running:
[root@ceph01 ~]# ceph health detail
HEALTH_WARN 3 failed cephadm daemon(s); insufficient standby MDS daemons available
[WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
daemon mgr.ceph01.ocyoth on ceph01 is in error state
daemon ceph-exporter.ceph01 on ceph01 is in error state
daemon mds.emfs.ceph01.etimbe on ceph01 is in error state
[WRN] MDS_INSUFFICIENT_STANDBY: insufficient standby MDS daemons available
have 0; want 1 more
[root@ceph01 ~]# ceph -s
:
health: HEALTH_WARN
3 failed cephadm daemon(s)
insufficient standby MDS daemons available
:
[root@ceph01 ~]#
In this case the dashboard tells you to start the daemons listed by "ceph health detail", so run the following:
[root@ceph01 ~]# ceph orch daemon start mgr.ceph01.ocyoth
[root@ceph01 ~]# ceph orch daemon start ceph-exporter.ceph01
[root@ceph01 ~]# ceph orch daemon start mds.emfs.ceph01.etimbe
and that takes care of it.
[root@ceph01 ~]# ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global advanced cluster_network 10.10.10.0/24 *
global basic container_image quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de *
mon advanced auth_allow_insecure_global_id_reclaim false
mon advanced mon_allow_pool_delete true
mon advanced public_network 192.168.0.0/24 *
mgr advanced mgr/cephadm/container_init True *
mgr advanced mgr/cephadm/migration_current 7 *
mgr advanced mgr/dashboard/ALERTMANAGER_API_HOST http://ceph01.sybyl.local:9093 *
mgr advanced mgr/dashboard/GRAFANA_API_SSL_VERIFY false *
mgr advanced mgr/dashboard/GRAFANA_API_URL https://ceph01.sybyl.local:3000 *
mgr advanced mgr/dashboard/PROMETHEUS_API_HOST http://ceph01.sybyl.local:9095 *
mgr advanced mgr/dashboard/ssl_server_port 8443 *
mgr advanced mgr/orchestrator/orchestrator cephadm
osd host:ceph-osd1 basic osd_memory_target 1237406515
osd host:ceph-osd2 basic osd_memory_target 1182852573
osd host:ceph-osd3 basic osd_memory_target 1182775159
osd advanced osd_memory_target_autotune true
osd.0 basic osd_mclock_max_capacity_iops_ssd 17326.774918
osd.1 basic osd_mclock_max_capacity_iops_ssd 17013.178805
osd.6 basic osd_mclock_max_capacity_iops_ssd 11841.277433
mds.emfs basic mds_join_fs emfs
[root@ceph01 ~]#
[root@ceph01 ~]# ceph config get osd osd_op_num_shards_hdd
1
[root@ceph01 ~]# ceph config get osd osd_op_num_threads_per_shard_hdd
5
[root@ceph01 ~]#
https://docs.ceph.com/en/reef/start/hardware-recommendations/#write-caches
dnf install smartmontools hdparm
hdparm -W0 /dev/sda <-- disable the (volatile) write cache
echo "write through" > /sys/class/scsi_disk/0\:0\:0\:0/cache_type <-- switch it to write through
"hdparm -W0 /dev/sda" disables the write cache, but cache_type stays at "write back", so change it to "write through" directly.
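To verify, the current state can be read back like this (the SCSI address 0:0:0:0 is the one from the example above and will differ per disk):
hdparm -W /dev/sda <-- should report write-caching = 0 (off)
cat /sys/class/scsi_disk/0\:0\:0\:0/cache_type <-- should now read "write through"
smartctl -g wcache /dev/sda <-- another way to check the write-cache state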
Log in to the node and install cephadm:
[root@ceph-osd1 ~]# dnf install centos-release-ceph-squid.noarch -y
[root@ceph-osd1 ~]# dnf install cephadm python3-jinja2 python3-pyyaml -y
[root@ceph-osd1 ~]# cephadm shell
[ceph: root@ceph-osd1 /]# ceph daemon osd.0 perf dump | jq .'bluefs'
{
"db_total_bytes": 53682888704,
"db_used_bytes": 27656192,
"wal_total_bytes": 0,
"wal_used_bytes": 0,
"slow_total_bytes": 0,
"slow_used_bytes": 0,
:
[ceph: root@ceph-osd1 /]#
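For convenience, the same output can be boiled down to a block.db usage percentage in one line (plain jq arithmetic, nothing Ceph-specific assumed; run inside the cephadm shell as above):
[ceph: root@ceph-osd1 /]# ceph daemon osd.0 perf dump | jq '.bluefs.db_used_bytes / .bluefs.db_total_bytes * 100'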
Ceph/Windows (the native client) is probably fine for most cases, but serving the filesystem over Samba lets you hook in authentication, which seems the easier route here.
So here I had ceph01, which acts as mgr in the Ceph/cephadm cluster, speak SMB.
Export the "client.emfs" key that was already prepared for client connections:
[root@ceph01 ~]# ceph auth get client.emfs | tee /etc/ceph/ceph.client.emfs.keyring
[client.emfs]
key = AQDTBbJnouWUBxAAhy6T8A6w8nTxcqDZrAaUhA==
caps mds = "allow rwp fsname=emfs"
caps mon = "allow r fsname=emfs"
caps osd = "allow rw tag cephfs data=emfs"
[root@ceph01 ~]# ls -l /etc/ceph/
total 20
-rw-------. 1 root root 151 Feb 17 01:50 ceph.client.admin.keyring
-rw-r--r--. 1 root root 176 Feb 19 00:56 ceph.client.emfs.keyring
-rw-r--r--. 1 root root 271 Feb 17 01:50 ceph.conf
-rw-r--r--. 1 root root 595 Feb 15 04:54 ceph.pub
-rw-r--r--. 1 root root 92 Feb 7 23:15 rbdmap
[root@ceph01 ~]#
With that in place, mount it:
[root@ceph01 ~]# mount -t ceph emfs@.emfs=/ /emfs
[root@ceph01 ~]# df -Tht ceph
Filesystem Type Size Used Avail Use% Mounted on
emfs@4ce725c6-eb0d-11ef-980a-bc24112ffd94.emfs=/ ceph 254G 0 254G 0% /emfs
[root@ceph01 ~]#
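To make the mount survive reboots, an /etc/fstab entry along these lines should work (a sketch; it assumes the keyring and ceph.conf under /etc/ceph are used for lookup, exactly as the manual mount above does):
emfs@.emfs=/ /emfs ceph defaults,_netdev 0 0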
Then install Samba:
[root@ceph01 ~]# dnf install samba
[root@ceph01 ~]# vi /etc/samba/smb.conf
[global]
workgroup = SAMBA
security = user
passdb backend = tdbsam
[emfs]
path = /emfs
browseable = yes
read only = No
inherit acls = Yes
[root@ceph01 ~]# systemctl enable smb --now
[root@ceph01 ~]# firewall-cmd --add-service=samba --zone=public --permanent
[root@ceph01 ~]# firewall-cmd --reload
(if SELinux is enabled)
[root@ceph01 ~]# dnf install policycoreutils-python-utils
[root@ceph01 ~]# semanage fcontext -a -t samba_share_t "/emfs"
[root@ceph01 ~]# restorecon -R -v /emfs
(grant ACLs so that users can actually use the share)
[root@ceph01 ~]# setfacl -m user:saber:rwx /emfs
[root@ceph01 ~]# setfacl -dm user:saber:rwx /emfs
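For the tdbsam password backend, the "saber" account referenced by the ACLs above also has to exist locally and be given an SMB password; a minimal sketch (skip useradd if the account already exists):
[root@ceph01 ~]# useradd saber
[root@ceph01 ~]# smbpasswd -a saber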
Suppose there is an HDD that looks marginal, judging from its SMART values and the like. This is about pulling it out and replacing it with a healthier HDD.
First, mark the marginal HDD out:
[root@ceph01 ~]# ceph osd out osd.3
marked out osd.3.
[root@ceph01 ~]#
(after a while)
[root@ceph01 ~]# ceph osd safe-to-destroy osd.3
OSD(s) 3 are safe to destroy without reducing data durability.
[root@ceph01 ~]# ceph osd tree
:
2 hdd 0.04880 osd.2 up 1.00000 1.00000
3 hdd 0.04880 osd.3 up 0 1.00000 <- the REWEIGHT value drops to 0
:
[root@ceph01 ~]#
Having confirmed that it is safe to remove osd.3, take it out and stop its daemon (which is a container):
[root@ceph01 ~]# ceph osd out osd.3
osd.3 is already out.
[root@ceph01 ~]# ceph orch daemon stop osd.3
Scheduled to stop osd.3 on host 'ceph-osd2'
(on ceph-osd2, which holds osd.3)
[root@ceph-osd2 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
:
8021d6fd6825 quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -n osd.2 -f --set... 15 hours ago Up 15 hours ceph-4ce725c6-eb0d-11ef-980a-bc24112ffd94-osd-2
20fd50080cdc quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -n osd.3 -f --set... 15 hours ago Up 15 hours ceph-4ce725c6-eb0d-11ef-980a-bc24112ffd94-osd-3 <-- this container will disappear
28137df48c7e quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -n osd.7 -f --set... 15 hours ago Up 15 hours ceph-4ce725c6-eb0d-11ef-980a-bc24112ffd94-osd-7
↓
8021d6fd6825 quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -n osd.2 -f --set... 15 hours ago Up 15 hours ceph-4ce725c6-eb0d-11ef-980a-bc24112ffd94-osd-2
28137df48c7e quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -n osd.7 -f --set... 15 hours ago Up 15 hours ceph-4ce725c6-eb0d-11ef-980a-bc24112ffd94-osd-7
[root@ceph-osd2 ~]#
Then, when planting a new HDD in the same slot on ceph-osd2 that the old one was pulled from, use "--replace":
[root@ceph01 ~]# ceph orch osd rm osd.3 --replace
[root@ceph01 ~]# ceph orch osd rm status
No OSD remove/replace operations reported
[root@ceph01 ~]# ceph device ls-lights <-- check whether the HDD slot lamps can be controlled
[root@ceph01 ~]# ceph device ls <-- list the HDD devices
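If the enclosure supports it, the slot LED of the disk to be pulled can be driven from Ceph; a sketch, where <devid> is whatever "ceph device ls" printed for the target disk:
[root@ceph01 ~]# ceph device light on <devid> ident <-- blink the identification LED of that slot
[root@ceph01 ~]# ceph device light off <devid> ident <-- turn it off again after the swap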
Physically swap the HDD, then:
[root@ceph01 ~]# ceph orch daemon add osd ceph-osd2:/dev/sdb
The test environment is Proxmox. I detached one (data) HDD from the OSD host ceph-osd3 and rebooted it.
Naturally,
[root@ceph01 ~]# ceph -s
cluster:
id: 4ce725c6-eb0d-11ef-980a-bc24112ffd94
health: HEALTH_WARN
1 failed cephadm daemon(s)
services:
mon: 3 daemons, quorum ceph01,ceph02,ceph-osd1 (age 2h)
mgr: ceph01.ocyoth(active, since 2h), standbys: ceph02.uuiyno
mds: 1/1 daemons up, 1 standby
osd: 8 osds: 7 up (since 35m), 7 in (since 26m)
data:
volumes: 1/1 healthy
pools: 3 pools, 66 pgs
objects: 24 objects, 587 KiB
usage: 604 MiB used, 349 GiB / 350 GiB avail
pgs: 66 active+clean
[root@ceph01 ~]#
it ends up in a warning state.
Where the problem lies can be seen with:
[root@ceph01 ~]# ceph osd tree down
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.39038 root default
-7 0.14639 host ceph-osd3
1 hdd 0.04880 osd.1 down 0 1.00000
[root@ceph01 ~]# ceph orch ps --service_name osd --sort_by status
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
osd.1 ceph-osd3 error 5m ago 41h - 1128M <unknown> <unknown> <unknown>
osd.4 ceph-osd1 running (2h) 5m ago 43h 62.1M 1180M 19.2.1 f2efb0401a30 24a40935de09
osd.6 ceph-osd1 running (2h) 5m ago 40h 53.5M 1180M 19.2.1 f2efb0401a30 d4c35e3b2434
:
[root@ceph01 ~]#
which shows that osd.1 on ceph-osd3 is the problem.
Let's try replacing this HDD.
First, arrange for the data on osd.1 (the one being replaced) to be redistributed to the other OSDs:
[root@ceph01 ~]# ceph osd out 1
osd.1 is already out.
[root@ceph01 ~]#
Since it is apparently already out (its data already redistributed), stop it and proceed with the HDD replacement:
[root@ceph01 ~]# ceph orch daemon stop osd.1
Scheduled to stop osd.1 on host 'ceph-osd3'
[root@ceph01 ~]# ceph orch osd rm 1 --replace
Scheduled OSD(s) for removal.
VG/LV for the OSDs won't be zapped (--zap wasn't passed).
Run the `ceph-volume lvm zap` command with `--destroy` against the VG/LV if you want them to be destroyed.
[root@ceph01 ~]#
* Note: if the OSD was built with block.db and/or WAL devices, removing the OSD also removes its block.db and WAL volumes.
So to bring in the replacement HDD, the block.db and WAL volumes have to be recreated first.
Physically replace the HDD:
[root@ceph-osd3 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
(snip)
└─ceph--465e1cab--73d2--4083--95ff--b33bed556c53-osd--block--72c87975--6920--4a7f--8a6b--4b594a4bb62c 253:0 0 50G 0 lvm
sdc 8:32 0 50G 0 disk
└─ceph--19239b39--3cd8--4024--91f5--f42239c0e257-osd--block--1951d075--b65d--4c2e--a4a2--fe393621a0f3 253:1 0 50G 0 lvm
sdd 8:48 0 50G 0 disk <-- the replacement HDD
[root@ceph-osd3 ~]#
(recreate block.db and the WAL)
lvcreate -n sdc-db -l 100%FREE ceph-66205364-530c-4132-a1b7-f8b9f00194d1
lvcreate -n sdc-wal -l 100%FREE ceph
Then:
ceph orch daemon add osd ceph-osd3:/dev/sdc,db_devices=/dev/mapper/ceph--66205364--530c--4132--a1b7--f8b9f00194d1-sdc--db,wal_devices=/dev/ceph/sdc-wal
Here, the newly attached /dev/sdd is added to Ceph:
[root@ceph01 ~]# ceph orch daemon add osd ceph-osd3:/dev/sdd
[root@ceph01 ~]# ceph osd tree
:
-7 0.14639 host ceph-osd3
0 hdd 0.04880 osd.0 up 1.00000 1.00000
1 hdd 0.04880 osd.1 destroyed 0 1.00000
5 ssd 0.04880 osd.5 up 1.00000 1.00000
[root@ceph01 ~]#
(after a while)
[root@ceph01 ~]# ceph osd tree
:
-7 0.14639 host ceph-osd3
0 hdd 0.04880 osd.0 up 1.00000 1.00000
1 hdd 0.04880 osd.1 up 1.00000 1.00000
5 ssd 0.04880 osd.5 up 1.00000 1.00000
[root@ceph01 ~]#
and it recovers.
No Rocky Linux 8 packages are provided for Squid (19.x), so I built them myself.
[root@rockylinux8 ~]# cat /etc/redhat-release
Rocky Linux release 8.10 (Green Obsidian)
[root@rockylinux8 ~]# dnf install epel-release -y
[root@rockylinux8 ~]# dnf --enablerepo=devel install CUnit-devel cmake cryptsetup-devel expat-devel fmt-devel fuse-devel gperf gperftools-devel json-devel \
libaio-devel libatomic libbabeltrace-devel libblkid-devel libcap-devel libcap-ng-devel libcurl-devel libevent-devel libibverbs-devel \
libicu-devel libnl3-devel liboath-devel librabbitmq-devel librdkafka-devel librdmacm-devel libxml2-devel lmdb-devel lttng-ust-devel \
lua-devel lz4-devel nasm ncurses-devel ninja-build nss-devel openldap-devel perl python3-Cython python3-devel python3-prettytable \
python3-sphinx re2-devel selinux-policy-devel snappy-devel sqlite-devel thrift-devel xfsprogs-devel xmlstarlet yaml-cpp-devel \
gcc-toolset-11 gcc-toolset-11-build gcc-toolset-11-gcc-c++ gcc-toolset-11-libatomic-devel systemd-devel -y
[root@rockylinux8 ~]# dnf install -y python3-jenkins
[root@rockylinux8 ~]# wget https://download.ceph.com/rpm-squid/el9/SRPMS/ceph-19.2.1-0.el9.src.rpm
[root@rockylinux8 ~]# rpmbuild --rebuild ceph-19.2.1-0.el9.src.rpm
:
(time for tea and cake)
:
[root@rockylinux8 ~]#
Once the build finishes, install ceph-common. It pulls in a lot of related packages, but ceph-common is all that is really needed.
Running the following installs /usr/bin/ceph and /usr/sbin/mount.ceph:
[root@rockylinux8 ~]# cd rpmbuild/RPMS/x86_64
[root@rockylinux8 x86_64]# dnf localinstall \
ceph-common-19.2.1-0.el8.x86_64.rpm \
python3-ceph-argparse-19.2.1-0.el8.x86_64.rpm \
python3-ceph-common-19.2.1-0.el8.x86_64.rpm \
python3-cephfs-19.2.1-0.el8.x86_64.rpm \
python3-rados-19.2.1-0.el8.x86_64.rpm \
python3-rbd-19.2.1-0.el8.x86_64.rpm \
python3-rgw-19.2.1-0.el8.x86_64.rpm \
libcephfs2-19.2.1-0.el8.x86_64.rpm \
librados2-19.2.1-0.el8.x86_64.rpm \
libradosstriper1-19.2.1-0.el8.x86_64.rpm \
librbd1-19.2.1-0.el8.x86_64.rpm \
librgw2-19.2.1-0.el8.x86_64.rpm
[root@rockylinux8 x86_64]#
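A quick sanity check that the client bits landed where expected:
[root@rockylinux8 x86_64]# ceph --version
[root@rockylinux8 x86_64]# ls -l /usr/bin/ceph /usr/sbin/mount.ceph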
Check the current state: "ceph fs ls" shows the filesystem name and the pools it uses (data and metadata).
[root@ceph01 ~]# ceph fs ls
name: emfs, metadata pool: emfs-meta, data pools: [emfs-data ]
[root@ceph01 ~]# ceph fs status
emfs - 0 clients
====
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active cephfs.ceph-osd2.tjxksw Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
emfs-meta metadata 96.0k 47.4G
emfs-data data 0 252G
STANDBY MDS
cephfs.ceph01.ejtdsq
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
[root@ceph01 ~]#
STATE shows "active". To take the filesystem down, first:
[root@ceph01 ~]# ceph fs set emfs down true
emfs marked down.
[root@ceph01 ~]# ceph fs status
emfs - 0 clients
====
POOL TYPE USED AVAIL
emfs-meta metadata 96.0k 47.4G
emfs-data data 0 252G
STANDBY MDS
cephfs.ceph01.ejtdsq
cephfs.ceph-osd2.tjxksw
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
[root@ceph01 ~]# ceph fs rm emfs --yes-i-really-mean-it
That takes the filesystem down and removes it. But the MDS daemons are still around, so delete/stop them as well:
[root@ceph01 ~]# ceph orch ps --daemon_type=mds
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
mds.emfs.ceph01.magcmf ceph01 running (9m) 4m ago 9m 14.8M - 19.2.1 f2efb0401a30 53d4887fd871
mds.emfs.ceph02.nuleix ceph02 running (9m) 9m ago 9m 15.3M - 19.2.1 f2efb0401a30 0808d4835824
[root@ceph01 ~]# ceph orch rm mds.emfs
[root@ceph01 ~]# ceph orch ps --daemon_type=mds
No daemons reported
[root@ceph01 ~]#
With that, the removal is complete.
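Note that "ceph fs rm" leaves the data and metadata pools behind; if they are no longer needed they can be deleted separately (this assumes mon_allow_pool_delete is already true, as it is set elsewhere in these notes):
[root@ceph01 ~]# ceph osd pool delete emfs-data emfs-data --yes-i-really-really-mean-it
[root@ceph01 ~]# ceph osd pool delete emfs-meta emfs-meta --yes-i-really-really-mean-it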
Unfinished.
https://heiterbiswolkig.blogs.nde.ag/2020/12/18/cephadm-changing-a-monitors-ip-address/
Concretely:
Hostname | Role | IP address (old -> new) | backend | OS | Storage | CPU threads
ceph01 | Ceph Manager, Ceph Metadata Server, Ceph Monitor | 192.168.0.47/24 -> 172.16.0.1 | 10.10.10.47/24 | Rockylinux9.4 | system(16GB) /var/lib/ceph(8GB,monitor) | 8 threads (monitor:4 + metadata:2 + manager:1 + others)
ceph02 | Ceph Manager, Ceph Metadata Server, Ceph Monitor | 192.168.0.48/24 -> 172.16.0.2 | 10.10.10.48/24 | | system(16GB) /var/lib/ceph(8GB,monitor) | 8 threads (monitor:4 + metadata:2 + manager:1 + others)
ceph-osd1 | Ceph OSDs, Ceph Monitor | 192.168.0.49/24 -> 172.16.0.3 | 10.10.10.49/24 | | system(24GB: os[16GB]+wal[8GB]) /var/lib/ceph(8GB,monitor) block.db(16GB) OSD(50GB) | 6 threads (monitor:4 + osd:2)
ceph-osd2 | Ceph OSDs | 192.168.0.50/24 -> 172.16.0.4 | 10.10.10.50/24 | | system(16GB) block.db(32GB) OSD(50GB)+OSD(50GB) | 4 threads (OSD:2 + OSD:2)
ceph-osd3 | Ceph OSDs | 192.168.0.51/24 -> 172.16.0.5 | 10.10.10.51/24 | | system(16GB) OSD(50GB)+OSD(50GB) | 4 threads (OSD:2 + OSD:2)
Something like that.
[root@ceph01 ~]# ceph osd set noout
[root@ceph01 ~]# ceph osd set norecover
[root@ceph01 ~]# ceph osd set norebalance
[root@ceph01 ~]# ceph osd set nobackfill
[root@ceph01 ~]# ceph osd set nodown
[root@ceph01 ~]# ceph osd set pause
[root@ceph-osd3 ~]# systemctl stop ceph.target
[root@ceph-osd3 ~]# systemctl disable ceph.target
[root@ceph-osd2 ~]# systemctl stop ceph.target
[root@ceph-osd2 ~]# systemctl disable ceph.target
[root@ceph-osd1 ~]# systemctl stop ceph.target
[root@ceph-osd1 ~]# systemctl disable ceph.target
[root@ceph02 ~]# systemctl stop ceph.target
[root@ceph02 ~]# systemctl disable ceph.target
[root@ceph01 ~]# systemctl stop ceph.target
[root@ceph01 ~]# systemctl disable ceph.target
After this, shut down each node, move it, connect it to the new network (172.16.0.0/24), and adjust DNS/NTP.
Then bring everything back up.
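The bring-up would be the reverse of the shutdown above; a sketch of what it would probably look like (this section is marked unfinished, so treat it as an untested assumption; the monitor addresses themselves still have to be changed as described in the linked article):
[root@ceph01 ~]# systemctl enable ceph.target --now (repeat on every node: ceph01, ceph02, ceph-osd1..3)
[root@ceph01 ~]# ceph osd unset pause
[root@ceph01 ~]# ceph osd unset nodown
[root@ceph01 ~]# ceph osd unset nobackfill
[root@ceph01 ~]# ceph osd unset norebalance
[root@ceph01 ~]# ceph osd unset norecover
[root@ceph01 ~]# ceph osd unset noout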
https://docs.ceph.com/en/reef/cephadm/services/#extra-container-arguments
Keywords: cephadm osd "extra_container_args" cpus
[root@ceph01 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph01 192.168.0.47 _admin
ceph02 192.168.0.48
ceph-osd1 192.168.0.49
ceph-osd2 192.168.0.50
ceph-osd3 192.168.0.51
5 hosts in cluster
[root@ceph01 ~]# ceph osd lspools
1 .mgr
[root@ceph01 ~]# ceph config set mon mon_allow_pool_delete true
[root@ceph01 ~]# ceph osd pool delete .mgr .mgr --yes-i-really-really-mean-it
[root@ceph01 ~]# ceph osd tree
[root@ceph01 ~]# ceph orch osd rm 4 --zap --force
[root@ceph01 ~]# ceph orch osd rm status (monitor the OSD removal status)
[root@ceph01 ~]# ceph orch ps (all containers running under Ceph)
[root@ceph01 ~]# ceph orch host drain ceph-osd3 (remove ceph-osd3's containers from Ceph)
[root@ceph01 ~]# ceph orch ps ceph-osd3 (check whether any containers remain)
[root@ceph01 ~]# ceph orch host rm ceph-osd3 (remove the host ceph-osd3)
[root@ceph01 ~]# ceph orch host drain ceph-osd2
[root@ceph01 ~]# ceph orch ps ceph-osd2
[root@ceph01 ~]# ceph orch host rm ceph-osd2
[root@ceph01 ~]# ceph orch apply mon --placement="ceph01" --dry-run
[root@ceph01 ~]# ceph orch apply mon --placement="ceph01" (restrict mon placement to ceph01 only)
[root@ceph01 ~]# ceph orch host drain ceph-osd1
[root@ceph01 ~]# ceph orch ps ceph-osd1
[root@ceph01 ~]# ceph orch host rm ceph-osd1
[root@ceph01 ~]# ceph orch apply mgr --placement="ceph01"
[root@ceph01 ~]# ceph orch host drain ceph02
[root@ceph01 ~]# ceph orch daemon rm mon.ceph02 --force
[root@ceph01 ~]# ceph orch host rm ceph02
To roll back and disable everything after a "cephadm bootstrap":
[root@ceph01 ~]# systemctl list-unit-files |grep ceph
var-lib-ceph.mount generated -
ceph-9ab38ad2-e6a9-11ef-9ba5-bc24112ffd94@.service indirect disabled
ceph-9ab38ad2-e6a9-11ef-9ba5-bc24112ffd94.target enabled disabled
ceph.target enabled disabled
[root@ceph01 ~]# systemctl disable ceph-9ab38ad2-e6a9-11ef-9ba5-bc24112ffd94@.service ceph-9ab38ad2-e6a9-11ef-9ba5-bc24112ffd94.target ceph.target
[root@ceph01 ~]# reboot
[root@ceph01 ~]# ls -l /etc/systemd/system/ceph*
-rw-r--r--. 1 root root 1181 Feb 9 14:51 /etc/systemd/system/ceph-9ab38ad2-e6a9-11ef-9ba5-bc24112ffd94@.service
-rw-r--r--. 1 root root 157 Feb 9 14:51 /etc/systemd/system/ceph-9ab38ad2-e6a9-11ef-9ba5-bc24112ffd94.target
-rw-r--r--. 1 root root 88 Feb 9 15:18 /etc/systemd/system/ceph.target
[root@ceph01 ~]# rm -rf /etc/systemd/system/ceph*
[root@ceph01 ~]# rm -rf /var/lib/ceph/* /etc/ceph/*
With this, the "cephadm bootstrap" is wiped clean.
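For reference, cephadm also ships a dedicated teardown command that cleans up that host's daemons and data for the given cluster in one shot (using the fsid from the unit names above):
[root@ceph01 ~]# cephadm rm-cluster --force --fsid 9ab38ad2-e6a9-11ef-9ba5-bc24112ffd94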
https://yourcmc.ru/wiki/Ceph_performance
If you are building all-flash (SSD) storage, it seems you should use data-center SSDs rather than consumer SSDs.
It does not say that a consumer SSD is fine for block.db/WAL when HDDs hold the data, but from the discussion about transactions I suspect that advice is mainly about the case where SSDs are the data devices.