Reference: https://www.linuxtechi.com/configure-nfs-server-clustering-pacemaker-centos-7-rhel-7/
This covers the case where the HA cluster's service is NFS.
First, to prevent split-brain, we build in fencing: detecting a misbehaving cluster node and shutting it out of the cluster.
This fencing is performed by a STONITH resource, for which agents such as
[root@ha01 ~]# pcs stonith list
:
fence_sbd - Fence agent for sbd
fence_scsi - Fence agent for SCSI persistent reservation
fence_virt - Fence agent for virtual machines
:
[root@ha01 ~]#
are available.
Here we use "fence_scsi" to fence through a dedicated device (/dev/sdb, 1 GB).
[root@ha01 ~]# cat /proc/partitions
major minor #blocks name
2 0 4 fd0
8 0 8388608 sda
8 1 524288 sda1
8 2 1048576 sda2
8 3 6814720 sda3
8 16 1048576 sdb <-- fencing device
8 32 104857600 sdc <-- shared storage
11 0 4554752 sr0
[root@ha01 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx. 1 root root 9 Jun 4 02:54 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx. 1 root root 9 Jun 4 02:54 scsi-36000c291795923e7963a793fbe837a0c -> ../../sdc
lrwxrwxrwx. 1 root root 9 Jun 4 02:54 scsi-36000c29bd619c056793ef3d92992dffd -> ../../sdb
lrwxrwxrwx. 1 root root 9 Jun 4 02:54 wwn-0x6000c291795923e7963a793fbe837a0c -> ../../sdc
lrwxrwxrwx. 1 root root 9 Jun 4 02:54 wwn-0x6000c29bd619c056793ef3d92992dffd -> ../../sdb
[root@ha01 ~]#
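Optionally, before wiring it into the cluster, you can sanity-check that the device actually accepts SCSI-3 persistent reservations, which fence_scsi depends on (a quick check, assuming the sg3_utils package is installable from the base repos):
yum install -y sg3_utils    # provides sg_persist
sg_persist --in --read-keys --device=/dev/disk/by-id/wwn-0x6000c29bd619c056793ef3d92992dffd
If this errors out, the device cannot be used with fence_scsi.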
First, groundwork for fence_scsi. Its configuration parameters are displayed by "pcs stonith describe fence_scsi".
[root@ha01 ~]# pcs stonith describe fence_scsi
:
pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. Eg. node1:1;node2:2,3 would
tell the cluster to use port 1 for node1 and ports 2 and 3 for node2
pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list).
pcmk_host_check: How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device
via the 'list' command), static-list (check the pcmk_host_list attribute), status (query the device via the
'status' command), none (assume every device can fence every machine)
pcmk_delay_max: Enable a random delay for stonith actions and specify the maximum of random delay. This prevents double fencing
when using slow devices such as sbd. Use this to enable a random delay for stonith actions. The overall delay is
derived from this random delay value adding a static delay so that the sum is kept below the maximum delay.
pcmk_delay_base: Enable a base delay for stonith actions and specify base delay value. This prevents double fencing when
different delays are configured on the nodes. Use this to enable a static delay for stonith actions. The
overall delay is derived from a random delay value adding this static delay so that the sum is kept below the
maximum delay.
pcmk_action_limit: The maximum number of actions can be performed in parallel on this device Pengine property concurrent-
fencing=true needs to be configured first. Then use this to specify the maximum number of actions can be
performed in parallel on this device. -1 is unlimited.
Default operations:
monitor: interval=60s
[root@ha01 ~]#
With those parameters in mind, create a fencing resource named "disk_fencing".
[root@ha01 ~]# pcs stonith create disk_fencing fence_scsi pcmk_host_list="ha01-1-heartbeat ha02-1-heartbeat" \
pcmk_monitor_action="metadata" \
pcmk_reboot_action="off" \
devices="/dev/disk/by-id/wwn-0x6000c29bd619c056793ef3d92992dffd" \ <--- 対象とした /dev/sdb のby-id値
meta provides="unfencing"
(verify)
[root@ha01 ~]# pcs stonith show disk_fencing
Resource: disk_fencing (class=stonith type=fence_scsi)
Attributes: devices=/dev/disk/by-id/wwn-0x6000c29bd619c056793ef3d92992dffd pcmk_host_list="ha01-1-heartbeat ha02-1-heartbeat" pcmk_monitor_action=metadata pcmk_reboot_action=off
Meta Attrs: provides=unfencing
Operations: monitor interval=60s (disk_fencing-monitor-interval-60s)
[root@ha01 ~]# pcs stonith show
disk_fencing (stonith:fence_scsi): Started ha01-1-heartbeat
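Once it shows as Started, the fencing can also be exercised by hand. Note that this genuinely revokes the target node's access to the fencing device, so only try it in a test window; a manual fence of ha02 would look like:
pcs stonith fence ha02-1-heartbeat
The target's registration key is removed from /dev/sdb, cutting it off from the reservation.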
(to delete)
[root@ha01 ~]# pcs stonith delete disk_fencing
Since the cluster will act as an NFS server, install "nfs-utils" on all cluster members (ha01 and ha02).
Then stop and disable the nfs-lock service; Pacemaker will control it instead.
In addition, adjust the firewall. Concretely:
yum install nfs-utils -y
systemctl stop nfs-lock && systemctl disable nfs-lock
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
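To confirm the firewall picked the rules up:
firewall-cmd --list-services
The output should now include nfs, mountd and rpc-bind.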
Next, groundwork for the shared drive /dev/sdc.
At present, a local device (/dev/sda) and shared drives (/dev/sdb, /dev/sdc) are attached to the cluster nodes.
[root@ha01 ~]# cat /proc/partitions
major minor #blocks name
2 0 4 fd0
8 0 8388608 sda
8 1 524288 sda1
8 2 1048576 sda2
8 3 6814720 sda3
8 16 1048576 sdb
8 32 104857600 sdc
11 0 4554752 sr0
[root@ha01 ~]#
On one of the cluster nodes, create a partition on /dev/sdc with "fdisk /dev/sdc" (gdisk works just as well).
Confirm the partition you created:
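If you want this scripted rather than interactive, a sketch using parted (assuming an MBR label is fine for your setup):
parted -s /dev/sdc mklabel msdos           # new partition table; destroys any existing data
parted -s /dev/sdc mkpart primary 0% 100%  # one partition spanning the whole disk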
[root@ha01 ~]# cat /proc/partitions
major minor #blocks name
2 0 4 fd0
8 0 8388608 sda
8 1 524288 sda1
8 2 1048576 sda2
8 3 6814720 sda3
8 16 1048576 sdb
8 32 104857600 sdc
8 33 104856576 sdc1 <-- newly created partition
11 0 4554752 sr0
[root@ha01 ~]#
[root@ha01 ~]# mkfs.xfs /dev/sdc1
On ha02, "cat /proc/partitions" may still show a bare "/dev/sdc"; in that case, run "partprobe" on ha02 and the new partition is reflected correctly.
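A quick way to confirm both nodes now see the same filesystem is to compare the UUID reported on each node (blkid ships with util-linux):
blkid /dev/sdc1
The UUID printed on ha01 must match the one printed on ha02.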
First, check the current cluster status.
[root@ha01 ~]# pcs status
Cluster name: ha
Stack: corosync
Current DC: ha02-1-heartbeat (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
Last updated: Thu Jun 4 03:24:33 2020
Last change: Thu Jun 4 03:24:31 2020 by root via cibadmin on ha01-1-heartbeat
2 nodes configured
2 resources configured
Online: [ ha01-1-heartbeat ha02-1-heartbeat ]
Full list of resources:
VIP (ocf::heartbeat:IPaddr): Started ha01-1-heartbeat
disk_fencing (stonith:fence_scsi): Started ha02-1-heartbeat
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
[root@ha01 ~]#
Give the resources a group named "nfsgrp":
[root@ha01 ~]# pcs resource create nfsshare Filesystem device=/dev/sdc1 directory=/nfsshare fstype=xfs --group nfsgrp
Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')
[root@ha01 ~]# pcs resource
VIP (ocf::heartbeat:IPaddr): Started ha01-1-heartbeat
Resource Group: nfsgrp
nfsshare (ocf::heartbeat:Filesystem): Started ha01-1-heartbeat
[root@ha01 ~]#
[root@ha01 ~]# pcs resource create nfsd nfsserver nfs_shared_infodir=/nfsshare/nfsinfo --group nfsgrp
Assumed agent name 'ocf:heartbeat:nfsserver' (deduced from 'nfsserver')
[root@ha01 ~]# pcs resource
VIP (ocf::heartbeat:IPaddr): Started ha01-1-heartbeat
Resource Group: nfsgrp
nfsshare (ocf::heartbeat:Filesystem): Started ha01-1-heartbeat
nfsd (ocf::heartbeat:nfsserver): Started ha01-1-heartbeat
[root@ha01 ~]#
[root@ha01 ~]# pcs resource create nfsroot exportfs clientspec="192.168.0.0/24" options=rw,sync,no_root_squash directory=/nfsshare fsid=0 --group nfsgrp
Assumed agent name 'ocf:heartbeat:exportfs' (deduced from 'exportfs')
[root@ha01 ~]# pcs resource
VIP (ocf::heartbeat:IPaddr): Started ha01-1-heartbeat
Resource Group: nfsgrp
nfsshare (ocf::heartbeat:Filesystem): Started ha01-1-heartbeat
nfsd (ocf::heartbeat:nfsserver): Started ha01-1-heartbeat
nfsroot (ocf::heartbeat:exportfs): Started ha01-1-heartbeat
[root@ha01 ~]#
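On the node currently running nfsgrp, you can also double-check that the export really is in effect:
exportfs -v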
[root@ha01 ~]# pcs resource group add nfsgrp VIP
[root@ha01 ~]# pcs resource update VIP cidr_netmask=24
[root@ha01 ~]# pcs resource
Resource Group: nfsgrp
nfsshare (ocf::heartbeat:Filesystem): Started ha01-1-heartbeat
nfsd (ocf::heartbeat:nfsserver): Started ha01-1-heartbeat
nfsroot (ocf::heartbeat:exportfs): Started ha01-1-heartbeat
VIP (ocf::heartbeat:IPaddr): Started ha01-1-heartbeat
[root@ha01 ~]#
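Members of a group start in the order listed (and stop in reverse), so adding VIP last means it comes up only after the export is ready. Should you ever want a different position, pcs can reorder a member; for illustration only (this would put the VIP first, assuming a pcs version that supports --before):
pcs resource group add nfsgrp VIP --before nfsshare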
Each resource you created can be checked individually with, e.g., "pcs resource show nfsshare".
Mount from an NFS client.
The NFS export we created uses the NFSv4 pseudo filesystem (fsid=0), so the mount source is "192.168.0.102:/".
[root@c ~]# mount -t nfs 192.168.0.102:/ /mnt
[root@c ~]# ls -l /mnt/
total 0
drwxr-x--x. 6 root root 166 Jun 4 03:47 nfsinfo <-- created automatically
[root@c ~]# cd /
[root@c /]# tar cvf /mnt/etc.tar ./etc/
[root@c /]# cd /mnt
[root@c mnt]# ls -l
total 22220
-rw-r--r--. 1 root root 22753280 Jun 4 03:50 etc.tar
drwxr-x--x. 6 root root 166 Jun 4 03:47 nfsinfo
[root@c mnt]#
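To make the client mount survive reboots, an /etc/fstab line along these lines should do (mount point and options are just a sketch; adjust to taste):
192.168.0.102:/  /mnt  nfs4  defaults,_netdev  0  0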
Now let's make the cluster fail over:
run "pcs cluster stop" on the currently active node.
The failover itself happens immediately, but on the client side the mount is not usable right after the switch... it stalls for about 60 seconds (presumably while the NFS server's grace period runs out)....
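As an alternative to stopping the cluster stack outright, the active node can be drained with standby, which moves the resources off and back on demand (with the pcs shipped on CentOS 7 this is "pcs cluster standby"; newer pcs uses "pcs node standby"):
pcs cluster standby ha01-1-heartbeat      # resources migrate to ha02
pcs cluster unstandby ha01-1-heartbeat    # node becomes eligible to host resources again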