Openfiler 2.99 Active/Passive Cluster How-To

Before getting into what Openfiler actually is, it helps to understand how it differs from its "rival", FreeNAS. Having used both, my subjective take is that feature-wise they are much the same thing: both are Swiss-army-knife open-source NAS distributions. The main difference is that FreeNAS leans towards being an embedded system. You can install FreeNAS on a CF card, it ships with a very complete set of NAS features, even the firewall can be managed from the web interface, and it is very easy for a *NIX newcomer to pick up. Openfiler, by contrast, is essentially CentOS with an extra set of web tools bolted on for managing Samba, NFS, iSCSI and so on. Given its footprint you would not install it on a CF card (it wants roughly 8GB plus a 1GB swap), and if you need a firewall you will be configuring iptables yourself over an ssh console.

So if all you need is a simple NAS, FreeNAS is the obvious first choice. But if you have a more specialised requirement, such as the cluster setup described in this article, go with Openfiler: compared with setting up heartbeat and DRBD by hand on CentOS, it will save you an awful lot of wasted effort.

 

0 – Basic Setup:

Create two VMs in VirtualBox, each with two network cards. The VM settings are as follows (a scripted alternative is sketched after the list):

CPU – 1 core
RAM – 512MB
Hard Disk 1 (sda) – SATA, 12GB
Hard Disk 2 (sdb) – SATA, 32GB
LAN 1 (eth0) – bridged onto the host machine's network card.
LAN 2 (eth1) – attached to the VirtualBox Host-Only Adapter.
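If you prefer to script the VM creation instead of clicking through the GUI, a rough VBoxManage sketch for san1 follows (a sketch only: flag names vary a little between VirtualBox versions, and the bridged/host-only adapter names here are assumptions about your host; repeat with adjusted names for san2):

VBoxManage createvm --name san1 --ostype Linux26_64 --register
VBoxManage modifyvm san1 --memory 512 --cpus 1 \
    --nic1 bridged --bridgeadapter1 eth0 \
    --nic2 hostonly --hostonlyadapter2 vboxnet0
VBoxManage createhd --filename san1-sda.vdi --size 12288     # 12GB system disk
VBoxManage createhd --filename san1-sdb.vdi --size 32768     # 32GB data disk
VBoxManage storagectl san1 --name SATA --add sata
VBoxManage storageattach san1 --storagectl SATA --port 0 --device 0 --type hdd --medium san1-sda.vdi
VBoxManage storageattach san1 --storagectl SATA --port 1 --device 0 --type hdd --medium san1-sdb.vdi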

Install Openfiler on each VM, onto sda. Leave sdb alone for now; it will be partitioned by hand later as the cluster's data partition.

The IP settings for each VM are as follows:

VM1 :
Hostname: san1.host.local
eth0: 128.128.0.5/16
eth1: 192.168.56.1/24

VM2:
Hostname: san2.host.local
eth0: 128.128.0.6/16
eth1: 192.168.56.2/24

Virtual IP: 128.128.0.7 (this is the cluster's virtual IP; both nodes will share this address to serve clients. You will see how it is set up later, when the cluster resources are configured.)

 

1.1 – Configure the hosts file
Edit the hosts file on san1
root@san1 ~# vi /etc/hosts

Add:

192.168.56.2            san2.host.local

Edit the hosts file on san2
root@san2 ~# vi /etc/hosts

Add:

192.168.56.1           san1.host.local
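Before going any further it is worth a quick sanity check that each node can resolve and reach the other over the heartbeat link:

root@san1 ~# ping -c 3 san2.host.local
root@san2 ~# ping -c 3 san1.host.local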

 

1.2 – Set up SSH keys so the two machines can exchange files over scp without passwords; it makes the rest of the setup easier:

On san1:

root@san1 ~# ssh-keygen -t dsa

Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:

 

Do the same thing on san2:
root@san2 ~# ssh-keygen -t dsa

Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:

Exchange the public keys.

On san1:
root@san1 ~# scp ~/.ssh/id_dsa.pub root@san2.host.local:~/.ssh/authorized_keys

On san2:
root@san2 ~# scp ~/.ssh/id_dsa.pub root@san1.host.local:~/.ssh/authorized_keys
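You can confirm that passwordless login now works in both directions; each command should print the remote hostname without prompting for a password:

root@san1 ~# ssh root@san2.host.local hostname
root@san2 ~# ssh root@san1.host.local hostname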

 

2 – Set up the DRBD Meta / Data Partitions

Before configuring corosync/pacemaker we first need to set up DRBD. DRBD keeps block devices synchronised in real time; think of it as RAID-1 mirrored between two machines over the network (the DRBD website has a good architecture diagram).

2.1 – Configure DRBD

We will create two partitions, sda4 and sdb1, and record them in the DRBD configuration file. Remember: the partitions must be created on both machines. If you have not partitioned the disks yet, see the short fdisk sketch below; the listings that follow show the resulting partition tables on both nodes.
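A minimal interactive fdisk session for /dev/sdb is sketched here (the exact prompts differ slightly between fdisk versions); sda4 is created the same way, as an extra primary partition in the free space left on /dev/sda:

root@san1 ~# fdisk /dev/sdb
Command (m for help): n                      (new partition)
Command action: p                            (primary)
Partition number (1-4): 1
First sector: press Enter                    (accept the default)
Last sector: press Enter                     (use the whole disk)
Command (m for help): w                      (write the table and exit)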

root@san1 ~# fdisk -l /dev/sda
Disk /dev/sda: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders, total 25165824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c6d56

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      610469      305203+  83  Linux
/dev/sda2          610470    17382329     8385930   83  Linux
/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris
/dev/sda4        19486845    25165823     2839489+  83  Linux

root@san1 ~# fdisk -l /dev/sdb
Disk /dev/sdb: 34.4 GB, 34359738368 bytes
86 heads, 4 sectors/track, 195083 cylinders, total 67108864 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    67108863    33553408   83  Linux

root@san2 ~# fdisk -l /dev/sda
Disk /dev/sda: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders, total 25165824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c6d56

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      610469      305203+  83  Linux
/dev/sda2          610470    17382329     8385930   83  Linux
/dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris
/dev/sda4        19486845    25165823     2839489+  83  Linux

root@san2 ~# fdisk -l /dev/sdb
Disk /dev/sdb: 34.4 GB, 34359738368 bytes
86 heads, 4 sectors/track, 195083 cylinders, total 67108864 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    67108863    33553408   83  Linux

 

Set up /etc/drbd.conf on san1 as follows:

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
# include "drbd.d/*.res";

resource meta {
    on san1.host.local {
        device /dev/drbd0;
        disk /dev/sda4;
        address 192.168.56.1:7788;
        meta-disk internal;
    }

    on san2.host.local {
        device /dev/drbd0;
        disk /dev/sda4;
        address 192.168.56.2:7788;
        meta-disk internal;
    }
}

resource data {
    on san1.host.local {
        device /dev/drbd1;
        disk /dev/sdb1;
        address 192.168.56.1:7789;
        meta-disk internal;
    }

    on san2.host.local {
        device /dev/drbd1;
        disk /dev/sdb1;
        address 192.168.56.2:7789;
        meta-disk internal;
    }
}

Copy /etc/drbd.conf from san1 to san2.

root@san1 ~# scp /etc/drbd.conf root@san2.host.local:/etc/drbd.conf
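Before creating any metadata it does no harm to sanity-check the configuration on both nodes; drbdadm dump parses /etc/drbd.conf and prints it back, so syntax errors show up here before anything touches a disk:

root@san1 ~# drbdadm dump
root@san2 ~# drbdadm dump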

Run drbdadm on both san1 and san2, as follows:

root@san1 ~# drbdadm create-md meta
root@san1 ~# drbdadm create-md data

root@san2 ~# drbdadm create-md meta
root@san2 ~# drbdadm create-md data

 

If this runs into problems, you can try wiping the partitions with dd and then re-running the commands above. For example:

dd if=/dev/zero of=/dev/sda4

dd if=/dev/zero of=/dev/sdb1

Just make sure you do not dd the wrong partition. Otherwise... reinstall Openfiler.
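Zeroing an entire partition can take a long time. In my experience it is usually enough to wipe just the start of the device, where old filesystem signatures live; only fall back to the full wipe above if create-md still complains. For example:

root@san1 ~# dd if=/dev/zero of=/dev/sda4 bs=1M count=128
root@san1 ~# dd if=/dev/zero of=/dev/sdb1 bs=1M count=128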

OK, if there were no problems you can now start the drbd daemon on both machines.

root@san1 ~# service drbd start

root@san2 ~# service drbd start

On san1, run drbdsetup to make san1 the DRBD primary node.

root@san1 ~# drbdsetup /dev/drbd0 primary -o
root@san1 ~# drbdsetup /dev/drbd1 primary -o

Now check the drbd status:
root@san1 ~# service drbd status

drbd driver loaded OK; device status:
version: 8.3.10 (api:88/proto:86-96)
GIT-hash: 5c0b0469666682443d4785d90a2c603378f9017b build by phil@fat-tyre, 2011-01-28 12:17:35
m:res   cs         ro                 ds                 p  mounted  fstype
0:meta  Connected  Primary/Secondary  UpToDate/UpToDate  C
1:data  Connected  Primary/Secondary  UpToDate/UpToDate  C
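If you check the status immediately after promoting san1, the resources may still be doing their initial sync; in that case the connection state shows SyncSource/SyncTarget instead of Connected, and the disk states only become UpToDate/UpToDate once the sync completes. You can watch the progress with:

root@san1 ~# watch -n1 cat /proc/drbd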

2.2 – Prepare the partition for the system configuration

All of the Openfiler configuration files will be moved onto the DRBD meta resource, /dev/drbd0, which is backed by /dev/sda4.

root@san1 ~# mkfs.ext3 /dev/drbd0

root@san1 ~# service openfiler stop

2.3 – Move the Openfiler configuration on san1 onto the meta partition
root@san1 ~# mkdir /meta
root@san1 ~# mount /dev/drbd0 /meta
root@san1 ~# mv /opt/openfiler/ /opt/openfiler.local
root@san1 ~# mkdir /meta/opt
root@san1 ~# cp -a /opt/openfiler.local /meta/opt/openfiler
root@san1 ~# ln -s /meta/opt/openfiler /opt/openfiler
root@san1 ~# rm /meta/opt/openfiler/sbin/openfiler
root@san1 ~# ln -s /usr/sbin/httpd /meta/opt/openfiler/sbin/openfiler
root@san1 ~# rm /meta/opt/openfiler/etc/rsync.xml
root@san1 ~# ln -s /opt/openfiler.local/etc/rsync.xml /meta/opt/openfiler/etc/
root@san1 ~# mkdir -p /meta/etc/httpd/conf.d

2.4 – Move the SAMBA/NFS/iSCSI/PROFTPD configuration on san1 onto the meta partition
root@san1 ~# service nfslock stop
root@san1 ~# umount -a -t rpc-pipefs
root@san1 ~# mkdir /meta/etc
root@san1 ~# mv /etc/samba/ /meta/etc/
root@san1 ~# ln -s /meta/etc/samba/ /etc/samba
root@san1 ~# mkdir -p /meta/var/spool
root@san1 ~# mv /var/spool/samba/ /meta/var/spool/
root@san1 ~# ln -s /meta/var/spool/samba/ /var/spool/samba
root@san1 ~# mkdir -p /meta/var/lib
root@san1 ~# mv /var/lib/nfs/ /meta/var/lib/
root@san1 ~# ln -s /meta/var/lib/nfs/ /var/lib/nfs
root@san1 ~# mv /etc/exports /meta/etc/
root@san1 ~# ln -s /meta/etc/exports /etc/exports
root@san1 ~# mv /etc/ietd.conf /meta/etc/
root@san1 ~# ln -s /meta/etc/ietd.conf /etc/ietd.conf
root@san1 ~# mv /etc/initiators.allow /meta/etc/
root@san1 ~# ln -s /meta/etc/initiators.allow /etc/initiators.allow
root@san1 ~# mv /etc/initiators.deny /meta/etc/
root@san1 ~# ln -s /meta/etc/initiators.deny /etc/initiators.deny
root@san1 ~# mv /etc/proftpd /meta/etc/
root@san1 ~# ln -s /meta/etc/proftpd/ /etc/proftpd

2.5 – Relink the httpd modules on san1
root@san1 ~# rm /opt/openfiler/etc/httpd/modules
root@san1 ~# ln -s /usr/lib64/httpd/modules /opt/openfiler/etc/httpd/modules

Now you can try starting Openfiler:
root@san1 ~# service openfiler start
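If it comes up cleanly, the Openfiler management GUI should be listening on its default port 446; a quick check from the node itself (assuming curl is installed, which it normally is):

root@san1 ~# curl -k -I https://localhost:446/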

2.6 – Set up the Openfiler environment on san2
On san2 only a little setup is needed, because the meta partition is mounted on san1 and san2 remains the passive node for now.

root@san2 ~# service openfiler stop
root@san2 ~# mkdir /meta
root@san2 ~# mv /opt/openfiler/ /opt/openfiler.local
root@san2 ~# ln -s /meta/opt/openfiler /opt/openfiler

2.7 – Set up the SAMBA/NFS/iSCSI/PROFTPD environment on san2
root@san2 ~# service nfslock stop
root@san2 ~# umount -a -t rpc-pipefs
root@san2 ~# rm -rf /etc/samba/
root@san2 ~# ln -s /meta/etc/samba/ /etc/samba
root@san2 ~# rm -rf /var/spool/samba/
root@san2 ~# ln -s /meta/var/spool/samba/ /var/spool/samba
root@san2 ~# rm -rf /var/lib/nfs/
root@san2 ~# ln -s /meta/var/lib/nfs/ /var/lib/nfs
root@san2 ~# rm -rf /etc/exports
root@san2 ~# ln -s /meta/etc/exports /etc/exports
root@san2 ~# rm /etc/ietd.conf
root@san2 ~# ln -s /meta/etc/ietd.conf /etc/ietd.conf
root@san2 ~# rm /etc/initiators.allow
root@san2 ~# ln -s /meta/etc/initiators.allow /etc/initiators.allow
root@san2 ~# rm /etc/initiators.deny
root@san2 ~# ln -s /meta/etc/initiators.deny /etc/initiators.deny
root@san2 ~# rm -rf /etc/proftpd
root@san2 ~# ln -s /meta/etc/proftpd/ /etc/proftpd

2.8 – Set up the Data Partition

On san1, edit /etc/lvm/lvm.conf as follows:

Find:

filter = [ "a/.*/" ]

Change it to:

filter = [ "a|drbd[0-9]|", "r|.*|" ]

Copy /etc/lvm/lvm.conf from san1 to san2.
root@san1 ~# scp /etc/lvm/lvm.conf root@san2.host.local:/etc/lvm/lvm.conf

You can now create the volume on san1.
root@san1 ~# pvcreate /dev/drbd1
root@san1 ~# vgcreate data /dev/drbd1
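A quick check that LVM now sees only the DRBD device (thanks to the filter above) and that the new volume group exists:

root@san1 ~# pvs
root@san1 ~# vgdisplay data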

3 – Corosync Setup
3.1 – Generate the Corosync authkey

root@san1~# corosync-keygen

There is a small trick here: corosync-keygen needs your help and expects you to hammer on the keyboard while it generates the authkey. This cannot be done from an ssh console, though; you have to type on the VirtualBox console itself (in a real deployment, on a keyboard physically attached to the server). Otherwise this step will take a very long time.
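A workaround I have seen suggested, though not tested here: while corosync-keygen is waiting, generate disk activity from a second session to help fill the kernel entropy pool, for example:

root@san1 ~# find / -type f -exec cat {} + > /dev/null 2>&1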

Copy the authkey from san1 to san2
root@san1 ~# scp /etc/corosync/authkey root@san2.host.local:/etc/corosync/authkey
root@san2 ~# chmod 400 /etc/corosync/authkey

3.2 – Configure pcmk

 

root@san1~# vi /etc/corosync/service.d/pcmk

service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver:  0
}

Copy pcmk from san1 to san2
root@san1~# scp /etc/corosync/service.d/pcmk root@san2.host.local:/etc/corosync/service.d/pcmk

3.3 – Configure Corosync
root@san1~# vi /etc/corosync/corosync.conf

# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.56.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: on
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

Copy corosync.conf from san1 to san2
root@san1~# scp /etc/corosync/corosync.conf root@san2.host.local:/etc/corosync/corosync.conf

4 – Start Corosync

root@san1 ~# chkconfig --level 2345 openfiler off
root@san1 ~# chkconfig --level 2345 nfslock off
root@san1 ~# chkconfig --level 2345 corosync on

root@san2 ~# chkconfig --level 2345 openfiler off
root@san2 ~# chkconfig --level 2345 nfslock off
root@san2 ~# chkconfig --level 2345 corosync on

Try starting Corosync.

root@san1 ~# service corosync start
root@san2 ~# service corosync start

root@san1 ~# ps auxf
root      3480  0.0  0.8 534456  4112 ?        Ssl  19:15   0:00 corosync
root      3486  0.0  0.5  68172  2776 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/stonith
106       3487  0.0  1.0  67684  4956 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/cib
root      3488  0.0  0.4  70828  2196 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/lrmd
106       3489  0.0  0.6  68536  3096 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/attrd
106       3490  0.0  0.6  69064  3420 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/pengine
106       3491  0.0  0.7  76764  3488 ?        S    19:15   0:00  \_ /usr/lib64/heartbeat/crmd

root@san2 ~# crm_mon --one-shot -V

crm_mon[3602]: 2012/06/01_19:32:07 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_mon[3602]: 2012/06/01_19:32:07 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_mon[3602]: 2012/06/01_19:32:07 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
============
Stack: openais
Current DC: san1.host.local – partition with quorum
Version: 1.1.2-c6b59218ee949eebff30e837ff6f3824ed0ab86b
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [san1.host.local san2.host.local ]

4.1 – Configure the cluster resources
On san1:

root@san1~# crm configure
crm(live)configure# property stonith-enabled="false"
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# rsc_defaults $id="rsc-options" resource-stickiness="100"
crm(live)configure# primitive ClusterIP ocf:heartbeat:IPaddr2 params ip="128.128.0.7" cidr_netmask="16" op monitor interval="30s"
crm(live)configure# primitive MetaFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/meta" fstype="ext3"
crm(live)configure# primitive lvmdata ocf:heartbeat:LVM params volgrpname="data"
crm(live)configure# primitive drbd_meta ocf:linbit:drbd params drbd_resource="meta" op monitor interval="15s"
crm(live)configure# primitive drbd_data ocf:linbit:drbd params drbd_resource="data" op monitor interval="15s"
crm(live)configure# primitive openfiler lsb:openfiler
crm(live)configure# primitive iscsi lsb:iscsi-target
crm(live)configure# primitive samba lsb:smb
crm(live)configure# primitive nfs lsb:nfs
crm(live)configure# primitive nfs-lock lsb:nfslock
crm(live)configure# group g_drbd drbd_meta drbd_data
crm(live)configure# group g_services MetaFS lvmdata openfiler ClusterIP iscsi samba nfs nfs-lock
crm(live)configure# ms ms_g_drbd g_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
crm(live)configure# order o_g_servicesafter_g_drbd inf: ms_g_drbd:promote g_services:start
crm(live)configure# commit

Run crm_mon to see what has changed:
root@san1 ~# crm_mon
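Once crm_mon shows every resource started on san1, a simple failover test (a sketch, using the node and resource names defined above) is to put the active node into standby, watch g_services and the DRBD master move over to san2, and then bring san1 back online:

root@san1 ~# crm node standby san1.host.local
root@san1 ~# crm_mon
root@san1 ~# crm node online san1.host.local

Because resource-stickiness is set to 100, the services stay on san2 after san1 comes back instead of failing straight back.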
