Category:GlusterFS

Install GlusterFS Server And Client On CentOS 7

GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnect into one large parallel network file system. It is free software, with some parts licensed under the GNU General Public License (GPL) v3 while others are dual licensed under either GPL v2 or the Lesser General Public License (LGPL) v3. GlusterFS is based on a stackable user-space design.

GlusterFS has a client and a server component. Servers are typically deployed as storage bricks, with each server running a glusterfsd daemon to export a local file system as a volume. The glusterfs client process, which connects to servers with a custom protocol over TCP/IP, InfiniBand or Sockets Direct Protocol, creates composite virtual volumes from multiple remote servers using stackable translators. By default, files are stored whole, but striping of files across multiple remote volumes is also supported. The final volume may then be mounted by the client host using its own native protocol via the FUSE mechanism, mounted over NFS v3 using a built-in server translator, or accessed via the gfapi client library. Native-protocol mounts may then be re-exported, e.g. via the kernel NFSv4 server, Samba, or the object-based OpenStack Storage (Swift) protocol using the “UFO” (Unified File and Object) translator.
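
For example, a finished volume can typically be reached either natively or over NFS v3 (an illustration only; the server and volume names here are placeholders, not the ones used below):

mount -t glusterfs server1:/myvol /mnt/myvol                      # native FUSE mount
mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol   # built-in NFSv3 server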

I am using two CentOS 7 nodes with the hostnames glusterfs1 and glusterfs2.

Add the following to /etc/hosts on both servers:

192.168.254.133 glusterfs1
192.168.254.134 glusterfs2
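
A quick resolution check from each node does not hurt (an extra step, not in the original write-up):

ping -c 1 glusterfs2   # from glusterfs1
ping -c 1 glusterfs1   # from glusterfs2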

Installing on CentOS 7 (both nodes):

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum -y install glusterfs glusterfs-fuse glusterfs-server
systemctl start glusterd
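
It is usually worth enabling glusterd as well, so it comes back after a reboot, and confirming it is running:

systemctl enable glusterd
systemctl status glusterd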

Add iptables rules for GlusterFS on both nodes:

-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 111         -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp -s 192.168.254.0/24 --dport 111         -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 2049        -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 24007       -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 38465:38469 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 49152       -j ACCEPT
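
The rules above assume iptables is managed directly (e.g. appended to /etc/sysconfig/iptables and reloaded). A stock CentOS 7 install runs firewalld instead; a rough equivalent sketch, opening the same ports but without the per-source restriction, would be (brick ports start at 49152, one per brick):

firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
firewall-cmd --permanent --add-port=2049/tcp --add-port=24007/tcp
firewall-cmd --permanent --add-port=38465-38469/tcp --add-port=49152/tcp
firewall-cmd --reload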

With the hosts entries in place on both nodes, probe the peers and test the configuration:

[root@glusterfs1 ~]# gluster peer probe glusterfs2
peer probe: success.
[root@glusterfs2 ~]# gluster peer probe glusterfs1
peer probe: success. Host glusterfs1 port 24007 already in peer list
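
The peer relationship can also be verified on either node (an extra check; UUIDs will differ in your setup):

gluster peer status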

At this point I can check the storage pool:

[root@glusterfs1 glusterfs]# gluster pool list
UUID                                    Hostname        State
4cf47688-74ba-4c5b-bf3f-3270bb9a4871    glusterfs2      Connected
a3ce0329-35d8-4774-a061-148a735657c4    localhost       Connected
[root@glusterfs1 ~]# gluster volume status
No volumes present

Create a gluster volume and test replication:
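
The brick directories should exist on both nodes before the volume is created; this step is not shown in the transcript below, so the path is simply taken from the create command:

mkdir -p /data/gluster/brick   # on glusterfs1 and glusterfs2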

[root@glusterfs1 ~]# gluster
gluster> volume create vol0 rep 2 transport tcp glusterfs1:/data/gluster/brick glusterfs2:/data/gluster/brick force
volume create: vol0: success: please start the volume to access data
gluster>
(Note: if volume creation fails for some reason, run setfattr -x trusted.glusterfs.volume-id /data/gluster/brick on the brick and restart glusterd.)
gluster> volume start vol0
volume start: vol0: success
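
Once started, the volume layout and brick processes can be reviewed (not part of the original session):

gluster volume info vol0
gluster volume status vol0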

Create the mount point and mount the volume on both nodes:
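
The mount point itself is not shown in the session below; presumably it was created first on both nodes:

mkdir -p /mnt/gluster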

[root@glusterfs1 ~]# mount -t glusterfs glusterfs1:/vol0 /mnt/gluster/
[root@glusterfs2 ~]# mount -t glusterfs glusterfs1:/vol0 /mnt/gluster/
[root@glusterfs1 ~]# cp /var/log/secure /mnt/gluster/

The content is automatically replicated between the nodes:

[root@glusterfs1 ~]# ls /mnt/gluster/
secure
[root@glusterfs2 ~]# ls /mnt/gluster/
secure
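
To make the mount persistent across reboots, an fstab entry along these lines can be added on each node (a sketch; _netdev defers mounting until the network is up):

glusterfs1:/vol0  /mnt/gluster  glusterfs  defaults,_netdev  0 0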
