Tuesday, October 09, 2012

Installing Gluster on CentOS 6.3

Download the packages,
wget -l 1 -nd -nc -r -A.rpm http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS
Install the packages, including the RDMA package for InfiniBand support,
yum install glusterfs-3.3.0-1.el6.x86_64.rpm glusterfs-fuse-3.3.0-1.el6.x86_64.rpm glusterfs-geo-replication-3.3.0-1.el6.x86_64.rpm glusterfs-rdma-3.3.0-1.el6.x86_64.rpm glusterfs-server-3.3.0-1.el6.x86_64.rpm
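The gluster command-line tool talks to the glusterd management daemon, so start it (and optionally enable it at boot) before creating any volumes. A minimal sketch for CentOS 6, assuming the glusterd init script shipped by the glusterfs-server package:
service glusterd start
chkconfig glusterd on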

To create a volume that works over both TCP and RDMA, use the following command:
gluster volume create VOLNAME transport tcp,rdma BRICKS
Later, to mount it over the RDMA transport, use a mount command like this:
mount -t glusterfs IP:/VOLNAME.rdma /mount/point
To mount the same volume over TCP, it is simply:
mount -t glusterfs IP:/VOLNAME /mount/point
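To make the client mount persistent across reboots, an entry along these lines can be added to /etc/fstab (a sketch; substitute your own server IP, volume name and mount point, with _netdev so mounting waits for the network):
IP:/VOLNAME /mount/point glusterfs defaults,_netdev 0 0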
Example:
gluster> volume create test-volume transport tcp,rdma 192.168.0.104:/test-volume
Creation of volume test-volume has been successful. Please start the volume to access data.
gluster> volume list
test-volume
gluster> volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 07153b66-06ea-4025-97cf-8ae8f0cfc09a
Status: Created
Number of Bricks: 1
Transport-type: tcp,rdma
Bricks:
Brick1: 192.168.0.104:/test-volume
gluster> volume start test-volume
Starting volume test-volume has been successful
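As a quick sanity check, the freshly started volume can be mounted from a client over TCP (the mount point /mnt/test-volume below is just an example):
mkdir -p /mnt/test-volume
mount -t glusterfs 192.168.0.104:/test-volume /mnt/test-volume
df -h /mnt/test-volume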



Sunday, October 07, 2012

Installing "ZFS on Linux" on CentOS 6.3 with DKMS and L2ARC Caching

Prepare the system with the following packages,
yum groupinstall "Development Tools" 
yum install kernel-devel zlib-devel libuuid-devel libblkid-devel libselinux-devel e2fsprogs-devel lsscsi parted nano mdadm bc

Obtain the latest DKMS RPM package from Dell. Note that you need version 2.x or later for the installation to proceed properly.
wget http://linux.dell.com/dkms/permalink/dkms-2.2.0.3-1.noarch.rpm
Install the RPM package,
rpm -Uvh dkms-2.2.0.3-1.noarch.rpm
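To confirm the DKMS framework is in place before building the modules, query the installed package:
rpm -q dkms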

Obtain the latest version of DKMS RPM modules for SPL and ZFS,
wget http://github.com/downloads/zfsonlinux/spl/spl-modules-dkms-0.6.0-rc11.noarch.rpm
wget http://github.com/downloads/zfsonlinux/zfs/zfs-modules-dkms-0.6.0-rc11.noarch.rpm
Install SPL module first,
rpm -Uvh  spl-modules-dkms-0.6.0-rc11.noarch.rpm
Obtain SPL RPM, rebuild and install,
wget http://github.com/downloads/zfsonlinux/spl/spl-0.6.0-rc11.src.rpm
rpmbuild --rebuild spl-0.6.0-rc11.src.rpm
rpm -Uvh rpmbuild/RPMS/x86_64/spl-0.6.0-rc11.el6.x86_64.rpm
Install ZFS module,
rpm -Uvh zfs-modules-dkms-0.6.0-rc11.noarch.rpm

Obtain ZFS RPM, rebuild and install,
wget http://github.com/downloads/zfsonlinux/zfs/zfs-0.6.0-rc11.src.rpm
rpmbuild --rebuild zfs-0.6.0-rc11.src.rpm
rpm -Uvh rpmbuild/RPMS/x86_64/zfs-0.6.0-rc11.el6.x86_64.rpm
rpm -Uvh rpmbuild/RPMS/x86_64/zfs-devel-0.6.0-rc11.el6.x86_64.rpm
rpm -Uvh rpmbuild/RPMS/x86_64/zfs-dracut-0.6.0-rc11.el6.x86_64.rpm
rpm -Uvh rpmbuild/RPMS/x86_64/zfs-test-0.6.0-rc11.el6.x86_64.rpm
Restart your system. DKMS should automatically rebuild the SPL and ZFS modules during boot if your kernel has been updated.
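After the reboot, you can confirm that DKMS has registered and built both modules against the running kernel; the output should list spl and zfs at 0.6.0-rc11 as installed:
dkms status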

To check that the ZFS module is properly loaded, run:
lsmod | grep -i zfs
zfs                  1104868  0
zcommon                43286  1 zfs
znvpair                47487  2 zfs,zcommon
zavl                    6925  1 zfs
zunicode              323120  1 zfs
spl                   253420  5 zfs,zcommon,znvpair,zavl,zunicode
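If lsmod shows nothing, the module may simply not have been loaded yet; load it manually and check again:
modprobe zfs
lsmod | grep -i zfs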
Check the installed hard disks and their sizes,
fdisk -l | grep GB
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdc: 64.0 GB, 64023257088 bytes
Disk /dev/sda: 64.0 GB, 64023257088 bytes
Disk /dev/mapper/vg_azure0-lv_root: 50.2 GB, 50189041664 bytes
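The short sdX names are convenient but are not guaranteed to stay the same across reboots; if you prefer stable device names, the persistent identifiers under /dev/disk/by-id/ can be used instead when building the pool:
ls -l /dev/disk/by-id/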
Create a ZFS storage pool named "tank" as a raidz vdev consisting of sdb, sdd and sde. The -f flag overrides any errors,
zpool create -f tank raidz sdb sdd sde
Check the pool,
zpool status
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors
Set up sdc as the L2ARC cache device (again, -f overrides any errors),
zpool add -f tank cache sdc
Finally, let's check our ZFS pool status,
zpool status
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
        cache
          sdc       ONLINE       0     0     0

errors: No known data errors
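Once the cache device is online, L2ARC activity can be watched through the kstat counters exported under /proc/spl/kstat/zfs (the l2_* fields track cache size, hits and misses):
grep ^l2_ /proc/spl/kstat/zfs/arcstats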
Additionally, let's apply some tuning to the file system for better performance (note that deduplication can consume a lot of RAM, so only enable it if your data actually contains duplicates),
zfs set compression=on tank
zfs set dedup=on tank
zfs set atime=off tank
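To verify the settings took effect, and as a sketch of creating a child dataset that inherits them (the name tank/data is just an example):
zfs create tank/data
zfs get compression,dedup,atime tank/data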
Yeay! Now things are working great! For further info, please refer to the following links:
http://pingd.org/2012/installing-zfs-raid-z-on-centos-6-2-with-ssd-caching.html
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfsadmin.pdf
http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance