Showing posts with label storage. Show all posts

Saturday, 10 May 2014

Set up GlusterFS with a volume replicated over 2 nodes

The server setup:

To install the required packages run on both servers:
sudo apt-get install glusterfs-server
If you want a more up-to-date version of GlusterFS, you can add the following PPA (and run sudo apt-get update afterwards):
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
Now from one of the servers you must connect to the other:
sudo gluster peer probe <ip_of_the_other_server>
You should see the following output:
peer probe: success
You can check the status from any of the hosts with:
sudo gluster peer status
Now we need to create the volume where the data will reside. For this, run the following command:
sudo gluster volume create datastore1 replica 2 transport tcp <server1_IP>:/mnt/gfs_block <server2_IP>:/mnt/gfs_block
Where /mnt/gfs_block is the mount point where the data will be on each node and datastore1 is the name of the volume you are creating.

If this has been successful, you should see:
Creation of volume datastore1 has been successful. Please start the volume to access data.
As the message indicates, we now need to start the volume:
sudo gluster volume start datastore1
As a final test, to make sure the volume is available, run gluster volume info.
sudo gluster volume info 
Your GlusterFS volume is ready and will maintain replication across two nodes.
If you want to restrict access to the volume, you can use the following command:
sudo gluster volume set datastore1 auth.allow gluster_client1_ip,gluster_client2_ip
If you need to remove the restriction at any point, you can type:
sudo gluster volume set datastore1 auth.allow *
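The whole server-side sequence above can be sketched as one script. This is a minimal sketch with hypothetical IPs (10.0.0.1 and 10.0.0.2); the run() wrapper only prints each command, so you can review the sequence before swapping the echo for real execution:

```shell
#!/bin/sh
# Hypothetical addresses and paths -- replace with your own.
SERVER1_IP=10.0.0.1
SERVER2_IP=10.0.0.2
VOLNAME=datastore1
BRICK=/mnt/gfs_block

# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

setup_volume() {
    # Run this from SERVER1: peer the second node, then create
    # and start the replicated volume.
    run gluster peer probe "$SERVER2_IP"
    run gluster peer status
    run gluster volume create "$VOLNAME" replica 2 transport tcp \
        "$SERVER1_IP:$BRICK" "$SERVER2_IP:$BRICK"
    run gluster volume start "$VOLNAME"
    run gluster volume info "$VOLNAME"
}

setup_volume
```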

Setup the clients:

Install the needed packages with:
sudo apt-get install glusterfs-client
To mount the volume you must edit the fstab file:
sudo vi /etc/fstab
And append the following to it:
[HOST1]:/[VOLUME]    /[MOUNT] glusterfs defaults,_netdev,backupvolfile-server=[HOST2] 0 0
Where [HOST1] is the IP address of one of the servers and [HOST2] is the IP of the other server. [VOLUME] is the volume name, in our case datastore1, and [MOUNT] is the path where you want the files to appear on the client.
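For example, with hypothetical addresses 10.0.0.1 and 10.0.0.2 for the two servers and /mnt/datastore1 as the client-side mount point, the entry would look like this:

```
10.0.0.1:/datastore1    /mnt/datastore1    glusterfs    defaults,_netdev,backupvolfile-server=10.0.0.2    0 0
```

The backupvolfile-server option lets the client fetch the volume description from the second server if the first one is unreachable at mount time.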

Or, you can also mount the volume using a volume config file:

Create a volume config file for your GlusterFS client.
vi /etc/glusterfs/datastore.vol
In that file, replace [HOST1] with your GlusterFS server 1, [HOST2] with your GlusterFS server 2 and [VOLNAME] with the GlusterFS volume to mount:
 volume remote1
 type protocol/client
 option transport-type tcp
 option remote-host [HOST1]
 option remote-subvolume [VOLNAME]
 end-volume

 volume remote2
 type protocol/client
 option transport-type tcp
 option remote-host [HOST2]
 option remote-subvolume [VOLNAME]
 end-volume

 volume replicate
 type cluster/replicate
 subvolumes remote1 remote2
 end-volume

 volume writebehind
 type performance/write-behind
 option window-size 1MB
 subvolumes replicate
 end-volume

 volume cache
 type performance/io-cache
 option cache-size 512MB
 subvolumes writebehind
 end-volume
Finally, edit fstab to add this config file and its mount point. Replace [MOUNT] with the location to mount the storage to.
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0


Create LVM volume from multiple disks

Recently I had to create an Amazon EC2 instance with a storage capacity of 5 TB. Unfortunately, Amazon only allows us to create 1 TB volumes, so I had to create 5 volumes, attach them to the instance and create a 5 TB LVM device.

My instance was running Ubuntu and I had to install the lvm2 package:
apt-get install lvm2 
The volumes attached to my instance were named from /dev/xvdg to /dev/xvdk.

To find the names you can use the command:
fdisk -l
First we have to prepare our volumes for LVM with:
pvcreate /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk
You can run the following command to check the result:
pvdisplay
The next step is to create a volume group; I used the command:
vgcreate storage /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk 
And used the command:
vgdisplay
to check the result; you can also use:
vgscan 
Now we need to create the logical volume. In this case I wanted to use the entire available space, so I used the command:
lvcreate -n data -l 100%FREE storage
And
lvdisplay 
to check the new volume. If everything goes well, it should be on /dev/storage/data.

You can also use the command:
lvscan
Now you just have to format the new device; you can use:
mkfs -t ext4 /dev/storage/data
When ready you can mount it with:
mount /dev/storage/data /mnt
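The whole sequence can be sketched as a single script. The device names below are the ones from my setup and may differ on yours; the run() wrapper only prints each command, so nothing is touched until you swap the echo for real execution:

```shell
#!/bin/sh
# Device names from my instance -- adjust to what 'fdisk -l' reports.
DISKS="/dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk"
VG=storage
LV=data

# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

build_lvm() {
    run pvcreate $DISKS                      # mark the disks as LVM physical volumes
    run vgcreate "$VG" $DISKS                # group them into one volume group
    run lvcreate -n "$LV" -l 100%FREE "$VG"  # one LV over all free space
    run mkfs -t ext4 "/dev/$VG/$LV"          # format the new device
    run mount "/dev/$VG/$LV" /mnt            # and mount it
}

build_lvm
```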


Tuesday, 12 June 2012

How to reset folder permissions to their default in Ubuntu/Debian

If by mistake you've run something like:
sudo chmod 777 -R /
or similar and broken your permissions, it is possible to come back from such a messy situation without reinstalling the system.

One way is to install another machine or VM with the same version of the OS and, on that machine, run these two commands:
find / -exec stat --format "chmod %a %n" {} \; > /tmp/restoreperms.sh
find / -exec stat --format 'chown %U:%G %n' {} \; >> /tmp/restoreperms.sh
Or this one that combines both:
/usr/bin/find / -exec /usr/bin/stat --format="[ ! -L {} ] && /bin/chmod %a %n" {} \; -exec /usr/bin/stat --format="/bin/chown -h %U:%G %n" {} \; > /tmp/restoreperms.sh
Then copy the /tmp/restoreperms.sh file to the machine with the broken permissions:
scp /tmp/restoreperms.sh user@ip_address:/tmp/
and execute it from there.
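To see what the generated restore script looks like before running it against /, you can point the same find/stat pipeline at a small throwaway directory first (the file name and mode below are just for illustration):

```shell
#!/bin/sh
# Build a tiny directory tree and generate a restore script for it.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/etc"
touch "$DEMO/etc/example.conf"
chmod 640 "$DEMO/etc/example.conf"

# Same pipeline as above, scoped to $DEMO instead of /.
find "$DEMO" -exec stat --format "chmod %a %n" {} \; >  "$DEMO/restoreperms.sh"
find "$DEMO" -exec stat --format 'chown %U:%G %n' {} \; >> "$DEMO/restoreperms.sh"

SCRIPT=$(cat "$DEMO/restoreperms.sh")
echo "$SCRIPT"
rm -rf "$DEMO"
```

Each file ends up with one chmod line and one chown line, which is exactly what gets replayed on the broken machine.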

Another way is to use the info from the .deb packages and a script, but for that you'll have to have the .deb packages on your machine; usually they can be found in /var/cache/apt/archives/. This way you don't need a second machine.

The script:
#!/bin/bash
# Restores file permissions for all files on a debian system for which .deb
# packages exist.
#
# Author: Larry Kagan <me at larrykagan dot com>
# Since 2007-02-20

ARCHIVE_DIR=/var/cache/apt/archives/
PACKAGES=`ls $ARCHIVE_DIR`

cd /

function changePerms()
{
    CHOWN="/bin/chown"
    CHMOD="/bin/chmod"
    # Translate the symbolic mode from dpkg -c (e.g. rwxr-xr-x) to octal.
    PERMS=`echo $1 | sed -e 's/--x/1/g' -e 's/-w-/2/g' -e 's/-wx/3/g' -e 's/r--/4/g' -e 's/r-x/5/g' -e 's/rw-/6/g' -e 's/rwx/7/g' -e 's/---/0/g'`
    PERMS=`echo ${PERMS:1}`              # drop the leading file-type character
    OWN=`echo $2 | /usr/bin/tr '/' '.'`  # dpkg lists the owner as user/group
    PATHNAME=$3
    PATHNAME=`echo ${PATHNAME:1}`        # strip the leading '.' from the path

    echo -e "CHOWN: $CHOWN $OWN $PATHNAME"
    result=`$CHOWN $OWN $PATHNAME`
    if [ $? -ne 0 ]; then
        echo -e $result
    fi

    echo -e "CHMOD: $CHMOD $PERMS $PATHNAME"
    result=`$CHMOD $PERMS $PATHNAME`
    if [ $? -ne 0 ]; then
        echo -e $result
    fi
}

for PACKAGE in $PACKAGES; do
    if [ -d $PACKAGE ]; then
        continue
    fi
    echo -e "Getting information for $PACKAGE\n"
    # dpkg -c lists the archive contents: mode, owner/group, ..., path.
    FILES=`/usr/bin/dpkg -c "${ARCHIVE_DIR}${PACKAGE}"`
    echo "$FILES" | awk '{print $1"\t"$2"\t"$6}' | while read line; do
        changePerms $line
    done
done
If that doesn't work, you can try to reinstall every installed package with this script:
#!/bin/bash
for pkg in `dpkg --get-selections | egrep -v deinstall | awk '{print $1}' | egrep -v '(dpkg|apt|mysql|mythtv)'` ; do apt-get -y install --reinstall $pkg ; done
Or with:
dpkg --get-selections \* | awk '{print $1}' | xargs -r -l1 aptitude reinstall
Which does the same.
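Since dpkg --get-selections just prints tab-separated package/state pairs, the filtering part of these one-liners can be checked on its own with fake input (the package names here are made up):

```shell
#!/bin/sh
# Fake 'dpkg --get-selections' output: name<TAB>state, one per line.
SELECTIONS=$(printf 'bash\tinstall\nold-tool\tdeinstall\ncoreutils\tinstall\n')

# Keep only installed packages and drop the state column,
# as the reinstall loops above do.
PKGS=$(printf '%s\n' "$SELECTIONS" | grep -v deinstall | awk '{print $1}')
echo "$PKGS"
```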




Saturday, 31 March 2012

Check file system usage using command line

If you want to check your remaining disk space you can use:
df -h
And if you want to find the files bigger than a given size, you can use this command:
find </path/to/directory/> -type f -size +<size-in-kb>k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
All you need is to specify the path and the size in KB (50000 for roughly 50 MB, for example).

You can also check the top 10 biggest files in a given directory with:
du -sk </path/to/directory/>* | sort -r -n | head -10
Or with a more readable output:
du -sh $(du -sk ./* | sort -r -n | head -10 | cut -d / -f 2)
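You can check the find command's behaviour without touching real data by creating a couple of throwaway files (the sizes here are arbitrary):

```shell
#!/bin/sh
# One small (10 KB) and one large (200 KB) file in a temp directory.
DEMO=$(mktemp -d)
dd if=/dev/zero of="$DEMO/small.bin" bs=1024 count=10  2>/dev/null
dd if=/dev/zero of="$DEMO/big.bin"   bs=1024 count=200 2>/dev/null

# Files bigger than 100 KB -- only big.bin should match.
BIG=$(find "$DEMO" -type f -size +100k)
echo "$BIG"
rm -rf "$DEMO"
```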



Monday, 12 March 2012

Add extra disk as /home

Install the disk and then use:
fdisk -l
to check the new device name. If the new disk is not detected, try installing scsitools:
apt-get install scsitools
then run:
rescan-scsi-bus
and issue fdisk -l again. Supposing your new disk is /dev/sdb, use:
fdisk /dev/sdb
to create a new partition: press n, then p for a primary partition, then enter 1 since this will be the only partition on the drive; when it asks you about the first and last cylinders, just use the defaults.
Now, to format the newly created partition, use:
mkfs.ext4 /dev/sdb1
When done, use:
blkid
to check the new partition's UUID and, using that, edit the /etc/fstab file adding:
UUID=d70d801e-5246-46e2-a7ed-1a95819fd326 /home ext4 errors=remount-ro 0 1
Now mount the new partition on a temporary location with:
mount /dev/sdb1 /mnt/
and copy the contents of the current home to the new partition:
cp -a /home/* /mnt/
When done, delete all contents from the current home:
rm -rf /home/*
Unmount the new partition:
umount /mnt
and remount it under /home:
mount /dev/sdb1 /home/
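The migration can be sketched as one script; /dev/sdb1 is the partition from this example, and the run() wrapper only prints the commands so you can sanity-check the order (copy first, delete only after verifying) before executing anything:

```shell
#!/bin/sh
# Partition name from this example -- adjust to your disk.
NEWPART=/dev/sdb1

# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

migrate_home() {
    run mount "$NEWPART" /mnt     # mount in a temporary location
    run cp -a /home/. /mnt/       # copy, preserving owners and modes
    run rm -rf '/home/*'          # only after verifying the copy!
    run umount /mnt
    run mount "$NEWPART" /home    # remount in its final place
}

migrate_home
```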



Friday, 8 July 2011

Resizing LUNs for Xenserver SRs

Perform steps 2-7 on the Pool Master:

1. Extend the volume/LUN from the SAN management console

2. Execute the following command and note the UUID of the SR:
xe sr-list name-label=<your SR name you want to resize>
3. To get the device name (e.g. PV /dev/sdj) use:
pvscan | grep <the uuid you noted in the previous step>
4. Tell the server to refresh the iSCSI connection:
echo 1 > /sys/block/<device>/device/rescan (e.g. echo 1 > /sys/block/sdj/device/rescan)
5. Resize the volume:
pvresize <device name> (e.g. pvresize /dev/sdj)
6. Rescan the SR:
xe sr-scan <the uuid you noted in the previous step>
7. Verify that the XE host sees the larger physical disk:
pvscan | grep <the uuid you noted in step 2>
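Steps 4-7 can be sketched as a script. The device name and SR UUID below are placeholders (note that xe subcommands take the UUID as a uuid= parameter); the run() wrapper only prints each command:

```shell
#!/bin/sh
# Placeholder values -- take the real UUID from 'xe sr-list' and the
# device name from pvscan.
SR_UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111
DEV=sdj

# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

resize_lun() {
    run sh -c "echo 1 > /sys/block/$DEV/device/rescan"  # refresh the iSCSI size
    run pvresize "/dev/$DEV"                            # grow the physical volume
    run xe sr-scan uuid="$SR_UUID"                      # rescan the SR
    run pvscan                                          # verify the new size
}

resize_lun
```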

References: http://blogs.citrix.com/2011/03/07/live-lun-resize-on-xenserver/
