Category: Linux

How to deploy servers with KickStart 5.0

  1. Open up vCenter and log in.
  2. Find the folder in which you wish to create the new VM.
    1. Right-click on the folder and select the option to create a new VM.
    2. Go through and select the VM parameters you require (CPU, memory, disk space, etc.).
      NOTE: Keep the disk space to 50 GB and thin provision the VM.
  3. Next you want to edit the VM settings.
    1. Select the CD/DVD option and set it to boot off a Red Hat Linux 6.6 install DVD.
      1. Enable the “Connect at power on” and “Connected” check boxes at the top.
    2. Next, select the network adapter and pick the correct network label (VLAN) so the server will be able to communicate, depending on whichever IP/network you chose.

Note: You will not be able to kickstart if you do not have the proper vlan for your ip.

  1. Next, log into Satellite.
    1. Click on Kickstart in the left pane and then Profiles.
    2. Select the button “Advanced Options”.
    3. Scroll down to network and edit the line as needed:
      1. --bootproto=static --ip=10.2.10.13 --netmask=255.255.255.0 --gateway=10.2.10.254 --hostname=server1.nicktailor.com --nameserver=10.20.0.17

Note: You need to do this if you want the server provisioned with ip and hostname post install.

  2. Scroll down and click Update for the settings to take effect.
  3. Next click on System Details and then Partitioning, and edit the partitions to the specification required. In most cases you won’t need to change this, as it will be a standard template; however, it’s documented here for completeness.

Example of standard partition scheme

part /boot --fstype=ext4 --size=500
part pv.local --size=1000 --grow
volgroup vg_local pv.local
logvol / --fstype ext4 --name=root --vgname=vg_local --size=2048
logvol swap --fstype swap --name=swap --vgname=vg_local --recommended
logvol /tmp --fstype ext4 --name=tmp --vgname=vg_local --size=1024
logvol /usr --fstype ext4 --name=usr --vgname=vg_local --size=4096
logvol /home --fstype ext4 --name=home --vgname=vg_local --size=2048
logvol /var --fstype ext4 --name=var --vgname=vg_local --size=4096 --grow
logvol /var/log --fstype ext4 --name=log --vgname=vg_local --size=2048 --grow
logvol /var/log/audit --fstype ext4 --name=audit --vgname=vg_local --size=1024
logvol /opt --fstype ext4 --name=opt --vgname=vg_local --size=4096 --grow

  • Once you have the desired settings, select “Update Partitions”.

4. Next, select Software.
5. Add or remove any necessary or unnecessary packages.

Prefixing a package name with a minus sign (-) removes it from the base install; simply typing the package name ensures it is added to the base install.

The packages indicated below are an example of a typical base selection:
@ Base
@X Window System
@Desktop
@fonts
python-dmidecode
python-ethtool
rhn-check
rhn-client-tools
rhn-setup
rhncfg-actions
rhncfg-client
yum-rhn-plugin
sssd

6. Select “Update Packages” once you have chosen your base packages.

7. Now boot up the VM. Once your CD/image has booted you should see a GRUB prompt before it boots into the installer; follow the steps below.

8. At the GRUB prompt issue the following command. (Update the IP according to the step above as needed. If you are using DHCP then you just need the URL without the additional parameters.)

linux ks=http://satellite.nicktailor.com/ks/cfg/org/5/label/Kickstartname ip=10.0.12.99 netmask=255.255.255.0 gateway=10.0.12.254 nameserver=10.20.0.17

9. At this point your VM should go through the install without any user interaction and reboot into a functional OS.

Note: Since you have kickstarted your server using Satellite, it will automatically be registered to the Satellite server, saving you the hassle of doing it after the fact.
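Satellite kickstart profiles also support a %post section if you need extra customization after the install; a minimal sketch (the marker file here is purely a hypothetical illustration, not part of the standard template):

%post
# drop a marker so you can tell kickstarted builds apart
echo "kickstarted on $(date)" >> /root/ks-info.log
%end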

 

How to do a full volume heal with glusterfs

How to fix a split-brain fully

If your nodes get out of sync and you know which node is the correct one, you can rebuild the bad node from the good one.

So if you want node 2 to match node 1, follow these steps (run the brick wipe on node 2, the node being rebuilt):

  • gluster volume stop $volumename
  • /etc/init.d/glusterfsd stop
  • rm -rf /mnt/lv_glusterfs/brick/*
  • /etc/init.d/glusterfsd start
  • gluster volume start $volumename force
  • gluster volume heal $volumename full

You should see a successful output, and the “/mnt/lv_glusterfs/brick/” directory will start to match node 1.

Finally, you can run:

  • gluster volume heal $volumename info split-brain (this will show whether there are any split-brains)
  • gluster volume heal $volumename info heal-failed (this will show you files that failed the heal)
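If you want to keep an eye on the heal while it runs, a simple loop with watch does the trick (assuming watch is installed; $volumename as above):

  • watch -n 60 "gluster volume heal $volumename info"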

Cheers

 

How to setup GlusterFS Server/Client

Gluster setup Server/client on both nodes

                         On both machines:

  • wget http://www.nicktailor.com/files/redhat6/glusterfs-3.4.0-8.el6.x86_64.rpm
  • wget http://www.nicktailor.com/files/redhat6/glusterfs-fuse-3.4.0-8.el6.x86_64.rpm
  • wget http://www.nicktailor.com/files/redhat6/glusterfs-server-3.4.0-8.el6.x86_64.rpm
  • wget http://www.nicktailor.com/files/redhat6/glusterfs-libs-3.4.0-8.el6.x86_64.rpm
  • wget http://www.nicktailor.com/files/redhat6/glusterfs-cli-3.4.0-8.el6.x86_64.rpm

    Install GlusterFS Server and Client
  • yum localinstall -y gluster*.rpm
  • yum install fuse

We want to use LVM for the glusterfs, so if we need to increase the size of the volume in future we can do so relatively easily. Repeat these steps on both nodes.

Create your physical volume

  • pvcreate /dev/sdb

Create your volume group

  • vgcreate vg_gluster /dev/sdb

Create the logical volume

  • lvcreate -l100%VG -n lv_gluster vg_gluster

Format your volume

  • mkfs.ext3 /dev/vg_gluster/lv_gluster

Make the directory for your glusterfs

  • mkdir -p /mnt/lv_gluster

Mount the logical volume to your destination

  • mount /dev/vg_gluster/lv_gluster       /mnt/lv_gluster

Create your brick

  • mkdir -p /mnt/lv_gluster/brick

Add to your fstab if you wish for it to automount on reboot

  • echo "" >> /etc/fstab
  • echo "/dev/vg_gluster/lv_gluster /mnt/lv_gluster ext3 defaults 0 0" >> /etc/fstab
  • service glusterd start
  • chkconfig glusterd on
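Since the brick sits on LVM, growing it later is straightforward; a minimal sketch, assuming you have added a new disk (/dev/sdc is a hypothetical device name):

  • pvcreate /dev/sdc
  • vgextend vg_gluster /dev/sdc
  • lvextend -l +100%FREE /dev/vg_gluster/lv_gluster
  • resize2fs /dev/vg_gluster/lv_gluster (ext3 supports growing online)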

Now from server1.nicktailor.com:

 

Test to ensure you can contact your second node

  • gluster peer probe server2.nicktailor.com

Create the glusterfs volume with replication between both nodes (in the examples below the volume name, $volumename, is sftp)

  • gluster volume create $volumename replica 2 transport tcp
    server1.nicktailor.com:/mnt/lv_gluster/brick server2.nicktailor.com:/mnt/lv_gluster/brick

Start the glusterfs volume

  • gluster volume start $volumename
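Before mounting anything, it is worth confirming that the peers and the volume look healthy:

  • gluster peer status (both nodes should show as connected)
  • gluster volume info $volumename (should report Status: Started with both bricks listed)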

Now on server1.nicktailor.com:

Now we need to make the glusterfs directory that everything will write to and replicate from.

NOTE: You will not be able to mount the storage unless your glusterfs volume is started 

  • mkdir /storage
  • mount -t glusterfs server1.nicktailor.com:/sftp /storage 

 

Add these lines for automounting upon reboots

  • echo "" >> /etc/fstab
  • echo "server1.nicktailor.com:/sftp /storage glusterfs defaults,_netdev 0 0" >> /etc/fstab
  • echo "" >> /etc/rc.local
  • echo "grep -v '^\s*#' /etc/fstab | awk '{if (\$3 == \"glusterfs\") print \$2}' | xargs mount" >> /etc/rc.local
  • echo "mount -t glusterfs server1.nicktailor.com:/sftp /storage" >> /etc/rc.local

 

Now on server2.nicktailor.com, do the following after you have installed glusterfs, set up the volume group, and started the glusterd service:

  • mkdir /storage
  • mount -t glusterfs server2.nicktailor.com:/sftp /storage
  • echo "" >> /etc/fstab
  • echo "server2.nicktailor.com:/sftp /storage glusterfs defaults 0 0" >> /etc/fstab (if this doesn't automount, use the mount -t line at the bottom in /etc/rc.local instead)
  • echo "" >> /etc/rc.local
  • echo "grep -v '^\s*#' /etc/fstab | awk '{if (\$3 == \"glusterfs\") print \$2}' | xargs mount" >> /etc/rc.local
  • echo "mount -t glusterfs server2.nicktailor.com:/sftp /storage" >> /etc/rc.local

Cheers, Nick Tailor

If you have questions email nick@nicktailor.com and I will try to answer as soon as I can.

How to do a full restore if you wiped all your LVM’s

I’m sure some of you have had the wonderful opportunity of losing all your LVM info in error. Well, all is not lost and there is hope. I will show you how to restore it.

The beauty of LVM is that it automatically creates backups of the logical volume metadata in the following location.

  • /etc/lvm/archive/

Now, if you had just wiped out your LVM and it was simply using one physical disk for all your LVMs, you could do a full restore with the following:

      • vgcfgrestore -f /etc/lvm/archive/(volumegroup to restore) (destination volumegroup)
        o    (e.g.) vgcfgrestore -f /etc/lvm/archive/vg_dev1_006.000001.vg vg_dev

If you had multiple disks attached to your volume group then you need to do a couple more things to be able to do a restore.

  • Cat the /etc/lvm/archive/whatevervolumegroup.vg file and you should see something like below:
  • physical_volumes {

                        pv0 {

                                    id = "ecFWSM-OH8b-uuBB-NVcN-h97f-su1y-nX7jA9"
                                    device = "/dev/sdj"         # Hint only
                                    status = ["ALLOCATABLE"]
                                    flags = []
                                    dev_size = 524288000    # 250 Gigabytes
                                    pe_start = 2048
                                    pe_count = 63999          # 249.996 Gigabytes
                        }

 

You will need to recreate each physical volume with the UUID listed inside that .vg file before the volume group can be restored.

    • pvcreate --restorefile /etc/lvm/archive/vgfilename.vg --uuid <UUID> <DEVICE>

      (e.g.) pvcreate --restorefile /etc/lvm/archive/vg_data_00122-1284284804.vg --uuid ecFWSM-OH8b-uuBB-NVcN-h97f-su1y-nX7jA9 /dev/sdj
  • Repeat this step for all the physical volumes in the archive .vg file until they have all been created; a quick way to list them is shown below.
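Rather than reading the .vg file by eye, a quick grep pulls out every UUID/device pair at once (filename from the example above):

  • grep -E 'id =|device =' /etc/lvm/archive/vg_data_00122-1284284804.vg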

Once you have completed the above step you should now be able to restore the volume groups that were wiped:

    • vgcfgrestore -f /etc/lvm/archive/(volumegroup to restore) (destination volumegroup)

      o (e.g.) vgcfgrestore -f /etc/lvm/archive/vg_dev1_006.000001.vg vg_dev
  • Running vgdisplay and pvdisplay should show you that everything is back the way it should be.

If you have questions email nick@nicktailor.com

Cheers

 

How to setup NFS server on Centos 6.x

Setup NFS Server in CentOS / RHEL / Scientific Linux 6.3/6.4/6.5

1. Install NFS in Server

  • [root@server ~]# yum install nfs* -y

2. Start NFS service

  • [root@server ~]# /etc/init.d/nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Stopping RPC idmapd:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]

  • [root@server ~]# chkconfig nfs on

3. Install NFS in Client

  • [root@vpn client]# yum install nfs* -y

4. Start NFS service in client

  • [root@vpn client]# /etc/init.d/nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS quotas:                                       [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Stopping RPC idmapd:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]

  • [root@vpn client]# chkconfig nfs on

5. Create shared directories in server

Let us create a shared directory called ‘/home/nicktailor’ on the server and let the client users read and write files in the ‘/home/nicktailor’ directory.

  • [root@server ~]# mkdir /home/nicktailor
  • [root@server ~]# chmod 755 /home/nicktailor/

6. Export shared directory on server

Open /etc/exports file and add the entry as shown below

  • [root@server ~]# vi /etc/exports
  • add the following below
  • /home/nicktailor 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)

where,

/home/nicktailor   – the shared directory

192.168.1.0/24     – IP address range of clients allowed to access the share

rw                 – makes the shared folder writable

sync               – commits changes to disk before replying to requests

no_root_squash     – enables root privilege (users can read, write and delete files in the shared directory)

no_all_squash      – preserves users’ authority
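As an aside, you can also apply changes to /etc/exports without a full service restart:

  • [root@server ~]# exportfs -ra (re-exports everything in /etc/exports)
  • [root@server ~]# exportfs -v (verifies what is currently exported)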

Now restart the NFS service.

  • [root@server ~]# /etc/init.d/nfs restart

Shutting down NFS daemon:                                  [  OK  ]

Shutting down NFS mountd:                                  [  OK  ]

Shutting down NFS services:                                [  OK  ]

Starting NFS services:                                     [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Stopping RPC idmapd:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]       –

7. Mount shared directories in client

Create a mount point to mount the shared directories of server.

To do that create a directory called ‘/nfs/shared’ (You can create your own mount point)

  • [root@vpn client]# mkdir -p /nfs/shared

Now mount the shared directories from server as shown below

  • [root@vpn client]# mount -t nfs 192.168.1.200:/home/nicktailor/ /nfs/shared/

This may take a while and may show a connection timed out error, as it did for me. Don’t panic; the firewall is probably restricting the clients from mounting shares from the server. Simply stop iptables to rectify the problem, or allow the NFS service ports through iptables.
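A quick way to tell whether the export or the firewall is the problem is to query the server’s export list from the client first; if this times out as well, it is almost certainly the firewall:

  • [root@vpn client]# showmount -e 192.168.1.200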

To do that, open the /etc/sysconfig/nfs file and uncomment the port lines shown below (RQUOTAD_PORT, LOCKD_TCPPORT, LOCKD_UDPPORT, MOUNTD_PORT, STATD_PORT and STATD_OUTGOING_PORT).

  • [root@server ~]# vi /etc/sysconfig/nfs

#
# Define which protocol versions mountd
# will advertise. The values are "no" or "yes"
# with yes being the default
#MOUNTD_NFS_V2="no"
#MOUNTD_NFS_V3="no"
#
#
# Path to remote quota server. See rquotad(8)
#RQUOTAD="/usr/sbin/rpc.rquotad"
# Port rquotad should listen on.
RQUOTAD_PORT=875
# Optinal options passed to rquotad
#RPCRQUOTADOPTS=""
#
#
# Optional arguments passed to in-kernel lockd
#LOCKDARG=
# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=32803
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=32769
#
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Number of nfs server processes to be started.
# The default is 8.
#RPCNFSDCOUNT=8
# Stop the nfsd module from being pre-loaded
#NFSD_MODULE="noload"
# Set V4 grace period in seconds
#NFSD_V4_GRACE=90
#
#
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
#RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
MOUNTD_PORT=892
#
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
#STATDARG=""
# Port rpc.statd should listen on.
STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
#RPCIDMAPDARGS=""
#
# Set to turn on Secure NFS mounts.
#SECURE_NFS="yes"
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
#RPCGSSDARGS=""
# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
#RPCSVCGSSDARGS=""
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
#RDMA_PORT=20049

Now restart the NFS service

  • [root@server ~]# /etc/init.d/nfs restart

Shutting down NFS daemon:                                  [  OK  ]

Shutting down NFS mountd:                                  [  OK  ]

Shutting down NFS services:                                [  OK  ]

Starting NFS services:                                     [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Stopping RPC idmapd:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]

Add the NFS port rules shown below (the --dport lines) to the ‘/etc/sysconfig/iptables’ file, above the final REJECT rules.

  • [root@server ~]# vi /etc/sysconfig/iptables

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 662 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Now restart the iptables service

[root@server ~]# service iptables restart

iptables: Flushing firewall rules:                         [  OK  ]

iptables: Setting chains to policy ACCEPT: filter          [  OK  ]

iptables: Unloading modules:                               [  OK  ]

iptables: Applying firewall rules:                         [  OK  ]

Again mount the share from client

  • [root@vpn client]# mount -t nfs 192.168.1.200:/home/nicktailor/ /nfs/shared/

Finally the NFS share is mounted without any connection timed out error.

To verify whether the shared directory is mounted, enter the mount command on the client system.

  • [root@vpn client]# mount

/dev/mapper/vg_vpn-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

nfsd on /proc/fs/nfsd type nfsd (rw)

192.168.1.200:/home/nicktailor/ on /nfs/shared type nfs (rw,vers=4,addr=192.168.1.200,clientaddr=192.168.1.29)

8. Testing NFS

Now create some files or folders in the ‘/nfs/shared’ directory which we mounted in the previous step.

  • [root@vpn shared]# mkdir test
  • [root@vpn shared]# touch file1 file2 file3

Now go to the server and change to the ‘/home/nicktailor’ directory.

[root@server ~]# cd /home/nicktailor/

  • [root@server nicktailor]# ls

file1  file2  file3  test

  • [root@server nicktailor]#

Now the files and directories created from the client are listed on the server. You can also share files from server to client and vice versa.

9. Automount the Shares

If you want the shares mounted automatically instead of mounting them manually at every reboot, add the last line shown below to the ‘/etc/fstab’ file of the client system.

  • [root@vpn client]# vi /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Wed Feb 27 15:35:14 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_vpn-lv_root /                       ext4    defaults        1 1
UUID=59411b1a-d116-4e52-9382-51ff6e252cfb /boot                   ext4    defaults        1 2
/dev/mapper/vg_vpn-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
192.168.1.200:/home/nicktailor /nfs/shared nfs rw,sync,hard,intr 0 0
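You can test the new fstab entry without rebooting:

  • [root@vpn client]# mount -a (mounts everything in fstab that is not already mounted)
  • [root@vpn client]# mount | grep nfs (confirm the share shows up)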

10. Verify the Shares

Reboot your client system and verify whether the share is mounted automatically or not.

  • [root@vpn client]# mount

/dev/mapper/vg_vpn-lv_root on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

192.168.1.200:/home/nicktailor on /nfs/shared type nfs (rw,sync,hard,intr,vers=4,addr=192.168.1.200,clientaddr=192.168.1.29)

nfsd on /proc/fs/nfsd type nfsd (rw)

 

How to move off san boot to local disk with HP servers

How to move off san boot to local disk
===========================
1. Add the disks to the server.
Next do a rescan-scsi-bus.sh to see if the new disks show up.

2. Set up the RAID controller via F8 (HP).
3. Boot off of a system rescue CD.
4. Find the new drive using fdisk -l.
5. Copy the partition table over using dd and reboot to see the new partition table.

Examples:

  • dd if=/dev/mapper/3600508b4000618d90000e0000b8f0000 of=/dev/sda bs=1M
    or 
  • dd if=/dev/sdar of=/dev/cciss/c0d0 bs=1M

Reboot, then unpresent the SANs from Virtual Connect or whatever storage interface you're using.
You need to have the boot-from-SAN volumes disabled in VCEM.

6. Make new swap using cfdisk and then run:

  • mkswap /dev/cciss/c0d0p9 (this is controller 0, drive 0, partition 9)
  • The size of the swap partition will vary; I used 32000M when I created it in cfdisk. You are free to use fdisk to do this also.

7. Now you need to mount / to a directory, so make an empty directory:

  • mkdir /mnt/root
    and mount it, examples below
    mount /dev/sda6 /mnt/root or mount /dev/cciss/c0d0p6 /mnt/root

8. Fix the fstab (cfdisk works great for this if you're on the system rescue disk); see the sketch after this list.

  • vi /mnt/root/etc/fstab
  • change /dev/mapper/mpath0p* to /dev/cciss/c0d0p*
  • comment out the old swap volume
  • add the new swap (/dev/cciss/c0d0p9)
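For reference, the fstab edit amounts to something like the sketch below (partition numbers taken from the examples in this post; yours may differ):

# before (SAN boot, multipath):
/dev/mapper/mpath0p6   /      ext4   defaults   1 1
# after (local Smart Array disk):
/dev/cciss/c0d0p6      /      ext4   defaults   1 1
/dev/cciss/c0d0p9      swap   swap   defaults   0 0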

9. Next fix /mnt/root/etc/multipath.conf:

  • uncomment: devnode "^cciss!c[0-9]d[0-9]*"
  • for EMC add:

device {
vendor "EMC"
product "Invista"
product_blacklist "LUNZ"
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
features "0"
hardware_handler "0"
path_selector "round-robin 0"
path_grouping_policy multibus
rr_weight uniform
no_path_retry 5
rr_min_io 1000
path_checker tur
}

10. Next mount the boot partition.

Examples

  • mount /dev/sda1 /mnt/root/boot
    or 
  • mount /dev/cciss/c0d0p1 /mnt/root/boot

11. Edit grub.conf:

  • vi /mnt/root/boot/grub/grub.conf
  • change /dev/mapper/mpath0p* to /dev/sda*
    or
  • change /dev/mapper/mpath0p* to /dev/cciss/c0d0p*

12. Edit device.map:

  • vi /mnt/root/boot/grub/device.map
  • change /dev/mapper/mpath0 to /dev/sda
    or
  • change /dev/mapper/mpath0 to /dev/cciss/c0d0

13. Fix the initrd (unpack it in an empty working directory):

  • zcat /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img | cpio -i -d
  • edit the file 'init'
  • change the mkrootdev line to /dev/cciss/c0d0p6 (this is the / partition)
  • change the resume line to /dev/cciss/c0d0p9 (this is the new swap partition)

14. Make a backup of the old initrd:

  • mv /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img.backup

15. Recompress the new initrd:

  • find . | cpio -o -H newc | gzip -9 > /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img
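Putting steps 13 through 15 together, the whole initrd edit looks roughly like this (kernel version taken from the example above; adjust to match yours):

  • mkdir /tmp/initrd-work && cd /tmp/initrd-work
  • zcat /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img | cpio -i -d
  • vi init (fix the mkrootdev and resume lines as described above)
  • mv /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img.backup
  • find . | cpio -o -H newc | gzip -9 > /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img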

16. Upon reboot, change the boot order in the BIOS settings to use the HP Smart Array controller.

17. You may need to create a new initrd using the Red Hat Linux distro if the initrd doesn't boot:

  • chroot to /mnt/sysimage for /
  • then go to the boot partition and run:
  • mkinitrd -f -v initrd-2.6.18-53.1.4.el5.img 2.6.18-53.1.4.el5

18. Reboot.

 

 

 

 

How to upgrade mysql 5.1 to 5.6 with WHM doing master-slave

How to upgrade MYSQL in a production environment with WHM

Okay, so if you have a master-slave database setup with large InnoDB and MyISAM databases, you probably want to upgrade to MySQL 5.6. The performance tweaks make a difference, especially in utilizing multiple cores.

Most of the time cPanel is really good at click-to-upgrade and it works; however, if you're running a more complex MySQL setup, simply clicking upgrade in cPanel isn't going to do the trick. I've outlined the process below to help anyone else trying to do this.

  1. Make a backup of the database using Percona and mysqldump
  • The first thing you need to do is make a backup of everything; since we have large InnoDB and MyISAM DBs, using mysqldump alone can be slow.
  • Using Percona, this will back up everything:
    i.    innobackupex /directory-you-want-everything-backed-up-to (this will be an uncompressed backup; see my blog post on multithreaded backups and restores using Percona for more details on how to use Percona backup)
    ii.    Next you need to make a mysqldump of all your databases:
  • mysqldump --all-databases > alldatabases.sql (old school)
  • I do it a bit differently. I have a script that makes a full dump of all the databases and creates a separate .sql file for each DB, in case I need to import a specific database after the fact; a minimal version is sketched after this list.
    http://nicktailor.com/files/mysqldumpbackup.sh (Here is the script; edit it according to your needs.)
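If you don't want to pull down my script, a minimal per-database sketch looks something like this (credentials and the /backups path are assumptions; --single-transaction keeps the InnoDB tables consistent, while MyISAM tables are still dumped as-is):

#!/bin/bash
# hypothetical minimal version: one .sql file per database
for db in $(mysql -N -B -e 'SHOW DATABASES' | grep -vE '^(information_schema|performance_schema)$'); do
    mysqldump --single-transaction "$db" > "/backups/$db.sql"
done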

2.   Now you need to upgrade MySQL, so log into WHM and run the MySQL upgrade in the MySQL section of WHM. If you're running a DB server and have disabled Apache, re-enable it in WHM temporarily, because WHM will be recompiling PHP and EasyApache with the new MySQL modules; once it's done you can disable it again.

  • If your MySQL upgrade fails, check your permissions on MySQL, or run the upgrade from the command line, forced:
    /usr/local/cpanel/scripts/mysqlup --force

And after that run

         /usr/local/cpanel/scripts/easyapache

3. Since WHM upgrades /var/lib/mysql regardless of whether you specified another directory for your data, we're going to have to do a little extra work; while we're at it we're going to shrink the ibdata1 file to fix any InnoDB corruption and save you a ton of space.

  • Find your MySQL data directory. If it's different from /var/lib/mysql, follow these steps; if it's the same, you don't need to.
    i.    Delete everything inside the data directory.
    ii.    Copy everything from /var/lib/mysql to the MySQL data directory:
                  cp -ra /var/lib/mysql /datadirectory
    iii.    Try to start MySQL. If you get an error saying MySQL can't create a PID file, it's probably due to your my.cnf; some settings no longer work with MySQL 5.6, and the easiest way to figure it out is to comment stuff out until it works. I will provide a sample that worked for me. Also, it's easier to start up in safe mode to avoid all the grant-table issues: simply uncomment #skip-grant-tables in the my.cnf file.
    http://www.nicktailor.com/files/my.cnf.sample (this sample has the performance tweaks enabled in it)

         iv.    Once MySQL is started, you want to fix up InnoDB while you have the chance. If you weren't using /var/lib/mysql as your data directory, the upgrade will have already created new ibdata1, ib_logfile0 and ib_logfile1 files. If that is not the case, simply rename those files and restart MySQL, and MySQL will create brand spanking new ones.
         v.    Now we need to restore everything. I have SSD drives, and if you have large DBs you should only be using SSDs anyway. You need to do a mysqldump restore back into MySQL using the alldatabases.sql file you created earlier.

  •          mysql -u root -p<password> < alldatabases.sql (best to run this in a screen session on Linux, as it will take a while and you don't want to lose your connection during this)

       vi.    Once the dump is complete, you need to run mysql_upgrade to upgrade all the databases and tables that didn't get upgraded to the new version, followed by a mysqlcheck:

  • mysql_upgrade -u root -p<password>
  • mysqlcheck --all-databases --check-upgrade --auto-repair

Now you should be able to set grant permissions and the like. If you miss the mysql_upgrade step, some of your sites may work and some may not; in addition, you will probably be unable to set grant permissions in MySQL, and will most likely get a connection error.

4. If you have a slave DB, then you can continue reading. The next piece is fixing our slave; thanks to Percona we can do this quickly. You will notice that your ibdata1 file is tiny and clean now, so the backup will be super fast.

  • You need to take a full backup using Percona:
               i.    innobackupex /directoryyouwanttobackupto
  • Now you need to copy the uncompressed backup to your slave server; you can either scp or rsync, whatever works for you. I have a GigE switch so I rsync it over:
              i.    rsync -rva --numeric-ids --progress . root@192.168.0.20:/backupdirectory (this is just a sample)

              i.    Stop MySQL:
                    a.   /etc/init.d/mysql stop

             ii.    Delete the data directory on the slave:
                    b. rm -rf /mysqldatadirectory/*

            iii.    Do a full Percona restore:
                    c.  innobackupex --copy-back /backupdirectory

5.     Once MySQL is restored, change the ownership of the MySQL files to mysql:mysql, edit your my.cnf, and start up MySQL; you should be good to go. You will need to fix replication; read my MySQL failover setup post on how to do that if you're not sure.
                     chown -R mysql:mysql /mysqldatadirectory
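If you are unsure where to point the slave afterwards, the Percona backup records the master's binlog coordinates in xtrabackup_binlog_info; a rough sketch (host, user, password, and coordinates here are all placeholders):

  • cat /backupdirectory/xtrabackup_binlog_info (shows the binlog file and position the backup was taken at)
  • mysql> CHANGE MASTER TO MASTER_HOST='master.nicktailor.com', MASTER_USER='repl', MASTER_PASSWORD='<password>', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4567;
  • mysql> START SLAVE;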

How to check what processes are using your swap

Here is a little script that will show you what processes are using your swap

http://www.nicktailor.com/files/swapusage

vi swapusage && chmod +x swapusage

copy and paste the script below & save

./swapusage (to run)

#!/bin/bash

# find-out-what-is-using-your-swap.sh
# -- Get current swap usage for all running processes output
# -- rev.0.1, 2011-05-27, Erik Ljungstrom - initial version
SCRIPT_NAME=`basename $0`;
SORT="kb"; # {pid|kB|name} as first parameter, [default: kb]
[ "$1" != "" ] && { SORT="$1"; }

[ ! -x `which mktemp` ] && { echo "ERROR: mktemp is not available!"; exit; }
MKTEMP=`which mktemp`;
TMP=`${MKTEMP} -d`;
[ ! -d "${TMP}" ] && { echo "ERROR: unable to create temp dir!"; exit; }

>${TMP}/${SCRIPT_NAME}.pid;
>${TMP}/${SCRIPT_NAME}.kb;
>${TMP}/${SCRIPT_NAME}.name;

SUM=0;
OVERALL=0;
echo "${OVERALL}" > ${TMP}/${SCRIPT_NAME}.overal;

for DIR in `find /proc/ -maxdepth 1 -type d -regex "^/proc/[0-9]+"`;
do
PID=`echo $DIR | cut -d / -f 3`
PROGNAME=`ps -p $PID -o comm --no-headers`

for SWAP in `grep Swap $DIR/smaps 2>/dev/null | awk '{ print $2 }'`
do
let SUM=$SUM+$SWAP
done

if (( $SUM > 0 ));
then
echo -n ".";
echo -e "${PID}\t${SUM}\t${PROGNAME}" >> ${TMP}/${SCRIPT_NAME}.pid;
echo -e "${SUM}\t${PID}\t${PROGNAME}" >> ${TMP}/${SCRIPT_NAME}.kb;
echo -e "${PROGNAME}\t${SUM}\t${PID}" >> ${TMP}/${SCRIPT_NAME}.name;
fi
let OVERALL=$OVERALL+$SUM
SUM=0
done
echo "${OVERALL}" > ${TMP}/${SCRIPT_NAME}.overal;
echo;
echo "Overall swap used: ${OVERALL} kB";
echo "========================================";
case "${SORT}" in
name )
echo -e "name\tkB\tpid";
echo "========================================";
cat ${TMP}/${SCRIPT_NAME}.name | sort -r;
;;

kb )
echo -e "kB\tpid\tname";
echo "========================================";
cat ${TMP}/${SCRIPT_NAME}.kb | sort -rh;
;;

pid | * )
echo -e "pid\tkB\tname";
echo "========================================";
cat ${TMP}/${SCRIPT_NAME}.pid | sort -rh;
;;
esac
rm -fR "${TMP}/";


How to pass a “password” to su as a variable in a script

How to pass a “password” to su as a variable in script and execute tasks to an array of hosts

Some of you may work for organizations that do access control for Linux servers, in which case they do not use SSH keys for root and are still doing the unthinkable: allowing the use of password authentication.

So this means you have to log into a server and then “su -” to root before you can execute commands, and if you have an array of servers this can be tedious and time consuming. I was told by everyone that you can't pass a password as a variable to su in a script, as it's not allowed.

Guess what… that's a lie, because I'm going to show you how to do it securely.

 

  1. You need to install something called expect on all your servers. This tool is used for scripting interactive sessions; it makes the script do the typing where a human normally would. You can pass variables to it and use it as a wrapper inside another script.
    1. “yum install expect” (on Debian: “apt-get install expect”)
  2. Now what you want to do is log into your server as your user (not root), and inside the home directory set up the following two scripts.
    1. Create a file called gotroot.
    2. vi gotroot, add the following below, and save.

The script below is a wrapper script, that you will use inside another bash script later in this tutorial.

Usage would be:

./gotroot <user> <host> <userpass> <rootpass>

These arguments will get passed to the remote host, which will execute the send commands below (in our case “ls -al”); once it's done it will exit, log out of the server, and return you to the host you started from. This script does not account for SSH fingerprinting, so you will need to ensure you fingerprint each server as your user before using this script. I will add fingerprinting in future, just got lazy.

What I like to do is write a bash script that is going to do a bunch of tasks, scp it over to all the servers as my user, then comment out the “ls -al” section and uncomment the section where you tell it to run the bash script. This will then log in to the server, su to root, execute your bash script, exit, and log out.

========================
#!/usr/bin/expect -f
set mypassword [lindex $argv 2]
set mypassword2 [lindex $argv 3]
set user [lindex $argv 0]
set host [lindex $argv 1]
spawn ssh -tq $user@$host

########################################################

#this section is only needed if you are NOT using ssh keys
#expect "Password:"
#send "$mypassword\r"
#expect "$ "

#########################################################

send "su -\r"
expect "Password:"

send "$mypassword2\r"
expect "$ "

#this will execute a command on the remote host

send "ls -al\r"
expect "$ "

#this will execute the script you want to run on the remote host
#send "/home/nicktailor/script.pl\r"
#expect "$ "

#this command will exit the remote host
send "exit\r"
send "exit\r"

interact

==============================================

How to wrap this script so it will run against an array of hosts within bash.

  1. Create a file called hosts.
    1. vi hosts and add the following below into it.
    2. Also create a file called logs.txt: “touch logs.txt”

This script will allow you to use the above script as a wrapper; it will go and execute the command you want from gotroot on each host in the SERVERS variable listed below.

In addition, it will prompt you for the user name and root password, and will not show the passwords you enter; it prompts you the same way as if you were to do “su -”. It will then take those credentials and use them for each host securely; the passwords will not show up in logs or history anywhere, as some security departments would have issues with that.

So you simply type “./hosts”

It will prompt you for whatever it requires to continue. Just be sure that you add the array of hosts you want to execute the tasks on, and that you have set up an SSH fingerprint as your user first. Expect scripts are extremely easy to learn once you play with this.
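One way to pre-seed those fingerprints, so expect never hits the host-key prompt, is ssh-keyscan (host1 host2 host3 are the same placeholders used in the SERVERS variable below):

  • ssh-keyscan host1 host2 host3 >> ~/.ssh/known_hosts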

=======================================
#!/bin/bash

#########################

#some colour constants #

#########################

CLR_txtblk='\e[0;30m' # Black - Regular
CLR_txtred='\e[0;31m' # Red
CLR_txtgrn='\e[0;32m' # Green
CLR_txtylw='\e[0;33m' # Yellow
CLR_txtblu='\e[0;34m' # Blue
CLR_txtpur='\e[0;35m' # Purple
CLR_txtcyn='\e[0;36m' # Cyan
CLR_txtwht='\e[0;37m' # White
CLR_bldblk='\e[1;30m' # Black - Bold
CLR_bldred='\e[1;31m' # Red
CLR_bldgrn='\e[1;32m' # Green
CLR_bldylw='\e[1;33m' # Yellow
CLR_bldblu='\e[1;34m' # Blue
CLR_bldpur='\e[1;35m' # Purple
CLR_bldcyn='\e[1;36m' # Cyan
CLR_bldwht='\e[1;37m' # White
CLR_unkblk='\e[4;30m' # Black - Underline
CLR_undred='\e[4;31m' # Red
CLR_undgrn='\e[4;32m' # Green
CLR_undylw='\e[4;33m' # Yellow
CLR_undblu='\e[4;34m' # Blue
CLR_undpur='\e[4;35m' # Purple
CLR_undcyn='\e[4;36m' # Cyan
CLR_undwht='\e[4;37m' # White
CLR_bakblk='\e[40m'   # Black - Background
CLR_bakred='\e[41m'   # Red
CLR_bakgrn='\e[42m'   # Green
CLR_bakylw='\e[43m'   # Yellow
CLR_bakblu='\e[44m'   # Blue
CLR_bakpur='\e[45m'   # Purple
CLR_bakcyn='\e[46m'   # Cyan
CLR_bakwht='\e[47m'   # White
CLR_txtrst='\e[0m'    # Text Reset

#

#########################

SERVERS="host1 host2 host3"
#echo -e "${CLR_bldgrn}Enter Servers (space separated)${CLR_txtrst}"

#read -p "servers: " SERVERS
echo -e "${CLR_bldgrn}Enter User${CLR_txtrst}"
read -p "user: " USERNAME
echo -e "${CLR_bldgrn}Enter Password${CLR_txtrst}"
read -p "password: " -s USERPW
echo
echo -e "${CLR_bldgrn}Enter Root Password${CLR_txtrst}"
read -p "password: " -s ROOTPW
echo
for machine in $SERVERS; do
~/gotroot ${USERNAME} ${machine} ${USERPW} ${ROOTPW} 2>&1 | tee -a logs.txt
done

======================================

Hope you enjoyed this tutorial and if you have any questions email nick@nicktailor.com

How to do multi-threaded backups and restores for mysql


So there are probably a lot of people out there who have the standard master-slave MySQL database servers running with InnoDB and MyISAM databases.

This is not usually a problem unless you have a high amount of traffic going to your InnoDB databases; since MySQL does not do multithreaded dumps or restores, this can be problematic if your replication is broken to the point where you need to do a full restore.

Now, if your database is, say, 15 gigs in size, consisting of InnoDB and MyISAM DBs, in a production environment this would be brutal, as you would need to lock the tables on the primary while you're restoring to the slave. Since MySQL does not do multithreaded restores, this could take 12 hours or more; keep in mind this is dependent on hardware. To give you an idea, here are the servers we had when we ran into this issue, to help you gauge your problem:

Xeon quad core, sata 1 T drives, 18 gigs of ram (Master and Slave)

Fortunately, there is a solution 🙂

There is a free application called xtrabackup by Percona which does multithreaded backups and restores of MyISAM and InnoDB combined. In this blog I will explain how to set it up and what I did to minimize downtime for the business.

What you should consider doing

Since drive I/O is a factor with high-traffic database servers and can seriously impede performance, we built new servers with the same specs but with SSD drives this time.

Xeon quad core, (sata3) 1T, (SSD) 120G 18 gigs of ram

Now this is not necessary; however, if database traffic is high you should consider SSD or even fibre channel drives if your setup supports it.

Xtrabackup is free unless you use MySQL Enterprise, in which case it's $5000/server to license it. Honestly, using MySQL Enterprise in my opinion is just stupid; it's exactly the same except you get support, the same support you could get online or on IRC on any forum, which is probably better. Why pay for something you don't need to?

Install and setup

Note: This will need to be installed on both master and slave database servers, as this process will replace the mysqldump and restore method you use.

  1. rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
  2. yum install percona-xtrabackup.x86_64 (on both master & slave servers)

Create  backup

Note: There are a number of ways you can do this. You can have it output to a /tmp directory while it's doing the backup process, or you can have it output to stdout and compress into a directory. I will show you both ways.

  1. Since innobackupex, the tool shipped with xtrabackup that we are going to use, looks at the /etc/my.cnf file for the MySQL data directory, we do not have to define a lot in our command string. For this example we did not set a MySQL password; if you did, simply add --user <user> --password <pass> to the string.

This process took 5 minutes on a 15gig Database with Xeon quad core, (sata3) 1T, (SSD) 120G 18Gram

2. innobackupex <outputdirectory>
     e.g. innobackupex /test (this command will create another directory inside this one with a time stamp; it's a full backup of all databases, InnoDB and MyISAM, uncompressed.)

3. innobackupex --stream=tar ./ | gzip - > /test/test.tar.gz (this command will do the same as the above except it will stream to stdout and compress the full backup into the tar file)

Note: you also need to use the -i option to untar it, e.g. tar -ixvf test.tar.gz; ensure MySQL is stopped on any slave before restoring, and don't forget to chown -R mysql:mysql the files after you restore the data to the data directory using the innobackupex --copy-back command.

Note: I have experienced issues getting replication to start after doing a full backup and restore onto a slave with InnoDB and MyISAM using the innobackupex stream compression to gzip; after untarring, for whatever reason the InnoDB log files had some corruption, which caused the slave to stop replication immediately upon connection to the master.

If the stream compression doesn't work, do an uncompressed backup as shown above, and then rsync the data from your master to the slave via a GigE switch if possible (i.e. rsync -rva --numeric-ids <source> <destination>:/)

Our 15gig DB compressed to 3.4gigs

  1. Now copy the tar file or directory that innobackupex created to the slave server via scp:

scp * user@host:  (edit accordingly)

Doing a Restore

Note: The beauty of this restore is that it is multi-threaded, utilizing multiple cores instead of just one. Since our server data directory is now sitting on SSD, disk I/O will be almost nil, increasing performance significantly and reducing load.

  1. On your primary database server, log into MySQL and lock the tables:
    1. mysql> FLUSH TABLES WITH READ LOCK;
    2. Now on your slave, a restore of all the databases is pretty easy:
      1. innobackupex --copy-back /test/2013-02-03_17-21-52/ (update the path to wherever the innobackupex files are.)

This took 3 mins to restore a 15gig DB with innodb and myisam for us on
Xeon quad core, (sata3) 1T, (SSD) 120G 18 gigs of ram

Setting up the backup crons

  1. If you were using mysqldump as part of your MySQL backup process, you will need to change it to use the following.
  2. Create a directory on the slave called mysqldumps.
  3. Create a file called backups.sh and save it.
    1. Add the following to it.

#!/bin/bash
innobackupex --stream=tar ./ | gzip - > /mysqldumps/innobackup-$(date +"%F")

Note: that our backups are being stored on our sata3 drive and data directory resides on the SSD
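If space on the sata3 drive becomes a concern, a retention line in the same script keeps the backup directory trimmed (seven days here is just an example):

  • find /mysqldumps -name 'innobackup-*' -mtime +7 -delete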

  1. Now add this to your crontab as root; again, change the cron to run however often you need:
    1. 0 11,23 * * * /fullpath/backups.sh

Setting up diskspacewatch for the SSD drive.

  1. Since the SSD drive is 120G, we need to set up an alert to watch the space threshold. If you do not have a tool specifically for monitoring disk space, you can write a script that watches the disk space and sends out an email alert when the threshold is reached.
  2. Run df -h on your server, find the partition you want it to watch, and edit the script accordingly (df /disk2); the threshold is defined by (if [ $usep -ge 80 ]; then).
  3. Create a file called diskspacewatch, add the below, and save:

#!/bin/sh
df /disk2 | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
do
echo $output
usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
partition=$(echo $output | awk '{ print $2 }' )
if [ $usep -ge 80 ]; then
echo "SSD disk on slave database server running out of space! \"$partition ($usep%)\" on $(hostname) as on $(date)" |
mail -s "Alert: Almost out of disk space $usep%" nick@nicktailor.com
fi
done

  1. Now you want to set up a cron that runs this script every hour, or however often you want:
    1. 0 * * * * /path/diskspacewatch

That’s my tutorial on a MySQL multithreaded backup and restore setup. If you have questions, email nick@nicktailor.com.

 

 

 
