Diskstuff
How to add a new SCSI LUN while server is Live
REDHAT/CENTOS:
In order to get the WWN IDs from a server, there are a couple of ways:
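Assuming Fibre Channel HBAs, either of these will do it:
- cat /sys/class/fc_host/host*/port_name
- systool -c fc_host -v | grep port_name (needs the sysfsutils package)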
Run this to find the new disks after you have added them to your VM
Note: rescan-scsi-bus.sh is part of the sg3_utils package on RHEL/CentOS
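For example (the mpath name will be whatever multipath auto-assigns):
- rescan-scsi-bus.sh
- multipath -ll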
# That’s it, unless you want to fix the name from mpath(something) to something else
• vi /etc/multipath/bindings
# Go into the multipath console and re-add the multipath device with your new shortcut name (nickdsk2 in this case), as sketched below
• add map nickdsk2
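A rough sketch of that console session (the old auto-generated map name mpath3 is just an example):
- multipathd -k
- multipathd> remove map mpath3
- multipathd> add map nickdsk2
- multipathd> quit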
Note: Not going to lie, sometimes you could do all this and still need a reboot; the majority of the time this should work. But what do I know… haha
How to increase disk size on virtual scsi drive using gpart
Power ON VM guest after editing disk size.

ls -d /sys/block/sd*/device/scsi_device/* | awk -F '[/]' '{print $4,"- SCSI",$7}'



service crond stop
Note: If you see a "Device is busy" error, make sure that your current session is not sitting in the /data partition.
For the GPT partition type:
In this case, the parted -l command will show the following for the "sdb" disk:
*****************************************************
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1049kB 215GB 215GB ext4 Linux LVM lvm
*****************************************************
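A rough sketch of the resize for this layout, assuming the partition really is an LVM PV (as the lvm flag suggests) and using made-up VG/LV names (vg_data/lv_data):
- echo 1 > /sys/class/block/sdb/device/rescan (pick up the new size from the hypervisor)
- parted /dev/sdb resizepart 1 100% (grow partition 1 to the end of the disk; needs parted 3.1 or newer)
- pvresize /dev/sdb1
- lvextend -r -l +100%FREE /dev/vg_data/lv_data (-r grows the filesystem as well)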







How to recover file system corruption on a 4TB LVM on Ubuntu using ddrescue on a VM
In this example we will be fixing an xfs filesystem that failed an initial xfs_repair.
If this happens, don't panic. We can most likely fix it.
Steps to do
Create a new physical volume, volume group and logical volume
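Something along these lines, assuming the new disk shows up as /dev/sdc; the VG/LV names (rescuevg/rescue) and sizes are made up, and it gets mounted where the ddrescue example below expects it:
- pvcreate /dev/sdc
- vgcreate rescuevg /dev/sdc
- lvcreate -L 4.5T -n rescue rescuevg (big enough to hold the image of the 4T filesystem, leaving room for the swap LV in the next step)
- mkfs.ext4 /dev/rescuevg/rescue
- mkdir -p /mnt/recovery && mount /dev/rescuevg/rescue /mnt/recovery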
Now install ddrescue and make an image of the corrupted file system on the new logical volume
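On Ubuntu the GNU ddrescue package is called gddrescue (the binary is still ddrescue):
- apt-get install gddrescue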
Make the swap size 30 gigs. This is needed so that the filesystem repair doesn't fall over because it runs out of memory, which tends to be the problem when trying to repair such large filesystems.
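If you went with the LVM layout sketched above, the extra swap is just another LV (names and size are examples):
- lvcreate -L 30G -n swap30 rescuevg
- mkswap /dev/rescuevg/swap30
- swapon /dev/rescuevg/swap30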
Sample outputs
Create rescue image on new logical volume
◦ ddrescue -d -r3 $oldfilesystem imagefile.img loglocationpath.logfile
ddrescue -d -r3 /dev/recovery/data /mnt/recovery/recovery.img /mnt/recovery/recoverylog.logfile
Once the file is created, we want to repair it using xfs_repair
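Since the target is a regular image file rather than a block device, xfs_repair needs the -f flag, e.g.:
- xfs_repair -f /mnt/recovery/recovery.img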
- agno = 29
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- 20:02:48: check for inodes claiming duplicate blocks - 88951488 of 88951488 inodes done
Phase 5 - rebuild AG headers and trees...
- 20:02:57: rebuild AG headers and trees - 41 of 41 allocation groups done
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Done
Written By Nick Tailor
How to do a full restore if you wiped all your LVMs
I'm sure some of you have had the wonderful opportunity of losing all your LVM info in error. Well, all is not lost and there is hope. I will show ya how to restore it.
The beauty of LVM is that it automatically keeps a backup of the logical volume metadata in the following location.
- /etc/lvm/archive/
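To see which archive files LVM has kept for a given volume group (vg_dev is the example name used further down):
- vgcfgrestore --list vg_dev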
Now, if you had just wiped out your LVM setup and it was simply using one physical disk for all your LVMs, you can do a full restore with the following.
- vgcfgrestore -f /etc/lvm/archive/(volumegroup to restore) (destination volumegroup)
o (ie.) vgcfgrestore -f /etc/lvm/archive/vg_dev1_006.000001.vg vg_dev
If you had multiple disks attached to your volume group then you need to do a couple more things to be able to do a restore.
- Cat the /etc/lvm/archive/whatevervolumegroup.vg file and you should see something like below:
- physical_volumes {
      pv0 {
          id = "ecFWSM-OH8b-uuBB-NVcN-h97f-su1y-nX7jA9"
          device = "/dev/sdj"           # Hint only
          status = ["ALLOCATABLE"]
          flags = []
          dev_size = 524288000          # 250 Gigabytes
          pe_start = 2048
          pe_count = 63999              # 249.996 Gigabytes
      }
You will need to recreate every physical volume UUID listed inside that .vg file before the volume group can be restored.
- pvcreate --restorefile /etc/lvm/archive/vgfilename.vg --uuid <UUID> <DEVICE>
o (IE) pvcreate --restorefile /etc/lvm/archive/vg_data_00122-1284284804.vg --uuid ecFWSM-OH8b-uuBB-NVcN-h97f-su1y-nX7jA9 /dev/sdj
- Repeat this step for all the physical volumes in the archive vg file until they have all been created.
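If the volume group had a lot of PVs, a quick way to pull the id/device pairs out of the archive file (filename taken from the example above):
- grep -E 'id =|device =' /etc/lvm/archive/vg_data_00122-1284284804.vg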
Once you have completed the above step, you should now be able to restore the volume groups that were wiped.
- vgcfgrestore -f /etc/lvm/archive/(volumegroup to restore) (destination volumegroup)
o (ie.) vgcfgrestore -f /etc/lvm/archive/vg_dev1_006.000001.vg vg_dev
- Running vgdisplay and pvdisplay should show you that everything is back the way it should be.
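If the logical volumes come back inactive, activating the volume group is one more command (vg_dev per the example above):
- vgchange -ay vg_dev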
If you have questions email nick@nicktailor.com
Cheers
How to move off SAN boot to local disk with HP servers
===========================
1. add the disks to the server
next do a rescan-scsi-bus.sh to see if the new disks show up
2. Set up the RAID controller via F8 (HP)
3. Boot off of a System Rescue CD
4. find the new drive, use fdisk -l
5. copy the partition over using dd and reboot to see the new partition table
Examples:
- dd if=/dev/mapper/3600508b4000618d90000e0000b8f0000 of=/dev/sda bs=1M
or - dd if=/dev/sdar of=/dev/cciss/c0d0 bs=1M
reboot and unpresent the SAN LUNs from Virtual Connect or whatever storage interface you're using.
You need to have the boot-from-SAN volumes disabled in VCEM
6. make new swap using cfdisk and then run
- mkswap /dev/cciss/c0d0p9 (this is controller 0, drive 0, partition 9)
- The size of the swap partition will vary; I used 32000M when I created it in cfdisk. You are free to use fdisk for this as well.
7. now you need to mount / to a directory, so make an empty directory
- mkdir /mnt/root
and mount it, examples below
mount /dev/sda6 /mnt/root or mount /dev/cciss/c0d0p6 /mnt/root
9. fix the fstab (you can do this right from the system rescue disk)
- vi /mnt/root/etc/fstab
- change /dev/mapper/mpath0p* to cciss/c0d0*
- comment out the swap volume
- add new swap (/dev/cciss/c0d0p9)
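The new swap entry in the fstab would look something like this (device per this example):
- /dev/cciss/c0d0p9   swap   swap   defaults   0 0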
10. next fix the multipath config: vi /mnt/root/etc/multipath.conf
- uncomment: devnode "^cciss!c[0-9]d[0-9]*"
- for EMC add:
device {
        vendor "EMC"
        product "Invista"
        product_blacklist "LUNZ"
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        features "0"
        hardware_handler "0"
        path_selector "round-robin 0"
        path_grouping_policy multibus
        rr_weight uniform
        no_path_retry 5
        rr_min_io 1000
        path_checker tur
}
11. next mount the boot partition
Examples
- mount /dev/sda1 /mnt/root/boot
or - mount /dev/cciss/c0d0p1 /mnt/root/boot
12. edit grub.conf
- vi /mnt/root/boot/grub.conf
- change /dev/mapper/mpath0p* to /dev/sda*
or - change /dev/mapper/mpath0p* to /dev/cciss/c0d0p*
13. edit device.map
- vi /mnt/root/boot/device.map
- change /dev/mapper/mpath0 to /dev/sda
or - change /dev/mapper/mpath0 to /dev/cciss/c0d0
14. fix the initrd (a rough end-to-end sketch of steps 14-16 follows step 16)
- zcat /mnt/root/boot/initrd-2.6.18-3……. | cpio -i -d
- edit the file 'init'
- change the mkrootdev line to /dev/cciss/c0d0p6 (this is the / partition)
- change the resume line to /dev/cciss/c0d0p9 (this is the new swap partition)
15. Make a backup of the original initrd
- mv /mnt/root/boot/initrd-2.6.18-…. /mnt/root/boot/initrd-2.6.18-……backup
16. recompress the new initrd
- find . | cpio -o -H newc |gzip -9 > /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img
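Steps 14-16 assume you are working inside an empty scratch directory; a rough end-to-end sketch (the scratch path is arbitrary, kernel version taken from the recompress step above):
- mkdir /tmp/initrd-work && cd /tmp/initrd-work
- zcat /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img | cpio -i -d
- vi init (fix the mkrootdev and resume lines as above)
- mv /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img.backup
- find . | cpio -o -H newc | gzip -9 > /mnt/root/boot/initrd-2.6.18-348.2.1.el5.img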
17. upon reboot, change the boot order in the BIOS settings to use the HP Smart Array controller
18. You may need to create a new initrd from the Red Hat Linux install media if the initrd doesn't boot.
- chroot to /mnt/sysimage for /
- then go to the boot partition and
- mkinitrd -f -v initrd-2.6.18-53.1.4.el5.img 2.6.18-53.1.4.el5
19. reboot