Category: Logical Volumes

How to Deploy LVMs with Ansible


This role is built around the ansible-merge-vars module: an Ansible plugin that merges all in-context variables with a given suffix (lists or dicts only) into a new variable containing the result of the merge. It is an action plugin, which is essentially an Ansible module that runs on the machine running Ansible rather than on the host being provisioned.
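As a quick illustration of what the plugin does, suppose two group_vars files each define a list whose name ends in the suffix vgs__to_merge (the variable names here are hypothetical, just to show the merge):

  # group_vars/all (hypothetical)
  base_vgs__to_merge:
    - vg: vg_sys
      pvs: /dev/sdb

  # group_vars/dbservers (hypothetical)
  db_vgs__to_merge:
    - vg: vg_data
      pvs: /dev/sdc

  # After a merge_vars task runs with suffix_to_merge: vgs__to_merge,
  # the merged variable contains both list entries.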

Benefits: configuring disks into LVM

 Adds physical disks to a volume group
 Creates logical volumes assigned to the volume group
 Sizes each logical volume, either to the full disk or to whatever you set
 Formats the logical volume with whatever filesystem you choose
 Creates the mount point and mounts the logical volume
 Adds the logical volume to fstab when mounting

Note: This post assumes you already have Ansible installed and running.

Install the ansible-merge-vars module:

1. root@KVM-test-box:~# pip install ansible_merge_vars
   Requirement already satisfied: ansible_merge_vars in
   /usr/local/lib/python3.8/dist-packages (5.0.0)

2. Create an action_plugins directory in the directory from which you run Ansible.

By default, Ansible looks for action plugins in an action_plugins folder adjacent to the running playbook. The search path can be changed if you want to keep your plugins somewhere else.
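For example, to point Ansible at a shared plugin directory instead, set the action_plugins key in ansible.cfg (the path below is just an example):

  [defaults]
  action_plugins = /usr/share/ansible/plugins/action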


3. Create a file called merge_vars.py (the file name, minus .py, is the module name your tasks will call) in the action_plugins directory, with one line:

  from ansible_merge_vars import ActionModule


4. Save the file.


Role Setup:

Once the plugin has been set up, you will want to create a role.

1. Move inside the roles directory: cd ansible/roles/
2. Create the directories for a role called provision-fs: mkdir -p provision-fs/tasks
3. Move inside the tasks directory: cd provision-fs/tasks
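The resulting layout looks like this (main.yml is created in the next step):

  ansible/
    roles/
      provision-fs/
        tasks/
          main.yml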

Now we will create tasks that merge all variables whose names end in a given suffix into a single list, then iterate over that list to provision the filesystems. The variables themselves are supplied through the inventory, in host_vars or group_vars.


4. Create the file provision-fs/tasks/main.yml with the following tasks:


  - name: Merge VG variables
    merge_vars:
      suffix_to_merge: vgs__to_merge
      merged_var_name: merged_vgs
      expected_type: 'list'

  - name: Merge LV variables
    merge_vars:
      suffix_to_merge: lvs__to_merge
      merged_var_name: merged_lvs
      expected_type: 'list'

  - name: Merge FS variables
    merge_vars:
      suffix_to_merge: fs__to_merge
      merged_var_name: merged_fs
      expected_type: 'list'

  - name: Merge MOUNT variables
    merge_vars:
      suffix_to_merge: mnt__to_merge
      merged_var_name: merged_mnt
      expected_type: 'list'

  - name: Create VGs
    lvg:
      vg: "{{ item.vg }}"
      pvs: "{{ item.pvs }}"
    with_items: "{{ merged_vgs }}"

  - name: Create LVs
    lvol:
      vg: "{{ item.vg }}"
      lv: "{{ item.lv }}"
      size: "{{ item.size }}"
      pvs: "{{ item.pvs | default(omit) }}"
      shrink: no
    with_items: "{{ merged_lvs }}"

  - name: Create FSs
    filesystem:
      dev: "{{ item.dev }}"
      fstype: "{{ item.fstype }}"
    with_items: "{{ merged_fs }}"

  - name: Mount FSs
    mount:
      path: "{{ item.path }}"
      src: "{{ item.src }}"
      state: mounted
      fstype: "{{ item.fstype }}"
      opts: "{{ item.opts | default('defaults') }}"
      dump: "{{ item.dump | default('1') }}"
      passno: "{{ item.passno | default('2') }}"
    with_items: "{{ merged_mnt }}"



5. Save the file.
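A side note on syntax: with_items is the older looping form. On Ansible 2.5 and newer, the same loops can be written with loop, for example:

  - name: Create VGs
    lvg:
      vg: "{{ item.vg }}"
      pvs: "{{ item.pvs }}"
    loop: "{{ merged_vgs }}"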


Note: As written, these tasks have no safeguards for /dev/sda, and no checks that a disk has been wiped properly before it is added to a volume group. I have built such safeguards for others, but for the purposes of this blog post we stick to the basics. If you want my help, you can contact me via email or the ticketing system.
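As a minimal sketch of such a safeguard (assuming all you want is to refuse the OS disk), an assert task placed before the Create VGs task could look like this:

  - name: Refuse to touch the OS disk
    assert:
      that: "'/dev/sda' not in item.pvs"
      fail_msg: "Refusing to add {{ item.pvs }} to a volume group"
    with_items: "{{ merged_vgs }}"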


Now we will define, in the inventory, the LVM layout we want to carve out.


Setup inventory:
1. Create a file for testservernick1 under inventory/host_vars (or the equivalent group_vars file):

  nano inventory/host_vars/testservernick1

  vgs__to_merge:
    - vg: vg_vmguest
      pvs: /dev/sdb
    - vg: vg_sl_storage
      pvs: /dev/sdc

  lvs__to_merge:
    - vg: vg_vmguest
      lv: lv_vg_vmguest
      size: 100%FREE
      shrink: no
    - vg: vg_sl_storage
      lv: lv_vg_sl_storage
      size: 100%FREE
      shrink: no

  fs__to_merge:
    - dev: /dev/vg_vmguest/lv_vg_vmguest
      fstype: ext4
    - dev: /dev/vg_sl_storage/lv_vg_sl_storage
      fstype: ext4

  mnt__to_merge:
    - path: /vmguests
      src: /dev/vg_vmguest/lv_vg_vmguest
      fstype: ext4
    - path: /sl_storage
      src: /dev/vg_sl_storage/lv_vg_sl_storage
      fstype: ext4


2. Save the file.


Definitions of the variables above:

vgs__to_merge: This section creates the volume groups and assigns physical disks.

  - vg: vg_vmguest (the volume group name)
    pvs: /dev/sdb (the physical disk assigned to the volume group above)
  - vg: vg_sl_storage (the second volume group name)
    pvs: /dev/sdc (the second physical disk, assigned to the volume group above)

You can add as many as you like.


lvs__to_merge: This section creates the logical volumes.

  - vg: vg_vmguest (the volume group created above)
    lv: lv_vg_vmguest (the logical volume attached to that volume group)
    size: 100%FREE (use the whole disk)
    shrink: no (needed so the disk space is allocated correctly)
  - vg: vg_sl_storage (the second volume group)
    lv: lv_vg_sl_storage (the second logical volume, attached to that volume group)
    size: 100%FREE (use the whole disk)
    shrink: no (needed so the disk space is allocated correctly)


fs__to_merge: This section formats the logical volumes.

  - dev: /dev/vg_vmguest/lv_vg_vmguest (the logical volume device)
    fstype: ext4 (the filesystem to format it with)
  - dev: /dev/vg_sl_storage/lv_vg_sl_storage (the second logical volume device)
    fstype: ext4 (the filesystem to format it with)


mnt__to_merge: This section creates the mount path, mounts the volume, and adds it to fstab.

  - path: /vmguests (the mount point to create)
    src: /dev/vg_vmguest/lv_vg_vmguest (the logical volume to mount)
    fstype: ext4 (the filesystem type for the fstab entry)
  - path: /sl_storage (the second mount point to create)
    src: /dev/vg_sl_storage/lv_vg_sl_storage (the second logical volume to mount)
    fstype: ext4 (the filesystem type for the fstab entry)


Running your playbook:

1. Go inside your Ansible home directory:

  cd ansible/

2. Create a file called justdofs.yml with the following play, and save it:

Example justdofs.yml:

  - hosts: all
    gather_facts: yes
    any_errors_fatal: true
    roles:
      - role: provision-fs


3. Ensure testservernick1 is listed under inventory/hosts:
   testservernick1 ansible_host=


ansible/$ ansible-playbook -i inventory/hosts justdofs.yml -u root -k --limit='testservernick1'


Example of successful play:

ntailor@test-box:~/ansible/computelab$ ansible-playbook -i inventory/hosts justdofs.yml -u root -k --limit='testservernick1'

SSH password:


PLAY [all] *******************************************************************************************************************************************************************************************************


TASK [provision-fs : Merge VG variables] *************************************************************************************************************************************************************************

ok: [testservernick1]


TASK [provision-fs : Merge LV variables] *************************************************************************************************************************************************************************

ok: [testservernick1]


TASK [provision-fs : Merge FS variables] *************************************************************************************************************************************************************************

ok: [testservernick1]


TASK [provision-fs : Merge MOUNT variables] **********************************************************************************************************************************************************************

ok: [testservernick1]


TASK [provision-fs : Create VGs] *********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'vg': 'vg_vmguest', 'pvs': '/dev/sdb'})

ok: [testservernick1] => (item={'vg': 'vg_sl_storage', 'pvs': '/dev/sdc'})


TASK [provision-fs : Create LVs] *********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'vg': 'vg_vmguest', 'lv': 'lv_vg_vmguest', 'size': '100%FREE', 'shrink': False})

ok: [testservernick1] => (item={'vg': 'vg_sl_storage', 'lv': 'lv_vg_sl_storage', 'size': '100%FREE', 'shrink': False})


TASK [provision-fs : Create FSs] *********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'dev': '/dev/vg_vmguest/lv_vg_vmguest', 'fstype': 'ext4'})

ok: [testservernick1] => (item={'dev': '/dev/vg_sl_storage/lv_vg_sl_storage', 'fstype': 'ext4'})


TASK [provision-fs : Mount FSs] **********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'path': '/vmguests', 'src': '/dev/vg_vmguest/lv_vg_vmguest', 'fstype': 'ext4'})

ok: [testservernick1] => (item={'path': '/sl_storage', 'src': '/dev/vg_sl_storage/lv_vg_sl_storage', 'fstype': 'ext4'})


PLAY RECAP *******************************************************************************************************************************************************************************************************

testservernick1 : ok=8    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  


How to do a full restore if you wiped all your LVMs

I'm sure some of you have had the wonderful opportunity of losing all your LVM metadata by mistake. All is not lost, and there is hope. I will show you how to restore it.

The beauty of LVM is that it automatically keeps backups of your volume group metadata in the following location:

  • /etc/lvm/archive/

If you have just wiped your LVM setup and it used a single physical disk for all your logical volumes, you can do a full restore as follows:

      • vgcfgrestore -f /etc/lvm/archive/(archive file of the volume group to restore) (destination volume group)
        o (i.e.) vgcfgrestore -f /etc/lvm/archive/ vg_dev
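If you are not sure which archive file to restore from, vgcfgrestore can list the archived metadata for a volume group, with timestamps and a description of the command that produced each archive:

  vgcfgrestore --list vg_dev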

If multiple disks were attached to the volume group, you need to do a couple more things before you can restore.

  • cat the .vg file for your volume group under /etc/lvm/archive/ and you should see something like below:

    physical_volumes {

        pv0 {
            id = "ecFWSM-OH8b-uuBB-NVcN-h97f-su1y-nX7jA9"
            device = "/dev/sdj"         # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 524288000        # 250 Gigabytes
            pe_start = 2048
            pe_count = 63999            # 249.996 Gigabytes
        }
    }

You will need to recreate every physical volume UUID listed in that .vg file before the volume group can be restored.

    • pvcreate --restorefile /etc/lvm/archive/(archive file) --uuid <UUID> <DEVICE>

      (i.e.) pvcreate --restorefile /etc/lvm/archive/ --uuid ecFWSM-OH8b-uuBB-NVcN-h97f-su1y-nX7jA9 /dev/sdj

  • Repeat this step for all the physical volumes in the archive .vg file until they have all been created.
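A quick way to pull the id/device pairs out of the archive file (the file name below is hypothetical; substitute your actual archive file):

  grep -E '(id|device) =' /etc/lvm/archive/vg_dev_00001.vg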

Once you have completed the steps above, you should be able to restore the volume groups that were wiped:

    • vgcfgrestore -f /etc/lvm/archive/(archive file of the volume group to restore) (destination volume group)

      o (i.e.) vgcfgrestore -f /etc/lvm/archive/ vg_dev

  • Running vgdisplay and pvdisplay should show you that everything is back the way it should be.

If you have questions, email me.