
How to generate new Network UUIDs with Ansible

Okay, some of you might have deployed Linux VMs from clone templates using Ansible by way of the vmware_guest module.

Now everybody goes about it differently, and from what I read online, it would seem that lots of people overcomplicate the generation of the UUID with far more code than is needed.

At the end of the day, all a UUID is….is JUST A "UNIQUE IDENTIFIER". It serves no function other than being another form of labelling the network interface on the VM. There is no need to overcomplicate its creation. This also assumes you defined UUIDs on your deployments in the first place.

Why would you want to do this? Well, if you cloned from a template, every new clone will have the same network UUID. This won't impact your infrastructure in any way, other than that you *might* get a duplicate UUID warning at some point. However, it can be problematic when doing backups, restores, migrations, and monitoring in some cases.

Ansible Setup:

This post assumes that you have Ansible set up and running.

Role:

Create a role called CreateNewNetworkUUID in /etc/ansible/roles:

mkdir -p /etc/ansible/roles/CreateNewNetworkUUID/tasks

Create a main.yml inside /etc/ansible/roles/CreateNewNetworkUUID/tasks/:

vi /etc/ansible/roles/CreateNewNetworkUUID/tasks/main.yml

.

Now add the following YAML code.

Note: This just runs the 'uuidgen' command on the Linux VM and then registers the result into a variable that is passed to the next task.

- name: Generate new UUID
  shell: uuidgen
  register: new_uuid_result

- debug:
    var: new_uuid_result

.

Note: This updates the network config file on Red Hat and adds a UUID line with the newly generated UUID, then logs the new UUID that was added. The section will also be marked in the file as managed by Ansible.

.

- name: Add New UUID to network config
  blockinfile:
    dest: /etc/sysconfig/network-scripts/ifcfg-ens192
    insertafter: NAME="ens192"
    block: |
      UUID="{{ new_uuid_result['stdout'] }}"
  register: filecontents

- debug: msg="{{ filecontents }}"

.

 Save the file

.

Ansible playbook:

.

From inside the /etc/ansible directory, call your role inside your playbook or create a new playbook calling the role.

vi createnewUUID.yml

Add the following to your playbook:

- hosts: all
  gather_facts: no
  roles:
    - role: CreateNewNetworkUUID

.

 Save the file

.

Ansible playbook run:

Run your new role against your hosts.

Note: this runs the role against all your hosts defined in inventory/DEV/hosts via SSH. You will need to know the root user/password for your SSH connection to be able to carry out the tasks.

ansible-playbook -i inventory/DEV/hosts createnewUUID.yml -k

.

Ansible playbook log:

SSH password:

.

PLAY [all] ****************************************************************************************************************************************************************************************************

.

TASK [CreateNewUUID : Generate new UUID] **********************************************************************************************************************************************************************

changed: [nicktestvm]

.

TASK [CreateNewUUID : debug] **********************************************************************************************************************************************************************************

ok: [nicktestvm] => {
    "new_uuid_result": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": true,
        "cmd": "uuidgen",
        "delta": "0:00:00.010810",
        "end": "2020-12-21 20:13:36.614154",
        "failed": false,
        "rc": 0,
        "start": "2020-12-21 20:13:36.603344",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "49242349-5168-4713-bcb6-a53840b2e1d6",
        "stdout_lines": [
            "49242349-5168-4713-bcb6-a53840b2e1d6"
        ]
    }
}

.

TASK [CreateNewUUID : Add New UUID to network config] *********************************************************************************************************************************************************

changed: [nicktestvm]

.

TASK [CreateNewUUID : debug] **********************************************************************************************************************************************************************************

ok: [nicktestvm] => {
    "new_uuid_result.stdout": "49242349-5168-4713-bcb6-a53840b2e1d6"
}

.

PLAY RECAP ****************************************************************************************************************************************************************************************************

nicktestvm              : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

.

Nicktestvm:

.

[root@nicktestvm ~]$ cat /etc/sysconfig/network-scripts/ifcfg-ens192

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens192"
# BEGIN ANSIBLE MANAGED BLOCK
UUID="49242349-5168-4713-bcb6-a53840b2e1d6"
# END ANSIBLE MANAGED BLOCK
DEVICE="ens192"
ONBOOT="yes"
IPADDR="192.168.1.69"
PREFIX="24"
GATEWAY="192.168.1.254"
DNS1="8.8.8.1"
DNS2="8.8.8.2"
DOMAIN="nicktailor.co.uk"
IPV6_PRIVACY="no"

.

How to deploy VMware VMs using Ansible from Cloned Templates

QUICK OVERVIEW OF WHAT ANSIBLE IS..

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.

Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time.

It uses no agents and no additional custom security infrastructure, so it’s easy to deploy – and most importantly, it uses a very simple language (YAML, in the form of Ansible Playbooks) that allow you to describe your automation jobs in a way that approaches plain English.

On this page, we’ll give you a really quick overview so you can see things in context. For more detail, hop over to docs.ansible.com.

EFFICIENT ARCHITECTURE

Ansible works by connecting to your nodes and pushing out small programs, called “Ansible modules” to them. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules (over SSH by default), and removes them when finished.

Your library of modules can reside on any machine, and there are no servers, daemons, or databases required. Typically you’ll work with your favorite terminal program, a text editor, and probably a version control system to keep track of changes to your content.

Okay, so what that is actually saying is: Ansible has a whole library of Python modules that come out of the box, coupled with a huge community of open source modules, to do all sorts of tasks to automate infrastructure.
You call these modules by writing YAML code; when you call a specific module in your YAML, you can then pass specific variables to that module to do specific things defined by the Python module.
For example, power a VM on or off, connect or disconnect its network, etc. (see the short sketch below).
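To make that concrete, here is a minimal, hedged sketch of powering off a VM with the community.vmware.vmware_guest module. The VM name is a placeholder assumption; the vcenter_* variables match the ones used later in this post:

- name: Power off an existing VM by name (sketch)
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_host }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ vcenter_dc }}"
    name: nicktestvm              # assumed VM name for illustration
    state: poweredoff             # use poweredon to bring it back up
  delegate_to: localhost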

For the purposes of this post, we are going to dive into using the "vmware_guest" module by way of an HTTP API authentication session and cookies. There are many other Python modules, which you can search for in the Ansible documentation and/or ansible-galaxy.

.

https://docs.ansible.com/ansible/latest/collections/community/vmware/index.html

Now, it definitely helps to be able to code in Python, or at least be able to read Python code; however, it is not strictly necessary. Anyone with a basic understanding of bash scripting can learn Ansible. I could teach a newbie Ansible in a couple of days. Sharing is caring.

.

Anyone who says otherwise……don’t hire them.

.

.

Ansible Setup: 

Now, this post assumes you already have Ansible set up and are running a newer version. If not, you will need to review a post on how to set up Ansible before you can proceed with this.

Pre-Module install Steps: 

Requirements

The below requirements are needed on the host that executes this module.

 python >= 2.6
 PyVmomi
 PIP
 Community.vmware library of python modules

.

1. Okay, so if you are on your Ansible machine as root, run the following; this should install the modules you need:

ansible-galaxy collection install community.vmware

Note: Depending on where you ran this from (for example /home/root), you can find all your Python modules in 'root/.ansible/collections/ansible_collections/community/vmware/plugins/modules'.

2. You will probably need to install Python 2.6 or greater.

Red Hat: yum install python (should get you the latest version)

3. You may also need to install pip.

Note: On CentOS it is not available out of the box.

CentOS 7 PIP install:

1. sudo yum install epel-release
2. sudo yum install python-pip
3. pip --version (verify it is installed)
4. sudo yum install python-devel (needed for building Python modules)
5. sudo yum groupinstall 'development tools' (needed for building Python modules)

.

Install PyVmomi:

1. pip install --upgrade pyvmomi

.

It will look like…..

[root@nick roles]# pip install --upgrade pyvmomi

Collecting pyvmomi

Downloading https://files.pythonhosted.org/packages/ba/69/4e8bfd6b0aae49382e1ab9e3ce7de9ea6318eac007b3076e6006dbe5a7cd/pyvmomi-7.0.1.tar.gz (584kB)

100% |████████████████████████████████| 593kB 861kB/s

Cache entry deserialization failed, entry ignored

Collecting requests>=2.3.0 (from pyvmomi)

Downloading https://files.pythonhosted.org/packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl (61kB)

100% |████████████████████████████████| 61kB 3.5MB/s

Collecting six>=1.7.3 (from pyvmomi)

Downloading https://files.pythonhosted.org/packages/ee/ff/48bde5c0f013094d729fe4b0316ba2a24774b3ff1c52d924a8a4cb04078a/six-1.15.0-py2.py3-none-any.whl

Cache entry deserialization failed, entry ignored

Collecting certifi>=2017.4.17 (from requests>=2.3.0->pyvmomi)

Downloading https://files.pythonhosted.org/packages/5e/a0/5f06e1e1d463903cf0c0eebeb751791119ed7a4b3737fdc9a77f1cdfb51f/certifi-2020.12.5-py2.py3-none-any.whl (147kB)

100% |████████████████████████████████| 153kB 6.5MB/s

Cache entry deserialization failed, entry ignored

Collecting urllib3<1.27,>=1.21.1 (from requests>=2.3.0->pyvmomi)

Downloading https://files.pythonhosted.org/packages/f5/71/45d36a8df68f3ebb098d6861b2c017f3d094538c0fb98fa61d4dc43e69b9/urllib3-1.26.2-py2.py3-none-any.whl (136kB)

100% |████████████████████████████████| 143kB 6.9MB/s

Cache entry deserialization failed, entry ignored

Collecting idna<3,>=2.5 (from requests>=2.3.0->pyvmomi)

Downloading https://files.pythonhosted.org/packages/a2/38/928ddce2273eaa564f6f50de919327bf3a00f091b5baba8dfa9460f3a8a8/idna-2.10-py2.py3-none-any.whl (58kB)

100% |████████████████████████████████| 61kB 4.4MB/s

Cache entry deserialization failed, entry ignored

Collecting chardet<5,>=3.0.2 (from requests>=2.3.0->pyvmomi)

Downloading https://files.pythonhosted.org/packages/19/c7/fa589626997dd07bd87d9269342ccb74b1720384a4d739a1872bd84fbe68/chardet-4.0.0-py2.py3-none-any.whl (178kB)

100% |████████████████████████████████| 184kB 3.5MB/s

Installing collected packages: certifi, urllib3, idna, chardet, requests, six, pyvmomi

Found existing installation: certifi 2018.4.16

Uninstalling certifi-2018.4.16:

Successfully uninstalled certifi-2018.4.16

Found existing installation: urllib3 1.22

Uninstalling urllib3-1.22:

Successfully uninstalled urllib3-1.22

Found existing installation: idna 2.6

Uninstalling idna-2.6:

Successfully uninstalled idna-2.6

Found existing installation: chardet 3.0.4

Uninstalling chardet-3.0.4:

Successfully uninstalled chardet-3.0.4

Found existing installation: requests 2.18.4

Uninstalling requests-2.18.4:

Successfully uninstalled requests-2.18.4

Found existing installation: six 1.9.0

Uninstalling six-1.9.0:

Successfully uninstalled six-1.9.0

Running setup.py install for pyvmomi … done

Successfully installed certifi-2020.12.5 chardet-4.0.0 idna-2.10 pyvmomi-7.0.1 requests-2.25.1 six-1.15.0 urllib3-1.26.2

You are using pip version 10.0.1, however version 20.3.3 is available.

.

You should consider upgrading via the 'pip install --upgrade pip' command.

(You will notice this at the bottom.)

A lot of the time you need to upgrade pip for the modules to install, as Python is always evolving at a fast pace.

.

So run

.

2. pip install --upgrade pip

.

[root@nick roles]# pip install --upgrade pip

Collecting pip

Downloading https://files.pythonhosted.org/packages/54/eb/4a3642e971f404d69d4f6fa3885559d67562801b99d7592487f1ecc4e017/pip-20.3.3-py2.py3-none-any.whl (1.5MB)

100% |████████████████████████████████| 1.5MB 799kB/s

Installing collected packages: pip

Found existing installation: pip 8.1.2

Uninstalling pip-8.1.2:

Successfully uninstalled pip-8.1.2

Successfully installed pip-10.0.1

.

You get the idea……

.

vSphere pre-requisites for this to work:

.

You will need a VMware user who has API access permission for the following items. If the user you have set up in vCenter is unable to see these items, this module will fail. You do not need a user with full admin privileges, which is what a lot of documentation online cryptically says; I have tested this and confirmed that is not the case. Obviously, it is way better to just give admin privileges to the user, trust the people you hire, and use ansible-vault to hide the credentials, which we will get into later.

.

You can also check these parameters in your code using assertions, to validate that they are all working with your user prior to moving on to the next task (see the sketch after the list below).

.

- vSphere API configuration
- VM details
- vcenter_host
- cluster
- datacenter
- folder
- vm_disk_size
- vm_cpu_count
- vm_memory
- vm_vlan
- vm_vlan_name
- vm_dvswitch
- vm_datastore
- VMware Tools and/or open-vm-tools must be installed on the clone template (super important)
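As mentioned above, you can sanity-check that these variables are actually defined before the build starts. A minimal, hedged sketch using the ansible.builtin.assert module; the variable names follow the ones used later in this post, and something like this is presumably what the "Validate Project Requirements" task in the run log further down corresponds to:

- name: Validate Project Requirements
  ansible.builtin.assert:
    that:
      - vcenter_host is defined
      - vcenter_cluster is defined
      - vcenter_dc is defined
      - vm_folder is defined
      - vm_cpu_count is defined
      - vm_memory is defined
      - vm_vlan_name is defined
      - vm_dvswitch is defined
      - vm_datastore is defined
    fail_msg: "One or more required deployment variables are missing"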

.

.

Okay, so now we're on to setting up the vmware_guest module using YAML code.

.

Setting up the vmware_guest module in Ansible:

.

Now, what I like to do is set everything up as a role in Ansible to call from your playbooks; it keeps things cleaner, and it's much easier to find spacing mistakes in your code when writing YAML. Editors can aid in checking for mistakes, but ultimately it's experience. I'm a bit of both, but I tend to just pop open vi and write in there.

.

1. Inside your /etc/ansible directory, create a directory called roles:

mkdir roles

2. Next, move inside that directory, create a name directory for this role, and then go inside that directory:

cd roles
mkdir ansible-vmware-deploy
cd ansible-vmware-deploy

3. Next, create the following directories inside 'ansible-vmware-deploy':

mkdir defaults
mkdir tasks
mkdir meta (this is really only needed for when you're setting up repositories in Bitbucket, Git, etc.)

4. Move into the tasks directory:

cd tasks

Note: We do most of our work in this directory. Your primary YAML file is always called "main.yml"; your playbooks always look for this file when the role is called.

.

5. Open your favorite editor (vi, nano, joe, Visual Studio, whatever) and call the file "main.yml". Inside the file…

.

Setting up the YAML:

1. The first stage of the YAML is to log in to the vCenter host over HTTP, successfully authenticate, and then grab the session cookies to carry out the next set of tasks, which utilise the vmware_guest module.

- name: Login into vCenter and get cookies
  delegate_to: localhost
  uri:
    url: https://{{ vcenter_host }}/rest/com/vmware/cis/session
    force_basic_auth: yes
    validate_certs: no
    method: POST
    user: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
  register: login

.

.

2. Okay, so this is where we are now actually calling the vmware_guest module in YAML. You can see that the code has a lot of areas that are variable-ised. These variables are passed in a couple of ways: defaults go through the defaults directory we created earlier, and host-specific variables go under the host_vars directory in your inventory structure, which we will get into later.

Note: Remember, this is code to deploy from an existing cloned template you have sitting on a datastore somewhere in your environment. The process to deploy a VM using kickstart with DHCP is a bit different to set up; I wrote this to help out those people who can't see the wisdom and efficiency of having DHCP'd deployments.

You will be passing these variables:

.

- name: Create a VM
  vmware_guest:
    hostname: "{{ vcenter_host }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    cluster: "{{ vcenter_cluster }}"
    datacenter: "{{ vcenter_dc }}"

 

Note: name: This will be the name of the new VM created. Keep in mind the VM will also be set up with a short name for the hostname of the server, not the FQDN. You could probably fix this using vmshell; I used a completely separate role, with Jinja templates, to set up the network for physical machines and passed the new name as a variable inside that role. But that's for another post.

    name: "{{ inventory_hostname }}"
    folder: "{{ vm_folder }}"
    template: "{{ VMTemplate }}"
    state: "{{ vm_state }}"

Note: guest_id: this is what kind of OS the VM will run; almost every hypervisor asks for this prior to creating a VM. You can find the list online.

    guest_id: "{{ vm_guest_id }}"

Note: disk: in this section you could technically pass the disks through as a variable in your host_vars for specific hosts, but since we're using a template, I kept these parameters static here inside the role (see the sketch just after this task).

    disk:
      - size_gb: 80
        type: thin
        datastore: "{{ vm_datastore }}"
      - size_gb: 100
        type: thin
        datastore: "{{ vm_datastore }}"
    hardware:
      memory_mb: "{{ vm_memory }}"
      num_cpus: "{{ vm_cpu_count }}"
      scsi: paravirtual

 

Note: customization: This section is very important, because without it your DNS in /etc/resolv.conf will not be configured correctly. A lot of people have a hell of a time with this on the net, as the parsing of this in YAML is a bit tricky, and people resort to using vm_guest_file to update /etc/resolv.conf, which sucks because now you need the root user/password via SSH. My way will work.

    customization:
      dns_servers: "{{ vm_dns_servers }}"
      dns_suffix: "{{ vm_dns_suffix }}"


Note: networks: This section uses vmware-tools or open-vm-tools to update the network config on the host after powering on the VM, but before the OS is booted (provided you said to power it on in your host_vars file). It helps people get around the issue of having no DHCP and having to deploy each server from a template with the same static address on a dedicated VLAN. It will go and update the VM network parameters so the template deploys on whatever VLAN you choose, with a different IP, gateway, and netmask. It will also register a new MAC address on the VM, so you don't end up with VMs with duplicate MAC addresses. Lastly, it will update /etc/hosts with the new IP and short name of the server.

    networks:
      - name: "{{ vm_vlan_name }}"
        type: static
        dvswitch_name: "{{ vm_dvswitch }}"
        ip: "{{ vm_ip }}"
        netmask: "{{ vm_netmask }}"
        gateway: "{{ vm_gateway }}"
        start_connected: "{{ vm_connected }}"
    # wait_for_ip_address: yes (this is if you are using DHCP)
  delegate_to: localhost
  register: vm_deploy

.

Note: This section just spits out verbose information on how the build went and the MAC address of the VM. It is handy to pay attention to this so you can ensure your template and your new VM don't have duplicate MACs. If they do, you will need to go into vSphere, find the VM, remove the network adapter and re-add it manually to register a new MAC.

.

- debug:
    var: vm_deploy.instance.hw_eth0.macaddress

- debug:
    var: vm_deploy
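As flagged in the disk note above, if you would rather drive the disk layout from host_vars instead of keeping it static in the role, here is a hedged sketch; vm_disks is an assumed variable name, not one used elsewhere in this post. In the host_vars file for the VM:

vm_disks:
  - size_gb: 80
    type: thin
    datastore: "{{ vm_datastore }}"
  - size_gb: 100
    type: thin
    datastore: "{{ vm_datastore }}"

and in the role, the disk section would then simply become:

    disk: "{{ vm_disks }}"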

.

6. Okay, so now we need to set up our defaults to pass to the role we just created.

So go into the defaults directory for the role:

cd /etc/ansible/roles/ansible-vmware-deploy/defaults

Create another file called 'main.yml':

vi main.yml and copy the contents below.

Note: It's easier to put all your defaults here and then comment out the ones you want to pass through your host-specific host_vars files, once you have got it working the way you want.

.

vm_disks: 100

vm_cpu_count: 2

vm_state: present

vm_memory: 2048

#vm_datastore: vmfs-datastore1234

vcenter_username: BruceWayne

vcenter_password: (you will put the ansible-vault encrypted variable here; for now just put in your password for testing)

vm_dvswitch: DvSwitch

vcenter_cluster: ProdCluster

vcenter_host: vcenter.nicktailor.com

vcenter_dc: London

#vm_folder: /Production/Unix/

#vm_vlan_name: VM76123

vm_guest_id: rhel7_64Guest

#VMTemplate: redhat-template2020

.

 Save the file defaults/main.yml

.

Ansible Hosts and Inventory:

.

Okay, so this is where everyone handles things uniquely. I personally like to take the approach of creating inventory based on environment. It's logical and the best way to manage hosts in very large infrastructures.

So if you have DEV/STAGING/PRODUCTION as your environments, then I would set it up as such:

.

.

1. Inside your /etc/ansible directory, create the following:

mkdir -p /etc/ansible/inventory
mkdir -p /etc/ansible/inventory/DEV
mkdir -p /etc/ansible/inventory/STAGING
mkdir -p /etc/ansible/inventory/PRODUCTION

2. Inside each environment (DEV, STAGING, PRODUCTION) you want to create the following:

mkdir -p /etc/ansible/inventory/DEV/group_vars
(This is where you can pass group variables if you have hosts set up as groups in the hosts file we are about to create.)

mkdir -p /etc/ansible/inventory/DEV/host_vars
(This is where you pass specific variables per host instead of groups.)

touch /etc/ansible/inventory/DEV/hosts
3. Open up one of the hosts files in your favorite editor (vi, nano, joe, Visual Studio, etc.):

vi /etc/ansible/inventory/DEV/hosts

.

For the purposes of this post, we are just going to create one group:
=====================================

.

[All]

nicktestvm.nicktailor.com ansible_host=192.168.1.200

=====================================

.

 Save file

.

Note: ansible_host=(ip) is used when you want to override DNS for the host and tell Ansible not to resolve DNS for this host and only connect to this IP. You don't need it here; however, if you're using a static address to deploy VMs initially, not using vmware-tools to configure the network, and going with SSH afterwards to configure the host, then Ansible will need to know which host to connect to in order to set up the network. So I just like to have it there in case I want to temporarily tell Ansible to look here for this server.

4. Now we want to create a host_vars file for the specific VM host we want to deploy.

Create a host_vars file for the new host you want to deploy:

vi /etc/ansible/inventory/DEV/host_vars/nicktestvm

Note: You can see that all the variables that were in the role and defaults are now being passed through here for this specific host. It has to be done in this fashion for it all to work correctly; if you pass all of this through the role it may crap out on you.

.

#vm_requirements

vm_ip: 192.168.1.86

vm_netmask: 255.255.255.0

vm_gateway: 192.168.1.1

vm_vlan_name: VM76123

VMTemplate: redhat-template2020

vm_folder: /Production/Unix

vm_state: poweredon

vm_connected: true

vm_datastore: vmfs-datastore1234

Note: vm_dns_servers: this one is very important. This was the only way I could get the DNS servers to parse and update /etc/resolv.conf properly. If you list them out individually as one-liners, it seems to hit a bug and will simply empty out the file, which will leave your VM unable to resolve DNS.

vm_dns_servers: [8.8.8.1, 8.8.8.2]

vm_dns_suffix: nicktailor.co.uk

.

 Save the file

.

Setting Ansible Vault and Encrypted variables:

.

5. Set up the vmware user password to be encrypted using ansible-vault. This can be easily decrypted by anyone who has the vault password, but the benefit is that it is not directly visible in your open code for prying eyes, which is just generally a good idea.

So you want to create a vault password for the variable inside defaults, which was "vcenter_password". Keep in mind the variable name is a part of the encryption process.

There are a couple of ways to do this: via a file, or via a prompt. I'm going to show you how to do it via a file.

First create a vault password file:

echo "password" >> vault.pass.txt
cat vault.pass.txt (to ensure the password is now there)

This is the password for the Ansible vault, not your vcenter_password.

Now encrypt the vcenter_password as a variable inside the vault with id 1. It is good to use ids in case you want to have multiple passwords inside your vault.

Note: the --name is the variable you want to call in your code, so whatever you call it has to match.

ansible-vault encrypt_string --vault-id 1@vault.pass.txt 'vcenter-password-here' --name 'vcenter_password'

 

vcenter_password: !vault |

$ANSIBLE_VAULT;1.2;AES256;1

31623638366337643437633065623538663565336232333863303763336364396438663032363364

3665376363663839306165663435356365643965343364310a313832393261363466393237666666

36666437626563386366653938383565663361646333333732336439356633616231653639626465

3130656134383365320a323032366238303366336562653865663130333963316237393839373830

65396139323739323266643961653766333633366638336435613933373966643561

Encryption successful

.

6. Okay, now you want to copy, by highlighting, the section below:

.

.

vcenter_password: !vault |

$ANSIBLE_VAULT;1.2;AES256;1

31623638366337643437633065623538663565336232333863303763336364396438663032363364

3665376363663839306165663435356365643965343364310a313832393261363466393237666666

36666437626563386366653938383565663361646333333732336439356633616231653639626465

3130656134383365320a323032366238303366336562653865663130333963316237393839373830

65396139323739323266643961653766333633366638336435613933373966643561

.

Open your /etc/ansible/roles/ansible-vmware-deploy/defaults/main.yml:

vi /etc/ansible/roles/ansible-vmware-deploy/defaults/main.yml

Next, replace the whole 'vcenter_password' line with the highlighted section above and save the file.

You should also store the vault password offsite in some password database and delete the vault.pass.txt file you created.

.

Deploy VM with ansible:

.

From inside the /etc/ansible directory you now need to create the playbook that will call the role you just set up.

Create a new playbook file, standard_build.yml:

vi standard_build.yml

Now add the following:

- hosts: all
  gather_facts: no
  roles:
    - role: ansible-vmware-deploy

Save the file.

.

Now you want to run the new role to deploy against the environment and specific host we set up earlier.

Still from inside the /etc/ansible directory (you want to run all your playbooks from here):

ansible-playbook -i inventory/DEV/hosts --ask-vault-pass standard_build.yml

.

.

Note: An important thing to remember when deploying Linux machines from a template is that all your machines will have the same 'Network' UUID as the template machine. If you define these, you will need to write some code to fix that up after the VM is deployed and powered up. Check out the link below on how to do that.

http://www.nicktailor.com/?p=1177

.

Special Note: if you attempt to deploy multiple hosts at the same time, this will deploy 5 clones in parallel at a time and not one by one, which reduces deployment time significantly. I didn't bother to see if I could override this… :) (see the sketch below if you ever need to).
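For what it's worth, that batch of 5 comes from Ansible's default forks setting rather than anything in this role. If you ever did want to force one-at-a-time deploys, the playbook-level serial keyword is the usual knob; a hedged sketch, not something used in this post:

- hosts: all
  gather_facts: no
  serial: 1        # build one VM at a time; raise forks in ansible.cfg to go wider
  roles:
    - role: ansible-vmware-deploy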

Output log of successful automated ansible deploy:

.

[root@nickansible]# ansible-playbook -i inventory/DEV/hosts standard_build.yml --ask-vault-pass --limit 'nicktestvm'

.

Vault password: (paste password here in your shell window)

.

PLAY [all] ****************************************************************************************************************************************************************************************************

.

TASK [ansible-vmware-deploy : Validate Project Requirements] **********************************************************************************************************************************************

ok

.

TASK [ansible-vmware-deploy : Login into vCenter and get cookies] *****************************************************************************************************************************************

ok: [nicktestvm]

.

TASK [ansible-vmware-deploy : Create a VM] ****************************************************************************************************************************************************************

changed: [nicktestvm]

.

TASK [ansible-vmware-deploy : debug] **********************************************************************************************************************************************************************

ok: [nicktestvm] => {

“vm_deploy.instance.hw_eth0.macaddress”: “00:40:51:53:11:a6”

}

.

nicktestvm            : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

.


How to check if ports are open on an array of servers

Okay, there are a whole bunch of ways you can do this. This is just the way I played around with to save myself a bunch of time, using ncat (also previously known as netcat).

1. Ensure your jumphost can SSH to all your newly deployed machines, either with a root password or an SSH key of some sort.

2. You will also need to install ncat:
yum install nmap-ncat (Red Hat/CentOS)
Note: ensure you have this installed on all the new servers.

3. Open your editor, copy and paste the script below, and save the file:
vi portcheckscriptnick.sh (and save)
chmod +x portcheckscriptnick.sh (change permissions to executable)

portcheckscriptnick.sh – this will check to see if your new server can talk to all the hosts below and check to see if those ports are up or down on each

============================

#!/bin/bash

hosts="nick1 nick2 nick3 nick4"

for host in $hosts; do
  for port in 22 53 67 68
  do
    if ncat -z $host $port
    then
      echo "port $port $host is up"
    else
      echo "port $port $host is down"
    fi
  done
done
========================================

4. Next you want to create a list of servers for a for loop to cycle through, to check that all of those servers can communicate with those machines and ports.
Create a file called servers.txt:
vi servers.txt
Add a bunch of hosts in a single column.

Example:

Server1

Server2

Server3

Server4

Save the file servers.txt.

.

5. Now what we're going to do is have a for loop cycle through the list, logging into each host, running that script, and outputting the results to a file for us to look at.

6. Run the following to check the servers and see if each server can communicate with the necessary hosts and ports. If you see that any are down, then you will need to check the firewalls to see why the host is unable to communicate.

for HOST in $(cat servers.txt) ; do ssh root@$HOST "bash -s" < portcheckscriptnick.sh ; echo $HOST ; done 2>&1 | tee -a port.status

Note: the file port.status will be created on the jump host, and you can simply look through it to see if any ports were down on whichever hosts.
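As an aside, since most of this blog is about Ansible: the same check can be done without a shell script at all, using the ansible.builtin.wait_for module running on each new server. A hedged sketch, assuming an inventory group called newservers and the same hosts and ports as the script above:

- hosts: newservers
  gather_facts: no
  tasks:
    - name: Check that each new server can reach these hosts and ports
      ansible.builtin.wait_for:
        host: "{{ item.0 }}"
        port: "{{ item.1 }}"
        timeout: 5
      with_nested:
        - [ 'nick1', 'nick2', 'nick3', 'nick4' ]
        - [ 22, 53, 67, 68 ]
      ignore_errors: yes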

.

This is what the script output looks like on one host if it's working properly:

[root@nick ~]# ./portcheckscriptnick.sh

port 22 192.168.1.11 is up

port 53 192.168.1.11 is down

port 67 192.168.1.11 is down

port 68 192.168.1.11 is down

.

This is what it will look like when you run against your array of new hosts from your jumpbox

[root@nick ~]# for HOST in $(cat servers.txt) ; do ssh root@$HOST "bash -s" < portcheckscriptnick.sh ; echo $HOST ; done

root@192.168.1.11’s password:

port 22 nick1 is up

port 53 nick1 is down

port 67 nick1 is down

port 68 nick1 is down

port 22 nick2 is up

port 53 nick2 is down

port 67 nick2 is down

port 68 nick2 is down

How to setup SMTP port redirect with IPTABLES and NAT

RedHat/Centos

Okay, it's really easy to do. You will need to add the following to /etc/sysctl.conf.
Note: these are kernel parameter changes.

1. vi /etc/sysctl.conf and add the following lines:

kernel.sysrq = 1

net.ipv4.tcp_syncookies=1

net.ipv4.ip_forward=1 (important)

net.ipv4.conf.all.route_localnet=1 (important)

net.ipv4.conf.default.send_redirects = 0

net.ipv4.conf.all.send_redirects = 0

.

2. Save the file and run:
sysctl -p (this will load the new kernel parameters)

3. Now, if you already have iptables running, you want to save the running config and add the new redirect rules:
iptables-save > iptables.back

4. Now you want to edit the iptables.back file and add the redirect rules:
vi iptables.back

It will probably look something like the rules below.

EXAMPLE

# Generated by iptables-save v1.2.8 on Thu July 6 18:50:55 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2211:2804881]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type 255 -j ACCEPT
-A RH-Firewall-1-INPUT -p esp -j ACCEPT
-A RH-Firewall-1-INPUT -p ah -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 1025 -m state --state NEW -j ACCEPT (make sure to have open)
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 8443 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT (make sure to have open)
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 21 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 106 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 143 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 465 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 993 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 995 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 8222 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Thu July 6 18:50:55 2020

# ADD this section with another COMMIT like below

# Generated by iptables-save v1.2.8 on Thu July 6 18:50:55 2020
*nat
:PREROUTING ACCEPT [388:45962]
:POSTROUTING ACCEPT [25:11595]
:OUTPUT ACCEPT [25:11595]
-A PREROUTING -p tcp -m tcp --dport 1025 -j REDIRECT --to-ports 25
COMMIT
# Completed on Thu July 6 18:50:55 2020

.

 Save the file

.

5. Next you want to reload the new config:
iptables-restore < iptables.back

6. Now you should be able to see the new rules and test:
iptables -L -n -t nat (should show the rules)

.

[root@nick ~]# iptables -L -n | grep 1025
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:1025 state NEW

[root@nick ~]# iptables -L -n -t nat | grep 1025
REDIRECT   tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:1025 redir ports 25

.

Note:

You will need to run telnet from outside the host, as you can't NAT to localhost locally. 🙂

.

[root@nick1 ~]# telnet 192.168.86.111 1025

Trying 192.168.86.111…

Connected to localhost.

Escape character is ‘^]’.

220 nick.ansible.com ESMTP Postfix
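If you would rather manage the same redirect with Ansible (this blog's usual theme), the ansible.posix.sysctl and ansible.builtin.iptables modules cover both halves. A hedged sketch only, assuming a group name of mailhosts, and not a drop-in for the full ruleset above:

- hosts: mailhosts
  become: yes
  tasks:
    - name: Enable the kernel parameters the redirect needs
      ansible.posix.sysctl:
        name: "{{ item }}"
        value: '1'
        sysctl_set: yes
        state: present
        reload: yes
      loop:
        - net.ipv4.ip_forward
        - net.ipv4.conf.all.route_localnet

    - name: Redirect inbound tcp/1025 to tcp/25
      ansible.builtin.iptables:
        table: nat
        chain: PREROUTING
        protocol: tcp
        destination_port: "1025"
        jump: REDIRECT
        to_ports: "25"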

How to rebuild a drive that’s fallen out of a software raid

Now, I know nobody uses this kind of RAID technology anymore, but it was one of the cool things I learned from my mentor when I first started my career, centuries ago. I happened to find this in my archives and thought I would write it up to share.

There is another way to do this using mdadm and sfdisk; when I find time I will share how to do that as well.

1. The first thing you want to do is check whether a drive has fallen out of the RAID by running the following command:

 cat /proc/mdstat

md2 : active raid1 hda3[0] hdc3[1]

524096 blocks [2/2] [UU]

md1 : active raid1 hda2[0] hdc2[1]

524096 blocks [2/2] [UU]

md0 : active raid1 hda1[0]

78994304 blocks [2/1] [U_] *You notice this one is showing a drive has fallen out*

Note: If you see this, take notice of the one with [U_]; this means that a drive has fallen out of the raid.

2. To enter it back in, run the lines below, based on the drive assignments of the partitions above that are still good.

 raidhotadd /dev/md0 /dev/hdc1
 echo -n 6666666 > /proc/sys/dev/raid/speed_limit_max (this increases the rebuild speed)

How to rebuild a failed drive in software if you replaced the drive:

 cat /proc/mdstat

md2 : active raid1 hda3[0] hdc3[1]
524096 blocks [2/2] [UU]
md1 : active raid1 hda2[0] hdc2[1]
524096 blocks [2/2] [UU]
md0 : active raid1 hda1[0]
78994304 blocks [2/1] [U_]

2. Recreate the partitions on the new drive by doing the following, using the same mirror drive designations from /proc/mdstat.

 sfdisk -d /dev/hda(source) | sfdisk /dev/hdc(destination) (this duplicates all three partitions from the source drive onto the new drive)
  echo 6666666666 > /proc/sys/dev/raid/speed_limit_max (increase rebuild speed)

3. Next check the partition by running

 df -h
 fdisk -l

Disk /dev/hdc: 81.9 GB, 81964302336 bytes

16 heads, 63 sectors/track, 158816 cylinders

Units = cylinders of 1008 * 512 = 516096 bytes

.

Device Boot Start End Blocks Id System

/dev/hdc1 * 1 156735 78994408+ fd Linux raid autodetect

/dev/hdc2 156736 157775 524160 fd Linux raid autodetect

/dev/hdc3 157776 158815 524160 fd Linux raid autodetect

.

Disk /dev/hda: 81.9 GB, 81964302336 bytes

16 heads, 63 sectors/track, 158816 cylinders

Units = cylinders of 1008 * 512 = 516096 bytes

.

Device Boot Start End Blocks Id System

/dev/hda1 * 1 156735 78994408+ fd Linux raid autodetect

/dev/hda2 156736 157775 524160 fd Linux raid autodetect

/dev/hda3 157776 158815 524160 fd Linux raid autodetect

———————————————————————

Filesystem Size Used Avail Use% Mounted on

/dev/md0 75G 11G 60G 16% /

none 251M 0 251M 0% /dev/shm

/dev/md1 496M 8.1M 463M 2% /tmp

4. Next you want to rebuild the partitions on the new drive, so run the following; you will need to update the drive designations according to your drive assignments.

 raidhotadd /dev/md0 /dev/hdc1
 raidhotadd /dev/md1 /dev/hdc2
 raidhotadd /dev/md2 /dev/hdc3

.

Note: the primary partition should match the new drive designation, '/dev/md0 /dev/hdc1'.

How to add a new SCSI LUN while server is Live

REDHAT/CENTOS:

In order to get wwn ids from a server:

 cat /sys/class/scsi_host/host0/device/fc_host\:host0/port_name
 cat /sys/class/scsi_host/host1/device/fc_host\:host1/port_name

Or:

 systool -av -c fc_host | grep port_name | awk '{ print $3 }' | cut -d\" -f 2 | cut -dx -f 2

.

1.To add a new SAN LUN while live:

Run this to find the new disks after you have added them to your VM

 rescan-scsi-bus.sh

Note: rescan-scsi-bus.sh is part of the sg3-utils package

2. Check that it has been found; it will show up as mpath(something):
 multipath -l

# That’s it, unless you want to fix the name from mpath(something) to something else

1.Change the shortcut name

 vi /etc/multipath/bindings

 

2. Remove the autogenerated default mpath device:

 multipath -f mpath(something)

# Go into the multipathd console and re-add the multipath device with your new shortcut name (nickdsk2 in this case)

 multipathd -k
 add map nickdsk2

Note: Not going to lie, sometimes you can do all this and still need a reboot; the majority of the time this should work. But what do I know… haha.

How to figure out switch and port via tcpdump

Okay, if you have ever worked in a place where the network was complete chaos, with no documentation or network maps to help you figure out where something resides, you can sometimes use tcpdump to figure out which switch and port the server is sitting on.

Syntax

tcpdump -nn -v -i <NIC_INTERFACE> -s 1500 -c 1 'ether[20:2] == 0x2000'


Example:

root@ansible:~ # tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether[20:2] == 0x2000'
tcpdump: listening on eth3, link-type EN10MB (Ethernet), capture size 1500 bytes
03:25:22.146564 CDPv2, ttl: 180s, checksum: 692 (unverified), length 370
   Device-ID (0x01), length: 11 bytes: 'switch-sw02'
   Address (0x02), length: 13 bytes: IPv4 (1) 192.168.1.15
   Port-ID (0x03), length: 15 bytes: 'Ethernet0/1'
   Capability (0x04), length: 4 bytes: (0x00000028): L2 Switch, IGMP snooping
   Version String (0x05), length: 220 bytes:
   Cisco Internetwork Operating System Software
   IOS (tm) C2950 Software (C2950-I6Q4L2-M), Version 12.1(14)EA1a, RELEASE SOFTWARE (fc1)
   Copyright (c) 1986-2003 by cisco Systems, Inc.
   Compiled Tue 02-Sep-03 03:33 by Nicola tesla
   Platform (0x06), length: 18 bytes: 'cisco WS-C2950T-24'
   Protocol-Hello option (0x08), length: 32 bytes:
   VTP Management Domain (0x09), length: 6 bytes: 'ecomrd'
   Duplex (0x0b), length: 1 byte: full
   AVVID trust bitmap (0x12), length: 1 byte: 0x00
   AVVID untrusted ports CoS (0x13), length: 1 byte: 0x00
1 packets captured
2 packets received by filter
0 packets dropped by kernel

root@ansible:~ #

Written by Nick Tailor

How to increase disk size on a virtual SCSI drive using gdisk

1. Log in to the VMware vSphere Client and shut down the server VM guest.

2. Select the VM guest server and click "Edit virtual machine settings". The Virtual Machine Properties window will appear. Under "Hardware", click Hard Disk 2 (which is the /data partition) and edit the provisioned size to 200 GB. Power ON the VM guest after editing the disk size.

3. Take a VM snapshot of the VM guest.

4. Log on to the VM guest using an SSH client, like PuTTY, with the "root" user.

5. List the SCSI devices using the command: cat /proc/scsi/scsi

6. Run the following command to see the name of the partition:
ls -d /sys/block/sd*/device/scsi_device/* | awk -F '[/]' '{print $4,"- SCSI",$7}'

7. Run the following commands to confirm that the size of the SCSI disk you increased in step 2 has been updated:
echo 1 > /sys/class/scsi_device/2\:0\:1\:0/device/rescan
fdisk -l | grep Disk
df -h

8. Stop cron and services using:
service crond stop

9. Unmount the /data partition using: umount /data
Note: If you observe a "Device is busy" error, make sure that your current session is not in the /data partition.

10. Perform the following steps to grow the added disk space of the /data partition, based on the partition type.

For GPT partition type

In this case the parted -l command will give the below for the "sdb" disk partition:

*****************************************************
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   File system  Name       Flags
 1      1049kB  215GB  215GB  ext4         Linux LVM  lvm
*****************************************************

11. Execute the command: gdisk /dev/sdb
12. Type "p" to print the partition table.
13. Type "d" to delete the partition.
14. Type "p" to check that the partition is deleted.
15. Type "n" to create a new partition.
16. Type "1" as the partition number.
17. Press "Enter" twice.
18. Type "8E00" as the GUID.
19. Type "p" to check the newly added partition.
20. Type "w" to write the altered partition table.
21. Type "Y" to continue.
22. To mount the /data partition, run: mount /data
23. To resize the file system, run: resize2fs /dev/sdb1
24. To check the increased disk space, run: df -h

.

How to compare your route table isn’t missing any routes from your ansible config

REDHAT/CENTOS

Okay, this is for those of you who use Ansible like me and deal with complicated networks with route lists a mile long on servers that you might need to migrate or copy into Ansible. You want to save yourself some time and be accurate by ensuring the routes are correct and the file isn't missing any, as missing routes can be problematic and time consuming to troubleshoot after the fact.

Here is something cool you can do.

1. On the client server, you can use the "ip" command with the r flag to list the routes.

Example:

It will look something like this.

[root@ansibleserver]# ip r
default via 192.168.1.1 dev enp0s8
default via 10.0.2.2 dev enp0s3 proto dhcp metric 100
default via 192.168.1.1 dev enp0s8 proto dhcp metric 101
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
192.168.1.0/24 dev enp0s8 proto kernel scope link src 192.168.1.12 metric 101
10.132.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 1011
10.132.10.0/24  dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.136.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 1011
10.136.10.0/24  dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.134.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 1011
10.133.10.0/24  dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.127.10.0/24  dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.122.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.134.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.181.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.181.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.247.200.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.172.300.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.162.100.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.161.111.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.161.0.0/16   dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.233.130.0/24 dev mgt proto kernel scope link src 10.16.110.1 metric 101
10.60.140.0/24   dev mgt proto kernel scope link src 10.16.110.1 metric 101

Now what you want to do is take all the networks that show up on the "mgt" interface and put them in a text file.

vi ips1

Copy the routes in, one after the other in a single column, and save the file.

10.132.100.0/24
10.132.10.0/24

10.136.100.0/24
10.136.10.0/24
10.134.100.0/24
10.133.10.0/24
10.127.10.0/24
10.122.100.0/24

2. Now, your Ansible route section will probably look something like this…

Example of the Ansible YAML file "ansiblefile":
routes:
    - device: mgt
      gw: 10.16.110.1
      route:
        - 10.132.100.0/24
        - 10.132.10.0/24
        - 10.136.100.0/24
        - 10.136.10.0/24
        - 10.134.100.0/24
        - 10.133.10.0/24
        - 10.127.10.0/24
        - 10.122.100.0/24
        - 10.134.100.0/24
        - 10.181.100.0/24
        - 10.181.100.0/24
        - 10.247.200.0/24
        - 10.172.300.0/24
        - 10.162.100.0/24
        - 10.161.111.0/24
        - 10.161.0.0/16
        - 10.233.130.0/24
3. So what you want to do now is copy and paste the routes from the file so they line up perfectly with the correct spacing in your YAML file.
Note: If they aren't lined up correctly, your playbook will fail.
4. You can either copy them into a text editor like TextPad or Notepad++ and just use the replace function to add the "- " prefix (8 spaces before the - and 1 space between the - and the ip), or you can use a perl or sed script to do it right from the command line.
# If you want to edit the file in-place
sed -i -e 's/^/prefix/' file

Example:

sed -e 's/^/        - /' ips1 > ips2
5. Okay, now you should have a new file called ips2 that looks like the below, with 8 spaces from the left margin.
        - 10.136.100.0/24
        - 10.136.10.0/24
        - 10.134.100.0/24
        - 10.133.10.0/24
        - 10.127.10.0/24
        - 10.122.100.0/24
6. Now cat that ips2 file:
cat ips2
Then highlight everything inside the file.
[highlighted]
        - 10.136.100.0/24
        - 10.136.10.0/24
        - 10.134.100.0/24
        - 10.133.10.0/24
        - 10.127.10.0/24
        - 10.122.100.0/24
[highlighted]

7. Open your Ansible YAML that contains the route section and, just below "route:", paste what you highlighted. Everything should line up perfectly; save the Ansible file.

routes:
    - device: mgt
      gw: 10.16.110.1
      route:
        [paste highlight]
        - 10.132.100.0/24
        - 10.132.10.0/24
        - 10.136.100.0/24
        - 10.136.10.0/24
        - 10.134.100.0/24
        - 10.133.10.0/24
        [paste highlight]

Okay, now we need to check to ensure that you didn't accidentally miss any routes between the route table and your Ansible YAML.

8. Now, using the original ips1 file (just the route table, without the "- " prefix):
Make sure the Ansible YAML file and the ips1 file are inside the same directory, to make life easier.
We can run a little compare script like so:

while read a b c d e; do if [[ $(grep -w $a ansiblefile) ]]; then :; else echo $a $b $c $d $e; fi ; done < <(cat ips1)

Note:
If there are any routes missing from the ansible file it will spit them out. You can keep running this until the list shows no results, minus any gateway ips of course.

Example:

[root@ansibleserver]# while read a b c d e; do if [[ $(grep -w $a ansiblefile) ]]; then :; else echo $a $b $c $d $e; fi ; done < <(cat ips1)
10.168.142.0/24
10.222.100.0/24
10.222.110.0/24

By Nick Tailor

How to change the currently active slave of a bonded interface

RedHat / CentOS :

Interface bonding, as we all know, is very useful in providing fault tolerance and increased bandwidth. We can change the active slave interface of a bond without interrupting production work. In the example below we have the bonded interface bond0 with 2 slaves, em0 and em1 (em0 being the currently active slave). We will be replacing slave em0 with a new slave, em2.

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: em0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 5000
Down Delay (ms): 5000

Slave Interface: em0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:28:b2:65:26
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:28:b2:65:27
Slave queue ID: 0

1. Change the active slave to em1

The ifenslave command can be used to attach, detach, or change the currently active slave interface of the bond. Now, change the active slave interface to em1:

# ifenslave -c bond0 em1

Check the bonding status again to ensure that em1 is the new active slave :

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 5000
Down Delay (ms): 5000

Slave Interface: em0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:3b:26:b2:68:26
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:3b:26:b2:68:27
Slave queue ID: 0
The switch of the active slave should take effect immediately, but on critical production systems, please schedule a maintenance window or test in an identical test environment first.

2. Attach the new slave interface

We can now attach the new slave interface em2 to the bonding.

# ifenslave bond0 em2

3. Detach the old slave interface

Once we have attached the new slave interface, we can detach the old slave and remove it from the bond.

# ifenslave -d bond0 em0

4. Verify

Confirm that the new slave is now the standby interface in the bonding.

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 5000
Down Delay (ms): 5000

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:29:bf:55:30
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:19:1a:d1:43:61
Slave queue ID: 0

If you want to make the changes more permanent

The changes we just made are temporary and will be cleared after a reboot of the server. To make them permanent we have to change a few config files.

Make sure you delete the file /etc/sysconfig/network-scripts/ifcfg-em0, as we are no longer using this interface in the bond. Then create a new file for the new slave interface in the bond:

# rm /etc/sysconfig/network-scripts/ifcfg-em0
# vi /etc/sysconfig/network-scripts/ifcfg-em2
DEVICE=em2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes