Category: Linux
Fixing Read-Only Mode on eLux Thin Clients
If your eLux device boots into a read-only filesystem or prevents saving changes, it’s usually due to the write filter or system protection settings. Here’s how to identify and fix the issue.
Common Causes
- Write Filter is enabled (RAM overlay by default)
- System partition is locked as part of image protection
- Corrupted overlay from improper shutdown
Fix 1: Temporarily Remount as Read/Write
sudo mount -o remount,rw /
This allows you to make temporary changes. They will be lost after reboot unless you adjust the image or profile settings.
Fix 2: Enable Persistent Mode via the EIS Tool
- Open your image project in the EIS Tool
- Go to the Settings tab
- Locate the write filter or storage persistence section
- Set it to Persistent Storage
- Export the updated image and redeploy
Fix 3: Enable Persistence via Scout Configuration Profile
- Open Scout Enterprise Console
- Go to Configuration > Profiles
- Edit the assigned profile
- Enable options like:
- Persistent user data
- Persistent certificate storage
- Persistent logging
- Save and reassign the profile
Fix 4: Reimage the Device
- If the system is damaged or stuck in read-only permanently, use a USB stick or PXE deployment to reflash the device.
- Ensure the new image has persistence enabled in the EIS Tool before deploying.
Check Filesystem Mount Status
mount | grep ' / '
If the output contains (ro), the system is in read-only mode.
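The same test can be wrapped in a small helper for scripting. A minimal sketch (the function name and the sample mount line are illustrative, not part of eLux):

```shell
# Report whether the root filesystem is mounted read-only.
# Reads `mount`-style output on stdin.
check_root_ro() {
  if grep ' / ' | grep -q '(ro'; then
    echo "read-only"
  else
    echo "read-write"
  fi
}

# Example with a sample mount line; on a live system run: mount | check_root_ro
printf '/dev/sda2 on / type ext4 (ro,relatime)\n' | check_root_ro   # prints: read-only
```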
Final Notes
- eLux protects system partitions by design — use Scout and EIS Tool to make lasting changes
- Remounting manually is fine for diagnostics but not a long-term fix
- Always test changes on a test device before rolling out to production
Elux Image Deployment
How to Create and Deploy a Custom eLux Image at Scale
This guide is intended for Linux/VDI system administrators managing eLux thin clients across enterprise environments. It covers:
- Part 1: Creating a fresh, customized eLux image
- Part 2: Deploying the image at scale using Scout Enterprise
Part 1: Creating a Custom eLux Image with Tailored Settings
Step 1: Download Required Files
- Go to https://www.myelux.com and log in.
- Download the following:
- Base OS image (e.g., elux-RP6-base.ufi)
- Module files (.ulc) – Citrix, VMware, Firefox, etc.
- EIS Tool (eLux Image Stick Tool) for your admin OS
Step 2: Install and Open the EIS Tool
- Install the EIS Tool on a Windows or Linux system.
- Launch the tool and click New Project.
- Select the downloaded .ufi base image.
- Name your project (e.g., elux-custom-v1) and confirm.
Step 3: Add or Remove Modules
- Go to the Modules tab inside the EIS Tool.
- Click Add and import the required .ulc files.
- Deselect any modules you don’t need.
- Click Apply to save module selections.
Step 4: Modify System Settings (Optional)
- Set default screen resolution
- Enable or disable write protection
- Choose RAM overlay or persistent storage
- Enable shell access if needed for support
- Disable unneeded services
Step 5: Export the Image
- To USB stick: click "Write to USB Stick", then select your USB target drive.
- To a file for network deployment: click "Export Image", then save your customized .ufi (e.g., elux-custom-v1.ufi).
Part 2: Deploying the Custom Image at Scale Using Scout Enterprise
Step 1: Import the Image into Scout
- Open Scout Enterprise Console
- Navigate to Repository > Images
- Right-click → Import Image
- Select the .ufi file created earlier
Step 2: Create and Configure a Profile
- Go to Configuration > Profiles
- Click New Profile
- Configure network, session, and UI settings
- Save and name the profile (e.g., Citrix-Kiosk-Profile)
Step 3: Assign Image and Profile to Devices or Groups
- Navigate to Devices or Groups
- Right-click → Assign OS Image
- Select your custom .ufi
- Right-click → Assign Profile
- Select your configuration profile
Step 4: Deploy the Image
Option A: PXE Network Deployment
- Enable PXE boot on client devices (via BIOS)
- Ensure PXE services are running (Scout or custom)
- On reboot, clients auto-deploy image and config
Option B: USB Stick Installation
- Boot client device from prepared USB stick
- Follow on-screen instructions to install
- Device registers and pulls config from Scout
Step 5: Monitor Deployment
- Use Logs > Job Queue to track installations
- Search for devices to confirm version and status
Optional Commands
Inspect or Write Images
# Mount .ufi image (read-only)
sudo mount -o loop elux-custom.ufi /mnt/elux
# Write image to USB on Linux
sudo dd if=elux-custom.ufi of=/dev/sdX bs=4M status=progress
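After dd completes, it is worth confirming that what landed on the stick actually matches the image. A minimal sketch (the helper name is mine, and /dev/sdX is a placeholder as above; GNU stat/cmp assumed):

```shell
# Compare the first <image-size> bytes of the target against the source image.
verify_image() {
  src="$1"; dst="$2"
  size=$(stat -c%s "$src")                  # image size in bytes (GNU stat)
  if cmp -n "$size" "$src" "$dst" >/dev/null 2>&1; then
    echo "verify OK"
  else
    echo "verify FAILED"
  fi
}

# e.g. sudo verify_image elux-custom.ufi /dev/sdX
```

The -n limit matters because the device is larger than the image; only the written prefix should be compared.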
Manual PXE Server Setup (Linux)
sudo apt install tftpd-hpa dnsmasq
# Example dnsmasq.conf
port=0
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
sudo systemctl restart tftpd-hpa
sudo systemctl restart dnsmasq
Commands on eLux Device Shell
# Switch to shell (Ctrl+Alt+F1), then:
uname -a
df -h
scout showconfig
scout pullconfig
Summary
Task | Tool
---|---
Build custom image | EIS Tool |
Add/remove software modules | .ulc files + EIS Tool |
Customize settings | EIS Tool + Scout Profile |
Deploy to all clients | PXE boot or USB + Scout |
Manage and monitor at scale | Scout Enterprise Console |
How to Power Up or Power Down multiple instances in OCI using CLI with Ansible
The reason you would want this over Terraform is that Terraform is suited to infrastructure orchestration, not to managing instances once they are up and running.
If you have scaled servers out in OCI, powering servers up and down in bulk is not currently available. If you are doing a migration, or using a staging environment where you only need the machines while building or troubleshooting,
then having a way to power multiple machines up or down at once is convenient.
Install the OCI Ansible collection if you don’t have it already.
Linux/macOS
curl -L https://raw.githubusercontent.com/oracle/oci-ansible-collection/master/scripts/install.sh | bash -s -- --verbose
ansible-galaxy collection list    # lists the installed collections
# /path/to/ansible/collections
Collection Version
------------------- -------
amazon.aws 1.4.0
ansible.builtin 1.3.0
ansible.posix 1.3.0
oracle.oci 2.10.0
Once you have it installed, test that the OCI CLI is working:
oci iam compartment list --all    (this will list out the compartments, including the compartment OCIDs for your instances)
Compartments in OCI are a way to organise infrastructure and control access to those resources. This is useful if, for example, you have contractors coming in and only want them to have access to certain things, not everything.
Now there are two ways you can get your instance names.
Bash Script to get the instances names from OCI
compartment_id="ocid1.compartment.oc1..insert compartment ID here"

# Explicitly define the availability domains based on your provided data
availability_domains=("zcLB:US-CHICAGO-1-AD-1" "zcLB:US-CHICAGO-1-AD-2" "zcLB:US-CHICAGO-1-AD-3")

# For each availability domain, list the instances
for ad in "${availability_domains[@]}"; do
  # List instances within the specific AD and compartment, extracting the "id" field
  oci compute instance list --compartment-id "$compartment_id" --availability-domain "$ad" --query "data[].id" --raw-output > instance_ids.txt

  # Clean up the instance IDs (removing brackets, quotes, etc.)
  sed -i 's/\[//g' instance_ids.txt
  sed -i 's/\]//g' instance_ids.txt
  sed -i 's/"//g' instance_ids.txt
  sed -i 's/,//g' instance_ids.txt

  # Read each instance ID from instance_ids.txt
  while read -r instance_id; do
    # Get instance VNIC information
    instance_info=$(oci compute instance list-vnics --instance-id "$instance_id")

    # Extract the required fields and print them
    display_name=$(echo "$instance_info" | jq -r '.data[0]."display-name"')
    public_ip=$(echo "$instance_info" | jq -r '.data[0]."public-ip"')
    private_ip=$(echo "$instance_info" | jq -r '.data[0]."private-ip"')

    echo "Availability Domain: $ad"
    echo "Display Name: $display_name"
    echo "Public IP: $public_ip"
    echo "Private IP: $private_ip"
    echo "-----------------------------------------"
  done < instance_ids.txt
done
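As an aside, the four sed passes in the script can be collapsed into a single tr filter with the same effect (a sketch; the function name is mine, and it strips brackets, quotes, and commas in one pass):

```shell
# Equivalent of the four sed -i cleanup lines, as one stdin/stdout filter.
clean_ids() {
  tr -d '[]",'
}

printf '[ "ocid1.instance.oc1..aaaa" ]\n' | clean_ids
```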
The output of the script, when piped into a file (here called Instance.names), will look like:
Availability Domain: zcLB:US-CHICAGO-1-AD-1
Display Name: Instance1
Public IP: 192.0.2.1
Private IP: 10.0.0.1
—————————————–
Availability Domain: zcLB:US-CHICAGO-1-AD-1
Display Name: Instance2
Public IP: 192.0.2.2
Private IP: 10.0.0.2
—————————————–
…
You can now grep this file for the name of the servers you want to power on or off quickly
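For example, assuming the output was saved to Instance.names as above, a grep with a few lines of trailing context pulls out the whole record for one server (the helper name is illustrative):

```shell
# Print the matching "Display Name" line plus the 3 lines that follow it
# (Public IP, Private IP, separator) from a saved Instance.names file.
lookup_instance() {
  grep -A3 "Display Name: $1" "$2"
}

# e.g. lookup_instance Instance2 Instance.names
```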
Now we have an Ansible playbook that can power instances on or off by name, using the OCI CLI.
Ansible playbook to power on or off multiple instances via OCI CLI
---
- name: Control OCI Instance Power State based on Instance Names
  hosts: localhost
  vars:
    instance_names_to_stop:
      - instance1
      # Add more instance names here if you wish to stop them...
    instance_names_to_start:
      # List the instance names you wish to start here...
      # Example:
      - Instance2
  tasks:
    - name: Fetch all instance details in the compartment
      command:
        cmd: "oci compute instance list --compartment-id ocid1.compartment.oc1..aaaaaaaak7jc7tn2su2oqzmrbujpr5wmnuucj4mwj4o4g7rqlzemy4yvxrza --output json"
      register: oci_output

    - name: Parse the OCI CLI output
      set_fact:
        instances: "{{ oci_output.stdout | from_json }}"

    - name: Extract relevant information
      set_fact:
        clean_instances: "{{ clean_instances | default([]) + [{ 'name': item['display-name'], 'id': item.id, 'state': item['lifecycle-state'] }] }}"
      loop: "{{ instances.data }}"
      when: "'display-name' in item and 'id' in item and 'lifecycle-state' in item"

    - name: Filter out instances to stop
      set_fact:
        instances_to_stop: "{{ clean_instances | selectattr('name', 'in', instance_names_to_stop) | selectattr('state', 'equalto', 'RUNNING') | list }}"

    - name: Filter out instances to start
      set_fact:
        instances_to_start: "{{ clean_instances | selectattr('name', 'in', instance_names_to_start) | selectattr('state', 'equalto', 'STOPPED') | list }}"

    - name: Display instances to stop (you can remove this debug task later)
      debug:
        var: instances_to_stop

    - name: Display instances to start (you can remove this debug task later)
      debug:
        var: instances_to_start

    - name: Power off instances
      command:
        cmd: "oci compute instance action --action STOP --instance-id {{ item.id }}"
      loop: "{{ instances_to_stop }}"
      when: instances_to_stop | length > 0
      register: state

    # - debug:
    #     var: state

    - name: Power on instances
      command:
        cmd: "oci compute instance action --action START --instance-id {{ item.id }}"
      loop: "{{ instances_to_start }}"
      when: instances_to_start | length > 0
The output will look like
PLAY [Control OCI Instance Power State based on Instance Names] **********************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************
ok: [localhost]
TASK [Fetch all instance details in the compartment] *********************************************************************************************
changed: [localhost]
TASK [Parse the OCI CLI output] ******************************************************************************************************************
ok: [localhost]
TASK [Extract relevant information] **************************************************************************************************************
ok: [localhost] => (item={'display-name': 'Instance1', 'id': 'ocid1.instance.oc1..exampleuniqueID1', 'lifecycle-state': 'STOPPED'})
ok: [localhost] => (item={'display-name': 'Instance2', 'id': 'ocid1.instance.oc1..exampleuniqueID2', 'lifecycle-state': 'RUNNING'})
TASK [Filter out instances to stop] **************************************************************************************************************
ok: [localhost]
TASK [Filter out instances to start] *************************************************************************************************************
ok: [localhost]
TASK [Display instances to stop (you can remove this debug task later)] **************************************************************************
ok: [localhost] => {
    "instances_to_stop": [
        {
            "name": "Instance2",
            "id": "ocid1.instance.oc1..exampleuniqueID2",
            "state": "RUNNING"
        }
    ]
}
TASK [Display instances to start (you can remove this debug task later)] *************************************************************************
ok: [localhost] => {
    "instances_to_start": [
        {
            "name": "Instance1",
            "id": "ocid1.instance.oc1..exampleuniqueID1",
            "state": "STOPPED"
        }
    ]
}
TASK [Power off instances] ***********************************************************************************************************************
changed: [localhost] => (item={'name': 'Instance2', 'id': 'ocid1.instance.oc1..exampleuniqueID2', 'state': 'RUNNING'})
TASK [Power on instances] ************************************************************************************************************************
changed: [localhost] => (item={'name': 'Instance1', 'id': 'ocid1.instance.oc1..exampleuniqueID1', 'state': 'STOPPED'})
PLAY RECAP ****************************************************************************************************************************************
localhost : ok=9 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
How to Configure Redhat 7 & 8 Network Interfaces using Ansible
(bonded nics, gateways, routes, interface names)
How to use this role:
Example file: hosts.dev, hosts.staging, hosts.prod
Cool stuff: if you deployed a virtual machine using the ansible-vmware modules, it sets the hostname of the host to the same shortname as the VM. If you require the FQDN rather than the shortname on the host, I added some code that sets the FQDN as the new_hostname when you define it in your hosts file, as shown below.
Now inside this directory you should see hosts & host_vars, group_vars
Descriptions:
Operational Use:
passed parameters: example: var/testmachine1
#Configure network can be used on physical and virtual-machines
nic_devices:
  - device: ens192
    ip: 192.168.10.100
    nm: 255.255.255.0
    gw: 192.168.10.254
    uuid:
    mac:
Note: you do not need to specify the UUID, though you can if you wish. You do need the MAC if you are doing bonded NICs on the hosts. If you are using physical machines with Satellite deployments, it is probably a good idea to use the MAC of the NIC you want the DHCP request to hit, to avoid accidentally deploying to the wrong host. With physical machines you don’t really have the same forgiveness of snapshots or quickly rebuilding as with a VM. You can do more complicated configurations, as indicated below. You can always email me or contact me via LinkedIn (top right of the blog) if you need assistance.
More Advanced configurations: bonded nics, routes, multiple nics and gateways
bond_devices:
  - device: ens1
    mac: ec:0d:9a:05:3b:f0
    master: mgt
    eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
  - device: ens1d1
    mac: ec:0d:9a:05:3b:f1
    master: mgt
    eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
  - device: mgt
    ip: 10.100.1.2
    nm: 255.255.255.0
    gw: 10.100.1.254
    pr: ens1
  - device: ens6
    mac: ec:0d:9a:05:16:g0
    master: app
  - device: ens6d1
    mac: ec:0d:9a:05:16:g1
    master: app
  - device: app
    ip: 10.101.1.3
    nm: 255.255.255.0
    pr: ens6
routes:
  - device: app
    route:
      - 100.240.136.0/24
      - 100.240.138.0/24
  - device: app
    gw: 10.156.177.1
    route:
      - 10.156.148.0/24
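For reference, a bond definition like the mgt device above typically renders into RHEL ifcfg files along these lines. This is a hedged sketch of standard RHEL 7/8 ifcfg syntax; the role's actual templates may name options differently:

```ini
# /etc/sysconfig/network-scripts/ifcfg-mgt  (the bond master)
DEVICE=mgt
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100 primary=ens1"
IPADDR=10.100.1.2
NETMASK=255.255.255.0
GATEWAY=10.100.1.254
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ens1  (one of the slaves)
DEVICE=ens1
MASTER=mgt
SLAVE=yes
HWADDR=ec:0d:9a:05:3b:f0
BOOTPROTO=none
ONBOOT=yes
```

The active-backup mode and miimon=100 shown here match the bonding driver output in the test run further down.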
Running your playbook:
Example: of ansible/ setup-networkonly.yml
- hosts: all
  gather_facts: no
  roles:
    - role: setup-redhat-interfaces
Command:
ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com'
Test Run:
[root@ansible-home]# ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com' -k
SSH password:
PLAY [all] *************************************************************************************************************************************************************************
TASK [setup-redhat-network : Gather facts] ************************************************************************************************************************************
ok: [testmachine1.nicktailor.com]
TASK [setup-redhat-network : set_fact] ****************************************************************************************************************************************
ok: [testmachine1.nicktailor.com]
TASK [setup-redhat-network : Cleanup network confguration] ********************************************************************************************************************
ok: [testmachine1.nicktailor.com]
TASK [setup-redhat-network : find] ********************************************************************************************************************************************
ok: [testmachine1.nicktailor.com]
TASK [setup-redhat-network : file] ********************************************************************************************************************************************
changed: [testmachine1.nicktailor.com] => (item={u'rusr': True, u'uid': 0, u'rgrp': True, u'xoth': False, u'islnk': False, u'woth': False, u'nlink': 1, u'issock': False, u'mtime': 1530272815.953706, u'gr_name': u'root', u'path': u'/etc/sysconfig/network-scripts/ifcfg-enp0s3', u'xusr': False, u'atime': 1665494779.63, u'inode': 1055173, u'isgid': False, u'size': 285, u'isdir': False, u'ctime': 1530272816.3037066, u'isblk': False, u'wgrp': False, u'xgrp': False, u'isuid': False, u'dev': 64769, u'roth': True, u'isreg': True, u'isfifo': False, u'mode': u'0644', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True})
changed: [testmachine1.nicktailor.com] => (item={u'rusr': True, u'uid': 0, u'rgrp': True, u'xoth': False, u'islnk': False, u'woth': False, u'nlink': 1, u'issock': False, u'mtime': 1530272848.538762, u'gr_name': u'root', u'path': u'/etc/sysconfig/network-scripts/ifcfg-enp0s8', u'xusr': False, u'atime': 1665494779.846, u'inode': 2769059, u'isgid': False, u'size': 203, u'isdir': False, u'ctime': 1530272848.6417623, u'isblk': False, u'wgrp': False, u'xgrp': False, u'isuid': False, u'dev': 64769, u'roth': True, u'isreg': True, u'isfifo': False, u'mode': u'0644', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True})
TASK [setup-redhat-network : file] ********************************************************************************************************************************************
ok: [testmachine1.nicktailor.com]
TASK [setup-redhat-network : Setup bond devices] ******************************************************************************************************************************
changed: [testmachine1.nicktailor.com] => (item={u'device': u'enp0s8', u'mac': u'08:00:27:13:b2:73', u'master': u'mgt'})
changed: [testmachine1.nicktailor.com] => (item={u'device': u'enp0s9', u'mac': u'08:00:27:e8:cf:cd', u'master': u'mgt'})
changed: [testmachine1.nicktailor.com] => (item={u'device': u'mgt', u'ip': u'192.168.10.200', u'nm': u'255.255.255.0', u'gw': u'10.0.2.2', u'pr': u'enp0s8'})
TASK [setup-redhat-network : Setup NIC] ***************************************************************************************************************************************
TASK [setup-redhat-network : Setup static routes] *****************************************************************************************************************************
PLAY RECAP *************************************************************************************************************************************************************************
testmachine1.nicktailor.com : ok=7 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
[root@testmachine1.nicktailor.com]# cat /proc/net/bonding/mgt
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp0s8 (primary_reselect failure)
Currently Active Slave: enp0s8
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: enp0s8
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:13:b2:73
Slave queue ID: 0
Slave Interface: enp0s9
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:e8:cf:cd
Slave queue ID: 0
[root@testmachine1.nicktailor.com]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:63:63:0e brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
valid_lft 86074sec preferred_lft 86074sec
inet6 fe80::a162:1b49:98b7:6c54/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000
link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff
4: enp0s9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000
link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff
5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:05:b4:e8 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ae:db:dc:52:22:f8 brd ff:ff:ff:ff:ff:ff
7: mgt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.200/24 brd 192.168.56.255 scope global mgt
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe13:b273/64 scope link
valid_lft forever preferred_lft forever
How to deploy OpenNebula Frontends via Ansible
Frontend: This role deploys the OpenNebula Cloud platform frontends via Ansible
Ansible Operational Documentation – OpenNebula Frontend Deployments
https://opennebula.io/ – OpenNebula is basically an open-source, in-house cloud platform that lets you deploy and manage virtual machines on a scalable KVM backend. OpenNebula support gives you a document of manual commands to run, and would not provide the open-source playbook they use to deploy frontends.
So I reverse-engineered one for others to use and edit as needed, since nobody runs commands manually anymore. If you are not automating, you are basically a dinosaur.
Note: You will still need to buy your own enterprise license to get access to the apt source. You can find the relevant variables below, and you can plug those credentials into defaults/main.yml before you run the playbook.
This role handles the following when deploying OpenNebula frontends in standalone or HA, using inventory groups to determine how to deploy at scale with Apache.
How to use this role:
Example file: hosts.opennebula
Example: This is how you would list out 3 frontend hosts
[all:children]
frontend_server_primary   # list ON server number 1 here
mysql_servers             # list any server that will require a MySQL install for ON
apache_servers            # list any server that will be running Apache for ON
frontend_HA               # list any additional frontends that will be used in HA for OpenNebula
[frontend_server_primary]
Testmachine1 ansible_host=192.168.86.61
[mysql_servers]
Testmachine1 ansible_host=192.168.86.61
Testmachine2 ansible_host=192.168.86.62
#Testmachine3 ansible_host=192.168.86.63
[apache_servers]
Testmachine1 ansible_host=192.168.86.61
Testmachine2 ansible_host=192.168.86.62
#Testmachine3 ansible_host=192.168.86.63
[frontend_HA]
Testmachine2 ansible_host=192.168.86.62
#Testmachine3 ansible_host=192.168.86.63
Note: For a standalone setup, you simply list the same host under the three groups listed below, and then in your command use --limit='testmachine1' instead of 'testmachine1,testmachine2'. The playbook is smart enough to know what to do from there.
[frontend_server_primary]
Testmachine1 ansible_host=192.168.86.63
[mysql_servers]
Testmachine1 ansible_host=192.168.86.63
[apache_servers]
Testmachine1 ansible_host=192.168.86.63
Special notes: This playbook is designed so you can choose to deploy ON standalone, with classic centralised MySQL (HA), or in OpenNebula HA (with MySQL deployed individually and Raft hook configuration).
We will be deploying the OpenNebula officially supported way.
Although no senior architect would usually choose this approach over classic mysql HA(active/passive), we followed it anyway.
Important things to know:
Group variables for this role are listed below and need to be defined. If you want to change certificates or configure MySQL, it has to be done in these group vars for the role to work. You will need to create the OpenNebula SSL keys yourself for the VNC console to work; they are not provided by this playbook.
Dev/group_vars:
session_memcache: memcache
vnc_proxy_support_wss: true
vnc_proxy_cert_path: /etc/ssl/certs/opennebula.pem
vnc_proxy_key_path: /etc/ssl/private/opennebula.key
vnc_proxy_ipv6: false
vnc_request_password: false
driver: qcow2
#If these are defined, HA setup is pushed.
#It adds VIP hooks for the floating IP and federation server ID.
#These variables can be overridden at the host_var level.
#If a host is listed under the frontend_HA group in your hosts file,
#then these defaults will be used.
leader_interface_name: enp0s8
leader_ip: 192.168.50.132/24
follower_ip: 192.168.50.132/24
follower_interface_name: enp0s8
Mysql_servers
OpenNebula Mysql Installation
mysqlrootuser: root
mysqlnewinstallpassword: Swordfish123
mysql_admin_user: admin
mysql_admin_password: admin
database_to_create: opennebula
Running your playbook:
Example: of opennebula-frontend/ON-frontenddeploy.yml
- hosts: all
  become: True
  become_user: root
  gather_facts: no
  roles:
    - role: opennebula-frontend
Command: Running – playbook to deploy OpenNebula in HA
ansible-playbook -i inventory/dev/hosts ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1,testmachine2'
Command: Running – playbook to deploy OpenNebula in Standalone
ansible-playbook -i inventory/dev/hosts ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1'
Successful run:
brucewayne@KVM-test-box:~/ansible/opennebula-frontend$ ansible-playbook -i inventory/dev/hosts.opennebula2 ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1,testmachine2'
SSH password:
BECOME password[defaults to SSH password]:
PLAY [all] ***************************************************************************************************************************************************************************************************************
TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************
ok: [testmachine2] => (item=curl)
ok: [testmachine1] => (item=curl)
ok: [testmachine1] => (item=gnupg)
ok: [testmachine2] => (item=gnupg)
changed: [testmachine1] => (item=build–essential)
ok: [testmachine1] => (item=dirmngr)
ok: [testmachine1] => (item=ca–certificates)
ok: [testmachine1] => (item=memcached)
changed: [testmachine2] => (item=build–essential)
ok: [testmachine2] => (item=dirmngr)
ok: [testmachine2] => (item=ca–certificates)
ok: [testmachine2] => (item=memcached)
TASK [frontend : import the opennebula apt key] **************************************************************************************************************************************************************************
changed: [testmachine2]
changed: [testmachine1]
TASK [frontend : Show Key list] ******************************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
“keylist.stdout_lines”: [
“/etc/apt/trusted.gpg”,
“——————–“,
“pub rsa2048 2013-06-13 [SC]”,
” 92B7 7188 854C F23E 1634 DA89 592F 7F05 85E1 6EBF”,
“uid [ unknown] OpenNebula Repository <contact@opennebula.org>”,
“sub rsa2048 2013-06-13 [E]”,
“”,
“/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,
“——————————————————“,
“pub rsa4096 2012-05-11 [SC]”,
” 790B C727 7767 219C 42C8 6F93 3B4F E6AC C0B2 1F32″,
“uid [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,
“”,
“/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,
“——————————————————“,
“pub rsa4096 2012-05-11 [SC]”,
” 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092″,
“uid [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,
“”,
“/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,
“——————————————————“,
“pub rsa4096 2018-09-17 [SC]”,
” F6EC B376 2474 EDA9 D21B 7022 8719 20D1 991B C93C”,
“uid [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”
]
}
ok: [testmachine2] => {
“keylist.stdout_lines”: [
“/etc/apt/trusted.gpg”,
“——————–“,
“pub rsa2048 2013-06-13 [SC]”,
” 92B7 7188 854C F23E 1634 DA89 592F 7F05 85E1 6EBF”,
“uid [ unknown] OpenNebula Repository <contact@opennebula.org>”,
“sub rsa2048 2013-06-13 [E]”,
“”,
“/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,
“——————————————————“,
“pub rsa4096 2012-05-11 [SC]”,
” 790B C727 7767 219C 42C8 6F93 3B4F E6AC C0B2 1F32″,
“uid [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,
“”,
“/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,
“——————————————————“,
“pub rsa4096 2012-05-11 [SC]”,
” 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092″,
“uid [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,
“”,
“/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,
“——————————————————“,
“pub rsa4096 2018-09-17 [SC]”,
” F6EC B376 2474 EDA9 D21B 7022 8719 20D1 991B C93C”,
“uid [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”
]
}
TASK [frontend : import the phusionpassenger apt key] ********************************************************************************************************************************************************************
changed: [testmachine2]
changed: [testmachine1]
TASK [frontend : Show Key list] ******************************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"keylist2.stdout_lines": [
"/etc/apt/trusted.gpg",
"--------------------",
"pub rsa2048 2013-06-13 [SC]",
" 92B7 7188 854C F23E 1634 DA89 592F 7F05 85E1 6EBF",
"uid [ unknown] OpenNebula Repository <contact@opennebula.org>",
"sub rsa2048 2013-06-13 [E]",
"",
"pub rsa4096 2013-06-30 [SC]",
" 1637 8A33 A6EF 1676 2922 526E 561F 9B9C AC40 B2F7",
"uid [ unknown] Phusion Automated Software Signing (Used by automated tools to sign software packages) <auto-software-signing@phusion.nl>",
"sub rsa4096 2013-06-30 [E]",
"",
"/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg",
"------------------------------------------------------",
"pub rsa4096 2012-05-11 [SC]",
" 790B C727 7767 219C 42C8 6F93 3B4F E6AC C0B2 1F32",
"uid [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>",
"",
"/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg",
"------------------------------------------------------",
"pub rsa4096 2012-05-11 [SC]",
" 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092",
"uid [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>",
"",
"/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg",
"------------------------------------------------------",
"pub rsa4096 2018-09-17 [SC]",
" F6EC B376 2474 EDA9 D21B 7022 8719 20D1 991B C93C",
"uid [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>"
]
}
ok: [testmachine2] => {
"keylist2.stdout_lines": [
"/etc/apt/trusted.gpg",
"--------------------",
"pub rsa2048 2013-06-13 [SC]",
" 92B7 7188 854C F23E 1634 DA89 592F 7F05 85E1 6EBF",
"uid [ unknown] OpenNebula Repository <contact@opennebula.org>",
"sub rsa2048 2013-06-13 [E]",
"",
"pub rsa4096 2013-06-30 [SC]",
" 1637 8A33 A6EF 1676 2922 526E 561F 9B9C AC40 B2F7",
"uid [ unknown] Phusion Automated Software Signing (Used by automated tools to sign software packages) <auto-software-signing@phusion.nl>",
"sub rsa4096 2013-06-30 [E]",
"",
"/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg",
"------------------------------------------------------",
"pub rsa4096 2012-05-11 [SC]",
" 790B C727 7767 219C 42C8 6F93 3B4F E6AC C0B2 1F32",
"uid [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>",
"",
"/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg",
"------------------------------------------------------",
"pub rsa4096 2012-05-11 [SC]",
" 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092",
"uid [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>",
"",
"/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg",
"------------------------------------------------------",
"pub rsa4096 2018-09-17 [SC]",
" F6EC B376 2474 EDA9 D21B 7022 8719 20D1 991B C93C",
"uid [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>"
]
}
TASK [frontend : add opennebula apt repository] **************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : add bionic phusionpassenger apt repository] *************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
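The role's source isn't shown in this output, but the key-import and repository tasks logged above typically use the `apt_key` and `apt_repository` modules. A hedged sketch (the keyserver and repo line are assumptions based on the standard OpenNebula/Phusion apt setup, not the author's actual role files; the key ID is the last 16 hex digits of the Phusion fingerprint shown in the key list above):

```yaml
# roles/frontend/tasks/main.yml -- hypothetical sketch, not the author's code
- name: import the phusionpassenger apt key
  ansible.builtin.apt_key:
    keyserver: keyserver.ubuntu.com
    id: 561F9B9CAC40B2F7   # Phusion Automated Software Signing key

- name: add opennebula apt repository
  ansible.builtin.apt_repository:
    repo: "deb https://downloads.opennebula.io/repo/6.2/Ubuntu/20.04 stable opennebula"
    state: present
```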
TASK [frontend : wget apt-transport-https ca-certificates] ***************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"install2": {
"changed": true,
"cmd": "apt-get -y install wget apt-transport-https ca-certificates",
"delta": "0:00:02.087119",
"end": "2022-04-06 03:13:42.512860",
"failed": false,
"msg": "",
"rc": 0,
"start": "2022-04-06 03:13:40.425741",
"stderr": "",
"stderr_lines": [],
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nca-certificates is already the newest version (20210119~20.04.2).\nwget is already the newest version (1.20.3-1ubuntu2).\nwget set to manually installed.\nThe following NEW packages will be installed\n apt-transport-https\n0 to upgrade, 1 to newly install, 0 to remove and 1 not to upgrade.\nNeed to get 4,680 B of archives.\nAfter this operation, 162 kB of additional disk space will be used.\nGet:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]\nFetched 4,680 B in 0s (15.1 kB/s)\nSelecting previously unselected package apt-transport-https.\r\n(Reading database ... \r(Reading database ... 5%\r(Reading database ... 10%\r(Reading database ... 15%\r(Reading database ... 20%\r(Reading database ... 25%\r(Reading database ... 30%\r(Reading database ... 35%\r(Reading database ... 40%\r(Reading database ... 45%\r(Reading database ... 50%\r(Reading database ... 55%\r(Reading database ... 60%\r(Reading database ... 65%\r(Reading database ... 70%\r(Reading database ... 75%\r(Reading database ... 80%\r(Reading database ... 85%\r(Reading database ... 90%\r(Reading database ... 95%\r(Reading database ... 100%\r(Reading database ... 199304 files and directories currently installed.)\r\nPreparing to unpack .../apt-transport-https_2.0.6_all.deb ...\r\nUnpacking apt-transport-https (2.0.6) ...\r\nSetting up apt-transport-https (2.0.6) ...",
"stdout_lines": [
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"ca-certificates is already the newest version (20210119~20.04.2).",
"wget is already the newest version (1.20.3-1ubuntu2).",
"wget set to manually installed.",
"The following NEW packages will be installed",
" apt-transport-https",
"0 to upgrade, 1 to newly install, 0 to remove and 1 not to upgrade.",
"Need to get 4,680 B of archives.",
"After this operation, 162 kB of additional disk space will be used.",
"Get:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]",
"Fetched 4,680 B in 0s (15.1 kB/s)",
"Selecting previously unselected package apt-transport-https.",
"(Reading database ... ",
"(Reading database ... 5%",
"(Reading database ... 10%",
"(Reading database ... 15%",
"(Reading database ... 20%",
"(Reading database ... 25%",
"(Reading database ... 30%",
"(Reading database ... 35%",
"(Reading database ... 40%",
"(Reading database ... 45%",
"(Reading database ... 50%",
"(Reading database ... 55%",
"(Reading database ... 60%",
"(Reading database ... 65%",
"(Reading database ... 70%",
"(Reading database ... 75%",
"(Reading database ... 80%",
"(Reading database ... 85%",
"(Reading database ... 90%",
"(Reading database ... 95%",
"(Reading database ... 100%",
"(Reading database ... 199304 files and directories currently installed.)",
"Preparing to unpack .../apt-transport-https_2.0.6_all.deb ...",
"Unpacking apt-transport-https (2.0.6) ...",
"Setting up apt-transport-https (2.0.6) ..."
]
}
}
ok: [testmachine2] => {
"install2": {
"changed": true,
"cmd": "apt-get -y install wget apt-transport-https ca-certificates",
"delta": "0:00:02.710741",
"end": "2022-04-06 03:13:43.155299",
"failed": false,
"msg": "",
"rc": 0,
"start": "2022-04-06 03:13:40.444558",
"stderr": "",
"stderr_lines": [],
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nca-certificates is already the newest version (20210119~20.04.2).\nwget is already the newest version (1.20.3-1ubuntu2).\nwget set to manually installed.\nThe following packages were automatically installed and are no longer required:\n linux-headers-5.11.0-27-generic linux-hwe-5.11-headers-5.11.0-27\n linux-image-5.11.0-27-generic linux-modules-5.11.0-27-generic\n linux-modules-extra-5.11.0-27-generic\nUse 'sudo apt autoremove' to remove them.\nThe following NEW packages will be installed\n apt-transport-https\n0 to upgrade, 1 to newly install, 0 to remove and 37 not to upgrade.\nNeed to get 4,680 B of archives.\nAfter this operation, 162 kB of additional disk space will be used.\nGet:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]\nFetched 4,680 B in 0s (13.2 kB/s)\nSelecting previously unselected package apt-transport-https.\r\n(Reading database ... \r(Reading database ... 5%\r(Reading database ... 10%\r(Reading database ... 15%\r(Reading database ... 20%\r(Reading database ... 25%\r(Reading database ... 30%\r(Reading database ... 35%\r(Reading database ... 40%\r(Reading database ... 45%\r(Reading database ... 50%\r(Reading database ... 55%\r(Reading database ... 60%\r(Reading database ... 65%\r(Reading database ... 70%\r(Reading database ... 75%\r(Reading database ... 80%\r(Reading database ... 85%\r(Reading database ... 90%\r(Reading database ... 95%\r(Reading database ... 100%\r(Reading database ... 202372 files and directories currently installed.)\r\nPreparing to unpack .../apt-transport-https_2.0.6_all.deb ...\r\nUnpacking apt-transport-https (2.0.6) ...\r\nSetting up apt-transport-https (2.0.6) ...",
"stdout_lines": [
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"ca-certificates is already the newest version (20210119~20.04.2).",
"wget is already the newest version (1.20.3-1ubuntu2).",
"wget set to manually installed.",
"The following packages were automatically installed and are no longer required:",
" linux-headers-5.11.0-27-generic linux-hwe-5.11-headers-5.11.0-27",
" linux-image-5.11.0-27-generic linux-modules-5.11.0-27-generic",
" linux-modules-extra-5.11.0-27-generic",
"Use 'sudo apt autoremove' to remove them.",
"The following NEW packages will be installed",
" apt-transport-https",
"0 to upgrade, 1 to newly install, 0 to remove and 37 not to upgrade.",
"Need to get 4,680 B of archives.",
"After this operation, 162 kB of additional disk space will be used.",
"Get:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]",
"Fetched 4,680 B in 0s (13.2 kB/s)",
"Selecting previously unselected package apt-transport-https.",
"(Reading database ... ",
"(Reading database ... 5%",
"(Reading database ... 10%",
"(Reading database ... 15%",
"(Reading database ... 20%",
"(Reading database ... 25%",
"(Reading database ... 30%",
"(Reading database ... 35%",
"(Reading database ... 40%",
"(Reading database ... 45%",
"(Reading database ... 50%",
"(Reading database ... 55%",
"(Reading database ... 60%",
"(Reading database ... 65%",
"(Reading database ... 70%",
"(Reading database ... 75%",
"(Reading database ... 80%",
"(Reading database ... 85%",
"(Reading database ... 90%",
"(Reading database ... 95%",
"(Reading database ... 100%",
"(Reading database ... 202372 files and directories currently installed.)",
"Preparing to unpack .../apt-transport-https_2.0.6_all.deb ...",
"Unpacking apt-transport-https (2.0.6) ...",
"Setting up apt-transport-https (2.0.6) ..."
]
}
}
TASK [frontend : apt-get update] *****************************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : Include mysql task when groupvar mysqlservers is defined] ***********************************************************************************************************************************************
included: /home/brucewayne/ansible/opennebula-frontend/roles/frontend/tasks/mysql.yml for testmachine1, testmachine2
TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************
changed: [testmachine1] => (item=mariadb-server)
changed: [testmachine1] => (item=python3-pymysql)
changed: [testmachine2] => (item=mariadb-server)
changed: [testmachine2] => (item=python3-pymysql)
TASK [frontend : Secure mysql installation] ******************************************************************************************************************************************************************************
[WARNING]: Module did not set no_log for change_root_password
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"mysql_secure": {
"changed": true,
"failed": false,
"meta": {
"change_root_pwd": "True - But not for all of the hosts",
"connected_with_socket?": true,
"disallow_root_remotely": "False - meets the desired state",
"hosts_failed": [
"127.0.0.1",
"::1"
],
"hosts_success": [
"localhost"
],
"mysql_version_above_10_3?": false,
"new_password_correct?": false,
"remove_anonymous_user": "False - meets the desired state",
"remove_test_db": "False - meets the desired state",
"stdout": "Password for user: root @ Hosts: ['localhost'] changed to the desired state"
},
"warnings": [
"Module did not set no_log for change_root_password"
]
}
}
ok: [testmachine2] => {
"mysql_secure": {
"changed": true,
"failed": false,
"meta": {
"change_root_pwd": "True - But not for all of the hosts",
"connected_with_socket?": true,
"disallow_root_remotely": "False - meets the desired state",
"hosts_failed": [
"::1",
"127.0.0.1"
],
"hosts_success": [
"localhost"
],
"mysql_version_above_10_3?": false,
"new_password_correct?": false,
"remove_anonymous_user": "False - meets the desired state",
"remove_test_db": "False - meets the desired state",
"stdout": "Password for user: root @ Hosts: ['localhost'] changed to the desired state"
},
"warnings": [
"Module did not set no_log for change_root_password"
]
}
}
TASK [frontend : Create opennebula database] *****************************************************************************************************************************************************************************
changed: [testmachine2]
changed: [testmachine1]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"database": {
"changed": true,
"db": "opennebula",
"db_list": [
"opennebula"
],
"executed_commands": [
"CREATE DATABASE `opennebula`"
],
"failed": false
}
}
ok: [testmachine2] => {
"database": {
"changed": true,
"db": "opennebula",
"db_list": [
"opennebula"
],
"executed_commands": [
"CREATE DATABASE `opennebula`"
],
"failed": false
}
}
TASK [frontend : create user 'admin' with password 'admin' for '{{opennebula_db}}' and grant all priveleges] *******************************************************************************************************
changed: [testmachine2]
changed: [testmachine1]
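The database and user-creation tasks logged above map onto the `community.mysql` collection. A minimal sketch of what such tasks might look like (module arguments are assumptions; the playbook source is not shown in this log, and the credentials mirror the task name above):

```yaml
# roles/frontend/tasks/mysql.yml -- hypothetical sketch, not the author's code
- name: Create opennebula database
  community.mysql.mysql_db:
    name: "{{ opennebula_db }}"
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock

- name: create user 'admin' with password 'admin' and grant all privileges
  community.mysql.mysql_user:
    name: admin
    password: admin
    priv: "{{ opennebula_db }}.*:ALL"
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock
```

The `executed_commands` in the debug output above (`CREATE DATABASE \`opennebula\``) is consistent with this kind of `mysql_db` task.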
TASK [frontend : install opennebula packages] ****************************************************************************************************************************************************************************
changed: [testmachine1] => (item=opennebula)
changed: [testmachine1] => (item=opennebula-sunstone)
changed: [testmachine1] => (item=opennebula-gate)
changed: [testmachine1] => (item=opennebula-flow)
ok: [testmachine1] => (item=opennebula-rubygems)
changed: [testmachine1] => (item=opennebula-fireedge)
ok: [testmachine1] => (item=gnupg)
changed: [testmachine2] => (item=opennebula)
changed: [testmachine2] => (item=opennebula-sunstone)
changed: [testmachine2] => (item=opennebula-gate)
changed: [testmachine2] => (item=opennebula-flow)
ok: [testmachine2] => (item=opennebula-rubygems)
changed: [testmachine2] => (item=opennebula-fireedge)
ok: [testmachine2] => (item=gnupg)
TASK [frontend : Copy oned.conf to server with updated DB(host,user,pass)] ***********************************************************************************************************************************************
changed: [testmachine2]
changed: [testmachine1]
TASK [frontend : Copy sunstone-server.conf to server configs] ************************************************************************************************************************************************************
changed: [testmachine2]
changed: [testmachine1]
TASK [frontend : Add credentials to Admin] ****************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"authfile.stdout_lines": [
"admin:IgDeMozOups8"
]
}
ok: [testmachine2] => {
"authfile.stdout_lines": [
"admin:Tafwaytofen2"
]
}
TASK [frontend : Set fact for authfile] **********************************************************************************************************************************************************************************
ok: [testmachine1]
ok: [testmachine2]
TASK [frontend : update permissions opennebula permissions] **************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : Include apache configuration] ***************************************************************************************************************************************************************************
included: /home/brucewayne/ansible/opennebula-frontend/roles/frontend/tasks/apache.yml for testmachine1, testmachine2
TASK [frontend : restart systemd-timesyncd] ******************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************
changed: [testmachine1] => (item=apache2-utils)
changed: [testmachine2] => (item=apache2-utils)
changed: [testmachine1] => (item=apache2)
changed: [testmachine1] => (item=libapache2-mod-proxy-msrpc)
changed: [testmachine2] => (item=apache2)
changed: [testmachine2] => (item=libapache2-mod-proxy-msrpc)
changed: [testmachine1] => (item=libapache2-mod-passenger)
changed: [testmachine2] => (item=libapache2-mod-passenger)
TASK [frontend : copy opennebula apache ssl virtualhost config to server] ************************************************************************************************************************************************
changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/apache_confs/opennebula.conf)
changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/apache_confs/opennebula.conf)
TASK [frontend : copy opennebul ssl certificate to servers] **************************************************************************************************************************************************************
changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/certs/opennebula.pem)
changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/certs/opennebula.pem)
TASK [frontend : copy opennebula ssl private key to server] **************************************************************************************************************************************************************
changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/private/opennebula.key)
changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/private/opennebula.key)
TASK [frontend : Enable SSL virtual host for openebula] ******************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : enable opennebula virtualhost] **************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : Restart service httpd, in all cases] ********************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : Enable service httpd and ensure it is not masked] *******************************************************************************************************************************************************
ok: [testmachine1]
ok: [testmachine2]
TASK [frontend : get service facts] **************************************************************************************************************************************************************************************
ok: [testmachine1]
ok: [testmachine2]
TASK [frontend : Check to see if httpd is running] ***********************************************************************************************************************************************************************
ok: [testmachine1] => {
"ansible_facts.services[\"apache2.service\"]": {
"name": "apache2.service",
"source": "systemd",
"state": "running",
"status": "enabled"
}
}
ok: [testmachine2] => {
"ansible_facts.services[\"apache2.service\"]": {
"name": "apache2.service",
"source": "systemd",
"state": "running",
"status": "enabled"
}
}
TASK [frontend : start opennebula] ***************************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"openebula.state": "started"
}
ok: [testmachine2] => {
"openebula.state": "started"
}
TASK [frontend : start opennebula-gate] **********************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"gate.state": "started"
}
ok: [testmachine2] => {
"gate.state": "started"
}
TASK [frontend : start opennebula-flow] **********************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"flow.state": "started"
}
ok: [testmachine2] => {
"flow.state": "started"
}
TASK [frontend : start opennebula-novc] **********************************************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"novnc.state": "started"
}
ok: [testmachine2] => {
"novnc.state": "started"
}
TASK [frontend : start systemd-timesyncd] ********************************************************************************************************************************************************************************
ok: [testmachine1]
ok: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"timesyncd.state": "started"
}
ok: [testmachine2] => {
"timesyncd.state": "started"
}
TASK [frontend : Check if server is listed under frontend_HA] ************************************************************************************************************************************************************
skipping: [testmachine1]
ok: [testmachine2]
TASK [frontend : Stopping OpenNebula on frontend_server_primary] *********************************************************************************************************************************************************
changed: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"stop, group_names": "({'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': 'systemctl stop opennebula', 'start': '2022-04-06 03:19:42.714817', 'end': '2022-04-06 03:19:48.841833', 'delta': '0:00:06.127016', 'msg': '', 'stdout_lines': [], 'stderr_lines': [], 'failed': False}, ['apache_servers', 'frontend_server_primary', 'mysql_servers'])"
}
ok: [testmachine2] => {
"stop, group_names": "({'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': 'systemctl stop opennebula', 'start': '2022-04-06 03:19:42.761875', 'end': '2022-04-06 03:21:14.632276', 'delta': '0:01:31.870401', 'msg': '', 'stdout_lines': [], 'stderr_lines': [], 'failed': False}, ['apache_servers', 'frontend_HA', 'mysql_servers'])"
}
TASK [frontend : delete sqlfile if it exists to create a current one.] ***************************************************************************************************************************************************
changed: [testmachine2]
changed: [testmachine1]
TASK [frontend : make backup of OpenNebula database] *********************************************************************************************************************************************************************
skipping: [testmachine2]
changed: [testmachine1]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"backup": {
"changed": true,
"cmd": "onedb backup -u admin -p admin -d opennebula /var/lib/one/opennebula.sql",
"delta": "0:00:00.406599",
"end": "2022-04-06 03:21:16.346013",
"failed": false,
"msg": "",
"rc": 0,
"start": "2022-04-06 03:21:15.939414",
"stderr": "",
"stderr_lines": [],
"stdout": "MySQL dump stored in /var/lib/one/opennebula.sql\nUse 'onedb restore' or restore the DB using the mysql command:\nmysql -u user -h server -P port db_name < backup_file",
"stdout_lines": [
"MySQL dump stored in /var/lib/one/opennebula.sql",
"Use 'onedb restore' or restore the DB using the mysql command:",
"mysql -u user -h server -P port db_name < backup_file"
]
}
}
ok: [testmachine2] => {
"backup": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}
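The backup step runs only on the primary frontend (it is skipped on testmachine2 above). Expressed as an Ansible task, it might look roughly like this sketch; the command string and the `frontend_server_primary` group name come from this log, but the exact task wording is an assumption:

```yaml
# hypothetical sketch of the backup task, not the author's code
- name: make backup of OpenNebula database
  ansible.builtin.command: onedb backup -u admin -p admin -d opennebula /var/lib/one/opennebula.sql
  register: backup
  when: "'frontend_server_primary' in group_names"
```

Gating on `group_names` is what produces the `skipping:` result on the HA node while the primary reports `changed:`.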
TASK [frontend : Fetch the OpenNebula sql dumpfile from frontend_server_primary] *****************************************************************************************************************************************
skipping: [testmachine2]
changed: [testmachine1 -> testmachine1]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"fetch, group_names": "({'changed': True, 'md5sum': 'a54c58c27e96d29cb99a26a595263164', 'dest': '/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/opennebula.sql', 'remote_md5sum': None, 'checksum': '040e9ae687df46fc26a64f038992bd28e1d7e369', 'remote_checksum': '040e9ae687df46fc26a64f038992bd28e1d7e369', 'failed': False}, ['apache_servers', 'frontend_server_primary', 'mysql_servers'])"
}
ok: [testmachine2] => {
"fetch, group_names": "({'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False'}, ['apache_servers', 'frontend_HA', 'mysql_servers'])"
}
TASK [frontend : Copy the ON-sqldump file from master to the secondary HA nodes] *****************************************************************************************************************************************
skipping: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"sqlcopy": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}
ok: [testmachine2] => {
"sqlcopy": {
"changed": true,
"checksum": "040e9ae687df46fc26a64f038992bd28e1d7e369",
"dest": "/tmp/opennebula.sql",
"diff": [],
"failed": false,
"gid": 0,
"group": "root",
"md5sum": "a54c58c27e96d29cb99a26a595263164",
"mode": "0644",
"owner": "root",
"size": 41546,
"src": "/home/brucewayne/.ansible/tmp/ansible-tmp-1649211677.4405959-9803-36565910128620/source",
"state": "file",
"uid": 0
}
}
TASK [frontend : Fetch the fence_host.sh] ********************************************************************************************************************************************************************************
skipping: [testmachine2]
ok: [testmachine1 -> testmachine1]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"fence_host, group_names": "({'changed': False, 'md5sum': '7bb73d0d0ffce907562d75f6cd779fdc', 'file': '/var/lib/one/remotes/hooks/ft/fence_host.sh', 'dest': '/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/fence_host.sh', 'checksum': 'ef5e59d9a3d6d7a55d554928057bf85f5dea5f1f', 'failed': False}, ['apache_servers', 'frontend_server_primary', 'mysql_servers'])"
}
ok: [testmachine2] => {
"fence_host, group_names": "({'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False'}, ['apache_servers', 'frontend_HA', 'mysql_servers'])"
}
TASK [frontend : Copy the fence.sh to frontend_HA hosts] *****************************************************************************************************************************************************************
skipping: [testmachine1]
ok: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"fence_host": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}
ok: [testmachine2] => {
"fence_host": {
"changed": false,
"checksum": "ef5e59d9a3d6d7a55d554928057bf85f5dea5f1f",
"dest": "/var/lib/one/remotes/hooks/ft/fence_host.sh",
"diff": {
"after": {
"path": "/var/lib/one/remotes/hooks/ft/fence_host.sh"
},
"before": {
"path": "/var/lib/one/remotes/hooks/ft/fence_host.sh"
}
},
"failed": false,
"gid": 9869,
"group": "admin",
"mode": "0750",
"owner": "admin",
"path": "/var/lib/one/remotes/hooks/ft/fence_host.sh",
"size": 4370,
"state": "file",
"uid": 9869
}
}
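The two `fence_host.sh` tasks above follow the common fetch-then-copy pattern for syncing a file from the primary node to the HA nodes via the controller. A sketch under the same group conditions (file paths, mode, and ownership are taken from the debug output above; the relative `buffer/` destination and the task bodies are assumptions):

```yaml
# hypothetical sketch of the fetch/copy pair, not the author's code
- name: Fetch the fence_host.sh
  ansible.builtin.fetch:
    src: /var/lib/one/remotes/hooks/ft/fence_host.sh
    dest: buffer/tmp/fence_host.sh
    flat: yes
  when: "'frontend_server_primary' in group_names"

- name: Copy the fence.sh to frontend_HA hosts
  ansible.builtin.copy:
    src: buffer/tmp/fence_host.sh
    dest: /var/lib/one/remotes/hooks/ft/fence_host.sh
    mode: "0750"
    owner: admin
    group: admin
  when: "'frontend_HA' in group_names"
```

Because the file already matches the checksum on both ends, both tasks report `ok`/`changed: false` in this run.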
TASK [frontend : Create tar of /etc/one/] ********************************************************************************************************************************************************************************
skipping: [testmachine2]
changed: [testmachine1]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"tar": {
"changed": true,
"cmd": "cd /etc/one;tar -cvf /etc/one/one.tar *",
"delta": "0:00:00.016645",
"end": "2022-04-06 03:21:20.659494",
"failed": false,
"msg": "",
"rc": 0,
"start": "2022-04-06 03:21:20.642849",
"stderr": "",
"stderr_lines": [],
“stdout”: “auth/\nauth/certificates/\nauth/x509_auth.conf\nauth/server_x509_auth.conf\nauth/ldap_auth.conf\naz_driver.conf\naz_driver.default\ncli/\ncli/onevmgroup.yaml\ncli/onevnet.yaml\ncli/oneshowback.yaml\ncli/onehook.yaml\ncli/onetemplate.yaml\ncli/onemarketapp.yaml\ncli/onesecgroup.yaml\ncli/oneacct.yaml\ncli/oneacl.yaml\ncli/onemarket.yaml\ncli/onegroup.yaml\ncli/onevm.yaml\ncli/oneflowtemplate.yaml\ncli/onevrouter.yaml\ncli/onezone.yaml\ncli/oneimage.yaml\ncli/onecluster.yaml\ncli/oneuser.yaml\ncli/onevntemplate.yaml\ncli/onevdc.yaml\ncli/onehost.yaml\ncli/onedatastore.yaml\ncli/oneflow.yaml\ndefaultrc\nec2_driver.conf\nec2_driver.default\nfireedge/\nfireedge/provision/\nfireedge/provision/providers.d/\nfireedge/provision/providers.d/vultr_virtual.yaml\nfireedge/provision/providers.d/digitalocean.yaml\nfireedge/provision/providers.d/vultr_metal.yaml\nfireedge/provision/providers.d/equinix.yaml\nfireedge/provision/providers.d/google.yaml\nfireedge/provision/providers.d/aws.yaml\nfireedge/provision/providers.d/dummy.yaml\nfireedge/provision/provision-server.conf\nfireedge/sunstone/\nfireedge/sunstone/user/\nfireedge/sunstone/user/vm-tab.yaml\nfireedge/sunstone/user/vm-template-tab.yaml\nfireedge/sunstone/sunstone-server.conf\nfireedge/sunstone/admin/\nfireedge/sunstone/admin/vm-tab.yaml\nfireedge/sunstone/admin/cluster-tab.yaml\nfireedge/sunstone/admin/vm-template-tab.yaml\nfireedge/sunstone/admin/host-tab.yaml\nfireedge/sunstone/sunstone-views.yaml\nfireedge-server.conf\nhm/\nhm/hmrc\nmonitord.conf\noned.conf\noneflow-server.conf\nonegate-server.conf\nonehem-server.conf\nsched.conf\nsunstone-logos.yaml\nsunstone-server.conf\nsunstone-views/\nsunstone-views/vcenter/\nsunstone-views/vcenter/admin.yaml\nsunstone-views/vcenter/user.yaml\nsunstone-views/vcenter/groupadmin.yaml\nsunstone-views/vcenter/cloud.yaml\nsunstone-views/mixed/\nsunstone-views/mixed/admin.yaml\nsunstone-views/mixed/user.yaml\nsunstone-views/mixed/groupadmin.yaml\nsunstone-views/mixed/cloud.y
aml\nsunstone-views/kvm/\nsunstone-views/kvm/admin.yaml\nsunstone-views/kvm/user.yaml\nsunstone-views/kvm/groupadmin.yaml\nsunstone-views/kvm/cloud.yaml\nsunstone-views.yaml\ntmrc\nvcenter_driver.default\nvmm_exec/\nvmm_exec/vmm_execrc\nvmm_exec/vmm_exec_kvm.conf”,
“stdout_lines”: [
“auth/”,
“auth/certificates/”,
“auth/x509_auth.conf”,
“auth/server_x509_auth.conf”,
“auth/ldap_auth.conf”,
“az_driver.conf”,
“az_driver.default”,
“cli/”,
“cli/onevmgroup.yaml”,
“cli/onevnet.yaml”,
“cli/oneshowback.yaml”,
“cli/onehook.yaml”,
“cli/onetemplate.yaml”,
“cli/onemarketapp.yaml”,
“cli/onesecgroup.yaml”,
“cli/oneacct.yaml”,
“cli/oneacl.yaml”,
“cli/onemarket.yaml”,
“cli/onegroup.yaml”,
“cli/onevm.yaml”,
“cli/oneflowtemplate.yaml”,
“cli/onevrouter.yaml”,
“cli/onezone.yaml”,
“cli/oneimage.yaml”,
“cli/onecluster.yaml”,
“cli/oneuser.yaml”,
“cli/onevntemplate.yaml”,
“cli/onevdc.yaml”,
“cli/onehost.yaml”,
“cli/onedatastore.yaml”,
“cli/oneflow.yaml”,
“defaultrc”,
“ec2_driver.conf”,
“ec2_driver.default”,
“fireedge/”,
“fireedge/provision/”,
“fireedge/provision/providers.d/”,
“fireedge/provision/providers.d/vultr_virtual.yaml”,
“fireedge/provision/providers.d/digitalocean.yaml”,
“fireedge/provision/providers.d/vultr_metal.yaml”,
“fireedge/provision/providers.d/equinix.yaml”,
“fireedge/provision/providers.d/google.yaml”,
“fireedge/provision/providers.d/aws.yaml”,
“fireedge/provision/providers.d/dummy.yaml”,
“fireedge/provision/provision-server.conf”,
“fireedge/sunstone/”,
“fireedge/sunstone/user/”,
“fireedge/sunstone/user/vm-tab.yaml”,
“fireedge/sunstone/user/vm-template-tab.yaml”,
“fireedge/sunstone/sunstone-server.conf”,
“fireedge/sunstone/admin/”,
“fireedge/sunstone/admin/vm-tab.yaml”,
“fireedge/sunstone/admin/cluster-tab.yaml”,
“fireedge/sunstone/admin/vm-template-tab.yaml”,
“fireedge/sunstone/admin/host-tab.yaml”,
“fireedge/sunstone/sunstone-views.yaml”,
“fireedge-server.conf”,
“hm/”,
“hm/hmrc”,
“monitord.conf”,
“oned.conf”,
“oneflow-server.conf”,
“onegate-server.conf”,
“onehem-server.conf”,
“sched.conf”,
“sunstone-logos.yaml”,
“sunstone-server.conf”,
“sunstone-views/”,
“sunstone-views/vcenter/”,
“sunstone-views/vcenter/admin.yaml”,
“sunstone-views/vcenter/user.yaml”,
“sunstone-views/vcenter/groupadmin.yaml”,
“sunstone-views/vcenter/cloud.yaml”,
“sunstone-views/mixed/”,
“sunstone-views/mixed/admin.yaml”,
“sunstone-views/mixed/user.yaml”,
“sunstone-views/mixed/groupadmin.yaml”,
“sunstone-views/mixed/cloud.yaml”,
“sunstone-views/kvm/”,
“sunstone-views/kvm/admin.yaml”,
“sunstone-views/kvm/user.yaml”,
“sunstone-views/kvm/groupadmin.yaml”,
“sunstone-views/kvm/cloud.yaml”,
“sunstone-views.yaml”,
“tmrc”,
“vcenter_driver.default”,
“vmm_exec/”,
“vmm_exec/vmm_execrc”,
“vmm_exec/vmm_exec_kvm.conf”
]
}
}
ok: [testmachine2] => {
"tar": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}
TASK [frontend : Fetch the one.tar] **************************************************************************************************************************************************************************************
skipping: [testmachine2]
changed: [testmachine1 -> testmachine1]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"fence_host, group_names": "({'changed': True, 'md5sum': 'acec4258dbbf2bde83d12f3eb29824a7', 'dest': '/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/one.tar', 'remote_md5sum': None, 'checksum': '2da21a3124f4eb5a78c0126e9791c8d8c9c5c770', 'remote_checksum': '2da21a3124f4eb5a78c0126e9791c8d8c9c5c770', 'failed': False}, ['apache_servers', 'frontend_server_primary', 'mysql_servers'])"
}
ok: [testmachine2] => {
"fence_host, group_names": "({'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False'}, ['apache_servers', 'frontend_HA', 'mysql_servers'])"
}
TASK [frontend : Copy the one.tar to frontend_HA hosts] ******************************************************************************************************************************************************************
skipping: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"fence_host": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}
ok: [testmachine2] => {
"fence_host": {
"changed": true,
"checksum": "2da21a3124f4eb5a78c0126e9791c8d8c9c5c770",
"dest": "/etc/one/one.tar",
"diff": [],
"failed": false,
"gid": 0,
"group": "root",
"md5sum": "acec4258dbbf2bde83d12f3eb29824a7",
"mode": "0644",
"owner": "root",
"size": 542720,
"src": "/home/brucewayne/.ansible/tmp/ansible-tmp-1649211681.6244745-9943-99432484341658/source",
"state": "file",
"uid": 0
}
}
TASK [frontend : untar one.tar in /etc/one on the frontend_HA hosts] *****************************************************************************************************************************************************
skipping: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"untar": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}
ok: [testmachine2] => {
"untar": {
"changed": true,
"cmd": "cd /etc/one;tar -xvf /etc/one/one.tar",
"delta": "0:00:00.018409",
"end": "2022-04-06 03:21:23.162427",
"failed": false,
"msg": "",
"rc": 0,
"start": "2022-04-06 03:21:23.144018",
"stderr": "",
"stderr_lines": [],
“stdout”: “auth/\nauth/certificates/\nauth/x509_auth.conf\nauth/server_x509_auth.conf\nauth/ldap_auth.conf\naz_driver.conf\naz_driver.default\ncli/\ncli/onevmgroup.yaml\ncli/onevnet.yaml\ncli/oneshowback.yaml\ncli/onehook.yaml\ncli/onetemplate.yaml\ncli/onemarketapp.yaml\ncli/onesecgroup.yaml\ncli/oneacct.yaml\ncli/oneacl.yaml\ncli/onemarket.yaml\ncli/onegroup.yaml\ncli/onevm.yaml\ncli/oneflowtemplate.yaml\ncli/onevrouter.yaml\ncli/onezone.yaml\ncli/oneimage.yaml\ncli/onecluster.yaml\ncli/oneuser.yaml\ncli/onevntemplate.yaml\ncli/onevdc.yaml\ncli/onehost.yaml\ncli/onedatastore.yaml\ncli/oneflow.yaml\ndefaultrc\nec2_driver.conf\nec2_driver.default\nfireedge/\nfireedge/provision/\nfireedge/provision/providers.d/\nfireedge/provision/providers.d/vultr_virtual.yaml\nfireedge/provision/providers.d/digitalocean.yaml\nfireedge/provision/providers.d/vultr_metal.yaml\nfireedge/provision/providers.d/equinix.yaml\nfireedge/provision/providers.d/google.yaml\nfireedge/provision/providers.d/aws.yaml\nfireedge/provision/providers.d/dummy.yaml\nfireedge/provision/provision-server.conf\nfireedge/sunstone/\nfireedge/sunstone/user/\nfireedge/sunstone/user/vm-tab.yaml\nfireedge/sunstone/user/vm-template-tab.yaml\nfireedge/sunstone/sunstone-server.conf\nfireedge/sunstone/admin/\nfireedge/sunstone/admin/vm-tab.yaml\nfireedge/sunstone/admin/cluster-tab.yaml\nfireedge/sunstone/admin/vm-template-tab.yaml\nfireedge/sunstone/admin/host-tab.yaml\nfireedge/sunstone/sunstone-views.yaml\nfireedge-server.conf\nhm/\nhm/hmrc\nmonitord.conf\noned.conf\noneflow-server.conf\nonegate-server.conf\nonehem-server.conf\nsched.conf\nsunstone-logos.yaml\nsunstone-server.conf\nsunstone-views/\nsunstone-views/vcenter/\nsunstone-views/vcenter/admin.yaml\nsunstone-views/vcenter/user.yaml\nsunstone-views/vcenter/groupadmin.yaml\nsunstone-views/vcenter/cloud.yaml\nsunstone-views/mixed/\nsunstone-views/mixed/admin.yaml\nsunstone-views/mixed/user.yaml\nsunstone-views/mixed/groupadmin.yaml\nsunstone-views/mixed/cloud.y
aml\nsunstone-views/kvm/\nsunstone-views/kvm/admin.yaml\nsunstone-views/kvm/user.yaml\nsunstone-views/kvm/groupadmin.yaml\nsunstone-views/kvm/cloud.yaml\nsunstone-views.yaml\ntmrc\nvcenter_driver.default\nvmm_exec/\nvmm_exec/vmm_execrc\nvmm_exec/vmm_exec_kvm.conf”,
“stdout_lines”: [
“auth/”,
“auth/certificates/”,
“auth/x509_auth.conf”,
“auth/server_x509_auth.conf”,
“auth/ldap_auth.conf”,
“az_driver.conf”,
“az_driver.default”,
“cli/”,
“cli/onevmgroup.yaml”,
“cli/onevnet.yaml”,
“cli/oneshowback.yaml”,
“cli/onehook.yaml”,
“cli/onetemplate.yaml”,
“cli/onemarketapp.yaml”,
“cli/onesecgroup.yaml”,
“cli/oneacct.yaml”,
“cli/oneacl.yaml”,
“cli/onemarket.yaml”,
“cli/onegroup.yaml”,
“cli/onevm.yaml”,
“cli/oneflowtemplate.yaml”,
“cli/onevrouter.yaml”,
“cli/onezone.yaml”,
“cli/oneimage.yaml”,
“cli/onecluster.yaml”,
“cli/oneuser.yaml”,
“cli/onevntemplate.yaml”,
“cli/onevdc.yaml”,
“cli/onehost.yaml”,
“cli/onedatastore.yaml”,
“cli/oneflow.yaml”,
“defaultrc”,
“ec2_driver.conf”,
“ec2_driver.default”,
“fireedge/”,
“fireedge/provision/”,
“fireedge/provision/providers.d/”,
“fireedge/provision/providers.d/vultr_virtual.yaml”,
“fireedge/provision/providers.d/digitalocean.yaml”,
“fireedge/provision/providers.d/vultr_metal.yaml”,
“fireedge/provision/providers.d/equinix.yaml”,
“fireedge/provision/providers.d/google.yaml”,
“fireedge/provision/providers.d/aws.yaml”,
“fireedge/provision/providers.d/dummy.yaml”,
“fireedge/provision/provision-server.conf”,
“fireedge/sunstone/”,
“fireedge/sunstone/user/”,
“fireedge/sunstone/user/vm-tab.yaml”,
“fireedge/sunstone/user/vm-template-tab.yaml”,
“fireedge/sunstone/sunstone-server.conf”,
“fireedge/sunstone/admin/”,
“fireedge/sunstone/admin/vm-tab.yaml”,
“fireedge/sunstone/admin/cluster-tab.yaml”,
“fireedge/sunstone/admin/vm-template-tab.yaml”,
“fireedge/sunstone/admin/host-tab.yaml”,
“fireedge/sunstone/sunstone-views.yaml”,
“fireedge-server.conf”,
“hm/”,
“hm/hmrc”,
“monitord.conf”,
“oned.conf”,
“oneflow-server.conf”,
“onegate-server.conf”,
“onehem-server.conf”,
“sched.conf”,
“sunstone-logos.yaml”,
“sunstone-server.conf”,
“sunstone-views/”,
“sunstone-views/vcenter/”,
“sunstone-views/vcenter/admin.yaml”,
“sunstone-views/vcenter/user.yaml”,
“sunstone-views/vcenter/groupadmin.yaml”,
“sunstone-views/vcenter/cloud.yaml”,
“sunstone-views/mixed/”,
“sunstone-views/mixed/admin.yaml”,
“sunstone-views/mixed/user.yaml”,
“sunstone-views/mixed/groupadmin.yaml”,
“sunstone-views/mixed/cloud.yaml”,
“sunstone-views/kvm/”,
“sunstone-views/kvm/admin.yaml”,
“sunstone-views/kvm/user.yaml”,
“sunstone-views/kvm/groupadmin.yaml”,
“sunstone-views/kvm/cloud.yaml”,
“sunstone-views.yaml”,
“tmrc”,
“vcenter_driver.default”,
“vmm_exec/”,
“vmm_exec/vmm_execrc”,
“vmm_exec/vmm_exec_kvm.conf”
]
}
}
TASK [frontend : updates the rafthook and federation configurations for frontend_HA secondary servers] ******************************************************************************************************************
skipping: [testmachine1]
changed: [testmachine2]
TASK [frontend : start OpenNebula] ***************************************************************************************************************************************************************************************
skipping: [testmachine2]
changed: [testmachine1]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"group_names": [
"apache_servers",
"frontend_server_primary",
"mysql_servers"
]
}
ok: [testmachine2] => {
"group_names": [
"apache_servers",
"frontend_HA",
"mysql_servers"
]
}
TASK [frontend : finding frontend_HA list] *******************************************************************************************************************************************************************************
skipping: [testmachine1] => (item=apache_servers)
skipping: [testmachine1] => (item=frontend_server_primary)
skipping: [testmachine1] => (item=mysql_servers)
skipping: [testmachine2] => (item=apache_servers)
ok: [testmachine2] => (item=frontend_HA)
skipping: [testmachine2] => (item=mysql_servers)
TASK [frontend : Add Secondary Node frontends to the zone] ***************************************************************************************************************************************************************
skipping: [testmachine2] => (item=testmachine2)
changed: [testmachine1] => (item=testmachine2)
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"addzone, group_names": "({'results': [{'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': 'onezone server-add 0 --name testmachine2 --rpc http://192.168.86.65:2633/RPC2', 'start': '2022-04-06 03:21:33.920788', 'end': '2022-04-06 03:21:34.174098', 'delta': '0:00:00.253310', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'onezone server-add 0 --name testmachine2 --rpc http://192.168.86.65:2633/RPC2', '_uses_shell': True, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'item': 'testmachine2', 'ansible_loop_var': 'item'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}, ['apache_servers', 'frontend_server_primary', 'mysql_servers'])"
}
ok: [testmachine2] => {
"addzone, group_names": "({'results': [{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'testmachine2', 'ansible_loop_var': 'item'}], 'skipped': True, 'msg': 'All items skipped', 'changed': False}, ['apache_servers', 'frontend_HA', 'mysql_servers'])"
}
TASK [frontend : Restore database to secondary nodes] ********************************************************************************************************************************************************************
skipping: [testmachine1]
changed: [testmachine2]
TASK [frontend : debug] **************************************************************************************************************************************************************************************************
ok: [testmachine1] => {
"restoredb": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}
ok: [testmachine2] => {
"restoredb": {
"changed": true,
"cmd": "onedb restore -f -S localhost -u admin -p admin -d opennebula /tmp/opennebula.sql",
"delta": "0:00:00.988908",
"end": "2022-04-06 03:21:35.749776",
"failed": false,
"msg": "",
"rc": 0,
"start": "2022-04-06 03:21:34.760868",
"stderr": "",
"stderr_lines": [],
"stdout": "MySQL DB opennebula at localhost restored.",
"stdout_lines": [
"MySQL DB opennebula at localhost restored."
]
}
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
testmachine1 : ok=70 changed=38 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
testmachine2 : ok=71 changed=37 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
How to pass an API key with Ansible
https://chronosphere.io/ – Third Party Cloud Monitoring Solution
Chronocollector: https://github.com/Perfect10NickTailor/chronocollector
This role deploys the chronocollector management service, which sends its data to domain.chronosphere.io. For those of you who don't know what it is: it's basically a cloud monitoring tool that scrapes data from your instances, and you can then create dashboards or even export the data to Prometheus to make it look pretty and easy to read. You will likely pay for a subscription, and they will give you a subdomain which becomes your gateway address (domain.chronosphere.io).
Special note: You then need to deploy node_exporter to the hosts you want scraped. That is a separate playbook, and stupid easy.
#nowthatsjustfunny: It's debatable how to pass {{ api_keys }} in a scalable and secure way. A lot of people create an "ansible vault encrypted variable" so that when they push their code to their git repos, the {{ api_key }} isn't exposed to someone simply glancing at the code. The issue with this approach is that now you have to remember a vault password to pass to Ansible so it can decrypt the {{ api_key }} in order for the playbook to work when you run it. (LAME)
#nowthatsjustcool: Just for the purposes of this post, and for fun, I wrote it so that you can simply pass the {{ api_key }} at runtime. This way, instead of being prompted for the vault pass, you are prompted for the api_key to pass as a variable when you run the book. This gets rid of the need to set up an encrypted variable in your code entirely. Everyone has their own way of doing things, but I tend to think outside the box, so it's always more fun to be different in how you think.
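For contrast, the vault route this post is poking fun at usually looks something like this in a vars file (the variable name is illustrative and the ciphertext is omitted; you would generate the real thing with ansible-vault):

```yaml
# group_vars/all/vault.yml -- generated with something like:
#   ansible-vault encrypt_string 'the-real-key' --name 'api_key'
api_key: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          # ...ciphertext omitted for the example...
```

Every run then needs --ask-vault-pass or a vault password file, which is exactly the extra step the runtime-prompt approach avoids.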
Ansible Operational Documentation
How to use this role:
Example file: hosts.dev or hosts.staging
Running your playbook:
Example of ansible/chronocollector.yml:
- hosts: all
  gather_facts: no
  vars_prompt:
    - name: api_key
      prompt: Enter the API key
  roles:
    - role: chronocollector
Command:
ansible-playbook -i inventory/dev/hosts.dev chronocollector.yml -u nickadmin -Kkb --ask-become-pass --limit='testmachine3'
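Inside the role, the prompted {{ api_key }} then just gets dropped into the collector config. A minimal sketch of what that task could look like (the exact key name inside config.yml is an assumption here; check your chronocollector config):

```yaml
- name: Ensure API key is present in config file
  lineinfile:
    path: /etc/chronocollector/config.yml
    regexp: '^\s*api-token:'            # assumed key name in config.yml
    line: '  api-token: {{ api_key }}'
  become: true
  no_log: true   # keep the key out of ansible's own output
```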
Successful run:
Notice: It asks you for the API key at runtime.
ntailor@jumphost:~/ansible2$ ansible-playbook -i ansible/inventory/dev/hosts.dev chronocollector.yml -u nicktadmin -Kkb --ask-become-pass --limit='testmachine3'
SSH password:
BECOME password[defaults to SSH password]:
Enter the API key:
PLAY [all] ***************************************************************************************************************************************************************************************************************
TASK [chronocollector : download node collector] *************************************************************************************************************************************************************************
ok: [testmachine3]
TASK [chronocollector : move collector to /usr/local/bin] ****************************************************************************************************************************************************************
ok: [testmachine3]
TASK [chronocollector : mkdir directory /etc/chronocollector] ************************************************************************************************************************************************************
ok: [testmachine3]
TASK [chronocollector : Copy default config.yml to /etc/chronocollector/] ************************************************************************************************************************************************
ok: [testmachine3]
TASK [chronocollector : Touch again the same file, but do not change times this makes the task idempotent] ***************************************************************************************************************
changed: [testmachine3]
TASK [chronocollector : Ensure API key is present in config file] ********************************************************************************************************************************************************
changed: [testmachine3]
TASK [chronocollector : Change file ownership, group and permissions apitoken file to secure it from prying eyes other than root] ****************************************************************************************
changed: [testmachine3]
TASK [chronocollector : Check that the service file /etc/systemd/system/collector.service exists] ************************************************************************************************************************
ok: [testmachine3]
TASK [chronocollector : Include add systemd task if service file does not exist] *****************************************************************************************************************************************
included: ansible/roles/chronocollector/tasks/systemd.yml for testmachine3
TASK [chronocollector : Create startup file for collector in systemd] ****************************************************************************************************************************************************
changed: [testmachine3]
TASK [chronocollector : Create systemd collector.service] ****************************************************************************************************************************************************************
changed: [testmachine3]
TASK [chronocollector : check whether custom line exists] ****************************************************************************************************************************************************************
changed: [testmachine3]
TASK [chronocollector : Start Collector Service via systemd] *************************************************************************************************************************************************************
changed: [testmachine3]
TASK [chronocollector : Show status of collector from systemd] ***********************************************************************************************************************************************************
changed: [testmachine3]
TASK [chronocollector : debug] *******************************************************************************************************************************************************************************************
ok: [testmachine3] => {
"status.stdout": " Active: failed (Result: exit-code) since Thu 2022-05-19 10:31:49 BST; 315ms ago"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
testmachine3 : ok=15 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
How to deploy Netplan with Ansible
Ansible-Netplan: – https://github.com/Perfect10NickTailor/ansible-netplan
Netplan.io: what is it? Basically, Ubuntu's way of deploying network configurations in a scalable manner via YAML files.
How to use this role:
Example file: hosts.dev, hosts.staging, hosts.prod
Note: If there is no group, simply list the server outside any grouping; the --limit flag will pick it up.
Descriptions:
Operational Use:
Okay, now here is where VSC is handy. You want to connect your Visual Studio Code to the management server under your user. I have provided a link which shows you how to set up your keys and get VSC working with it.
Note: You don't have to use VSC; you can use good old nano or vim, but it's a pain. Up to you.
https://medium.com/@sujaypillai/connect-to-your-remote-servers-from-visual-studio-code-eb5a5875e348
ansible/inventory/dev/host_var$ testmachine1 (with Bonding)
---
# testmachine1 netplan config
# This is the network for testmachine1 with network bonding
netplan_configuration:
  network:
    version: 2
    bonds:
      bond0:
        interfaces:
          - ens1f0
          - ens1f1
        parameters:
          mode: balance-rr
    ethernets:
      eno1:
        dhcp4: false
      eno2:
        dhcp4: false
      ens1f0: {}
      ens1f1: {}
    vlans:
      vlan.180:
        id: 180
        link: bond0
        # dhcp4: false
        # dhcp6: false
      vlan.3200:
        id: 3200
        link: bond0
        # dhcp4: false
        # dhcp6: false
      vlan.3300:
        id: 3300
        link: bond0
        # dhcp4: false
        # dhcp6: false
    bridges:
      br200:
        interfaces: [ vlan.200 ]
        addresses: [ 192.168.50.9/24 ]
        gateway4: 192.168.50.1
        nameservers:
          addresses: [ 8.8.8.8, 8.8.4.8 ]
          search: [ nicktailor.com ]
        dhcp4: false
        dhcp6: false
      br3000:
        interfaces: [ vlan.3000 ]
        dhcp4: false
        dhcp6: false
      br3200:
        interfaces: [ vlan.3200 ]
        dhcp4: false
        dhcp6: false
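Before pushing a host_var like the one above through the role, it's worth a pre-flight check that the YAML actually renders. netplan generate parses and renders the config without bringing interfaces up or down. A sketch of a validation task you could add (this is not part of the role as shipped):

```yaml
- name: Validate the rendered netplan config without applying it
  command: netplan generate
  register: netplan_check
  changed_when: false
  become: true

- name: Show any parser complaints
  debug:
    var: netplan_check.stderr_lines
```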
Example files:
ansible/inventory/dev/host_var$ testmachine1 (without Bonding)
Example YAML block:
#testmachine1
netplan_configuration:
  network:
    version: 2
    renderer: networkd
    ethernets:
      eno1:
        dhcp4: false
        dhcp6: false
      eno2:
        dhcp4: false
        dhcp6: false
    bridges:
      br0:
        interfaces: [ eno1 ]
        dhcp4: false
        dhcp6: false
      br1:
        interfaces: [ eno2 ]
        dhcp4: false
        dhcp6: false
      br1110:
        interfaces: [ vlan1110 ]
        dhcp4: false
        dhcp6: false
        addresses: [ 172.16.52.10/26 ]
        gateway4: 172.17.52.1
        nameservers:
          addresses: [ 8.8.8.8, 8.8.4.8 ]
      br600:
        interfaces: [ vlan600 ]
        dhcp4: false
        dhcp6: false
        addresses: [ 192.168.0.34/24 ]
      br800:
        interfaces: [ vlan800 ]
        dhcp4: false
        dhcp6: false
      br802:
        interfaces: [ vlan802 ]
        dhcp4: false
        dhcp6: false
      br801:
        interfaces: [ vlan801 ]
        dhcp4: false
        dhcp6: false
    vlans:
      vlan600:
        id: 600
        link: br0
        dhcp4: false
        dhcp6: false
      vlan800:
        id: 800
        link: br1
        dhcp4: false
        dhcp6: false
      vlan801:
        id: 801
        link: br1
        dhcp4: false
        dhcp6: false
      vlan802:
        id: 802
        link: br1
        dhcp4: false
        dhcp6: false
Example of ansible/deploynetplan.yml:
- hosts: all
  gather_facts: yes
  any_errors_fatal: true
  roles:
    - role: ansible-netplan
      netplan_enabled: true
ansible-playbook -i inventory/dev/hosts deploynetplan.yml -u nickadmin -Kkb --ask-become-pass --limit='testmachine1'
Successful example run with bonding:
ntailor@KVM-test-box:~/ansible$ ansible-playbook -i inventory/dev/hosts deploynetplan.yml -u nickadmin -Kkb --ask-become-pass --limit='testmachine1'
SSH password:
BECOME password[defaults to SSH password]:
PLAY [all] *********************************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************
ok: [testmachine1]
TASK [ansible-netplan : Install netplan] ***************************************************************************************************************
ok: [testmachine1]
TASK [ansible-netplan : Backup existing configurations before removing live ones] **********************************************************************
changed: [testmachine1]
TASK [ansible-netplan : copy 00-install* netplan existing file to /etc/netplan/backups] ****************************************************************
changed: [testmachine1]
TASK [ansible-netplan : keep only 7 days of backups of previous network config /etc/netplan/backups] ***************************************************
changed: [testmachine1]
TASK [ansible-netplan : Capturing Existing Configurations] *********************************************************************************************
skipping: [testmachine1]
TASK [ansible-netplan : debug] *************************************************************************************************************************
skipping: [testmachine1]
TASK [ansible-netplan : Removing Existing Configurations] **********************************************************************************************
skipping: [testmachine1]
TASK [ansible-netplan : Configuring Netplan] ***********************************************************************************************************
ok: [testmachine1]
TASK [ansible-netplan : netplan apply] *****************************************************************************************************************
changed: [testmachine1]
TASK [ansible-netplan : debug] *************************************************************************************************************************
ok: [testmachine1] => {
"netplanapply": {
"changed": true,
"cmd": "netplan apply",
"delta": "0:00:00.601112",
"end": "2022-01-31 16:43:45.295708",
"failed": false,
"msg": "",
"rc": 0,
"start": "2022-01-31 16:43:44.694596",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
}
TASK [ansible-netplan : Show vlans that are up or down] ************************************************************************************************
changed: [testmachine1]
TASK [ansible-netplan : debug] *************************************************************************************************************************
ok: [testmachine1] => {
"vlan.stdout_lines": [
"14: vlan.180@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000",
"15: vlan.3300@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000"
]
}
TASK [ansible-netplan : show bridge details] ***********************************************************************************************************
changed: [testmachine1]
TASK [ansible-netplan : debug] *************************************************************************************************************************
ok: [testmachine1] => {
"bridges.stdout_lines": [
"bridge name\tbridge id\t\tSTP enabled\tinterfaces",
"br180\t\t8000.000000000000\tyes\t\t",
"br3200\t\t8000.000000000000\tyes\t\t",
"br3300\t\t8000.000000000000\tyes\t\t"
]
}
PLAY RECAP *********************************************************************************************************************************************************************************************
testmachine1 : ok=12 changed=6 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
Push your inventory/dev/host_var/testmachine1 code to Git:
Once you have confirmed the deploy worked, by logging on to the client host and checking that everything looks good, you will want to push your code to your Git repo. Since you were able to clone the repo, you should also be able to push to it.
Git add commands
Git commit commands
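The Git side is just the usual add/commit/push cycle. Here is a throwaway sketch you can run anywhere (the path, commit message, and branch are examples only; the final push of course needs your real remote):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo

# Stage and commit the host_var file you just tested
mkdir -p inventory/dev/host_var
echo "network: {}" > inventory/dev/host_var/testmachine1
git add inventory/dev/host_var/testmachine1
git -c user.email=you@example.com -c user.name=you \
    commit -q -m "Add netplan config for testmachine1"
git log --oneline | head -1

# Then push to your remote, e.g.:
# git push origin master
```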
How to call a JSON REST API using Ansible
A very useful thing to understand is REST APIs and how to call them, as a lot of organisations have these and want to integrate them into automation; a popular method is HTTP.
The calls themselves are very simple: { GET, POST, PUT, DELETE, PATCH }
For the sake of this post, I'm going to use the Commvault public APIs: https://api.commvault.com/
You will need two things:
- The API endpoint, which is usually an HTTP URL
Example:
- The raw JSON body of the API
Example:
{ "csFailoverConfigInfo": { "configStatus": 0, "isAutomaticFailoverEnabled": false } }
Now keep in mind, if you are using an API that requires a login, you will need to store the auth token and pass it to the final task for the API call to work as intended. You can look at one of my other posts under VMware, where I used an HTTP login to handle later tasks, as a reference.
You can call these preliminary tasks as includes to store the token.
It will look something like this before it gets to the API task. You could also do it all in one playbook if you wanted to, but for the purposes of this post I'm just giving you the high level.
- name: Login task
  include_role:
    name: commvault_login
    tasks_from: login.yml

- name: Set fact for authtoken
  set_fact:
    authtoken: "{{ login_authtoken }}"
  delegate_to: localhost
Now in order for you to pass json api to ansible. You will need to convert the json raw body into yaml format. You can use visual studio code plugins or a site like https://json2yaml.com/
So if we are to use the above raw json example it would look like this
csFailoverConfigInfo:
configStatus: 0
isAutomaticFailoverEnabled: false
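If you would rather not paste your payload into a website, you can sanity-check the conversion locally. A small Python sketch (stdlib only; the printed mapping is exactly what the YAML above encodes):

```python
import json

# The raw JSON body from the Commvault example above
raw = '{ "csFailoverConfigInfo": { "configStatus": 0, "isAutomaticFailoverEnabled": false } }'
data = json.loads(raw)

# JSON objects map to YAML mappings; JSON false becomes the YAML boolean false
print(data)
```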
Now we want to pass this information to the task in the form of a variable. A really cool thing with Ansible and this type of action is that you can create a variable name and simply nest the converted YAML body right below the variable. You can pass this as extra-vars, or create a group variable with the same name and use that.
For those of you who use Tower, passing extra-vars to test something can be a pain, since it doesn't allow you to change the passed vars and rerun the previous run; you have to start all over. So I prefer the command-line way, as it's easier to stay agile.
disable_api_body:
csFailoverConfigInfo:
configStatus: 0
isAutomaticFailoverEnabled: false
Now we want Ansible to use the REST API. Create a task that runs after the login task has stored the token as a fact. In our case this call will be a POST: it sends the headers and body to the URL, which disables Commvault Live Sync (essentially Commvault's failover redundancy for the backup server itself).
- name: Disable Commvault livesync
  uri:
    url: http://{{ commvault_primary }}/webconsole/api/v2/CommServ/Failover
    method: POST
    body_format: json
    body: "{{ disable_api_body }}"
    return_content: true
    headers:
      Accept: application/json
      Content-Type: application/json
      Authtoken: "{{ login_authtoken }}"
    status_code: 200
    validate_certs: false
  register: disable_livesync
  retries: "4"
  delay: "10"
  delegate_to: localhost

- debug:
    var: disable_livesync
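For clarity, here is what that uri task builds under the hood, sketched in plain Python with the standard library. The host name and token are placeholders, and nothing is actually sent; we only construct the request:

```python
import json
import urllib.request

commvault_primary = "commserve.example.com"  # placeholder host
disable_api_body = {"csFailoverConfigInfo": {"configStatus": 0,
                                             "isAutomaticFailoverEnabled": False}}

req = urllib.request.Request(
    url=f"http://{commvault_primary}/webconsole/api/v2/CommServ/Failover",
    data=json.dumps(disable_api_body).encode("utf-8"),
    headers={
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authtoken": "placeholder-token",  # would come from the login task
    },
    method="POST",
)
# urllib.request.urlopen(req) would fire the call for real
print(req.get_method(), req.full_url)
```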
When you run the playbook, and you have an active failover set up correctly with Commvault, you should see Live Sync in Command Center under the control panel. If you click on it, you should see that it is either checked or unchecked.
How to Deploy LVMs with Ansible
Provisioning-LVM-Filesystems:
This role is designed to use the ansible-merge-vars module: an Ansible plugin that merges all variables in context with a certain suffix (lists or dicts only) and creates a new variable containing the result of the merge. It is an action plugin, which is basically an Ansible module that runs on the machine running Ansible rather than on the host that Ansible is provisioning.
Benefit: configuring disks into LVM.
Note: This post assumes you already have Ansible installed and running.
Install ansible-merge-vars module:
1. root@KVM-test-box:~# pip install ansible_merge_vars
Requirement already satisfied: ansible_merge_vars in /usr/local/lib/python3.8/dist-packages (5.0.0)
By default, Ansible will look for action plugins in an action_plugins folder adjacent to the running playbook. For more information on this, or to change the location where Ansible looks for action plugins, see the Ansible documentation. The plugin file itself is just a one-line import:
from ansible_merge_vars import ActionModule
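Putting that together, the shim file can be dropped in like this (roles/provision-fs is just this post's example role name; adjust the path to wherever your playbook's action_plugins folder lives):

```shell
cd "$(mktemp -d)"

# Create the action_plugins folder and add the one-line shim
mkdir -p roles/provision-fs/action_plugins
cat > roles/provision-fs/action_plugins/merge_vars.py <<'EOF'
from ansible_merge_vars import ActionModule
EOF

cat roles/provision-fs/action_plugins/merge_vars.py
```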
Role Setup:
Once the plugin has been set up, you will want to set up a role.
We will create tasks that merge the variable names associated with each list, then iterate over the merged lists to provision the filesystems from the variables we pass via inventory/host_var or group_var.
- name: Merge VG variables
  merge_vars:
    suffix_to_merge: vgs__to_merge
    merged_var_name: merged_vgs
    expected_type: 'list'

- name: Merge LV variables
  merge_vars:
    suffix_to_merge: lvs__to_merge
    merged_var_name: merged_lvs
    expected_type: 'list'

- name: Merge FS variables
  merge_vars:
    suffix_to_merge: fs__to_merge
    merged_var_name: merged_fs
    expected_type: 'list'

- name: Merge MOUNT variables
  merge_vars:
    suffix_to_merge: mnt__to_merge
    merged_var_name: merged_mnt
    expected_type: 'list'

- name: Create VGs
  lvg:
    vg: "{{ item.vg }}"
    pvs: "{{ item.pvs }}"
  with_items: "{{ merged_vgs }}"

- name: Create LVs
  lvol:
    vg: "{{ item.vg }}"
    lv: "{{ item.lv }}"
    size: "{{ item.size }}"
    pvs: "{{ item.pvs | default(omit) }}"
    shrink: no
  with_items: "{{ merged_lvs }}"

- name: Create FSs
  filesystem:
    dev: "{{ item.dev }}"
    fstype: "{{ item.fstype }}"
  with_items: "{{ merged_fs }}"

- name: Mount FSs
  mount:
    path: "{{ item.path }}"
    src: "{{ item.src }}"
    state: mounted
    fstype: "{{ item.fstype }}"
    opts: "{{ item.opts | default('defaults') }}"
    dump: "{{ item.dump | default('1') }}"
    passno: "{{ item.passno | default('2') }}"
  with_items: "{{ merged_mnt }}"
Note: This task currently has no safeguards for /dev/sda, and no checks to ensure a disk is wiped properly before it is added to the volume group. I have created such safeguards for others, but for the purposes of this blog post these are the basics. If you want my help, you can contact me via email or the ticketing system.
Now we are going to define our inventory file with the LVMs we want to carve out.
Setup inventory:
1. Go inside your inventory/host_var or group_var directory and create a file for testserver1:
nano inventory/host_var/testserver1
2. Save the file.
Definitions of the variables above:

vgs__to_merge: This section creates the volume groups and their physical volumes
- vg: vg_vmguest (this is the volume group name)
  pvs: /dev/sdb (this is the physical disk assigned to the above volume group)
- vg: vg_sl_storage (this is the second volume group name)
  pvs: /dev/sdc (this is the second physical disk assigned to the above volume group)
*You can add as many as you like*

lvs__to_merge: This section is the logical volume creation
- vg: vg_vmguest (this is the volume group created above)
  lv: lv_vg_vmguest (this is the logical volume attached to the above vg)
  size: 100%FREE (this says please use the whole disk)
  shrink: no (this is needed so the disk space is used correctly)
- vg: vg_sl_storage (this is the second volume group created)
  lv: lv_vg_sl_storage (this is the second lvm created, attached to the above vg)
  size: 100%FREE (use the whole disk)
  shrink: no (this is needed so the disk space is properly used)

fs__to_merge: This section formats the lvm
- dev: /dev/vg_vmguest/lv_vg_vmguest (lvm name)
  fstype: ext4 (filesystem you want to format with)
- dev: /dev/vg_sl_storage/lv_vg_sl_storage (2nd lvm name)
  fstype: ext4 (filesystem you want to format with)

mnt__to_merge: This section will create the path, mount it, and add it to fstab
- path: /vmguests (path you want created for the mount)
  src: /dev/vg_vmguest/lv_vg_vmguest (lvm you want to mount)
  fstype: ext4 (this is for the fstab entry)
- path: /sl_storage (this is the second path to create)
  src: /dev/vg_sl_storage/lv_vg_sl_storage (second lvm you want to mount)
  fstype: ext4 (to add to fstab)
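Stripped of the annotations, the complete inventory/host_var/testserver1 file for this example would look like the sketch below (device names and mount points are the ones used in this post; adjust them to your hardware):

```yaml
vgs__to_merge:
  - vg: vg_vmguest
    pvs: /dev/sdb
  - vg: vg_sl_storage
    pvs: /dev/sdc

lvs__to_merge:
  - vg: vg_vmguest
    lv: lv_vg_vmguest
    size: 100%FREE
    shrink: no
  - vg: vg_sl_storage
    lv: lv_vg_sl_storage
    size: 100%FREE
    shrink: no

fs__to_merge:
  - dev: /dev/vg_vmguest/lv_vg_vmguest
    fstype: ext4
  - dev: /dev/vg_sl_storage/lv_vg_sl_storage
    fstype: ext4

mnt__to_merge:
  - path: /vmguests
    src: /dev/vg_vmguest/lv_vg_vmguest
    fstype: ext4
  - path: /sl_storage
    src: /dev/vg_sl_storage/lv_vg_sl_storage
    fstype: ext4
```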
Running your playbook:
cd ansible/
Example justdofs.yml:
- hosts: all
  gather_facts: yes
  any_errors_fatal: true
  roles:
    - role: provision-fs
Command:
ansible/$ ansible-playbook -i inventory/hosts justdofs.yml -u root -k --limit='testservernick1'
Example of a successful play:
ntailor@test-box:~/ansible/computelab$ ansible-playbook -i inventory/hosts justdofs.yml -u root -k --limit='testservernick1'
SSH password:
PLAY [all] *******************************************************************************************************************************************************************************************************
TASK [provision-fs : Merge VG variables] *************************************************************************************************************************************************************************
ok: [testservernick1]
TASK [provision-fs : Merge LV variables] *************************************************************************************************************************************************************************
ok: [testservernick1]
TASK [provision-fs : Merge FS variables] *************************************************************************************************************************************************************************
ok: [testservernick1]
TASK [provision-fs : Merge MOUNT variables] **********************************************************************************************************************************************************************
ok: [testservernick1]
TASK [provision-fs : Create VGs] *********************************************************************************************************************************************************************************
ok: [testservernick1] => (item={'vg': 'vg_vmguest', 'pvs': '/dev/sdb'})
ok: [testservernick1] => (item={'vg': 'vg_sl_storage', 'pvs': '/dev/sdc'})
TASK [provision-fs : Create LVs] *********************************************************************************************************************************************************************************
ok: [testservernick1] => (item={'vg': 'vg_vmguest', 'lv': 'lv_vg_vmguest', 'size': '100%FREE', 'shrink': False})
ok: [testservernick1] => (item={'vg': 'vg_sl_storage', 'lv': 'lv_vg_sl_storage', 'size': '100%FREE', 'shrink': False})
TASK [provision-fs : Create FSs] *********************************************************************************************************************************************************************************
ok: [testservernick1] => (item={'dev': '/dev/vg_vmguest/lv_vg_vmguest', 'fstype': 'ext4'})
ok: [testservernick1] => (item={'dev': '/dev/vg_sl_storage/lv_vg_sl_storage', 'fstype': 'ext4'})
TASK [provision-fs : Mount FSs] **********************************************************************************************************************************************************************************
ok: [testservernick1] => (item={'path': '/vmguests', 'src': '/dev/vg_vmguest/lv_vg_vmguest', 'fstype': 'ext4'})
ok: [testservernick1] => (item={'path': '/sl_storage', 'src': '/dev/vg_sl_storage/lv_vg_sl_storage', 'fstype': 'ext4'})
PLAY RECAP *******************************************************************************************************************************************************************************************************
testservernick1 : ok=8 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
HOW TO CHECK CPU, MEMORY, & DISK THRESHOLDS ON AN ARRAY OF HOSTS
So I was tinkering around as usual, and I thought this would come in handy for other engineers.
If you have a large cluster of servers that can suddenly, overnight, lose all of its memory, CPU, or disk due to the nature of your business, it's difficult to monitor that from a GUI, especially across an array of hosts.
Cloud scenario...
Say you find a node that is dying because too many clients are using resources, and you need to migrate instances off to another node, only you don't know which nodes have the needed resources without having to go look at all the nodes individually.
This tends to be every engineer's pain point. So I decided to come up with a quick, easy solution for emergency situations, where you don't have time to sift through alert systems that only show you data on a per-host basis and tend to load very slowly.
This bash script will check the CPU, memory, and disk mounts (including NFS) and tell you which ones are okay and which ones are not.
CPU – calculated as: 100 (max) - CPU idle = CPU usage
note: it also creates a log, /opt/cpu.log, on each host
MEM – calculated as: used memory / total memory * 100 = percentage of memory used
note: it also creates a log, /opt/mem.log, on each host
Disk – any mount that reaches the warn threshold... COMPLAIN
I have itemised the bash script into functions, so you can comment out any item you don't want to use at the bottom of the script, say if you wanted to check just CPU/MEM.
#!/bin/bash
# Written By Nick Tailor
now=`date -u -d "+8 hour" +'%Y-%m-%d %H:%M:%S'`

# cpu use threshold
cpu_warn='75'
# disk use threshold
disk_warn='80'

#---cpu
item_cpu () {
  cpu_idle=`top -b -n 1 | grep Cpu | awk '{print $8}' | cut -f 1 -d "."`
  cpu_use=`expr 100 - $cpu_idle`
  echo "now current cpu utilization rate of $cpu_use $(hostname) as on $(date)" >> /opt/cpu.log
  if [ $cpu_use -gt $cpu_warn ]
  then
    echo "cpu warning!!! $cpu_use Currently HIGH $(hostname)"
  else
    echo "cpu ok!!! $cpu_use% use Currently LOW $(hostname)"
  fi
}

#---mem
item_mem () {
  # MB units
  LOAD='80.00'
  mem_free_read=`free -h | grep "Mem" | awk '{print $4+$6}'`
  MEM_LOAD=`free -t | awk 'FNR == 2 {printf("%.2f%"), $3/$2*100}'`
  echo "Now the current memory space remaining ${mem_free_read} GB $(hostname) as on $(date)" >> /opt/mem.log
  if [[ $MEM_LOAD > $LOAD ]]
  then
    echo "$MEM_LOAD not good!! MEM USEAGE is HIGH - Free-MEM-${mem_free_read}GB $(hostname)"
  else
    echo "$MEM_LOAD ok!! MEM USAGE is beLOW 80% - Free-MEM-${mem_free_read}GB $(hostname)"
  fi
}

#---disk
item_disk () {
  df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
  do
    echo $output
    usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
    partition=$(echo $output | awk '{ print $2 }' )
    if [ $usep -ge $disk_warn ]; then
      echo "AHH SHIT!, MOVE SOME VOLUMES IDIOT.... \"$partition ($usep%)\" on $(hostname) as on $(date)"
    fi
  done
}

item_cpu
item_mem
#item_disk   # comment out a function call here to skip that whole section without touching individual lines
Now the cool part.
Now, if you have a centrally managed jump host that can reach the rest of your estate, ideally you would want to set up SSH keys on the hosts and ensure you have sudo permissions on those hosts.
We want to loop this script through an array of hosts, have it run, and then report back all the findings in one place. This is extremely handy if you're in a resource crunch.
This assumes you have SSH KEYS SETUP & SUDO for your user setup.
Create the script.
Next, create a host list file:
Server1
Server2
Server3
Server4
Run your for loop, with SSH keys and sudo already set up.
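The loop itself can be as simple as the sketch below (hostlist.dev and checkhealth.sh are hypothetical file names for the host list and the script above; the leading echo makes it a dry run, so drop it once keys and sudo are in place):

```shell
cd "$(mktemp -d)"
printf '%s\n' Server1 Server2 Server3 Server4 > hostlist.dev
echo 'echo stub health script' > checkhealth.sh   # stand-in for the script above

while read -r host; do
  # Drop the leading 'echo' to really run the script over SSH stdin
  echo ssh -o BatchMode=yes "$host" sudo bash -s < checkhealth.sh
done < hostlist.dev | tee cpumem.status.DEV
```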
Logfile – cpumem.status.DEV – will be the log file that has all the info
Output:
cpu ok!!! 3% use Currently dev1.nicktailor.com
17.07% ok!! MEM USAGE is beLOW 80% – Free-MEM-312.7GB dev1.nicktailor.com
5% /dev/mapper/VolGroup00-root
3% /dev/sda2
5% /dev/sda1
1% /dev/mapper/VolGroup00-var_log
72% 192.168.1.101:/data_1
28% 192.168.1.102:/data_2
80% 192.168.1.103:/data_3
AHH SHIT!, MOVE SOME VOLUMES IDIOT.... "192.168.1.104:/data4 (80%)" on dev1.nicktailor.com as on Fri Apr 30 11:55:16 EDT 2021
Okay, so now I'm gonna show you a dirty way to do it, because I'm just dirty. So say you're in a horrible place that doesn't use keys, because they're waiting to be hacked by password. 😛
DIRTY WAY – So this assumes you have sudo permissions on the hosts.
Note: I do not recommend doing it this way if you are a newb. Doing it this way will basically log your password in the bash history, and if you don't know how to clean up after yourself, well.....................you're going to get owned.
I'm only showing you this because some cyber security "folks" believe that not using keys is easier to deal with, in some parallel realities I've visited... You can do the exact same thing above without keys, but you leave a massive trail behind you. Hence why you should use secure keys with passwords.
Not Recommended for Newbies:
A for loop, passing your SSH password inside it.
Log file – cpumem.status.DEV – will be the log file that has all the info
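The password-based variant just swaps ssh for sshpass. Again a dry-run sketch with hypothetical file names, and again: this is exactly the pattern that leaks your password into shell history and ps output:

```shell
cd "$(mktemp -d)"
printf '%s\n' Server1 Server2 Server3 Server4 > hostlist.dev
echo 'echo stub health script' > checkhealth.sh   # stand-in for the script above

while read -r host; do
  # Drop the leading 'echo' to run for real -- and remember the password
  # is now visible in history and in 'ps', which is the trail mentioned above
  echo sshpass -p 'YourPasswordHere' ssh -o StrictHostKeyChecking=no \
    "$host" sudo bash -s < checkhealth.sh
done < hostlist.dev | tee cpumem.status.DEV
```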
Output:
cpu ok!!! 3% use Currently dev1.nicktailor.com
17.07% ok!! MEM USAGE is beLOW 80% – Free-MEM-312.7GB dev1.nicktailor.com
5% /dev/mapper/VolGroup00-root
3% /dev/sda2
5% /dev/sda1
1% /dev/mapper/VolGroup00-var_log
72% 192.168.1.101:/data_1
28% 192.168.1.102:/data_2
80% 192.168.1.103:/data_3
AHH SHIT!, MOVE SOME VOLUMES IDIOT.... "192.168.1.104:/data4 (80%)" on dev1.nicktailor.com as on Fri Apr 30 11:55:16 EDT 2021