Category: Linux

How to Power Up or Power Down multiple instances in OCI using CLI with Ansible

 This assumes you have already configured the OCI CLI and added your key to the user inside the OCI interface, so your Ubuntu or jump box can connect to your OCI infrastructure.
 Ansible role to control powering instances up and down using the OCI CLI.
 This assumes you already have Ansible set up.
 You will need to install the Ansible OCI collections (see the example below).
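The collection itself can also be pulled straight from Ansible Galaxy; note that the oracle.oci modules also need the OCI Python SDK (the oci pip package) on the control node:

ansible-galaxy collection install oracle.oci

pip3 install oci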

.

Now, the reason you would probably want this over Terraform is that Terraform is more suited to infrastructure orchestration, and not really suited to dealing with instances once they are up and running.

If you have scaled servers out in OCI, powering servers up and down in bulk is not currently available in the console. This matters if you are doing a migration, or using a staging environment where you only need the machines while building or troubleshooting.

In those cases, having a way to power multiple machines up or down at once is convenient.

.

Install the OCI collection if you don't have it already.

Linux/macOS

curl -L https://raw.githubusercontent.com/oracle/oci-ansible-collection/master/scripts/install.sh | bash -s -- --verbose

.

ansible-galaxy collection list – Will list the collections installed

# /path/to/ansible/collections

Collection        Version
----------------- -------
amazon.aws        1.4.0
ansible.builtin   1.3.0
ansible.posix     1.3.0
oracle.oci        2.10.0

.

Once you have it installed, you need to test that the OCI client is working.

oci iam compartment list --all   (this will list out the compartments, including their OCIDs, for your tenancy)

Compartments in OCI are a way to organise infrastructure and control access to those resources. This is great if, for example, you have contractors coming in and you only want them to have access to certain things, not everything.
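If the full JSON is too noisy, the OCI CLI's standard --query (JMESPath) and --output flags can trim the listing down to just the compartment names and OCIDs, for example:

oci iam compartment list --all --query 'data[].{name:name, id:id}' --output table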

Now, there are two ways you can get your instance names.

 One is logging in via the OCI interface and browsing to the correct compartment, which is very slow and mind-numbing to wait for.
 Or you can use an automated approach, which is what you should be doing with anything that needs to be done over and over.

.

Bash script to get the instance names from OCI

 This will use the OCI CLI to provide all instance names and IPs.
 It loops through each availability domain.
 For each availability domain, it lists the instance IDs and writes them to instance_ids.txt.
 It cleans up the instance_ids.txt file to remove brackets, quotes, and commas.
 It reads each instance ID from instance_ids.txt.
 For each instance, it retrieves the VNIC information.
 It extracts the display name, public IP, and private IP, and prints them.
 The script ends the loop and moves to the next availability domain.

#!/bin/bash

compartment_id="ocid1.compartment.oc1..insert compartment ID here"

# Explicitly define the availability domains based on your provided data
availability_domains=("zcLB:US-CHICAGO-1-AD-1" "zcLB:US-CHICAGO-1-AD-2" "zcLB:US-CHICAGO-1-AD-3")

# For each availability domain, list the instances
for ad in "${availability_domains[@]}"; do

    # List instances within the specific AD and compartment, extracting the "id" field
    oci compute instance list --compartment-id "$compartment_id" --availability-domain "$ad" --query "data[].id" --raw-output > instance_ids.txt

    # Clean up the instance IDs (removing brackets, quotes, etc.)
    sed -i 's/\[//g' instance_ids.txt
    sed -i 's/\]//g' instance_ids.txt
    sed -i 's/"//g' instance_ids.txt
    sed -i 's/,//g' instance_ids.txt

    # Read each instance ID from instance_ids.txt
    while read -r instance_id; do

        # Get instance VNIC information
        instance_info=$(oci compute instance list-vnics --instance-id "$instance_id")

        # Extract the required fields and print them
        display_name=$(echo "$instance_info" | jq -r '.data[0]."display-name"')
        public_ip=$(echo "$instance_info" | jq -r '.data[0]."public-ip"')
        private_ip=$(echo "$instance_info" | jq -r '.data[0]."private-ip"')

        echo "Availability Domain: $ad"
        echo "Display Name: $display_name"
        echo "Public IP: $public_ip"
        echo "Private IP: $private_ip"
        echo "-----------------------------------------"

    done < instance_ids.txt

done
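To use it, save the script to a file, make it executable, and redirect the output to a file; the script name here is just an example:

chmod +x get_oci_instances.sh
./get_oci_instances.sh > instance.names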

.

The output of the script, when piped into a file, will look like this:

instance.names

Availability Domain: zcLB:US-CHICAGO-1-AD-1

Display Name: Instance1

Public IP: 192.0.2.1

Private IP: 10.0.0.1

—————————————–

Availability Domain: zcLB:US-CHICAGO-1-AD-1

Display Name: Instance2

Public IP: 192.0.2.2

Private IP: 10.0.0.2

—————————————–

.

.

You can now grep this file for the names of the servers you want to power on or off quickly.

 grep '<instance name>' instance.names
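For example, to pull the whole block for one of the sample servers above:

grep -B1 -A3 'Display Name: Instance1' instance.names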

.

Now we have an Ansible playbook that can power instances on or off by name, using the names provided by the OCI client.

Ansible playbook to power on or off multiple instances via OCI CLI

- name: Control OCI Instance Power State based on Instance Names
  hosts: localhost
  vars:
    instance_names_to_stop:
      - instance1
      # Add more instance names here if you wish to stop them...

    instance_names_to_start:
      # List the instance names you wish to start here...
      # Example:
      - Instance2

  tasks:
    - name: Fetch all instance details in the compartment
      command:
        cmd: oci compute instance list --compartment-id ocid1.compartment.oc1..aaaaaaaak7jc7tn2su2oqzmrbujpr5wmnuucj4mwj4o4g7rqlzemy4yvxrza --output json
      register: oci_output

    - name: Parse the OCI CLI output
      set_fact:
        instances: "{{ oci_output.stdout | from_json }}"

    - name: Extract relevant information
      set_fact:
        clean_instances: "{{ clean_instances | default([]) + [{ 'name': item['display-name'], 'id': item.id, 'state': item['lifecycle-state'] }] }}"
      loop: "{{ instances.data }}"
      when: "'display-name' in item and 'id' in item and 'lifecycle-state' in item"

    - name: Filter out instances to stop
      set_fact:
        instances_to_stop: "{{ instances_to_stop | default([]) + [item] }}"
      loop: "{{ clean_instances }}"
      when: "item.name in instance_names_to_stop and item.state == 'RUNNING'"

    - name: Filter out instances to start
      set_fact:
        instances_to_start: "{{ instances_to_start | default([]) + [item] }}"
      loop: "{{ clean_instances }}"
      when: "item.name in instance_names_to_start and item.state == 'STOPPED'"

    - name: Filter out instances to stop
      set_fact:
        instances_to_stop: "{{ clean_instances | selectattr('name', 'in', instance_names_to_stop) | selectattr('state', 'equalto', 'RUNNING') | list }}"

    - name: Filter out instances to start
      set_fact:
        instances_to_start: "{{ clean_instances | selectattr('name', 'in', instance_names_to_start) | selectattr('state', 'equalto', 'STOPPED') | list }}"

    - name: Display instances to stop (you can remove this debug task later)
      debug:
        var: instances_to_stop

    - name: Display instances to start (you can remove this debug task later)
      debug:
        var: instances_to_start

    - name: Power off instances
      command:
        cmd: "oci compute instance action --action STOP --instance-id {{ item.id }}"
      loop: "{{ instances_to_stop }}"
      when: instances_to_stop | length > 0
      register: state

#    - debug:
#        var: state

    - name: Power on instances
      command:
        cmd: "oci compute instance action --action START --instance-id {{ item.id }}"
      loop: "{{ instances_to_start }}"
      when: instances_to_start | length > 0

.
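To run it, save the play to a file and point ansible-playbook at it, swapping in your own compartment OCID inside the play; the filename here is just an example:

ansible-playbook oci-power-control.yml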

The output will look like

PLAY [Control OCI Instance Power State based on Instance Names] **********************************************************************************

.

TASK [Gathering Facts] ***************************************************************************************************************************

ok: [localhost]

.

TASK [Fetch all instance details in the compartment] *********************************************************************************************

changed: [localhost]

.

TASK [Parse the OCI CLI output] ******************************************************************************************************************

ok: [localhost]

.

TASK [Extract relevant information] **************************************************************************************************************

ok: [localhost] => (item={‘display-name’: ‘Instance1’, ‘id’: ‘ocid1.instance.oc1..exampleuniqueID1’, ‘lifecycle-state’: ‘STOPPED’})

ok: [localhost] => (item={‘display-name’: ‘Instance2’, ‘id’: ‘ocid1.instance.oc1..exampleuniqueID2’, ‘lifecycle-state’: ‘RUNNING’})

.

TASK [Filter out instances to stop] **************************************************************************************************************

ok: [localhost]

.

TASK [Filter out instances to start] *************************************************************************************************************

ok: [localhost]

.

TASK [Display instances to stop (you can remove this debug task later)] **************************************************************************

ok: [localhost] => {

    "instances_to_stop": [

        {

            "name": "Instance2",

            "id": "ocid1.instance.oc1..exampleuniqueID2",

            "state": "RUNNING"

        }

    ]

}

.

TASK [Display instances to start (you can remove this debug task later)] *************************************************************************

ok: [localhost] => {

    "instances_to_start": [

        {

            "name": "Instance1",

            "id": "ocid1.instance.oc1..exampleuniqueID1",

            "state": "STOPPED"

        }

    ]

}

.

TASK [Power off instances] ***********************************************************************************************************************

changed: [localhost] => (item={‘name’: ‘Instance2’, ‘id’: ‘ocid1.instance.oc1..exampleuniqueID2’, ‘state’: ‘RUNNING’})

.

TASK [Power on instances] ************************************************************************************************************************

changed: [localhost] => (item={‘name’: ‘Instance1’, ‘id’: ‘ocid1.instance.oc1..exampleuniqueID1’, ‘state’: ‘STOPPED’})

.

PLAY RECAP ****************************************************************************************************************************************

localhost                  : ok=9    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

.

.

How to Configure Redhat 7 & 8 Network Interfaces using Ansible

 This role will configure Red Hat 7 and up interfaces for virtual and physical machines
(bonded NICs, gateways, routes, interface names)

How to use this role:

1.You must first download the git repository into your roles directory, usually ansible/role/
2.Now you want to edit the hosts.client file, or create it if it doesn't exist, under your "ansible/inventory/dev:staging:prod" directory. This is a good way to separate environments with Ansible; inside each environment you should have a hosts file as indicated below.

Example file: hosts.dev, hosts.staging, hosts.prod

c.Put your server under the appropriate group inside the file and save
d.testmachine1 ansible_host=192.168.1.101
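A minimal sketch of what that inventory file can look like; the group name here is just an example, use whatever group your playbook targets:

[redhat_servers]
testmachine1 ansible_host=192.168.1.101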

.

Cool stuff: If you deployed a virtual machine using the ansible-vmware modules, it will set the hostname of the host using the same shortname as the VM. You may, however, require the FQDN rather than the shortname on the host. To solve this I added some code to set the FQDN as the new_hostname if you define it under your hosts file, as shown below.

e.testmachine1 ansible_host=192.168.1.101 new_hostname=testmachine1.nicktailor.com
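A minimal sketch of the kind of task that handles this inside the role (the actual task in the repository may differ slightly); it only fires when new_hostname is defined in the inventory:

- name: Set the FQDN as the hostname when new_hostname is defined
  hostname:
    name: "{{ new_hostname }}"
  when: new_hostname is defined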

.

Now inside this directory you should see hosts & host_vars, group_vars

Descriptions:

f.Hosts – is where you will list your servers under specific groups, which tells the playbook what the server is, whether a specific task should run on it, and how to find it.
g.Host_vars – Inside this directory is where you list each server by the name you used under hosts. Inside these files you pass variable parameters to the specific roles when running your playbook. Without these the playbook can't do the tasks you want it to.
h.Group_vars – Are a way to group variables for sets of servers, which keeps the code cleaner and easier to manage.

Operational Use:

3.Move inside host_vars
i.cd host_vars
j.create a file called {{ servername }} and save it; for us it's testmachine1.nicktailor.com
k.add the following parameters to that file and save.

.


passed parameters: example: var/testmachine1

#Configure network can be used on physical and virtual-machines

nic_devices:
    - device: ens192
      ip: 192.168.10.100
      nm: 255.255.255.0
      gw: 192.168.10.254
      uuid:
      mac:

..

Note: you do not need to specify the UUID, but you can if you wish. You do need the MAC if you are doing bonded NICs on the hosts. If you are using physical machines with Satellite deployments, then it's probably a good idea to use the MAC of the NIC you want the DHCP request to hit, to avoid accidentally deploying to the wrong host. When dealing with physical machines you don't really have the same forgiveness of snapshots or quick rebuilds as with a VM. You can do more complicated configurations as indicated below. You can always email or contact me via LinkedIn, top right of the blog, if you need assistance.

More Advanced configurations: bonded nics, routes, multiple nics and gateways

bond_devices:
    - device: ens1
      mac: ec:0d:9a:05:3b:f0
      master: mgt
      eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
    - device: ens1d1
      mac: ec:0d:9a:05:3b:f1
      master: mgt
      eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
    - device: mgt
      ip: 10.100.1.2
      nm: 255.255.255.0
      gw: 10.100.1.254
      pr: ens1
    - device: ens6
      mac: ec:0d:9a:05:16:g0
      master: app
    - device: ens6d1
      mac: ec:0d:9a:05:16:g1
      master: app
    - device: app
      ip: 10.101.1.3
      nm: 255.255.255.0
      pr: ens6

routes:
    - device: app
      route:
        - 100.240.136.0/24
        - 100.240.138.0/24

    - device: app
      gw: 10.156.177.1
      route:
        - 10.156.148.0/24

.

.

Running your playbook:

1.You must always run your playbook from inside the parent directory "ansible"
2.Now there is a playbook called setup-networkonly.yml in the ansible directory which simply calls the setup-redhat-interfaces role inside the roles directory.

Example of ansible/setup-networkonly.yml:

- hosts: all
  gather_facts: no
  roles:
    - role: setup-redhat-interfaces

.

Command:

ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com'

.

 -i : This flag tells the ansible-playbook command which hosts file to use; these are always defined by environment, like hosts.dev or hosts.staging
 -u : this is the ssh_user you will be connecting to the servers with
 -Kkb : this tells ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
 --ask-become : is saying become root
 --limit='server' : this allows you to segment which servers you want to run the playbook against.

.

.

Test Run:

[root@ansible-home]# ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com' -k

SSH password:

.

PLAY [all] *************************************************************************************************************************************************************************

.

TASK [setup-redhat-network : Gather facts] ************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : set_fact] ****************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : Cleanup network confguration] ********************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : find] ********************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : file] ********************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={u’rusr: True, u’uid: 0, u’rgrp: True, u’xoth: False, u’islnk: False, u’woth: False, u’nlink: 1, u’issock: False, u’mtime: 1530272815.953706, u’gr_name: u’root‘, u’path: u’/etc/sysconfig/network-scripts/ifcfg-enp0s3′, u’xusr: False, u’atime: 1665494779.63, u’inode: 1055173, u’isgid: False, u’size: 285, u’isdir: False, u’ctime: 1530272816.3037066, u’isblk: False, u’wgrp: False, u’xgrp: False, u’isuid: False, u’dev: 64769, u’roth: True, u’isreg: True, u’isfifo: False, u’mode: u’0644′, u’pw_name: u’root‘, u’gid: 0, u’ischr: False, u’wusr: True})

changed: [testmachine1.nicktailor.com] => (item={u’rusr: True, u’uid: 0, u’rgrp: True, u’xoth: False, u’islnk: False, u’woth: False, u’nlink: 1, u’issock: False, u’mtime: 1530272848.538762, u’gr_name: u’root‘, u’path: u’/etc/sysconfig/network-scripts/ifcfg-enp0s8′, u’xusr: False, u’atime: 1665494779.846, u’inode: 2769059, u’isgid: False, u’size: 203, u’isdir: False, u’ctime: 1530272848.6417623, u’isblk: False, u’wgrp: False, u’xgrp: False, u’isuid: False, u’dev: 64769, u’roth: True, u’isreg: True, u’isfifo: False, u’mode: u’0644′, u’pw_name: u’root‘, u’gid: 0, u’ischr: False, u’wusr: True})

.

TASK [setup-redhat-network : file] ********************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : Setup bond devices] ******************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={u’device: u’enp0s8′, u’mac: u’08:00:27:13:b2:73′, u’master: u’mgt‘})

changed: [testmachine1.nicktailor.com] => (item={u’device: u’enp0s9′, u’mac: u’08:00:27:e8:cf:cd’, u’master: u’mgt‘})

changed: [testmachine1.nicktailor.com] => (item={u’device: u’mgt‘, u’ip: u’192.168.10.200‘, u’nm: u’255.255.255.0′, u’gw: u’10.0.2.2′, u’pr: u’enp0s8′})

.

TASK [setup-redhat-network : Setup NIC] ***************************************************************************************************************************************

.

TASK [setup-redhat-network : Setup static routes] *****************************************************************************************************************************

.

PLAY RECAP *************************************************************************************************************************************************************************

testmachine1.nicktailor.com : ok=7    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

.

[root@testmachine1.nicktailor.com]# cat /proc/net/bonding/mgt

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

.

Bonding Mode: fault-tolerance (active-backup)

Primary Slave: enp0s8 (primary_reselect failure)

Currently Active Slave: enp0s8

MII Status: up

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

.

Slave Interface: enp0s8

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 08:00:27:13:b2:73

Slave queue ID: 0

.

Slave Interface: enp0s9

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 08:00:27:e8:cf:cd

Slave queue ID: 0

.

[root@testmachine1.nicktailor.com]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 08:00:27:63:63:0e brd ff:ff:ff:ff:ff:ff

    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3

       valid_lft 86074sec preferred_lft 86074sec

    inet6 fe80::a162:1b49:98b7:6c54/64 scope link noprefixroute

       valid_lft forever preferred_lft forever

3: enp0s8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

4: enp0s9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 08:00:27:05:b4:e8 brd ff:ff:ff:ff:ff:ff

6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether ae:db:dc:52:22:f8 brd ff:ff:ff:ff:ff:ff

7: mgt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.200/24 brd 192.168.56.255 scope global mgt

       valid_lft forever preferred_lft forever

    inet6 fe80::a00:27ff:fe13:b273/64 scope link

       valid_lft forever preferred_lft forever

.

How to deploy OpenNebula Frontends via Ansible

Frontend: This role deploys the OpenNebula Cloud platform frontends via Ansible

Ansible Operational Documentation – OpenNebula Frontend Deployments

https://opennebula.io/ – OpenNebula is basically an open-source, in-house cloud platform on which you can deploy and manage virtual machines using a KVM backend on the host, and it is scalable. OpenNebula support gives you a document to run manual commands, and would not provide the open-source playbook they use to deploy frontends.

So I reverse engineered one for others to use and edit as needed, as nobody runs commands manually anymore. If you are not automating, then you are basically a dinosaur.

Note: You will still need to buy your own enterprise licence to get access to the apt source. You can find the relevant variables below, and you can plug those into defaults/main.yml before you run the book.

This role handles the following when deploying OpenNebula frontends in standalone or HA, using groups to distinguish how to deploy at scale with Apache.

 Apache.yml – a separate task file that independently deploys Apache and its configuration. This is so that if you simply wanted to rerun the Apache configuration with a new domain, you don't need to rerun the whole book.
 Mysql.yml – this task uses a custom Python library that is not part of native Ansible, located inside the library folder, and handles the following:
It deploys mysql
It changes the root password
Removes the anonymous user
Disables remote root login
Removes the test db
It will create the database and user, and grant permissions for the new database
 Main.yml – This is the primary task that deploys the ON frontend
Installs dependencies for ON
Imports the keys for Ubuntu, phusionpassenger, ON
Installs and configures mysql (using the custom Python library)
Installs and configures Apache for ON
Configures sunstone.conf
Configures oned.conf
Is able to distinguish between standalone and HA setup
Copies ssh keys for ON to secondary FE's
Copies rafthook scripts to secondary FE's
Adds the primary server to the ON zone
Updates the zone endpoint
Backs up the primary mysql db and copies it to secondary nodes in HA
Sets up federation configuration if defined
Stops and starts services at specific times during the installation for everything to work correctly (super important: do not change the order without reviewing)

How to use this role:

1.You must first download the git repository
b.git clone git@github.com:Perfect10NickTailor/opennebula-frontends.git
2.Under your user you will see a directory called opennebula-frontends; cd into this directory
c.cd opennebula-frontends
3.Move inside the inventory directory to the appropriate client directory
a.Create a file called hosts.opennebula
4.Now you want to edit the {{ hosts.opennebula }} file, or create it if it doesn't exist

Example file: hosts.opennebula

d.Put your server under the appropriate group inside the file and save

Example: This is how you would list out 3 frontend hosts

[all:children]

frontend_server_primary # this is where you list ON server number 1

mysql_servers # you list any server that will require a mysql install for ON

apache_servers # you list any server that will be running Apache for ON

frontend_HA # you list any additional frontends that will be used in HA for OpenNebula

.

[frontend_server_primary]

Testmachine1 ansible_host=192.168.86.61

.

[mysql_servers]

Testmachine1 ansible_host=192.168.86.61

Testmachine2 ansible_host=192.168.86.62

#Testmachine3 ansible_host=192.168.86.63

[apache_servers]

Testmachine1 ansible_host=192.168.86.61

Testmachine2 ansible_host=192.168.86.62

#Testmachine3 ansible_host=192.168.86.63

.

[frontend_HA]

Testmachine2 ansible_host=192.168.86.62

#Testmachine3 ansible_host=192.168.86.63

.

Note: For a standalone setup you simply list the same host under the 3 groups listed below, and then in your command use --limit='testmachine1' instead of --limit='testmachine1,testmachine2'. The playbook is smart enough to know what to do from there.

[frontend_server_primary]

Testmachine1 ansible_host=192.168.86.63

[mysql_servers]

Testmachine1 ansible_host=192.168.86.63

[apache_servers]

Testmachine1 ansible_host=192.168.86.63

Special notes: This playbook is designed so you can choose to deploy ON standalone, with classic centralised mysql (HA), or in OpenNebula HA (with mysql deployed individually and rafthook configuration).

 We will be deploying the officially supported OpenNebula way.
Although no senior architect would usually choose this approach over classic mysql HA (active/passive), we followed it anyway.

Important things to know:

Group variables for this role are passed and need to be defined below. If you want to change certificates and configure mysql, it has to be done in these group vars for this role to work. You will need to create OpenNebula SSL keys for the VNC console stuff to work; they are not provided by this playbook.
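If you just need keys for testing, a self-signed pair can be generated with openssl and dropped into the cert/key paths used in the group vars below; the subject CN here is only an example:

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout /etc/ssl/private/opennebula.key \
  -out /etc/ssl/certs/opennebula.pem \
  -subj "/CN=opennebula.nicktailor.com"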

Dev/group_vars:

Frontend_server_primary

session_memcache: memcache

vnc_proxy_support_wss: true

vnc_proxy_cert_path: /etc/ssl/certs/opennebula.pem

vnc_proxy_key_path: /etc/ssl/private/opennebula.key

vnc_proxy_ipv6: false

vnc_request_password: false

driver: qcow2

.

Frontend_HA

#If these are defined HA setup is pushed.

#It Adds VIP hooks for floating IP and federation server ID:

#these variables can be overridden at the host_var level.

#If host is listed under frontend_HA group in your host

#then these defaults will be used

.

leader_interface_name: enp0s8

leader_ip: 192.168.50.132/24

follower_ip: 192.168.50.132/24

follower_interface_name: enp0s8

.

Mysql_servers

OpenNebula Mysql Installation

mysqlrootuser: root

mysqlnewinstallpassword: Swordfish123

mysql_admin_user: admin    

mysql_admin_password: admin

database_to_create: opennebula
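Once the role has run, a quick way to confirm the database and grants exist with the example credentials above (swap in your real password if you changed it):

mysql -u admin -padmin -e 'SHOW DATABASES;' | grep opennebula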

.

Running your playbook:

1.You must run your playbook from inside the parent directory of "ansible"
2.There is a file called commands.txt with references to help you format your command quickly.
3.Now there is a playbook called ON-frontenddeploy.yml in the ansible directory which simply calls the opennebula-frontends role inside the roles directory.

Example of opennebula-frontend/ON-frontenddeploy.yml:

- hosts: all
  become: True
  become_user: root
  gather_facts: no
  roles:
    - role: opennebula-frontend

.

Command: Running the playbook to deploy OpenNebula in HA

ansible-playbook -i inventory/dev/hosts ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1,testmachine2'

Command: Running the playbook to deploy OpenNebula in Standalone

ansible-playbook -i inventory/dev/hosts ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1'

.

-i : This flag tells the ansible-playbook command which hosts file to use; these are always defined by customer, like hosts.opennebula2
-u : this is the ssh_user you will be connecting to the servers with
-Kkb : this tells ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
--ask-become : is saying become root
--limit='server' : this allows you to segment which servers you want to run the playbook against.

.

Successful run:

.

brucewayne@KVMtestbox:~/ansible/opennebula-frontend$ ansible-playbook -i inventory/dev/hosts.opennebula2 ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1,testmachine2'

SSH password:

BECOME password[defaults to SSH password]:

.

PLAY [all] ***************************************************************************************************************************************************************************************************************

.

TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************

ok: [testmachine2] => (item=curl)

ok: [testmachine1] => (item=curl)

ok: [testmachine1] => (item=gnupg)

ok: [testmachine2] => (item=gnupg)

changed: [testmachine1] => (item=build-essential)

ok: [testmachine1] => (item=dirmngr)

ok: [testmachine1] => (item=ca-certificates)

ok: [testmachine1] => (item=memcached)

changed: [testmachine2] => (item=build-essential)

ok: [testmachine2] => (item=dirmngr)

ok: [testmachine2] => (item=ca-certificates)

ok: [testmachine2] => (item=memcached)

.

TASK [frontend : import the opennebula apt key] **************************************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : Show Key list] ******************************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “keylist.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

ok: [testmachine2] => {

    “keylist.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

.

TASK [frontend : import the phusionpassenger apt key] ********************************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : Show Key list] ******************************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “keylist2.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “pub   rsa4096 2013-06-30 [SC]”,

        ”      1637 8A33 A6EF 1676 2922  526E 561F 9B9C AC40 B2F7″,

        “uid           [ unknown] Phusion Automated Software Signing (Used by automated tools to sign software packages) <auto-software-signing@phusion.nl>”,

        “sub   rsa4096 2013-06-30 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

ok: [testmachine2] => {

    “keylist2.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “pub   rsa4096 2013-06-30 [SC]”,

        ”      1637 8A33 A6EF 1676 2922  526E 561F 9B9C AC40 B2F7″,

        “uid           [ unknown] Phusion Automated Software Signing (Used by automated tools to sign software packages) <auto-software-signing@phusion.nl>”,

        “sub   rsa4096 2013-06-30 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

.

TASK [frontend : add opennebula apt repository] **************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : add bionic phusionpassenger apt repository] *************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : wget apt-transport-https ca-certificates] ***************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “install2”: {

        “changed”: true,

        “cmd”: “apt-get -y install wget apt-transport-https ca-certificates”,

        “delta”: “0:00:02.087119”,

        “end”: “2022-04-06 03:13:42.512860”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:13:40.425741”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “Reading package lists…\nBuilding dependency tree…\nReading state information…\nca-certificates is already the newest version (20210119~20.04.2).\nwget is already the newest version (1.20.3-1ubuntu2).\nwget set to manually installed.\nThe following NEW packages will be installed\n  apt-transport-https\n0 to upgrade, 1 to newly install, 0 to remove and 1 not to upgrade.\nNeed to get 4,680 B of archives.\nAfter this operation, 162 kB of additional disk space will be used.\nGet:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]\nFetched 4,680 B in 0s (15.1 kB/s)\nSelecting previously unselected package apt-transport-https.\r\n(Reading database … \r(Reading database … 5%\r(Reading database … 10%\r(Reading database … 15%\r(Reading database … 20%\r(Reading database … 25%\r(Reading database … 30%\r(Reading database … 35%\r(Reading database … 40%\r(Reading database … 45%\r(Reading database … 50%\r(Reading database … 55%\r(Reading database … 60%\r(Reading database … 65%\r(Reading database … 70%\r(Reading database … 75%\r(Reading database … 80%\r(Reading database … 85%\r(Reading database … 90%\r(Reading database … 95%\r(Reading database … 100%\r(Reading database … 199304 files and directories currently installed.)\r\nPreparing to unpack …/apt-transport-https_2.0.6_all.deb …\r\nUnpacking apt-transport-https (2.0.6) …\r\nSetting up apt-transport-https (2.0.6) …”,

        “stdout_lines”: [

            “Reading package lists…”,

            “Building dependency tree…”,

            “Reading state information…”,

            “ca-certificates is already the newest version (20210119~20.04.2).”,

            “wget is already the newest version (1.20.3-1ubuntu2).”,

            “wget set to manually installed.”,

            “The following NEW packages will be installed”,

            ”  apt-transport-https”,

            “0 to upgrade, 1 to newly install, 0 to remove and 1 not to upgrade.”,

            “Need to get 4,680 B of archives.”,

            “After this operation, 162 kB of additional disk space will be used.”,

            “Get:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]”,

            “Fetched 4,680 B in 0s (15.1 kB/s)”,

            “Selecting previously unselected package apt-transport-https.”,

            “(Reading database … “,

            “(Reading database … 5%”,

            “(Reading database … 10%”,

            “(Reading database … 15%”,

            “(Reading database … 20%”,

            “(Reading database … 25%”,

            “(Reading database … 30%”,

            “(Reading database … 35%”,

            “(Reading database … 40%”,

            “(Reading database … 45%”,

            “(Reading database … 50%”,

            “(Reading database … 55%”,

            “(Reading database … 60%”,

            “(Reading database … 65%”,

            “(Reading database … 70%”,

            “(Reading database … 75%”,

            “(Reading database … 80%”,

            “(Reading database … 85%”,

            “(Reading database … 90%”,

            “(Reading database … 95%”,

            “(Reading database … 100%”,

            “(Reading database … 199304 files and directories currently installed.)”,

            “Preparing to unpack …/apt-transport-https_2.0.6_all.deb …”,

            “Unpacking apt-transport-https (2.0.6) …”,

            “Setting up apt-transport-https (2.0.6) …”

        ]

    }

}

ok: [testmachine2] => {

    “install2”: {

        “changed”: true,

        “cmd”: “apt-get -y install wget apt-transport-https ca-certificates”,

        “delta”: “0:00:02.710741”,

        “end”: “2022-04-06 03:13:43.155299”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:13:40.444558”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “Reading package lists…\nBuilding dependency tree…\nReading state information…\nca-certificates is already the newest version (20210119~20.04.2).\nwget is already the newest version (1.20.3-1ubuntu2).\nwget set to manually installed.\nThe following packages were automatically installed and are no longer required:\n  linux-headers-5.11.0-27-generic linux-hwe-5.11-headers-5.11.0-27\n  linux-image-5.11.0-27-generic linux-modules-5.11.0-27-generic\n  linux-modules-extra-5.11.0-27-generic\nUse ‘sudo apt autoremove’ to remove them.\nThe following NEW packages will be installed\n  apt-transport-https\n0 to upgrade, 1 to newly install, 0 to remove and 37 not to upgrade.\nNeed to get 4,680 B of archives.\nAfter this operation, 162 kB of additional disk space will be used.\nGet:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]\nFetched 4,680 B in 0s (13.2 kB/s)\nSelecting previously unselected package apt-transport-https.\r\n(Reading database … \r(Reading database … 5%\r(Reading database … 10%\r(Reading database … 15%\r(Reading database … 20%\r(Reading database … 25%\r(Reading database … 30%\r(Reading database … 35%\r(Reading database … 40%\r(Reading database … 45%\r(Reading database … 50%\r(Reading database … 55%\r(Reading database … 60%\r(Reading database … 65%\r(Reading database … 70%\r(Reading database … 75%\r(Reading database … 80%\r(Reading database … 85%\r(Reading database … 90%\r(Reading database … 95%\r(Reading database … 100%\r(Reading database … 202372 files and directories currently installed.)\r\nPreparing to unpack …/apt-transport-https_2.0.6_all.deb …\r\nUnpacking apt-transport-https (2.0.6) …\r\nSetting up apt-transport-https (2.0.6) …”,

        “stdout_lines”: [

            “Reading package lists…”,

            “Building dependency tree…”,

            “Reading state information…”,

            “ca-certificates is already the newest version (20210119~20.04.2).”,

            “wget is already the newest version (1.20.3-1ubuntu2).”,

            “wget set to manually installed.”,

            “The following packages were automatically installed and are no longer required:”,

            ”  linux-headers-5.11.0-27-generic linux-hwe-5.11-headers-5.11.0-27″,

            ”  linux-image-5.11.0-27-generic linux-modules-5.11.0-27-generic”,

            ”  linux-modules-extra-5.11.0-27-generic”,

            “Use ‘sudo apt autoremove’ to remove them.”,

            “The following NEW packages will be installed”,

            ”  apt-transport-https”,

            “0 to upgrade, 1 to newly install, 0 to remove and 37 not to upgrade.”,

            “Need to get 4,680 B of archives.”,

            “After this operation, 162 kB of additional disk space will be used.”,

            “Get:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]”,

            “Fetched 4,680 B in 0s (13.2 kB/s)”,

            “Selecting previously unselected package apt-transport-https.”,

            “(Reading database … “,

            “(Reading database … 5%”,

            “(Reading database … 10%”,

            “(Reading database … 15%”,

            “(Reading database … 20%”,

            “(Reading database … 25%”,

            “(Reading database … 30%”,

            “(Reading database … 35%”,

            “(Reading database … 40%”,

            “(Reading database … 45%”,

            “(Reading database … 50%”,

            “(Reading database … 55%”,

            “(Reading database … 60%”,

            “(Reading database … 65%”,

            “(Reading database … 70%”,

            “(Reading database … 75%”,

            “(Reading database … 80%”,

            “(Reading database … 85%”,

            “(Reading database … 90%”,

            “(Reading database … 95%”,

            “(Reading database … 100%”,

            “(Reading database … 202372 files and directories currently installed.)”,

            “Preparing to unpack …/apt-transport-https_2.0.6_all.deb …”,

            “Unpacking apt-transport-https (2.0.6) …”,

            “Setting up apt-transport-https (2.0.6) …”

        ]

    }

}

.

TASK [frontend : apt-get update] *****************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Include mysql task when groupvar mysqlservers is defined] ***********************************************************************************************************************************************

included: /home/brucewayne/ansible/opennebula-frontend/roles/frontend/tasks/mysql.yml for testmachine1, testmachine2

.

TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************

changed: [testmachine1] => (item=mariadb-server)

changed: [testmachine1] => (item=python3-pymysql)

changed: [testmachine2] => (item=mariadb-server)

changed: [testmachine2] => (item=python3-pymysql)

.

TASK [frontend : Secure mysql installation] ******************************************************************************************************************************************************************************

[WARNING]: Module did not set no_log for change_root_password

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “mysql_secure”: {

        “changed”: true,

        “failed”: false,

        “meta”: {

            “change_root_pwd”: “True  — But not for all of the hosts”,

            “connected_with_socket?”: true,

            “disallow_root_remotely”: “False — meets the desired state”,

            “hosts_failed”: [

                “127.0.0.1”,

                “::1”

            ],

            “hosts_success”: [

                “localhost”

            ],

            “mysql_version_above_10_3?”: false,

            “new_password_correct?”: false,

            “remove_anonymous_user”: “False — meets the desired state”,

            “remove_test_db”: “False — meets the desired state”,

            “stdout”: “Password for user: root @ Hosts: [‘localhost’] changed to the desired state”

        },

        “warnings”: [

            “Module did not set no_log for change_root_password”

        ]

    }

}

ok: [testmachine2] => {

    “mysql_secure”: {

        “changed”: true,

        “failed”: false,

        “meta”: {

            “change_root_pwd”: “True  — But not for all of the hosts”,

            “connected_with_socket?”: true,

            “disallow_root_remotely”: “False — meets the desired state”,

            “hosts_failed”: [

                “::1”,

                “127.0.0.1”

            ],

            “hosts_success”: [

                “localhost”

            ],

            “mysql_version_above_10_3?”: false,

            “new_password_correct?”: false,

            “remove_anonymous_user”: “False — meets the desired state”,

            “remove_test_db”: “False — meets the desired state”,

            “stdout”: “Password for user: root @ Hosts: [‘localhost’] changed to the desired state”

        },

        “warnings”: [

            “Module did not set no_log for change_root_password”

        ]

    }

}

.

TASK [frontend : Create opennebula database] *****************************************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “database”: {

        “changed”: true,

        “db”: “opennebula”,

        “db_list”: [

            “opennebula”

        ],

        “executed_commands”: [

            “CREATE DATABASE `opennebula`”

        ],

        “failed”: false

    }

}

ok: [testmachine2] => {

    “database”: {

        “changed”: true,

        “db”: “opennebula”,

        “db_list”: [

            “opennebula”

        ],

        “executed_commands”: [

            “CREATE DATABASE `opennebula`”

        ],

        “failed”: false

    }

}

.

TASK [frontend : create user ‘admin’ with password ‘admin’ for ‘{{opennebula_db}}’ and grant all priveleges] *******************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : install opennebula packages] ****************************************************************************************************************************************************************************

changed: [testmachine1] => (item=opennebula)

changed: [testmachine1] => (item=opennebula-sunstone)

changed: [testmachine1] => (item=opennebula-gate)

changed: [testmachine1] => (item=opennebula-flow)

ok: [testmachine1] => (item=opennebula-rubygems)

changed: [testmachine1] => (item=opennebula-fireedge)

ok: [testmachine1] => (item=gnupg)

changed: [testmachine2] => (item=opennebula)

changed: [testmachine2] => (item=opennebula-sunstone)

changed: [testmachine2] => (item=opennebula-gate)

changed: [testmachine2] => (item=opennebula-flow)

ok: [testmachine2] => (item=opennebula-rubygems)

changed: [testmachine2] => (item=opennebula-fireedge)

ok: [testmachine2] => (item=gnupg)

.

TASK [frontend : Copy oned.conf to server with updated DB(host,user,pass)] ***********************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : Copy sunstone-server.conf to server configs] ************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : Add credentials to Admin] ****************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “authfile.stdout_lines”: [

        “admin:IgDeMozOups8”

    ]

}

ok: [testmachine2] => {

    “authfile.stdout_lines”: [

        “admin:Tafwaytofen2”

    ]

}

.

TASK [frontend : Set fact for authfile] **********************************************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : update permissions opennebula permissions] **************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Include apache configuration] ***************************************************************************************************************************************************************************

included: /home/brucewayne/ansible/opennebula-frontend/roles/frontend/tasks/apache.yml for testmachine1, testmachine2

.

TASK [frontend : restart systemd-timesyncd] ******************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************

changed: [testmachine1] => (item=apache2utils)

changed: [testmachine2] => (item=apache2utils)

changed: [testmachine1] => (item=apache2)

changed: [testmachine1] => (item=libapache2modproxymsrpc)

changed: [testmachine2] => (item=apache2)

changed: [testmachine2] => (item=libapache2modproxymsrpc)

changed: [testmachine1] => (item=libapache2modpassenger)

changed: [testmachine2] => (item=libapache2modpassenger)

.

TASK [frontend : copy opennebula apache ssl virtualhost config to server] ************************************************************************************************************************************************

changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/apache_confs/opennebula.conf)

changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/apache_confs/opennebula.conf)

.

TASK [frontend : copy opennebul ssl certificate to servers] **************************************************************************************************************************************************************

changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/certs/opennebula.pem)

changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/certs/opennebula.pem)

.

TASK [frontend : copy opennebula ssl private key to server] **************************************************************************************************************************************************************

changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/private/opennebula.key)

changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/private/opennebula.key)

.

TASK [frontend : Enable SSL virtual host for openebula] ******************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : enable opennebula virtualhost] **************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Restart service httpd, in all cases] ********************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Enable service httpd and ensure it is not masked] *******************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : get service facts] **************************************************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : Check to see if httpd is running] ***********************************************************************************************************************************************************************

ok: [testmachine1] => {

    “ansible_facts.services[\”apache2.service\”]”: {

        “name”: “apache2.service”,

        “source”: “systemd”,

        “state”: “running”,

        “status”: “enabled”

    }

}

ok: [testmachine2] => {

    “ansible_facts.services[\”apache2.service\”]”: {

        “name”: “apache2.service”,

        “source”: “systemd”,

        “state”: “running”,

        “status”: “enabled”

    }

}

.

TASK [frontend : start opennebula] ***************************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “openebula.state”: “started”

}

ok: [testmachine2] => {

    “openebula.state”: “started”

}

.

TASK [frontend : start opennebulagate] **********************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “gate.state”: “started”

}

ok: [testmachine2] => {

    “gate.state”: “started”

}

.

TASK [frontend : start opennebulaflow] **********************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “flow.state”: “started”

}

ok: [testmachine2] => {

    “flow.state”: “started”

}

.

TASK [frontend : start opennebulanovc] **********************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “novnc.state”: “started”

}

ok: [testmachine2] => {

    “novnc.state”: “started”

}

.

TASK [frontend : start systemdtimesyncd] ********************************************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “timesyncd.state”: “started”

}

ok: [testmachine2] => {

    “timesyncd.state”: “started”

}

.

TASK [frontend : Check if server is listed under frontend_HA] ************************************************************************************************************************************************************

skipping: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : Stopping OpenNebula on frontend_server_primary] *********************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “stop, group_names”: “({‘changed’: True, ‘stdout’: ”, ‘stderr’: ”, ‘rc’: 0, ‘cmd’: ‘systemctl stop opennebula’, ‘start’: ‘2022-04-06 03:19:42.714817’, ‘end’: ‘2022-04-06 03:19:48.841833’, ‘delta’: ‘0:00:06.127016’, ‘msg’: ”, ‘stdout_lines’: [], ‘stderr_lines’: [], ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “stop, group_names”: “({‘changed’: True, ‘stdout’: ”, ‘stderr’: ”, ‘rc’: 0, ‘cmd’: ‘systemctl stop opennebula’, ‘start’: ‘2022-04-06 03:19:42.761875’, ‘end’: ‘2022-04-06 03:21:14.632276’, ‘delta’: ‘0:01:31.870401’, ‘msg’: ”, ‘stdout_lines’: [], ‘stderr_lines’: [], ‘failed’: False}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : delete sqlfile if it exists to create a current one.] ***************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : make backup of OpenNebula database] *********************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “backup”: {

        “changed”: true,

        “cmd”: “onedb backup -u admin -p admin -d opennebula /var/lib/one/opennebula.sql”,

        “delta”: “0:00:00.406599”,

        “end”: “2022-04-06 03:21:16.346013”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:15.939414”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “MySQL dump stored in /var/lib/one/opennebula.sql\nUse ‘onedb restore’ or restore the DB using the mysql command:\nmysql -u user -h server -P port db_name < backup_file”,

        “stdout_lines”: [

            “MySQL dump stored in /var/lib/one/opennebula.sql”,

            “Use ‘onedb restore’ or restore the DB using the mysql command:”,

            “mysql -u user -h server -P port db_name < backup_file”

        ]

    }

}

ok: [testmachine2] => {

    “backup”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

.

TASK [frontend : Fetch the OpenNebula sql dumpfile from frontend_server_primary] *****************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1 -> testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fetch, group_names”: “({‘changed’: True, ‘md5sum’: ‘a54c58c27e96d29cb99a26a595263164’, ‘dest’: ‘/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/opennebula.sql’, ‘remote_md5sum’: None, ‘checksum’: ‘040e9ae687df46fc26a64f038992bd28e1d7e369’, ‘remote_checksum’: ‘040e9ae687df46fc26a64f038992bd28e1d7e369’, ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “fetch, group_names”: “({‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : Copy the ONsqldump file from master to the secondary HA nodes] *****************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “sqlcopy”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “sqlcopy”: {

        “changed”: true,

        “checksum”: “040e9ae687df46fc26a64f038992bd28e1d7e369”,

        “dest”: “/tmp/opennebula.sql”,

        “diff”: [],

        “failed”: false,

        “gid”: 0,

        “group”: “root”,

        “md5sum”: “a54c58c27e96d29cb99a26a595263164”,

        “mode”: “0644”,

        “owner”: “root”,

        “size”: 41546,

        “src”: “/home/brucewayne/.ansible/tmp/ansible-tmp-1649211677.4405959-9803-36565910128620/source”,

        “state”: “file”,

        “uid”: 0

    }

}

.

TASK [frontend : Fetch the fence_host.sh] ********************************************************************************************************************************************************************************

skipping: [testmachine2]

ok: [testmachine1 -> testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host, group_names”: “({‘changed’: False, ‘md5sum’: ‘7bb73d0d0ffce907562d75f6cd779fdc’, ‘file’: ‘/var/lib/one/remotes/hooks/ft/fence_host.sh’, ‘dest’: ‘/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/fence_host.sh’, ‘checksum’: ‘ef5e59d9a3d6d7a55d554928057bf85f5dea5f1f’, ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “fence_host, group_names”: “({‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : Copy the fence.sh to frontend_HA hosts] *****************************************************************************************************************************************************************

skipping: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “fence_host”: {

        “changed”: false,

        “checksum”: “ef5e59d9a3d6d7a55d554928057bf85f5dea5f1f”,

        “dest”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”,

        “diff”: {

            “after”: {

                “path”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”

            },

            “before”: {

                “path”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”

            }

        },

        “failed”: false,

        “gid”: 9869,

        “group”: “admin”,

        “mode”: “0750”,

        “owner”: “admin”,

        “path”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”,

        “size”: 4370,

        “state”: “file”,

        “uid”: 9869

    }

}

.

TASK [frontend : Create tar of /etc/one/] ********************************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “tar”: {

        “changed”: true,

        “cmd”: “cd /etc/one;tar -cvf /etc/one/one.tar *”,

        “delta”: “0:00:00.016645”,

        “end”: “2022-04-06 03:21:20.659494”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:20.642849”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “auth/\nauth/certificates/\nauth/x509_auth.conf\nauth/server_x509_auth.conf\nauth/ldap_auth.conf\naz_driver.conf\naz_driver.default\ncli/\ncli/onevmgroup.yaml\ncli/onevnet.yaml\ncli/oneshowback.yaml\ncli/onehook.yaml\ncli/onetemplate.yaml\ncli/onemarketapp.yaml\ncli/onesecgroup.yaml\ncli/oneacct.yaml\ncli/oneacl.yaml\ncli/onemarket.yaml\ncli/onegroup.yaml\ncli/onevm.yaml\ncli/oneflowtemplate.yaml\ncli/onevrouter.yaml\ncli/onezone.yaml\ncli/oneimage.yaml\ncli/onecluster.yaml\ncli/oneuser.yaml\ncli/onevntemplate.yaml\ncli/onevdc.yaml\ncli/onehost.yaml\ncli/onedatastore.yaml\ncli/oneflow.yaml\ndefaultrc\nec2_driver.conf\nec2_driver.default\nfireedge/\nfireedge/provision/\nfireedge/provision/providers.d/\nfireedge/provision/providers.d/vultr_virtual.yaml\nfireedge/provision/providers.d/digitalocean.yaml\nfireedge/provision/providers.d/vultr_metal.yaml\nfireedge/provision/providers.d/equinix.yaml\nfireedge/provision/providers.d/google.yaml\nfireedge/provision/providers.d/aws.yaml\nfireedge/provision/providers.d/dummy.yaml\nfireedge/provision/provision-server.conf\nfireedge/sunstone/\nfireedge/sunstone/user/\nfireedge/sunstone/user/vm-tab.yaml\nfireedge/sunstone/user/vm-template-tab.yaml\nfireedge/sunstone/sunstone-server.conf\nfireedge/sunstone/admin/\nfireedge/sunstone/admin/vm-tab.yaml\nfireedge/sunstone/admin/cluster-tab.yaml\nfireedge/sunstone/admin/vm-template-tab.yaml\nfireedge/sunstone/admin/host-tab.yaml\nfireedge/sunstone/sunstone-views.yaml\nfireedge-server.conf\nhm/\nhm/hmrc\nmonitord.conf\noned.conf\noneflow-server.conf\nonegate-server.conf\nonehem-server.conf\nsched.conf\nsunstone-logos.yaml\nsunstone-server.conf\nsunstone-views/\nsunstone-views/vcenter/\nsunstone-views/vcenter/admin.yaml\nsunstone-views/vcenter/user.yaml\nsunstone-views/vcenter/groupadmin.yaml\nsunstone-views/vcenter/cloud.yaml\nsunstone-views/mixed/\nsunstone-views/mixed/admin.yaml\nsunstone-views/mixed/user.yaml\nsunstone-views/mixed/groupadmin.yaml\nsunstone-views/mixed/cloud.yaml\nsunstone-views/kvm/\nsunstone-views/kvm/admin.yaml\nsunstone-views/kvm/user.yaml\nsunstone-views/kvm/groupadmin.yaml\nsunstone-views/kvm/cloud.yaml\nsunstone-views.yaml\ntmrc\nvcenter_driver.default\nvmm_exec/\nvmm_exec/vmm_execrc\nvmm_exec/vmm_exec_kvm.conf”,

        “stdout_lines”: [

            “auth/”,

            “auth/certificates/”,

            “auth/x509_auth.conf”,

            “auth/server_x509_auth.conf”,

            “auth/ldap_auth.conf”,

            “az_driver.conf”,

            “az_driver.default”,

            “cli/”,

            “cli/onevmgroup.yaml”,

            “cli/onevnet.yaml”,

            “cli/oneshowback.yaml”,

            “cli/onehook.yaml”,

            “cli/onetemplate.yaml”,

            “cli/onemarketapp.yaml”,

            “cli/onesecgroup.yaml”,

            “cli/oneacct.yaml”,

            “cli/oneacl.yaml”,

            “cli/onemarket.yaml”,

            “cli/onegroup.yaml”,

            “cli/onevm.yaml”,

            “cli/oneflowtemplate.yaml”,

            “cli/onevrouter.yaml”,

            “cli/onezone.yaml”,

            “cli/oneimage.yaml”,

            “cli/onecluster.yaml”,

            “cli/oneuser.yaml”,

            “cli/onevntemplate.yaml”,

            “cli/onevdc.yaml”,

            “cli/onehost.yaml”,

            “cli/onedatastore.yaml”,

            “cli/oneflow.yaml”,

            “defaultrc”,

            “ec2_driver.conf”,

            “ec2_driver.default”,

            “fireedge/”,

            “fireedge/provision/”,

            “fireedge/provision/providers.d/”,

            “fireedge/provision/providers.d/vultr_virtual.yaml”,

            “fireedge/provision/providers.d/digitalocean.yaml”,

            “fireedge/provision/providers.d/vultr_metal.yaml”,

            “fireedge/provision/providers.d/equinix.yaml”,

            “fireedge/provision/providers.d/google.yaml”,

            “fireedge/provision/providers.d/aws.yaml”,

            “fireedge/provision/providers.d/dummy.yaml”,

            “fireedge/provision/provision-server.conf”,

            “fireedge/sunstone/”,

            “fireedge/sunstone/user/”,

            “fireedge/sunstone/user/vm-tab.yaml”,

            “fireedge/sunstone/user/vm-template-tab.yaml”,

            “fireedge/sunstone/sunstone-server.conf”,

            “fireedge/sunstone/admin/”,

            “fireedge/sunstone/admin/vm-tab.yaml”,

            “fireedge/sunstone/admin/cluster-tab.yaml”,

            “fireedge/sunstone/admin/vm-template-tab.yaml”,

            “fireedge/sunstone/admin/host-tab.yaml”,

            “fireedge/sunstone/sunstone-views.yaml”,

            “fireedge-server.conf”,

            “hm/”,

            “hm/hmrc”,

            “monitord.conf”,

            “oned.conf”,

            “oneflow-server.conf”,

            “onegate-server.conf”,

            “onehem-server.conf”,

            “sched.conf”,

            “sunstone-logos.yaml”,

            “sunstone-server.conf”,

            “sunstone-views/”,

            “sunstone-views/vcenter/”,

            “sunstone-views/vcenter/admin.yaml”,

            “sunstone-views/vcenter/user.yaml”,

            “sunstone-views/vcenter/groupadmin.yaml”,

            “sunstone-views/vcenter/cloud.yaml”,

            “sunstone-views/mixed/”,

            “sunstone-views/mixed/admin.yaml”,

            “sunstone-views/mixed/user.yaml”,

            “sunstone-views/mixed/groupadmin.yaml”,

            “sunstone-views/mixed/cloud.yaml”,

            “sunstone-views/kvm/”,

            “sunstone-views/kvm/admin.yaml”,

            “sunstone-views/kvm/user.yaml”,

            “sunstone-views/kvm/groupadmin.yaml”,

            “sunstone-views/kvm/cloud.yaml”,

            “sunstone-views.yaml”,

            “tmrc”,

            “vcenter_driver.default”,

            “vmm_exec/”,

            “vmm_exec/vmm_execrc”,

            “vmm_exec/vmm_exec_kvm.conf”

        ]

    }

}

ok: [testmachine2] => {

    “tar”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

.

TASK [frontend : Fetch the one.tar] **************************************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1 -> testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host, group_names”: “({‘changed’: True, ‘md5sum’: ‘acec4258dbbf2bde83d12f3eb29824a7’, ‘dest’: ‘/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/one.tar’, ‘remote_md5sum’: None, ‘checksum’: ‘2da21a3124f4eb5a78c0126e9791c8d8c9c5c770’, ‘remote_checksum’: ‘2da21a3124f4eb5a78c0126e9791c8d8c9c5c770’, ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “fence_host, group_names”: “({‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : Copy the one.tar to frontend_HA hosts] ******************************************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “fence_host”: {

        “changed”: true,

        “checksum”: “2da21a3124f4eb5a78c0126e9791c8d8c9c5c770”,

        “dest”: “/etc/one/one.tar”,

        “diff”: [],

        “failed”: false,

        “gid”: 0,

        “group”: “root”,

        “md5sum”: “acec4258dbbf2bde83d12f3eb29824a7”,

        “mode”: “0644”,

        “owner”: “root”,

        “size”: 542720,

        “src”: “/home/brucewayne/.ansible/tmp/ansible-tmp-1649211681.6244745-9943-99432484341658/source”,

        “state”: “file”,

        “uid”: 0

    }

}

.

TASK [frontend : untar one.tar in /etc/one on the frontend_HA hosts] *****************************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “untar”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “untar”: {

        “changed”: true,

        “cmd”: “cd /etc/one;tar -xvf /etc/one/one.tar”,

        “delta”: “0:00:00.018409”,

        “end”: “2022-04-06 03:21:23.162427”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:23.144018”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “auth/\nauth/certificates/\nauth/x509_auth.conf\nauth/server_x509_auth.conf\nauth/ldap_auth.conf\naz_driver.conf\naz_driver.default\ncli/\ncli/onevmgroup.yaml\ncli/onevnet.yaml\ncli/oneshowback.yaml\ncli/onehook.yaml\ncli/onetemplate.yaml\ncli/onemarketapp.yaml\ncli/onesecgroup.yaml\ncli/oneacct.yaml\ncli/oneacl.yaml\ncli/onemarket.yaml\ncli/onegroup.yaml\ncli/onevm.yaml\ncli/oneflowtemplate.yaml\ncli/onevrouter.yaml\ncli/onezone.yaml\ncli/oneimage.yaml\ncli/onecluster.yaml\ncli/oneuser.yaml\ncli/onevntemplate.yaml\ncli/onevdc.yaml\ncli/onehost.yaml\ncli/onedatastore.yaml\ncli/oneflow.yaml\ndefaultrc\nec2_driver.conf\nec2_driver.default\nfireedge/\nfireedge/provision/\nfireedge/provision/providers.d/\nfireedge/provision/providers.d/vultr_virtual.yaml\nfireedge/provision/providers.d/digitalocean.yaml\nfireedge/provision/providers.d/vultr_metal.yaml\nfireedge/provision/providers.d/equinix.yaml\nfireedge/provision/providers.d/google.yaml\nfireedge/provision/providers.d/aws.yaml\nfireedge/provision/providers.d/dummy.yaml\nfireedge/provision/provision-server.conf\nfireedge/sunstone/\nfireedge/sunstone/user/\nfireedge/sunstone/user/vm-tab.yaml\nfireedge/sunstone/user/vm-template-tab.yaml\nfireedge/sunstone/sunstone-server.conf\nfireedge/sunstone/admin/\nfireedge/sunstone/admin/vm-tab.yaml\nfireedge/sunstone/admin/cluster-tab.yaml\nfireedge/sunstone/admin/vm-template-tab.yaml\nfireedge/sunstone/admin/host-tab.yaml\nfireedge/sunstone/sunstone-views.yaml\nfireedge-server.conf\nhm/\nhm/hmrc\nmonitord.conf\noned.conf\noneflow-server.conf\nonegate-server.conf\nonehem-server.conf\nsched.conf\nsunstone-logos.yaml\nsunstone-server.conf\nsunstone-views/\nsunstone-views/vcenter/\nsunstone-views/vcenter/admin.yaml\nsunstone-views/vcenter/user.yaml\nsunstone-views/vcenter/groupadmin.yaml\nsunstone-views/vcenter/cloud.yaml\nsunstone-views/mixed/\nsunstone-views/mixed/admin.yaml\nsunstone-views/mixed/user.yaml\nsunstone-views/mixed/groupadmin.yaml\nsunstone-views/mixed/cloud.yaml\nsunstone-views/kvm/\nsunstone-views/kvm/admin.yaml\nsunstone-views/kvm/user.yaml\nsunstone-views/kvm/groupadmin.yaml\nsunstone-views/kvm/cloud.yaml\nsunstone-views.yaml\ntmrc\nvcenter_driver.default\nvmm_exec/\nvmm_exec/vmm_execrc\nvmm_exec/vmm_exec_kvm.conf”,

        “stdout_lines”: [

            “auth/”,

            “auth/certificates/”,

            “auth/x509_auth.conf”,

            “auth/server_x509_auth.conf”,

            “auth/ldap_auth.conf”,

            “az_driver.conf”,

            “az_driver.default”,

            “cli/”,

            “cli/onevmgroup.yaml”,

            “cli/onevnet.yaml”,

            “cli/oneshowback.yaml”,

            “cli/onehook.yaml”,

            “cli/onetemplate.yaml”,

            “cli/onemarketapp.yaml”,

            “cli/onesecgroup.yaml”,

            “cli/oneacct.yaml”,

            “cli/oneacl.yaml”,

            “cli/onemarket.yaml”,

            “cli/onegroup.yaml”,

            “cli/onevm.yaml”,

            “cli/oneflowtemplate.yaml”,

            “cli/onevrouter.yaml”,

            “cli/onezone.yaml”,

            “cli/oneimage.yaml”,

            “cli/onecluster.yaml”,

            “cli/oneuser.yaml”,

            “cli/onevntemplate.yaml”,

            “cli/onevdc.yaml”,

            “cli/onehost.yaml”,

            “cli/onedatastore.yaml”,

            “cli/oneflow.yaml”,

            “defaultrc”,

            “ec2_driver.conf”,

            “ec2_driver.default”,

            “fireedge/”,

            “fireedge/provision/”,

            “fireedge/provision/providers.d/”,

            “fireedge/provision/providers.d/vultr_virtual.yaml”,

            “fireedge/provision/providers.d/digitalocean.yaml”,

            “fireedge/provision/providers.d/vultr_metal.yaml”,

            “fireedge/provision/providers.d/equinix.yaml”,

            “fireedge/provision/providers.d/google.yaml”,

            “fireedge/provision/providers.d/aws.yaml”,

            “fireedge/provision/providers.d/dummy.yaml”,

            “fireedge/provision/provision-server.conf”,

            “fireedge/sunstone/”,

            “fireedge/sunstone/user/”,

            “fireedge/sunstone/user/vm-tab.yaml”,

            “fireedge/sunstone/user/vm-template-tab.yaml”,

            “fireedge/sunstone/sunstone-server.conf”,

            “fireedge/sunstone/admin/”,

            “fireedge/sunstone/admin/vm-tab.yaml”,

            “fireedge/sunstone/admin/cluster-tab.yaml”,

            “fireedge/sunstone/admin/vm-template-tab.yaml”,

            “fireedge/sunstone/admin/host-tab.yaml”,

            “fireedge/sunstone/sunstone-views.yaml”,

            “fireedge-server.conf”,

            “hm/”,

            “hm/hmrc”,

            “monitord.conf”,

            “oned.conf”,

            “oneflow-server.conf”,

            “onegate-server.conf”,

            “onehem-server.conf”,

            “sched.conf”,

            “sunstone-logos.yaml”,

            “sunstone-server.conf”,

            “sunstone-views/”,

            “sunstone-views/vcenter/”,

            “sunstone-views/vcenter/admin.yaml”,

            “sunstone-views/vcenter/user.yaml”,

            “sunstone-views/vcenter/groupadmin.yaml”,

            “sunstone-views/vcenter/cloud.yaml”,

            “sunstone-views/mixed/”,

            “sunstone-views/mixed/admin.yaml”,

            “sunstone-views/mixed/user.yaml”,

            “sunstone-views/mixed/groupadmin.yaml”,

            “sunstone-views/mixed/cloud.yaml”,

            “sunstone-views/kvm/”,

            “sunstone-views/kvm/admin.yaml”,

            “sunstone-views/kvm/user.yaml”,

            “sunstone-views/kvm/groupadmin.yaml”,

            “sunstone-views/kvm/cloud.yaml”,

            “sunstone-views.yaml”,

            “tmrc”,

            “vcenter_driver.default”,

            “vmm_exec/”,

            “vmm_exec/vmm_execrc”,

            “vmm_exec/vmm_exec_kvm.conf”

        ]

    }

}

.

TASK [frontend : updates the rafthook and federation configurations for fronteend_HA secondary servers] ******************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : start OpenNebula] ***************************************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “group_names”: [

        “apache_servers”,

        “frontend_server_primary”,

        “mysql_servers”

    ]

}

ok: [testmachine2] => {

    “group_names”: [

        “apache_servers”,

        “frontend_HA”,

        “mysql_servers”

    ]

}

.

TASK [frontend : finding frontend_HA list] *******************************************************************************************************************************************************************************

skipping: [testmachine1] => (item=apache_servers)

skipping: [testmachine1] => (item=frontend_server_primary)

skipping: [testmachine1] => (item=mysql_servers)

skipping: [testmachine2] => (item=apache_servers)

ok: [testmachine2] => (item=frontend_HA)

skipping: [testmachine2] => (item=mysql_servers)

.

TASK [frontend : Add Secondary Node frontends to the zone] ***************************************************************************************************************************************************************

skipping: [testmachine2] => (item=testmachine2)

changed: [testmachine1] => (item=testmachine2)

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “addzone, group_names”: “({‘results’: [{‘changed’: True, ‘stdout’: ”, ‘stderr’: ”, ‘rc’: 0, ‘cmd’: ‘onezone server-add 0 –name testmachine2 –rpc http://192.168.86.65:2633/RPC2’, ‘start’: ‘2022-04-06 03:21:33.920788’, ‘end’: ‘2022-04-06 03:21:34.174098’, ‘delta’: ‘0:00:00.253310’, ‘msg’: ”, ‘invocation’: {‘module_args’: {‘_raw_params’: ‘onezone server-add 0 –name testmachine2 –rpc http://192.168.86.65:2633/RPC2’, ‘_uses_shell’: True, ‘warn’: False, ‘stdin_add_newline’: True, ‘strip_empty_ends’: True, ‘argv’: None, ‘chdir’: None, ‘executable’: None, ‘creates’: None, ‘removes’: None, ‘stdin’: None}}, ‘stdout_lines’: [], ‘stderr_lines’: [], ‘failed’: False, ‘item’: ‘testmachine2’, ‘ansible_loop_var’: ‘item’}], ‘skipped’: False, ‘changed’: True, ‘msg’: ‘All items completed’}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “addzone, group_names”: “({‘results’: [{‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’, ‘item’: ‘testmachine2’, ‘ansible_loop_var’: ‘item’}], ‘skipped’: True, ‘msg’: ‘All items skipped’, ‘changed’: False}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : Restore database to secondary nodes] ********************************************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “restoredb”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “restoredb”: {

        “changed”: true,

        “cmd”: “onedb restore -f -S localhost -u admin -p admin -d opennebula /tmp/opennebula.sql”,

        “delta”: “0:00:00.988908”,

        “end”: “2022-04-06 03:21:35.749776”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:34.760868”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “MySQL DB opennebula at localhost restored.”,

        “stdout_lines”: [

            “MySQL DB opennebula at localhost restored.”

        ]

    }

}

.

PLAY RECAP ***************************************************************************************************************************************************************************************************************

testmachine1               : ok=70   changed=38   unreachable=0    failed=0    skipped=8    rescued=0    ignored=0   

testmachine2               : ok=71   changed=37   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   

.

.

How to pass an API key with Ansible

https://chronosphere.io/ – Third Party Cloud Monitoring Solution

Chronocollector: – https://github.com/Perfect10NickTailor/chronocollector

This role deploys the chronocollector management service, which sends data to domain.chronosphere.io. For those of you who don’t know what it is: it’s basically a cloud monitoring tool that scrapes data from your instances; you can then build dashboards or even export the data to Prometheus to make it look pretty and easy to read. You will likely pay for a subscription, and they will give you a subdomain which becomes your gateway address (domain.chronosphere.io).

Special note: You then need to deploy node_exporter to the hosts you want scraped. That is a separate playbook and stupid easy.

This role will download the latest collector
It will install the latest collector
It will check whether the service has been added to systemd; if not, it adds it. If the service is already there, it moves on and simply starts the service (an illustrative unit file is sketched below).
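For context, a collector unit file usually looks something like the sketch below. This is only an illustration using the install paths mentioned in this post (/usr/local/bin and /etc/chronocollector); the role’s own systemd template is the authoritative version and its exact ExecStart flags may differ.

# Illustrative /etc/systemd/system/collector.service (assumed layout, not the role's actual template)
[Unit]
Description=Chronosphere collector
After=network-online.target

[Service]
# Path matches where this role installs the binary; any config flags depend on the role's template
ExecStart=/usr/local/bin/chronocollector
Restart=on-failure

[Install]
WantedBy=multi-user.target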

#nowthatsjustfunny: So it’s debatable how to pass {{ api_key }} in a scalable and secure way. A lot of people create an “ansible vault encrypted variable”, so that when they push their code to their git repos the {{ api_key }} isn’t exposed to someone simply glancing at the code. The issue with this approach is that now you have to remember a vault password to pass to ansible, so it can decrypt the {{ api_key }}, in order for the playbook to work when you run it. (LAME)
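For comparison, that vault approach usually looks something like the sketch below: you encrypt the value once with ansible-vault, paste the output into group_vars, and then every run needs the vault password. (The encrypted blob shown here is just a placeholder.)

# Encrypt the key once (you will be prompted for a vault password)
ansible-vault encrypt_string 'my-real-api-key' --name 'api_key'

# Paste the output into group_vars, e.g. group_vars/all.yml:
api_key: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          61626364...   # placeholder for the encrypted blob

# And every playbook run now needs the vault password:
ansible-playbook chronocollector.yml --ask-vault-pass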

.

#nowthatsjustcool: So, just for the purposes of this post and for fun, I wrote it so that you can simply pass the {{ api_key }} at runtime. This way, instead of being prompted for the vault-pass, you are prompted for the api_key to pass as a variable when you run the book. This gets rid of the need to set up an encrypted variable in your code entirely. Everyone has their own way of doing things, but I tend to think outside the box, so it’s always more fun to be different in how you think.

.

Ansible Operational Documentation

How to use this role:

1.You must first download the git repository
a.git clone git@github.com:Perfect10NickTailor/chronocollector.git
2.Under your user you will see a directory called chronocollector. cd into this directory; here you will see defaults/main.yml, which shows what you can pass via group_vars & host_vars
b.cd chronocollector

.

3.Next you want to edit the hosts.client file inside your ansible/inventory/dev/ directory

Example file: hosts.dev or hosts.staging

c.Put your server under the appropriate group inside the file and save (a rough example hosts file is sketched below)
i.Testmachine1 ansible_host=192.168.60.10
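As a rough illustration only (the group name and second host here are hypothetical; use whatever groups your playbooks expect), a hosts.dev file might look like:

# inventory/dev/hosts.dev  (illustrative)
[collector_hosts]
testmachine1 ansible_host=192.168.60.10
testmachine3 ansible_host=192.168.60.12   # hypothetical second host

# ungrouped hosts can simply be listed at the top; --limit will still pick them up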

Running your playbook:

1.You must run your playbook from inside the parent ansible directory

.

2.Now there is a playbook called chronocollector.yml in the ansible directory which simply calls the chronocollector role inside the ansible/roles/chronocollector directory, where the role should be living.

Example: of ansible/chronocollector.yml

- hosts: all
  gather_facts: no
  vars_prompt:
    - name: api_key
      prompt: Enter the API key
  roles:
    - role: chronocollector

.

Command:

ansible-playbook -i inventory/dev/hosts.dev chronocollector.yml -u nickadmin -Kkb --ask-become --limit='testmachine3'

-i : This flag tells ansible-playbook which hosts file to use; these are always defined per customer, like hosts.dev or hosts.staging
-u : this is the ssh_user you will be connecting to the servers with
-Kkb : this tells ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
--ask-become : is saying become root
--limit='server' : this allows you to segment which server you want to run the playbook against.

.

Successful run:

.

Notice: It asks you for the API key at runtime.

ntailor@jumphost:~/ansible2$ ansible-playbook -i ansible/inventory/dev/hosts.dev chronocollector.yml -u nicktadmin -Kkb --ask-become --limit='testmachine3'

SSH password:

BECOME password[defaults to SSH password]:

Enter the API key:

.

PLAY [all] ***************************************************************************************************************************************************************************************************************

.

TASK [chronocollector : download node collector] *************************************************************************************************************************************************************************

ok: [testmachine3]

.

TASK [chronocollector : move collector to /usr/local/bin] ****************************************************************************************************************************************************************

ok: [testmachine3]

.

TASK [chronocollector : mkdir directory /etc/chronocollector] ************************************************************************************************************************************************************

ok: [testmachine3]

.

TASK [chronocollector : Copy default config.yml to /etc/chronocollector/] ************************************************************************************************************************************************

ok: [testmachine3]

.

TASK [chronocollector : Touch again the same file, but do not change times this makes the task idempotent] ***************************************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : Ensure API key is present in config file] ********************************************************************************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : Change file ownership, group and permissions apitoken file to secure it from prying eyes other than root] ****************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : Check that the service file /etc/systemd/system/collector.service exists] ************************************************************************************************************************

ok: [testmachine3]

.

TASK [chronocollector : Include add systemd task if service file does not exist] *****************************************************************************************************************************************

included: ansible/roles/chronocollector/tasks/systemd.yml for testmachine3

.

TASK [chronocollector : Create startup file for collector in systemd] ****************************************************************************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : Create systemd collector.service] ****************************************************************************************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : check whether custom line exists] ****************************************************************************************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : Start Collector Service via systemd] *************************************************************************************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : Show status of collector from systemd] ***********************************************************************************************************************************************************

changed: [testmachine3]

.

TASK [chronocollector : debug] *******************************************************************************************************************************************************************************************

ok: [testmachine3] => {

“status.stdout”: ” Active: failed (Result: exit-code) since Thu 2022-05-19 10:31:49 BST; 315ms ago”

}

.

PLAY RECAP ***************************************************************************************************************************************************************************************************************

testmachine3 : ok=15 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

How to deploy Netplan with Ansible

Ansible-Netplan: – https://github.com/Perfect10NickTailor/ansible-netplan

This role will push out the config to the designated host and apply it
It will make a backup of the previous config before applying the new one; this is just in case your config change had a YAML error and you need to quickly go in and revert
There is a defaults/main.yml file that lists all the flags and how to use them (an illustrative subset is sketched below)
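As a rough sketch only (these are the two variables used in the examples later in this post; check the role’s defaults/main.yml for the authoritative list and any extra flags):

# defaults/main.yml (illustrative subset, not the full file)
netplan_enabled: true        # whether the role should manage/apply netplan on this host
netplan_configuration: {}    # the full netplan YAML tree, normally overridden per host in host_vars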

.

Netplan.io – what is it? Basically YAML files, from Ubuntu, for deploying network configurations in a scalable manner

How to use this role:

1.You must first download the git repository into your roles directory, usually ansible/roles/

.

2.Now you want to edit the hosts.client file, or create it if it doesn’t exist, under your “ansible/inventory/dev:staging:prod” directory. This is a good way to separate environments with ansible; inside each environment you should have a hosts file like the one indicated below.

Example file: hosts.dev, hosts.staging, hosts.prod

Put your server under the appropriate group inside the file and save
i.Testmachine1 ansible_host=192.168.90.10

Note: If there is no group, simply list the server outside any grouping; the --limit flag will pick it up.

3.Now inside this directory you should see hosts, host_vars & group_vars

Descriptions:

Hosts – is where you list your servers under specific groups, which tell the playbook what the server is, whether a specific task should be run on it, and how to find it
Host_vars – Inside this directory you list each server by the name it has under hosts. Inside these files you pass variable parameters to the specific roles when running your playbook. Without these the playbook can’t do the tasks you want it to.
Group_vars – A way to group variables for sets of servers; this keeps the code cleaner and easier to manage.

Operational Use:

4.Move inside host_var
cd host_var
create a file called {{ servername }} and save it; for us it’s testmachine1

Okay, now here is where VSC is handy. You want to connect your Visual Studio Code to the management server under your user. I have provided a link below which shows you how to set up your keys and get VSC working with it.

.

Note: You don’t have to use VSC; you can use good old nano or vim, but it’s a pain. Up to you.

https://medium.com/@sujaypillai/connect-to-your-remote-servers-from-visual-studio-code-eb5a5875e348

.

.

5.Now netplans can be simple or very complicated. Ansible-netplan is broken up into segments that look for these variables to pass:
network, vlans, ethernets, bridges & bonds

.

6.Now my advice is not to copy the block from this document; instead, download the repo, open it in Visual Studio Code, and copy it from there.

.

Example files:

ansible/inventory/dev/host_var$ testmachine1 (with Bonding)

 

.

Example Yaml Block :

# testmachine1 netplan config

# This is the network for testmachine1 with network bonding

netplan_configuration:

    network:

      bonds:

        bond0:

          interfaces:

          - ens1f0

          - ens1f1

          parameters:

            mode: balance-rr

      ethernets:

        eno1:

          dhcp4: false

        eno2:

          dhcp4: false

        ens1f0: {}

        ens1f1: {}

      version: 2

.

      vlans:

        vlan.180:

          id: 180

          link: bond0

        #  dhcp4: false

        #  dhcp6: false

        vlan.3200:

          id: 3200

          link: bond0

        #  dhcp4: false

        #  dhcp6: false

        vlan.3300:

          id: 3300

          link: bond0

        #  dhcp4: false

        #  dhcp6: false

.

      bridges:

        br200:

          interfaces: [ vlan.200 ]

          addresses: [ 192.168.50.9/24 ]

          gateway4: 192.168.50.1

          nameservers:

                  addresses: [ 8.8.8.8,8.8.4.8 ]

                  search: [ nicktailor.com ]        

          dhcp4: false

          dhcp6: false

        br3000:

          interfaces: [ vlan.3000 ]

          dhcp4: false

          dhcp6: false

        br3200:

          interfaces: [ vlan.3200 ]

          dhcp4: false

          dhcp6: false

.

Example files:
ansible/inventory/dev/host_var$ testmachine1 (without Bonding)

.

Example Yaml Block :

#testmachine1

netplan_configuration:

    network:

      version: 2

      renderer: networkd

      ethernets:

        eno1:

          dhcp4: false

          dhcp6: false

        eno2:

          dhcp4: false

          dhcp6: false

.

      bridges:

        br0:

          interfaces: [ eno1 ]

          dhcp4: false

          dhcp6: false

        br1:

          interfaces: [ eno2 ]

          dhcp4: false

          dhcp6: false

        br1110:

          interfaces: [ vlan1110 ]

          dhcp4: false

          dhcp6: false

          addresses: [ 172.16.52.10/26 ]

          gateway4: 172.17.52.1

          nameservers:

                  addresses: [ 8.8.8.8,8.8.4.8 ]

.

        br600:

          interfaces: [ vlan600 ]

          dhcp4: false

          dhcp6: false

          addresses: [ 192.168.0.34/24 ]

        br800:

          interfaces: [ vlan800 ]

          dhcp4: false

          dhcp6: false

        br802:

          interfaces: [ vlan802 ]

          dhcp4: false

          dhcp6: false

        br801:

          interfaces: [ vlan801 ]

          dhcp4: false

          dhcp6: false

.

      vlans:

        vlan600:

          id: 600

          link: br0

          dhcp4: false

          dhcp6: false

        vlan800:

          id: 800

          link: br1

          dhcp4: false

          dhcp6: false

        vlan801:

          id: 801

          link: br1

          dhcp4: false

          dhcp6: false          

        vlan802:

          id: 802

          link: br1

          dhcp4: false

          dhcp6: false  

          

.

.

8.You must now edit the appropriate lines and save the file
vlans, ethernets, bonds, addresses, & bridges

.

9.Once saved, you want to run the playbook against a test server before you push the code into the git repository. So it’s good to have a test VM to run your code against first.

.

Running your playbook:

1.You must run your playbook from inside the parent directory, always “ansible”
2.Now create a playbook called deploynetplan.yml in the ansible directory which simply calls the ansible-netplan role inside the roles directory.

Example: of ansible/deploynetplan.yml

- hosts: all
  gather_facts: yes
  any_errors_fatal: true
  roles:
    - role: ansible-netplan
      netplan_enabled: true
.

Command:

ansible-playbook -i inventory/dev/hosts deploynetplan.yml -u nickadmin -Kkb --ask-become --limit='testmachine1'

-i : This flag tells ansible-playbook which hosts file to use; these are always defined per environment, like hosts.dev or hosts.staging
-u : this is the ssh_user you will be connecting to the servers with
-Kkb : this tells ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
--ask-become : is saying become root
--limit='server' : this allows you to segment which server you want to run the playbook against.

.

Successful example run with bonding:

.

ntailor@KVMtestbox:~/ansible$ ansible-playbook -i inventory/dev/hosts deploynetplan.yml -u nickadmin -Kkb --ask-become --limit='testmachine1'

SSH password:

BECOME password[defaults to SSH password]:

.

PLAY [all] *********************************************************************************************************************************************************************************************

.

TASK [Gathering Facts] *********************************************************************************************************************************************************************************

ok: [testmachine1]

.

TASK [ansiblenetplan : Install netplan] ***************************************************************************************************************************************************************

ok: [testmachine1]

.

TASK [ansiblenetplan : Backup exitsing configurations before removing live ones] **********************************************************************************************************************

changed: [testmachine1]

.

TASK [ansiblenetplan : copy 00install* netplan existing file to /etc/netplan/backups] ****************************************************************************************************************

changed: [testmachine1]

.

TASK [ansiblenetplan : keep only 7 days of backups of previous network config /etc/netplan/backups] ***************************************************************************************************

changed: [testmachine1]

.

TASK [ansiblenetplan : Capturing Existing Configurations] *********************************************************************************************************************************************

skipping: [testmachine1]

.

TASK [ansiblenetplan : debug] *************************************************************************************************************************************************************************

skipping: [testmachine1]

.

TASK [ansiblenetplan : Removing Existing Configurations] **********************************************************************************************************************************************

skipping: [testmachine1]

.

TASK [ansiblenetplan : Configuring Netplan] ***********************************************************************************************************************************************************

ok: [testmachine1]

.

TASK [ansiblenetplan : netplan apply] *****************************************************************************************************************************************************************

changed: [testmachine1]

.

TASK [ansiblenetplan : debug] *************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “netplanapply”: {

        “changed”: true,

        “cmd”: “netplan apply”,

        “delta”: “0:00:00.601112”,

        “end”: “2022-01-31 16:43:45.295708”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-01-31 16:43:44.694596”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “”,

        “stdout_lines”: []

    }

}

.

TASK [ansiblenetplan : Show vlans that are up or down] ************************************************************************************************************************************************

changed: [testmachine1]

.

TASK [ansiblenetplan : debug] *************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “vlan.stdout_lines”: [

        “14: vlan.180@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000”,

        “15: vlan.3300@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000”

    ]

}

.

TASK [ansiblenetplan : show bridge details] ***********************************************************************************************************************************************************

changed: [testmachine1]

.

TASK [ansiblenetplan : debug] *************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “bridges.stdout_lines”: [

        “bridge name\tbridge id\t\tSTP enabled\tinterfaces”,

        “br180\t\t8000.000000000000\tyes\t\t”,

        “br3200\t\t8000.000000000000\tyes\t\t”,

        “br3300\t\t8000.000000000000\tyes\t\t”

    ]

}

.

PLAY RECAP *********************************************************************************************************************************************************************************************

testmachine1               : ok=12   changed=6    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0   

.

.

.

Push your inventory/dev/host_var/testmachine1 code to Git :

 

Once you have successfully checked that your deploy worked, by logging on to the client host and confirming everything looks good, you now want to push your code to the git repo. Since you were able to clone the repo, you should be able to push to it.

.

Git Add Commands.

1.git add . (will add every file you changed)
2.git add filename (will only add the file you want)

.

Git Commit Commands

1.git commit
a.This will take you to a message screen. Just type a note of what you did and save the file
2.git push
b.This will push your changes (a full example sequence is shown below)
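Putting it together, a typical push of the testmachine1 host_var change looks like this (the commit message is just an example):

# from inside your parent ansible directory
git add inventory/dev/host_var/testmachine1
git commit -m "Add netplan config for testmachine1"
git push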

.

.

How to call a json rest API using Ansible        

So a very useful thing to understand is REST APIs and how to call them, as a lot of organisations have these and want to integrate them into automation; a popular method is HTTP.

They are very simple calls: { GET, POST, PUT, DELETE, PATCH }

For the sake of this post. Im going to use commvault public api’s https://api.commvault.com/

You will need to two things.

  1. The API endpoint, which is usually an HTTP URL.

Example:

  1. http://WebConsoleHostName/webconsole/api/CommServ/Failover

  2. The raw JSON body of the API call.

Example:

{

    "csFailoverConfigInfo": {

        "configStatus": 0,

        "isAutomaticFailoverEnabled": false

    }

}

Now keep in mind, if you are using an API that requires a login, you will need to store the auth token and pass it to the final task so the API call works as intended. You can look at one of my other posts under VMware, where I used an HTTP login to handle the later tasks, as a reference.

You can call these preliminary tasks as includes to store the token.

It will look something like this before it gets to the API task. You could also just do it all in one playbook if you wanted to, but for the purposes of this post I'm just giving you the high level.

- name: Login task

  include_role:

    name: commvault_login

    tasks_from: login.yml

- name: Set fact for authtoken

  set_fact:

    authtoken: "{{ login_authtoken }}"

  delegate_to: localhost
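As a reference point, here is a minimal sketch of what a login.yml inside the commvault_login role could look like. This is assumption-heavy: it assumes the Commvault login endpoint is /webconsole/api/Login, that the response JSON carries the token in a field called token, and that commvault_user / commvault_password_base64 are variables you define yourself, so check the Commvault API docs before using it as-is.

- name: Login to Commvault and register the response
  uri:
    url: "http://{{ commvault_primary }}/webconsole/api/Login"
    method: POST
    body_format: json
    body:
      username: "{{ commvault_user }}"
      password: "{{ commvault_password_base64 }}"
    return_content: true
    headers:
      Accept: application/json
      Content-Type: application/json
    status_code: 200
    validate_certs: false
  register: login_response
  delegate_to: localhost

- name: Store the auth token as a fact for later tasks
  set_fact:
    login_authtoken: "{{ login_response.json.token }}"
  delegate_to: localhost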

Now, in order to pass the JSON body to Ansible, you will need to convert the raw JSON into YAML format. You can use Visual Studio Code plugins or a site like https://json2yaml.com/

So if we are to use the above raw json example it would look like this

csFailoverConfigInfo:

  configStatus: 0

  isAutomaticFailoverEnabled: false

So now we want to pass this information to the task in the form of a variable. A really cool thing with Ansible for this type of action is that you can create a variable name and simply nest the newly converted YAML body right below it. You can pass this as extra-vars or create a group variable with the same name and use that.

For those of you who use Tower, passing them as extra-vars to test something can be a pain, since it doesn't let you change the passed vars and rerun the previous run; you have to start all over. So I prefer the command line way, as it's easier to be agile.

disable_api_body:
  csFailoverConfigInfo:
    configStatus: 0
    isAutomaticFailoverEnabled: false
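If you go the command line extra-vars route, you can drop the block above into a small vars file and pass it with -e. The file and playbook names here are just placeholders for illustration:

ansible-playbook -i inventory/hosts disable_livesync.yml -e @disable_api_body.yml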

So now we want Ansible to call the REST API. You create a task that runs after the login has happened and the token is stored as a fact. In our case this call will be a POST: it posts the headers and body to the URL, which disables Commvault LiveSync, which is essentially Commvault's failover redundancy for the backup server itself.

- name: Disable Commvault livesync 

  uri:

    url: http://{{ commvault_primary }}/webconsole/api/v2/CommServ/Failover

    method: POST

    body_format: json

    body: "{{ disable_api_body }}"

    return_content: true

    headers:

      Accept: application/json

      Content-Type: application/json

      Authtoken: "{{ login_authtoken }}"

    status_code: 200

    validate_certs: false

  register: disable_livesync

  retries: "4"

  delay: "10"

  delegate_to: localhost

- debug: 
    var: disable_livesync
     

When you run the playbook and you have an active failover set up correctly with Commvault, you should see LiveSync in the Command Center under the control panel. If you click on it you should see whether it is checked or unchecked.

How to Deploy LVM’s with Ansible

Provisioning-LVM-Filesystems:

This role is designed to use the ansible-merge-vars module: an Ansible plugin that merges all variables in context with a certain suffix (lists or dicts only) and creates a new variable that contains the result of the merge. This is an Ansible action plugin, which is basically an Ansible module that runs on the machine running Ansible rather than on the host that Ansible is provisioning.
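For example, a rough sketch of how the suffix matching works (as I understand the plugin, any variable whose name ends with the suffix gets merged; the file and variable names below are just illustrative):

# group_vars/all (hypothetical)
base_vgs__to_merge:
  - vg: vg_os_data
    pvs: /dev/sdd

# host_vars/testserver1 (hypothetical)
app_vgs__to_merge:
  - vg: vg_vmguest
    pvs: /dev/sdb

After the "Merge VG variables" task further down, merged_vgs contains both entries.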

Benefits: Configuring disks into LVM

 Adds physical disks to a volume group
 Creates logical volumes assigned to the volume group
 Sizes the logical volume to the full disk or whatever you wish to set
 Formats the LVM with whatever filesystem you wish
 Creates the mount point and mounts the LVM
 Adds the LVM to fstab upon mounting

Note: This post assumes you already have Ansible installed and running.

Install ansible-merge-vars module:

1.       root@KVM-test-box:~# pip install ansible_merge_vars
          Requirement already satisfied: ansible_merge_vars in
          /usr/local/lib/python3.8/dist-packages (5.0.0)

2.Create an action_plugins directory in the directory in which you run Ansible.

By default, Ansible will look for action plugins in an action_plugins folder adjacent to the running playbook. See the Ansible documentation for more information, or to change the location where Ansible looks for action plugins.

 

3.Create a file called merge_vars.py (or whatever name you picked) in the action_plugins directory, with one line:

  from ansible_merge_vars import ActionModule

.

4.Save the file

.

Role Setup:

Once the plugin has been set up, you will want to set up a role.

1.Move inside the ansible/roles directory
a.cd ansible/roles/
2.Create a directory for the role provision-fs
b.mkdir -p provision-fs/tasks
3.Move inside the tasks directory
c.cd ansible/roles/provision-fs/tasks

Now we will create tasks that merge the variables associated with each list and then itemise the lists, using the variables we pass in via inventory/host_var or group_var to provision the filesystems.

.

4.Create a file called main.yml inside provision-fs/tasks/main.yml

.

- name: Merge VG variables
  merge_vars:
    suffix_to_merge: vgs__to_merge
    merged_var_name: merged_vgs
    expected_type: 'list'

- name: Merge LV variables
  merge_vars:
    suffix_to_merge: lvs__to_merge
    merged_var_name: merged_lvs
    expected_type: 'list'

- name: Merge FS variables
  merge_vars:
    suffix_to_merge: fs__to_merge
    merged_var_name: merged_fs
    expected_type: 'list'

- name: Merge MOUNT variables
  merge_vars:
    suffix_to_merge: mnt__to_merge
    merged_var_name: merged_mnt
    expected_type: 'list'

- name: Create VGs
  lvg:
    vg: "{{ item.vg }}"
    pvs: "{{ item.pvs }}"
  with_items: "{{ merged_vgs }}"

- name: Create LVs
  lvol:
    vg: "{{ item.vg }}"
    lv: "{{ item.lv }}"
    size: "{{ item.size }}"
    pvs: "{{ item.pvs | default(omit) }}"
    shrink: no
  with_items: "{{ merged_lvs }}"

- name: Create FSs
  filesystem:
    dev: "{{ item.dev }}"
    fstype: "{{ item.fstype }}"
  with_items: "{{ merged_fs }}"

- name: Mount FSs
  mount:
    path: "{{ item.path }}"
    src: "{{ item.src }}"
    state: mounted
    fstype: "{{ item.fstype }}"
    opts: "{{ item.opts | default('defaults') }}"
    dump: "{{ item.dump | default('1') }}"
    passno: "{{ item.passno | default('2') }}"
  with_items: "{{ merged_mnt }}"

.

.

5.Save the file

.

Note: As written, these tasks have no safeguards for /dev/sda and no checks to ensure a disk is wiped properly before it is added to the volume group (see the sketch just below for the idea). I have created such safeguards for others, but for the purposes of this blog post this is the basics. If you want my help you can contact me via email or the ticketing system.
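As a rough illustration only, a hypothetical pre-check could refuse to run if a physical volume looks like the OS disk; this is a sketch of the idea, not the full guard:

- name: Safety check - refuse to touch the OS disk
  assert:
    that:
      - "'/dev/sda' not in item.pvs"
    fail_msg: "Refusing to use {{ item.pvs }} for {{ item.vg }} - it looks like the OS disk"
  with_items: "{{ merged_vgs }}"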

.

Now what we are going to do is define our inventory file with the LVM layout we want to carve out.

.

Setup inventory:
1.Go inside your inventory/host_var or group_var directory and create a file for testserver1

  • nano inventory/host_var/testserver1
vgs__to_merge:
  - vg: vg_vmguest
    pvs: /dev/sdb
  - vg: vg_sl_storage
    pvs: /dev/sdc
lvs__to_merge:
  - vg: vg_vmguest
    lv: lv_vg_vmguest
    size: 100%FREE
    shrink: no
  - vg: vg_sl_storage
    lv: lv_vg_sl_storage
    size: 100%FREE
    shrink: no
fs__to_merge:
  - dev: /dev/vg_vmguest/lv_vg_vmguest
    fstype: ext4
  - dev: /dev/vg_sl_storage/lv_vg_sl_storage
    fstype: ext4
mnt__to_merge:
  - path: /vmguests
    src: /dev/vg_vmguest/lv_vg_vmguest
    fstype: ext4
  - path: /sl_storage
    src: /dev/vg_sl_storage/lv_vg_sl_storage
    fstype: ext4

.

2. save the file.

.

Definitions of the variables above:

vgs__to_merge: This section creates the volume groups and assigns the physical disks.

  - vg: vg_vmguest (this is the volume group name)

    pvs: /dev/sdb (this is the physical disk assigned to the above volume group)

  - vg: vg_sl_storage (this is the second volume group name)

    pvs: /dev/sdc (this is the second physical disk assigned to the above volume group)

*You can add as many as you like*

.

lvs__to_merge: This section creates the logical volumes.

  - vg: vg_vmguest (this is the volume group created above)

    lv: lv_vg_vmguest (this is the logical volume attached to the above vg)

    size: 100%FREE (this says please use the whole disk)

    shrink: no (this is needed so the disk space is used correctly)

  - vg: vg_sl_storage (this is the second volume group created)

    lv: lv_vg_sl_storage (this is the second lvm created, attached to the above vg)

    size: 100%FREE (use the whole disk)

    shrink: no (this is needed so the disk space is properly used)

.

fs__to_merge: This section formats the lvm

  - dev: /dev/vg_vmguest/lv_vg_vmguest (lvm name)

    fstype: ext4 (filesystem you want to format with)

  - dev: /dev/vg_sl_storage/lv_vg_sl_storage (2nd lvm name)

    fstype: ext4 (filesystem you want to format with)

.

mnt__to_merge: This section will create the path, mount the LVM, and add it to fstab

  - path: /vmguests (path you want created for the mount)

    src: /dev/vg_vmguest/lv_vg_vmguest (lvm you want to mount)

    fstype: ext4 (this is for the fstab entry)

  - path: /sl_storage (this is the second path to create)

    src: /dev/vg_sl_storage/lv_vg_sl_storage (second lvm you want to mount)

    fstype: ext4 (to add to fstab)

.

Running your playbook:

1.Go inside your ansible home directory

cd ansible/

2.Create a file called justdofs.yml and save it

Example: of justdofs.yml

- hosts: all
  gather_facts: yes
  any_errors_fatal: true
  roles:
    - role: provision-fs

.

3.Ensure testservernick1 is listed under inventory/hosts
a.testservernick1 ansible_host=192.168.80.200

Command:

ansible/$ ansible-playbook -i inventory/hosts justdofs.yml -u root -k --limit='testservernick1'

.

Example of successful play:

ntailor@test-box:~/ansible/computelab$ ansible-playbook -i inventory/hosts justdofs.yml -u root -k --limit='testservernick1'

SSH password:

.

PLAY [all] *******************************************************************************************************************************************************************************************************

.

TASK [provision-fs : Merge VG variables] *************************************************************************************************************************************************************************

ok: [testservernick1]

.

TASK [provision-fs : Merge LV variables] *************************************************************************************************************************************************************************

ok: [testservernick1]

.

TASK [provision-fs : Merge FS variables] *************************************************************************************************************************************************************************

ok: [testservernick1]

.

TASK [provision-fs : Merge MOUNT variables] **********************************************************************************************************************************************************************

ok: [testservernick1]

.

TASK [provision-fs : Create VGs] *********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'vg': 'vg_vmguest', 'pvs': '/dev/sdb'})

ok: [testservernick1] => (item={'vg': 'vg_sl_storage', 'pvs': '/dev/sdc'})

.

TASK [provision-fs : Create LVs] *********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'vg': 'vg_vmguest', 'lv': 'lv_vg_vmguest', 'size': '100%FREE', 'shrink': False})

ok: [testservernick1] => (item={'vg': 'vg_sl_storage', 'lv': 'lv_vg_sl_storage', 'size': '100%FREE', 'shrink': False})

.

TASK [provision-fs : Create FSs] *********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'dev': '/dev/vg_vmguest/lv_vg_vmguest', 'fstype': 'ext4'})

ok: [testservernick1] => (item={'dev': '/dev/vg_sl_storage/lv_vg_sl_storage', 'fstype': 'ext4'})

.

TASK [provision-fs : Mount FSs] **********************************************************************************************************************************************************************************

ok: [testservernick1] => (item={'path': '/vmguests', 'src': '/dev/vg_vmguest/lv_vg_vmguest', 'fstype': 'ext4'})

ok: [testservernick1] => (item={'path': '/sl_storage', 'src': '/dev/vg_sl_storage/lv_vg_sl_storage', 'fstype': 'ext4'})

.

PLAY RECAP *******************************************************************************************************************************************************************************************************

testservernick1 : ok=8    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  

.

HOW TO CHECK CPU, MEMORY, & DISK THRESHOLDS on an ARRAY of HOSTS.

So I was tinkering around as usual, and I thought this will come in handy for other engineers.

If you have a large cluster of servers that can suddenly lose all their MEM, CPU, or DISK overnight due to the nature of your business, it's difficult to monitor that from a GUI, and more often than not you need to look at an array of hosts.

Cloud Scenario……

Say you find a node that is dying because too many clients are using resources and you need to migrate instances off to another node, only you don't know which nodes have the needed resources without having to go look at all the nodes individually.

This tends to be every engineer's pain point. So I decided to come up with a quick, easy solution for emergency situations, where you don't have time to sift through alert systems that only show you data on a per-host basis and tend to load very slowly.

This bash script will check the CPU, MEM, and DISK MOUNTS (including NFS) and tell you which ones are okay and which ones are not.

CPU – calculated as: 100 (max) - CPU idle = CPU usage
note: it also creates a log /opt/cpu.log on each host

MEM – calculated as: Used Memory / Total Memory * 100 = percentage of memory used (see the worked example below)
note: it also creates a log /opt/mem.log on each host

Disk – Any mount that reaches the warn threshold… COMPLAIN
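For example, on a host where free -t reports 16000 MB total and 12800 MB used, the memory check works out to 12800 / 16000 * 100 = 80.00%, which would trip the 80% threshold. The awk one-liner the script uses is doing exactly that calculation:

free -t | awk 'FNR == 2 {printf("%.2f%%\n", $3/$2*100)}'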

.

Now, I've itemised the bash script into functions so you can just comment out the items you don't want to use at the bottom of the script, if you wanted to, say, just check CPU/MEM.

#!/bin/bash
#Written By Nick Tailor

now=`date -u -d "+8 hour" +'%Y-%m-%d %H:%M:%S'`

#cpu use threshold
cpu_warn='75'

#disk use threshold
disk_warn='80'

#---cpu
item_cpu () {

cpu_idle=`top -b -n 1 | grep Cpu | awk '{print $8}' | cut -f 1 -d "."`

cpu_use=`expr 100 - $cpu_idle`

echo "now current cpu utilization rate of $cpu_use $(hostname) as on $(date)" >> /opt/cpu.log

if [ $cpu_use -gt $cpu_warn ]
then
echo "cpu warning!!! $cpu_use Currently HIGH $(hostname)"
else
echo "cpu ok!!! $cpu_use% use Currently LOW $(hostname)"
fi
}

#---mem
item_mem () {

#MB units
LOAD='80.00'

mem_free_read=`free -h | grep "Mem" | awk '{print $4+$6}'`

MEM_LOAD=`free -t | awk 'FNR == 2 {printf("%.2f%%", $3/$2*100)}'`

echo "Now the current memory space remaining ${mem_free_read} GB $(hostname) as on $(date)" >> /opt/mem.log

if [[ $MEM_LOAD > $LOAD ]]
then
echo "$MEM_LOAD not good!! MEM USAGE is HIGH - Free-MEM-${mem_free_read}GB $(hostname)"
else
echo "$MEM_LOAD ok!! MEM USAGE is beLOW 80% - Free-MEM-${mem_free_read}GB $(hostname)"
fi
}

#---disk
item_disk () {

df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
do
echo $output
  usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
partition=$(echo $output | awk '{ print $2 }' )
if [ $usep -ge $disk_warn ]; then
echo "AHH SHIT!, MOVE SOME VOLUMES IDIOT.... \"$partition ($usep%)\" on $(hostname) as on $(date)"
fi
done
}

item_cpu

item_mem

#item_disk - comment/uncomment the items here to turn whole sections of the script on or off without having to touch individual lines.

Now the cool part.

If you have a centrally managed jump host that lets you reach the rest of your estate, ideally you would want to set up SSH keys on the hosts and ensure you have sudo permissions on those hosts.

We want to loop this script through an array of hosts, have it run, and then report back all the findings in one place. This is extremely handy if you're in a resource crunch.

This assumes you have SSH KEYS SETUP & SUDO for your user setup.

Create the script

1.On your jump host as your "user", not root
a.vi coolcheck.sh
b.Copy the above code and paste it in
c.Save the file
2.Next chmod the script to make it executable
d.chmod +x coolcheck.sh

Next

3.Create a servers.txt file
e.vi servers.txt
f.List out servers in a column

Server1
Server2

Server3

Server4

g.Save the file.
4.Now we want to loop the script through the list of servers and have it spit out the results, piping the information to a file on the jump host.

Run your forloop with ssh keys and sudo already setup.

.

1.for HOST in $(cat servers.txt); do ssh $HOST "sudo bash -s" < coolcheck.sh; done 2>&1 | tee -a cpumem.status.DEV

Logfile – cpumem.status.DEV will be the log file that has all the info.

Output:

cpu ok!!! 3% use Currently dev1.nicktailor.com

17.07% ok!! MEM USAGE is beLOW 80% – Free-MEM-312.7GB dev1.nicktailor.com

5% /dev/mapper/VolGroup00-root

3% /dev/sda2

5% /dev/sda1

1% /dev/mapper/VolGroup00-var_log

72% 192.168.1.101:/data_1

28% 192.168.1.102:/data_2

80% 192.168.1.103:/data_3

AHH SHIT!, MOVE SOME VOLUMES IDIOT.... "192.168.1.104:/data4 (80%)" on dev1.nicktailor.com as on Fri Apr 30 11:55:16 EDT 2021

.

Okay, so now I'm gonna show you a dirty way to do it, because I'm just dirty. Say you're in a horrible place that doesn't use keys, because they're waiting to be hacked by password. 😛

.

DIRTY WAY – This assumes you have sudo permissions on the hosts.

Note: I do not recommend doing it this way if you are a newb. Doing it this way will log your password in the bash history, and if you don't know how to clean up after yourself, well………………….you're going to get owned.

I'm only showing you this because some cyber security "folks" believe that not using keys is easier to deal with, in some parallel realities I've visited… You can do the exact same thing above without keys, but you leave a massive trail behind you. Hence why you should use secure keys with passphrases.

.

Not Recommended for Newbies:
Forloop AND passing your ssh password inside it.

2.for HOST in $(cat servers.txt); do sshpass -p'SSHPASSWORD!' ssh -o 'StrictHostKeyChecking no' -p 22 $HOST "sudo bash -s" < coolcheck.sh; done 2>&1 | tee -a cpumem.status.DEV

.

Log file – cpumem.status.DEV will be the log file that has all the info.

Output:

cpu ok!!! 3% use Currently dev1.nicktailor.com

17.07% ok!! MEM USAGE is beLOW 80% – Free-MEM-312.7GB dev1.nicktailor.com

5% /dev/mapper/VolGroup00-root

3% /dev/sda2

5% /dev/sda1

1% /dev/mapper/VolGroup00-var_log

72% 192.168.1.101:/data_1

28% 192.168.1.102:/data_2

80% 192.168.1.103:/data_3 

AHH SHIT!, MOVE SOME VOLUMES IDIOT.... "192.168.1.104:/data4 (80%)" on dev1.nicktailor.com as on Fri Apr 30 11:55:16 EDT 2021

.

How to deploy Open-AKC (Authorized Key Chain)

.

Acting as a centralised trust management platform:


By allowing the "authorized_keys" mechanism on the hosts to be completely disabled, OpenAKC permits SSH trust across an entire estate to be managed centrally (with rich control and monitoring features) by "systems administration" or "information security" staff. This means that users, or application developers etc., cannot add or remove trust relationships, effectively enforcing any whitelist or approval process you might want to establish for the creation of trust relationships within an estate.

.

A practical ‘Jump Host’:

 

Having worked in many Linux environments, the author of the software has seen a number of 'jump host' solutions using dubious mechanisms such as "sudo" and others to map users from AD or LDAP directories to SSH trust relationships using shared private keys etc., and these solutions are almost universally highly insecure.

OpenAKC implements a new approach and acts as a "drop in" upgrade for a legacy solution by simply migrating users to personal rather than shared keys with a "self service" key management mechanism. Requiring personal keys to have passphrases ensures increased security and avoids ad-hoc automation from user accounts. The system provides rich control and monitoring features, while users can use familiar tools with little to no disruption to their workflow.

.

The problems everyone thinks about, but never finds a good solution for:

 Root access to my servers so that everything is auditable
 How to handle Identity Access Management (IAM) without joining every server to the domain.
The problem with joining every server to the domain:
 If I were to gain access using your AD user, I could then browse /home and find any user accounts that logged in as admin, which I could then brute-force attack to gain root.
 I can also use 'id' to see which groups this user is attached to, leaving you open to other forms of attack and holes.

 As soon as someone sudo's to root, there is zero control on limiting what this root user can do, which is a huge problem when getting compromised. Keystrokes are not logged, and if you have multiple people sudo'd to root at the same time the logs can get blurry.

 Imagine… having the ability to give your admins root but also tying the root user's hands from doing certain things that you deem too sensitive for anyone outside of the security team to do.
 Imagine not having to join every server to the domain but still having root access controls.
 Imagine being able to eliminate user/pass login entirely.
 Imagine being able to deploy this faster than ldap or sssd across multiple distros.

.

Well guess what….this is exactly what I'm talking about today. This architecture does take a few steps and some time to understand its inner workings, but security-wise it trumps anything out there currently being used by most folks.

.

This is an overview of a simplified architecture. It can be scaled out to new or legacy environments using many distros without interruption.

Combined jump host/security server architecture: This approach is generally used when it’s a small group managing things:

.

.

[Diagram: combined jump host/security server architecture]

Benefits: The advantage here is that your security server and your jumphost are on the same server. If your admin team is also managing security, then this approach is ideal, because when diagnosing and editing role rules everything is in one place. Another key benefit: there are only a couple of client packages deployed on client machines, and upon deployment the client is already brought into the trust, so no additional work is required to bring it in.

.

Cons: If your jumphost goes down, so does your security server. This can be scaled out so there are two servers, and if one is unavailable it tries the second; but if you only have one, this could be a single point of failure. Keep in mind this is a VM and would take seconds to bring back up, so durability is very good from an infrastructure standpoint.

.

.

Or the alternative:

.

A Segregated jumphost/security server architecture: This approach is generally used when you have large groups and large infrastructure managing things.

[Diagram: segregated jumphost/security server architecture]

.

Benefits: The advantage here is if you have multiple groups and need to tighten security, you can have your security server and your jump hosts segregated. These two machines are the only two machines that talk to your AD. You have roles set up on the security server, and you only get in if your user is listed in a role on the security server and you are part of the AD group that is allowed to log in via SSH.

The security server will allow you to log in as root via registered SSH keys. You can also strip away "root's" abilities, and every keystroke is logged per user on the security server. A key benefit here is that, because the client machines are not joined to the domain, a hacker would never be able to determine information that leads to the heart of your network, such as AD, the groups set up, and any of its users. The other benefit is that the jump hosts are easily deployed as needed and ready to go. Since the security logic is not local, the jump host is also secured.

.

Cons: If your security servers are not functioning and you have disabled user/pass login, you will be locked out unless you boot into single-user mode. However, this is easily mitigated by setting up a strong root password that can bypass all security and giving it to only one person, such as your manager or someone in secure ops, to be used only in an extremely dire situation. I have yet to see this ever required once this is implemented correctly.

.

.

Special Features:

.

 You can set up incident logging for production machines (e.g. ServiceNow ticketing).
 Incident#, RFC#, Jira, brief description.
 You can make it so that root cannot change any file that has an immutable flag, thus protecting key files (see the example below). This is just one of the many attributes you can give to or remove from root.
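To make the immutable-flag point concrete, this is roughly what that protection looks like with standard chattr/lsattr usage (the filename is just an example); cap_linux_immutable is the capability that normally lets root set or clear the flag, so stripping it from the session stops root removing the protection:

chattr +i /etc/myprotected.conf     # set the immutable flag on a file
lsattr /etc/myprotected.conf        # the 'i' attribute shows the file is immutable
chattr -i /etc/myprotected.conf     # normally root can clear it; without cap_linux_immutable this fails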

.

.

How to deploy Segregated jumphost/security server architecture:

.

.

Okay, so the first thing we want to do is join our security server host to Active Directory. The reason for this is we want to set up centralised user management, coupled with using authorised keys to jump to client machines.

.

Centos 7

 This post assumes you have already deployed two CentOS 7 machines. Don't use CentOS 8, because Red Hat recently decided to screw everyone and kill the CentOS project. CentOS 7 will have updates till 2024.
 This post also assumes you have Active Directory set up and you have a user inside a Linux group of some kind.

.

Note: make sure you disable firewalld and selinux on your machine. They're just an extra layer that's not needed here and only serves to tie your own hands behind your back.

.

.

1.Join CentOS 7 to an existing domain

Install the following packages; we will be using sssd/Kerberos:

a.yum install oddjob realmd samba samba-common oddjob-mkhomedir sssd adcli

.

2.Next you want to edit your /etc/resolv.conf so that it has your AD servers as nameservers. This is so it can resolve the necessary DNS records.
b.vi /etc/resolv.conf
 nameserver 192.168.1.300
 nameserver 192.168.1.301

.

3.Now you need to discover the realm by searching for the name of your AD host
c.  realm discover AD.NICKTAILOR.COM (this is case sensitive)

.

.

[root@securityack1 ~]# realm discover AD.NICKTAILOR.COM

ad.nicktailor.com

type: kerberos

realm-name: AD.NICKTAILOR.COM

domain-name: ad.nicktailor.com

configured: kerberosmember

server-software: active-directory

client-software: sssd

  required-package: oddjob

required-package: oddjob-mkhomedir

  required-package: sssd

  required-package: adcli

required-package: samba-common-tools

login-formats: %U

login-policy: allow-realm-logins

.

.

.

.

.

4.You now want to join this host to the domain
d.realm join --user=admin ad.nicktailor.com
It will ask you for the AD admin password; enter it and it should go to the next prompt. This step will create all the necessary files you need: /etc/sssd/sssd.conf, /etc/krb5.conf, and /etc/smb.conf.

.

e.if it works you can check it by:
 id nicktailor@ad.nicktailor.com
1.you can set it in /etc/sssd/sssd.conf so you don't need the @ad.nicktailor.com suffix when you run 'id' (see the snippet below)
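A minimal sketch of the sssd.conf change that drops the domain suffix; the domain section name comes from whatever realm join generated on your box, so adjust it to match, and restart sssd afterwards:

# /etc/sssd/sssd.conf
[domain/ad.nicktailor.com]
use_fully_qualified_names = False

systemctl restart sssd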

.

.

5.Now we want to add the nicktailor user to sudo via the wheel group
f.usermod -aG wheel nicktailor

.

.

6.Now we want to install the OpenAKC repository
g.curl https://netlore.github.io/OpenAKC/repos/openakc-el7.repo | sudo tee /etc/yum.repos.d/openakc.repo
h.Now install the openakc package for the security server

yum install openakc-server

i.Next su to the user account you're going to make admin on the security server
· su nicktailor

.

j.You want to generate new ssh keys
 ssh-keygen -t rsa
k.Now you want to register that key with the openakc security server
 openakc register

.

· cd /home/nicktailor/.openakc/
· ls -al /home/nicktailor/.openakc/

.

 check to see if the openakc-user-client-nicktailorpubkey.pem is there

.

.

l.Copy the key to the security server's key store (you might have to do this as root)
 cp openakc-user-client-nicktailorpubkey.pem /var/lib/openakc/keys/

.

Example:

.

[nicktailor@security1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/nicktailor/.ssh/id_rsa):

/home/tailorn/.ssh/id_rsa already exists.

Overwrite (y/n)? y

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/nictailor/.ssh/id_rsa.

Your public key has been saved in /home/nicktailor/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:udhNKEp0txzfup7IxhUwNA+VSviWP1mu/aKPA5vZb3w tailorn@jumphost1.nicktailor.com

The key’s randomart image is:

+—[RSA 2048]—-+

| o+… |

| . ++. |

| . .oo=. |

| . . o=*… |

| . ..S.o=. |

| . . +.+=.. |

| . ..oBo= |

| .*.++= E |

| .o.**o+. |

+—-[SHA256]—–+

[nicktailor@jumphost1 ~]$ openakc register

OpenAKC Copyright (C) 2019-2020 A. James Lewis. Version is 1.0.0~alpha18-1.el7.

.

This program comes with ABSOLUTELY NO WARRANTY; see “license” option.

This is free software, and you are welcome to redistribute it

under certain conditions; See LICENSE file for further details.

.

Passphrase is requested to ensure you own this key.

.

Enter passphrase:

.

Escalating to perform API call

.

Connected to OpenAKC server. Sending key registration request

OK: Request processed

.

.

7.Now we want to set up a role on the security server for the new user, while you are still su'd to nicktailor.
 openakc editrole root@DEFAULT

.

.

8.You will also need to copy the key you created to the jumphost, or you can just create another key on the jump host and then copy it over to the security server.
m.ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.200

.

Here you will create a role block that will allow said user to jump to any client, so long as your key is registered with the openakc security server.

.

RULE=2020/01/13 19:17,2030/01/13 20:17,user,nicktailor

DAY=any

TIM=any

SHELL=/bin/bash

CMD=any

SCP=s,^/,/data/,g

CAP=cap_linux_immutable

REC=yes

FROM=any

.

RULE=2020/01/13 19:17,2030/01/13 20:17,group,linuxusers

DAY=any

TIM=any

SHELL=/bin/bash

CMD=any

SCP=s,^/,/data/,g

CAP=cap_linux_immutable

REC=yes

FROM=any

.

 Save the file

.

.

.

Note: The CAP section allows you to disable root abilities; there is a long list of them. This particular one stops root from being able to edit any file with an immutable flag, so none of your root users can change these types of files, even as root.

.

You're starting to see why this is how you do shit? Think for yourselves, not because someone else said that's best practice. Jeez…

.

.

.

Note: this is just a development setup I am doing for you. It can be scaled out with two security servers using gluster or NFS to share the filesystem. This server is what all the clients check with before allowing anyone in. A user will need to first exist in AD, then their key will need to be registered here, and then they will need to be allowed in the appropriate role and sssd/ssh groups before they can get in. This also eliminates the need to join any clients to the domain and protects against a hacker being able to query the domain controller for user groups and whether users exist. They would also have needed to get onto the jump host, have your key, and also know your passphrase. Unlikely to happen.

.

.

Okay, now we're going to set up the jumphost jump1.nicktailor.com

.

Note: You can have as many jumphosts as you like. This should be the only entry point to your servers. The jump host also talks to the security server, so even if you tried to go directly to a server, you couldn't, since access is only allowed from here. This is how you set shit up so you don't end up like SolarWinds and so many other idiots who follow old and out-of-date rules blindly.

.

.

1.First thing we want to do is add the repository on the jump host.

.

.

Note: You also add this server to the domain. So please follow the joining of the domain setups from above and then carry on from here.

.

 curl https://netlore.github.io/OpenAKC/repos/openakc-el7.repo | sudo tee /etc/yum.repos.d/openakc.repo

.

2.Now you want to install the openakc-tools package on the jumphost
a.yum install openakc-tools

.

3.Now edit /etc/openakc/openakc.conf

APIS="nickack1.nicktailor.com"

PORT="889"

.

4.Now log in as nicktailor, or su to nicktailor, to allow sssd to create the home directory.
b.su nicktailor
c.copy over the public key from the security server (run this while logged in on the security server)

ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.200

.

5.Now, while you are nicktailor on the jump host, you're going to run a ping to ensure communication is working:

.

[nicktailor@jumphost1 keys]$ openakc ping

OpenAKC Copyright (C) 2019-2020 A. James Lewis. Version is 1.0.0~alpha18-1.el7.

.

This program comes with ABSOLUTELY NO WARRANTY; see “license” option.

This is free software, and you are welcome to redistribute it

under certain conditions; See LICENSE file for further details.

.

Connected to OpenAKC server. Sending Test Run Ping Message

Test Run Response – OK: Pong! – from server – securityakc1.nicktailor.com

.

If you see that above that is a good sign

.

.

.

Now we're going to add a client machine so you can jump to it. This is the easiest part. Say you have a bunch of legacy systems and you want to centralise login without joining them to the domain; you want everyone who uses root to be tracked, by logging every keystroke and logging what incident they logged in for to use root.

.

.

1.Log on to the client machine as root
 Add the openakc repo

curl https://netlore.github.io/OpenAKC/repos/openakc-el7.repo | sudo tee /etc/yum.repos.d/openakc.repo

.

2.Next install the openakc client package. Now, this is confusing because the author didn't add "client" to the package name; if you install the wrong one it's a bitch to clean up, so get it right the first time…lol

.

a.yum install openakc

.

3.Tell it where the security server lives by editing this file and saving it

  vi /etc/openakc/openakc.conf

APIS="192.168.1.200"

ENABLED="yes"

PORT="889"

CACHETIME="60"

DEBUG="no"

PERMITROOT="yes"

AUDIT="yes"

QUIZ="no"

HIDE="restrict"

FAKESUDO="yes"

.

Note: If you set QUIZ to yes, then when you log in as root it will ask for a ServiceNow ticket number and description, which will get logged on the security server.

.

4. save the file

.

That’s it!….

.

.

Now if you go back the jump host and try to log in

.

[@jumphost1 ~]$ ssh root@192.168.1.38

Enter passphrase for key '/home/nicktailor/.ssh/id_rsa':

OpenAKC (v1.0.0~alpha18-1.el7) – Interactive Session Initialized

.

[root@nickclient1 ~]#

.

.

This session is now being logged, and you cannot see whether the user belongs to the domain:

.

[root@nickclient1 ~]# id nicktailor

id: nicktailor: no such user

.

.

This is how you set up security like a pro. Hope you enjoyed this.

.


.

Special thank you to the author of the project, James Lewis. I enjoyed learning this and am a big fan of the innovation behind the Open-AKC project.

.

.

.

.

How to add new users:

.

1.Add the user to Active Directory and the relevant Linux group
2.Log in via ssh to the jump host
3.Create an ssh key as your new user
a.ssh-keygen -t rsa with a passphrase
4.Register the key with openakc while on the jumphost
b.openakc register

.

That's it, you're done. The new user is now able to log into the estate via ssh from the jump host.

.

.

.

.

.

.

.

.

How to add a custom tomcat installation to SystemD with ansible.

Okay, so say you have a custom install of tomcat and java, which is what a lot of people do, because java and tomcat updates can bring things down; things need to be tested before updates, and standard patch cycles can end up affecting the environment.

But you want to handle the starting and stopping via systemd, to be able to get status output and let systemd handle the service on reboots. This is how to do it slick.

.

Ansible Setup:

 This post assumes you have Ansible set up and running. If you don't, search through my blog and you should find a post on how to set it up.

Role:

 We are going to set up a custom role to add your custom tomcat install to systemd

Setup the new role:

.

 Create a new directory in /etc/ansible/roles for your new role
 mkdir -p /etc/ansible/roles/AddtomcatSystemD/tasks/

.

 Now create a yaml file that will run a set of tasks to set this up for you.
 vi main.yml

.

Main.yml

===========================================

Note: this will install the Red Hat packaged version of tomcat. Do not worry, we are not going to be using this tomcat; it is just so Red Hat automatically sets up all the needed services and locations. We will then update the systemd config for tomcat to use the custom version.

- name: Install the latest version of tomcat
  package:
    name: tomcat
    state: latest

.

Note: This symlink is important, as this setup expects tomcat to live in /opt/tomcat. Update the src to the custom location of your tomcat.

.

- name: Create symbolic link for "tomcat" in /opt
  file:
    src: /custom/install/tomcat
    path: /opt/tomcat
    force: yes
    state: link

.

Note: This will enable the tomcat service so it starts on reboot.

- name: Enable tomcat service on startup
  shell: systemctl enable tomcat
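Note: the raw shell call works, but the same thing can be done with Ansible's systemd module if you prefer to avoid shell, e.g.:

- name: Enable tomcat service on startup (alternative using the systemd module)
  systemd:
    name: tomcat
    enabled: yes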

.

Note: This is the tomcat systemd service file that systemd uses for the default install. We are going to empty it.

- name: Null tomcat.service file
  shell: ">/etc/systemd/system/tomcat.service"

.

Note: We are now going to add our custom block for tomcat into the tomcat.service file we just emptied, using the blockinfile module. This means this whole section will also be managed by Ansible. Make sure you adjust JAVA_HOME if your java isn't located inside the tomcat directory, along with the user, group, and umask for your custom tomcat.

.

- name: Edit tomcat.service for systemd
  blockinfile:
    dest: /etc/systemd/system/tomcat.service
    block: |
      [Unit]
      Description=Apache Tomcat Web Application Container
      After=syslog.target network.target

      [Service]
      Type=forking

      Environment=JAVA_HOME=/opt/tomcat
      Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
      Environment=CATALINA_HOME=/opt/tomcat
      Environment=CATALINA_BASE=/opt/tomcat
      Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'
      Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'

      ExecStart=/opt/tomcat/bin/startup.sh
      ExecStop=/bin/kill -15 $MAINPID

      User=tomcat
      Group=tomcat
      UMask=0028
      RestartSec=10
      Restart=always

      [Install]
      WantedBy=multi-user.target

.

Note: This will then reload systemd so it picks up the custom tomcat unit.

- name: Start tomcat service with Systemd
  systemd:
    name: tomcat
    daemon_reload: yes
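Note: daemon_reload on its own just re-reads the unit files; if tomcat isn't already running at this point, you may want to ask systemd to start it as well, something like:

- name: Start tomcat service with Systemd (sketch with an explicit start)
  systemd:
    name: tomcat
    state: started
    daemon_reload: yes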

.

Note: This will then check whether the new tomcat service is running and output it to the ansible playbook log.

- name: get service facts
  service_facts:

- name: Check to see if tomcat is running
  debug:
    var: ansible_facts.services["tomcat.service"]

.

.

Ansible playbook log:

.

[root@nickansible]# ansible-playbook -i inventory/DEV/hosts justtomcatrole.yml --limit 'nicktestvm' -k

.

SSH password:

.

PLAY [all] ************************************************************************************************************************************************************************************************

.

TASK [AddTomCatSystemD : Create symbolic link for “tomcat” in /opt] ***************************************************************************************************************************************

changed: [nicktestvm]

.

TASK [AddTomCatSystemD : Enable tomcat service on startup] ************************************************************************************************************************************************

changed: [nicktestvm]

.

TASK [AddTomCatSystemD : Null tomcat.service file] ********************************************************************************************************************************************************

changed: [nicktestvm]

.

TASK [AddTomCatSystemD : Edit tomcat.service for systemd] *************************************************************************************************************************************************

changed: [nicktestvm]

.

TASK [AddTomCatSystemD : Start tomcat service with Systemd] ***********************************************************************************************************************************************

ok: [nicktestvm]

.

TASK [AddTomCatSystemD : get service facts] ***************************************************************************************************************************************************************

ok: [nicktestvm]

.

TASK [AddTomCatSystemD : Check to see if tomcat is running] ***********************************************************************************************************************************************

ok: [nicktestvm] => {

    "ansible_facts.services[\"tomcat.service\"]": {

        "name": "tomcat.service",

        "source": "systemd",

        "state": "running",

        "status": "enabled"

    }

}

.

PLAY RECAP ************************************************************************************************************************************************************************************************

nicktestvm : ok=7 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

.

.

.

==========================

[root@nicktestvm ~]# cat /etc/systemd/system/tomcat.service

# BEGIN ANSIBLE MANAGED BLOCK

[Unit]

Description=Apache Tomcat Web Application Container

After=syslog.target network.target

.

[Service]

Type=forking

.

Environment=JAVA_HOME=/opt/tomcat

Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid

Environment=CATALINA_HOME=/opt/tomcat

Environment=CATALINA_BASE=/opt/tomcat

Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'

Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'

.

ExecStart=/opt/tomcat/bin/startup.sh

ExecStop=/bin/kill -15 $MAINPID

.

User=tomcat

Group=tomcat

UMask=0028

RestartSec=10

Restart=always

.

[Install]

WantedBy=multi-user.target

# END ANSIBLE MANAGED BLOCK

.

.

SystemD Status:

.

[root@nicktestvm ~]# systemctl status tomcat

tomcat.service – Apache Tomcat Web Application Container

Loaded: loaded (/etc/systemd/system/tomcat.service; enabled; vendor preset: disabled)

Active: active (running) since Thu 2020-12-24 05:11:21 GMT; 21h ago

Process: 6333 ExecStop=/bin/kill -15 $MAINPID (code=exited, status=0/SUCCESS)

Process: 6353 ExecStart=/opt/tomcat/bin/startup.sh (code=exited, status=0/SUCCESS)

Main PID: 6363 (java)

   CGroup: /system.slice/tomcat.service

└─6363 /usr/local/java/java -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -server -Xms1…

.

Dec 24 05:11:21 nicktestvm systemd[1]: Starting Apache Tomcat Web Application Container…

Dec 24 05:11:21 nicktestvm startup.sh[6353]: Existing PID file found during start.

Dec 24 05:11:21 nicktestvm startup.sh[6353]: Removing/clearing stale PID file.

Dec 24 05:11:21 nicktestvm systemd[1]: Started Apache Tomcat Web Application Container.

.

.
