Author: admin

How to re-enable the Temp URL (mod_userdir) in the latest cPanel

.

As some of you have noticed, the latest cPanel ships with a bunch of new default settings that nobody likes.

The FTP server is not configured out of the box.

TempURL is disabled for security reasons. Under certain conditions, a user can attack another user’s account if they access a malicious script through a mod_userdir URL.

So cPanel disabled it by default.

.

They did not provide instructions for people who still need it. You can easily re-enable it, BUT PHP won't work on the Temp URL unless you do the following.

Remove the modules below. By "remove" I mean you need to recompile EasyApache 4 with the following changes:

mod_ruid2
mod_passenger
mod_mpm_itk
mod_proxy_fcgi
mod_fcgid

Install:

mod_suexec
mod_suphp

Then go into the Apache mod_userdir Tweak, enable it, and exclude the default host only.
The portal won't appear to save the setting, but the configuration is updated. If you go back and look, it will seem as though the settings didn't take. This looks like a bug in cPanel's front end that they need to fix.

PHP will then work again on the Temp URL.
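For reference, a mod_userdir Temp URL takes this general form (the hostname and cPanel account name below are placeholders):

http://SERVER-HOSTNAME-OR-IP/~cpanelaccount/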

.

How to Integrate vROps with Ansible

Automating VMware vRealize Operations (vROps) with Ansible

In the world of IT operations, automation is the key to efficiency and consistency. VMware’s vRealize Operations (vROps) provides powerful monitoring and management capabilities for virtualized environments. Integrating vROps with Ansible, an open-source automation tool, can take your infrastructure management to the next level. In this blog post, we’ll explore how to achieve this integration and demonstrate its benefits with a practical example.

What is vRealize Operations (vROps)?

vRealize Operations (vROps) is a comprehensive monitoring and analytics solution from VMware. It helps IT administrators manage the performance, capacity, and overall health of their virtual environments. Key features of vROps include:

 Performance Monitoring: Continuous tracking of VMs, hosts, and other resources.
 Capacity Management: Planning and optimizing resource usage.
 Troubleshooting: Identifying and resolving issues promptly.
 Automated Actions: Responding to specific events with predefined actions.

Why Integrate vROps with Ansible?

Integrating vROps with Ansible allows you to automate routine tasks, enforce consistent configurations, and rapidly respond to changes or issues in your virtual environment. This integration enables you to:

 Automate Monitoring Setup: Configure monitoring for new virtual machines or environments automatically.
 Trigger Remediation Actions: Automate responses to alerts generated by vROps.
 Generate Reports: Automate the creation and distribution of performance and capacity reports.
 Maintain Configuration Compliance: Ensure consistent vROps configurations across environments.

Setting Up the Integration

Prerequisites

Before you start, ensure you have:

1. vROps Environment: A running instance of VMware vRealize Operations.

2. Ansible Installed: Ansible should be installed on your control node.

Step-by-Step Guide

Step 1: Configure API Access in vROps

First, ensure you have the necessary API access in vROps. You’ll need:

 vROps Host: The URL of your vROps instance.
 vROps Username: A user with API access permissions.
 vROps Password: The password for the above user.

Step 2: Install Ansible

If you haven’t installed Ansible yet, you can do so by following these commands:

sudo apt update

sudo apt install ansible
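If you would rather not use the distro package, Ansible can also be installed with pip in a virtual environment (a minimal sketch; this assumes python3-venv is available on the control node):

python3 -m venv ~/ansible-venv
source ~/ansible-venv/bin/activate
pip install ansible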

Step 3: Create an Ansible Playbook

Create an Ansible playbook to interact with vROps. Below is an example playbook that retrieves the status of vROps resources.

Note: to use the other API endpoints, you will need to acquire the token and set it as a fact so it can be passed to later tasks.

Example

If you want to acquire the auth token:

.

- name: Authenticate with vROps and Check vROps Status
  hosts: localhost
  vars:
    vrops_host: "your-vrops-host"
    vrops_username: "your-username"
    vrops_password: "your-password"

  tasks:
    - name: Authenticate with vROps
      uri:
        url: "https://{{ vrops_host }}/suite-api/api/auth/token/acquire"
        method: POST
        body_format: json
        body:
          username: "{{ vrops_username }}"
          password: "{{ vrops_password }}"
        headers:
          Content-Type: "application/json"
        validate_certs: no
      register: auth_response

    - name: Fail if authentication failed
      fail:
        msg: "Authentication with vROps failed: {{ auth_response.json }}"
      when: auth_response.status != 200

    - name: Set auth token as fact
      set_fact:
        auth_token: "{{ auth_response.json.token }}"

    - name: Get vROps status
      uri:
        url: "https://{{ vrops_host }}/suite-api/api/resources"
        method: GET
        headers:
          Authorization: "vRealizeOpsToken {{ auth_token }}"
          Content-Type: "application/json"
        validate_certs: no
      register: vrops_response

    - name: Display vROps status
      debug:
        msg: "vROps response: {{ vrops_response.json }}"

.

Save this playbook to a file, for example, check_vrops_status.yml.

Step 4: Define Variables

Create a variables file to store your vROps credentials and host information.
Save it as vars.yml:

vrops_host: your-vrops-host

vrops_username: your-username

vrops_password: your-password

Step 5: Run the Playbook

Execute the playbook using the following command:

ansible-playbook -e @vars.yml check_vrops_status.yml

The above command runs the playbook, retrieves the status of vROps resources, and displays the results if you used the example above.

Here are some of the key API functions you can use:

Authentication: to use the endpoints listed below, you will need to acquire the auth token and set it as a fact, then pass it to the other Ansible tasks that call the various endpoints.

 Login: Authenticate and get a session token.
 Endpoint: POST /suite-api/api/auth/token/acquire

Resource Management

 Get Resources: Retrieve a list of resources managed by vROps.
 Endpoint: GET /suite-api/api/resources
 Get Resource by ID: Retrieve details of a specific resource.
 Endpoint: GET /suite-api/api/resources/{resourceId}
 Create Resource: Add a new resource to vROps.
 Endpoint: POST /suite-api/api/resources
 Update Resource: Update information for an existing resource.
 Endpoint: PUT /suite-api/api/resources/{resourceId}
 Delete Resource: Remove a resource from vROps.
 Endpoint: DELETE /suite-api/api/resources/{resourceId}

Metrics and Data

 Get Metrics for a Resource: Retrieve metrics for a specific resource.
 Endpoint: GET /suite-api/api/resources/{resourceId}/stats
 Get Metric Definitions: List available metrics for a resource kind.
 Endpoint: GET /suite-api/api/resources/kind/{resourceKindKey}/statkeys
 Get Historical Metrics: Retrieve historical metric data for a resource.
 Endpoint: GET /suite-api/api/resources/{resourceId}/stats/historical

Alerts and Notifications

 Get Alerts: Retrieve a list of alerts.
 Endpoint: GET /suite-api/api/alerts
 Get Alert by ID: Retrieve details of a specific alert.
 Endpoint: GET /suite-api/api/alerts/{alertId}
 Acknowledge Alert: Acknowledge a specific alert.
 Endpoint: POST /suite-api/api/alerts/{alertId}/acknowledge
 Cancel Alert: Cancel a specific alert.
 Endpoint: POST /suite-api/api/alerts/{alertId}/cancel
 Generate Notifications: Send notifications based on specific conditions.
 Endpoint: POST /suite-api/api/notifications
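For example, once the token has been set as a fact, a task that calls the acknowledge endpoint might look like this (a sketch; alert_id is a placeholder variable, and the accepted status codes may vary by vROps version):

    - name: Acknowledge a vROps alert
      uri:
        url: "https://{{ vrops_host }}/suite-api/api/alerts/{{ alert_id }}/acknowledge"
        method: POST
        headers:
          Authorization: "vRealizeOpsToken {{ auth_token }}"
          Content-Type: "application/json"
        validate_certs: no
        status_code: [200, 204]
      register: ack_response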

Policies and Configurations

 Get Policies: Retrieve a list of policies.
 Endpoint: GET /suite-api/api/policies
 Get Policy by ID: Retrieve details of a specific policy.
 Endpoint: GET /suite-api/api/policies/{policyId}
 Create Policy: Add a new policy.
 Endpoint: POST /suite-api/api/policies
 Update Policy: Update an existing policy.
 Endpoint: PUT /suite-api/api/policies/{policyId}
 Delete Policy: Remove a policy.
 Endpoint: DELETE /suite-api/api/policies/{policyId}

Dashboards and Reports

 Get Dashboards: Retrieve a list of dashboards.
 Endpoint: GET /suite-api/api/dashboards
 Get Dashboard by ID: Retrieve details of a specific dashboard.
 Endpoint: GET /suite-api/api/dashboards/{dashboardId}
 Create Dashboard: Add a new dashboard.
 Endpoint: POST /suite-api/api/dashboards
 Update Dashboard: Update an existing dashboard.
 Endpoint: PUT /suite-api/api/dashboards/{dashboardId}
 Delete Dashboard: Remove a dashboard.
 Endpoint: DELETE /suite-api/api/dashboards/{dashboardId}
 Get Reports: Retrieve a list of reports.
 Endpoint: GET /suite-api/api/reports
 Generate Report: Generate a new report based on a template.
 Endpoint: POST /suite-api/api/reports/{reportTemplateId}/generate
 Get Report by ID: Retrieve details of a specific report.
 Endpoint: GET /suite-api/api/reports/{reportId}

Capacity and Utilization

 Get Capacity Remaining: Retrieve remaining capacity for a specific resource.
 Endpoint: GET /suite-api/api/resources/{resourceId}/capacity/remaining
 Get Capacity Usage: Retrieve capacity usage for a specific resource.
 Endpoint: GET /suite-api/api/resources/{resourceId}/capacity/usage

Additional Functionalities

 Get Custom Groups: Retrieve a list of custom groups.
 Endpoint: GET /suite-api/api/groups
 Create Custom Group: Add a new custom group.
 Endpoint: POST /suite-api/api/groups
 Update Custom Group: Update an existing custom group.
 Endpoint: PUT /suite-api/api/groups/{groupId}
 Delete Custom Group: Remove a custom group.
 Endpoint: DELETE /suite-api/api/groups/{groupId}
 Get Recommendations: Retrieve a list of recommendations.
 Endpoint: GET /suite-api/api/recommendations
 Get Recommendation by ID: Retrieve details of a specific recommendation.
 Endpoint: GET /suite-api/api/recommendations/{recommendationId}

These are just a few examples of the many functions available through the vROps REST API.

.

.

How to Power Up or Power Down multiple instances in OCI using CLI with Ansible

 This assumes you have already configured the OCI CLI and added your API key to the user inside the OCI console, so your Ubuntu or jump box can connect to your OCI infrastructure
 Ansible
 Role to control powering instances up/down using the OCI CLI
 This assumes you already have Ansible set up
 You will need to install the Ansible OCI collection

.

The reason you would probably want this over Terraform is that Terraform is more suited to infrastructure orchestration and not really suited to managing instances once they are up and running.

If you have scaled servers out in OCI, powering servers up and down in bulk is currently not available in the console. This matters if you are doing a migration or using a staging environment where you only need the machines while building or troubleshooting.

Having a way to power multiple machines up or down at once is therefore convenient.

.

Install the OCI collections if you don’t have it already.

Linux/macOS

curl -L https://raw.githubusercontent.com/oracle/oci-ansible-collection/master/scripts/install.sh | bash -s -- --verbose

.

ansible-galaxy collection list (lists the installed collections)

# /path/to/ansible/collections
Collection      Version
--------------- -------
amazon.aws      1.4.0
ansible.builtin 1.3.0
ansible.posix   1.3.0
oracle.oci      2.10.0

.

Once you have it installed, you need to test that the OCI client is working:

oci iam compartment list --all (this will list the compartment OCIDs for your tenancy)

Compartments in OCI are a way to organise infrastructure and control access to those resources. This is great if you have contractors coming in and you only want them to have access to certain things, not everything.
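If you only want the compartment names and OCIDs, a JMESPath query keeps the output readable (a sketch; adjust the query to your needs):

oci iam compartment list --all --query "data[].{name:name, id:id}" --output table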

Now there are two ways you can get your instance names.

 One is logging in via the OCI console and navigating to the correct compartment, which is very slow and mind-numbing to wait for.
 Or you can use an automated approach, which is what you should be doing for anything that needs to be done over and over.

.

Bash Script to get the instances names from OCI

 This will use the OCI CLI and print every instance's name and IPs
 It loops through each availability domain.
 For each availability domain, it lists the instance IDs and writes them to instance_ids.txt.
 It cleans up the instance_ids.txt file to remove brackets, quotes, and commas.
 It reads each instance ID from instance_ids.txt.
 For each instance, it retrieves the VNIC information.
 It extracts the display name, public IP, and private IP, and prints them.
 The script ends the loop and moves to the next availability domain.

#!/bin/bash

# Compartment to search (replace with your compartment OCID)
compartment_id="ocid1.compartment.oc1..insert compartment ID here"

# Explicitly define the availability domains based on your provided data
availability_domains=("zcLB:US-CHICAGO-1-AD-1" "zcLB:US-CHICAGO-1-AD-2" "zcLB:US-CHICAGO-1-AD-3")

# For each availability domain, list the instances
for ad in "${availability_domains[@]}"; do

    # List instances within the specific AD and compartment, extracting the "id" field
    oci compute instance list --compartment-id "$compartment_id" --availability-domain "$ad" --query "data[].id" --raw-output > instance_ids.txt

    # Clean up the instance IDs (removing brackets, quotes, commas)
    sed -i 's/\[//g' instance_ids.txt
    sed -i 's/\]//g' instance_ids.txt
    sed -i 's/"//g' instance_ids.txt
    sed -i 's/,//g' instance_ids.txt

    # Read each instance ID from instance_ids.txt
    while read -r instance_id; do
        # Get instance VNIC information
        instance_info=$(oci compute instance list-vnics --instance-id "$instance_id")

        # Extract the required fields and print them
        display_name=$(echo "$instance_info" | jq -r '.data[0]."display-name"')
        public_ip=$(echo "$instance_info" | jq -r '.data[0]."public-ip"')
        private_ip=$(echo "$instance_info" | jq -r '.data[0]."private-ip"')

        echo "Availability Domain: $ad"
        echo "Display Name: $display_name"
        echo "Public IP: $public_ip"
        echo "Private IP: $private_ip"
        echo "-----------------------------------------"
    done < instance_ids.txt

done

.

The output of the script, when piped into a file (here called instance.names), will look like this:

instance.names

Availability Domain: zcLB:US-CHICAGO-1-AD-1

Display Name: Instance1

Public IP: 192.0.2.1

Private IP: 10.0.0.1

-----------------------------------------

Availability Domain: zcLB:US-CHICAGO-1-AD-1

Display Name: Instance2

Public IP: 192.0.2.2

Private IP: 10.0.0.2

-----------------------------------------

.

.

You can now grep this file for the names of the servers you want to power on or off quickly:

 grep "<InstanceName>" instance.names

.

Now we need an Ansible playbook that can power instances on or off by name, using the names provided by the OCI CLI.

Ansible playbook to power on or off multiple instances via OCI CLI

- name: Control OCI Instance Power State based on Instance Names
  hosts: localhost
  vars:
    instance_names_to_stop:
      - instance1
      # Add more instance names here if you wish to stop them...

    instance_names_to_start:
      # List the instance names you wish to start here...
      # Example:
      - Instance2

  tasks:
    - name: Fetch all instance details in the compartment
      command:
        cmd: oci compute instance list --compartment-id ocid1.compartment.oc1..aaaaaaaak7jc7tn2su2oqzmrbujpr5wmnuucj4mwj4o4g7rqlzemy4yvxrza --output json
      register: oci_output

    - name: Parse the OCI CLI output
      set_fact:
        instances: "{{ oci_output.stdout | from_json }}"

    - name: Extract relevant information
      set_fact:
        clean_instances: "{{ clean_instances | default([]) + [{ 'name': item['display-name'], 'id': item.id, 'state': item['lifecycle-state'] }] }}"
      loop: "{{ instances.data }}"
      when: "'display-name' in item and 'id' in item and 'lifecycle-state' in item"

    - name: Filter out instances to stop
      set_fact:
        instances_to_stop: "{{ instances_to_stop | default([]) + [item] }}"
      loop: "{{ clean_instances }}"
      when: "item.name in instance_names_to_stop and item.state == 'RUNNING'"

    - name: Filter out instances to start
      set_fact:
        instances_to_start: "{{ instances_to_start | default([]) + [item] }}"
      loop: "{{ clean_instances }}"
      when: "item.name in instance_names_to_start and item.state == 'STOPPED'"

    # Alternative, loop-free versions of the two filter tasks above.
    # These overwrite the facts set by the loop-based tasks, so keep one approach or the other.
    - name: Filter out instances to stop
      set_fact:
        instances_to_stop: "{{ clean_instances | selectattr('name', 'in', instance_names_to_stop) | selectattr('state', 'equalto', 'RUNNING') | list }}"

    - name: Filter out instances to start
      set_fact:
        instances_to_start: "{{ clean_instances | selectattr('name', 'in', instance_names_to_start) | selectattr('state', 'equalto', 'STOPPED') | list }}"

    - name: Display instances to stop (you can remove this debug task later)
      debug:
        var: instances_to_stop

    - name: Display instances to start (you can remove this debug task later)
      debug:
        var: instances_to_start

    - name: Power off instances
      command:
        cmd: "oci compute instance action --action STOP --instance-id {{ item.id }}"
      loop: "{{ instances_to_stop }}"
      when: instances_to_stop | length > 0
      register: state

#    - debug:
#        var: state

    - name: Power on instances
      command:
        cmd: "oci compute instance action --action START --instance-id {{ item.id }}"
      loop: "{{ instances_to_start }}"
      when: instances_to_start | length > 0
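Save the playbook (a file name like oci_power.yml is assumed here) and run it; it targets localhost, so no inventory is required:

ansible-playbook oci_power.yml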

.

The output will look like

PLAY [Control OCI Instance Power State based on Instance Names] **********************************************************************************

.

TASK [Gathering Facts] ***************************************************************************************************************************

ok: [localhost]

.

TASK [Fetch all instance details in the compartment] *********************************************************************************************

changed: [localhost]

.

TASK [Parse the OCI CLI output] ******************************************************************************************************************

ok: [localhost]

.

TASK [Extract relevant information] **************************************************************************************************************

ok: [localhost] => (item={'display-name': 'Instance1', 'id': 'ocid1.instance.oc1..exampleuniqueID1', 'lifecycle-state': 'STOPPED'})

ok: [localhost] => (item={'display-name': 'Instance2', 'id': 'ocid1.instance.oc1..exampleuniqueID2', 'lifecycle-state': 'RUNNING'})

.

TASK [Filter out instances to stop] **************************************************************************************************************

ok: [localhost]

.

TASK [Filter out instances to start] *************************************************************************************************************

ok: [localhost]

.

TASK [Display instances to stop (you can remove this debug task later)] **************************************************************************

ok: [localhost] => {
    "instances_to_stop": [
        {
            "name": "Instance2",
            "id": "ocid1.instance.oc1..exampleuniqueID2",
            "state": "RUNNING"
        }
    ]
}

TASK [Display instances to start (you can remove this debug task later)] *************************************************************************

ok: [localhost] => {
    "instances_to_start": [
        {
            "name": "Instance1",
            "id": "ocid1.instance.oc1..exampleuniqueID1",
            "state": "STOPPED"
        }
    ]
}

TASK [Power off instances] ***********************************************************************************************************************

changed: [localhost] => (item={'name': 'Instance2', 'id': 'ocid1.instance.oc1..exampleuniqueID2', 'state': 'RUNNING'})

TASK [Power on instances] ************************************************************************************************************************

changed: [localhost] => (item={'name': 'Instance1', 'id': 'ocid1.instance.oc1..exampleuniqueID1', 'state': 'STOPPED'})

.

PLAY RECAP ****************************************************************************************************************************************

localhost                  : ok=9    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

.

.

How to Deploy Another VPC in AWS with Scalable EC2’s for HA using Terraform

 This will configure a new VPC
 Create a new subnet for use
 Create a new security group with a bunch of rules
 Create a key pair for your new instances
 Allow you to scale your instances cleanly using the count attribute

So we are going to do this a bit differently than the other post, as the other post just deploys one instance in an existing VPC.

This one is more fun. The structure we will use this time allows you to scale your EC2 instances very cleanly. If you are using git repos to push out changes, then having a main.tf for your instances is much simpler to manage at scale.

File structure:

terraform-project/
├── main.tf              <- Your main configuration file
├── variables.tf         <- Variables file that has the inputs to pass
├── outputs.tf           <- Outputs file
├── security_group.tf    <- File containing security group rules
└── modules/
    └── instance/
        ├── main.tf      <- This file contains your EC2 instances
        └── variables.tf <- Variables file defining what we pass for the module in main.tf to use

.

Explaining the process:

Main.tf

 We have defined the provider and region; if you have more than one cloud, it's good to create a provider.tf and carve them out.
 The key pair to import into AWS in the second region was generated locally in my Terraform directory using:
 ssh-keygen -t rsa -b 2048 -f ./terraform-aws-key
 We are then saying let's create a new VPC called vpc2
 with the subnet CIDR block 10.0.1.0/24 to use internally
 this will also map a public address to the new internal address assigned upon launch
 We will be creating servers using variables defined in variables.tf:
 Instance type
 AMI ID
 key pair name to use
 new subnet to use
 and assign the new security group to the EC2 instances deployed
 We also added a count on the module, so when we deploy EC2s we can simply adjust the count number and push the code with one tiny change as opposed to an entire block. You will see what I mean later.
main.tf

provider "aws" {
  region = "us-west-2"
}

resource "aws_key_pair" "my-nick-test-key" {
  key_name   = "my-nick-test-key"
  public_key = file("${path.module}/terraform-aws-key.pub")
}

resource "aws_vpc" "vpc2" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "newsubnet" {
  vpc_id                  = aws_vpc.vpc2.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

module "web_server" {
  source            = "./modules/instance"
  ami_id            = var.ami_id
  instance_type     = var.instance_type
  key_name          = var.key_name_instance
  subnet_id         = aws_subnet.newsubnet.id
  instance_count    = 2 // Specify the number of instances you want
  security_group_id = aws_security_group.newcpanel.id
}

.

Variables.tf

 Here we define the variables we want to pass to the module in main.tf for the instance.
 The linux image
 Instance type (size of the machine)
 Key-pair to use for the image

variable "ami_id" {
  description = "The AMI ID for the instance"
  default     = "ami-0913c47048d853921" // Amazon Linux 2 AMI ID
}

variable "instance_type" {
  description = "The instance type for the instance"
  default     = "t2.micro"
}

variable "key_name_instance" {
  description = "The key pair name for the instance"
  default     = "my-nick-test-key"
}

.

Security_group.tf

 This will create a new security group named newcpanel in us-west-2, with inbound rules similar to cPanel's

resource "aws_security_group" "newcpanel" {
  name        = "newcpanel"
  description = "Allow inbound traffic"
  vpc_id      = aws_vpc.vpc2.id

  // POP3 TCP 110
  ingress {
    from_port   = 110
    to_port     = 110
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 20
  ingress {
    from_port   = 20
    to_port     = 20
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 587
  ingress {
    from_port   = 587
    to_port     = 587
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // DNS (TCP) TCP 53
  ingress {
    from_port   = 53
    to_port     = 53
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // SMTPS TCP 465
  ingress {
    from_port   = 465
    to_port     = 465
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // HTTPS TCP 443
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // DNS (UDP) UDP 53
  ingress {
    from_port   = 53
    to_port     = 53
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // IMAP TCP 143
  ingress {
    from_port   = 143
    to_port     = 143
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // IMAPS TCP 993
  ingress {
    from_port   = 993
    to_port     = 993
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 21
  ingress {
    from_port   = 21
    to_port     = 21
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 2086
  ingress {
    from_port   = 2086
    to_port     = 2086
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 2096
  ingress {
    from_port   = 2096
    to_port     = 2096
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // HTTP TCP 80
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // SSH TCP 22
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // POP3S TCP 995
  ingress {
    from_port   = 995
    to_port     = 995
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 2083
  ingress {
    from_port   = 2083
    to_port     = 2083
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 2087
  ingress {
    from_port   = 2087
    to_port     = 2087
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 2095
  ingress {
    from_port   = 2095
    to_port     = 2095
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // Custom TCP 2082
  ingress {
    from_port   = 2082
    to_port     = 2082
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "newcpanel_sg_id" {
  value       = aws_security_group.newcpanel.id
  description = "The ID of the security group 'newcpanel'"
}

.

.

Outputs.tf

 We want some information to be output when the machines are created, such as the assigned public addresses. Terraform also needs outputs defined at the root to surface values coming from the module. In Ansible you aren't forced to do this, but in Terraform you effectively are.

output "public_ips" {
  value       = module.web_server.public_ips
  description = "List of public IP addresses for the instances."
}

.

Okay so now we want to create the scalable ec2

 Upon deployment in us-west-2, which is essentially for HA purposes.
 You want the key pair to be used
 And the security group we defined earlier to be added to the instance.

We create a modules/instance directory and inside it define the instances as resources.

 Now there are a couple of ways to do this, depending on how you grew your infrastructure out. If all your machines are the same, then you don't need a resource block for each instance, which can make the code uglier to manage. You can use the count attribute and simply add or subtract inside the root main.tf, where instance_count = 2 is defined under the module block.

modules/instance/main.tf

resource "aws_instance" "Tailor-Server" {
  count = var.instance_count // Control the number of instances with a variable

  ami                    = var.ami_id
  instance_type          = var.instance_type
  subnet_id              = var.subnet_id
  key_name               = var.key_name
  vpc_security_group_ids = [var.security_group_id]

  tags = {
    Name = format("Tailor-Server%02d", count.index + 1) // Naming instances with a sequential number
  }

  root_block_device {
    volume_type           = "gp2"
    volume_size           = 30
    delete_on_termination = true
  }
}
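The root outputs.tf references module.web_server.public_ips, so the instance module also needs to expose that value. The module's outputs file isn't shown in this post, so here is a minimal sketch of what modules/instance/outputs.tf could look like:

output "public_ips" {
  value       = aws_instance.Tailor-Server[*].public_ip
  description = "Public IP addresses of the instances created by this module."
}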

.

Modules/instance/variables.tf

Each variable serves as an input that can be set externally when the module is called, allowing for flexibility and reusability of the module across different environments or scenarios.

So here we are defining the list of inputs the module needs in order to work. We provide the actual values for these variables when the module is called in the root main.tf.

Cheat sheet:

ami_id: Specifies the Amazon Machine Image (AMI) ID that will be used to launch the EC2 instances. The AMI determines the operating system and software configurations that will be loaded onto the instances when they are created.

instance_type: Determines the type of EC2 instance to launch. This affects the computing resources available to the instance (CPU, memory, etc.).

Type: It is expected to be a string that matches one of AWS’s predefined instance types (e.g., t2.micro, m5.large).

key_name: Specifies the name of the key pair to be used for SSH access to the EC2 instances. This key should already exist in the AWS account.

subnet_id: Identifies the subnet within which the EC2 instances will be launched. The subnet is part of a specific VPC (Virtual Private Cloud).

instance_names: A list of names to be assigned to the instances. This helps in identifying the instances within the AWS console or when querying using the AWS CLI.

security_group_id: Specifies the ID of the security group to attach to the EC2 instances. Security groups act as a virtual firewall for your instances, controlling inbound and outbound traffic.

 We are also adding a count here so we can scale EC2 instances very efficiently; especially if you have a lot of hands working in the pot, it keeps things very easy to manage.

variable "ami_id" {}

variable "instance_type" {}

variable "key_name" {}

variable "subnet_id" {}

variable "instance_names" {
  type        = list(string)
  description = "List of names for the instances to create."
  // Note: not passed by the module block in the root main.tf shown above; remove it or give it a default if unused.
}

variable "security_group_id" {
  description = "Security group ID to assign to the instance"
  type        = string
}

variable "instance_count" {
  description = "The number of instances to create"
  type        = number
  default     = 1 // Default to one instance if not specified
}
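With that in place, scaling is just a change to one line in the root main.tf module block followed by another apply, for example:

  instance_count = 4 // was 2; Terraform adds Tailor-Server03 and Tailor-Server04 on the next apply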

.

Time to deploy your code. I didn't bother showing the plan here, just the apply:

my-terraform-vpc$ terraform apply

Do you want to perform these actions?

  Terraform will perform the actions described above.

  Only ‘yes’ will be accepted to approve.

.

  Enter a value: yes

.

aws_subnet.newsubnet: Destroying… [id=subnet-016181a8999a58cb4]

aws_subnet.newsubnet: Destruction complete after 1s

aws_subnet.newsubnet: Creating…

aws_subnet.newsubnet: Still creating… [10s elapsed]

aws_subnet.newsubnet: Creation complete after 11s [id=subnet-0a5914443d2944510]

module.web_server.aws_instance.Tailor-Server[1]: Creating…

module.web_server.aws_instance.Tailor-Server[0]: Creating…

module.web_server.aws_instance.Tailor-Server[1]: Still creating… [10s elapsed]

module.web_server.aws_instance.Tailor-Server[0]: Still creating… [10s elapsed]

module.web_server.aws_instance.Tailor-Server[0]: Still creating… [20s elapsed]

module.web_server.aws_instance.Tailor-Server[1]: Still creating… [20s elapsed]

module.web_server.aws_instance.Tailor-Server[1]: Still creating… [30s elapsed]

module.web_server.aws_instance.Tailor-Server[0]: Still creating… [30s elapsed]

module.web_server.aws_instance.Tailor-Server[0]: Still creating… [40s elapsed]

module.web_server.aws_instance.Tailor-Server[1]: Still creating… [40s elapsed]

module.web_server.aws_instance.Tailor-Server[1]: Still creating… [50s elapsed]

module.web_server.aws_instance.Tailor-Server[0]: Still creating… [50s elapsed]

module.web_server.aws_instance.Tailor-Server[0]: Creation complete after 52s [id=i-0d103937dcd1ce080]

module.web_server.aws_instance.Tailor-Server[1]: Still creating… [1m0s elapsed]

module.web_server.aws_instance.Tailor-Server[1]: Still creating… [1m10s elapsed]

module.web_server.aws_instance.Tailor-Server[1]: Creation complete after 1m12s [id=i-071bac658ce51d415]

.

Apply complete! Resources: 3 added, 0 changed, 1 destroyed.

.

Outputs:

.

newcpanel_sg_id = “sg-0df86c53b5de7b348”

public_ips = [

  “34.219.34.165”,

  “35.90.247.94”,

]

.

Results:

VPC successful:

EC2 successful:

Security-Groups:

Key Pairs:

Ec2 assigned SG group:

How to deploy an EC2 instance in AWS with Terraform

.

  • How to install Terraform
  • How to configure your AWS CLI
  • How to set up your file structure
  • How to deploy your instance
  • You must have an AWS account already set up
    • You have an existing VPC
    • You have existing security groups

It depends on which machine you like to use; I use varied distros for fun.

For this we will use Ubuntu 22.04.

How to install terraform

  • Once you are logged into your Linux jump box (or whatever you choose to manage from), add the HashiCorp repo and install Terraform:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install terraform

.

ThanosJumpBox:~/myterraform$ terraform -v

Terraform v1.8.2

on linux_amd64

+ provider registry.terraform.io/hashicorp/aws v5.47.

  • Okay next you want to install the awscli

sudo apt update

sudo apt install awscli

2. Okay, now you need to go into AWS and create a user and an AWS CLI access key

  • Log into your AWS console
  • Go to IAM
    • Under Users, create a user called Terrform-thanos

Next you want to either create a group or add the user to an existing one. To make things easy, for now we are going to add it to the administrators group.

Next click on the new user and create the ACCESS KEY

Next select the use case for the key

Once you create the ACCESS-KEY you will see the key and secret

Copy these to a text pad and save them somewhere safe.

Next we are going to create the RSA key pair

  • Go under EC2 Dashboard
  • Then Network & Security
  • Then Key Pairs
  • Create a new key pair and give it a name

Now configure your Terrform to use the credentials

thanosjumpbox-myterraform$ aws configure

AWS Access Key ID [****************RKFE]:

AWS Secret Access Key [****************aute]:

Default region name [us-west-1]:

Default output format [None]:

.

So a good terraform file structure to use in work environment would be

my-terraform-project/
├── main.tf
├── variables.tf
├── outputs.tf
├── provider.tf
├── modules/
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── ec2/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── prod/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── terraform.tfstate
├── terraform.tfvars
└── .gitignore

That said, for the purposes of this post we will keep it simple. I will be adding separate posts to deploy VPCs, autoscaling groups, security groups, etc.

This is also very easy to work with if you use VS Code to connect to your Linux machine.

mkdir myterraform

cd myterraform

touch main.tf outputs.tf variables.tf

.

So we are going to create an instance as follows:

 An EC2 in my existing VPC
 Using an Amazon Linux 2023 AMI (the AMI catalog will have the ID)
 ami-0827b6c5b977c020e (ID)
 t2.micro instance type
 using a subnet that is available in the us-west-1 zone for my VPC
 You can find the ID in the console under VPC > Subnets
 Security groups, again, will be found under Network & Security > Security Groups
 Use a general-purpose 30G SSD volume
 Which is using a custom security group I created earlier
 The outputs will be provided via the outputs.tf

Main.tf

provider "aws" {
  region = var.region
}

resource "aws_instance" "my_instance" {
  ami           = "ami-0827b6c5b977c020e"  # Use a valid AMI ID for your region
  instance_type = "t2.micro"               # Free Tier eligible instance type
  key_name      = ""                       # Ensure this key pair is already created in your AWS account

  subnet_id              = "subnet-0e80683fe32a75513"  # Ensure this is a valid subnet in your VPC
  vpc_security_group_ids = ["sg-0db2bfe3f6898d033"]    # Ensure this is a valid security group ID

  tags = {
    Name = "thanos-lives"
  }

  root_block_device {
    volume_type = "gp2"  # General Purpose SSD, which is included in the Free Tier
    volume_size = 30     # Maximum size covered by the Free Tier
  }
}

.

Outputs.tf

output "instance_ip_addr" {
  value       = aws_instance.my_instance.public_ip
  description = "The public IP address of the EC2 instance."
}

output "instance_id" {
  value       = aws_instance.my_instance.id
  description = "The ID of the EC2 instance."
}

output "first_security_group_id" {
  value       = tolist(aws_instance.my_instance.vpc_security_group_ids)[0]
  description = "The first Security Group ID associated with the EC2 instance."
}

.

Variables.tf

variable "region" {
  description = "The AWS region to create resources in."
  default     = "us-west-1"
}

variable "ami_id" {
  description = "The AMI ID to use for the server."
}

.

.

terraform.tfvars

region = "us-west-1"

ami_id = "ami-0827b6c5b977c020e"  # Replace with your chosen AMI ID
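Note that main.tf above hardcodes the AMI ID, so the ami_id variable and the value in terraform.tfvars are not actually consumed. If you want the tfvars value to drive the instance, reference the variable in the resource instead (one-line sketch):

  ami = var.ami_id  # instead of the hardcoded "ami-0827b6c5b977c020e"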

.

.

Deploying your code:

thanosjumpbox:~/my-terraform$ terraform init

.

Initializing the backend…

.

Initializing provider plugins…

Reusing previous version of hashicorp/aws from the dependency lock file

Using previously-installed hashicorp/aws v5.47.0

.

Terraform has been successfully initialized!

.

You may now begin working with Terraform. Try running “terraform plan” to see

any changes that are required for your infrastructure. All Terraform commands

should now work.

.

If you ever set or change modules or backend configuration for Terraform,

rerun this command to reinitialize your working directory. If you forget, other

commands will detect it and remind you to do so if necessary.

thanosjumpbox:~/my-terraform$

.

thanosjumpbox:~/my-terraform$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:

  + create

.

Terraform will perform the following actions:

.

  # aws_instance.my_instance will be created

  + resource “aws_instance” “my_instance” {

      + ami                                  = “ami-0827b6c5b977c020e”

      + arn                                  = (known after apply)

      + associate_public_ip_address          = (known after apply)

      + availability_zone                    = (known after apply)

      + cpu_core_count                       = (known after apply)

      + cpu_threads_per_core                 = (known after apply)

      + disable_api_stop                     = (known after apply)

      + disable_api_termination              = (known after apply)

      + ebs_optimized                        = (known after apply)

      + get_password_data                    = false

      + host_id                              = (known after apply)

      + host_resource_group_arn              = (known after apply)

      + iam_instance_profile                 = (known after apply)

      + id                                   = (known after apply)

      + instance_initiated_shutdown_behavior = (known after apply)

      + instance_lifecycle                   = (known after apply)

      + instance_state                       = (known after apply)

      + instance_type                        = “t2.micro

      + ipv6_address_count                   = (known after apply)

      + ipv6_addresses                       = (known after apply)

      + key_name                             = “nicktailor-aws”

      + monitoring                           = (known after apply)

      + outpost_arn                          = (known after apply)

      + password_data                        = (known after apply)

      + placement_group                      = (known after apply)

      + placement_partition_number           = (known after apply)

      + primary_network_interface_id         = (known after apply)

      + private_dns                          = (known after apply)

      + private_ip                           = (known after apply)

      + public_dns                           = (known after apply)

      + public_ip                            = (known after apply)

      + secondary_private_ips                = (known after apply)

      + security_groups                      = (known after apply)

      + source_dest_check                    = true

      + spot_instance_request_id             = (known after apply)

      + subnet_id                            = “subnet-0e80683fe32a75513”

      + tags                                 = {

          + “Name” = “Thanos-lives”

        }

      + tags_all                             = {

          + “Name” = “Thanos-lives”

        }

      + tenancy                              = (known after apply)

      + user_data                            = (known after apply)

      + user_data_base64                     = (known after apply)

      + user_data_replace_on_change          = false

      + vpc_security_group_ids               = [

          + “sg-0db2bfe3f6898d033”,

        ]

.

      + root_block_device {

          + delete_on_termination = true

          + device_name           = (known after apply)

          + encrypted             = (known after apply)

          + iops                  = (known after apply)

          + kms_key_id            = (known after apply)

          + tags_all              = (known after apply)

          + throughput            = (known after apply)

          + volume_id             = (known after apply)

          + volume_size           = 30

          + volume_type           = “gp2”

        }

    }

.

Plan: 1 to add, 0 to change, 0 to destroy.

.

Changes to Outputs:

  + first_security_group_id = “sg-0db2bfe3f6898d033”

  + instance_id             = (known after apply)

  + instance_ip_addr        = (known after apply)

.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

.

Note: You didn’t use the -out option to save this plan, so Terraform can’t guarantee to take exactly these actions if you run “terraform

apply” now.

.

thanosjumpbox:~/my-terraform$ terraform apply

.

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:

  + create

.

Terraform will perform the following actions:

.

  # aws_instance.my_instance will be created

  + resource “aws_instance” “my_instance” {

      + ami                                  = “ami-0827b6c5b977c020e”

      + arn                                  = (known after apply)

      + associate_public_ip_address          = (known after apply)

      + availability_zone                    = (known after apply)

      + cpu_core_count                       = (known after apply)

      + cpu_threads_per_core                 = (known after apply)

      + disable_api_stop                     = (known after apply)

      + disable_api_termination              = (known after apply)

      + ebs_optimized                        = (known after apply)

      + get_password_data                    = false

      + host_id                              = (known after apply)

      + host_resource_group_arn              = (known after apply)

      + iam_instance_profile                 = (known after apply)

      + id                                   = (known after apply)

      + instance_initiated_shutdown_behavior = (known after apply)

      + instance_lifecycle                   = (known after apply)

      + instance_state                       = (known after apply)

      + instance_type                        = “t2.micro

      + ipv6_address_count                   = (known after apply)

      + ipv6_addresses                       = (known after apply)

      + key_name                             = “nicktailor-aws”

      + monitoring                           = (known after apply)

      + outpost_arn                          = (known after apply)

      + password_data                        = (known after apply)

      + placement_group                      = (known after apply)

      + placement_partition_number           = (known after apply)

      + primary_network_interface_id         = (known after apply)

      + private_dns                          = (known after apply)

      + private_ip                           = (known after apply)

      + public_dns                           = (known after apply)

      + public_ip                            = (known after apply)

      + secondary_private_ips                = (known after apply)

      + security_groups                      = (known after apply)

      + source_dest_check                    = true

      + spot_instance_request_id             = (known after apply)

      + subnet_id                            = “subnet-0e80683fe32a75513”

      + tags                                 = {

          + “Name” = “Thanos-lives”

        }

      + tags_all                             = {

          + “Name” = “Thanos-lives”

        }

      + tenancy                              = (known after apply)

      + user_data                            = (known after apply)

      + user_data_base64                     = (known after apply)

      + user_data_replace_on_change          = false

      + vpc_security_group_ids               = [

          + “sg-0db2bfe3f6898d033”,

        ]

.

      + root_block_device {

          + delete_on_termination = true

          + device_name           = (known after apply)

          + encrypted             = (known after apply)

          + iops                  = (known after apply)

          + kms_key_id            = (known after apply)

          + tags_all              = (known after apply)

          + throughput            = (known after apply)

          + volume_id             = (known after apply)

          + volume_size           = 30

          + volume_type           = “gp2”

        }

    }

.

Plan: 1 to add, 0 to change, 0 to destroy.

.

Changes to Outputs:

  + first_security_group_id = “sg-0db2bfe3f6898d033”

  + instance_id             = (known after apply)

  + instance_ip_addr        = (known after apply)

.

Do you want to perform these actions?

  Terraform will perform the actions described above.

  Only ‘yes’ will be accepted to approve.

.

  Enter a value: yes

.

aws_instance.my_instance: Creating…

aws_instance.my_instance: Still creating… [10s elapsed]

aws_instance.my_instance: Still creating… [20s elapsed]

aws_instance.my_instance: Creation complete after 22s [id=i-0ee382e24ad28ecb8]

.

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

.

Outputs:

.

first_security_group_id = “sg-0db2bfe3f6898d033”

instance_id = “i-0ee382e24ad28ecb8”

instance_ip_addr = “50.18.90.217”

Result:

TightVNC Security Hole

Virtual Network Computing (VNC) is a graphical desktop-sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits the keyboard and mouse input from one computer to another, relaying the graphical-screen updates, over a network.[1]

VNC servers work on a variety of platforms, allowing you to share screens and keyboards between Windows, Mac, Linux, and Raspberry Pi devices. RDP server software is proprietary and only works with one operating system. As for VNC vs RDP performance, RDP provides a better and faster remote connection.

There are a number of reasons why people use it.

 RDP requires licenses and VNC does not.
 You can also have multiple sessions for a single user
 You can set it so it will connect to an existing session (which is what a lot of folks use it for)
 It can be used on multiple OSes, including Linux, while RDP is just for Windows

There are a few VNC tools out there.

RealVNC

 They have an enterprise version
 Requires licenses
 Has no AD authentication
 Has decent encryption

UltraVNC – the best one to use.

 Has AD authentication
 Has good encryption
 File transfer inside the VNC connection
 Multi-user connections and connecting to existing sessions
 Loads of features the others don't have
 Is considered the most secure
 Free for personal and commercial use
 Available through the Chocolatey package manager

Tight-VNC – Security Hole

 Free
 Has encryption, but it's DES with an 8-character password limit
 Available through chocolatey package manager

.

TightVNC has its DES key hardcoded into the software, and it appears they have NOT updated their encryption standards in years.

.

.

DES Encryption used

# This DES key is hardcoded in VNC applications like TightVNC.
    $magicKey = [byte[]]@(0xE8, 0x4A, 0xD6, 0x60, 0xC4, 0x72, 0x1A, 0xE0)
    $ansi = [System.Text.Encoding]::GetEncoding(
        [System.Globalization.CultureInfo]::CurrentCulture.TextInfo.ANSICodePage)

    $pass = [System.Net.NetworkCredential]::new('', $Password).Password
    $byteCount = $ansi.GetByteCount($pass)
    if ($byteCount -gt 8) {
        $err = [System.Management.Automation.ErrorRecord]::new(
            [ArgumentException]'Password must not exceed 8 characters',
            'PasswordTooLong',
            [System.Management.Automation.ErrorCategory]::InvalidArgument,
            $null)
        $PSCmdlet.WriteError($err)
        return
    }

    $toEncrypt = [byte[]]::new(8)
    $null = $ansi.GetBytes($pass, 0, $pass.Length, $toEncrypt, 0)

    $des = $encryptor = $null
    try {
        $des = [System.Security.Cryptography.DES]::Create()
        $des.Padding = 'None'
        $encryptor = $des.CreateEncryptor($magicKey, [byte[]]::new(8))

        $data = [byte[]]::new(8)
        $null = $encryptor.TransformBlock($toEncrypt, 0, $toEncrypt.Length, $data, 0)

        , $data
    }
    finally {
        if ($encryptor) { $encryptor.Dispose() }
        if ($des) { $des.Dispose() }
    }
}   # closes the enclosing function (not shown in this excerpt)

.

What this means is: IF you are using admin credentials on your machine while using TightVNC, a hacker far better than I could gain access to your infrastructure by simply glimpsing the Windows registry. I'm sure there are more ways to exploit it.

I will demonstrate:

Now, you can install TightVNC manually or via Chocolatey. I used Chocolatey, pulling from a publicly available repo.


Now let's set the password: right-click the TightVNC icon in the bottom corner, click "Change Primary Password", and type in whatever 8-character password you like, for example

'Suck3r00'


Now let's open PowerShell without administrator privileges. Let's say I got in remotely, Chocolatey is there, and I want to check whether TightVNC is installed.
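A quick way to check from an unprivileged shell (assuming Chocolatey is on the PATH; the exact flag differs slightly between Chocolatey 1.x and 2.x):

choco list --local-only | findstr /i tightvnc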

.


.

As you can see, I can find this without administrator privileges.

.

Now let's say I was able to view the registry and grab the encrypted password value for TightVNC; all I need is to see it for a few seconds.
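For reference, TightVNC typically keeps the DES-encrypted value in the registry at a path like the one below (path and value name may vary by version), and a standard user can often read it:

reg query "HKLM\SOFTWARE\TightVNC\Server" /v Password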

.

.

There have been tools online to convert that hexadecimal into binary/decimal values since long before AI was around, but since I love GPT, I'm going to ask it to convert it for me.


I have a script that didn't take long to put together after digging around online for about an hour. I'm obviously not going to share it, BUT if I can do it, someone with skills could do it pretty easily. A professional hacker? NO SWEAT.


As you can see, if you have rolled this out, it is dangerous.

Having said that, I have also written an Ansible role which will purge TightVNC from your infrastructure and deploy UltraVNC configured with encryption and AD authentication, which the other two currently do NOT do.

.

Hope you enjoyed getting P0WNed.


How to Deploy VMs in Hyper-V with Ansible

Thought it would be fun to do...

If you can find another public repo that has this working online, please send me a message so I can kick myself.

 This role will allow you to use a vhdx image to deploy VMs in Hyper-V
 It will use the VM name to create a sub-folder to place the new VM image in
 It will configure the network switch
 It will set up the VLAN tag/ID and enable it
 It will also set the smart-paging file location to the destination path of the VM
 It will configure the OS network configuration
 It will power on the machine and wait for a successful response
 It can also remove VMs
 You can also call the role with tags if you want

.

How to use this role: the ansible-hyperv repo is set to private; you must request access.

1. You must first download the git repository into your roles directory, usually ansible/roles/
2. Now you want to edit (or create, if it doesn't exist) the hosts file under your "ansible/inventory/dev:staging:prod" directory. This is a good way to separate environments with Ansible; inside each environment you should have a hosts file as indicated below.

Example files: hosts.dev, hosts.staging, hosts.prod

b. Put your server under the appropriate group inside the file and save
i. testmachine1.nicktailor.com ansible_host=192.168.1.101 (the IP points to the hypervisor)

Note: If there is no group, simply list the server outside any grouping; the --limit flag will pick it

up.

3. Now inside this directory you should see hosts, host_vars, and group_vars

Descriptions:

c. Hosts – where you list your servers under specific groups, which tells the playbook what the server is, whether a specific task should run on it, and how to find it.
d. Host_vars – inside this directory you create a file per server, named as the server is listed under hosts. In these files you pass variable parameters to the specific roles when running your playbook; without them the playbook can't do the tasks you want.
e. Group_vars – a way to group variables for sets of servers; this keeps the code cleaner and easier to manage.

Operational Use:

4. Move inside host_vars
f. cd host_vars
g. create a file called {{ servername }} and save it; for us it's testmachine1.nicktailor.com
h. add the following parameters to the file and save.

passed parameters: example: inventory/host_vars/testmachine.nicktailor.com

vms:
  - type: testserver
    name: nicktest

    cpu: 2
    memory: 4096MB

    network:
      ip: 192.168.23.26
      netmask: 255.255.255.0
      gateway: 192.168.23.254
      dns: 192.168.0.17,192.168.0.18

#    network_switch: 'External Virtual Switch'
    network_switch: 'Cisco VIC Ethernet Interface #6 - Virtual Switch'
    vlanid: 1113

#   source image
    src_vhd: 'Z:\volumes\devops\devopssysprep\devopssysprep.vhdx'

#   destination will be created in Z:\volumes\servername\servername.vhdx by default
#   to change the paths you need to update the prov_vm.yml's first three task paths

Running your playbook:

1. You must always run your playbook from inside the parent directory, "ansible".
2. There is a playbook called createvm.yml in the ansible directory which simply calls the ansible-hyperv role inside the roles directory.

Example: of ansible/createvm.yml

- name: Provision VM
  hosts: hypervdev.nicktailor.com
  gather_facts: no

  tasks:
    - import_tasks: roles/ansible-hyperv/tasks/prov_vm.yml

.

Command:

ansible-playbook -i inventory/dev/hosts createvm.yml --limit='testmachine1.nicktailor.com'

 -i : tells the ansible-playbook command which hosts file to use; these are always defined by environment, like hosts.dev or hosts.staging
 -u : the ssh_user you will be connecting to the servers with
 -Kkb : tells Ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
 --ask-become : become root
 --limit='server' : allows you to segment which server you want to run the playbook against.
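Putting those flags together, and using tags as mentioned at the top of this section, a fuller invocation might look like this (the ssh user and the tag name are illustrative only; check the role's tasks for the real tag names):

ansible-playbook -i inventory/dev/hosts createvm.yml -u ntailor -Kkb --ask-become --limit='testmachine1.nicktailor.com' --tags='poweron'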

.

Successful example run of the playbook:

.

[ntailor@ansible-home ~]$ ansible-playbook -i inventory/hosts createvm.yml --limit='testmachine1.nicktailor.com'

.

PLAY [Provision VM] ****************************************************************************************************************************************************************

.

TASK [Create directory structure] **************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Check whether vhdx already exists] *******************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Clone vhdx] ******************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘changed’: False, ‘invocation’: {module_args: {‘path’: ‘Z:\\\\volumes\\\\devops\\nicktest\\nicktest.vhdx, checksum_algorithm: ‘sha1’, get_checksum: False, ‘follow’: False, ‘get_md5’: False}}, ‘stat’: {‘exists’: False}, ‘failed’: False, ‘item’: {‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx}, ansible_loop_var: ‘item’})

.

TASK [set_fact] ********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [debug] ***********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => {

    path_folder: “Z:\\\\volumes\\\\devops\\nicktest\\nicktest.vhdx”

}

.

TASK [set_fact] ********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [debug] ***********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => {

    page_folder: “Z:\\\\volumes\\\\devops\\nicktest”

}

.

TASK [Create VMs] ******************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Set SmartPaging File Location for new Virtual Machine to use destination image path] *****************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Set Network VlanID] **********************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Configure VMs IP] ************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [add_host] ********************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘changed’: True, ‘failed’: False, ‘item’: {‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx}, ansible_loop_var: ‘item’})

.

TASK [Poweron VMs] *****************************************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [Wait for VM to be running] ***************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com -> localhost] => (item={‘type’: testservers, ‘name’: nicktest, cpu: 2, ‘memory’: ‘4096MB’, ‘network’: {ip: ‘192.168.23.36’, ‘netmask’: ‘255.255.255.0’, ‘gateway’: ‘192.168.23.254’, dns: 192.168.0.17,192.168.0.18}, network_switch: ‘Cisco VIC Ethernet Interface #6 – Virtual Switch’, vlanid: 1113, ‘src_vhd: ‘C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx})

.

TASK [debug] ***********************************************************************************************************************************************************************

ok: [testmachine1.nicktailor.com] => {

    “wait”: {

        “changed”: false,

        msg: “All items completed”,

        “results”: [

            {

                ansible_loop_var: “item”,

                “changed”: false,

                “elapsed”: 82,

                “failed”: false,

                “invocation”: {

                    module_args: {

                        active_connection_states: [

                            “ESTABLISHED”,

                            “FIN_WAIT1”,

                            “FIN_WAIT2”,

                            “SYN_RECV”,

                            “SYN_SENT”,

                            “TIME_WAIT”

                        ],

                        connect_timeout: 5,

                        “delay”: 0,

                        exclude_hosts: null,

                        “host”: “192.168.23.36”,

                        msg: null,

                        “path”: null,

                        “port”: 5986,

                        search_regex: null,

                        “sleep”: 1,

                        “state”: “started”,

                        “timeout”: 100

                    }

                },

                “item”: {

                    cpu: 2,

                    “memory”: “4096MB”,

                    “name”: nicktest,

                    “network”: {

                        dns: 192.168.0.17,192.168.0.18,

                        “gateway”: “192.168.23.254”,

                        ip: “192.168.23.36”,

                        “netmask”: “255.255.255.0”

                    },

                    network_switch: “Cisco VIC Ethernet Interface #6 – Virtual Switch”,

                    src_vhd: “C:\\volumes\\devops\\devopssysprep\\devopssysprep.vhdx”,

                    “type”: testservers,

                    vlanid: 1113

                },

                match_groupdict: {},

                match_groups: [],

                “path”: null,

                “port”: 5986,

                search_regex: null,

                “state”: “started”

            }

        ]

    }

}

.

PLAY RECAP *************************************************************************************************************************************************************************

testmachine1.nicktailor.com      : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

.

.

.

How to Configure Redhat 7 & 8 Network Interfaces using Ansible

 This role will configure Red Hat 7 and up interfaces for virtual and physical machines
(bonded NICs, gateways, routes, interface names).

How to use this role:

1.You must first download the git repository into your roles directory, usually ansible/role/
2.Now edit the hosts.client file, or create it if it doesn't exist, under your "ansible/inventory/dev:staging:prod" directory. This is a good way to separate environments with Ansible; inside each environment you should have a hosts file as indicated below.

Example file: hosts.dev, hosts.staging, hosts.prod

c.Put your server under the appropriate group inside the file and save
d.testmachine1 ansible_host=192.168.1.101

.

Cool Stuff: If you deployed a virtual machine using the ansible-vmware modules, it will set the hostname of the host using the same shortname as the VM. If you require the FQDN rather than the shortname on the host, I added some code to set the FQDN as the new_hostname if you define it under your hosts file as shown below.

e.testmachine1 ansible_host=192.168.1.101 new_hostname=testmachine1.nicktailor.com

.

Now inside this directory you should see hosts & host_vars, group_vars

Descriptions:

f.Hosts – where you list your servers under specific groups, which tells the playbook what the server is, whether a specific task should run on it, and how to find it.
g.Host_vars – inside this directory you list each server by the name it has under hosts. Inside these files you pass variable parameters to the specific roles when running your playbook. Without these the playbook can't do the tasks you want it to.
h.Group_vars – a way to group variables for sets of servers; this keeps the code cleaner and easier to manage.

Operational Use:

3.Move inside host_vars
i.cd host_vars
j.create a file called {{ servername }} and save it; for us it's testmachine1.nicktailor.com
k.add the following parameters to your inventory file and save.

passed parameters: example: var/testmachine1

#Configure network; can be used on physical and virtual machines
nic_devices:
    - device: ens192
      ip: 192.168.10.100
      nm: 255.255.255.0
      gw: 192.168.10.254
      uuid:
      mac:

..

Note: you do not need to specify the UUID, though you can if you wish. You do need the MAC if you are doing bonded NICs on the hosts. If you are using physical machines with Satellite deployments, it's probably a good idea to use the MAC of the NIC you want the DHCP request to hit, to avoid accidentally deploying to the wrong host. When dealing with physical machines you don't have the same forgiveness of snapshots or quick rebuilds as with a VM. You can do more complicated configurations as indicated below. You can always email or contact me via LinkedIn (top right of the blog) if you need assistance.

More Advanced configurations: bonded nics, routes, multiple nics and gateways

bond_devices:
    - device: ens1
      mac: ec:0d:9a:05:3b:f0
      master: mgt
      eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
    - device: ens1d1
      mac: ec:0d:9a:05:3b:f1
      master: mgt
      eth_opts: '-C ${DEVICE} adaptive-rx off rx-usecs 0 rx-frames 0; -K ${DEVICE} lro off'
    - device: mgt
      ip: 10.100.1.2
      nm: 255.255.255.0
      gw: 10.100.1.254
      pr: ens1
    - device: ens6
      mac: ec:0d:9a:05:16:g0
      master: app
    - device: ens6d1
      mac: ec:0d:9a:05:16:g1
      master: app
    - device: app
      ip: 10.101.1.3
      nm: 255.255.255.0
      pr: ens6

routes:
    - device: app
      route:
        - 100.240.136.0/24
        - 100.240.138.0/24

    - device: app
      gw: 10.156.177.1
      route:
        - 10.156.148.0/24

.

.
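For context, the routes entries above end up as static route files on the host. Assuming the role renders ip-route style route-<device> files (a sketch only; check the role's templates for the exact format it writes), the result for the app device would look something like:

/etc/sysconfig/network-scripts/route-app
100.240.136.0/24 dev app
100.240.138.0/24 dev app
10.156.148.0/24 via 10.156.177.1 dev app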

Running your playbook:

1.You must always run your playbook from inside the parent directory "ansible".
2.Now there is a playbook called setup-networkonly.yml in the ansible directory which simply calls the setup-redhat-interfaces role inside the roles directory.

Example: of ansible/setup-networkonly.yml

- hosts: all
  gather_facts: no
  roles:
    - role: setup-redhat-interfaces

.

Command:

ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com'

.

 -i : tells the ansible-playbook command which hosts file to use; these are always defined by environment, like hosts.dev or hosts.staging
 -u : the ssh_user you will be connecting to the servers with
 -Kkb : tells Ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
 --ask-become : become root
 --limit='server' : allows you to segment which server you want to run the playbook against.

.

.

Test Run:

[root@ansible-home]# ansible-playbook -i inventory/dev/hosts setup-networkonly.yml --limit='testmachine1.nicktailor.com' -k

SSH password:

.

PLAY [all] *************************************************************************************************************************************************************************

.

TASK [setup-redhat-network : Gather facts] ************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : set_fact] ****************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : Cleanup network confguration] ********************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : find] ********************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : file] ********************************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={u’rusr: True, u’uid: 0, u’rgrp: True, u’xoth: False, u’islnk: False, u’woth: False, u’nlink: 1, u’issock: False, u’mtime: 1530272815.953706, u’gr_name: u’root‘, u’path: u’/etc/sysconfig/network-scripts/ifcfg-enp0s3′, u’xusr: False, u’atime: 1665494779.63, u’inode: 1055173, u’isgid: False, u’size: 285, u’isdir: False, u’ctime: 1530272816.3037066, u’isblk: False, u’wgrp: False, u’xgrp: False, u’isuid: False, u’dev: 64769, u’roth: True, u’isreg: True, u’isfifo: False, u’mode: u’0644′, u’pw_name: u’root‘, u’gid: 0, u’ischr: False, u’wusr: True})

changed: [testmachine1.nicktailor.com] => (item={u’rusr: True, u’uid: 0, u’rgrp: True, u’xoth: False, u’islnk: False, u’woth: False, u’nlink: 1, u’issock: False, u’mtime: 1530272848.538762, u’gr_name: u’root‘, u’path: u’/etc/sysconfig/network-scripts/ifcfg-enp0s8′, u’xusr: False, u’atime: 1665494779.846, u’inode: 2769059, u’isgid: False, u’size: 203, u’isdir: False, u’ctime: 1530272848.6417623, u’isblk: False, u’wgrp: False, u’xgrp: False, u’isuid: False, u’dev: 64769, u’roth: True, u’isreg: True, u’isfifo: False, u’mode: u’0644′, u’pw_name: u’root‘, u’gid: 0, u’ischr: False, u’wusr: True})

.

TASK [setup-redhat-network : file] ********************************************************************************************************************************************

ok: [testmachine1.nicktailor.com]

.

TASK [setup-redhat-network : Setup bond devices] ******************************************************************************************************************************

changed: [testmachine1.nicktailor.com] => (item={u’device: u’enp0s8′, u’mac: u’08:00:27:13:b2:73′, u’master: u’mgt‘})

changed: [testmachine1.nicktailor.com] => (item={u’device: u’enp0s9′, u’mac: u’08:00:27:e8:cf:cd’, u’master: u’mgt‘})

changed: [testmachine1.nicktailor.com] => (item={u’device: u’mgt‘, u’ip: u’192.168.10.200‘, u’nm: u’255.255.255.0′, u’gw: u’10.0.2.2′, u’pr: u’enp0s8′})

.

TASK [setup-redhat-network : Setup NIC] ***************************************************************************************************************************************

.

TASK [setup-redhat-network : Setup static routes] *****************************************************************************************************************************

.

PLAY RECAP *************************************************************************************************************************************************************************

testmachine1.nicktailor.com : ok=7    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

.

[root@testmachine1.nicktailor.com]# cat /proc/net/bonding/mgt

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

.

Bonding Mode: fault-tolerance (active-backup)

Primary Slave: enp0s8 (primary_reselect failure)

Currently Active Slave: enp0s8

MII Status: up

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

.

Slave Interface: enp0s8

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 08:00:27:13:b2:73

Slave queue ID: 0

.

Slave Interface: enp0s9

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 08:00:27:e8:cf:cd

Slave queue ID: 0

.

[root@testmachine1.nicktailor.com]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 08:00:27:63:63:0e brd ff:ff:ff:ff:ff:ff

    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3

       valid_lft 86074sec preferred_lft 86074sec

    inet6 fe80::a162:1b49:98b7:6c54/64 scope link noprefixroute

       valid_lft forever preferred_lft forever

3: enp0s8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

4: enp0s9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master mgt state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 08:00:27:05:b4:e8 brd ff:ff:ff:ff:ff:ff

6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether ae:db:dc:52:22:f8 brd ff:ff:ff:ff:ff:ff

7: mgt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000

    link/ether 08:00:27:13:b2:73 brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.200/24 brd 192.168.56.255 scope global mgt

       valid_lft forever preferred_lft forever

    inet6 fe80::a00:27ff:fe13:b273/64 scope link

       valid_lft forever preferred_lft forever

.

How to Join Windows Servers to your DC with Ansible

 This role will simply join a new Windows server to the domain.
 You simply need to define the passed parameters in defaults/main.yml as indicated below.
 This role will ask you for the domain admin password at runtime, so you will need to know it. There is no need to worry about vaulting the admin AD password in the code.
 This role assumes your Windows host is already configured to use WinRM (see the quick check below).
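A quick way to confirm WinRM connectivity is in place before running the role is an ad-hoc win_ping (this assumes your WinRM connection variables are already set for the host in your inventory or group_vars):

ansible -i inventory/dev/hosts testmachine1.nicktailor.com -m win_ping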

How to use this role:

1.You must first download the git repository into your roles directory, usually ansible/role/
2.Now edit the hosts.client file, or create it if it doesn't exist, under your "ansible/inventory/dev:staging:prod" directory. This is a good way to separate environments with Ansible; inside each environment you should have a hosts file as indicated below.

Example file: hosts.dev, hosts.staging, hosts.prod

c.Put your server under the appropriate group inside the file and save
i.testmachine1.nicktailor.com ansible_host=192.168.1.101

Note: If there is no group, simply list the server outside any grouping; the --limit flag will pick it up.

3.Now inside this directory you should see hosts & host_vars, group_vars

Descriptions:

d.Hosts – where you list your servers under specific groups, which tells the playbook what the server is, whether a specific task should run on it, and how to find it.
e.Host_vars – inside this directory you list each server by the name it has under hosts. Inside these files you pass variable parameters to the specific roles when running your playbook. Without these the playbook can't do the tasks you want it to.
f.Group_vars – a way to group variables for sets of servers; this keeps the code cleaner and easier to manage.

Operational Use:

4.Move inside host_vars
g.cd host_vars
h.create a file called {{ servername }} and save it; for us it's testmachine1.nicktailor.com
i.add the following parameters to your inventory file and save.

passed parameters: example: roles/add-server-to-dc/default/main.yml

dns_domain_name: ad.nicktailor.com

computer_name: testmachine1

domain_ou_path: “OU=Admin,DC=nicktailor,DC=local”

domain_admin_user: adminuser@nicktailor.com

state: domain

.
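For reference, here is a minimal sketch of what the role's join task could look like with these parameters (the real task lives in the role itself; this sketch assumes the stock win_domain_membership and win_reboot modules, with domain_pass coming from the vars_prompt shown further below):

- name: Join windows host to Domain Controller
  win_domain_membership:
    dns_domain_name: "{{ dns_domain_name }}"
    hostname: "{{ computer_name }}"
    domain_ou_path: "{{ domain_ou_path }}"
    domain_admin_user: "{{ domain_admin_user }}"
    domain_admin_password: "{{ domain_pass }}"
    state: "{{ state }}"
  register: domain_state

- name: Reboot if required
  win_reboot:
  when: domain_state.reboot_required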

Running your playbook:

1.You must always run your playbook from inside the parent directory "ansible".
2.Now there is a playbook called joinservertodomain.yml in the ansible directory which simply calls the add-servers-to-dc role inside the roles directory.

Example: of ansible/joinservertodomain.yml

- hosts: all
  gather_facts: no

  vars_prompt:
    - name: domain_pass
      prompt: Enter Admin Domain Password

  roles:
    - role: addservers-todc

.

Command:

ansible-playbook -i inventory/dev/hosts joinservertodomain.yml --limit='testmachine1.nicktailor.com'

 -i : tells the ansible-playbook command which hosts file to use; these are always defined by environment, like hosts.dev or hosts.staging
 -u : the ssh_user you will be connecting to the servers with
 -Kkb : tells Ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
 --ask-become : become root
 --limit='server' : allows you to segment which server you want to run the playbook against.

.

Successful example run of the playbook:

.

[alfred@ansible.nicktailor.com ~]$ ansible-playbook -i inventory/hosts joinservertodomain.yml --limit='testmachine1.nicktailor.com'

ansible-playbook 2.9.27

  config file = /etc/ansible/ansible.cfg

  configured module search path = [‘/home/alfred/.ansible/plugins/modules’, ‘/usr/share/ansible/plugins/modules’]

  ansible python module location = /usr/lib/python3.6/site-packages/ansible

  executable location = /usr/bin/ansible-playbook

  python version = 3.6.8 (default, Nov 10 2021, 06:50:23) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3.0.2)]

.

PLAYBOOK: joinservertodomain.yml *****************************************************************************************************************************************************

Positional arguments: joinservertodomain.yml

verbosity: 4

connection: smart

timeout: 10

become_method: sudo

tags: (‘all’,)

inventory: (‘/home/alfred/inventory/hosts’,)

subset: testmachine1.nicktailor.com

forks: 5

1 plays in joinservertodomain.yml

Enter Domain Password:

.

PLAY [all] ***********************************************************************************************************************************************************************

META: ran handlers

.

TASK [addservertodc : Join windows host to Domain Controller] ********************************************************************************************************************

task path: /home/alfred/roles/addservertodc/tasks/main.yml:1

Using module file /usr/lib/python3.6/site-packages/ansible/modules/windows/win_domain_membership.ps1

Pipelining is enabled.

<testmachine1.nicktailor.com> ESTABLISH WINRM CONNECTION FOR USER: ansibleuser on PORT 5986 TO testmachine1.nicktailor.com

EXEC (via pipeline wrapper)

changed: [testmachine1.nicktailor.com] => {

    “changed”: true,

    reboot_required: true

}

.

TASK [addservertodc : win_reboot] ************************************************************************************************************************************************

win_reboot: system successfully rebooted

changed: [testmachine1.nicktailor.com] => {

    “changed”: true,

    “elapsed”: 23,

    “rebooted”: true

}

META: ran handlers

META: ran handlers

.

PLAY RECAP ***********************************************************************************************************************************************************************

testmachine1.nicktailor.com       : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

.

.

.

.

How to deploy OpenNebula Frontends via Ansible

Frontend: This role deploys the OpenNebula Cloud platform frontends via Ansible

Ansible Operational Documentation – OpenNebula Frontend Deployments

https://opennebula.io/ – OpenNebula is basically an open-source, in-house cloud platform with which you can deploy and manage virtual machines using a KVM backend on the host, and it scales well. OpenNebula support gives you a document of manual commands to run, and would not provide the open-source playbook they use to deploy frontends.

So I reverse engineered one for others to use and edit as needed. Nobody runs commands manually anymore; if you are not automating, then you are basically a dinosaur.

Note: You will still need to buy your own enterprise license to get access to the apt source. You can find that below, and you can plug those values into defaults/main.yml before you run the playbook.

This role handles the following when deploying OpenNebula frontends in standalone or HA, using inventory groups to distinguish how to deploy at scale with Apache.

 Apache.yml – a separate task that independently deploys Apache and its configuration. This way, if you simply want to rerun the Apache configuration with a new domain, you don't need to rerun the whole playbook.
 Mysql.yml – this task uses a custom Python library that is not part of native Ansible, located inside the library folder, and handles the following:
It deploys MySQL
It changes the root password
Removes the anonymous user
Disables remote root login
Removes the test db
Creates the database and user, and grants permissions on the new database
 Main.yml – the primary task that deploys the ON frontend:
Installs dependencies for ON
Imports the keys for Ubuntu, Phusion Passenger, and ON
Installs and configures MySQL (using the custom Python library)
Installs and configures Apache for ON
Configures sunstone.conf
Configures oned.conf
Is able to distinguish between standalone and HA setups
Copies SSH keys for ON to the secondary FEs
Copies rafthook scripts to the secondary FEs
Adds the primary server to the ON zone
Updates the zone endpoint
Backs up the primary MySQL db and copies it to the secondary nodes in HA
Sets up the federation configuration if defined
Stops and starts services at specific times during the installation so everything works correctly (super important: do not change the order without reviewing)

How to use this role:

1.You must first download the git repository
b.git clone git@github.com:Perfect10NickTailor/opennebula-frontends.git
2.Under your user you will see a directory called opennebula-frontends; cd into this directory
c.cd opennebula-frontends
3.Move inside the inventory directory to the appropriate client directory
a.Create a file called hosts.opennebula
4.Now edit the {{ hosts.opennebula }} file, or create it if it doesn't exist

Example file: hosts.opennebula

d.Put your server under the appropriate group inside the file and save

Example: This is how you would list out 3 frontend hosts

[all:children]
frontend_server_primary   # where you list ON server number 1
mysql_servers             # any server that will require a mysql install for ON
apache_servers            # any server that will be running ON apache
frontend_HA               # any additional frontends that will be used in HA for OpenNebula

.

[frontend_server_primary]

Testmachine1 ansible_host=192.168.86.61

.

[mysql_servers]

Testmachine1 ansible_host=192.168.86.61

Testmachine2 ansible_host=192.168.86.62

#Testmachine3 ansibel_host=192.168.86.63

[apache_servers]

Testmachine1 ansible_host=192.168.86.61

Testmachine2 ansible_host=192.168.86.62

#Testmachine3 ansibel_host=192.168.86.63

.

[frontend_HA]

Testmachine2 ansible_host=192.168.86.62

#Testmachine3 ansible_host=192.168.86.63

.

Note: For a standalone setup you simply list the same host under the 3 groups shown below, and then in your command use --limit='testmachine1' instead of 'testmachine1,testmachine2'. The playbook is smart enough to know what to do from there.

[frontend_server_primary]

Testmachine1 ansible_host=192.168.86.63

[mysql_servers]

Testmachine1 ansible_host=192.168.86.63

[apache_servers]

Testmachine1 ansible_host=192.168.86.63

Special Notes: This playbook is designed so you can choose to deploy ON standalone, with classic centralised MySQL (HA), or as OpenNebula HA (with MySQL deployed individually and rafthook configuration).

We will be deploying the OpenNebula officially supported way, although no senior architect would usually choose this approach over classic MySQL HA (active/passive); we followed it anyway.

Important things to know:

Group variables for this role are passed in and need to be defined below. If you want to change certificates and configure MySQL, it has to be done in these group vars for this role to work. You will need to create OpenNebula SSL keys for the VNC console to work; they are not provided by this playbook (a hedged example of generating a self-signed pair follows the Frontend_server_primary vars below).

Dev/group_vars:

Frontend_server_primary

session_memcache: memcache

vnc_proxy_support_wss: true

vnc_proxy_cert_path: /etc/ssl/certs/opennebula.pem

vnc_proxy_key_path: /etc/ssl/private/opennebula.key

vnc_proxy_ipv6: false

vnc_request_password: false

driver: qcow2

.
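If you do not already have certificates for the VNC console, a self-signed pair matching the paths above can be generated with something like the following (a sketch only; the CN and validity period are placeholders you should adjust for your environment):

openssl req -x509 -nodes -newkey rsa:4096 -days 365 -keyout /etc/ssl/private/opennebula.key -out /etc/ssl/certs/opennebula.pem -subj '/CN=opennebula.example.com'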

Frontend_HA

#If these are defined, the HA setup is pushed.
#It adds VIP hooks for the floating IP and federation server ID.
#These variables can be overridden at the host_vars level (see the example below).
#If a host is listed under the frontend_HA group in your hosts file,
#then these defaults will be used.

.

leader_interface_name: enp0s8

leader_ip: 192.168.50.132/24

follower_ip: 192.168.50.132/24

follower_interface_name: enp0s8

.
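As noted above, a per-host override simply goes in that host's host_vars file, for example (the values here are purely illustrative):

Example: inventory/dev/host_vars/Testmachine2

follower_interface_name: enp0s9
follower_ip: 192.168.50.133/24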

Mysql_servers

OpenNebula Mysql Installation

mysqlrootuser: root

mysqlnewinstallpassword: Swordfish123

mysql_admin_user: admin    

mysql_admin_password: admin

database_to_create: opennebula

.

Running your playbook:

1.You must always run your playbook from inside the parent directory "ansible".
2.There is a file called commands.txt with references to help you format your command quickly.
3.Now there is a playbook called ON-frontenddeploy.yml in the ansible directory which simply calls the opennebula-frontends role inside the roles directory.

Example: of opennebula-frontend/ON-frontenddeploy.yml

- hosts: all
  become: True
  become_user: root
  gather_facts: no
  roles:
    - role: opennebula-frontend

.

Command: Running the playbook to deploy OpenNebula in HA

ansible-playbook -i inventory/dev/hosts ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1,testmachine2'

Command: Running the playbook to deploy OpenNebula in Standalone

ansible-playbook -i inventory/dev/hosts ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1'

.

-i : tells the ansible-playbook command which hosts file to use; these are always defined by customer, like hosts.opennebula2
-u : the ssh_user you will be connecting to the servers with
-Kkb : tells Ansible that you will be using sudo su - for the ssh_user when running all roles/tasks
--ask-become : become root
--limit='server' : allows you to segment which server you want to run the playbook against.

.

Successful run:

.

brucewayne@KVMtestbox:~/ansible/opennebula-frontend$ ansible-playbook -i inventory/dev/hosts.opennebula2 ON-frontenddeploy.yml -u brucewayne -Kkb --ask-become --limit='testmachine1,testmachine2'

SSH password:

BECOME password[defaults to SSH password]:

.

PLAY [all] ***************************************************************************************************************************************************************************************************************

.

TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************

ok: [testmachine2] => (item=curl)

ok: [testmachine1] => (item=curl)

ok: [testmachine1] => (item=gnupg)

ok: [testmachine2] => (item=gnupg)

changed: [testmachine1] => (item=build-essential)

ok: [testmachine1] => (item=dirmngr)

ok: [testmachine1] => (item=ca-certificates)

ok: [testmachine1] => (item=memcached)

changed: [testmachine2] => (item=build-essential)

ok: [testmachine2] => (item=dirmngr)

ok: [testmachine2] => (item=ca-certificates)

ok: [testmachine2] => (item=memcached)

.

TASK [frontend : import the opennebula apt key] **************************************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : Show Key list] ******************************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “keylist.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

ok: [testmachine2] => {

    “keylist.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

.

TASK [frontend : import the phusionpassenger apt key] ********************************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : Show Key list] ******************************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “keylist2.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “pub   rsa4096 2013-06-30 [SC]”,

        ”      1637 8A33 A6EF 1676 2922  526E 561F 9B9C AC40 B2F7″,

        “uid           [ unknown] Phusion Automated Software Signing (Used by automated tools to sign software packages) <auto-software-signing@phusion.nl>”,

        “sub   rsa4096 2013-06-30 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

ok: [testmachine2] => {

    “keylist2.stdout_lines”: [

        “/etc/apt/trusted.gpg”,

        “——————–“,

        “pub   rsa2048 2013-06-13 [SC]”,

        ”      92B7 7188 854C F23E 1634  DA89 592F 7F05 85E1 6EBF”,

        “uid           [ unknown] OpenNebula Repository <contact@opennebula.org>”,

        “sub   rsa2048 2013-06-13 [E]”,

        “”,

        “pub   rsa4096 2013-06-30 [SC]”,

        ”      1637 8A33 A6EF 1676 2922  526E 561F 9B9C AC40 B2F7″,

        “uid           [ unknown] Phusion Automated Software Signing (Used by automated tools to sign software packages) <auto-software-signing@phusion.nl>”,

        “sub   rsa4096 2013-06-30 [E]”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32″,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg”,

        “——————————————————“,

        “pub   rsa4096 2012-05-11 [SC]”,

        ”      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092″,

        “uid           [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>”,

        “”,

        “/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg”,

        “——————————————————“,

        “pub   rsa4096 2018-09-17 [SC]”,

        ”      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C”,

        “uid           [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>”

    ]

}

.

TASK [frontend : add opennebula apt repository] **************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : add bionic phusionpassenger apt repository] *************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : wget apttransporthttps cacertificates] ***************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “install2”: {

        “changed”: true,

        “cmd”: “apt-get -y install wget apt-transport-https ca-certificates”,

        “delta”: “0:00:02.087119”,

        “end”: “2022-04-06 03:13:42.512860”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:13:40.425741”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “Reading package lists…\nBuilding dependency tree…\nReading state information…\nca-certificates is already the newest version (20210119~20.04.2).\nwget is already the newest version (1.20.3-1ubuntu2).\nwget set to manually installed.\nThe following NEW packages will be installed\n  apt-transport-https\n0 to upgrade, 1 to newly install, 0 to remove and 1 not to upgrade.\nNeed to get 4,680 B of archives.\nAfter this operation, 162 kB of additional disk space will be used.\nGet:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]\nFetched 4,680 B in 0s (15.1 kB/s)\nSelecting previously unselected package apt-transport-https.\r\n(Reading database … \r(Reading database … 5%\r(Reading database … 10%\r(Reading database … 15%\r(Reading database … 20%\r(Reading database … 25%\r(Reading database … 30%\r(Reading database … 35%\r(Reading database … 40%\r(Reading database … 45%\r(Reading database … 50%\r(Reading database … 55%\r(Reading database … 60%\r(Reading database … 65%\r(Reading database … 70%\r(Reading database … 75%\r(Reading database … 80%\r(Reading database … 85%\r(Reading database … 90%\r(Reading database … 95%\r(Reading database … 100%\r(Reading database … 199304 files and directories currently installed.)\r\nPreparing to unpack …/apt-transport-https_2.0.6_all.deb …\r\nUnpacking apt-transport-https (2.0.6) …\r\nSetting up apt-transport-https (2.0.6) …”,

        “stdout_lines”: [

            “Reading package lists…”,

            “Building dependency tree…”,

            “Reading state information…”,

            “ca-certificates is already the newest version (20210119~20.04.2).”,

            “wget is already the newest version (1.20.3-1ubuntu2).”,

            “wget set to manually installed.”,

            “The following NEW packages will be installed”,

            ”  apt-transport-https”,

            “0 to upgrade, 1 to newly install, 0 to remove and 1 not to upgrade.”,

            “Need to get 4,680 B of archives.”,

            “After this operation, 162 kB of additional disk space will be used.”,

            “Get:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]”,

            “Fetched 4,680 B in 0s (15.1 kB/s)”,

            “Selecting previously unselected package apt-transport-https.”,

            “(Reading database … “,

            “(Reading database … 5%”,

            “(Reading database … 10%”,

            “(Reading database … 15%”,

            “(Reading database … 20%”,

            “(Reading database … 25%”,

            “(Reading database … 30%”,

            “(Reading database … 35%”,

            “(Reading database … 40%”,

            “(Reading database … 45%”,

            “(Reading database … 50%”,

            “(Reading database … 55%”,

            “(Reading database … 60%”,

            “(Reading database … 65%”,

            “(Reading database … 70%”,

            “(Reading database … 75%”,

            “(Reading database … 80%”,

            “(Reading database … 85%”,

            “(Reading database … 90%”,

            “(Reading database … 95%”,

            “(Reading database … 100%”,

            “(Reading database … 199304 files and directories currently installed.)”,

            “Preparing to unpack …/apt-transport-https_2.0.6_all.deb …”,

            “Unpacking apt-transport-https (2.0.6) …”,

            “Setting up apt-transport-https (2.0.6) …”

        ]

    }

}

ok: [testmachine2] => {

    “install2”: {

        “changed”: true,

        “cmd”: “apt-get -y install wget apt-transport-https ca-certificates”,

        “delta”: “0:00:02.710741”,

        “end”: “2022-04-06 03:13:43.155299”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:13:40.444558”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “Reading package lists…\nBuilding dependency tree…\nReading state information…\nca-certificates is already the newest version (20210119~20.04.2).\nwget is already the newest version (1.20.3-1ubuntu2).\nwget set to manually installed.\nThe following packages were automatically installed and are no longer required:\n  linux-headers-5.11.0-27-generic linux-hwe-5.11-headers-5.11.0-27\n  linux-image-5.11.0-27-generic linux-modules-5.11.0-27-generic\n  linux-modules-extra-5.11.0-27-generic\nUse ‘sudo apt autoremove’ to remove them.\nThe following NEW packages will be installed\n  apt-transport-https\n0 to upgrade, 1 to newly install, 0 to remove and 37 not to upgrade.\nNeed to get 4,680 B of archives.\nAfter this operation, 162 kB of additional disk space will be used.\nGet:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]\nFetched 4,680 B in 0s (13.2 kB/s)\nSelecting previously unselected package apt-transport-https.\r\n(Reading database … \r(Reading database … 5%\r(Reading database … 10%\r(Reading database … 15%\r(Reading database … 20%\r(Reading database … 25%\r(Reading database … 30%\r(Reading database … 35%\r(Reading database … 40%\r(Reading database … 45%\r(Reading database … 50%\r(Reading database … 55%\r(Reading database … 60%\r(Reading database … 65%\r(Reading database … 70%\r(Reading database … 75%\r(Reading database … 80%\r(Reading database … 85%\r(Reading database … 90%\r(Reading database … 95%\r(Reading database … 100%\r(Reading database … 202372 files and directories currently installed.)\r\nPreparing to unpack …/apt-transport-https_2.0.6_all.deb …\r\nUnpacking apt-transport-https (2.0.6) …\r\nSetting up apt-transport-https (2.0.6) …”,

        “stdout_lines”: [

            “Reading package lists…”,

            “Building dependency tree…”,

            “Reading state information…”,

            “ca-certificates is already the newest version (20210119~20.04.2).”,

            “wget is already the newest version (1.20.3-1ubuntu2).”,

            “wget set to manually installed.”,

            “The following packages were automatically installed and are no longer required:”,

            ”  linux-headers-5.11.0-27-generic linux-hwe-5.11-headers-5.11.0-27″,

            ”  linux-image-5.11.0-27-generic linux-modules-5.11.0-27-generic”,

            ”  linux-modules-extra-5.11.0-27-generic”,

            “Use ‘sudo apt autoremove’ to remove them.”,

            “The following NEW packages will be installed”,

            ”  apt-transport-https”,

            “0 to upgrade, 1 to newly install, 0 to remove and 37 not to upgrade.”,

            “Need to get 4,680 B of archives.”,

            “After this operation, 162 kB of additional disk space will be used.”,

            “Get:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4,680 B]”,

            “Fetched 4,680 B in 0s (13.2 kB/s)”,

            “Selecting previously unselected package apt-transport-https.”,

            “(Reading database … “,

            “(Reading database … 5%”,

            “(Reading database … 10%”,

            “(Reading database … 15%”,

            “(Reading database … 20%”,

            “(Reading database … 25%”,

            “(Reading database … 30%”,

            “(Reading database … 35%”,

            “(Reading database … 40%”,

            “(Reading database … 45%”,

            “(Reading database … 50%”,

            “(Reading database … 55%”,

            “(Reading database … 60%”,

            “(Reading database … 65%”,

            “(Reading database … 70%”,

            “(Reading database … 75%”,

            “(Reading database … 80%”,

            “(Reading database … 85%”,

            “(Reading database … 90%”,

            “(Reading database … 95%”,

            “(Reading database … 100%”,

            “(Reading database … 202372 files and directories currently installed.)”,

            “Preparing to unpack …/apt-transport-https_2.0.6_all.deb …”,

            “Unpacking apt-transport-https (2.0.6) …”,

            “Setting up apt-transport-https (2.0.6) …”

        ]

    }

}

.

TASK [frontend : aptget update] *****************************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Include mysql task when groupvar mysqlservers is defined] ***********************************************************************************************************************************************

included: /home/brucewayne/ansible/opennebula-frontend/roles/frontend/tasks/mysql.yml for testmachine1, testmachine2

.

TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************

changed: [testmachine1] => (item=mariadb-server)

changed: [testmachine1] => (item=python3-pymysql)

changed: [testmachine2] => (item=mariadb-server)

changed: [testmachine2] => (item=python3-pymysql)

.

TASK [frontend : Secure mysql installation] ******************************************************************************************************************************************************************************

[WARNING]: Module did not set no_log for change_root_password

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “mysql_secure”: {

        “changed”: true,

        “failed”: false,

        “meta”: {

            “change_root_pwd”: “True  — But not for all of the hosts”,

            “connected_with_socket?”: true,

            “disallow_root_remotely”: “False — meets the desired state”,

            “hosts_failed”: [

                “127.0.0.1”,

                “::1”

            ],

            “hosts_success”: [

                “localhost”

            ],

            “mysql_version_above_10_3?”: false,

            “new_password_correct?”: false,

            “remove_anonymous_user”: “False — meets the desired state”,

            “remove_test_db”: “False — meets the desired state”,

            “stdout”: “Password for user: root @ Hosts: [‘localhost’] changed to the desired state”

        },

        “warnings”: [

            “Module did not set no_log for change_root_password”

        ]

    }

}

ok: [testmachine2] => {

    “mysql_secure”: {

        “changed”: true,

        “failed”: false,

        “meta”: {

            “change_root_pwd”: “True  — But not for all of the hosts”,

            “connected_with_socket?”: true,

            “disallow_root_remotely”: “False — meets the desired state”,

            “hosts_failed”: [

                “::1”,

                “127.0.0.1”

            ],

            “hosts_success”: [

                “localhost”

            ],

            “mysql_version_above_10_3?”: false,

            “new_password_correct?”: false,

            “remove_anonymous_user”: “False — meets the desired state”,

            “remove_test_db”: “False — meets the desired state”,

            “stdout”: “Password for user: root @ Hosts: [‘localhost’] changed to the desired state”

        },

        “warnings”: [

            “Module did not set no_log for change_root_password”

        ]

    }

}

.

TASK [frontend : Create opennebula database] *****************************************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “database”: {

        “changed”: true,

        “db”: “opennebula”,

        “db_list”: [

            “opennebula”

        ],

        “executed_commands”: [

            “CREATE DATABASE `opennebula`”

        ],

        “failed”: false

    }

}

ok: [testmachine2] => {

    “database”: {

        “changed”: true,

        “db”: “opennebula”,

        “db_list”: [

            “opennebula”

        ],

        “executed_commands”: [

            “CREATE DATABASE `opennebula`”

        ],

        “failed”: false

    }

}

.
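Creating the database itself is normally one call to community.mysql.mysql_db. A sketch, assuming socket-based root login and a variable named opennebula_db (both assumptions, not taken from the role):

yaml

# Create the OpenNebula database if it does not already exist
- name: Create opennebula database
  community.mysql.mysql_db:
    name: "{{ opennebula_db | default('opennebula') }}"
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock   # assumes socket auth for the local root user
  register: database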

TASK [frontend : create user ‘admin’ with password ‘admin’ for ‘{{opennebula_db}}’ and grant all privileges] *******************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.
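The grant step maps to community.mysql.mysql_user. A sketch using the literal admin/admin credentials from the task name above; in a real deployment these should come from vaulted variables:

yaml

# Create the DB user and grant it full rights on the OpenNebula schema
- name: create user 'admin' for the opennebula database and grant all privileges
  community.mysql.mysql_user:
    name: admin
    password: admin             # placeholder - use ansible-vault in practice
    priv: "opennebula.*:ALL"
    host: localhost
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock   # assumes socket auth for the local root user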

TASK [frontend : install opennebula packages] ****************************************************************************************************************************************************************************

changed: [testmachine1] => (item=opennebula)

changed: [testmachine1] => (item=opennebula-sunstone)

changed: [testmachine1] => (item=opennebula-gate)

changed: [testmachine1] => (item=opennebula-flow)

ok: [testmachine1] => (item=opennebula-rubygems)

changed: [testmachine1] => (item=opennebula-fireedge)

ok: [testmachine1] => (item=gnupg)

changed: [testmachine2] => (item=opennebula)

changed: [testmachine2] => (item=opennebula-sunstone)

changed: [testmachine2] => (item=opennebula-gate)

changed: [testmachine2] => (item=opennebula-flow)

ok: [testmachine2] => (item=opennebula-rubygems)

changed: [testmachine2] => (item=opennebula-fireedge)

ok: [testmachine2] => (item=gnupg)

.

TASK [frontend : Copy oned.conf to server with updated DB(host,user,pass)] ***********************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.
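Shipping oned.conf with the DB credentials filled in is a classic template job. A sketch, assuming a Jinja2 template named oned.conf.j2 and a handler called "restart opennebula" (both names are assumptions, not taken from the role):

yaml

# Render oned.conf with the DB backend (host, user, password) substituted in
- name: Copy oned.conf to server with updated DB(host,user,pass)
  ansible.builtin.template:
    src: oned.conf.j2            # assumed template name
    dest: /etc/one/oned.conf
    owner: root
    group: oneadmin
    mode: "0640"
  notify: restart opennebula     # assumed handler name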

TASK [frontend : Copy sunstone-server.conf to server configs] ************************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : Add credentials to Admin] ****************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “authfile.stdout_lines”: [

        “admin:IgDeMozOups8”

    ]

}

ok: [testmachine2] => {

    “authfile.stdout_lines”: [

        “admin:Tafwaytofen2”

    ]

}

.
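Seeding the admin credentials usually means writing /var/lib/one/.one/one_auth before the daemons start; the debug above simply reads that file back. A hedged sketch, with the password variable and file path being assumptions:

yaml

# Write the oneadmin auth file; no_log keeps the password out of the task output
- name: Add credentials to Admin
  ansible.builtin.copy:
    content: "admin:{{ oneadmin_password }}\n"   # assumed variable
    dest: /var/lib/one/.one/one_auth
    owner: oneadmin
    group: oneadmin
    mode: "0600"
  no_log: true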

TASK [frontend : Set fact for authfile] **********************************************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : update opennebula permissions] **************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Include apache configuration] ***************************************************************************************************************************************************************************

included: /home/brucewayne/ansible/opennebula-frontend/roles/frontend/tasks/apache.yml for testmachine1, testmachine2

.

TASK [frontend : restart systemd-timesyncd] ******************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : install debian packages] ********************************************************************************************************************************************************************************

changed: [testmachine1] => (item=apache2-utils)

changed: [testmachine2] => (item=apache2-utils)

changed: [testmachine1] => (item=apache2)

changed: [testmachine1] => (item=libapache2-mod-proxy-msrpc)

changed: [testmachine2] => (item=apache2)

changed: [testmachine2] => (item=libapache2-mod-proxy-msrpc)

changed: [testmachine1] => (item=libapache2-mod-passenger)

changed: [testmachine2] => (item=libapache2-mod-passenger)

.

TASK [frontend : copy opennebula apache ssl virtualhost config to server] ************************************************************************************************************************************************

changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/apache_confs/opennebula.conf)

changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/apache_confs/opennebula.conf)

.

TASK [frontend : copy opennebula ssl certificate to servers] **************************************************************************************************************************************************************

changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/certs/opennebula.pem)

changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/certs/opennebula.pem)

.

TASK [frontend : copy opennebula ssl private key to server] **************************************************************************************************************************************************************

changed: [testmachine1] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/private/opennebula.key)

changed: [testmachine2] => (item=/home/brucewayne/ansible/opennebula-frontend/roles/frontend/templates/private/opennebula.key)

.

TASK [frontend : Enable SSL virtual host for opennebula] ******************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : enable opennebula virtualhost] **************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Restart service httpd, in all cases] ********************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : Enable service httpd and ensure it is not masked] *******************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : get service facts] **************************************************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : Check to see if httpd is running] ***********************************************************************************************************************************************************************

ok: [testmachine1] => {

    “ansible_facts.services[\”apache2.service\”]”: {

        “name”: “apache2.service”,

        “source”: “systemd”,

        “state”: “running”,

        “status”: “enabled”

    }

}

ok: [testmachine2] => {

    “ansible_facts.services[\”apache2.service\”]”: {

        “name”: “apache2.service”,

        “source”: “systemd”,

        “state”: “running”,

        “status”: “enabled”

    }

}

.
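The httpd check above is the standard service_facts pattern: gather the service inventory, then print (or assert on) the apache2.service entry. A sketch:

yaml

# Collect systemd service states into ansible_facts.services
- name: get service facts
  ansible.builtin.service_facts:

# Show the apache2 unit so a human (or a follow-up assert) can confirm it is running
- name: Check to see if httpd is running
  ansible.builtin.debug:
    var: ansible_facts.services["apache2.service"]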

TASK [frontend : start opennebula] ***************************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “openebula.state”: “started”

}

ok: [testmachine2] => {

    “openebula.state”: “started”

}

.

TASK [frontend : start opennebula-gate] **********************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “gate.state”: “started”

}

ok: [testmachine2] => {

    “gate.state”: “started”

}

.

TASK [frontend : start opennebula-flow] **********************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “flow.state”: “started”

}

ok: [testmachine2] => {

    “flow.state”: “started”

}

.

TASK [frontend : start opennebula-novnc] **********************************************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “novnc.state”: “started”

}

ok: [testmachine2] => {

    “novnc.state”: “started”

}

.

TASK [frontend : start systemd-timesyncd] ********************************************************************************************************************************************************************************

ok: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “timesyncd.state”: “started”

}

ok: [testmachine2] => {

    “timesyncd.state”: “started”

}

.

TASK [frontend : Check if server is listed under frontend_HA] ************************************************************************************************************************************************************

skipping: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : Stopping OpenNebula on frontend_server_primary] *********************************************************************************************************************************************************

changed: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “stop, group_names”: “({‘changed’: True, ‘stdout’: ”, ‘stderr’: ”, ‘rc’: 0, ‘cmd’: ‘systemctl stop opennebula’, ‘start’: ‘2022-04-06 03:19:42.714817’, ‘end’: ‘2022-04-06 03:19:48.841833’, ‘delta’: ‘0:00:06.127016’, ‘msg’: ”, ‘stdout_lines’: [], ‘stderr_lines’: [], ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “stop, group_names”: “({‘changed’: True, ‘stdout’: ”, ‘stderr’: ”, ‘rc’: 0, ‘cmd’: ‘systemctl stop opennebula’, ‘start’: ‘2022-04-06 03:19:42.761875’, ‘end’: ‘2022-04-06 03:21:14.632276’, ‘delta’: ‘0:01:31.870401’, ‘msg’: ”, ‘stdout_lines’: [], ‘stderr_lines’: [], ‘failed’: False}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : delete sqlfile if it exists to create a current one.] ***************************************************************************************************************************************************

changed: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : make backup of OpenNebula database] *********************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “backup”: {

        “changed”: true,

        “cmd”: “onedb backup -u admin -p admin -d opennebula /var/lib/one/opennebula.sql”,

        “delta”: “0:00:00.406599”,

        “end”: “2022-04-06 03:21:16.346013”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:15.939414”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “MySQL dump stored in /var/lib/one/opennebula.sql\nUse ‘onedb restore’ or restore the DB using the mysql command:\nmysql -u user -h server -P port db_name < backup_file”,

        “stdout_lines”: [

            “MySQL dump stored in /var/lib/one/opennebula.sql”,

            “Use ‘onedb restore’ or restore the DB using the mysql command:”,

            “mysql -u user -h server -P port db_name < backup_file”

        ]

    }

}

ok: [testmachine2] => {

    “backup”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

.
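The backup is just the onedb CLI wrapped in a shell task and restricted to the primary frontend. A sketch; the group-based when condition is inferred from the skip/changed pattern above, not copied from the role:

yaml

# Dump the OpenNebula database on the primary frontend only
- name: make backup of OpenNebula database
  ansible.builtin.shell: "onedb backup -u admin -p admin -d opennebula /var/lib/one/opennebula.sql"
  register: backup
  when: "'frontend_server_primary' in group_names"   # inferred condition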

TASK [frontend : Fetch the OpenNebula sql dumpfile from frontend_server_primary] *****************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1 -> testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fetch, group_names”: “({‘changed’: True, ‘md5sum’: ‘a54c58c27e96d29cb99a26a595263164’, ‘dest’: ‘/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/opennebula.sql’, ‘remote_md5sum’: None, ‘checksum’: ‘040e9ae687df46fc26a64f038992bd28e1d7e369’, ‘remote_checksum’: ‘040e9ae687df46fc26a64f038992bd28e1d7e369’, ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “fetch, group_names”: “({‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : Copy the ONsqldump file from master to the secondary HA nodes] *****************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “sqlcopy”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “sqlcopy”: {

        “changed”: true,

        “checksum”: “040e9ae687df46fc26a64f038992bd28e1d7e369”,

        “dest”: “/tmp/opennebula.sql”,

        “diff”: [],

        “failed”: false,

        “gid”: 0,

        “group”: “root”,

        “md5sum”: “a54c58c27e96d29cb99a26a595263164”,

        “mode”: “0644”,

        “owner”: “root”,

        “size”: 41546,

        “src”: “/home/brucewayne/.ansible/tmp/ansible-tmp-1649211677.4405959-9803-36565910128620/source”,

        “state”: “file”,

        “uid”: 0

    }

}

.
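Moving the dump from the primary to the HA nodes is the usual fetch-then-copy hop through the control node. A sketch under the same assumed group conditions:

yaml

# Pull the dump from the primary down to the control node...
- name: Fetch the OpenNebula sql dumpfile from frontend_server_primary
  ansible.builtin.fetch:
    src: /var/lib/one/opennebula.sql
    dest: buffer/tmp/opennebula.sql
    flat: yes
  when: "'frontend_server_primary' in group_names"

# ...then push it out to the secondary HA nodes
- name: Copy the sql dump file to the secondary HA nodes
  ansible.builtin.copy:
    src: buffer/tmp/opennebula.sql
    dest: /tmp/opennebula.sql
  when: "'frontend_HA' in group_names"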

TASK [frontend : Fetch the fence_host.sh] ********************************************************************************************************************************************************************************

skipping: [testmachine2]

ok: [testmachine1 -> testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host, group_names”: “({‘changed’: False, ‘md5sum’: ‘7bb73d0d0ffce907562d75f6cd779fdc’, ‘file’: ‘/var/lib/one/remotes/hooks/ft/fence_host.sh’, ‘dest’: ‘/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/fence_host.sh’, ‘checksum’: ‘ef5e59d9a3d6d7a55d554928057bf85f5dea5f1f’, ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “fence_host, group_names”: “({‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : Copy the fence.sh to frontend_HA hosts] *****************************************************************************************************************************************************************

skipping: [testmachine1]

ok: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “fence_host”: {

        “changed”: false,

        “checksum”: “ef5e59d9a3d6d7a55d554928057bf85f5dea5f1f”,

        “dest”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”,

        “diff”: {

            “after”: {

                “path”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”

            },

            “before”: {

                “path”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”

            }

        },

        “failed”: false,

        “gid”: 9869,

        “group”: “admin”,

        “mode”: “0750”,

        “owner”: “admin”,

        “path”: “/var/lib/one/remotes/hooks/ft/fence_host.sh”,

        “size”: 4370,

        “state”: “file”,

        “uid”: 9869

    }

}

.

TASK [frontend : Create tar of /etc/one/] ********************************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “tar”: {

        “changed”: true,

        “cmd”: “cd /etc/one;tar -cvf /etc/one/one.tar *”,

        “delta”: “0:00:00.016645”,

        “end”: “2022-04-06 03:21:20.659494”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:20.642849”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “auth/\nauth/certificates/\nauth/x509_auth.conf\nauth/server_x509_auth.conf\nauth/ldap_auth.conf\naz_driver.conf\naz_driver.default\ncli/\ncli/onevmgroup.yaml\ncli/onevnet.yaml\ncli/oneshowback.yaml\ncli/onehook.yaml\ncli/onetemplate.yaml\ncli/onemarketapp.yaml\ncli/onesecgroup.yaml\ncli/oneacct.yaml\ncli/oneacl.yaml\ncli/onemarket.yaml\ncli/onegroup.yaml\ncli/onevm.yaml\ncli/oneflowtemplate.yaml\ncli/onevrouter.yaml\ncli/onezone.yaml\ncli/oneimage.yaml\ncli/onecluster.yaml\ncli/oneuser.yaml\ncli/onevntemplate.yaml\ncli/onevdc.yaml\ncli/onehost.yaml\ncli/onedatastore.yaml\ncli/oneflow.yaml\ndefaultrc\nec2_driver.conf\nec2_driver.default\nfireedge/\nfireedge/provision/\nfireedge/provision/providers.d/\nfireedge/provision/providers.d/vultr_virtual.yaml\nfireedge/provision/providers.d/digitalocean.yaml\nfireedge/provision/providers.d/vultr_metal.yaml\nfireedge/provision/providers.d/equinix.yaml\nfireedge/provision/providers.d/google.yaml\nfireedge/provision/providers.d/aws.yaml\nfireedge/provision/providers.d/dummy.yaml\nfireedge/provision/provision-server.conf\nfireedge/sunstone/\nfireedge/sunstone/user/\nfireedge/sunstone/user/vm-tab.yaml\nfireedge/sunstone/user/vm-template-tab.yaml\nfireedge/sunstone/sunstone-server.conf\nfireedge/sunstone/admin/\nfireedge/sunstone/admin/vm-tab.yaml\nfireedge/sunstone/admin/cluster-tab.yaml\nfireedge/sunstone/admin/vm-template-tab.yaml\nfireedge/sunstone/admin/host-tab.yaml\nfireedge/sunstone/sunstone-views.yaml\nfireedge-server.conf\nhm/\nhm/hmrc\nmonitord.conf\noned.conf\noneflow-server.conf\nonegate-server.conf\nonehem-server.conf\nsched.conf\nsunstone-logos.yaml\nsunstone-server.conf\nsunstone-views/\nsunstone-views/vcenter/\nsunstone-views/vcenter/admin.yaml\nsunstone-views/vcenter/user.yaml\nsunstone-views/vcenter/groupadmin.yaml\nsunstone-views/vcenter/cloud.yaml\nsunstone-views/mixed/\nsunstone-views/mixed/admin.yaml\nsunstone-views/mixed/user.yaml\nsunstone-views/mixed/groupadmin.yaml\nsunstone-views/mixed/cloud.yaml\nsunstone-views/kvm/\nsunstone-views/kvm/admin.yaml\nsunstone-views/kvm/user.yaml\nsunstone-views/kvm/groupadmin.yaml\nsunstone-views/kvm/cloud.yaml\nsunstone-views.yaml\ntmrc\nvcenter_driver.default\nvmm_exec/\nvmm_exec/vmm_execrc\nvmm_exec/vmm_exec_kvm.conf”,

        “stdout_lines”: [

            “auth/”,

            “auth/certificates/”,

            “auth/x509_auth.conf”,

            “auth/server_x509_auth.conf”,

            “auth/ldap_auth.conf”,

            “az_driver.conf”,

            “az_driver.default”,

            “cli/”,

            “cli/onevmgroup.yaml”,

            “cli/onevnet.yaml”,

            “cli/oneshowback.yaml”,

            “cli/onehook.yaml”,

            “cli/onetemplate.yaml”,

            “cli/onemarketapp.yaml”,

            “cli/onesecgroup.yaml”,

            “cli/oneacct.yaml”,

            “cli/oneacl.yaml”,

            “cli/onemarket.yaml”,

            “cli/onegroup.yaml”,

            “cli/onevm.yaml”,

            “cli/oneflowtemplate.yaml”,

            “cli/onevrouter.yaml”,

            “cli/onezone.yaml”,

            “cli/oneimage.yaml”,

            “cli/onecluster.yaml”,

            “cli/oneuser.yaml”,

            “cli/onevntemplate.yaml”,

            “cli/onevdc.yaml”,

            “cli/onehost.yaml”,

            “cli/onedatastore.yaml”,

            “cli/oneflow.yaml”,

            “defaultrc”,

            “ec2_driver.conf”,

            “ec2_driver.default”,

            “fireedge/”,

            “fireedge/provision/”,

            “fireedge/provision/providers.d/”,

            “fireedge/provision/providers.d/vultr_virtual.yaml”,

            “fireedge/provision/providers.d/digitalocean.yaml”,

            “fireedge/provision/providers.d/vultr_metal.yaml”,

            “fireedge/provision/providers.d/equinix.yaml”,

            “fireedge/provision/providers.d/google.yaml”,

            “fireedge/provision/providers.d/aws.yaml”,

            “fireedge/provision/providers.d/dummy.yaml”,

            “fireedge/provision/provision-server.conf”,

            “fireedge/sunstone/”,

            “fireedge/sunstone/user/”,

            “fireedge/sunstone/user/vm-tab.yaml”,

            “fireedge/sunstone/user/vm-template-tab.yaml”,

            “fireedge/sunstone/sunstone-server.conf”,

            “fireedge/sunstone/admin/”,

            “fireedge/sunstone/admin/vm-tab.yaml”,

            “fireedge/sunstone/admin/cluster-tab.yaml”,

            “fireedge/sunstone/admin/vm-template-tab.yaml”,

            “fireedge/sunstone/admin/host-tab.yaml”,

            “fireedge/sunstone/sunstone-views.yaml”,

            “fireedge-server.conf”,

            “hm/”,

            “hm/hmrc”,

            “monitord.conf”,

            “oned.conf”,

            “oneflow-server.conf”,

            “onegate-server.conf”,

            “onehem-server.conf”,

            “sched.conf”,

            “sunstone-logos.yaml”,

            “sunstone-server.conf”,

            “sunstone-views/”,

            “sunstone-views/vcenter/”,

            “sunstone-views/vcenter/admin.yaml”,

            “sunstone-views/vcenter/user.yaml”,

            “sunstone-views/vcenter/groupadmin.yaml”,

            “sunstone-views/vcenter/cloud.yaml”,

            “sunstone-views/mixed/”,

            “sunstone-views/mixed/admin.yaml”,

            “sunstone-views/mixed/user.yaml”,

            “sunstone-views/mixed/groupadmin.yaml”,

            “sunstone-views/mixed/cloud.yaml”,

            “sunstone-views/kvm/”,

            “sunstone-views/kvm/admin.yaml”,

            “sunstone-views/kvm/user.yaml”,

            “sunstone-views/kvm/groupadmin.yaml”,

            “sunstone-views/kvm/cloud.yaml”,

            “sunstone-views.yaml”,

            “tmrc”,

            “vcenter_driver.default”,

            “vmm_exec/”,

            “vmm_exec/vmm_execrc”,

            “vmm_exec/vmm_exec_kvm.conf”

        ]

    }

}

ok: [testmachine2] => {

    “tar”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

.

TASK [frontend : Fetch the one.tar] **************************************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1 -> testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host, group_names”: “({‘changed’: True, ‘md5sum’: ‘acec4258dbbf2bde83d12f3eb29824a7’, ‘dest’: ‘/home/brucewayne/ansible/opennebula-frontend/buffer/tmp/one.tar’, ‘remote_md5sum’: None, ‘checksum’: ‘2da21a3124f4eb5a78c0126e9791c8d8c9c5c770’, ‘remote_checksum’: ‘2da21a3124f4eb5a78c0126e9791c8d8c9c5c770’, ‘failed’: False}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “fence_host, group_names”: “({‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.

TASK [frontend : Copy the one.tar to frontend_HA hosts] ******************************************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “fence_host”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “fence_host”: {

        “changed”: true,

        “checksum”: “2da21a3124f4eb5a78c0126e9791c8d8c9c5c770”,

        “dest”: “/etc/one/one.tar”,

        “diff”: [],

        “failed”: false,

        “gid”: 0,

        “group”: “root”,

        “md5sum”: “acec4258dbbf2bde83d12f3eb29824a7”,

        “mode”: “0644”,

        “owner”: “root”,

        “size”: 542720,

        “src”: “/home/brucewayne/.ansible/tmp/ansible-tmp-1649211681.6244745-9943-99432484341658/source”,

        “state”: “file”,

        “uid”: 0

    }

}

.

TASK [frontend : untar one.tar in /etc/one on the frontend_HA hosts] *****************************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “untar”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “untar”: {

        “changed”: true,

        “cmd”: “cd /etc/one;tar -xvf /etc/one/one.tar”,

        “delta”: “0:00:00.018409”,

        “end”: “2022-04-06 03:21:23.162427”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:23.144018”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “auth/\nauth/certificates/\nauth/x509_auth.conf\nauth/server_x509_auth.conf\nauth/ldap_auth.conf\naz_driver.conf\naz_driver.default\ncli/\ncli/onevmgroup.yaml\ncli/onevnet.yaml\ncli/oneshowback.yaml\ncli/onehook.yaml\ncli/onetemplate.yaml\ncli/onemarketapp.yaml\ncli/onesecgroup.yaml\ncli/oneacct.yaml\ncli/oneacl.yaml\ncli/onemarket.yaml\ncli/onegroup.yaml\ncli/onevm.yaml\ncli/oneflowtemplate.yaml\ncli/onevrouter.yaml\ncli/onezone.yaml\ncli/oneimage.yaml\ncli/onecluster.yaml\ncli/oneuser.yaml\ncli/onevntemplate.yaml\ncli/onevdc.yaml\ncli/onehost.yaml\ncli/onedatastore.yaml\ncli/oneflow.yaml\ndefaultrc\nec2_driver.conf\nec2_driver.default\nfireedge/\nfireedge/provision/\nfireedge/provision/providers.d/\nfireedge/provision/providers.d/vultr_virtual.yaml\nfireedge/provision/providers.d/digitalocean.yaml\nfireedge/provision/providers.d/vultr_metal.yaml\nfireedge/provision/providers.d/equinix.yaml\nfireedge/provision/providers.d/google.yaml\nfireedge/provision/providers.d/aws.yaml\nfireedge/provision/providers.d/dummy.yaml\nfireedge/provision/provision-server.conf\nfireedge/sunstone/\nfireedge/sunstone/user/\nfireedge/sunstone/user/vm-tab.yaml\nfireedge/sunstone/user/vm-template-tab.yaml\nfireedge/sunstone/sunstone-server.conf\nfireedge/sunstone/admin/\nfireedge/sunstone/admin/vm-tab.yaml\nfireedge/sunstone/admin/cluster-tab.yaml\nfireedge/sunstone/admin/vm-template-tab.yaml\nfireedge/sunstone/admin/host-tab.yaml\nfireedge/sunstone/sunstone-views.yaml\nfireedge-server.conf\nhm/\nhm/hmrc\nmonitord.conf\noned.conf\noneflow-server.conf\nonegate-server.conf\nonehem-server.conf\nsched.conf\nsunstone-logos.yaml\nsunstone-server.conf\nsunstone-views/\nsunstone-views/vcenter/\nsunstone-views/vcenter/admin.yaml\nsunstone-views/vcenter/user.yaml\nsunstone-views/vcenter/groupadmin.yaml\nsunstone-views/vcenter/cloud.yaml\nsunstone-views/mixed/\nsunstone-views/mixed/admin.yaml\nsunstone-views/mixed/user.yaml\nsunstone-views/mixed/groupadmin.yaml\nsunstone-views/mixed/cloud.yaml\nsunstone-views/kvm/\nsunstone-views/kvm/admin.yaml\nsunstone-views/kvm/user.yaml\nsunstone-views/kvm/groupadmin.yaml\nsunstone-views/kvm/cloud.yaml\nsunstone-views.yaml\ntmrc\nvcenter_driver.default\nvmm_exec/\nvmm_exec/vmm_execrc\nvmm_exec/vmm_exec_kvm.conf”,

        “stdout_lines”: [

            “auth/”,

            “auth/certificates/”,

            “auth/x509_auth.conf”,

            “auth/server_x509_auth.conf”,

            “auth/ldap_auth.conf”,

            “az_driver.conf”,

            “az_driver.default”,

            “cli/”,

            “cli/onevmgroup.yaml”,

            “cli/onevnet.yaml”,

            “cli/oneshowback.yaml”,

            “cli/onehook.yaml”,

            “cli/onetemplate.yaml”,

            “cli/onemarketapp.yaml”,

            “cli/onesecgroup.yaml”,

            “cli/oneacct.yaml”,

            “cli/oneacl.yaml”,

            “cli/onemarket.yaml”,

            “cli/onegroup.yaml”,

            “cli/onevm.yaml”,

            “cli/oneflowtemplate.yaml”,

            “cli/onevrouter.yaml”,

            “cli/onezone.yaml”,

            “cli/oneimage.yaml”,

            “cli/onecluster.yaml”,

            “cli/oneuser.yaml”,

            “cli/onevntemplate.yaml”,

            “cli/onevdc.yaml”,

            “cli/onehost.yaml”,

            “cli/onedatastore.yaml”,

            “cli/oneflow.yaml”,

            “defaultrc”,

            “ec2_driver.conf”,

            “ec2_driver.default”,

            “fireedge/”,

            “fireedge/provision/”,

            “fireedge/provision/providers.d/”,

            “fireedge/provision/providers.d/vultr_virtual.yaml”,

            “fireedge/provision/providers.d/digitalocean.yaml”,

            “fireedge/provision/providers.d/vultr_metal.yaml”,

            “fireedge/provision/providers.d/equinix.yaml”,

            “fireedge/provision/providers.d/google.yaml”,

            “fireedge/provision/providers.d/aws.yaml”,

            “fireedge/provision/providers.d/dummy.yaml”,

            “fireedge/provision/provision-server.conf”,

            “fireedge/sunstone/”,

            “fireedge/sunstone/user/”,

            “fireedge/sunstone/user/vm-tab.yaml”,

            “fireedge/sunstone/user/vm-template-tab.yaml”,

            “fireedge/sunstone/sunstone-server.conf”,

            “fireedge/sunstone/admin/”,

            “fireedge/sunstone/admin/vm-tab.yaml”,

            “fireedge/sunstone/admin/cluster-tab.yaml”,

            “fireedge/sunstone/admin/vm-template-tab.yaml”,

            “fireedge/sunstone/admin/host-tab.yaml”,

            “fireedge/sunstone/sunstone-views.yaml”,

            “fireedge-server.conf”,

            “hm/”,

            “hm/hmrc”,

            “monitord.conf”,

            “oned.conf”,

            “oneflow-server.conf”,

            “onegate-server.conf”,

            “onehem-server.conf”,

            “sched.conf”,

            “sunstone-logos.yaml”,

            “sunstone-server.conf”,

            “sunstone-views/”,

            “sunstone-views/vcenter/”,

            “sunstone-views/vcenter/admin.yaml”,

            “sunstone-views/vcenter/user.yaml”,

            “sunstone-views/vcenter/groupadmin.yaml”,

            “sunstone-views/vcenter/cloud.yaml”,

            “sunstone-views/mixed/”,

            “sunstone-views/mixed/admin.yaml”,

            “sunstone-views/mixed/user.yaml”,

            “sunstone-views/mixed/groupadmin.yaml”,

            “sunstone-views/mixed/cloud.yaml”,

            “sunstone-views/kvm/”,

            “sunstone-views/kvm/admin.yaml”,

            “sunstone-views/kvm/user.yaml”,

            “sunstone-views/kvm/groupadmin.yaml”,

            “sunstone-views/kvm/cloud.yaml”,

            “sunstone-views.yaml”,

            “tmrc”,

            “vcenter_driver.default”,

            “vmm_exec/”,

            “vmm_exec/vmm_execrc”,

            “vmm_exec/vmm_exec_kvm.conf”

        ]

    }

}

.

TASK [frontend : updates the rafthook and federation configurations for frontend_HA secondary servers] ******************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : start OpenNebula] ***************************************************************************************************************************************************************************************

skipping: [testmachine2]

changed: [testmachine1]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “group_names”: [

        “apache_servers”,

        “frontend_server_primary”,

        “mysql_servers”

    ]

}

ok: [testmachine2] => {

    “group_names”: [

        “apache_servers”,

        “frontend_HA”,

        “mysql_servers”

    ]

}

.

TASK [frontend : finding frontend_HA list] *******************************************************************************************************************************************************************************

skipping: [testmachine1] => (item=apache_servers)

skipping: [testmachine1] => (item=frontend_server_primary)

skipping: [testmachine1] => (item=mysql_servers)

skipping: [testmachine2] => (item=apache_servers)

ok: [testmachine2] => (item=frontend_HA)

skipping: [testmachine2] => (item=mysql_servers)

.

TASK [frontend : Add Secondary Node frontends to the zone] ***************************************************************************************************************************************************************

skipping: [testmachine2] => (item=testmachine2)

changed: [testmachine1] => (item=testmachine2)

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “addzone, group_names”: “({‘results’: [{‘changed’: True, ‘stdout’: ”, ‘stderr’: ”, ‘rc’: 0, ‘cmd’: ‘onezone server-add 0 –name testmachine2 –rpc http://192.168.86.65:2633/RPC2’, ‘start’: ‘2022-04-06 03:21:33.920788’, ‘end’: ‘2022-04-06 03:21:34.174098’, ‘delta’: ‘0:00:00.253310’, ‘msg’: ”, ‘invocation’: {‘module_args’: {‘_raw_params’: ‘onezone server-add 0 –name testmachine2 –rpc http://192.168.86.65:2633/RPC2’, ‘_uses_shell’: True, ‘warn’: False, ‘stdin_add_newline’: True, ‘strip_empty_ends’: True, ‘argv’: None, ‘chdir’: None, ‘executable’: None, ‘creates’: None, ‘removes’: None, ‘stdin’: None}}, ‘stdout_lines’: [], ‘stderr_lines’: [], ‘failed’: False, ‘item’: ‘testmachine2’, ‘ansible_loop_var’: ‘item’}], ‘skipped’: False, ‘changed’: True, ‘msg’: ‘All items completed’}, [‘apache_servers’, ‘frontend_server_primary’, ‘mysql_servers’])”

}

ok: [testmachine2] => {

    “addzone, group_names”: “({‘results’: [{‘changed’: False, ‘skipped’: True, ‘skip_reason’: ‘Conditional result was False’, ‘item’: ‘testmachine2’, ‘ansible_loop_var’: ‘item’}], ‘skipped’: True, ‘msg’: ‘All items skipped’, ‘changed’: False}, [‘apache_servers’, ‘frontend_HA’, ‘mysql_servers’])”

}

.
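Registering the secondaries in zone 0 runs onezone server-add from the primary, once per HA host. A sketch; pulling the RPC address from hostvars is my assumption, and the role may template it differently:

yaml

# Add each HA frontend to zone 0 so oned can form the Raft cluster
- name: Add Secondary Node frontends to the zone
  ansible.builtin.shell: >
    onezone server-add 0 --name {{ item }}
    --rpc http://{{ hostvars[item].ansible_default_ipv4.address }}:2633/RPC2
  loop: "{{ groups['frontend_HA'] | default([]) }}"
  when: "'frontend_server_primary' in group_names"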

TASK [frontend : Restore database to secondary nodes] ********************************************************************************************************************************************************************

skipping: [testmachine1]

changed: [testmachine2]

.

TASK [frontend : debug] **************************************************************************************************************************************************************************************************

ok: [testmachine1] => {

    “restoredb”: {

        “changed”: false,

        “skip_reason”: “Conditional result was False”,

        “skipped”: true

    }

}

ok: [testmachine2] => {

    “restoredb”: {

        “changed”: true,

        “cmd”: “onedb restore -f -S localhost -u admin -p admin -d opennebula /tmp/opennebula.sql”,

        “delta”: “0:00:00.988908”,

        “end”: “2022-04-06 03:21:35.749776”,

        “failed”: false,

        “msg”: “”,

        “rc”: 0,

        “start”: “2022-04-06 03:21:34.760868”,

        “stderr”: “”,

        “stderr_lines”: [],

        “stdout”: “MySQL DB opennebula at localhost restored.”,

        “stdout_lines”: [

            “MySQL DB opennebula at localhost restored.”

        ]

    }

}

.
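The restore on the secondaries mirrors the backup, using the dump copied to /tmp. A sketch with the same inferred group condition:

yaml

# Load the primary's dump into the local MySQL on each HA node
- name: Restore database to secondary nodes
  ansible.builtin.shell: "onedb restore -f -S localhost -u admin -p admin -d opennebula /tmp/opennebula.sql"
  when: "'frontend_HA' in group_names"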

PLAY RECAP ***************************************************************************************************************************************************************************************************************

testmachine1               : ok=70   changed=38   unreachable=0    failed=0    skipped=8    rescued=0    ignored=0   

testmachine2               : ok=71   changed=37   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   

.

.
