ONLY FOR SELF STUDY, NO COMMERCIAL USAGE!!!
Contents
- [***ONLY FOR SELF STUDY, NO COMMERCIAL USAGE!!!***](#only-for-self-study-no-commercial-usage)
- [Chapter 9 Automating Linux Administration Tasks](#chapter-9-automating-linux-administration-tasks)
  - [Managing Software and Subscriptions](#managing-software-and-subscriptions)
    - [Managing Packages with Ansible](#managing-packages-with-ansible)
      - [Optimizing Multiple Package Installation](#optimizing-multiple-package-installation)
      - [Gathering Facts about Installed Packages](#gathering-facts-about-installed-packages)
      - [Reviewing Alternative Modules to Manage Packages](#reviewing-alternative-modules-to-manage-packages)
    - [Registering and Managing Systems with Red Hat Subscription Management](#registering-and-managing-systems-with-red-hat-subscription-management)
      - [Managing Red Hat Subscription Management from the Command Line](#managing-red-hat-subscription-management-from-the-command-line)
      - [Managing Red Hat Subscription Management by Using a Role](#managing-red-hat-subscription-management-by-using-a-role)
    - [Configuring an RPM Package Repository (yum_repository)](#configuring-an-rpm-package-repository-yum_repository)
      - [Declaring an RPM Package Repository](#declaring-an-rpm-package-repository)
      - [Importing an RPM GPG Key](#importing-an-rpm-gpg-key)
    - References
    - Example
  - [Managing Users and Authentication](#managing-users-and-authentication)
    - [The User Module](#the-user-module)
      - [Use the User Module to Generate an SSH Key](#use-the-user-module-to-generate-an-ssh-key)
    - [The Group Module](#the-group-module)
    - [The Known Hosts Module](#the-known-hosts-module)
    - [The Authorized Key Module](#the-authorized-key-module)
    - [Configuring Sudo Access for Users and Groups](#configuring-sudo-access-for-users-and-groups)
    - References
    - Example
  - [Managing the Boot Process and Scheduled Processes](#managing-the-boot-process-and-scheduled-processes)
    - [Scheduling Jobs for Future Execution](#scheduling-jobs-for-future-execution)
      - [Scheduling Jobs That Run One Time](#scheduling-jobs-that-run-one-time)
      - [Scheduling Repeating Jobs with Cron](#scheduling-repeating-jobs-with-cron)
      - [Controlling Systemd Timer Units](#controlling-systemd-timer-units)
    - [Managing Services](#managing-services)
    - [Setting the Default Boot Target](#setting-the-default-boot-target)
    - [Rebooting Managed Hosts](#rebooting-managed-hosts)
    - References
  - [Managing Storage](#managing-storage)
    - [Mounting Existing File Systems](#mounting-existing-file-systems)
    - [Configuring Storage with the Storage System Role](#configuring-storage-with-the-storage-system-role)
      - [Managing a File System on an Unpartitioned Device](#managing-a-file-system-on-an-unpartitioned-device)
      - [Managing LVM with the Storage Role](#managing-lvm-with-the-storage-role)
      - [Configuring Swap Space](#configuring-swap-space)
    - [Managing Partitions and File Systems with Tasks](#managing-partitions-and-file-systems-with-tasks)
      - [Managing Partitions](#managing-partitions)
      - [Managing File Systems](#managing-file-systems)
    - [Ansible Facts for Storage Configuration](#ansible-facts-for-storage-configuration)
      - [Facts about Block Devices](#facts-about-block-devices)
      - [Facts about Device Links](#facts-about-device-links)
      - [Facts about Mounted File Systems](#facts-about-mounted-file-systems)
    - References
    - Example
  - [Managing Network Configuration](#managing-network-configuration)
    - [Configuring Networking with the Network System Role](#configuring-networking-with-the-network-system-role)
    - [Configuring Networking with Modules](#configuring-networking-with-modules)
    - [Ansible Facts for Network Configuration](#ansible-facts-for-network-configuration)
    - References
    - Example
- [Chapter 9 Example](#chapter-9-example)
# Chapter 9 Automating Linux Administration Tasks

## Managing Software and Subscriptions

### Managing Packages with Ansible
The `ansible.builtin.dnf` Ansible module uses `dnf` on the managed hosts to handle package operations. The following playbook installs the `httpd` package on the `servera.lab.example.com` managed host:
```yaml
---
- name: Install the required packages on the web server
  hosts: servera.lab.example.com
  tasks:
    - name: Install the httpd packages
      ansible.builtin.dnf:
        name: httpd
        state: present
```
The `state` keyword indicates the expected state of the package on the managed host:

- `present`: Ansible installs the package if it is not already installed.
- `absent`: Ansible removes the package if it is installed.
- `latest`: Ansible updates the package if it is not already at the latest available version, installing it if necessary.
The following table compares some uses of the `ansible.builtin.dnf` Ansible module with the equivalent `dnf` command.
| `ansible.builtin.dnf` task parameters | Equivalent DNF command |
|---|---|
| `name: httpd`, `state: present` | `dnf install httpd` |
| `name: httpd`, `state: latest` | `dnf upgrade httpd`, or `dnf install httpd` if the package is not yet installed |
| `name: '*'`, `state: latest` | `dnf upgrade` |
| `name: httpd`, `state: absent` | `dnf remove httpd` |
| `name: '@Development Tools'`, `state: present` | `dnf group install "Development Tools"` |
| `name: '@Development Tools'`, `state: absent` | `dnf group remove "Development Tools"` |
| `name: '@perl:5.26/minimal'`, `state: present` | `dnf module install perl:5.26/minimal` |

With the `ansible.builtin.dnf` Ansible module, you must prefix group names with the at sign (`@`). Remember that you can retrieve the list of groups with the `dnf group list` command.

To manage a package module, also prefix its name with the at sign (`@`). The syntax is the same as with the `dnf` command; for example, you can omit the profile part to use the default profile: `@perl:5.26`. Remember that you can list the available package modules with the `dnf module list` command.
Run the `ansible-navigator doc ansible.builtin.dnf` command for additional parameters and playbook examples.
#### Optimizing Multiple Package Installation

To operate on several packages, the `name` keyword accepts a list. The following playbook installs three packages on the `servera.lab.example.com` managed host.
```yaml
---
- name: Install the required packages on the web server
  hosts: servera.lab.example.com
  tasks:
    - name: Install the packages
      ansible.builtin.dnf:
        name:
          - httpd
          - mod_ssl
          - httpd-tools
        state: present
```
With this syntax, Ansible installs the packages in a single DNF transaction. This is equivalent to running the `dnf install httpd mod_ssl httpd-tools` command.

A commonly seen but less efficient version of this task uses a loop:
```yaml
---
- name: Install the required packages on the web server
  hosts: servera.lab.example.com
  tasks:
    - name: Install the packages
      ansible.builtin.dnf:
        name: "{{ item }}"
        state: present
      loop:
        - httpd
        - mod_ssl
        - httpd-tools
```
Avoid using this method, because it requires the module to perform three individual transactions, one for each package.
#### Gathering Facts about Installed Packages

The `ansible.builtin.package_facts` Ansible module collects the installed package details on managed hosts. It sets the `ansible_facts['packages']` variable with the package details.

The following playbook calls the `ansible.builtin.package_facts` module, and the `ansible.builtin.debug` module to display the content of the `ansible_facts['packages']` variable and the version of the installed `NetworkManager` package.
```yaml
---
- name: Display installed packages
  hosts: servera.lab.example.com
  gather_facts: false
  tasks:
    - name: Gather info on installed packages
      ansible.builtin.package_facts:
        manager: auto

    - name: List installed packages
      ansible.builtin.debug:
        var: ansible_facts['packages']

    - name: Display NetworkManager version
      ansible.builtin.debug:
        var: ansible_facts['packages']['NetworkManager'][0]['version']
      when: ansible_facts['packages']['NetworkManager'] is defined
```
When run, the playbook displays the package list and the version of the `NetworkManager` package:
```
[user@controlnode ~]$ ansible-navigator run -m stdout lspackages.yml

PLAY [Display installed packages] **********************************************

TASK [Gather info on installed packages] ***************************************
ok: [servera.lab.example.com]

TASK [List installed packages] *************************************************
ok: [servera.lab.example.com] => {
    "ansible_facts['packages']": {
        "NetworkManager": [
            {
                "arch": "x86_64",
                "epoch": 1,
                "name": "NetworkManager",
                "release": "4.el9_0",
                "source": "rpm",
                "version": "1.36.0"
            }
        ],
...output omitted...
        "zstd": [
            {
                "arch": "x86_64",
                "epoch": null,
                "name": "zstd",
                "release": "2.el9",
                "source": "rpm",
                "version": "1.5.1"
            }
        ]
    }
}

TASK [Display NetworkManager version] ******************************************
ok: [servera.lab.example.com] => {
    "ansible_facts['packages']['NetworkManager'][0]['version']": "1.36.0"
}

PLAY RECAP *********************************************************************
servera.lab.example.com    : ok=3    changed=0    unreachable=0    failed=0 ...
```
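Because `ansible_facts['packages']` is an ordinary dictionary keyed by package name, you can also summarize it with Jinja2 filters. The following task is a small sketch of that idea (it is not from the course text) and assumes the `package_facts` task above has already run:

```yaml
- name: Report how many packages are installed
  ansible.builtin.debug:
    msg: "{{ ansible_facts['packages'] | length }} packages are installed"
```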
#### Reviewing Alternative Modules to Manage Packages

For other package managers, Ansible usually provides a dedicated module. The `ansible.builtin.apt` module uses the APT package tool available on Debian and Ubuntu. The `ansible.windows.win_package` module can install software on Microsoft Windows systems.

The following playbook uses conditionals to select the appropriate module in an environment composed of Red Hat Enterprise Linux systems running major versions 7, 8, and 9.
```yaml
---
- name: Install the required packages on the web servers
  hosts: webservers
  tasks:
    - name: Install httpd on RHEL 8 and 9
      ansible.builtin.dnf:
        name: httpd
        state: present
      when:
        - "ansible_facts['distribution'] == 'RedHat'"
        - "ansible_facts['distribution_major_version'] >= '8'"

    - name: Install httpd on RHEL 7 and earlier
      ansible.builtin.yum:
        name: httpd
        state: present
      when:
        - "ansible_facts['distribution'] == 'RedHat'"
        - "ansible_facts['distribution_major_version'] <= '7'"
```
As an alternative, the generic `ansible.builtin.package` module automatically detects and uses the package manager available on the managed hosts. With the `ansible.builtin.package` module, you can rewrite the previous playbook as follows:
```yaml
---
- name: Install the required packages on the web servers
  hosts: webservers
  tasks:
    - name: Install httpd
      ansible.builtin.package:
        name: httpd
        state: present
```
However, the `ansible.builtin.package` module does not support all the features that the more specialized modules provide.

Also, operating systems often have different names for the packages they provide. For example, the package that installs the Apache HTTP Server is `httpd` on Red Hat Enterprise Linux and `apache2` on Ubuntu. In that situation, you still need a conditional for selecting the correct package name depending on the operating system of the managed host.
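One way to handle that, sketched here rather than taken from the course materials, is to compute the package name from a small dictionary keyed by distribution; the `web_package_by_distro` variable name is an illustrative assumption:

```yaml
---
- name: Install the web server package on mixed distributions
  hosts: webservers
  vars:
    # Hypothetical mapping; extend it with the distributions you manage.
    web_package_by_distro:
      RedHat: httpd
      Ubuntu: apache2
  tasks:
    - name: Install the web server package for this distribution
      ansible.builtin.package:
        name: "{{ web_package_by_distro[ansible_facts['distribution']] }}"
        state: present
```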
### Registering and Managing Systems with Red Hat Subscription Management

You can entitle your Red Hat Enterprise Linux systems to product subscriptions by using a few different methods:

- You can use the `subscription-manager` command.
- On Red Hat Enterprise Linux 9.2 systems and later, you can use the `rhel-system-roles.rhc` role available from the `rhel-system-roles` RPM (version 1.21.1 and later).
- You can use the `redhat.rhel_system_roles.rhc` role from the `redhat.rhel_system_roles` collection (version 1.21.1 and later).

The `registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8` automation execution environment contains the `redhat.rhel_system_roles` collection. You can also install the `redhat.rhel_system_roles` collection in your Ansible project and then use the `ee-supported-rhel8` automation execution environment available for Ansible Automation Platform 2.2 or 2.3.
#### Managing Red Hat Subscription Management from the Command Line

Without Ansible, you can use the `subscription-manager` command to register a system:
```
[user@host ~]$ sudo subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: yourusername
Password: yourpassword
...output omitted...
```
The following command attaches a subscription using a pool ID. You can list the pools available to your account with the `subscription-manager list --available` command.

```
[user@host ~]$ sudo subscription-manager attach --pool=poolID
```
After you register a system and attach a subscription, you can use the `subscription-manager` command to enable Red Hat software repositories on the system. You might use the `subscription-manager repos --list` command to identify available repositories, and then use the `subscription-manager repos --enable` command to enable repositories:
```
[user@host ~]$ sudo subscription-manager repos \
> --enable "rhel-9-for-x86_64-baseos-rpms" \
> --enable "rhel-9-for-x86_64-appstream-rpms"
```
#### Managing Red Hat Subscription Management by Using a Role

Whether you use the `rhel-system-roles.rhc` role from the `rhel-system-roles` RPM or the `redhat.rhel_system_roles.rhc` role from the `redhat.rhel_system_roles` collection, the steps for managing Red Hat subscription management are essentially the same.
1. Create a play that includes the desired role:

   ```yaml
   ---
   - name: Register systems
     hosts: all
     become: true
     tasks:
       - name: Include the rhc role
         ansible.builtin.include_role:
           name: redhat.rhel_system_roles.rhc
   ```

2. Define variables for the role. You might define these variables in the play, in a `group_vars` directory, in a `host_vars` directory, or in a separate variable file:

   ```yaml
   ---
   rhc_state: present
   rhc_auth:
     login:
       username: yourusername
       password: yourpassword
   rhc_insights:
     state: present
   rhc_repositories:
     - name: rhel-9-for-x86_64-baseos-rpms
       state: enabled
     - name: rhel-9-for-x86_64-appstream-rpms
       state: enabled
   ```

   The `rhc_state` variable specifies whether the system should be connected (or registered) to Red Hat. Valid values are `present`, `absent`, and `reconnect`. When set to either `present` or `reconnect`, the role attempts to automatically attach a subscription.

   The `rhc_auth` variable defines additional variables related to authenticating to Red Hat subscription management, such as the `rhc_auth['login']` and `rhc_auth['activation_keys']` variables. One option for authentication is to specify your username and password. If you use this option, then you might consider protecting these variables with Ansible Vault. A second option is to define activation keys for your organization.

   The `rhc_insights` variable defines additional variables related to Red Hat Insights. By default, the `rhc_insights['state']` variable has a value of `present`, which enables Red Hat Insights integration. Additional variables are available for Red Hat Insights when the `rhc_insights['state']` variable has a value of `present`. Set the `rhc_insights['state']` variable to `absent` to disable Red Hat Insights integration.

   The `rhc_repositories` variable defines a list of repositories to either enable or disable.

3. After you create the playbook and define variables for the role, run the playbook to apply the configuration.
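With the playbook and variables in place, a run might look like the following; the `register_systems.yml` file name is an assumption for illustration:

```
[user@host ~]$ ansible-navigator run -m stdout register_systems.yml
```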
### Configuring an RPM Package Repository (yum_repository)

To enable support for a third-party Yum repository on a managed host, Ansible provides the `ansible.builtin.yum_repository` module.
#### Declaring an RPM Package Repository

When run, the following playbook declares a new Yum repository on `servera.lab.example.com`.
```yaml
---
- name: Configure the company YUM/DNF repositories
  hosts: servera.lab.example.com
  tasks:
    - name: Ensure Example Repo exists
      ansible.builtin.yum_repository:
        file: example
        name: example-internal
        description: Example Inc. Internal YUM/DNF repo
        baseurl: http://materials.example.com/yum/repository/
        enabled: true
        gpgcheck: true
        state: present
```
- The `file` keyword specifies the name of the file to create under the `/etc/yum.repos.d/` directory. The module automatically adds the `.repo` extension to that name.
- Typically, software providers digitally sign RPM packages using GPG keys. By setting the `gpgcheck` keyword to `true`, the RPM system verifies package integrity by confirming that the package was signed by the appropriate GPG key. The RPM system does not install any package whose GPG signature does not match. Use the `ansible.builtin.rpm_key` Ansible module, described later in this section, to install the required GPG public key.
- When you set the `state` keyword to `present`, Ansible creates or updates the `.repo` file. When `state` is set to `absent`, Ansible deletes the file.
The resulting `/etc/yum.repos.d/example.repo` file on `servera.lab.example.com` is as follows:
```ini
[example-internal]
async = 1
baseurl = http://materials.example.com/yum/repository/
enabled = 1
gpgcheck = 1
name = Example Inc. Internal YUM/DNF repo
```
The `ansible.builtin.yum_repository` module exposes most of the repository configuration parameters as keywords. Run the `ansible-navigator doc ansible.builtin.yum_repository` command for additional parameters and playbook examples.
Some third-party repositories provide the configuration file and the GPG public key as part of an RPM package that can be downloaded and installed using the `dnf install` command. For example, the Extra Packages for Enterprise Linux (EPEL) project provides the `https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm` package that deploys the `/etc/yum.repos.d/epel.repo` configuration file.

For this repository, use the `ansible.builtin.dnf` module to install the EPEL package instead of the `ansible.builtin.yum_repository` module.
#### Importing an RPM GPG Key

When the `gpgcheck` keyword is set to `true` in the `ansible.builtin.yum_repository` module, you also need to install the GPG key on the managed host. The `ansible.builtin.rpm_key` module in the following example deploys the GPG public key hosted on a remote web server to the `servera.lab.example.com` managed host.
```yaml
---
- name: Configure the company YUM/DNF repositories
  hosts: servera.lab.example.com
  tasks:
    - name: Deploy the GPG public key
      ansible.builtin.rpm_key:
        key: http://materials.example.com/yum/repository/RPM-GPG-KEY-example
        state: present

    - name: Ensure Example Repo exists
      ansible.builtin.yum_repository:
        file: example
        name: example-internal
        description: Example Inc. Internal YUM/DNF repo
        baseurl: http://materials.example.com/yum/repository/
        enabled: true
        gpgcheck: true
        state: present
```
### References

- `dnf`(8), `yum.conf`(5), and `subscription-manager`(8) man pages
- ansible.builtin.dnf module - Manages Packages with the DNF Package Manager - Ansible Documentation
- ansible.builtin.package_facts module - Package Information as Facts - Ansible Documentation
- Introduction to the rhel-system-roles.rhc Role
- Using the redhat.rhel_system_roles.rhc Collection Role
- ansible.builtin.yum_repository module - Add or Remove YUM Repositories - Ansible Documentation
- ansible.builtin.rpm_key module - Adds or Removes a GPG Key from the RPM DB - Ansible Documentation
### Example

Write a playbook to ensure that the `simple-agent` package is installed on all managed hosts. The playbook must also ensure that all managed hosts are configured to use the internal Yum repository. The repository is located at `http://materials.example.com/yum/repository`. All RPM packages in the repository are signed with a GPG key pair. The GPG public key for the repository packages is available at `http://materials.example.com/yum/repository/RPM-GPG-KEY-example`.
```
[student@workstation system-software]$ cat ansible.cfg
[defaults]
inventory=inventory
remote_user=devops

#Try me...
#callback_whitelist=timer

[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False

[student@workstation system-software]$ cat inventory
servera.lab.example.com
```
```yaml
# repo_playbook.yml
---
- name: Config yum repo for installing simple-agent
  hosts: all
  gather_facts: false
  vars:
    custom_pkg: simple-agent
  tasks:
    - name: Gather information about installed packages
      ansible.builtin.package_facts:
        manager: auto

    # Check whether simple-agent is already installed, and which version
    - name: Display custom package version
      ansible.builtin.debug:
        var: ansible_facts['packages'][custom_pkg][0]['version']
      when: ansible_facts['packages'][custom_pkg] is defined

    - name: Check Example Repo exists
      ansible.builtin.yum_repository:
        file: example
        name: example-internal
        baseurl: http://materials.example.com/yum/repository
        description: Example Inc. Internal YUM repo
        enabled: true
        gpgcheck: true
        state: present

    - name: Ensure Repo RPM key is installed
      ansible.builtin.rpm_key:
        key: http://materials.example.com/yum/repository/RPM-GPG-KEY-example
        state: present

    - name: Install Example package
      ansible.builtin.dnf:
        name: '{{ custom_pkg }}'
        state: present

    # Gather facts again after the package is installed
    - name: Gather information about installed packages
      ansible.builtin.package_facts:
        manager: auto

    # Show the simple-agent information
    - name: Display custom package version
      ansible.builtin.debug:
        var: ansible_facts['packages'][custom_pkg]
      when: custom_pkg in ansible_facts['packages']
```
Result:

```
[student@workstation system-software]$ ansible-navigator run -m stdout repo_playbook.yml

PLAY [Config yum repo for installing simple-agent] *****************************

TASK [Gather information about installed packages] *****************************
ok: [servera.lab.example.com]

TASK [Display custom package version] ******************************************
skipping: [servera.lab.example.com]    # skipped because the package is not installed yet

TASK [Check Example Repo exists] ***********************************************
ok: [servera.lab.example.com]

TASK [Ensure Repo RPM key is installed] ****************************************
ok: [servera.lab.example.com]

TASK [Install Example package] *************************************************
changed: [servera.lab.example.com]

TASK [Gather information about installed packages] *****************************
ok: [servera.lab.example.com]

TASK [Display custom package version] ******************************************
ok: [servera.lab.example.com] => {
    "ansible_facts['packages'][custom_pkg]": [
        {
            "arch": "x86_64",
            "epoch": null,
            "name": "simple-agent",
            "release": "1.el9",
            "source": "rpm",
            "version": "1.0"
        }
    ]
}

PLAY RECAP *********************************************************************
servera.lab.example.com    : ok=6    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
```
## Managing Users and Authentication

### The User Module

The Ansible `ansible.builtin.user` module lets you create, configure, and remove user accounts on managed hosts. You can remove or add a user, set a user's home directory, set the UID for system user accounts, manage passwords, and assign a user to supplementary groups.

To create a user that can log in to the machine, you need to provide a hashed password for the `password` parameter. See "How do I generate encrypted passwords for the user module?" for information on how to hash a password.
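As a brief illustration (a sketch, not part of the original course text), you can hash a plain-text password with the `password_hash` filter. In practice, the plain-text value should come from Ansible Vault rather than appear in the playbook:

```yaml
- name: Create user1 with a hashed login password
  ansible.builtin.user:
    name: user1
    # The plain-text value is hashed with SHA-512 before it is set.
    password: "{{ 'MyS3cret' | password_hash('sha512') }}"
    state: present
```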
The following example demonstrates the `ansible.builtin.user` module:
```yaml
- name: Create devops_user if missing, make sure it is member of correct groups
  ansible.builtin.user:
    name: devops_user
    shell: /bin/bash
    groups: sys_admins, developers
    append: true
```
- The `name` parameter is the only option required by the `ansible.builtin.user` module. Its value is the name of the service account or user account to create, remove, or modify.
- The `shell` parameter sets the user's shell.
- The `groups` parameter, when used with the `append` parameter, tells the machine to append the supplementary groups `sys_admins` and `developers` to this user. If you do not use the `append` parameter, then the groups provided overwrite the user's existing supplementary groups. To set the primary group for a user, use the `group` option.
**Note:** The `ansible.builtin.user` module also provides information in return values, such as the user's home directory and a list of groups that the user is a member of. These return values can be registered into a variable and used in subsequent tasks, as the sketch below shows. More information is available in the documentation for the module.
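For instance (an illustrative sketch, not from the course text), you can register the module's return values and reuse them in a later task:

```yaml
- name: Create devops_user and capture the module's return values
  ansible.builtin.user:
    name: devops_user
  register: user_result

- name: Show the home directory reported by the user module
  ansible.builtin.debug:
    msg: "Home directory for devops_user is {{ user_result['home'] }}"
```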
Table 9.1. Commonly Used Parameters for the User Module

| Parameter | Comments |
|---|---|
| `comment` | Optionally sets the description of a user account. |
| `group` | Optionally sets the user's primary group. |
| `groups` | Optionally sets a list of supplementary groups for the user. When set to a null value, all groups except the primary group are removed. |
| `home` | Optionally sets the user's home directory location. |
| `create_home` | Optionally takes a Boolean value of `true` or `false`. A home directory is created for the user if the value is set to `true`. |
| `system` | Optionally takes a Boolean value of `true` or `false`. When creating an account, this makes the user a system account if the value is set to `true`. This setting cannot be changed on existing users. |
| `uid` | Sets the UID number of the user. |
| `state` | If set to `present`, create the account if it is missing (the default setting). If set to `absent`, remove the account if it is present. |
#### Use the User Module to Generate an SSH Key

The `ansible.builtin.user` module can generate an SSH key if called with the `generate_ssh_key` parameter.

The following example demonstrates how the `ansible.builtin.user` module generates an SSH key:
```yaml
- name: Create an SSH key for user1
  ansible.builtin.user:
    name: user1
    generate_ssh_key: true
    ssh_key_bits: 2048
    ssh_key_file: .ssh/id_my_rsa
```
- The `generate_ssh_key` parameter accepts a Boolean value that specifies whether to generate an SSH key for the user. This does not overwrite an existing SSH key unless the `force` parameter is provided with the `true` value.
- The `ssh_key_bits` parameter sets the number of bits in the new SSH key.
- The `ssh_key_file` parameter specifies the file name for the new SSH private key (the public key adds the `.pub` suffix).
### The Group Module

The `ansible.builtin.group` module adds, deletes, and modifies groups on the managed hosts. The managed hosts need to have the `groupadd`, `groupdel`, and `groupmod` commands available, which are provided by the `shadow-utils` package in Red Hat Enterprise Linux 9. For Microsoft Windows managed hosts, use the `win_group` module.

The following example demonstrates how the `ansible.builtin.group` module creates a group:
```yaml
- name: Verify that the auditors group exists
  ansible.builtin.group:
    name: auditors
    state: present
```
Table 9.2. Parameters for the Group Module

| Parameter | Comments |
|---|---|
| `gid` | This parameter sets the GID number for the group. If omitted, the number is automatically selected. |
| `local` | This parameter forces the use of local command alternatives (instead of commands that might change central authentication sources) on platforms that implement it. |
| `name` | This parameter sets the name of the group to manage. |
| `state` | This parameter determines whether the group should be `present` or `absent` on the remote host. |
| `system` | If this parameter is set to `true`, then the group is created as a system group (typically, with a GID number below 1000). |
### The Known Hosts Module

The `ansible.builtin.known_hosts` module manages SSH host keys by adding or removing them on managed hosts. This ensures that managed hosts can automatically establish the authenticity of SSH connections to other managed hosts, so that users are not prompted to verify a remote managed host's SSH fingerprint the first time they connect to it.

The following example demonstrates how the `ansible.builtin.known_hosts` module copies a host key to a managed host:
```yaml
- name: Copy host keys to remote servers
  ansible.builtin.known_hosts:
    path: /etc/ssh/ssh_known_hosts
    name: servera.lab.example.com
    key: servera.lab.example.com,172.25.250.10 ssh-rsa ASDeararAIUHI324324
```
- The `path` parameter specifies the path to the `known_hosts` file to edit. If the file does not exist, then it is created.
- The `name` parameter specifies the name of the host to add or remove. The name must match the hostname or IP address of the key being added.
- The `key` parameter is the SSH public host key as a string in a specific format. For example, the value for the `key` parameter must be in the format `<hostname[,IP]> ssh-rsa <pubkey>` for an RSA public host key (found in a host's `/etc/ssh/ssh_host_rsa_key.pub` key file), or `<hostname[,IP]> ssh-ed25519 <pubkey>` for an Ed25519 public host key (found in a host's `/etc/ssh/ssh_host_ed25519_key.pub` key file).
The following example demonstrates how to use the `lookup` plug-in to populate the `key` parameter from an existing file in the Ansible project:
```yaml
- name: Copy host keys to remote servers
  ansible.builtin.known_hosts:
    path: /etc/ssh/ssh_known_hosts
    name: serverb
    key: "{{ lookup('ansible.builtin.file', 'pubkeys/serverb') }}"
```
This Jinja2 expression uses the `lookup` function with the `ansible.builtin.file` lookup plug-in to load the content of the `pubkeys/serverb` key file from the Ansible project as the value of the `key` option. You can list the available lookup plug-ins with the `ansible-navigator doc -l -t lookup` command.
The following play is an example that uses some advanced techniques to construct an `/etc/ssh/ssh_known_hosts` file for all managed hosts in the inventory. There might be more efficient ways to accomplish this, because it runs a nested loop over all managed hosts.

It uses the `ansible.builtin.slurp` module to get the content of the RSA and Ed25519 SSH public host keys in Base64 format, and then processes the values of the registered variables with the `b64decode` and `trim` filters to convert those values back to plain text.
```yaml
- name: Configure /etc/ssh/ssh_known_hosts files
  hosts: all
  tasks:
    - name: Collect RSA keys
      ansible.builtin.slurp:
        src: /etc/ssh/ssh_host_rsa_key.pub
      register: rsa_host_keys

    - name: Collect Ed25519 keys
      ansible.builtin.slurp:
        src: /etc/ssh/ssh_host_ed25519_key.pub
      register: ed25519_host_keys

    - name: Deploy known_hosts
      ansible.builtin.known_hosts:
        path: /etc/ssh/ssh_known_hosts
        name: "{{ item[0] }}"
        key: "{{ hostvars[ item[0] ]['ansible_facts']['fqdn'] }} {{ hostvars[ item[0] ][ item[1] ]['content'] | b64decode | trim }}"
        state: present
      with_nested:
        - "{{ ansible_play_hosts }}"
        - [ 'rsa_host_keys', 'ed25519_host_keys' ]
```
- `item[0]` is an inventory hostname from the list in the `ansible_play_hosts` variable.
- `item[1]` is the string `rsa_host_keys` or `ed25519_host_keys`. The `b64decode` filter converts the value stored in the variable from Base64 to plain text, and the `trim` filter removes an unnecessary newline. The `key` value is all one line, and there is a single space between the two Jinja2 expressions.
- `ansible_play_hosts` is a list of the hosts remaining in the play at this point, taken from the inventory and removing hosts with failed tasks. The play must retrieve the RSA and Ed25519 public host keys for each of the other hosts when it constructs the `known_hosts` file on each host in the play.
- The two-item list `[ 'rsa_host_keys', 'ed25519_host_keys' ]` names the two variables that the play uses to store host keys.
**Note:** Lookup plug-ins and filters are covered in more detail in the course DO374: Developing Advanced Automation with Red Hat Ansible Automation Platform.
### The Authorized Key Module

The `ansible.posix.authorized_key` module manages SSH authorized keys for user accounts on managed hosts (by default, in the user's `~/.ssh/authorized_keys` file, for example `/home/your_username/.ssh/authorized_keys`).

The following example demonstrates how to use the `ansible.posix.authorized_key` module to add an SSH key to a managed host:
```yaml
- name: Set authorized key
  ansible.posix.authorized_key:
    user: user1
    state: present
    key: "{{ lookup('ansible.builtin.file', 'files/user1/id_rsa.pub') }}"
```
- The `user` parameter specifies the username of the user whose `authorized_keys` file is modified on the managed host.
- The `state` parameter accepts the `present` or `absent` value, with `present` as the default.
- The `key` parameter specifies the SSH public key to add or remove. In this example, the `lookup` function uses the `ansible.builtin.file` lookup plug-in to load the contents of the `files/user1/id_rsa.pub` file in the Ansible project as the value for `key`. As an alternative, you can provide a URL to a public key file as this value.
### Configuring Sudo Access for Users and Groups

In Red Hat Enterprise Linux 9, you can configure access for a user or group to run `sudo` commands without requiring a password prompt.

The following example demonstrates how to use the `ansible.builtin.lineinfile` module to provide a group with `sudo` access to the `root` account without prompting the group members for a password:
```yaml
- name: Modify sudo to allow the group01 group sudo without a password
  ansible.builtin.lineinfile:
    path: /etc/sudoers.d/group01
    state: present
    create: true
    mode: 0440
    line: "%group01 ALL=(ALL) NOPASSWD: ALL"
    validate: /usr/sbin/visudo -cf %s
```
- The `path` parameter specifies the file to modify in the `/etc/sudoers.d/` directory. It is a good practice to match the file name with the name of the user or group you are providing access to, which makes it easier for future reference.
- The `state` parameter accepts the `present` or `absent` value. The default value is `present`.
- The `create` parameter takes a Boolean value and specifies whether the file should be created if it does not already exist. The default value for the `create` parameter is `false`.
- The `mode` parameter specifies the permissions on the `sudoers` file.
- The `line` parameter specifies the line to add to the file. The format is specific, and an example can be found in the `/etc/sudoers` file under the "Same thing but without a password" comment. If you are configuring `sudo` access for a group, then you need to add a percent sign (`%`) to the beginning of the group name. If you are configuring `sudo` access for a user, then do not add the percent sign.
- The `validate` parameter specifies the command to run to verify that the file is correct. When the `validate` parameter is present, the file is created in a temporary file path and the provided command validates the temporary file. If the validate command succeeds, then the temporary file is copied to the path specified in the `path` parameter and the temporary file is removed.
An example of the `sudo` validation command can be found in the examples section of the output from the `ansible-navigator doc ansible.builtin.lineinfile` command.
### References

- User Module - Ansible Documentation
- How do I generate encrypted passwords for the user module? - Ansible Documentation
- Group Module - Ansible Documentation
- SSH Known Hosts Module - Ansible Documentation
- Authorized Key Module - Ansible Documentation
- The Lookup Plugin - Ansible Documentation
- Using Filters to Manipulate Data - Ansible Documentation
- The Line in File Module - Ansible Documentation
### Example

1. Create a new user group.
2. Manage users by using the `ansible.builtin.user` module.
3. Populate SSH authorized keys by using the `ansible.posix.authorized_key` module.
4. Modify the `/etc/ssh/sshd_config` file and a configuration file in `/etc/sudoers.d` by using the `ansible.builtin.lineinfile` module.

```
[student@workstation system-users]$ tree
.
├── ansible.cfg
├── ansible-navigator.log
├── files
│   ├── user1.key.pub
│   ├── user2.key.pub
│   ├── user3.key.pub
│   ├── user4.key.pub
│   └── user5.key.pub
├── inventory
├── users.yml
└── vars
    └── users_vars.yml
```
Contents:

```
[student@workstation system-users]$ cat ansible.cfg
[defaults]
remote_user=devops
inventory=./inventory

[privilege_escalation]
become=yes
become_method=sudo

[student@workstation system-users]$ cat inventory
[webservers]
servera.lab.example.com

[student@workstation system-users]$ cat vars/users_vars.yml
---
users:
  - username: user1
    groups: webadmin
  - username: user2
    groups: webadmin
  - username: user3
    groups: webadmin
  - username: user4
    groups: webadmin
  - username: user5
    groups: webadmin
```
The final playbook file is as follows:

```yaml
---
- name: Adding users
  hosts: webservers
  vars_files:
    - vars/users_vars.yml
  tasks:
    - name: Creating user group
      ansible.builtin.group:
        name: webadmin
        state: present

    - name: Create users from vars file
      ansible.builtin.user:
        name: "{{ item['username'] }}"
        groups: "{{ item['groups'] }}"
      loop: "{{ users }}"

    - name: Populate the SSH pub keys
      ansible.posix.authorized_key:
        user: "{{ item['username'] }}"
        state: present
        key: "{{ lookup('file', 'files/'+item['username']+'.key.pub') }}"
      loop: "{{ users }}"

    - name: Modify to make sudo without passwd
      ansible.builtin.lineinfile:
        path: /etc/sudoers.d/webadmin
        state: present
        create: true
        mode: 0440
        line: "%webadmin ALL=(ALL) NOPASSWD: ALL"
        validate: /usr/sbin/visudo -cf %s

    - name: Disable root login via SSH
      ansible.builtin.lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "^PermitRootLogin"
        line: "PermitRootLogin no"
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```
Test:

```
[student@workstation system-users]$ ssh user1@servera
[student@workstation system-users]$ ssh root@servera
root@servera's password:
Permission denied, please try again.
```
## Managing the Boot Process and Scheduled Processes

### Scheduling Jobs for Future Execution

- The `at` command schedules jobs that run once at a specified time.
- The Cron subsystem schedules jobs to run on a recurring schedule, either in a user's personal `crontab` file, in the system Cron configuration in `/etc/crontab`, or as a file in `/etc/cron.d`.
- The `systemd` subsystem also provides timer units that can start service units on a set schedule.
#### Scheduling Jobs That Run One Time

Quick one-time scheduling is done with the `ansible.posix.at` module. You create the job to run at a future time, and it is held until that time to execute.
Table 9.3. Options for the `ansible.posix.at` Module

| Option | Comments |
|---|---|
| `command` | The command to schedule to run in the future. |
| `count` | The integer number of units from now that the job should run. (Must be used with `units`.) |
| `units` | Specifies whether `count` is measured in minutes, hours, days, or weeks. |
| `script_file` | An existing script file to schedule to run in the future. |
| `state` | The default value (`present`) adds a job; `absent` removes a matching job if present. |
| `unique` | If set to `true`, then if a matching job is already present, a new job is not added. |
In the following example, the task uses `at` to schedule the `userdel -r tempuser` command to run in 20 minutes.
```yaml
- name: Remove tempuser
  ansible.posix.at:
    command: userdel -r tempuser
    count: 20
    units: minutes
    unique: true
```
#### Scheduling Repeating Jobs with Cron

You can configure a command that runs on a repeating schedule by using Cron. To set up Cron jobs, use the `ansible.builtin.cron` module. The `name` option is mandatory, and is inserted in the `crontab` as a description of the repeating job. It is also used by the module to determine if the Cron job already exists, or which Cron job to modify or delete.
Some commonly used parameters for the `ansible.builtin.cron` module include:

Table 9.4. Options for the `ansible.builtin.cron` Module

| Options | Comments |
|---|---|
| `name` | The comment identifying the Cron job. |
| `job` | The command to run. |
| `minute`, `hour`, `day`, `month`, `weekday` | The value for the field in the time specification for the job in the `crontab` entry. If not set, `"*"` (all values) is assumed. |
| `state` | If set to `present`, it creates the Cron job (the default); `absent` removes it. |
| `user` | The Cron job runs as this user. If `cron_file` is not specified, the job is set in that user's `crontab` file. |
| `cron_file` | If set, create a system Cron job in `cron_file`. You must specify `user` and a time specification. If you use a relative path, then the file is created in `/etc/cron.d`. |
This first example task creates a Cron job in the `testing` user's personal `crontab` file. It runs their personal `backup-home-dir` script at 16:00 every Friday. You could log in as that user and run `crontab -l` after running the playbook to confirm that it worked.
```yaml
- name: Schedule backups for my home directory
  ansible.builtin.cron:
    name: Backup my home directory
    user: testing
    job: /home/testing/bin/backup-home-dir
    minute: 0
    hour: 16
    weekday: 5
```
In the following example, the task creates a system Cron job in the `/etc/cron.d/flush_bolt` file that runs a command as `root` to flush the Bolt cache every morning at 11:45.
```yaml
- name: Schedule job to flush the Bolt cache
  ansible.builtin.cron:
    name: Flush Bolt cache
    cron_file: flush_bolt
    user: "root"
    minute: 45
    hour: 11
    job: "php ./app/nut cache:clear"
```
**Warning:** Do not use `cron_file` to modify the `/etc/crontab` file. The file you specify must only be maintained by Ansible and should only contain the entry specified by the task.
#### Controlling Systemd Timer Units

The `ansible.builtin.systemd` module can be used to enable or disable existing `systemd` timer units that run recurring jobs (usually `systemd` service units that eventually exit).

The following example disables and stops the `systemd` timer that automatically populates the `dnf` package cache on Red Hat Enterprise Linux 9.
```yaml
- name: Disable dnf makecache
  ansible.builtin.systemd:
    name: dnf-makecache.timer
    state: stopped
    enabled: false
```
### Managing Services

You can choose between two modules to manage services or reload daemons: `ansible.builtin.systemd` and `ansible.builtin.service`.

The `ansible.builtin.service` module is intended to work with a number of service-management systems, including `systemd`, Upstart, SysVinit, BSD `init`, and others. Because it provides a generic interface to the initialization system, it offers a basic set of options to start, stop, restart, and enable services and other daemons.
```yaml
- name: Start and enable nginx
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
```
The `ansible.builtin.systemd` module is designed to work with `systemd` only, but it offers additional configuration options specific to that system and service manager.

The following example that uses `ansible.builtin.systemd` is equivalent to the preceding example that used `ansible.builtin.service`:
```yaml
- name: Start nginx
  ansible.builtin.systemd:
    name: nginx
    state: started
    enabled: true
```
The next example reloads the `httpd` daemon, but before it does that it runs `systemctl daemon-reload` to reload the entire `systemd` configuration.
```yaml
- name: Reload web server
  ansible.builtin.systemd:
    name: httpd
    state: reloaded
    daemon_reload: true
```
### Setting the Default Boot Target

The `ansible.builtin.systemd` module cannot set the default boot target. You can use the `ansible.builtin.command` module to set the default boot target.
```yaml
- name: Change default systemd target
  hosts: all
  gather_facts: false
  vars:
    systemd_target: "multi-user.target"
  tasks:
    - name: Get current systemd target
      ansible.builtin.command:
        cmd: systemctl get-default
      # Because this task only gathers information, it should never report `changed`.
      changed_when: false
      register: target

    - name: Set default systemd target
      ansible.builtin.command:
        cmd: systemctl set-default {{ systemd_target }}
      when: systemd_target not in target['stdout']
      # This is the only task in this play that requires `root` access.
      become: true
```
### Rebooting Managed Hosts

You can use the dedicated `ansible.builtin.reboot` module to reboot managed hosts during playbook execution. This module reboots the managed host, and waits until the managed host comes back up before continuing with playbook execution. The module determines that a managed host is back up by waiting until Ansible can run a command on the managed host.
The following simple example immediately triggers a reboot:

```yaml
- name: Reboot now
  ansible.builtin.reboot:
```
By default, the playbook waits up to 600 seconds before deciding that the reboot failed, and another 600 seconds before deciding that the test command failed. You can adjust this value so that the timeouts are each 180 seconds. For example:

```yaml
- name: Reboot, shorten timeout
  ansible.builtin.reboot:
    reboot_timeout: 180
```
Some other useful options to the module include:

Table 9.5. Options for the `ansible.builtin.reboot` Module

| Options | Comments |
|---|---|
| `pre_reboot_delay` | The number of seconds to wait before the reboot. On Linux, this value is converted to minutes and rounded down; if less than 60, it becomes 0. |
| `msg` | The message to display to users before reboot. |
| `test_command` | The command used to determine whether the managed host is usable and ready for more Ansible tasks after reboot. The default is `whoami`. |
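The following short sketch combines these options; it is not from the course text, and the values are illustrative:

```yaml
- name: Reboot with a warning message and a custom readiness check
  ansible.builtin.reboot:
    msg: "Reboot initiated by Ansible for maintenance"
    # Wait 60 seconds (1 minute on Linux) before rebooting.
    pre_reboot_delay: 60
    # Consider the host ready once this command succeeds after reboot.
    test_command: uptime
```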
### References

- ansible.builtin.cron module - Manage cron.d and crontab entries - Ansible Documentation
- ansible.builtin.reboot module - Reboot a machine - Ansible Documentation
- ansible.builtin.service module - Manage services - Ansible Documentation
- ansible.builtin.systemd module - Manage systemd units - Ansible Documentation
### Example

```
[student@workstation system-process]$ cat ansible.cfg
[defaults]
remote_user = devops
inventory = ./inventory

[privilege_escalation]
become = yes
become_method = sudo

[student@workstation system-process]$ cat inventory
[webservers]
servera.lab.example.com
```
1. Create the `create_crontab_file.yml` playbook in the working directory.

   Configure the playbook to use the `ansible.builtin.cron` module to create a `crontab` file named `/etc/cron.d/add-date-time` that schedules a recurring Cron job. The job should run as the `devops` user every two minutes starting at 09:00 and ending at 16:59 from Monday through Friday. The job should append the current date and time to the `/home/devops/my_datetime_cron_job` file.

   ```yaml
   ---
   - name: Create a crontab file
     hosts: webservers
     become: true
     tasks:
       - name: building jobs
         ansible.builtin.cron:
           name: add date and time to a file
           job: date >> /home/devops/my_date_time_cron_job
           minute: "*/2"
           hour: 9-16
           weekday: 1-5
           user: devops
           cron_file: add-date-time
           state: present
   ```

2. Create the `remove_cron_job.yml` playbook in the working directory. Configure the playbook to use the `ansible.builtin.cron` module to remove the `Add date and time to a file` Cron job from the `/etc/cron.d/add-date-time` `crontab` file.

   ```yaml
   ---
   - name: Remove a crontab file
     hosts: webservers
     become: true
     tasks:
       - name: building jobs
         ansible.builtin.cron:
           name: add date and time to a file
           user: devops
           cron_file: add-date-time
           state: absent
   ```

3. Create the `schedule_at_task.yml` playbook in the working directory. Configure the playbook to use the `ansible.posix.at` module to schedule a task that runs one minute in the future. The task should run the `date` command and redirect its output to the `/home/devops/my_at_date_time` file. Use the `unique: true` option to ensure that if the command already exists in the `at` queue, a new task is not added.

   ```yaml
   ---
   - name: How to use AT on a task
     hosts: webservers
     become: true
     become_user: devops
     tasks:
       - name: AT task in the future
         ansible.posix.at:
           command: date >> /home/devops/my_at_date_time
           count: 1
           units: minutes
           unique: true
           state: present
   ```

4. Create the `set_default_boot_target_graphical.yml` playbook in the working directory. Write a play in the playbook to set the default `systemd` target to `graphical.target`.

   ```yaml
   ---
   - name: Set the default boot target graphical
     hosts: webservers
     become: true
     vars:
       new_target: "graphical.target"
     tasks:
       - name: Get current target
         ansible.builtin.command:
           cmd: systemctl get-default
         changed_when: false
         register: default_target

       - name: Change to new target mode
         ansible.builtin.command:
           cmd: systemctl set-default {{ new_target }}
         when: new_target not in default_target.stdout
         become: true
   ```
## Managing Storage

### Mounting Existing File Systems

Use the `ansible.posix.mount` module to mount an existing file system. The most common parameters are:
- the `path` parameter, which specifies the path to mount the file system to
- the `src` parameter, which specifies the device (this could be a device name, UUID, or NFS volume)
- the `fstype` parameter, which specifies the file system type
- the `state` parameter, which accepts the `absent`, `mounted`, `present`, `unmounted`, or `remounted` values
The following example task mounts the NFS share available at `172.25.250.100:/share` on the `/nfsshare` directory on the managed hosts.
```yaml
- name: Mount NFS share
  ansible.posix.mount:
    path: /nfsshare
    src: 172.25.250.100:/share
    fstype: nfs
    opts: defaults
    dump: '0'
    passno: '0'
    state: mounted
```
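Conversely (a sketch, not from the course text), setting `state: absent` unmounts the file system and also removes the corresponding entry from `/etc/fstab`:

```yaml
- name: Unmount the NFS share and remove it from /etc/fstab
  ansible.posix.mount:
    path: /nfsshare
    state: absent
```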
### Configuring Storage with the Storage System Role

Red Hat Ansible Automation Platform provides the `redhat.rhel_system_roles.storage` system role to configure local storage devices on your managed hosts. It can manage file systems on unpartitioned block devices, and format and create logical volumes on LVM physical volumes based on unpartitioned block devices.

The `redhat.rhel_system_roles.storage` role formally supports managing file systems and mount entries for two use cases:

- Unpartitioned devices (whole-device file systems)
- LVM on unpartitioned whole-device physical volumes

If you have other use cases, then you might need to use other modules and roles to implement them.
#### Managing a File System on an Unpartitioned Device

To create a file system on an unpartitioned block device with the `redhat.rhel_system_roles.storage` role, define the `storage_volumes` variable. The `storage_volumes` variable contains a list of storage devices to manage. The following dictionary items are available in the `storage_volumes` variable:
Table 9.6. Parameters for the storage_volumes
Variable
Parameter | Comments |
---|---|
name | The name of the volume. |
type | This value must be disk. |
disks | Must be a list of exactly one item; the unpartitioned block device. |
mount_point | The directory on which the file system is mounted. |
fs_type | The file system type to use (xfs, ext4, or swap). |
mount_options | Custom mount options, such as ro or rw. |
The following example play creates an XFS file system on the /dev/vdg device, and mounts it on /opt/extra.
yaml
- name: Example of a simple storage device
  hosts: all
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_volumes:
        - name: extra
          type: disk
          disks:
            - /dev/vdg
          fs_type: xfs
          mount_point: /opt/extra
Managing LVM with the Storage Role
To create an LVM volume group with the redhat.rhel_system_roles.storage role, define the storage_pools variable. The storage_pools variable contains a list of pools (LVM volume groups) to manage.
The dictionary items inside the storage_pools variable are used as follows:
- The name variable is the name of the volume group.
- The type variable must have the value lvm.
- The disks variable is the list of block devices that the volume group uses for its storage.
- The volumes variable is the list of logical volumes in the volume group.
The following entry creates the volume group vg01 with the type key set to the value lvm. The volume group's physical volume is the /dev/vdb disk.
yaml
---
- name: Configure storage on webservers
  hosts: webservers
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: vg01
          type: lvm
          disks:
            - /dev/vdb
The disks option only supports unpartitioned block devices for your LVM physical volumes.
To create logical volumes, populate the volumes variable, nested under the storage_pools variable, with a list of logical volume names and their parameters. Each item in the list is a dictionary that represents a single logical volume within the storage_pools variable.
Each logical volume list item has the following dictionary variables:
- name: The name of the logical volume.
- size: The size of the logical volume.
- mount_point: The directory used as the mount point for the logical volume's file system.
- fs_type: The logical volume's file system type.
- state: Whether the logical volume should exist, using the present or absent values.
The following example creates two logical volumes, named lvol01 and lvol02. The lvol01 logical volume is 128 MB in size, formatted with the xfs file system, and is mounted at /data. The lvol02 logical volume is 256 MB in size, formatted with the xfs file system, and is mounted at /backup.
yaml
---
- name: Configure storage on webservers
  hosts: webservers
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: vg01
          type: lvm
          disks:
            - /dev/vdb
          volumes:
            - name: lvol01
              size: 128m
              mount_point: "/data"
              fs_type: xfs
              state: present
            - name: lvol02
              size: 256m
              mount_point: "/backup"
              fs_type: xfs
              state: present
In the following example entry, if the lvol01 logical volume is already created with a size of 128 MB, then the logical volume and file system are enlarged to 256 MB, assuming that the space is available within the volume group.
yaml
volumes:
  - name: lvol01
    size: 256m
    mount_point: "/data"
    fs_type: xfs
    state: present
Configuring Swap Space
You can use the redhat.rhel_system_roles.storage role to create logical volumes that are formatted as swap spaces. The role creates the logical volume, formats it with the swap file system type, adds the swap volume to the /etc/fstab file, and enables the swap volume immediately.
The following playbook example creates the lvswap logical volume in the vgswap volume group, adds the swap volume to the /etc/fstab file, and enables the swap space.
yaml
---
- name: Configure a swap volume
  hosts: all
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: vgswap
          type: lvm
          disks:
            - /dev/vdb
          volumes:
            - name: lvswap
              size: 512m
              fs_type: swap
              state: present
Managing Partitions and File Systems with Tasks
You can manage partitions and file systems on your storage devices without using the system role. However, the most convenient modules for doing this are currently unsupported by Red Hat, which can make this more complicated.
Managing Partitions
If you want to partition your storage devices without using the system role, your options are a bit more complex.
- The unsupported community.general.parted module in the community.general Ansible Content Collection can perform this task.
- You can use the ansible.builtin.command module to run the partitioning commands on the managed hosts. However, you need to take special care to make sure the commands are idempotent and do not inadvertently destroy data on your existing storage devices.
For example, the following task creates a GPT disk label and a /dev/sda1 partition on the /dev/sda storage device only if /dev/sda1 does not already exist:
yaml
- name: Ensure that /dev/sda1 exists
  ansible.builtin.command:
    cmd: parted --script /dev/sda mklabel gpt mkpart primary 1MiB 100%
    creates: /dev/sda1
This depends on the fact that if the /dev/sda1 partition exists, then a Linux system automatically creates a /dev/sda1 device file for it.
Managing File Systems
The easiest way to manage file systems without using the system role might be the community.general.filesystem module. However, Red Hat does not support this module, so you use it at your own risk.
As an alternative, you can use the ansible.builtin.command module to run commands to format file systems. However, you should use some mechanism to make sure that the device you are formatting does not already contain a file system, to ensure the idempotency of your play and to avoid accidental data loss. One way to do that might be to review storage-related facts gathered by Ansible to determine whether a device appears to be formatted with a file system.
Ansible Facts for Storage Configuration
Ansible facts gathered by ansible.builtin.setup contain useful information about the storage devices on your managed hosts.
Facts about Block Devices
The ansible_facts['devices'] fact includes information about all the storage devices available on the managed host. This includes additional information such as the partitions on each device, or each device's total size.
The following playbook gathers and displays the ansible_facts['devices'] fact for each managed host.
yaml
---
- name: Display storage facts
  hosts: all
  tasks:
    - name: Display device facts
      ansible.builtin.debug:
        var: ansible_facts['devices']
This fact contains a dictionary of variables named for the devices on the system. Each named device variable itself has a dictionary of variables for its value, which represent information about the device. For example, if you have the /dev/sda device on your system, you can use the following Jinja2 expression (all on one line) to determine its size in bytes:
jinja2
{{ ansible_facts['devices']['sda']['sectors'] * ansible_facts['devices']['sda']['sectorsize'] }}
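Note that these sector facts are often reported as strings, so if you do arithmetic with them elsewhere, casting the values with the int filter is safer. A minimal sketch of a task that uses the expression (it assumes a /dev/sda device exists on the host):
yaml
- name: Report the size of /dev/sda in bytes
  ansible.builtin.debug:
    msg: "{{ (ansible_facts['devices']['sda']['sectors'] | int) * (ansible_facts['devices']['sda']['sectorsize'] | int) }}"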
Table 9.7. Selected Facts from a Device Variable Dictionary
Fact | Comments |
---|---|
host | A string that identifies the controller to which the block device is connected. |
model | A string that identifies the model of the storage device, if applicable. |
partitions | A dictionary of block devices that are partitions on this device. Each dictionary variable has as its value a dictionary structured like any other device (including values for sectors, size, and so on). |
sectors | The number of storage sectors the device contains. |
sectorsize | The size of each sector in bytes. |
size | A human-readable rough calculation of the device size. |
For example, you could find the size of /dev/sda1 from the following fact:
ansible_facts['devices']['sda']['partitions']['sda1']['size']
Facts about Device Links
The ansible_facts['device_links'] fact includes all the links available for each storage device. If you have multipath devices, you can use this fact to help determine which devices are alternative paths to the same storage device, or are multipath devices.
The following playbook gathers and displays the ansible_facts['device_links'] fact for all managed hosts.
yaml
---
- name: Gather device link facts
  hosts: all
  tasks:
    - name: Display device link facts
      ansible.builtin.debug:
        var: ansible_facts['device_links']
Facts about Mounted File Systems
The ansible_facts['mounts'] fact provides information about the currently mounted devices on the managed host. For each device, this includes the mounted block device, its file system's mount point, mount options, and so on.
The following playbook gathers and displays the ansible_facts['mounts'] fact for managed hosts.
yaml
---
- name: Gather mounts
  hosts: all
  tasks:
    - name: Display mounts facts
      ansible.builtin.debug:
        var: ansible_facts['mounts']
The fact contains a list of dictionaries, one for each mounted file system on the managed host.
Table 9.8. Selected Variables from the Dictionary in a Mounted File System List Item
Variable | Comments |
---|---|
mount | The directory on which this file system is mounted. |
device | The name of the block device that is mounted. |
fstype | The type of file system the device is formatted with (such as xfs). |
options | The current mount options in effect. |
size_total | The total size of the device. |
size_available | How much space is free on the device. |
block_size | The size of blocks on the file system. |
block_total | How many blocks are in the file system. |
block_available | How many blocks are free in the file system. |
inode_available | How many inodes are free in the file system. |
For example, you can determine the free space on the root (/) file system on each managed host with the following play:
yaml
- name: Print free space on / file system
  hosts: all
  gather_facts: true
  tasks:
    - name: Display free space
      ansible.builtin.debug:
        msg: >
          The root file system on {{ ansible_facts['fqdn'] }} has
          {{ item['block_available'] * item['block_size'] / 1000000 }}
          megabytes free.
      loop: "{{ ansible_facts['mounts'] }}"
      when: item['mount'] == '/'
References
mount - Control active and configured mount points --- Ansible Documentation
Roles --- Ansible Documentation
Managing local storage using RHEL System Roles --- Red Hat Documentation
Example
Files:
[student@workstation system-storage]$ ll
total 804
-rw-r--r--. 1 student student 192 Oct 7 23:30 ansible.cfg
drwxrwxr-x. 2 student student 22 Oct 7 23:30 collections
-rw-r--r--. 1 student student 1186 Oct 7 23:30 get-storage.yml
-rw-r--r--. 1 student student 37 Oct 7 23:30 inventory
-rw-r--r--. 1 student student 808333 Oct 7 23:30 redhat-rhel_system_roles-1.19.3.tar.gz
[student@workstation system-storage]$ cat ansible.cfg
[defaults]
remote_user=devops
inventory=./inventory
collections_paths=./collections:~/.ansible/collections:/usr/share/ansible/collections
[privilege_escalation]
become=yes
become_method=sudo
[student@workstation system-storage]$ cat get-storage.yml
---
- name: View storage configuration
  hosts: webservers
  tasks:
    - name: Retrieve physical volumes
      ansible.builtin.command: pvs
      register: pv_output
    - name: Display physical volumes
      ansible.builtin.debug:
        msg: "{{ pv_output['stdout_lines'] }}"
    - name: Retrieve volume groups
      ansible.builtin.command: vgs
      register: vg_output
    - name: Display volume groups
      ansible.builtin.debug:
        msg: "{{ vg_output['stdout_lines'] }}"
    - name: Retrieve logical volumes
      ansible.builtin.command: lvs
      register: lv_output
    - name: Display logical volumes
      ansible.builtin.debug:
        msg: "{{ lv_output['stdout_lines'] }}"
    - name: Retrieve mounted logical volumes
      ansible.builtin.shell: "mount | grep lv"
      register: mount_output
    - name: Display mounted logical volumes
      ansible.builtin.debug:
        msg: "{{ mount_output['stdout_lines'] }}"
    - name: Retrieve /etc/fstab contents
      ansible.builtin.command: cat /etc/fstab
      register: fstab_output
    - name: Display /etc/fstab contents
      ansible.builtin.debug:
        msg: "{{ fstab_output['stdout_lines'] }}"
[student@workstation system-storage]$ cat inventory
[webservers]
servera.lab.example.com
[student@workstation system-storage]$ ll collections/
total 0
Write a playbook to:
- Use the /dev/vdb device as an LVM physical volume, contributing space to the volume group apache-vg.
- Create two logical volumes named content-lv (64 MB in size) and logs-lv (128 MB in size), both backed by the apache-vg volume group.
- Create an XFS file system on both logical volumes.
- Mount the content-lv logical volume on the /var/www directory.
- Mount the logs-lv logical volume on the /var/log/httpd directory.
yaml
---
- name: Managing storage via role
  hosts: webservers
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: apache-vg
          type: lvm
          disks:
            - /dev/vdb
          volumes:
            - name: content-lv
              size: 64m
              fs_type: xfs
              mount_point: "/var/www"
              state: present
            - name: logs-lv
              size: 128m
              fs_type: xfs
              mount_point: "/var/log/httpd"
              state: present
Run the get-storage.yml playbook provided in the project directory to verify that the storage has been properly configured on the managed hosts in the webservers group.
[student@workstation system-storage]$ ansible-navigator run -m stdout get-storage.yml
PLAY [View storage configuration] **********************************************
TASK [Gathering Facts] *********************************************************
ok: [servera.lab.example.com]
TASK [Retrieve physical volumes] ***********************************************
changed: [servera.lab.example.com]
TASK [Display physical volumes] ************************************************
ok: [servera.lab.example.com] => {
"msg": [
" PV VG Fmt Attr PSize PFree ",
" /dev/vdb apache-vg lvm2 a-- 1020.00m 828.00m"
]
}
TASK [Retrieve volume groups] **************************************************
changed: [servera.lab.example.com]
TASK [Display volume groups] ***************************************************
ok: [servera.lab.example.com] => {
"msg": [
" VG #PV #LV #SN Attr VSize VFree ",
" apache-vg 1 2 0 wz--n- 1020.00m 828.00m"
]
}
TASK [Retrieve logical volumes] ************************************************
changed: [servera.lab.example.com]
TASK [Display logical volumes] *************************************************
ok: [servera.lab.example.com] => {
"msg": [
" LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert",
" content-lv apache-vg -wi-ao---- 64.00m ",
" logs-lv apache-vg -wi-ao---- 128.00m "
]
}
TASK [Retrieve mounted logical volumes] ****************************************
changed: [servera.lab.example.com]
TASK [Display mounted logical volumes] *****************************************
ok: [servera.lab.example.com] => {
"msg": [
"/dev/mapper/apache--vg-content--lv on /var/www type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)",
"/dev/mapper/apache--vg-logs--lv on /var/log/httpd type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)"
]
}
TASK [Retrieve /etc/fstab contents] ********************************************
changed: [servera.lab.example.com]
TASK [Display /etc/fstab contents] *********************************************
ok: [servera.lab.example.com] => {
"msg": [
"UUID=5e75a2b9-1367-4cc8-bb38-4d6abc3964b8\t/boot\txfs\tdefaults\t0\t0",
"UUID=fb535add-9799-4a27-b8bc-e8259f39a767\t/\txfs\tdefaults\t0\t0",
"UUID=7B77-95E7\t/boot/efi\tvfat\tdefaults,uid=0,gid=0,umask=077,shortname=winnt\t0\t2",
"/dev/mapper/apache--vg-content--lv /var/www xfs defaults 0 0",
"/dev/mapper/apache--vg-logs--lv /var/log/httpd xfs defaults 0 0"
]
}
PLAY RECAP *********************************************************************
servera.lab.example.com : ok=11 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Managing Network Configuration
Configuring Networking with the Network System Role
The redhat.rhel_system_roles.network system role provides a way to automate the configuration of network interfaces and network-related settings on Red Hat Enterprise Linux managed hosts.
This role supports the configuration of Ethernet interfaces, bridge interfaces, bonded interfaces, VLAN interfaces, MACVLAN interfaces, InfiniBand interfaces, and wireless interfaces.
The role is configured by using two variables: network_provider and network_connections.
yaml
---
network_provider: nm
network_connections:
  - name: ens4
    type: ethernet
    ip:
      address:
        - 172.25.250.30/24
The network_provider variable configures the back-end provider:
- nm (NetworkManager) on Red Hat Enterprise Linux 7 and later.
- initscripts on Red Hat Enterprise Linux 6 systems in Extended Lifecycle Support (ELS). This provider requires that the legacy network service is available on the managed hosts.
The network_connections variable configures the different connections. It takes as a value a list of dictionaries, each of which represents settings for a specific connection. Use the interface name as the connection name.
The following table lists the options for the network_connections variable.
Table 9.9. Selected Options for the network_connections Variable
Option name | Description |
---|---|
name | For NetworkManager, identifies the connection profile (the connection.id option). For initscripts, identifies the configuration file name (/etc/sysconfig/network-scripts/ifcfg-name, where name is the connection name). |
state | The runtime state of a connection profile. Either up, if the connection profile is active, or down if it is not. |
persistent_state | Identifies if a connection profile is persistent. Either present if the connection profile is persistent (the default), or absent if it is not. |
type | Identifies the connection type. Valid values are ethernet, bridge, bond, team, vlan, macvlan, infiniband, and wireless. |
autoconnect | Determines if the connection automatically starts. Set to yes by default. |
mac | Restricts the connection to be used on devices with this specific MAC address. |
interface_name | Restricts the connection profile to be used by a specific interface. |
zone | Configures the firewalld zone for the interface. |
ip | Determines the IP configuration for the connection. Supports the options address to specify a list of static IPv4 or IPv6 addresses on the interface, gateway4 or gateway6 to specify the IPv4 or IPv6 default router, and dns to configure a list of DNS servers. |
The ip variable in turn takes a dictionary of variables for its settings. Not all of these need to be used. A connection might have just an address setting with a single IPv4 address, or it might skip the address setting and have dhcp4: yes set to enable DHCPv4 addressing.
Table 9.10. Selected Options for the ip Variable
Option name | Description |
---|---|
address | A list of static IPv4 or IPv6 addresses and netmask prefixes for the connection. |
gateway4 | Sets a static address of the default IPv4 router. |
gateway6 | Sets a static address of the default IPv6 router. |
dns | A list of DNS name servers for the connection. |
dhcp4 | Use DHCPv4 to configure the interface. |
auto6 | Use IPv6 autoconfiguration to configure the interface. |
This is a minimal example network_connections variable to configure and immediately activate a static IPv4 address for the enp1s0 interface:
yaml
network_connections:
  - name: enp1s0
    type: ethernet
    ip:
      address:
        - 192.0.2.25/24
    state: up
If you were dynamically configuring the interface using DHCP and SLAAC, you might use the following settings instead:
yaml
network_connections:
  - name: enp1s0
    type: ethernet
    ip:
      dhcp4: true
      auto6: true
    state: up
The next example temporarily deactivates an existing network interface:
yaml
network_connections:
  - name: enp1s0
    type: ethernet
    state: down
To delete the configuration for enp1s0
entirely, you would write the variable as follows:
yaml
network_connections:
  - name: enp1s0
    type: ethernet
    state: down
    persistent_state: absent
The following example uses some of these options to set up the interface eth0 with a static IPv4 address, set a static DNS name server, and place the interface in the external zone for firewalld:
yaml
network_connections:
  - name: eth0
    persistent_state: present
    type: ethernet
    autoconnect: yes
    mac: 00:00:5e:00:53:5d
    ip:
      address:
        - 172.25.250.40/24
      dns:
        - 8.8.8.8
    zone: external
The following example play sets network_connections as a play variable and then calls the redhat.rhel_system_roles.network role:
yaml
- name: NIC Configuration
  hosts: webservers
  vars:
    network_connections:
      - name: ens4
        type: ethernet
        ip:
          address:
            - 172.25.250.30/24
  roles:
    - redhat.rhel_system_roles.network
You can specify variables for the network role with the vars clause, as in the previous example, so that they are used as role variables. Alternatively, you can create a YAML file with those variables under the group_vars or host_vars directories, depending on your use case.
You can use this role to set up 802.11 wireless connections, VLANs, bridges, and other more complex network configurations. See the role's documentation for more details and examples.
Configuring Networking with Modules
In addition to the redhat.rhel_system_roles.network system role, Ansible includes modules that support the configuration of the hostname and firewall on a system.
The ansible.builtin.hostname module sets the hostname for a managed host without modifying the /etc/hosts file. This module uses the name parameter to specify the new hostname, as in the following task:
yaml
- name: Change hostname
  ansible.builtin.hostname:
    name: managedhost1
The ansible.posix.firewalld module supports the management of firewalld on managed hosts.
This module supports the configuration of firewalld rules for services and ports. It also supports zone management, including the association of network interfaces and rules with a specific zone.
The following task shows how to create a firewalld rule for the http service in the default zone (public). The task configures the rule as permanent and makes sure that it is active:
yaml
- name: Enabling http rule
  ansible.posix.firewalld:
    service: http
    permanent: true
    state: enabled
The following task places the eth0 interface in the external firewalld zone:
yaml
- name: Moving eth0 to external
  ansible.posix.firewalld:
    zone: external
    interface: eth0
    permanent: true
    state: enabled
The following table lists some parameters for the ansible.posix.firewalld module.
Parameter name | Description |
---|---|
interface | Interface name to manage with firewalld. |
port | Port or port range. Uses the port/protocol or port-port/protocol format. |
rich_rule | Rich rule for firewalld. |
service | Service name to manage with firewalld. |
source | Source network to manage with firewalld. |
zone | firewalld zone. |
state | Enable or disable a firewalld configuration. |
type | Type of device or network connection. |
permanent | Change persists across reboots. |
immediate | If the changes are set to permanent, then apply them immediately. |
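As a sketch of how these parameters combine, the following task opens TCP port 8080 in the public zone, making the change both persistent and immediately active (the port number is only an example):
yaml
- name: Open TCP port 8080 now and persistently
  ansible.posix.firewalld:
    port: 8080/tcp   # example port, in the port/protocol format
    zone: public
    permanent: true
    immediate: true
    state: enabled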
Ansible Facts for Network Configuration
Ansible collects a number of facts that are related to each managed host's network configuration. For example, a list of the network interfaces on a managed host are available in the ansible_facts['interfaces']
fact.
The following playbook gathers and displays the available interfaces for a host:
yaml
---
- name: Obtain interface facts
  hosts: host.lab.example.com
  tasks:
    - name: Display interface facts
      ansible.builtin.debug:
        var: ansible_facts['interfaces']
The preceding playbook produces the following list of the network interfaces:
PLAY [Obtain interface facts] **************************************************
TASK [Gathering Facts] *********************************************************
ok: [host.lab.example.com]
TASK [Display interface facts] *************************************************
ok: [host.lab.example.com] => {
"ansible_facts['interfaces']": [
"eth2",
"eth1",
"eth0",
"lo"
]
}
PLAY RECAP *********************************************************************
host.lab.example.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The output in the previous example shows that four network interfaces are available on the host.lab.example.com managed host: lo, eth2, eth1, and eth0.
You can retrieve additional information about the configuration for a specific network interface from the ansible_facts['NIC_name'] fact, where NIC_name is the name of the interface. For example, the following play displays the configuration for the eth0 network interface by printing the value of the ansible_facts['eth0'] fact.
yaml
- name: Obtain eth0 facts
  hosts: host.lab.example.com
  tasks:
    - name: Display eth0 facts
      ansible.builtin.debug:
        var: ansible_facts['eth0']
The preceding playbook produces the following output:
PLAY [Obtain eth0 facts] *******************************************************
TASK [Gathering Facts] *********************************************************
ok: [host.lab.example.com]
TASK [Display eth0 facts] ******************************************************
ok: [host.lab.example.com] => {
"ansible_facts['eth0']": {
"active": true,
"device": "eth0",
"features": {
...output omitted...
},
"hw_timestamp_filters": [],
"ipv4": {
"address": "172.25.250.10",
"broadcast": "172.25.250.255",
"netmask": "255.255.255.0",
"network": "172.25.250.0",
"prefix": "24"
},
"ipv6": [
{
"address": "fe80::82a0:2335:d88a:d08f",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "52:54:00:00:fa:0a",
"module": "virtio_net",
"mtu": 1500,
"pciid": "virtio0",
"promisc": false,
"speed": -1,
"timestamping": [],
"type": "ether"
}
}
PLAY RECAP *********************************************************************
host.lab.example.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The preceding output displays additional configuration details, such as the IP address configuration for both IPv4 and IPv6, the associated device, and the type of interface.
The following table lists some other useful network-related facts.
Fact name | Description |
---|---|
ansible_facts['dns'] | A list of the DNS name server IP addresses and the search domains. |
ansible_facts['domain'] | The subdomain for the managed host. |
ansible_facts['all_ipv4_addresses'] | All the IPv4 addresses configured on the managed host. |
ansible_facts['all_ipv6_addresses'] | All the IPv6 addresses configured on the managed host. |
ansible_facts['fqdn'] | The fully qualified domain name (FQDN) of the managed host. |
ansible_facts['hostname'] | The unqualified hostname (the part of the hostname before the first period in the FQDN). |
ansible_facts['nodename'] | The hostname of the managed host as reported by the system. |
Note
Ansible also provides the inventory_hostname "magic variable", which contains the hostname as configured in the Ansible inventory file.
References
Knowledgebase: Red Hat Enterprise Linux (RHEL) System Roles
linux-system-roles/network at GitHub
ansible.builtin.hostname Module Documentation
ansible.posix.firewalld Module Documentation
Example
[student@workstation system-network]$ ll
total 804
-rw-r--r--. 1 student student 236 Oct 8 01:36 ansible.cfg
drwxrwxr-x. 2 student student 22 Oct 8 01:36 collections
-rw-r--r--. 1 student student 180 Oct 8 01:36 get-eth1.yml
-rw-r--r--. 1 student student 37 Oct 8 01:36 inventory
-rw-r--r--. 1 student student 808333 Oct 8 01:36 redhat-rhel_system_roles-1.19.3.tar.gz
[student@workstation system-network]$ cat ansible.cfg
[defaults]
remote_user = devops
inventory = ./inventory
collections_paths=./collections:~/.ansible/collections:/usr/share/ansible/collections
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
[student@workstation system-network]$ cat inventory
[webservers]
servera.lab.example.com
[student@workstation system-network]$
[student@workstation system-network]$ ll collections/
total 0
[student@workstation system-network]$ cat get-eth1.yml
---
- name: Obtain network info for webservers
  hosts: webservers
  tasks:
    - name: Display eth1 info
      ansible.builtin.debug:
        var: ansible_facts['eth1']['ipv4']
Install the redhat.rhel_system_roles Ansible Content Collection from the redhat-rhel_system_roles-1.19.3.tar.gz file to the collections directory in the project directory.
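One way to perform the installation (a sketch of the usual command, run from the project directory):
[student@workstation system-network]$ ansible-galaxy collection install redhat-rhel_system_roles-1.19.3.tar.gz -p collections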
Create a playbook that uses the redhat.rhel_system_roles.network role to configure the network interface eth1 on servera.lab.example.com with the 172.25.250.30/24 IP address and network prefix.
- Create a new variable file named network_config.yml in the group_vars/webservers directory to define the network_connections role variable for the webservers group.
- The value of that variable must configure a network connection for the eth1 network interface that assigns it the static IP address and network prefix 172.25.250.30/24.
yaml
# network.yml
---
- name: Configure network via system role
  hosts: webservers
  roles:
    - name: redhat.rhel_system_roles.network
# network_config.yml under group_vars/webservers
---
network_provider: nm
network_connections:
  - name: eth1
    type: ethernet
    ip:
      address:
        - 172.25.250.30/24
Check result:
[student@workstation system-network]$ ansible-navigator run -m stdout get-eth1.yml
PLAY [Obtain network info for webservers] **************************************
TASK [Gathering Facts] *********************************************************
ok: [servera.lab.example.com]
TASK [Display eth1 info] *******************************************************
ok: [servera.lab.example.com] => {
"ansible_facts['eth1']['ipv4']": {
"address": "172.25.250.30",
"broadcast": "172.25.250.255",
"netmask": "255.255.255.0",
"network": "172.25.250.0",
"prefix": "24"
}
}
PLAY RECAP *********************************************************************
servera.lab.example.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Chapter 9 Example
Create playbooks for configuring a software repository, users and groups, logical volumes, cron jobs, and additional network interfaces on a managed host.
Prerequisites:
[student@workstation system-review]$ ll
total 800
-rw-r--r--. 1 student student 236 Oct 8 22:26 ansible.cfg
drwxrwxr-x. 2 student student 22 Oct 8 22:26 collections
-rw-r--r--. 1 student student 37 Oct 8 22:26 inventory
-rw-r--r--. 1 student student 808333 Oct 8 22:26 redhat-rhel_system_roles-1.19.3.tar.gz
[student@workstation system-review]$ cat ansible.cfg
[defaults]
remote_user = devops
inventory = ./inventory
collections_paths=./collections:~/.ansible/collections:/usr/share/ansible/collections
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
[student@workstation system-review]$
[student@workstation system-review]$ cat inventory
[webservers]
serverb.lab.example.com
[student@workstation system-review]$ ll collections/
total 0
yaml
[student@workstation system-review]$ cat repo_playbook.yml
---
- name: Installing software from YUM
  hosts: webservers
  vars:
    package: rhelver
  tasks:
    - name: Configuring the Yum repository
      ansible.builtin.yum_repository:
        file: example
        name: example-internal
        baseurl: http://materials.example.com/yum/repository
        description: Example Inc. Internal YUM repo
        enabled: true
        gpgcheck: true
        state: present
    - name: Ensure Repo RPM key is installed
      ansible.builtin.rpm_key:
        key: http://materials.example.com/yum/repository/RPM-GPG-KEY-example
        state: present
    - name: Installing the package
      ansible.builtin.dnf:
        name: "{{ package }}"
        state: present
    - name: Gather information about the package
      ansible.builtin.package_facts:
        manager: auto
    - name: Show the package information
      ansible.builtin.debug:
        var: ansible_facts['packages'][package]
      when: package in ansible_facts['packages']
yaml
[student@workstation system-review]$ cat users.yml
---
- name: Creating user group
  hosts: webservers
  vars:
    users:
      - username: ops1
        groups: webadmin
      - username: ops2
        groups: webadmin
  tasks:
    - name: Creating the webadmin group
      ansible.builtin.group:
        name: webadmin
        state: present
    - name: Adding users to group
      ansible.builtin.user:
        name: "{{ item['username'] }}"
        groups: "{{ item['groups'] }}"
        append: true
      loop: "{{ users }}"
    - name: Gather the result
      ansible.builtin.command: "grep webadmin /etc/group"
      register: result
    - name: Show the result
      ansible.builtin.debug:
        msg: "{{ result['stdout_lines'] }}"
yaml
[student@workstation system-review]$ cat storage.yml
---
- name: Configuring LVM using System Roles
  hosts: webservers
  roles:
    - name: redhat.rhel_system_roles.storage
      storage_pools:
        - name: apache-vg
          type: lvm
          disks:
            - /dev/vdb
          volumes:
            - name: content-lv
              size: 64m
              fs_type: xfs
              mount_point: "/var/www"
              state: present
            - name: logs-lv
              size: 128m
              fs_type: xfs
              mount_point: "/var/log/httpd"
              state: present
  post_tasks:
    - name: Gathering LVM vol information
      ansible.builtin.command:
        cmd: grep lv /etc/fstab
      register: result
    - name: Show the result
      ansible.builtin.debug:
        msg: "{{ result['stdout_lines'] }}"
yaml
[student@workstation system-review]$ cat create_crontab_file.yml
---
- name: Creating cron jobs
  hosts: webservers
  tasks:
    - name: Scheduling a cron job
      ansible.builtin.cron:
        name: Check disk usage
        job: df >> /home/devops/disk_usage
        user: devops
        cron_file: disk_usage
        minute: "*/2"
        hour: 9-16
        weekday: 1-5
    - name: Gather result
      ansible.builtin.command:
        cmd: cat /etc/cron.d/disk_usage
      register: result
    - name: Show result
      ansible.builtin.debug:
        msg: "{{ result['stdout_lines'] }}"
yaml
[student@workstation system-review]$ cat network_playbook.yml
---
- name: Config NIC via system roles
  hosts: webservers
  gather_facts: true
  roles:
    - name: redhat.rhel_system_roles.network
      network_provider: nm
      network_connections:
        - name: eth1
          type: ethernet
          ip:
            address:
              - 172.25.250.40/24
  post_tasks:
    - name: Gathering network information
      ansible.builtin.setup:
    - name: Show the result
      ansible.builtin.debug:
        var: ansible_facts['eth1']
Summary
- The ansible.builtin.yum_repository module configures a Yum repository on a managed host. For repositories that use public keys, you can verify that the key is available with the ansible.builtin.rpm_key module.
- The ansible.builtin.user and ansible.builtin.group modules create users and groups respectively on a managed host.
- The ansible.builtin.known_hosts module configures SSH known hosts for a server, and the ansible.posix.authorized_key module configures authorized keys for user authentication.
- The ansible.builtin.cron module configures system or user Cron jobs on managed hosts.
- The ansible.posix.at module configures one-off at jobs on managed hosts.
- The redhat.rhel_system_roles Red Hat Certified Ansible Content Collection includes two particularly useful system roles: storage, which supports the configuration of LVM logical volumes, and network, which enables the configuration of network interfaces and connections.
TO BE CONTINUED...