
Playbooks

Playbooks are the first thing you think of when using Ansible. This section describes some good practices.

Directory structure

The main playbook should have a recognizable name, e.g. referencing the project's name or scope. If you have multiple playbooks, create a new folder called playbooks and store all playbooks there, except the main playbook (here called site.yml).

.
├── ansible.cfg
├── site.yml
└── playbooks
    ├── database.yml
    ├── loadbalancer.yml
    └── webserver.yml

The site.yml file contains references to the other playbooks:

---
# Main playbook including all other playbooks

- ansible.builtin.import_playbook: playbooks/database.yml # noqa name[play]
- ansible.builtin.import_playbook: playbooks/webserver.yml # noqa name[play]
- ansible.builtin.import_playbook: playbooks/loadbalancer.yml # noqa name[play]
noqa statement

The file site.yml only references other playbooks; still, the ansible-lint utility would complain, as every play should have the name parameter.
While this is correct (and you should always name your actual plays), the name parameter on import statements is not shown anyway, as they are pre-processed at the time playbooks are parsed. Take a look at import vs. include in the tasks section.

Success

Therefore, silencing the linter in this particular case with the noqa statement is acceptable.

In contrast, include statements like ansible.builtin.include_tasks should have the name parameter, as these statements are processed when they are encountered during the execution of the playbook.
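
For example, a named include might look like the following minimal sketch (the included file name is only illustrative, not part of the project above):

- name: Include OS-specific setup tasks
  ansible.builtin.include_tasks: "setup-{{ ansible_facts['os_family'] }}.yml"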

The lower-level playbooks contain the actual plays:

playbooks/database.yml
---
- name: Install and configure PostgreSQL database
  hosts: postgres_servers
  roles:
    - postgres

To be able to run the overall playbook, as well as the imported playbooks, add the following setting to your ansible.cfg; otherwise the roles are not found:

[defaults]
roles_path = ./roles
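
Assuming you run Ansible from the project root (where ansible.cfg lives), roles placed in a top-level roles folder are then found by both the main and the imported playbooks. A sketch of such a layout, reusing the postgres role from the example above:

.
├── ansible.cfg
├── site.yml
├── playbooks
│   ├── database.yml
│   ├── loadbalancer.yml
│   └── webserver.yml
└── roles
    └── postgres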

Playbook definition

Don't put too much logic in your playbook; put it in your roles (or even in custom modules).
A playbook could contain pre_tasks, roles, tasks and post_tasks sections; try to limit your playbooks to a list of roles.

Warning

Avoid using both roles and tasks sections, the latter possibly containing import_role or include_role tasks. The order of execution between roles and tasks isn’t obvious, and hence mixing them should be avoided.

Either you need only static importing of roles and you can use the roles section, or you need dynamic inclusion and you should use only the tasks section. Of course, for very simple cases, you can just use tasks without roles (but playbooks/projects grow quickly, refactor to roles early).
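
If you do need dynamic inclusion, a tasks-only play could look like the following minimal sketch (the postgres_enabled variable is made up for illustration):

- name: Install and configure PostgreSQL database
  hosts: postgres_servers
  tasks:
    - name: Include the postgres role dynamically
      ansible.builtin.include_role:
        name: postgres
      when: postgres_enabled | default(true)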

Plays

Avoid putting multiple plays in a playbook unless really necessary. As every play most likely targets a different host group, create a separate playbook file for each. This way you achieve the most flexibility.

k8s-installation.yml
---
- name: Initialize Control-Plane Nodes
  hosts: kubemaster
  become: true
  roles:
    - k8s-control-plane

- name: Install and configure Worker Nodes
  hosts: kubeworker
  become: true
  roles:
    - k8s-worker-nodes

Separate the two plays into their respective playbooks files and reference them in an overall playbook file:

k8s-control-plane-playbook.yml
---
- name: Initialize Control-Plane Nodes
  hosts: kubemaster
  become: true
  roles:
    - k8s-control-plane
k8s-worker-node-playbook.yml
---
- name: Install and configure Worker Nodes
  hosts: kubeworker
  become: true
  roles:
    - k8s-worker-nodes
k8s-installation.yml
---
- ansible.builtin.import_playbook: k8s-control-plane-playbook.yml # noqa name[play]
- ansible.builtin.import_playbook: k8s-worker-node-playbook.yml # noqa name[play]

Module defaults

If your playbook uses modules which need to be called with the same set of parameters or arguments, you can define these as module_defaults.
The defaults can be set at play, block or task level.

Module defaults are defined by grouping together modules that share common sets of parameters, especially for modules making heavy use of API-interaction such as cloud modules.

Since ansible-core 2.12, collections can define their own groups in the meta/runtime.yml file. module_defaults does not take the collections keyword into account, so the fully qualified group name must be used for new groups in module_defaults.

---
- name: Demo play with modules which need to call the same arguments
  hosts: aci
  module_defaults:
    group/cisco.aci.all:
      host: "{{ apic_api }}"
      username: "{{ apic_user }}"
      password: "{{ apic_password }}"
      validate_certs: false
  tasks:
    - name: Get system info
      cisco.aci.aci_system:
        state: query

    - name: Create a new demo tenant
      cisco.aci.aci_tenant:
        name: demo-tenant
        description: Tenant for demo purposes
        state: present

For comparison, without module_defaults the authentication parameters would have to be repeated in every task:

- name: Demo play with modules which need to call the same arguments
  hosts: aci
  tasks:
    - name: Get system info
      cisco.aci.aci_system:
        host: "{{ apic_api }}"
        username: "{{ apic_user }}"
        password: "{{ apic_password }}"
        validate_certs: false
        state: query

    - name: Create a new demo tenant
      cisco.aci.aci_tenant:
        host: "{{ apic_api }}"
        username: "{{ apic_user }}"
        password: "{{ apic_password }}"
        validate_certs: false
        name: demo-tenant
        description: Tenant for demo purposes
        state: present

To identify the correct group (remember, these are not inventory groups), take a look at the meta/runtime.yml of the desired collection. It needs to define the action_groups list, for example:

~/.ansible/collections/ansible_collections/cisco/aci/meta/runtime.yml
---
requires_ansible: '>=2.9.10'
action_groups:
  all:
    - aci_aaa_custom_privilege
    - aci_aaa_domain
    - aci_aaa_role
    - aci_aaa_ssh_auth
    - aci_aaa_user
    - aci_aaa_user_certificate
    - aci_aaa_user_domain
    - aci_aaa_user_role
    - aci_access_port_block_to_access_port
    ...

The group is called all, therefore the module defaults group needs to be group/cisco.aci.all.

Note

Any module defaults set at the play level (and block/task level when using include_role or import_role) will apply to any roles used, which may cause unexpected behavior in the role.
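
The examples above set the defaults at play level; as a minimal sketch, they can also be scoped to a single block (ansible.builtin.uri and the URLs are stand-ins for illustration, not part of the ACI example):

- name: Query two endpoints with shared connection settings
  module_defaults:
    ansible.builtin.uri:
      validate_certs: false
      timeout: 30
  block:
    - name: Check the health endpoint
      ansible.builtin.uri:
        url: "https://{{ inventory_hostname }}/health"

    - name: Check the version endpoint
      ansible.builtin.uri:
        url: "https://{{ inventory_hostname }}/version"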

Collections in playbooks

In a playbook, you can control the collections Ansible searches for modules and action plugins to execute.

tl;dr

This is not recommended; try to avoid it.

- name: Initialize Control-Plane Nodes
  hosts: kubemaster
  collections:
    - kubernetes.core
    - computacenter.utils
  become: true
  roles:
    - k8s-control-plane

With that you could omit the namespace.collection part when using modules; by default you would reference a module with the FQCN:

- name: Check if Weave is already installed
  kubernetes.core.k8s_info:
    api_version: v1
    kind: DaemonSet
    name: weave-net
    namespace: kube-system
  register: weave_daemonset

With the collections list defined as part of the play definition, you could write your tasks like this:

- name: Check if Weave is already installed
  k8s_info:
    api_version: v1
    kind: DaemonSet
    name: weave-net
    namespace: kube-system
  register: weave_daemonset

Warning

If your playbook uses both the collections keyword and one or more roles, the roles do not inherit the collections set by the playbook!
The collections keyword merely creates an ordered search path for non-namespaced plugin and role references. It does not install content or otherwise change Ansible’s behavior around the loading of plugins or roles. Note that an FQCN is still required for non-action or module plugins (for example, lookups, filters, tests).
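
For example, a filter still needs its FQCN even though the collection is listed in the play. This is only a sketch, assuming the community.general collection and the jmespath Python library are available:

- name: Demonstrate that filters ignore the collections keyword
  hosts: localhost
  collections:
    - community.general
  vars:
    users:
      - name: alice
      - name: bob
  tasks:
    - name: Extract user names with a collection filter
      ansible.builtin.debug:
        msg: "{{ users | community.general.json_query('[].name') }}"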

Tip

It is preferable to use a module or plugin’s FQCN over the collections keyword!

Executing playbooks

To run your playbook, use the ansible-playbook command.

ansible-playbook playbook.yml

Some useful command-line parameters when executing your playbook are the following (a combined example is shown after the list):

  • -C or --check runs the playbook without making any modifications
  • -D or --diff shows the differences when changing (small) files and templates
  • --step runs one step at a time; you need to confirm each task before it runs
  • --list-tags lists all available tags
  • --list-tasks lists all tasks that would be executed
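
For example, a dry run that also shows file and template diffs (using the site.yml from above) could be started like this:

ansible-playbook site.yml --check --diff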

With Ansible Navigator

To ensure that your Ansible content works when running it locally during development as well as when running it in AAP or AWX later, it is advisable to execute it with the same Execution Environment. The ansible-playbook command can't use Execution Environments; this is where the Navigator comes in.

The Ansible (Content) Navigator is a command-line tool and a text-based user interface (TUI) for creating, reviewing, running and troubleshooting Ansible content, including inventories, playbooks, collections, documentation and container images (execution environments). Take a look at the Installation section on how to install the utility and dependencies.

Use the following minimal configuration for the Navigator and store it in your project root directory:

ansible-navigator.yml

---
ansible-navigator:
  execution-environment:
    image: ghcr.io/ansible-community/community-ee-base:latest # (1)!
    pull:
      policy: missing
  logging:
    level: warning
    file: logs/ansible-navigator.log
  mode: stdout # (2)!
  playbook-artifact:
    enable: true
    save-as: "logs/{playbook_status}-{playbook_name}-{time_stamp}.json" # (3)!
  1. Specifies the name of the execution environment image to use; change this if you want to use your own. The pull policy missing downloads the image only if it is not already present (this also means no updated images will be downloaded!).
    To build and use your own Execution Environment take a look at the section Installation > Execution Environments.
  2. Specifies the user-interface mode; with stdout it outputs to standard out, as with the usual ansible-playbook command. Use interactive for the TUI. You can provide the CLI parameter -m or --mode to override the configuration.
  3. Specifies the name for artifacts created from completed playbooks. For example, a successful run of the site.yml playbook creates a log file like logs/successful-site-2023-11-01T12:20:20.907856+00:00.json; for failed runs it would be logs/failed-site-2023-11-01T12:29:17.020432+00:00.json. With the replay command, you can then review the output of previous playbook runs, e.g. ansible-navigator replay logs/failed-site-2023-11-01T12\:29\:33.129179+00\:00.json.

You can also use the Navigator configuration for all your projects by saving it as a hidden file in your home directory (e.g. ~/.ansible-navigator.yml).

Take a look at the official Ansible Navigator Documentation for all other configuration options.

Warning

With the configuration above, playbook artifacts (logs), as well as the Navigator log file, will be stored in a logs folder in your playbook directory. Consider excluding the folder from Git tracking.

.gitignore
logs/

Executing a playbook with the Navigator is as easy as before; just run it like this:

ansible-navigator run site.yml

Append any CLI parameters (e.g. -i inventory.ini) that you are used to from executing it with ansible-playbook.
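
For example, to pass an inventory and switch to the TUI for a single run (overriding the stdout mode from the configuration above):

ansible-navigator run site.yml -i inventory.ini -m interactive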

Tip

Using the interactive mode (the TUI) is encouraged, try it out!