Automate Leaf and Spine Deployment - Part 3

fabric variables and dynamic inventory

13 February 2021   15 min read

The 3rd post in the ‘Automate Leaf and Spine Deployment’ series goes through the variables from which the core fabric declaration is made and how this transposes into a dynamic inventory. It uses only the base and fabric roles to create the fabric, ready for the service sub-roles (tenant, interface and route) to be deployed on top at a later stage.


ansible.yml, base.yml and fabric.yml hold the core variables that are the minimum requirements to build the fabric that tenants and other services can be built upon. All variables are prefixed with ans, bse or fbc to make it easier to identify within playbooks and templates where they came from. From the contents of these var_files a dynamic inventory is built containing host_vars of the fabric interfaces and IP addresses.


ansible.yml (ans)

The environmental settings that you would usually find in the all.yml group_vars file.

  • dir_path: Base directory location on the Ansible host that stores all the validation and configuration snippets
  • device_os: Operating system of each device type (spine, leaf and border)
  • creds_all: hostname (taken from the inventory), username and password
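
A minimal sketch of what ansible.yml could look like (the key layout is an assumption based on the descriptions above and the plugin code later in this post; all values are placeholders):

ans:
  dir_path: ~/device_configs            # base directory for validation and config snippets
  device_os:
    spine_os: nxos                      # becomes ansible_network_os, used by the napalm driver
    border_os: nxos
    leaf_os: nxos
  creds_all:                            # hostname comes from the inventory
    username: admin
    password: password1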

base.yml (bse)

The settings required to onboard and manage devices such as hostname format, IP address ranges, aaa, syslog, etc.

device_name: Naming format to which the automatically generated ‘Node ID’ (double-decimal format) is added, and from which the group name is created (in lowercase). The name must contain a hyphen (-), and the characters after that hyphen must be letters, digits or underscores, as that is what the group name is created from. For example, using DC1-N9K-SPINE means the first device is DC1-N9K-SPINE01 and the group is spine

| Key    | Value | Information                                  |
|--------|-------|----------------------------------------------|
| spine  | xx-xx | Spine switch device and group naming format  |
| border | xx-xx | Border switch device and group naming format |
| leaf   | xx-xx | Leaf switch device and group naming format   |
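
Using the example above, the device_name declaration would look something like this:

bse:
  device_name:
    spine: DC1-N9K-SPINE
    border: DC1-N9K-BORDER
    leaf: DC1-N9K-LEAF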

addr: Subnets from which the device-specific IP addresses are generated, based on the device-type increment and the Node ID. The majority of subnets need to be at least /27 to cover a maximum network size of 4 spines, 10 leafs and 4 borders (18 addresses)

| Key             | Value      | Minimum Size | Information |
|-----------------|------------|--------------|-------------|
| lp_net          | x.x.x.x/26 | /26          | Range that the routing (OSPF/BGP), VTEP and vPC loopbacks are assigned from (mask will be /32) |
| mgmt_net        | x.x.x.x/27 | /27          | Management network; by default uses .11 to .30 |
| mlag_peer_net   | x.x.x.x/26 | /26 or /27   | Range for OSPF peering between MLAG pairs, split into a /30 per switch pair. Must be /26 if using the same range for the keepalive |
| mlag_kalive_net | x.x.x.x/27 | /27          | Optional keepalive address range (split into /30s). If not set, the mlag_peer_net range is used |
| mgmt_gw         | x.x.x.x    | n/a          | Management interface default gateway |

mlag_kalive_net is only needed if you are not using the management interface for the keepalive or want separate ranges for the peer-link and keepalive interfaces. The keepalive link is created in its own VRF so it can use duplicate IPs or be kept unique by offsetting it with the fabric variable fbc.adv.addr_incre.mlag_kalive_incre.
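
An illustrative addr declaration that lines up with the example host_vars later in this post (mgmt_gw is an assumed value):

bse:
  addr:
    lp_net: 192.168.101.0/26
    mgmt_net: 10.10.108.0/27
    mlag_peer_net: 192.168.202.0/26
    mlag_kalive_net: 10.10.10.0/27
    mgmt_gw: 10.10.108.1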

The rest of the settings within base.yml are not specific to building a leaf and spine fabric; they are the more generic settings you find on most devices. Rather than going in depth into these, I have just listed the top-level dictionaries; more detailed information can be found in the variable file.

  • services: Services the switch consumes, such as DNS, TACACS, NTP, SYSLOG and SNMP. All settings defined under here are optional; by default they use the management interface and VRF as the source unless specifically set
  • mgmt_acl: Access lists to restrict SSH and SNMP access
  • adv: Advanced base configuration that is less likely to be changed, such as the image and SSH/console timeouts

fabric.yml (fbc)

Variables used to determine how the fabric will be built: the network size, interfaces, routing protocols and address increments. At a bare minimum you only need to declare the size of the fabric, the total number of switch ports and the routing options.

network_size: How many of each device type make up the fabric. This can range from 1 spine and 2 leafs up to a maximum of 4 spines, 4 borders and 10 leafs. The border and leaf switches are MLAG pairs, so must be deployed in increments of 2

| Key         | Value | Information |
|-------------|-------|-------------|
| num_spines  | 2     | Number of spine switches, in increments of 1 up to a maximum of 4 |
| num_borders | 2     | Number of border switches, in increments of 2 up to a maximum of 4 |
| num_leafs   | 4     | Number of leaf switches, in increments of 2 up to a maximum of 10 |

num_intf: The total number of interfaces per device type is required to make the interface assignment declarative, by ensuring that non-defined interfaces are reset to their default values

| Key    | Value | Information |
|--------|-------|-------------|
| spine  | 1,64  | The first and last interface for a spine switch |
| border | 1,64  | The first and last interface for a border switch |
| leaf   | 1,64  | The first and last interface for a leaf switch |
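
In YAML form these two dictionaries would be declared something like this (values taken from the tables above):

fbc:
  network_size:
    num_spines: 2
    num_borders: 2
    num_leafs: 4
  num_intf:
    spine: 1,64
    border: 1,64
    leaf: 1,64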

route: Settings related to the fabric routing protocols (OSPF and BGP). BFD is not supported on unnumbered interfaces, so the routing protocol timers have been shortened (OSPF 2/8, BGP 3/9); these are set under the variable file’s advanced settings (adv.route)

| Key            | Value             | Mandatory | Information |
|----------------|-------------------|-----------|-------------|
| ospf.pro       | string or integer | Yes       | Can be a numbered or named process |
| ospf.area      | x.x.x.x           | Yes       | Area this group of interfaces is in; must be in dotted decimal format |
| bgp.as_num     | integer           | Yes       | Local BGP Autonomous System number |
| authentication | string            | No        | Applies to both BGP and OSPF. Hash out if you don’t want to set authentication |

acast_gw_mac: The distributed anycast gateway MAC address used by all leaf and border switches, in the format xxxx.xxxx.xxxx
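
A sketch of the routing variables (the process name, AS number and MAC address are illustrative placeholders):

fbc:
  route:
    ospf:
      pro: UNDERLAY                   # numbered or named OSPF process
      area: 0.0.0.0
    bgp:
      as_num: 65001
    # authentication: my_password     # applies to OSPF and BGP; hash out to disable
  acast_gw_mac: 0000.1111.2222        # anycast gateway MAC for all leafs and borders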

advanced settings (fbc.adv)

The advanced settings allow for further customization of the fabric; these are settings that are less likely to be changed. It is from these values that the declarative model determines which interfaces to use and what address increments to add.

  • adv.nve_hold_time: Time in seconds the switch waits after bootup or interface recovery before bringing up the NVE interface
  • adv.route: Hello and dead timers for the fabric routing protocols (OSPF 2/8, BGP 3/9)

adv.bse_intf: Interface naming formats and seed interface numbers used to build the fabric

| Key         | Value        | Information |
|-------------|--------------|-------------|
| intf_fmt    | Ethernet1/   | Interface naming format |
| intf_short  | Eth1/        | Short interface name used in interface descriptions |
| mlag_fmt    | port-channel | MLAG interface naming format |
| mlag_short  | Po           | Short MLAG interface name used in MLAG interface descriptions |
| lp_fmt      | loopback     | Loopback interface naming format |
| sp_to_lf    | 1            | First interface used for SPINE to LEAF links (1 to 10) |
| sp_to_bdr   | 11           | First interface used for SPINE to BORDER links (11 to 14) |
| lf_to_sp    | 1            | First interface used for LEAF to SPINE links (1 to 4) |
| bdr_to_sp   | 1            | First interface used for BORDER to SPINE links (1 to 4) |
| mlag_peer   | 5-6          | Interfaces used for the MLAG peer-link |
| mlag_kalive | mgmt         | Interface used for the keepalive. If it is not an integer, the management interface is used |
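
The same defaults in YAML form:

fbc:
  adv:
    bse_intf:
      intf_fmt: Ethernet1/
      intf_short: Eth1/
      mlag_fmt: port-channel
      mlag_short: Po
      lp_fmt: loopback
      sp_to_lf: 1
      sp_to_bdr: 11
      lf_to_sp: 1
      bdr_to_sp: 1
      mlag_peer: 5-6
      mlag_kalive: mgmt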

adv.lp: Loopback interface numbers (added to fbc.adv.bse_intf.lp_fmt) and descriptions used by the fabric for routing, VTEP and the BGW anycast (for future use if EVPN multi-site is added)

| Key        | Value  | Information |
|------------|--------|-------------|
| rtr.num    | 1      | Loopback used for the routing protocol RID and peerings |
| rtr.descr  | string | Description for the routing loopback |
| vtep.num   | 2      | Loopback used for VTEP tunnels (PIP) and MLAG (VIP) |
| vtep.descr | string | Description for the VTEP loopback |
| bgw.num    | 1      | Loopback used for the multi-site BGW (not currently used) |
| bgw.descr  | string | Description for the BGW loopback |

adv.mlag: All MLAG-specific settings except for the physical interfaces (peer-link and keepalive) and network ranges

| Key        | Value         | Information |
|------------|---------------|-------------|
| domain     | 1             | MLAG domain number |
| peer_po    | 1             | Port-channel number used for the peer-link |
| peer_vlan  | 2             | VLAN used for the OSPF peering over the peer-link |
| kalive_vrf | VPC_KEEPALIVE | VRF name for the keepalive link. Only needed if the management interface is not used for the keepalive |
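
The loopback and MLAG defaults from the two tables above in YAML form (the rtr and vtep descriptions are taken from the example host_vars later in this post; the bgw description is illustrative):

fbc:
  adv:
    lp:
      rtr:
        num: 1
        descr: LP > Routing protocol RID and peerings
      vtep:
        num: 2
        descr: LP > VTEP Tunnels (PIP) and MLAG (VIP)
      bgw:
        num: 1
        descr: LP > BGW anycast (multi-site)
    mlag:
      domain: 1
      peer_po: 1
      peer_vlan: 2
      kalive_vrf: VPC_KEEPALIVE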

adv.addr_incre: Increments added to the ‘Node ID’ and subnet to generate unique device IP addresses. Uniqueness is enforced by using different increments for different device types and functions

| Key               | Value | Information |
|-------------------|-------|-------------|
| spine_ip          | 11    | Spine management and routing loopback addresses (default .11 to .14) |
| border_ip         | 16    | Border management and routing loopback addresses (default .16 to .19) |
| leaf_ip           | 21    | Leaf management and routing loopback addresses (default .21 to .30) |
| border_vtep_lp    | 36    | Border VTEP (PIP) loopback addresses (default .36 to .39) |
| leaf_vtep_lp      | 41    | Leaf VTEP (PIP) loopback addresses (default .41 to .50) |
| border_mlag_lp    | 56    | Shared MLAG anycast (VIP) loopback addresses for each pair of borders (default .56 to .57) |
| leaf_mlag_lp      | 51    | Shared MLAG anycast (VIP) loopback addresses for each pair of leafs (default .51 to .55) |
| border_bgw_lp     | 58    | Shared BGW multi-site anycast loopback addresses for each pair of borders (default .58 to .59) |
| mlag_leaf_ip      | 1     | Start IP for leaf OSPF peering over the peer-link (default: LEAF01 is .1, LEAF02 is .2, LEAF03 is .5, etc.) |
| mlag_border_ip    | 21    | Start IP for border OSPF peering over the peer-link (default: BORDER01 is .21, BORDER03 is .25, etc.) |
| mlag_kalive_incre | 28    | Increment added to the peer-link increment (mlag_leaf_ip/mlag_border_ip) to generate unique keepalive addresses |

If the management interface is not being used for the keepalive link, either specify a separate network range (bse.addr.mlag_kalive_net) or use the peer-link range and define an increment (mlag_kalive_incre) that is added to the peer-link increment (mlag_leaf_ip or mlag_border_ip) to generate unique addresses.
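
As a rough illustration of how the Node ID and increments combine (the real logic lives in the plugin’s create_ip method described later; the subnets are the assumed bse.addr values sketched earlier), the addresses for DC1-N9K-LEAF01 work out as follows and match the example host_vars later in this post:

from ipaddress import ip_network

# NOTE: illustration only - assumed subnets plus the default increments from the table above
mgmt_net = ip_network('10.10.108.0/27')
lp_net = ip_network('192.168.101.0/26')
mlag_peer_net = ip_network('192.168.202.0/26')
mlag_kalive_net = ip_network('10.10.10.0/27')
leaf_ip, leaf_vtep_lp = 21, 41
mlag_leaf_ip, mlag_kalive_incre = 1, 28

node_id = 1                             # DC1-N9K-LEAF01
pair, pos = divmod(node_id - 1, 2)      # each MLAG pair consumes a /30 (4 addresses)

print(mgmt_net.network_address + leaf_ip + node_id - 1)      # 10.10.108.21 (ansible_host)
print(lp_net.network_address + leaf_ip + node_id - 1)        # 192.168.101.21 (loopback1)
print(lp_net.network_address + leaf_vtep_lp + node_id - 1)   # 192.168.101.41 (loopback2)
print(mlag_peer_net.network_address + mlag_leaf_ip + 4 * pair + pos)   # 192.168.202.1
print(mlag_kalive_net.network_address
      + mlag_leaf_ip + mlag_kalive_incre + 4 * pair + pos)   # 10.10.10.29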

Dynamic Inventory

The ansible, base and fabric variables are passed through the inv_from_vars.py inventory_plugin to create the dynamic inventory and the host_vars of all the fabric interfaces and IP addresses. By doing this in the inventory, the complexity is abstracted from the base and fabric role templates, making it easier to expand the playbook to other vendors in the future.

With the exception of intf_mlag_peer and mlag_peer_ip (not created on the spines), the following host_vars are created for every host.

  • ansible_host: The device’s management address
  • ansible_network_os: Taken from the ansible var_file and used by the napalm device driver
  • intf_fbc: Dictionary of fabric interfaces with the interfaces as keys and the descriptions as values
  • intf_lp: List of dictionaries with the keys name, ip and descr
  • intf_mlag_peer: Dictionary of MLAG peer-link interfaces with the interfaces as keys and the descriptions as values
  • mlag_peer_ip: IP of the SVI (default VLAN2) used for the OSPF peering over the MLAG peer-link
  • num_intf: The first and last physical interface numbers on the switch
  • intf_mlag_kalive: Dictionary of the MLAG keepalive link interface with the interface as the key and the description as the value (only created if defined)
  • mlag_kalive_ip: IP of the keepalive link (only created if defined)

The devices and groups created by the inventory plugin can be checked using the --graph flag. It is the inventory config file (.yml), not the inventory plugin (.py), that is referenced when using the dynamic inventory.

ansible-inventory --playbook-dir=$(pwd) -i inv_from_vars_cfg.yml --graph
@all:
|--@border:
| |--DC1-N9K-BORDER01
| |--DC1-N9K-BORDER02
|--@leaf:
| |--DC1-N9K-LEAF01
| |--DC1-N9K-LEAF02
| |--DC1-N9K-LEAF03
| |--DC1-N9K-LEAF04
|--@spine:
| |--DC1-N9K-SPINE01
| |--DC1-N9K-SPINE02
|--@ungrouped:

--host shows the host_vars for that specific host, whereas --list shows everything: all the host_vars and group_vars.

ansible-inventory --playbook-dir=$(pwd) -i inv_from_vars_cfg.yml --host DC1-N9K-LEAF01
ansible-inventory --playbook-dir=$(pwd) -i inv_from_vars_cfg.yml --list

An example of the host_vars created for a leaf switch.

{
    "ansible_host": "10.10.108.21",
    "ansible_network_os": "nxos",
    "intf_fbc": {
        "Ethernet1/1": "UPLINK > DC1-N9K-SPINE01 - Eth1/1",
        "Ethernet1/2": "UPLINK > DC1-N9K-SPINE02 - Eth1/1"
    },
    "intf_lp": [
        {
            "descr": "LP > Routing protocol RID and peerings",
            "ip": "192.168.101.21/32",
            "name": "loopback1"
        },
        {
            "descr": "LP > VTEP Tunnels (PIP) and MLAG (VIP)",
            "ip": "192.168.101.41/32",
            "mlag_lp_addr": "192.168.101.51/32",
            "name": "loopback2"
        }
    ],
    "intf_mlag_kalive": {
        "Ethernet1/7": "UPLINK > DC1-N9K-LEAF02 - Eth1/7 < MLAG Keepalive"
    },
    "intf_mlag_peer": {
        "Ethernet1/5": "UPLINK > DC1-N9K-LEAF02 - Eth1/5 < Peer-link",
        "Ethernet1/6": "UPLINK > DC1-N9K-LEAF02 - Eth1/6 < Peer-link",
        "port-channel1": "UPLINK > DC1-N9K-LEAF02 - Po1 < MLAG Peer-link"
    },
    "mlag_kalive_ip": "10.10.10.29/30",
    "mlag_peer_ip": "192.168.202.1/30",
    "num_intf": "1,64"
}

To use the inventory plugin in a playbook, reference the inventory config file in place of the normal hosts inventory file (-i).

ansible-playbook PB_build_fabric.yml -i inv_from_vars_cfg.yml

Inventory Config File

The inventory config file inv_from_vars_cfg.yml identifies the inventory_plugin (stored in inventory_plugins), the variable files (var_files) and the dictionaries from those files (var_dicts) that will be passed into the inventory_plugin. Each var_dicts key is a variable file name (without the extension) and its value lists the dictionaries to import from that file. This is the file that is referenced as the inventory when the playbook is run.

plugin: inv_from_vars

var_files:
  - ansible.yml
  - base.yml
  - fabric.yml

var_dicts:
  ansible:
    - device_os
  base:
    - device_name
    - addr
  fabric:
    - network_size
    - num_intf
    - bse_intf
    - lp
    - mlag
    - addr_incre

Inventory python script (plugin)

The following sections go through the different elements and methods you will find in the inv_from_vars.py plugin script.

An inventory_plugin will always start with the DOCUMENTATION (mandatory) and EXAMPLES (optional) sections.

  • EXAMPLES: What users see as instructions on how to run the plugin
  • DOCUMENTATION: How the variables and dictionaries defined in the inventory config file are passed into the inventory plugin. options defines var_files and var_dicts, allowing the get_option() method to parse that data from the inventory config file (inv_from_vars_cfg.yml)
DOCUMENTATION = '''
    name: inv_from_vars
    plugin_type: inventory
    version_added: "2.8"
    short_description: Creates inventory from desired state
    description:
        - Dynamically creates inventory from specified number of Leaf & Spine devices
    extends_documentation_fragment:
        - constructed
        - inventory_cache
    options:
        plugin:
            description: Token that ensures this is a source file for the 'inv_from_vars' plugin.
            required: True
            choices: ['inv_from_vars']
        var_files:
            description: Var files in Ansible vars directory where dictionaries will be imported from
            required: True
            type: list
        var_dicts:
            description: Dictionaries that will be imported from the var files
            required: True
            type: dictionary
'''

BaseInventoryPlugin

At a bare minimum import the Ansible BaseInventoryPlugin module (others can be imported for extra features) and define the class InventoryModule (which inherits from BaseInventoryPlugin). The class has the pre-built methods verify_file and parse that auto-run to build the inventory.

import os      # used by the parse method to build the var_file paths
import yaml    # used by the parse method to load the var_file contents

from ansible.errors import AnsibleParserError
from ansible.module_utils._text import to_native, to_text
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable

class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
    NAME = 'inv_from_vars'

verify_file

The verify_file method runs first and is used to make a quick determination of whether the inventory source file is usable by the plugin. It does not need to be 100% accurate, so generally it just makes sure the file exists and skips parsing if it doesn’t.

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith(('inv_from_vars_cfg.yaml', 'inv_from_vars_cfg.yml')):
                valid = True
        return valid

parse

The ‘engine’ of the inventory_plugin that takes the data from the inventory config file (options) and assigns it to variables that are used to create the inventory. The parse method takes the following 4 arguments:

  • inventory: The inventory object that is populated. Has the methods add_group, add_host, add_child and set_variable
  • loader: The DataLoader can read files, auto-load JSON/YAML, decrypt vaulted data and cache read files
  • path: A string that is the path to the file holding the plugin’s options (the inventory config file)
  • cache: Indicates whether the plugin should use or avoid caches (cache plugin and/or loader)

    def parse(self, inventory, loader, path, cache=False):
        super(InventoryModule, self).parse(inventory, loader, path)

The _read_config_data() method parses the inventory config file and the get_option() method gathers the var_files and var_dicts from it (as defined in the DOCUMENTATION options).

        self._read_config_data(path)
        var_files = self.get_option('var_files')
        var_dicts = self.get_option('var_dicts')

Using zip to iterate over the var_file names with the .yml file extension (var_files) together with the same names without the extension (the var_dicts keys), a nested dictionary (all_vars) is created that holds the contents of each var_file in the format {var_file_name: var_file_contents}.

        all_vars = {}
        mydir = os.getcwd()
        for dict_name, file_name in zip(var_dicts.keys(), var_files):
            with open(os.path.join(mydir, 'vars/') + file_name, 'r') as file_content:
                all_vars[dict_name] = yaml.load(file_content, Loader=yaml.FullLoader)
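
After the loop, all_vars ends up shaped something like this (a heavily trimmed illustration):

# Trimmed illustration of the resulting structure
all_vars = {
    'ansible': {'ans': {'device_os': {'spine_os': 'nxos'}}},
    'base': {'bse': {'device_name': {'spine': 'DC1-N9K-SPINE'}, 'addr': {}}},
    'fabric': {'fbc': {'network_size': {'num_spines': 2}, 'adv': {}}},
}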

From this all_vars dictionary, variables are created for each of the individual dictionaries (taken from the var_dicts values) required to create the inventory. Doing the extraction in this manner keeps it simple to add more host_vars in the future (just add to the inventory config file and create a variable).

        for file_name, var_names in var_dicts.items():
            for each_var in var_names:
                if each_var == 'device_os':
                    self.device_os = all_vars[file_name]['ans'][each_var]
                elif each_var == 'device_name':
                    self.device_name = all_vars[file_name]['bse'][each_var]
                elif each_var == 'addr':
                    self.addr = all_vars[file_name]['bse'][each_var]
                elif each_var == 'network_size':
                    self.network_size = all_vars[file_name]['fbc'][each_var]
                elif each_var == 'num_intf':
                    self.num_intf = all_vars[file_name]['fbc'][each_var]
                elif each_var == 'bse_intf':
                    self.bse_intf = all_vars[file_name]['fbc']['adv'][each_var]
                elif each_var == 'lp':
                    self.lp = all_vars[file_name]['fbc']['adv'][each_var]
                elif each_var == 'mlag':
                    self.mlag = all_vars[file_name]['fbc']['adv'][each_var]
                elif each_var == 'addr_incre':
                    self.addr_incre = all_vars[file_name]['fbc']['adv'][each_var]

Data manipulation

These variables are not yet in a suitable format to create the inventory; two custom methods take this input and generate new data models that can then be used to create the inventory (see the sketch after this list).

  • self.create_ip: Uses the concept of ‘Node ID’ to create the device names and interface addresses. It generates:
    • A per-device-type list (self.spine, self.border, self.leaf) holding all the device names of that device type
    • A per-interface-type dictionary (self.all_lp, self.all_mgmt, self.mlag_peer, self.mlag_kalive) where the key is the device name and the value the interface IP address. The loopback dictionary’s values are lists of dictionaries as there are multiple loopbacks
  • self.create_intf: Uses the bse_intf seed values and the size of the network to create nested dictionaries (self.all_intf, self.mlag_peer_intf, self.mlag_kalive_intf) of all the fabric interfaces (no IPs) of each device and their descriptions. These are in the format {device_name: {intf_num: descr, intf_num: descr}}
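
A trimmed sketch of what these data models hold for the fabric used in this post (one device shown per structure; these correspond to self.leaf, self.all_mgmt, self.all_lp and self.all_intf):

# Trimmed sketch of the generated data models
leaf = ['DC1-N9K-LEAF01', 'DC1-N9K-LEAF02', 'DC1-N9K-LEAF03', 'DC1-N9K-LEAF04']
all_mgmt = {'DC1-N9K-LEAF01': '10.10.108.21'}
all_lp = {'DC1-N9K-LEAF01': [{'name': 'loopback1', 'ip': '192.168.101.21/32',
                              'descr': 'LP > Routing protocol RID and peerings'}]}
all_intf = {'DC1-N9K-LEAF01': {'Ethernet1/1': 'UPLINK > DC1-N9K-SPINE01 - Eth1/1',
                               'Ethernet1/2': 'UPLINK > DC1-N9K-SPINE02 - Eth1/1'}}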

Add groups, hosts and host_vars

There are 3 pre-built methods that can be called to create hosts, groups and host_vars.

  • self.inventory.add_group(grp_name): Creates the groups and automatically adds them to the all group
  • self.inventory.add_host(hst_name, grp_name): Creates the hosts and adds them to the specified group
  • self.inventory.set_variable(grp_or_hst_name, var_name, var_value): Groups only hold hosts; they don’t have any group_vars. Although vars can be added to a group using this method, they are actually added to the host_vars rather than the group_vars

The custom create_inventory method takes the data models created by self.create_ip and self.create_intf and uses these pre-built methods to create the inventory with its host_vars.

        groups = [self.device_name['spine'].split('-')[-1].lower(), self.device_name['border'].split('-')[-1].lower(),
                  self.device_name['leaf'].split('-')[-1].lower()]
        for gr in groups:
            self.inventory.add_group(gr)
            if gr in self.device_name['spine'].lower():
                for sp in self.spine:
                    self.inventory.add_host(sp, gr)
                    self.inventory.set_variable(gr, 'ansible_network_os', self.device_os['spine_os'])
                    self.inventory.set_variable(gr, 'num_intf', self.num_intf['spine'])
            if gr in self.device_name['border'].lower():
                for br in self.border:
                    self.inventory.add_host(br, gr)
                    self.inventory.set_variable(gr, 'ansible_network_os', self.device_os['border_os'])
                    self.inventory.set_variable(gr, 'num_intf', self.num_intf['border'])
            if gr in self.device_name['leaf'].lower():
                for lf in self.leaf:
                    self.inventory.add_host(lf, gr)
                    self.inventory.set_variable(gr, 'ansible_network_os', self.device_os['leaf_os'])
                    self.inventory.set_variable(gr, 'num_intf', self.num_intf['leaf'])
        for host, mgmt_ip in self.all_mgmt.items():
            self.inventory.set_variable(host, 'ansible_host', mgmt_ip)
        for host, lp in self.all_lp.items():
            self.inventory.set_variable(host, 'intf_lp', lp)
        for host, mlag_peer in self.mlag_peer.items():
            self.inventory.set_variable(host, 'mlag_peer_ip', mlag_peer)
        for host, mlag_kalive in self.mlag_kalive.items():
            self.inventory.set_variable(host, 'mlag_kalive_ip', mlag_kalive)
        for host, int_details in self.all_intf.items():
            self.inventory.set_variable(host, 'intf_fbc', int_details)
        for host, int_details in self.mlag_peer_intf.items():
            self.inventory.set_variable(host, 'intf_mlag_peer', int_details)
        for host, int_details in self.mlag_kalive_intf.items():
            self.inventory.set_variable(host, 'intf_mlag_kalive', int_details)