I wrote a post a while back about how the world of labbing changed during my time in networking; this is a follow-on to see what options I have in terms of ‘labbing as Code’. I want a way to declaratively deploy the initial lab setup (devices, links, addressing, remote access, etc) so that I can concentrate on the features I am actually trying to lab. My idea is to use existing tools rather than writing my own, the following repo has all the code and files I used as part of this blog and part 2.
Table Of Contents
EVE-NG

As this has been my tried-and-tested go-to labbing program for the last 10 years it seemed the obvious place to start. EVE-NG has a somewhat limited API, and built on the back of that is an SDK which I am using:
- evengsdk: A set of classes to manage EVE-NG servers and network topologies from a Python script (see the short sketch after this list)
- CLI: A set of CLI commands to manage EVE-NG servers and network topologies without the need to write Python code
- Topology Builder: A way to build a topology from a YAML declaration file, this is what I will be using
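The topology builder and CLI are what I use below, but as a quick illustration of the first point, driving the SDK classes directly from Python looks something like this minimal sketch (the method names are as I read them in the evengsdk docs, so double-check them against the version you install):

from evengsdk.client import EvengClient

# Connect to the same EVE-NG server used throughout this post
client = EvengClient("10.30.10.105")
client.login(username="admin", password="pa$$w0rd")

# The client's 'api' attribute wraps the EVE-NG REST endpoints as Python methods,
# for example listing the available node templates (device types)
print(client.api.list_node_templates())

client.logout()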
Topology file
The topology file is made up of 2 main sections:
- nodes: A list of the devices to be created; everything except for ethernet is a mandatory attribute (you can set anything that the API supports). Rather than a static config file I used per-device-type Jinja templates to generate the config based on variables passed in from the topology file (a quick way to test a template outside the builder is shown after this list). Below are a few useful commands to gather the template (device type) and image (OS version) names.
- eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' list-node-templates
- eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' show-template <template_name>
- name:          # Device name
  template:      # The device type, can see the different options with the cmd "eve-ng --host 'x.y.z.z' --username me --password 'pass' list-node-templates"
  image:         # Template images, for example to see the vios images (are in the options >> image >> list dictionary) "eve-ng --host 'x.y.z.z' --username me --password 'pass' show-template vios"
  node_type:     # Majority of the time will be qemu, although can also be dynamips (Cisco IOS emulation) or iol (IOS on Linux, also known as IOU)
  left:          # Position from the left as a percentage
  top:           # Position from the top as a percentage
  configuration: # Specify either a static config file or a template and variables (vars)
    file:        # The static configuration file name
    template:    # Jinja template name, by default looks for it in the /templates folder unless specified otherwise at runtime with --template-dir
    vars:        # Dictionary of variables used when the template file is rendered
- links: A list of dictionaries that contain connections between devices (node) and connections between devices and clouds/pnets (network).
network:
  - {"src": "node_a", "src_label": "intf_a", "dst": "inet"}
node:
  - {"src": "node_a", "src_label": "intf_a", "dst": "node_b", "dst_label": "intf_b"}
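As mentioned above, the per-device-type Jinja templates are rendered with the vars from the topology file. If you want to sanity-check a template outside of the topology builder, a few lines of Python will do it; this is only a sketch and the template name and variables are made-up examples rather than anything from my repo:

from jinja2 import Environment, FileSystemLoader

# Load templates from the same ./templates directory the topology builder defaults to
env = Environment(loader=FileSystemLoader("templates"))

# Hypothetical per-device-type template and vars, swap in your own
template = env.get_template("vios_base.j2")
config = template.render(hostname="isp01", mgmt_ip="10.30.20.104/24")
print(config)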
The full eveng_lab_topo.yml topology file that will be deployed.
Deploying lab
First off install evengsdk; I also find chromaterm useful for colourising the output of native API calls (by piping to | ct).
pip install eve-ng chromaterm
A good starting point is to list all the existing labs on EVE-NG and gather their paths as this will be needed in future commands.
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab list
To get a feel for the parameters to be set in the topology file it is useful to look at any existing labs; I found the API was best for this as the SDK dumbed down the info too much. These commands got the nodes and networks in my test1 lab, which I then used to build my topology file.
curl -s -b /tmp/cookie -c /tmp/cookie -X POST -d '{"username":"admin","password":"pa$$w0rd"}' http://10.30.10.105/api/auth/login | ct
curl -s -c /tmp/cookie -b /tmp/cookie -X GET -H 'Content-type: application/json' http://10.30.10.105/api/labs/scratch/test1.unl/networks | python -m json.tool | ct
curl -s -c /tmp/cookie -b /tmp/cookie -X GET -H 'Content-type: application/json' http://10.30.10.105/api/labs/scratch/test1.unl/nodes | python -m json.tool | ct
The eveng_lab_topo.yml file defines the topology to be deployed, can use --template-dir to set the Jinja templates location (defaults to /templates).
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab create-from-topology -t eveng_lab_topo.yml
This created the following lab topology:

I couldn’t find a way to set the startup config with the topology builder so had to do it manually in the GUI (More actions -> Set nodes startup-cfg to exported). From the startup-configs menu you can also check the configuration that the template has generated.

All nodes in a lab can be started and stopped individually (with node-id) or all at once. The full lab path needs specifying for these and any of the other lab verification commands.
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node start -n <node_id> --path /scratch/cisco_topo.unl
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab start --path /scratch/cisco_topo.unl
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node stop -n <node_id> --path /scratch/cisco_topo.unl
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab stop --path /scratch/cisco_topo.unl
Using the SDK CLI commands you can check the state of all nodes, check the lab connections or check the parameters of an individual node.
$ eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node list --path /scratch/cisco_topo.unl
Nodes @ /scratch/cisco_topo.unl
┏━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━┳━━━━━┓
┃ Id ┃ Name ┃ Url ┃ Image ┃ Template ┃ Status ┃ Console ┃ Ram ┃ Cpu ┃
┡━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━╇━━━━━┩
│ 1 │ isp01 │ telnet://10.40.10.120:32897 │ vios-adventerprisek9-m-15.6.2T │ vios │ started 🟠 │ telnet │ 1024 │ 1 │
│ 2 │ isp02 │ telnet://10.40.10.120:32898 │ vios-adventerprisek9-m-15.6.2T │ vios │ started 🟠 │ telnet │ 1024 │ 1 │
│ 3 │ csr02 │ telnet://10.40.10.120:32899 │ csr1000vng-universalk9.17.03.04a │ csr1000vng │ building 🔴 │ telnet │ 4096 │ 1 │
│ 4 │ core01 │ telnet://10.40.10.120:32900 │ vios-adventerprisek9-m-15.6.2T │ vios │ started 🟠 │ telnet │ 1024 │ 1 │
│ 5 │ asa01 │ telnet://10.40.10.120:32901 │ asav-992 │ asav │ started 🟠 │ telnet │ 2048 │ 1 │
│ 6 │ csr01 │ telnet://10.40.10.120:32902 │ csr1000vng-universalk9.17.03.04a │ csr1000vng │ building 🔴 │ telnet │ 4096 │ 1 │
│ 7 │ xnet01 │ telnet://10.40.10.120:32903 │ vios-adventerprisek9-m-15.6.2T │ vios │ started 🟠 │ telnet │ 1024 │ 1 │
│ 8 │ access01 │ telnet://10.40.10.120:32904 │ vios-adventerprisek9-m-15.6.2T │ vios │ started 🟠 │ telnet │ 1024 │ 1 │
│ 9 │ workstation01 │ telnet://10.40.10.120:32905 │ win-7 │ win │ started 🟠 │ telnet │ 4096 │ 1 │
└────┴───────────────┴─────────────────────────────┴──────────────────────────────────┴────────────┴─────────────┴─────────┴──────┴─────┘
$ eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab topology --path /scratch/cisco_topo.unl
$ eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node read -n 1 --path /scratch/cisco_topo.unl
Summary
Although this is a step in the right direction, it doesn’t make labbing any faster or easier due to the amount of time and effort it takes to build the topology file. In all honesty it probably makes it more complex, as defining the links gets exponentially harder as the lab grows. It would probably be useful if you were always deploying a similar lab setup with small tweaks, or if you had a script building the topology file for you, but you wouldn’t go deploying random complex labs using this tool.
The other issue with the SDK is this bug which prevents you from creating links to IOL devices; you can deploy the devices, but when adding links the script breaks with a TypeError exception.
Cisco Modeling Labs (CML)

Due to the cost and a need for non-Cisco devices I never really considered using CML, but with the recent release of the free tier I thought I would give it a try and see how it compares to EVE-NG. As we have come to expect with anything supposedly free from Cisco, there are limitations:
- Can only run five nodes simultaneously, this doesn’t include shutdown nodes, unmanaged switches or external connectors
- It only comes with the following images (can add others manually using .qcow2 file): IOL, ASAv, unmanaged switch, Alpine, Ubuntu, server (Tiny Core Linux) and desktop (Alpine Linux with GUI)
- Can’t disable CML telemetry from sending usage data (apparently anonymized) to Cisco
If you have never used CML before it is worth having a look at this free CiscoU intro to CML course to get started. I am not the biggest fan of Cisco training but it is worth doing just for the 6 CE points. In short, CML provides the same functionality as EVE-NG but feels a bit more polished, as you would expect from an enterprise product. Some of the key points I took from the training:
- Cockpit performs the underlying server operations such as upgrades, starting/stopping services, package installation, console access, system usage and logs, as well as networking features such as firewalls, interfaces and DHCP
- Like most modern Cisco products CML is built on top of a REST-based API which the GUI consumes, meaning anything you can do in the user interface can be done programmatically with API calls (a small example follows after this list)
- Can connect to lab nodes via the browser console or remotely via console (telnet) or GUI (VNC) access using a locally run Breakout Tool (small portable executable file)
- Packet captures can be run on any lab node link and viewed locally in realtime or downloaded as a .pcap file
- External connectors are the equivalent of EVE-NG clouds (pnets), although to get the same functionality you have to combine them with an unmanaged switch as they only have 1 port. There are 2 types of external connectors:
- bridge: Extends lab networks out of CML by associating CML server NICs to a bridge (is done via cockpit)
- virbr: Provides Internet connectivity to a lab device by assigning a DHCP address which is PATed to the CML management IP
- Can manually add Cisco and third-party images, there are 2 elements that make up a node:
- Node definition (device type): Defines the VM config such as CPU, RAM, NICs, base config, etc (are many pre-defined at CML community)
- Image definition (.qcow file): Is the OS version image, can have multiple image definitions all associated to the 1 node definition
- Node configs are stored in a config tab under the characteristics of the node (configs are only generated for shutdown wiped nodes)
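To illustrate the API point above, the raw calls look something like the sketch below; the endpoint paths are as I remember them from the Swagger docs (/api/v0/authenticate and /api/v0/labs), so treat it as a rough guide rather than gospel:

import requests

BASE = "https://10.30.10.107/api/v0"

# Authenticate to get a bearer token (self-signed cert, hence verify=False)
token = requests.post(f"{BASE}/authenticate",
                      json={"username": "admin", "password": "pa$$w0rd"},
                      verify=False).json()

# Anything the GUI does has an equivalent call, e.g. listing the labs on the server
labs = requests.get(f"{BASE}/labs",
                    headers={"Authorization": f"Bearer {token}"},
                    verify=False).json()
print(labs)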
As CML has been built from day 1 on REST-based APIs it has excellent documentation, with Swagger (https://x.x.x.x/api/v0/ui/) also allowing you to try out API calls against the CML server. These 2 Python libraries can be used to manage CML labs programmatically:
- cmlutils: A Python package that allows you to perform many useful operations through the CLI rather than needing to use the GUI or having to write code for every simple task that you need to perform
- virl2-client: Python client library that uses a connection handler and prebuilt methods to build labs, nodes, connections, etc (see the connection sketch below)
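To give a feel for virl2-client, connecting and walking the existing labs looks roughly like the sketch below (the method names are from the library documentation as I understand it, and the client version should match your CML release):

from virl2_client import ClientLibrary

# Connection handler for the CML server (self-signed cert, so skip verification)
client = ClientLibrary("https://10.30.10.107", "admin", "pa$$w0rd", ssl_verify=False)

# Walk the existing labs and print the state of each node
for lab in client.all_labs():
    print(lab.title)
    for node in lab.nodes():
        print(f"  {node.label}: {node.state}")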
cmlutils has a function similar to the evengsdk topology builder for importing a lab from a YAML topology; if the eve2cml package is also installed it will convert and import an EVE-NG .unl file (there is also an online tool to convert EVE-NG labs to CML).
Topology file
Rather than creating a lab from scratch cmlutils is performing a lab import which means the file is a lot more verbose when compared to evengsdk. Again this file is made up of 2 main sections:
- nodes: A list of nodes, each requiring a name and unique ID, with the node interfaces defined in a similar manner. The startup config must also be included in the file, it can’t be templated or sourced externally.
- id:               # Unique identifier of the node (nX), is what is used to reference it in the links
  label:            # Friendly name of the node as seen in the GUI
  node_definition:  # The device type
  image_definition: # Optionally specify an image if the node_definition has more than one
  x:                # Location from the left, works on 0 being centre
  y:                # Location down, works on 0 being centre
  interfaces:
    - id:           # Unique identifier of the node interface (iX), is what is used to reference it in the links section
      label:        # Name of the interface
      slot:         # Unique number of the interface, not sure how it relates but loopback is 0 and then other interfaces start from 1
      type:         # Can be physical or loopback
  configuration:
    - name:         # Name of the configuration file as seen in the CML GUI
      content:      # Startup config (defined 'inline') to be applied to the node
- links: A list of dictionaries that contain connections between nodes and connections between nodes and external connectors.
- id:    # Unique identifier of the link (lX)
  n1:    # Node ID for the A end of the link
  n2:    # Node ID for the B end of the link
  i1:    # Interface ID for the A end of the link
  i2:    # Interface ID for the B end of the link
  label: # Friendly name for the link, not sure where it is used
The full cmlutil_lab_topo.yaml topology file that will be deployed, it has been stripped down to the bare minimum mandatory attributes.
Deploying lab
Install the cmlutils package and set the CML server and credentials in environment variables (they can alternatively be set in a .virlrc file).
pip install cmlutils
export VIRL_HOST=10.30.10.107
export VIRL_USERNAME=admin
export VIRL_PASSWORD='pa$$w0rd'
export CML_VERIFY_CERT=False
Can check the credentials are correct by getting a list of all labs on the CML server.
$ cml ls
Labs on Server
╒══════════════════════════════════════╤═══════════════════╤═══════════════╤═════════╤══════════╤═════════╤═════════╤══════════════╕
│ ID │ Title │ Description │ Owner │ Status │ Nodes │ Links │ Interfaces │
╞══════════════════════════════════════╪═══════════════════╪═══════════════╪═════════╪══════════╪═════════╪═════════╪══════════════╡
│ 5fc145a5-b4f4-4100-9424-d1e75880a582 │ edited config │ │ ste │ STOPPED │ 3 │ 2 │ 7 │
├──────────────────────────────────────┼───────────────────┼───────────────┼─────────┼──────────┼─────────┼─────────┼──────────────┤
│ 6f1cadb8-98ae-4494-9799-6664c1e317c1 │ CML UTILs Base │ │ ste │ STOPPED │ 8 │ 8 │ 33 │
├──────────────────────────────────────┼───────────────────┼───────────────┼─────────┼──────────┼─────────┼─────────┼──────────────┤
│ edc56ab4-317e-4c00-b2f1-f906c566ad0a │ CML UTIL Topology │ │ ste │ STARTED │ 11 │ 15 │ 43 │
╘══════════════════════════════════════╧═══════════════════╧═══════════════╧═════════╧══════════╧═════════╧═════════╧══════════════╛
The cmlutil_lab_topo.yaml file defines the topology to be deployed; omit --no-start if you want all the nodes brought up immediately once the lab has been imported.
$ cml up --no-start -f cmlutil_lab_topo.yaml

For cmlutils to be able to perform any further actions on the lab you must tell it which lab to use; this can be done with either the lab name or ID (use the ID if you have duplicate lab names).
$ cml use [--id | -n] <lab_ID_or_name>
$ cml use -n "CMLUTIL lab topology"
$ cml id
CMLUTIL lab topology (ID: 7325c1d4-f235-422e-a28d-d3652e9a776f)
With the ID set you can now perform actions against the lab, such as checking the state of all the nodes and stopping or starting them:
$ cml nodes
Here is a list of nodes in this lab
╒══════════════════════════════════════╤══════════╤════════════════════╤════════════════╤═════════════════╤══════════╤══════════════════╕
│ ID │ Label │ Type │ Compute Node │ State │ Wiped? │ L3 Address(es) │
╞══════════════════════════════════════╪══════════╪════════════════════╪════════════════╪═════════════════╪══════════╪══════════════════╡
│ c08d3124-119a-45b9-8059-3e8d4b699893 │ ISP │ csr1000v │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ e54d2b92-c36e-445e-823f-6efae39680c8 │ R1 │ iol-xe │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ f51a7edb-d18d-496a-ab4e-3c15facfc82d │ R2 │ iol-xe │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 422cd679-79c6-4484-9db1-0c504860e28f │ SWI-XNET │ ioll2-xe │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 9d0c5742-4339-4a34-8870-a0a969236a94 │ XNET-ASA │ asav │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ c76e93a2-846e-42c5-8a39-ee9399570ea4 │ CORE_SWI │ unmanaged_switch │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ c7d1fad6-96ad-4d4d-842e-2c96b434d300 │ WS01 │ desktop │ Unknown │ DEFINED_ON_CORE │ True │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ bcad5197-1a11-46cf-80c6-ce0b32c9059f │ SVR01 │ server │ Unknown │ DEFINED_ON_CORE │ True │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 17fcba09-4ab2-4a01-a7c7-f906d2a461b5 │ INET │ external_connector │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 3ce5fb79-3d2b-48a9-9dd5-15ebf1c8cf8f │ MGMT │ external_connector │ mob-ubt-cml01 │ BOOTED │ False │ │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ db3b44a5-a899-4d9f-ad33-cba143eb9c9f │ MGMT-SWI │ unmanaged_switch │ mob-ubt-cml01 │ BOOTED │ False │ │
╘══════════════════════════════════════╧══════════╧════════════════════╧════════════════╧═════════════════╧══════════╧══════════════════╛
$ cml down
$ cml up
A handy option within cmlutils is the ability to console into any of the lab devices, although I guess you would more likely use the Breakout Tool for this.
$ cml console R1
ste@10.30.10.107's password:
Connecting to console for R1
Connected to CML terminalserver.
Escape character is '^]'.
R1>
Summary
Deploying a lab with cmlutils is even more complex than doing so with evengsdk. The topology file is too verbose due to all the extra node details that are required, and the links are even more confusing as you have to identify each node interface by ID rather than by friendly name.
Custom tool for EVE-NG & CML

For me to be able to declaratively deploy labs with EVE-NG or CML in a non-complex way the only real option I have is to use the SDKs (evengsdk and virl2-client) and build my own tool. Learning from the past 2 experiments I came up with the following guidelines for this tool:
- The topology file should be kept as clean and simplistic as possible, it has to be quicker and more convenient than building labs in the GUI:
- Should define only the absolute bare minimum information that is needed to create a node
- You can’t easily draw the topology from code, therefore node locations are randomised to prevent overlaps and tidied up in the GUI once deployed
- Define which nodes should be connected (node_a needs to connect to node_b) rather than how they are connected (what interfaces):
- The tool should automatically assign the next available interface for each link (a rough sketch of this idea follows the list). It would be nice to also automatically assign interface IPs, but that is too complex for now as you have to take into account the different protocols, connection types (PtP or multi-access) and prefix lengths
- Management interfaces will be the only pre-defined interfaces as their IP address will need to be assigned as part of the initial build
- Startup configurations are to be generated from a jinja template based on variables defined in the topology file:
- Variables should be kept to a bare minimum, only really need a bootstrap configuration to allow for management access
- Don’t repeat common variables, these variables shared by all nodes should only be defined once
- Jinja templates will be per device-type and all use the same topology file variables, no snowflakes
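To show what I mean by the interface auto-assignment guideline, the rough idea is nothing more than tracking which interfaces a node has already used and handing out the next free one per link. This is a simplified sketch of the concept rather than the actual lab_builder code:

# Hypothetical illustration of 'next available interface' allocation, not the real lab_builder logic
used_interfaces = {}   # node name -> set of interfaces already allocated to links

def next_free_interface(node, intf_names):
    """Return the first interface of 'node' not yet used by a previous link."""
    used = used_interfaces.setdefault(node, set())
    for intf in intf_names:
        if intf not in used:
            used.add(intf)
            return intf
    raise ValueError(f"{node} has no free interfaces left")

# A vIOS node with 4 ports would hand out Gi0/0 then Gi0/1 for its first two links
ports = [f"Gi0/{i}" for i in range(4)]
print(next_free_interface("isp01", ports))   # Gi0/0
print(next_free_interface("isp01", ports))   # Gi0/1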
This lab_builder tool can be used to build an EVE-NG or CML lab from a very similar topology file, the only real differences being related to some of the node names. For example, a CSR in EVE-NG is called csr1000vng with GiX interfaces, whereas in CML it is csr1000v with GigabitEthernetX interfaces.
Topology file
The topology file is split up into 4 parts, in the example any differences between EVE-NG and CML objects are shown in the format EVE/CML.
- lab: The management details are lab-wide, meaning they will be used by all nodes in the lab.
name:          # The lab name, in EVE-NG this must be unique but in CML it doesn't matter (uses an arbitrary ID for uniqueness)
description:   # Lab description
addr:
  mgmt_prefix: # Range all management addresses come from
  mgmt_gw:     # Management range default gateway, used by all nodes
- nodes: Dictionary of nodes to be created, with the key being the node name and the value the node details. Type and config are mandatory attributes; other optional settings include the image, number of ethernet ports and eve_type (only needed for non-qemu). mgmt defines the interface connected to the management bridge and the management address (4th octet added to the mgmt_prefix).
nodes:
  NODE_NAME:
    type:       # Node type, in EVE-NG known as the "template" and in CML the "node_definition"
    image:      # (optional) Software version (defaults to newest), in EVE-NG known as "image", in CML "image_definition"
    eve_type:   # (optional) Only used by EVE-NG, defaults to qemu so only needs defining when using IOL
    ethernet:   # (optional) Number of interfaces, if undefined uses the node type default (normally 4)
    config:
      template: # Per device-type Jinja template used to create the config
      vars:
        hostname: # Variable used in the template to configure the node's hostname
    mgmt:
      MGMT_INTERFACE_NAME: # 4th octet of the mgmt IP
    intf:
      INTERFACE_NAME:      # x.x.x.x/yy
- networks: Network objects provide local bridging (bridge/unmanaged_switch) as well as lab breakout (pnet/external_connector). The links for these network objects must be defined here under the object, they cannot be defined under the device links (links).
- For CML the type must start with ec (an external_connector) and then be followed by whatever bridge or virbr numbers you have setup
- As only 1 device can be connected to a CML external_connector, setting the number of ports (ethernet) or specifying more than 1 connection (links) will automatically create an additional unmanaged_switch (ec_name_SWI) and connect everything to that
networks:
  NETWORK_NAME:
    management: # When defined identifies that this bridge is used for mgmt, all pre-defined node mgmt interfaces connect to it
    type:       # The network object type, EVE-NG can be "bridge" or "pnetX", CML can be "unmanaged_switch", "ec_bridgeX" or "ec_virbrX"
    links:      # List of nodes that connect to this bridge. Uses the next available local and remote interface (for mgmt uses the pre-defined remote interfaces)
    ethernet:   # Required for CML external_connectors with more than 1 connection, automatically creates an unmanaged_switch (xx_SWI) to connect all devices
- links: A dictionary of connections from node_a (key) to a list of nodes_b (value). Rather than having the verboseness of defining each connection's interfaces, you just define what is connected and the script will automatically assign the next available interface.
links:
  NODE_A: # Dict key is node_a and dict value is a list of all the devices it connects to
If you are going to use IOL nodes in EVE-NG you need to be aware of this bug that breaks link assignment via the API. I submitted a pull request to fix this but am not sure if the project is still maintained; as a workaround you will have to install this branch manually.
The full eve_cisco_topo.yml topology file that will be used to deploy the lab on EVE-NG. This same sample topology for CML can be found here.
Deploying lab
Can set the EVE-NG/CML server details at runtime, with environment variables or within the lab_builder.py file (in that order of preference).
export LAB_SERVER=x.x.x.x
export LAB_USERNAME=admin
export LAB_PASSWORD='pa$$w0rd'
export LAB_TEMPLATES=templates
As the tool is built on top of click you have the normal runtime options (filename, templates, host, username, password), arguments (EVE, CML) and commands (build, config, down, ls-nodes, up); a stripped-down skeleton of this wiring is shown after the help output.
$ python lab_builder.py --help
Usage: lab_builder.py [OPTIONS] [PLATFORM] COMMAND [ARGS]...
Build a EVE or CML lab in a semi-declarative fashion based off a YAML topology file
Options:
-f, --filename FILENAME file.yaml or path/file.yaml topology defining the lab, defaults to script var (mylab.yml)
-t, --templates PATH Template directory, defaults to env-var -> script var (templates)
-h, --host TEXT EVE/CML server, defaults to env-var -> script var
-u, --username TEXT EVE/CML username, defaults to env-var -> script var
-p, --password TEXT EVE/CML password, defaults to env-var -> script var
--help Show this message and exit.
Commands:
build Builds the lab based on the loaded topology file
config Regenerate and reapply the startup config to all devices
down Take DOWN all devices in the lab
ls-nodes Table displaying the details and status of all devices in lab
up Bring UP all devices in the lab
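For anyone wondering how the options, platform argument and commands hang together, a heavily stripped-down click skeleton of this style of CLI looks roughly like the following; it is only a sketch of the wiring, not the actual lab_builder.py code:

import click

@click.group()
@click.option("-f", "--filename", default="mylab.yml", help="YAML topology defining the lab")
@click.option("-h", "--host", envvar="LAB_SERVER", help="EVE/CML server")
@click.option("-u", "--username", envvar="LAB_USERNAME", default="admin")
@click.option("-p", "--password", envvar="LAB_PASSWORD")
@click.argument("platform", type=click.Choice(["EVE", "CML"]))
@click.pass_context
def cli(ctx, filename, host, username, password, platform):
    """Build an EVE or CML lab in a semi-declarative fashion from a YAML topology file."""
    # Stash the shared settings so each command can get at them
    ctx.obj = {"filename": filename, "host": host, "username": username,
               "password": password, "platform": platform}

@cli.command()
@click.pass_context
def build(ctx):
    """Builds the lab based on the loaded topology file."""
    click.echo(f"Building {ctx.obj['filename']} on {ctx.obj['platform']} server {ctx.obj['host']}")

if __name__ == "__main__":
    cli()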
The first run of the script creates the lab, adding all the nodes and the defined links.
$ python lab_builder.py -f eve_cisco_topo.yml EVE build
or
$ python lab_builder.py -f cml_cisco_topo.yml CML build
This will produce a randomised topology layout which you can then adjust in the GUI to be more human friendly.

The ‘build’ command also creates a new topology file (xxx_v1.yml) with an extra per-node intf_links dictionary that describes what each interface of the node connects to. It is worth noting this file also has a lab_id dictionary; this is what is used by all other runtime commands (config, up, down, ls-nodes) to know which lab to run the actions on.
nodes:
ISP:
type: csr1000vng
ethernet: 8
config:
template: iosxel3_base.j2
vars:
hostname: ISP
mgmt:
Gi8: 100
intf_links:
Gi1: L1 >> INET
Gi2: L5 >> R1:e0/1
Gi3: L6 >> R2:e0/1
R1:
type: iol
eve_type: iol
ethernet: 2
config:
template: iosxel3_base.j2
vars:
hostname: R1
mgmt:
e1/3: 101
intf_links:
e0/0: L2 >> XNET-SWI
e0/1: L5 >> ISP:Gi2
.......
lab_id: /eve_cisco_topo_initial.unl
To assign the interface IP addresses for a router it should just be a case of changing the dictionary name to intf and adding the appropriate address to either end of the link; for switches and firewalls it is a bit more complicated as interfaces don’t just have an IP address.
nodes:
ISP:
type: csr1000vng
ethernet: 8
config:
template: iosxel3_base.j2
vars:
hostname: ISP
mgmt:
Gi8: 100
intf:
Gi1: dhcp
Gi2: 10.1.40.1/30
Gi3: 10.1.40.5/30
R1:
type: iol
eve_type: iol
ethernet: 2
config:
template: iosxel3_base.j2
vars:
hostname: R1
mgmt:
e1/3: 101
intf:
e0/0: 10.1.40.9/29
e0/1: 10.1.40.2/30
Once the file has been changed (example here) you can use it to regenerate the startup config and apply it to all devices using the ‘config’ runtime command (it will first wipe all device configs).
$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE config
There are a few other commands to bring up or shut down all nodes, as well as produce a table showing the status of all the nodes.
$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE up
$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE ls-nodes
$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE down
Summary
I probably wasted far more time than I should have on this trying to make it a multi-lab-provider setup. In theory it seems a good way to set up simple labs, it is certainly a lot easier and less complex than using the EVE-NG or CML native tools. Only time will tell if this tool is fit for purpose for lab-as-code, in terms of whether I actually continue using it to build labs or slip back into the GUI. You could quite easily add more functionality or make it more polished, and it could also do with proper error handling, but I won’t be doing much more to it until I am certain it is something I will get a lot of use out of.
One thing this does go to show is that you can very easily use these SDKs to build a lab as part of a CI/CD pipeline to test changes post deployment. It wouldn’t be too difficult to pull the existing running config from a SoT and then commit new changes to that before reporting back on the outcome.
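As a very rough illustration of that last point, a pipeline stage could just shell out to the same commands shown earlier, along the lines of the sketch below (the topology file names and the validation step are placeholders, not a working pipeline):

import subprocess

def run(cmd):
    """Run a lab_builder command and fail the pipeline stage if it errors."""
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# Build the lab, bring it up, validate, then tear it down again
run("python lab_builder.py -f eve_cisco_topo.yml EVE build")
run("python lab_builder.py -f eve_cisco_topo_v1.yml EVE up")
# ... run whatever post-deployment validation you use (pyATS, Nornir, pytest, etc) ...
run("python lab_builder.py -f eve_cisco_topo_v1.yml EVE down")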