Lab as Code - Part 1

using eve-ng and cml

28 February 2025   28 min read

I wrote a post a while back about how the world of labbing changed during my time in networking; this is a follow-on to see what options I have in terms of 'labbing as Code'. I want a way to declaratively deploy the initial lab setup (devices, links, addressing, remote access, etc.) so that I can concentrate on the features I am actually trying to lab. My idea is to use existing tools rather than writing my own; the following repo has all the code and files I used as part of this blog and Part 2.


EVE-NG


As this has been my tried-and-tested, go-to labbing program for the last 10 years it seemed the obvious place to start. There is a somewhat limited API for EVE-NG and, built on the back of that, an SDK which I am using:

  • evengsdk: A set of classes to manage EVE-NG servers and network topologies from a Python script
  • CLI: A set of CLI commands to manage EVE-NG servers and network topologies without the need to write Python code
  • Topology Builder: A way to build a topology from a YAML declaration file, this is what I will be using
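
As a quick taster of the SDK itself, here is a minimal sketch that logs in and lists the available node templates (the client class and login call come from the evengsdk README; treat the exact api method names as assumptions on my part):

from evengsdk.client import EvengClient

# create a client for the EVE-NG server and authenticate
client = EvengClient("10.30.10.105")
client.login(username="admin", password="pa$$w0rd")

# api methods mirror the CLI commands, e.g. 'eve-ng list-node-templates'
print(client.api.list_node_templates())

client.logout()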

Topology file

The topology file is made up of 2 main sections:

  • nodes: A list of the devices to be created; everything except for ethernet is a mandatory attribute (you can also set anything else that the API supports). Rather than a static config file I used per-device-type Jinja templates to generate the config based on variables passed in from the topology file (see the rendering sketch after this list). Below are a few useful commands to gather the template (device type) and image (OS version) names.

    • eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' list-node-templates
    • eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' show-template <template_name>

    - name: # Device name
        template: # The device type, can see the different options with the cmd "eve-ng --host 'x.y.z.z' --username me --password 'pass' list-node-templates"
        image: # Template images, for example to see the vios images (are in options >> image >> list dictionary) "eve-ng --host 'x.y.z.z' --username me --password 'pass' show-template vios"
        node_type: # Majority of the time this will be qemu, although it can also be dynamips (Cisco IOS emulation) or iol (IOS on Linux, also known as IOU)
        left: # Margin from the left (percentage)
        top: # Margin from the top (percentage)
        configuration: # Specify either a static config file or a template and variables (vars)
          file: # The static configuration file name
          template: # Jinja template name, by default looks for it in /templates folder unless specified otherwise at runtime with --template-dir
          vars: # Dictionary of variables used when the template file is rendered
    
  • links: A list of dictionaries that contain connections between devices (node) and connections between devices and clouds/pnets (network).

    network:
        - {"src": "node_a", "src_label": "intf_a", "dst": "inet"}
    node:
        - {"src": "node_a", "src_label": "intf_a", "dst": "node_b", "dst_label": "intf_b"}
    
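To show what the template/vars pairing does, here is a sketch of the equivalent rendering step using the jinja2 library directly (the template name and vars are lifted from the isp01 node in the full topology file below):

from jinja2 import Environment, FileSystemLoader

# per-device-type templates live in ./templates by default (--template-dir)
env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("iosxel3_base.j2")

# the topology file 'vars' dictionary is passed straight into the render
startup_config = template.render(
    hostname="isp01",
    mgmt_addr="10.30.20.11 255.255.255.0",
    mgmt_gw="10.30.20.2",
    mgmt_intf="GigabitEthernet0/7",
    intf={
        "GigabitEthernet0/0": "dhcp",
        "GigabitEthernet0/1": "10.1.40.1 255.255.255.252",
    },
)
print(startup_config)
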

The full eveng_lab_topo.yml topology file that will be deployed.

---
name: cisco_topo
description: Cisco remote site lab
path: "/scratch"
nodes:
  # ISP routers
  - name: isp01
    template: vios
    image: vios-adventerprisek9-m-15.6.2T
    node_type: qemu
    ethernet: 8
    left: 220
    top: 250
    configuration:
      template: iosxel3_base.j2
      vars:
        hostname: isp01
        mgmt_addr: 10.30.20.11 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: GigabitEthernet0/7
        intf: 
          GigabitEthernet0/0: dhcp
          GigabitEthernet0/1: 10.1.40.1 255.255.255.252
  - name: isp02
    template: vios
    image: vios-adventerprisek9-m-15.6.2T
    node_type: qemu
    ethernet: 8
    left: 220
    top: 500
    configuration:
      template: iosxel3_base.j2
      vars:
        hostname: isp02
        mgmt_addr: 10.30.20.12 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: GigabitEthernet0/7
        intf: 
          GigabitEthernet0/0: dhcp
          GigabitEthernet0/1: 10.1.40.5 255.255.255.252
  # Customer edge routers
  - name: csr01
    template: csr1000vng
    image: csr1000vng-universalk9.17.03.04a
    ethernet: 8
    left: 440
    top: 250
    configuration:
      template: iosxel3_base.j2
      vars:
        hostname: csr01
        mgmt_addr: 10.30.20.13 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: GigabitEthernet8
        intf: 
          GigabitEthernet1: 10.1.40.2 255.255.255.252
          GigabitEthernet2: 10.1.40.9 255.255.255.252
          GigabitEthernet3: 10.1.40.17 255.255.255.248
  - name: csr02
    template: csr1000vng
    image: csr1000vng-universalk9.17.03.04a
    ethernet: 8
    left: 440
    top: 500
    configuration:
      template: iosxel3_base.j2
      vars:
        hostname: csr02
        mgmt_addr: 10.30.20.14 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: GigabitEthernet8
        intf: 
          GigabitEthernet1: 10.1.40.6 255.255.255.252
          GigabitEthernet2: 10.1.40.10 255.255.255.252
          GigabitEthernet3: 10.1.40.18 255.255.255.248
  # Edge switch and firewall
  - name: xnet01
    template: viosl2
    image: viosl2-adventerprisek9-m-15.2-2017032
    node_type: qemu
    ethernet: 8
    left: 700
    top: 375
    configuration:
      template: iosxel2_base.j2
      vars:
        hostname: xnet01
        mgmt_addr: 10.30.20.15 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: GigabitEthernet1/3
        vlans:
          99: inet
        access:
          GigabitEthernet0/0: 99
          GigabitEthernet0/1: 99
          GigabitEthernet0/2: 99
  - name: asa01
    template: asav
    image: asav-992
    ethernet: 8
    left: 850
    top: 375
    configuration:
      template: asa_base.j2
      vars:
        hostname: asa01
        mgmt_addr: 10.30.20.16 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: Management0/0
        intf:
          GigabitEthernet0/0: 10.1.40.19 255.255.255.248
          GigabitEthernet0/1: 10.1.40.13 255.255.255.252 
        zones:
          outside: 
            sec_level: 100
            intf: [GigabitEthernet0/0]
          inside: 
            sec_level: 0
            intf: [GigabitEthernet0/1]
  # Core and access switches
  - name: core01
    template: viosl2
    image: viosl2-adventerprisek9-m-15.2-2017032
    node_type: qemu
    ethernet: 8
    left: 1000
    top: 375
    configuration:
      template: iosxel2_base.j2
      vars:
        hostname: core01
        mgmt_addr: 10.30.20.17 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: GigabitEthernet1/3
        vlans:
          10: data
          20: voice
        intf:
          GigabitEthernet0/0: 10.1.40.14 255.255.255.252 
          vlan10: 10.2.10.1 255.255.255.0
          vlan20: 10.2.20.1 255.255.255.0
        trunk:
          GigabitEthernet0/1: 10,20
  - name: access01
    template: viosl2
    image: viosl2-adventerprisek9-m-15.2-2017032
    node_type: qemu
    ethernet: 8
    left: 1100
    top: 375
    configuration:
      template: iosxel2_base.j2
      vars:
        hostname: access01
        mgmt_addr: 10.30.20.18 255.255.255.0
        mgmt_gw: 10.30.20.2
        mgmt_intf: GigabitEthernet1/3
        vlans:
          10: data
          20: voice
        trunk:
          GigabitEthernet0/0: all
          GigabitEthernet0/2: 10,20
        access:
          GigabitEthernet0/1: 10
  - name: workstation01
    template: win
    image: win-7
    node_type: qemu
    left: 1100
    top: 625
# clouds/pnets
networks:
  - name: mgmt
    network_type: pnet1
    visibility: 1
    left: 750
    top: 100
  - name: inet
    network_type: pnet9
    visibility: 1
    left: 40
    top: 375
links:
  # Connections to networks (pnets)
  network:
    - {"src": "isp01", "src_label": "Gi0/0", "dst": "inet"}
    - {"src": "isp02", "src_label": "Gi0/0", "dst": "inet"}
    - {"src": "isp01", "src_label": "Gi0/7", "dst": "mgmt"}
    - {"src": "isp02", "src_label": "Gi0/7", "dst": "mgmt"}
    - {"src": "csr01", "src_label": "Gi8", "dst": "mgmt"}
    - {"src": "csr02", "src_label": "Gi8", "dst": "mgmt"}
    - {"src": "xnet01", "src_label": "Gi1/3", "dst": "mgmt"}
    - {"src": "asa01", "src_label": "Mgmt0/0", "dst": "mgmt"}
    - {"src": "core01", "src_label": "Gi1/3", "dst": "mgmt"}
    - {"src": "access01", "src_label": "Gi1/3", "dst": "mgmt"}
  # Connections between devices
  node:
    - {"src": "isp01", "src_label": "Gi0/1", "dst": "csr01", "dst_label": "Gi1"}
    - {"src": "isp02", "src_label": "Gi0/1", "dst": "csr02", "dst_label": "Gi1"}
    - {"src": "csr01", "src_label": "Gi2", "dst": "csr02", "dst_label": "Gi2"}
    - {"src": "csr01", "src_label": "Gi3", "dst": "xnet01", "dst_label": "Gi0/0"}
    - {"src": "csr02", "src_label": "Gi3", "dst": "xnet01", "dst_label": "Gi0/1"}
    - {"src": "xnet01", "src_label": "Gi0/3", "dst": "asa01", "dst_label": "Gi0/0"}
    - {"src": "asa01", "src_label": "Gi0/1", "dst": "core01", "dst_label": "Gi0/0"}
    - {"src": "core01", "src_label": "Gi0/1", "dst": "access01", "dst_label": "Gi0/0"}
    - {"src": "access01", "src_label": "Gi0/1", "dst": "workstation01", "dst_label": "e0"}

Deploying lab

First off, install evengsdk. I also find chromaterm useful for colourising the output of native API calls (by appending | ct).

pip install eve-ng chromaterm

A good starting point is to list all the existing labs on EVE-NG and gather their paths, as these will be needed in future commands.

eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab list

To get a feel for the parameters to be set in the topology file it is useful to look at any existing labs; I found the API was best for this as the SDK dumbed down the info too much. These commands got the nodes and networks in my test1 lab, which I then used to build my topology file.

curl -s -b /tmp/cookie -c /tmp/cookie -X POST -d '{"username":"admin","password":"pa$$w0rd"}' http://10.30.10.105/api/auth/login | ct
curl -s -c /tmp/cookie -b /tmp/cookie -X GET -H 'Content-type: application/json' http://10.30.10.105/api/labs/scratch/test1.unl/networks | python -m json.tool | ct
curl -s -c /tmp/cookie -b /tmp/cookie -X GET -H 'Content-type: application/json' http://10.30.10.105/api/labs/scratch/test1.unl/nodes | python -m json.tool | ct
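
If you prefer Python to curl, the same lookups can be done with the requests library (a sketch hitting the same endpoints as the curl commands above):

import requests

EVE = "http://10.30.10.105"
session = requests.Session()

# login stores the auth cookie on the session for the follow-up calls
session.post(f"{EVE}/api/auth/login", json={"username": "admin", "password": "pa$$w0rd"})

# dump the networks and nodes of an existing lab to see what attributes are set
print(session.get(f"{EVE}/api/labs/scratch/test1.unl/networks").json())
print(session.get(f"{EVE}/api/labs/scratch/test1.unl/nodes").json())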

The eveng_lab_topo.yml file defines the topology to be deployed; you can use --template-dir to set the Jinja templates location (defaults to /templates).

eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab create-from-topology -t eveng_lab_topo.yml

This created the following lab topology (screenshot of the deployed EVE-NG topology):

I couldn't find a way to set the startup config with the topology builder, so had to do it manually in the GUI (More actions -> Set nodes startup-cfg to exported). From the startup-configs menu you can also check the configuration that the template has generated.

All nodes in a lab can be started and stopped individually (with node-id) or all at once. The full lab path needs specifying for these and any of the other lab verification commands.

eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node start -n <node_id>  --path /scratch/cisco_topo.unl
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab start --path /scratch/cisco_topo.unl
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node stop -n <node_id>  --path /scratch/cisco_topo.unl
eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab stop --path /scratch/cisco_topo.unl

Using the SDK CLI commands you can check the state of all nodes, check the lab connections, or check the parameters of an individual node.

$ eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node list  --path /scratch/cisco_topo.unl
                                                     Nodes @ /scratch/cisco_topo.unl
┏━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━┳━━━━━┓
┃ Id ┃ Name          ┃ Url                         ┃ Image                            ┃ Template   ┃ Status      ┃ Console ┃ Ram  ┃ Cpu ┃
┡━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━╇━━━━━┩
│  1 │ isp01         │ telnet://10.40.10.120:32897 │ vios-adventerprisek9-m-15.6.2T   │ vios       │ started 🟠  │ telnet  │ 1024 │ 1   │
│  2 │ isp02         │ telnet://10.40.10.120:32898 │ vios-adventerprisek9-m-15.6.2T   │ vios       │ started 🟠  │ telnet  │ 1024 │ 1   │
│  3 │ csr02         │ telnet://10.40.10.120:32899 │ csr1000vng-universalk9.17.03.04a │ csr1000vng │ building 🔴 │ telnet  │ 4096 │ 1   │
│  4 │ core01        │ telnet://10.40.10.120:32900 │ vios-adventerprisek9-m-15.6.2T   │ vios       │ started 🟠  │ telnet  │ 1024 │ 1   │
│  5 │ asa01         │ telnet://10.40.10.120:32901 │ asav-992                         │ asav       │ started 🟠  │ telnet  │ 2048 │ 1   │
│  6 │ csr01         │ telnet://10.40.10.120:32902 │ csr1000vng-universalk9.17.03.04a │ csr1000vng │ building 🔴 │ telnet  │ 4096 │ 1   │
│  7 │ xnet01        │ telnet://10.40.10.120:32903 │ vios-adventerprisek9-m-15.6.2T   │ vios       │ started 🟠  │ telnet  │ 1024 │ 1   │
│  8 │ access01      │ telnet://10.40.10.120:32904 │ vios-adventerprisek9-m-15.6.2T   │ vios       │ started 🟠  │ telnet  │ 1024 │ 1   │
│  9 │ workstation01 │ telnet://10.40.10.120:32905 │ win-7                            │ win        │ started 🟠  │ telnet  │ 4096 │ 1   │
└────┴───────────────┴─────────────────────────────┴──────────────────────────────────┴────────────┴─────────────┴─────────┴──────┴─────┘
$ eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' lab topology  --path /scratch/cisco_topo.unl
$ eve-ng --host "10.30.10.105" --username admin --password 'pa$$w0rd' node read -n 1  --path /scratch/cisco_topo.unl

Summary

Although this is a step in the right direction, it doesn't make labbing any faster or easier due to the amount of time and effort it takes to build the topology file. In all honesty it probably makes it more complex, as defining the links gets exponentially harder as the lab grows. It would be useful if you were always deploying a similar lab setup with small tweaks, or had a script building the topology file for you, but you wouldn't go deploying random complex labs using this tool.

The other issue with the SDK is this bug which prevents you from creating links to IOL devices: you can deploy the devices, but when adding links the script breaks with a TypeError exception.

Cisco Modeling Labs (CML)


Due to costs and a need for non-Cisco devices I never really considered using CML, but with the recent release of the free tier I thought I would give it a try and see how it compares to EVE-NG. As we have come to expect with anything supposedly free from Cisco, there are limitations:

  • Can only run five nodes simultaneously; this doesn't include shutdown nodes, unmanaged switches or external connectors
  • It only comes with the following images (you can add others manually using a .qcow2 file): IOL, ASAv, unmanaged switch, Alpine, Ubuntu, server (Tiny Core Linux) and desktop (Alpine Linux with GUI)
  • Can’t disable CML telemetry from sending usage data (apparently anonymized) to Cisco

If you have never used CML before it is worth having a look at the free CiscoU intro to CML course to get started. I am not the biggest fan of Cisco training, but it is worth doing just for the 6 CE points. In short, CML provides the same functionality as EVE-NG but feels a bit more polished, as you would expect from an enterprise product. Some of the key points I took from the training:

  • cockpit performs the underlying server operations such as upgrades, starting/stopping services, package installation, console access, system usage and logs, as well as networking features such as firewalls, interfaces and DHCP
  • Like most modern Cisco products, CML is built on top of a REST-based API which the GUI consumes; this means that anything you can do in the user interface can be done programmatically with API calls
  • Can connect to lab nodes via the browser console or remotely via console (telnet) or GUI (VNC) access using a locally run Breakout Tool (small portable executable file)
  • Packet captures can be run on any lab node link and viewed locally in realtime or downloaded as a .pcap file
  • External connectors are the equivalent of EVE-NG clouds (pnets), although to get the same functionality you have to combine them with an unmanaged switch as they only have 1 port. There are 2 types of external connectors:
    • bridge: Extends lab networks out of CML by associating CML server NICs to a bridge (is done via cockpit)
    • virbr: Provides Internet connectivity to a lab device by assigning a DHCP address which is PATed to the CML management IP
  • Can manually add Cisco and third-party images; there are 2 elements that make up a node:
    • Node definition (device type): Defines the VM config such as CPU, RAM, NICs, base config, etc (many pre-defined ones are available from the CML community)
    • Image definition (.qcow file): The OS version image; you can have multiple image definitions all associated with the 1 node definition
  • Node configs are stored in a config tab under the characteristics of the node (configs are only generated for shutdown, wiped nodes)

As CML has been built from day 1 on REST-based APIs it has excellent documentation, with Swagger (https://x.x.x.x/api/v0/ui/) also allowing you to try out API calls against the CML server. These 2 Python libraries can be used to manage CML labs programmatically:

  • cmlutils: A Python package that allows you to perform many useful operations through the CLI rather than needing to use the GUI or having to write code for every simple task that you need to perform
  • virl2-client: A Python client library that uses a connection handler and prebuilt methods to build labs, nodes, connections, etc
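
To give a flavour of virl2-client, here is a minimal sketch that builds and starts a two-node lab (the method names are from the virl2-client documentation, although I haven't exercised every signature here, so treat them as a sketch):

from virl2_client import ClientLibrary

# connect to the CML server (self-signed cert, so skip verification)
client = ClientLibrary("https://10.30.10.107", "admin", "pa$$w0rd", ssl_verify=False)

lab = client.create_lab("sdk-test")
r1 = lab.create_node("r1", "iol-xe", 0, 0)
r2 = lab.create_node("r2", "iol-xe", 200, 0)

# connect_two_nodes picks the next free physical interface on each node
lab.connect_two_nodes(r1, r2)
lab.start()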

cmlutils has a function similar to the evengsdk topology builder for importing a lab from a YAML topology; if the eve2cml package is also installed it will convert and import an EVE-NG .unl file (there is also an online tool to convert EVE-NG labs to CML).

Topology file

Rather than creating a lab from scratch, cmlutils performs a lab import, which means the file is a lot more verbose compared to evengsdk. Again, this file is made up of 2 main sections:

  • nodes: A list of nodes, each requiring a name and unique ID; the node interfaces are defined in a similar manner. The startup config must also be included in the file, as it can't be templated or sourced externally.

    - id: # Unique identifier of the node (nX), used to reference it in the links
      label: # Friendly name of the node as seen in the GUI
      node_definition: # The device type
      image_definition: # Optionally specify an image if the node_definition has more than one
      x: # Location from the left, with 0 being centre
      y: # Location down, with 0 being centre
      interfaces:
        - id: # Unique identifier of the node interface (iX), used to reference it in the links section
          label: # Name of the interface
          slot: # Unique number of the interface; not sure how it relates, but loopback is 0 and the other interfaces start from 1
          type: # Can be physical or loopback
      configuration:
        - name: # Name of the configuration file as seen in the CML GUI
          content: # Startup config (defined 'inline') to be applied to the node
    
  • links: A list of dictionaries that contain connections between nodes and connections between nodes and external connectors.

    - id: # Unique identifier of the link (lX)
      n1: # Node ID for A end of the link
      n2: # Node ID for B end of the link
      i1: # Interface ID for A end of the link
      i2: # Interface ID for B end of the link
      label: # Friendly name for the link; not sure where it is used
    

The full cmlutil_lab_topo.yaml topology file that will be deployed; it has been stripped down to the bare minimum mandatory attributes.

---
lab:
  description: ''
  title: CMLUTIL lab topology
  version: 0.3.0
nodes:
  # ISP routers
  - id: n0
    label: ISP
    node_definition: csr1000v
    x: -480
    y: 120
    interfaces:
      - id: i1
        label: GigabitEthernet1
        slot: 0
        type: physical
      - id: i2
        label: GigabitEthernet2
        slot: 1
        type: physical
      - id: i3
        label: GigabitEthernet3
        slot: 2
        type: physical
      - id: i4
        label: GigabitEthernet4
        slot: 3
        type: physical
    configuration:
      - name: iosxe_config.txt
        content: |-
          hostname ISP
          ip domain-name stesworld.com
          no ip domain lookup
          !
          username admin privilege 15 password pa$$w0rd
          !
          vrf definition MGMT
          address-family ipv4
          exit-address-family
          !
          interface GigabitEthernet1
            ip address 10.1.40.1 255.255.255.252
            no shutdown
          interface GigabitEthernet2
            ip address 10.1.40.5 255.255.255.252
            no shutdown
          interface GigabitEthernet3
            ip address dhcp
            no shutdown
          interface GigabitEthernet4
            vrf forwarding MGMT
            ip address 10.30.20.100 255.255.255.0
            no shutdown
          !
          ip route vrf MGMT 0.0.0.0 0.0.0.0 10.30.20.2
          !
          line con 0
            exec-timeout 0 0 
          line vty 0 4
            login local
            privilege level 15
            exec-timeout 60 0 
            transport input all
          !
          crypto key generate rsa modulus 2048
          !
          end          
  # Customer edge routers
  - id: n1
    label: R1
    node_definition: iol-xe
    x: -240
    y: 0
    interfaces:
      - id: i1
        label: Ethernet0/0
        slot: 0
        type: physical
      - id: i2
        label: Ethernet0/1
        slot: 1
        type: physical
      - id: i3
        label: Ethernet0/2
        slot: 2
        type: physical
      - id: i4
        label: Ethernet0/3
        slot: 3
        type: physical
    configuration:
      - name: ios_config.txt
        content: |-
          hostname R1
          ip domain name stesworld.com
          no ip domain-lookup
          !
          username admin privilege 15 password pa$$w0rd
          !
          vrf definition MGMT
            address-family ipv4
          !
          interface Ethernet0/0
            ip address 10.1.40.2 255.255.255.252
            no shutdown
          interface Ethernet0/1
            ip address 10.1.40.9 255.255.255.248
            no shutdown
          interface Ethernet0/3
            vrf forwarding MGMT
            ip address 10.30.20.101 255.255.255.0
            no shutdown
          !
          ip route vrf MGMT 0.0.0.0 0.0.0.0 10.30.20.2
          !
          line con 0
            exec-timeout 0 0 
          line vty 0 4
            login local
            privilege level 15
            exec-timeout 60 0 
            transport input all
          !
          crypto key generate rsa modulus 2048
          !
          end          
  - id: n2
    label: R2
    node_definition: iol-xe
    x: -240
    y: 240
    interfaces:
      - id: i1
        label: Ethernet0/0
        slot: 0
        type: physical
      - id: i2
        label: Ethernet0/1
        slot: 1
        type: physical
      - id: i3
        label: Ethernet0/2
        slot: 2
        type: physical
      - id: i4
        label: Ethernet0/3
        slot: 3
        type: physical
    configuration:
      - name: ios_config.txt
        content: |-
          hostname R2
          ip domain name stesworld.com
          no ip domain-lookup
          !
          username admin privilege 15 password pa$$w0rd
          !
          vrf definition MGMT
            address-family ipv4
          !
          interface Ethernet0/0
            ip address 10.1.40.6 255.255.255.252
            no shutdown
          interface Ethernet0/1
            ip address 10.1.40.10 255.255.255.248
            no shutdown
          interface Ethernet0/3
            vrf forwarding MGMT
            ip address 10.30.20.102 255.255.255.0
            no shutdown
          !
          ip route vrf MGMT 0.0.0.0 0.0.0.0 10.30.20.2
          !
          line con 0
            exec-timeout 0 0 
          line vty 0 4
            login local
            privilege level 15
            exec-timeout 60 0 
            transport input all
          !
          crypto key generate rsa modulus 2048
          !
          end          
  # Edge switch and firewall
  - id: n3
    label: SWI-XNET
    node_definition: ioll2-xe
    x: -40
    y: 120
    interfaces:
      - id: i1
        label: Ethernet0/0
        slot: 0
        type: physical
      - id: i2
        label: Ethernet0/1
        slot: 1
        type: physical
      - id: i3
        label: Ethernet0/2
        slot: 2
        type: physical
      - id: i4
        label: Ethernet0/3
        slot: 3
        type: physical
    configuration:
      - name: ios_config.txt
        content: |-
          hostname SWI-XNET
          ip domain-name stesworld.com
          no ip domain-lookup
          !
          username admin privilege 15 password pa$$w0rd
          !
          vrf definition MGMT
            address-family ipv4
          !
          vlan 99
            name VL_L2_INET
          !
          interface Ethernet0/0
            switchport mode access
            switchport access vlan 99
            no shutdown
          interface Ethernet0/1
            switchport mode access
            switchport access vlan 99
            no shutdown
          interface Ethernet0/2
            switchport mode access
            switchport access vlan 99
            no shutdown
          interface Ethernet0/3
            no switchport
            vrf forwarding MGMT
            ip address 10.30.20.103 255.255.255.0
            no shutdown
          !
          ip route vrf MGMT 0.0.0.0 0.0.0.0 10.30.20.2
          !
          line con 0
            exec-timeout 0 0 
          line vty 0 4
            login local
            privilege level 15
            exec-timeout 60 0 
            transport input all
          !
          crypto key generate rsa modulus 2048
          !
          end          
  - id: n4
    label: XNET-ASA
    node_definition: asav
    x: 160
    y: 120
    interfaces:
      - id: i0
        label: Management0/0
        slot: 0
        type: physical
      - id: i1
        label: GigabitEthernet0/0
        slot: 1
        type: physical
      - id: i2
        label: GigabitEthernet0/1
        slot: 2
        type: physical
    configuration:
      - name: day0-config.txt
        content: |-
          hostname XNET-ASA
          domain-name stesworld.com
          !
          username admin privilege 15 
          username admin password pa$$w0rd
          aaa authentication ssh console LOCAL
          aaa authentication enable console LOCAL
          aaa authorization exec authentication-server auto-enable
          aaa authentication serial console LOCAL
          !
          interface Management0/0
            nameif mgmt
            security-level 100
            ip address 10.30.20.104 255.255.255.0
            no shutdown
          interface GigabitEthernet0/0
            ip address 10.1.40.11 255.255.255.248
            nameif OUTSIDE
            security-level 0
            no shutdown
          interface GigabitEthernet0/1
            ip address 10.1.50.1 255.255.255.0
            nameif INSIDE
            security-level 100
            no shutdown
          !
          route mgmt 0.0.0.0 0.0.0.0 10.30.20.2
          !
          ssh 0.0.0.0 0.0.0.0 mgmt
          http 0.0.0.0 0.0.0.0 mgmt
          ssh scopy enable
          http server enable 
          ssh timeout 60
          telnet timeout 60
          console timeout 0
          !
          crypto key generate rsa modulus 2048
          !
          end          
  # Core switches and end devices
  - id: n5
    label: CORE_SWI
    node_definition: unmanaged_switch
    x: 360
    y: 120
    interfaces:
      - id: i0
        label: port0
        slot: 0
        type: physical
      - id: i1
        label: port1
        slot: 1
        type: physical
      - id: i2
        label: port2
        slot: 2
        type: physical
      - id: i3
        label: port3
        slot: 3
        type: physical
      - id: i4
        label: port4
        slot: 4
        type: physical
      - id: i5
        label: port5
        slot: 5
        type: physical
      - id: i6
        label: port6
        slot: 6
        type: physical
      - id: i7
        label: port7
        slot: 7
        type: physical
  - id: n6
    label: WS01
    node_definition: desktop
    x: 520
    y: 0
    interfaces:
      - id: i0
        label: eth0
        slot: 0
        type: physical
    configuration:
      - name: node.cfg
        content: |-
          # this is a shell script which will be sourced at boot
          hostname WS01
          # configurable user account
          USERNAME=cisco
          PASSWORD=cisco          
  - id: n7
    label: SVR01
    node_definition: server
    x: 520
    y: 200
    interfaces:
      - id: i0
        label: eth0
        slot: 0
        type: physical
    configuration:
      - name: node.cfg
        content: |-
          # this is a shell script which will be sourced at boot
          hostname SVR01
          # configurable user account
          USERNAME=cisco
          PASSWORD=cisco
          # no password for tc user by default
          TC_PASSWORD=          
  # External connectors and unmanaged switches
  - id: n8
    label: INET
    node_definition: external_connector
    x: -600
    y: 120
    interfaces:
      - id: i0
        label: port
        slot: 0
        type: physical
    configuration: []
  - id: n9
    label: MGMT
    node_definition: external_connector
    x: 160
    y: 280
    interfaces:
      - id: i0
        label: port
        slot: 0
        type: physical
    configuration:
      - name: default
        content: bridge1
  - id: n10
    label: MGMT-SWI
    node_definition: unmanaged_switch
    x: 0
    y: 280
    interfaces:
      - id: i0
        label: port0
        slot: 0
        type: physical
      - id: i1
        label: port1
        slot: 1
        type: physical
      - id: i2
        label: port2
        slot: 2
        type: physical
      - id: i3
        label: port3
        slot: 3
        type: physical
      - id: i4
        label: port4
        slot: 4
        type: physical
      - id: i5
        label: port5
        slot: 5
        type: physical
      - id: i6
        label: port6
        slot: 6
        type: physical
      - id: i7
        label: port7
        slot: 7
        type: physical
links:
  # Connections between devices
  - id: l0
    label: ISP-GigabitEthernet1<->R1-Ethernet0/0
    n1: n0
    n2: n1
    i1: i1
    i2: i1
  - id: l1
    label: ISP-GigabitEthernet2<->R2-Ethernet0/0
    n1: n0
    n2: n2
    i1: i2
    i2: i1
  - id: l2
    label: R1-Ethernet0/1<->XNET-SWI-Ethernet0/0
    n1: n1
    n2: n3
    i1: i2
    i2: i1
  - id: l3
    label: R2-Ethernet0/1<->XNET-SWI-Ethernet0/1
    n1: n2
    n2: n3
    i1: i2
    i2: i2
  - id: l4
    label: SWI-XNET-Ethernet0/2<->XNET-ASA-GigabitEthernet0/0
    n1: n3
    n2: n4
    i1: i3
    i2: i1
  - id: l5
    label: XNET-ASA-GigabitEthernet0/1<->SWI-port0
    n1: n4
    n2: n5
    i1: i2
    i2: i0
  - id: l6
    label: WS01-eth0<->SWI-port1
    n1: n6
    n2: n5
    i1: i0
    i2: i1
  - id: l7
    label: SWI-port2<->SVR01-eth0
    n1: n5
    n2: n7
    i1: i2
    i2: i0
# External Connectors and mgmt links
  - id: l8
    label: INET-port<->ISP-GigabitEthernet3
    n1: n8
    n2: n0
    i1: i0
    i2: i3
  - id: l9
    label: MGMT-port<->MGMT-SWI-port0
    n1: n9
    n2: n10
    i1: i0
    i2: i0
  - id: l10
    label: MGMT-SWI-port1<->ISP-GigabitEthernet4
    n1: n10
    n2: n0
    i1: i1
    i2: i4
  - id: l11
    label: MGMT-SWI-port2<->R1-Ethernet0/3
    n1: n10
    n2: n1
    i1: i2
    i2: i4
  - id: l12
    label: MGMT-SWI-port3<->R2-Ethernet0/3
    n1: n10
    n2: n2
    i1: i3
    i2: i4
  - id: l13
    label: MGMT-SWI-port4<->XNET-ASA-Management0/0
    n1: n10
    n2: n4
    i1: i4
    i2: i0
  - id: l14
    label: MGMT-SWI-port5<->SWI-XNET-Ethernet0/3
    n1: n10
    n2: n3
    i1: i5
    i2: i4

Deploying lab

Install the cmlutils package and set the CML server and credentials in environment variables (these can alternatively be set in a .virlrc file).

pip install cmlutils

export VIRL_HOST=10.30.10.107
export VIRL_USERNAME=admin
export VIRL_PASSWORD='pa$$w0rd'
export CML_VERIFY_CERT=False

You can check the credentials are correct by getting a list of all labs on the CML server.

$ cml ls
Labs on Server
╒══════════════════════════════════════╤═══════════════════╤═══════════════╤═════════╤══════════╤═════════╤═════════╤══════════════╕
│ ID                                   │ Title             │ Description   │ Owner   │ Status   │   Nodes │   Links │   Interfaces │
╞══════════════════════════════════════╪═══════════════════╪═══════════════╪═════════╪══════════╪═════════╪═════════╪══════════════╡
│ 5fc145a5-b4f4-4100-9424-d1e75880a582 │ edited config     │               │ ste     │ STOPPED  │       3 │       2 │            7 │
├──────────────────────────────────────┼───────────────────┼───────────────┼─────────┼──────────┼─────────┼─────────┼──────────────┤
│ 6f1cadb8-98ae-4494-9799-6664c1e317c1 │ CML UTILs Base    │               │ ste     │ STOPPED  │       8 │       8 │           33 │
├──────────────────────────────────────┼───────────────────┼───────────────┼─────────┼──────────┼─────────┼─────────┼──────────────┤
│ edc56ab4-317e-4c00-b2f1-f906c566ad0a │ CML UTIL Topology │               │ ste     │ STARTED  │      11 │      15 │           43 │
╘══════════════════════════════════════╧═══════════════════╧═══════════════╧═════════╧══════════╧═════════╧═════════╧══════════════╛

The cmlutil_lab_topo.yaml file defines the topology to be deployed; add --no-start if you don't want all the nodes brought up immediately once the lab has been imported.

$ cml up  --no-start -f cmlutil_lab_topo.yaml

For cmlutils to be able to perform any further actions on the lab you must tell it which lab to use; this can be done with either the lab name or ID (use the ID if you have duplicate lab names).

$ cml use [--id | -n] <lab_ID_or_name>
$ cml use -n "CMLUTIL lab topology"
$ cml id
CMLUTIL lab topology (ID: 7325c1d4-f235-422e-a28d-d3652e9a776f)

With the ID set you can now perform actions against the lab, such as checking the state of all the nodes and stopping or starting them:

$ cml nodes
Here is a list of nodes in this lab
╒══════════════════════════════════════╤══════════╤════════════════════╤════════════════╤═════════════════╤══════════╤══════════════════╕
│ ID                                   │ Label    │ Type               │ Compute Node   │ State           │ Wiped?   │ L3 Address(es)   │
╞══════════════════════════════════════╪══════════╪════════════════════╪════════════════╪═════════════════╪══════════╪══════════════════╡
│ c08d3124-119a-45b9-8059-3e8d4b699893 │ ISP      │ csr1000v           │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ e54d2b92-c36e-445e-823f-6efae39680c8 │ R1       │ iol-xe             │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ f51a7edb-d18d-496a-ab4e-3c15facfc82d │ R2       │ iol-xe             │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 422cd679-79c6-4484-9db1-0c504860e28f │ SWI-XNET │ ioll2-xe           │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 9d0c5742-4339-4a34-8870-a0a969236a94 │ XNET-ASA │ asav               │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ c76e93a2-846e-42c5-8a39-ee9399570ea4 │ CORE_SWI │ unmanaged_switch   │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ c7d1fad6-96ad-4d4d-842e-2c96b434d300 │ WS01     │ desktop            │ Unknown        │ DEFINED_ON_CORE │ True     │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ bcad5197-1a11-46cf-80c6-ce0b32c9059f │ SVR01    │ server             │ Unknown        │ DEFINED_ON_CORE │ True     │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 17fcba09-4ab2-4a01-a7c7-f906d2a461b5 │ INET     │ external_connector │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ 3ce5fb79-3d2b-48a9-9dd5-15ebf1c8cf8f │ MGMT     │ external_connector │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
├──────────────────────────────────────┼──────────┼────────────────────┼────────────────┼─────────────────┼──────────┼──────────────────┤
│ db3b44a5-a899-4d9f-ad33-cba143eb9c9f │ MGMT-SWI │ unmanaged_switch   │ mob-ubt-cml01  │ BOOTED          │ False    │                  │
╘══════════════════════════════════════╧══════════╧════════════════════╧════════════════╧═════════════════╧══════════╧══════════════════╛

$ cml down
$ cml up

A handy option within cmlutils is the ability to console into any of the lab devices, although I guess you would more likely use the Breakout Tool for this.

$ cml console R1
ste@10.30.10.107's password:
Connecting to console for R1
Connected to CML terminalserver.
Escape character is '^]'.

R1>

Summary

Deploying a lab with cmlutils is even more complex than doing so with evengsdk. The topology file is too verbose due to all the extra node details that are required, and the links are even more confusing as you have to identify each node interface by ID rather than by friendly name.

Custom tool for EVE-NG & CML


For me to be able to declaratively deploy labs with EVE-NG or CML in a non-complex way, the only real option is to use the SDKs (evengsdk and virl2-client) and build my own tool. Learning from the past 2 experiments, I came up with the following guidelines for this tool:

  • The topology file should be kept as clean and simplistic as possible; it has to be quicker and more convenient than building labs in the GUI:
    • Should define only the absolute bare minimum information that is needed to create a node
    • You can't easily draw the topology from code, therefore the tool randomises node locations to prevent overlaps, and you tidy up in the GUI once deployed
  • Define which nodes should be connected (node_a needs to connect to node_b) rather than how they are connected (what interfaces):
    • The tool should automatically assign the next available interface for each link (see the sketch after this list). It would be nice to also automatically assign interface IPs, but that is too complex for now as you would have to take into account the different protocols, connection types (PtP or multi-access) and prefix lengths
    • Management interfaces will be the only pre-defined interfaces as their IP address will need to be assigned as part of the initial build
  • Startup configurations are to be generated from a jinja template based on variables defined in the topology file:
    • Variables should be kept to a bare minimum, only really need a bootstrap configuration to allow for management access
    • Don't repeat common variables; variables shared by all nodes should only be defined once
    • Jinja templates will be per device-type and all use the same topology file variables, no snowflakes
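
To illustrate the interface auto-assignment guideline, here is a hypothetical sketch (the function and naming patterns are mine for illustration, not the actual lab_builder code) of handing out the next free interface on each node:

from collections import defaultdict
from typing import Dict, List, Tuple

# hypothetical per-type interface name patterns; the real tool would also
# need per-platform starting indexes (e.g. IOL starts at e0/0, CSR at Gi1)
# and would skip the pre-defined management interfaces
INTF_PATTERN = {"csr1000vng": "Gi{}", "iol": "e0/{}"}

def assign_links(
    links: Dict[str, List[str]], node_types: Dict[str, str]
) -> List[Tuple[str, str, str, str]]:
    """Expand {node_a: [node_b, ...]} into explicit interface-to-interface links."""
    next_idx = defaultdict(lambda: 1)  # next unused interface number per node
    result = []
    for node_a, peers in links.items():
        for node_b in peers:
            intf_a = INTF_PATTERN[node_types[node_a]].format(next_idx[node_a])
            intf_b = INTF_PATTERN[node_types[node_b]].format(next_idx[node_b])
            next_idx[node_a] += 1
            next_idx[node_b] += 1
            result.append((node_a, intf_a, node_b, intf_b))
    return result

print(assign_links({"ISP": ["R1", "R2"]}, {"ISP": "csr1000vng", "R1": "iol", "R2": "iol"}))
# [('ISP', 'Gi1', 'R1', 'e0/1'), ('ISP', 'Gi2', 'R2', 'e0/1')]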

This lab_builder tool can be used to build an EVE-NG or CML lab from a very similar topology file, the only real differences being related to some of the node names. For example, a CSR in EVE-NG is called csr1000vng with GiX interfaces, whereas in CML it is csr1000v with GigabitEthernetX.

Topology file

The topology file is split up into 4 parts; in the examples, any differences between EVE-NG and CML objects are shown in the format EVE/CML.

  • lab: The management details are lab-wide, meaning they will be used by all nodes in the lab.

    name: # The lab name; in EVE-NG this must be unique, but for CML it doesn't matter (it uses an arbitrary ID for uniqueness)
    description: # Lab description
    addr:
      mgmt_prefix: # Range all management addresses come from
      mgmt_gw: # Management range default gateway, used by all nodes
    
  • nodes: Dictionary of nodes to be created, with the key being the node name and the value the node details. type and config are mandatory attributes; other optional settings include the image, number of ethernet ports and eve_type (only needed for non-qemu). mgmt defines the interface connected to the management bridge and the management address (4th octet added to the mgmt_prefix).

    nodes:
      NODE_NAME:
        type: # Node type, in EVE-NG is known as the "template" and in CML the "node_definition"
        image: # (optional) Software version (defaults to newest), in EVE-NG known as "image", in CML "image_definition"
        eve_type: # (optional) Only used by EVE-NG, defaults to qemu so only need to define when using IOL
        ethernet: # (optional) Number of interfaces, if undefined uses the node type default (normally 4)
        config:
          template: # Per device_type jinja template used to create the config
          vars:
            hostname: # Variable used in template to configure nodes hostname
            mgmt:
              MGMT_INTERFACE_NAME: # 4th octet of mgmt IP
            intf:
              INTERFACE_NAME: # x.x.x.x/yy
    
  • networks: Network objects provide local bridging (bridge/unmanaged_switch) as well as lab breakout (pnet/external_connector). The links for these network objects must be defined here under the object; they cannot be defined under the device links (links).

    • For CML the type must start with ec (an external_connector) followed by whatever bridge or virbr number you have set up
    • As only 1 device can be connected to a CML external_connector, setting the number of ports (ethernet) or specifying more than 1 connection (links) will automatically create an additional unmanaged_switch (ec_name_SWI) and connect everything to that

    networks:
      NETWORK_NAME:
        management: # When defined, identifies that this bridge is used for mgmt; all pre-defined node mgmt interfaces connect to this
        type: # The network object type, EVE-NG can be "bridge" or "pnetX", CML can be "unmanaged_switch", "ec_bridgeX" or "ec_virbrX"
        links: # List of nodes that connect to this bridge. Uses the next available local and remote interface (for mgmt uses the pre-defined remote interfaces)
        ethernet: # Required for CML external_connectors with more than 1 connection, automatically creates an unmanaged_switch (xx_SWI) to connect all devices
    
  • links: A dictionary of connections from node_a (key) to a list of node_b devices (value). Rather than the verbosity of defining each connection's interfaces, you just define what is connected and the script automatically assigns the next available interface.

    links:
      NODE_A: # Dict key is node_a and dict value is a list of all the devices it connects to
    

If you are going to use IOL nodes in EVE-NG you need to be aware of this bug that breaks link assignment via the API. I submitted a pull request to fix it, but am not sure if the project is still maintained; as a workaround you will have to install this branch manually.

The full eve_cisco_topo.yml topology file that will be used to deploy the lab on EVE-NG. This same sample topology for CML can be found here.

---
name: eve_cisco_topo_initial
addr:
  mgmt_prefix: 10.40.20.0/24
  mgmt_gw: 10.40.20.1
nodes:
  # ISP routers
  ISP:
    type: csr1000vng
    ethernet: 8
    config:
      template: iosxel3_base.j2
      vars:
        hostname: ISP
        mgmt:
          Gi8: 100
  # Customer edge routers
  R1:
    type: iol
    eve_type: iol                            
    ethernet: 2
    config:
      template: iosxel3_base.j2
      vars:
        hostname: R1
        mgmt:
          e1/3: 101
  R2:
    type: iol
    eve_type: iol
    ethernet: 2
    config:
      template: iosxel3_base.j2
      vars:
        hostname: R2
        mgmt:
          e1/3: 102
  # Edge firewall
  XNET-ASA:
    type: asav
    ethernet: 8
    config:
      template: asa_base.j2
      vars:
        hostname: XNET-ASA
        mgmt:
          Mgmt0/0: 103
  # Core switches and end devices
  CORE-SWI:
    type: iol
    image: L2-ADVENTERPRISEK9-M-17.15.1.bin
    eve_type: iol 
    ethernet: 8
    config:
      template: iosxel2_base.j2
      vars:
        hostname: CORE_SWI
        mgmt:
          e1/3: 104
        vlans:
          10: data
          20: voice
  WS01:
    type: win
    image: win-10
  SVR01:
    type: winserver
# clouds/pnets/bridges and their connections (links)
networks:
  MGMT:
    management: true
    type: pnet1
    links: [ISP, R1, R2, XNET-ASA, CORE-SWI]
  INET:
    type: pnet9
    links: [ISP]
  XNET-SWI:
    type: bridge
    links: [R1, R2, XNET-ASA]
# Connections between devices
links:
  ISP: [R1, R2]
  CORE-SWI: [XNET-ASA, WS01, SVR01]

Deploying lab

You can set the EVE-NG/CML server details at runtime, with environment variables, or within the lab_builder.py file (in that order of preference).

export LAB_SERVER=x.x.x.x
export LAB_USERNAME=admin
export LAB_PASSWORD='pa$$w0rd'
export LAB_TEMPLATES=templates

As the tool is built on top of click you have the normal runtime options (filename, templates, host, username, password), arguments (EVE, CML) and commands (build, config, down, ls-nodes, up).

$ python lab_builder.py --help
Usage: lab_builder.py [OPTIONS] [PLATFORM] COMMAND [ARGS]...

  Build a EVE or CML lab in a semi-declarative fashion based off a YAML topology file

Options:
  -f, --filename FILENAME  file.yaml or path/file.yaml topology defining the lab, defaults to script var (mylab.yml)
  -t, --templates PATH     Template directory, defaults to env-var -> script var (templates)
  -h, --host TEXT          EVE/CML server, defaults to env-var -> script var
  -u, --username TEXT      EVE/CML username, defaults to env-var -> script var
  -p, --password TEXT      EVE/CML password, defaults to env-var -> script var
  --help                   Show this message and exit.

Commands:
  build     Builds the lab based on the loaded topology file
  config    Regenerate and reapply the startup config to all devices
  down      Take DOWN all devices in the lab
  ls-nodes  Table displaying the details and status of all devices in lab
  up        Bring UP all devices in the lab

The first run of the script creates the lab, adding all the nodes and the defined links.

$ python lab_builder.py -f eve_cisco_topo.yml EVE build
or
$ python lab_builder.py -f cml_cisco_topo.yml CML build

This will produce a randomised topology layout, which you can then adjust in the GUI to be more human-friendly.

The 'build' command also creates a new topology file (xxx_v1.yml) with an extra per-node intf_links dictionary that describes what each interface of the node connects to. It is worth noting this file also has a lab_id dictionary; this is what is used by all the other runtime commands (config, up, down, ls-nodes) to know which lab to run the actions on.

nodes:
  ISP:
    type: csr1000vng
    ethernet: 8
    config:
      template: iosxel3_base.j2
      vars:
        hostname: ISP
        mgmt:
          Gi8: 100
        intf_links:
          Gi1: L1 >> INET
          Gi2: L5 >> R1:e0/1
          Gi3: L6 >> R2:e0/1
  R1:
    type: iol
    eve_type: iol
    ethernet: 2
    config:
      template: iosxel3_base.j2
      vars:
        hostname: R1
        mgmt:
          e1/3: 101
        intf_links:
          e0/0: L2 >> XNET-SWI
          e0/1: L5 >> ISP:Gi2
.......
lab_id: /eve_cisco_topo_initial.unl

To assign the interface IP addresses for a router it should just be a case of changing the dictionary name to intf and adding the appropriate address to either end of the link; for switches and firewalls it is a bit more complicated, as their interfaces don't just have an IP address.

nodes:
  ISP:
    type: csr1000vng
    ethernet: 8
    config:
      template: iosxel3_base.j2
      vars:
        hostname: ISP
        mgmt:
          Gi8: 100
        intf:
          Gi1: dhcp
          Gi2: 10.1.40.1/30
          Gi3: 10.1.40.5/30
  R1:
    type: iol
    eve_type: iol
    ethernet: 2
    config:
      template: iosxel3_base.j2
      vars:
        hostname: R1
        mgmt:
          e1/3: 101
        intf:
          e0/0: 10.1.40.9/29
          e0/1: 10.1.40.2/30

Once the file has been changed (example here) you can use it to regenerate the startup config and apply it to all devices using the 'config' runtime command (it will first wipe all device configs).

$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE config

There are a few other commands to bring up or shut down all nodes, as well as produce a table showing the status of all the nodes.

$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE up
$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE ls-nodes
$ python lab_builder.py -f eve_cisco_topo_ip.yml EVE down

Summary

I probably wasted far more time than I should have on this trying to make it a multi-lab-provider setup. In theory it seems a good way to set up simple labs; it is certainly a lot easier and less complex than using the EVE-NG or CML native tools. Only time will tell whether this tool is fit for purpose for lab-as-code, in terms of whether I actually continue using it to build labs or slip back into the GUI. You could quite easily add more functionality or make it more polished, and it could also do with proper error handling, but I won't be doing much more to it until I am certain it is something I will get a lot of use out of.

One thing this does go to show is that you can very easily use these SDKs to build a lab as part of a CI/CD pipeline to test changes post-deployment. It wouldn't be too difficult to pull the existing running config from a SoT (source of truth) and then commit new changes to that before reporting back on the outcome.