A trip down memory lane on how labbing has changed: from prehistoric switches bought on eBay, through emulators that took longer to configure than the labs themselves, to present-day solutions that can programmatically build a multi-vendor lab in minutes. Kids today don’t know they’re born…
A brief history
A few years back I used EVE-NG to create a replica of our production network, which we used for testing core network changes. More recently I have used it to design, test and migrate to a new leaf and spine fabric. The more I use it, the more impressed I am with what you can do with it. This has got me thinking about how the labbing world has changed over the years since I first started out in networking.
Like many who try to get into IT, I started out as a textbook engineer, doing my CCNA with little or no practical experience. After that I started on my CCNP, but soon realised this wasn’t going to work without getting my hands dirty, so I started picking up old switches and routers off eBay. Imagine running 8 switches and routers in a room the size of a shoe box, enough to send anyone deaf (or crazy). I was still very green at the time, typing commands word for word rather than pasting from notepads (thanks Pavan). It took so long I am surprised the electricity bills didn’t bankrupt me before I made it.
When I eventually found someone daft enough to employ me (thanks Jim, probably more successful than my stint making whiz for cheesesteaks) I started working nights in a data centre, and it was here I discovered GNS3. GNS3 at this time was pretty painful; it wasn’t something I could run on a low-budget laptop and it required lots of watering and feeding to get it running smoothly. Luckily, thanks to the perks of the night shifts, I would build my super server (rack a temporary server with HDDs I hid away so they weren’t wiped) and practise to my heart’s content. I was studying for the BSCI exam (the routing exam of the CCNP), so I ran up all these different OSPF and EIGRP labs to get a proper understanding of how the protocols worked. Still not real-life experience, but 10 times better than reading command outputs in a book. One of the most valuable resources I had at that time was the PacketLife cheat sheets; if I ever bump into Jeremy I owe him a good few pints, what a legend.
When I made the decision to go down the CCIE R&S route I needed a more robust and reliable lab than GNS3; I needed to focus on studying, not on troubleshooting the lab. I bought an HP tower (can’t remember the spec, think it may have been 64GB), installed ESXi and used it to run 20 CSRs. It was hooked up to 4 physical switches, so I could build any of the labs from the INE CCIE course. It was also great for labbing other virtualised platforms such as F5s, NetScalers and ASAs. I never had any problems with this setup in terms of stability, as you are not emulating devices; you have real production-grade VMs, so all features worked as expected (albeit with limited throughput).
Once finished with the exam I downgraded, first running ESXi on iMacs before discovering NUCs, which I am still using today. You get the best of both worlds with these tiny, completely quiet, reasonably priced, reliable (they come with a 3-year no-quibble guarantee), high-powered devices. They can be maxed out with an i7 CPU and 64GB of RAM, or, as I do, you can use cheaper, less powerful older ones in a cluster. As most platforms are virtualised you can run near enough anything on them, and with the integration of ESXi and vSphere with Fusion and Workstation getting better with each release, it is easier to manage and you can copy VMs between environments.
When it came time to recertify, rather than using the NUCs I decided to try out WEB-IOU, as I didn’t need the full CSR functionality and you could get builds which included the pre-built INE labs. The beauty of this was that, although a little fiddly for setting up new labs (no drag and drop to build), it was a lot more stable than the old GNS3, and as it used IOS images it didn’t need many resources. I could run it on Fusion with only 4GB of RAM, making the labs portable.
I am assuming EVE-NG came from WEB-IOU, as it has the same feel to it but is a lot more polished and user-friendly. You can have a lab up and running in minutes, and it is all very logical in the way you connect out of it to external devices, networks or the Internet. You do have to put a bit of work in to build templates for things such as Windows or Linux VMs, but it is pretty straightforward and well documented. I have built some pretty extensive hybrid labs using N9Ks, CSRs, vIOS, F5s, ASAvs, Firepower and Check Point devices. Apart from some issues with buggy N9Kv images it has been pretty much rock solid.
EVE-NG can be run on most things; I have deployed it as an OVF on ESXi and Fusion, as well as on an Ubuntu host in Azure. The sizing needed depends on what you are running in your lab; for example, NXOS needs 2 vCPU/8GB, whereas vIOS only needs 1 vCPU/512MB. If you look in the community there are plenty of pre-built labs, and you can find many more in forums.
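Because the per-image requirements vary so much, it is worth tallying them up before choosing a host. A quick back-of-the-envelope sketch using the NXOS and vIOS figures above (the CSR figure is my own assumed placeholder, check the EVE-NG docs for real numbers):

```python
# Rough lab-sizing helper: tally vCPU and RAM for a planned EVE-NG topology.
# NXOS and vIOS figures are the rules of thumb quoted above; the CSR entry
# is an assumed figure purely for illustration.

SIZING = {
    "nxos": {"vcpu": 2, "ram_mb": 8192},   # from the text above
    "vios": {"vcpu": 1, "ram_mb": 512},    # from the text above
    "csr":  {"vcpu": 1, "ram_mb": 3072},   # assumption, verify before relying on it
}

def lab_requirements(nodes):
    """Sum the vCPU and RAM (MB) needed for a dict of {image: count}."""
    vcpu = sum(SIZING[img]["vcpu"] * n for img, n in nodes.items())
    ram_mb = sum(SIZING[img]["ram_mb"] * n for img, n in nodes.items())
    return vcpu, ram_mb

# A small leaf-and-spine style lab: 4 NXOS switches and 2 vIOS edge routers.
vcpu, ram_mb = lab_requirements({"nxos": 4, "vios": 2})
print(f"Needs {vcpu} vCPU and {ram_mb / 1024:.1f}GB RAM")
# → Needs 10 vCPU and 33.0GB RAM
```

Handy when deciding whether a lab will squeeze onto a single NUC or needs spreading across the cluster.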
So I guess that is where we are today. There is a lot more choice for labbing these days: you have vendor-specific offerings such as Cisco CML, hypervisors like ESXi or VirtualBox, and emulators like GNS3 or EVE-NG. At the end of the day it all comes down to resources and personal preference; I like EVE-NG, but I am not saying it is better or worse than any of the others.
The next step for me is to get away from the GUI and use a ‘Network as Code’ ideology to deploy labs. There are Python SDKs for deploying labs in CML and GNS3, some projects doing a similar thing with EVE-NG, and I have seen others using Vagrant to deploy the topology. This netsim-tools project of Ivan’s looks very interesting; I like the idea of deploying a lab and its configuration as a declarative state. I tried something similar with my build_fabric project, but it is too complicated due to all the variables in a production environment. However, for the base of a lab it is viable, as you would only have a simple basic configuration (L3 connectivity and/or a simple routing protocol setup) that you could build upon.
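To make the declarative idea concrete, here is a toy sketch of the principle: declare the lab as data, then render a minimal base config (hostname plus L3 interfaces) from it. This is a hypothetical illustration of the pattern, not the netsim-tools format or any vendor SDK.

```python
# Toy 'Network as Code' sketch: the lab is declared as data, and a minimal
# IOS-style base config is rendered from that declared state.
# Topology format and hostnames are made up for illustration.

topology = {
    "r1": {"Ethernet0/0": "10.0.12.1/30"},
    "r2": {"Ethernet0/0": "10.0.12.2/30"},
}

def render_config(hostname, interfaces):
    """Build an IOS-style base config from the declared interface state."""
    lines = [f"hostname {hostname}"]
    for name, cidr in interfaces.items():
        ip, prefix = cidr.split("/")
        # Convert prefix length to a dotted netmask (IOS expects a netmask).
        mask_int = (0xFFFFFFFF << (32 - int(prefix))) & 0xFFFFFFFF
        mask = ".".join(str((mask_int >> s) & 0xFF) for s in (24, 16, 8, 0))
        lines += [f"interface {name}", f" ip address {ip} {mask}", " no shutdown"]
    return "\n".join(lines)

for host, intfs in topology.items():
    print(render_config(host, intfs))
```

From there the rendered configs could be pushed by whatever deployment tool you prefer; the point is that the lab's desired state lives in one declarative structure you can version-control.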