At the beginning of 2021, Namex IXP started the rollout of its next-generation peering platform, the active infrastructure at the core of its network interconnection facility. The new platform relies on an IP fabric design with VXLAN as the overlay network and BGP EVPN as the control-plane protocol. Development of this project started back in March 2020 and saw Mellanox and Cumulus Networks (both now part of NVIDIA) as major technology partners.
Before diving into the details, a brief historical note may help to explain the drivers and motivations behind these technical choices.
Netgate has “just” published their first blog post describing official WireGuard support in the latest development snapshot of pfSense 2.5.0.
As a network engineer, routing enthusiast, technical supporter, and DN42 participant, hearing about the upcoming WireGuard support for pfSense has me very excited. Its ease of use and simplistic configuration make it – in my opinion – the most attractive VPN solution for P2P-mesh VPN networks and road-warrior access on the go. Plus, WireGuard is close to ubiquitously supported on *most* major platforms, whether via direct development support or 3rd-party software.
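To show what I mean by simplistic configuration, below is a minimal sketch – wrapped in Python purely for illustration – that writes out a complete point-to-point WireGuard config. Every value in it (keys, addresses, endpoint, file name) is a made-up placeholder, not anything taken from the pfSense snapshot:

```python
# A complete WireGuard tunnel definition really is this small.
# All keys, addresses, and hostnames below are placeholder examples.
WG0_CONF = """\
[Interface]
PrivateKey = <local-private-key>    # generate with: wg genkey
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <peer-public-key>       # the other side's public key
AllowedIPs = 10.0.0.2/32            # traffic for these IPs rides the tunnel
Endpoint = peer.example.net:51820   # where to reach the peer
"""

# Write the config where wg-quick would look for it on most Linux systems.
with open("wg0.conf", "w") as fh:
    fh.write(WG0_CONF)
```

That is the whole thing – two keys, an address, and a peer list – which is exactly why I find it so attractive for mesh and road-warrior setups.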
Netgate mentioning – in their blog post – that they have been a sponsor of the development needed to get WireGuard supported on FreeBSD has me thankful, even though I am not a paying customer of theirs (i.e. a prosumer #wfh).
pfSense's lack of WireGuard support – especially once OPNsense introduced WireGuard (& ZeroTier) support months ago – had me seriously considering, over the Christmas period, switching my prosumer firewall solution to OPNsense, for the VPN support of WireGuard & ZeroTier alone. Now, however… I am convinced to stick with pfSense for years to come, and I am excitedly looking forward to the next stable release, which will very hopefully include the recently announced WireGuard support. (/^▽^)/
With that out of the way – I wanted to spend some time in this post talking about the command-line tool found on Linux systems called tc. We’ve talked about tc before, when we discussed creating some simulated network/traffic topologies, and it worked awesome for that use case. If you recall from that earlier post, tc is short for Traffic Control and allows users to configure qdiscs. A qdisc is short for Queuing Discipline; I like to think of it as a way of manipulating the Linux kernel's packet scheduler.
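As a quick refresher on what that looks like in practice, here's a small sketch – the tc invocations wrapped in Python for convenience – that attaches the netem qdisc to an interface to add artificial latency, inspects it, and then cleans up. The interface name eth0 is an assumption, and the commands need root privileges:

```python
import subprocess

def tc(args: str) -> None:
    """Run a single tc command, echoing it and failing loudly on errors."""
    print("+ tc", args)
    subprocess.run(["tc"] + args.split(), check=True)

# Attach netem as the root qdisc on eth0 (assumed interface name),
# delaying every egress packet by 100 ms.
tc("qdisc add dev eth0 root netem delay 100ms")

# Show the qdisc now installed on the interface.
tc("qdisc show dev eth0")

# Remove our qdisc, falling back to the kernel's default scheduler.
tc("qdisc del dev eth0 root")
```

Other qdiscs are swapped in and out with the same add/show/del pattern; netem is simply the natural fit for the kind of traffic simulation we covered in that earlier post.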
Demands for connectivity in the data center are rising, especially in hyperscale data centers where 1728- or 3456-fiber cables are becoming more popular. Connecting such high-fiber-count cables to servers and switches is the key challenge because there’s only so much rack space available. Fiber patch panels are at the center of this challenge. To address this issue, the industry is increasing port density in patch panels to accommodate the ongoing thirst for bandwidth.