New In The Lab: 25/40/100Gbps Networking

For the past few years, Ethernet has come in four speeds: 1, 10, 40 and 100Gbps. While the cost of 10Gbps Ethernet has fallen to under $500 a port, the next step up, 40Gbps, has remained four times as expensive as 10Gbps and has therefore been relegated to inter-switch links and the like. This year Santa brought us 25Gbps Ethernet, which delivers on its promise of 2.5 times the bandwidth of 10Gbps at a small cost premium.

The problem with 40Gbps Ethernet is that a 40Gbps port is really an aggregation of four 10Gbps data channels. That means a 40Gbps port consumes four 10Gbps ports on the switch’s internal ASIC and, at the physical layer, needs four sets of SERDES (serializer/deserializers) and media: eight strands of fiber, a fat, complicated copper DAC, or an expensive wavelength division multiplexing optic that sends the four channels over different wavelengths of light.

Like 40Gbps, 100Gbps Ethernet aggregates four data channels, with each channel running at 25Gbps rather than 10Gbps. Twenty-five gig Ethernet leverages all the R&D that’s been poured into 100Gbps over the years, stealing one of its channels and using it alone. The Ethernet industry just built a better mousetrap from its existing parts bin, and that means 25Gbps Ethernet came out of the gate ready for prime time. Add in that those 25Gbps ports will work just fine at 10Gbps when you plug in 10Gbps optics or DACs, and 25Gbps looks like an obvious choice for any data center I’m building in the foreseeable future.
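To put the lane math in concrete terms, here’s a minimal sketch (Python, purely illustrative) using the parallel short-reach fiber variants: a 40Gbps or 100Gbps port burns four SERDES lanes and eight fiber strands, while a 25Gbps port gets by with one lane and a single fiber pair.

```python
# Lane math behind the Ethernet speeds discussed above, using the
# parallel short-reach fiber variants as the example. Each SERDES lane
# needs its own transmit and receive fiber strand in a parallel optic.
ETHERNET_PORTS = {
    "10GBASE-SR":   {"lanes": 1, "gbps_per_lane": 10},
    "25GBASE-SR":   {"lanes": 1, "gbps_per_lane": 25},
    "40GBASE-SR4":  {"lanes": 4, "gbps_per_lane": 10},
    "100GBASE-SR4": {"lanes": 4, "gbps_per_lane": 25},
}

for name, port in ETHERNET_PORTS.items():
    total = port["lanes"] * port["gbps_per_lane"]
    strands = port["lanes"] * 2  # one Tx + one Rx fiber per lane
    print(f"{name:13} {port['lanes']} x {port['gbps_per_lane']}Gbps lanes "
          f"= {total:3}Gbps, {strands} fiber strands")
```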

Here at DeepStorage Labs we first started looking at 25Gbps Ethernet seriously back in February, when we got a briefing from QLogic announcing the first 25Gbps NICs. Of course, I asked whose switches I could plug these wonderful cards into. The switch vendors hadn’t yet made an announcement and, this being America, there must have been a web of nondisclosure agreements between the switch vendors and QLogic that would have made Aragog himself proud, preventing anyone from actually telling me.

I then tweeted something on the order of “Great briefing from QLogic on 25Gbps Ethernet NICs. I wonder whose switch I could use.” A few hours later I got a Twitter DM from a friend of mine at Cisco asking for my shipping address. While that usually means I’m getting some sort of promotional swag, a few weeks later the UPS man showed up at my door with a pair of shiny new Nexus 92160YC-X switches, not another golf shirt with my Twitter handle embroidered on it.

These switches each have 48 10/25Gbps SFP28 ports plus six four-channel QSFP ports, two of which can run at 100Gbps, the other four limited to a measly 40Gbps. While Cisco pitches them as top-of-rack switches, here at DeepStorage Labs they’re middle-of-row switches with just about everything directly connected to them.

 

The Cisco folks tell me the 92160s serve as the data collectors for Cisco’s Tetration analytics, but I haven’t even thought about getting that deep into networking.

Now that we had 25Gbps switches it was time to get some of the 25Gbps NICs that started the whole thing. I mentioned our need to our friends at Mellanox, and they were kind enough to send us half a dozen of their ConnectX-4 cards.

Since the 96 ports of 25Gbps switching the Ciscos provided may not have been enough, Mellanox also sent one of their SN2100 16-port 100Gbps Ethernet switches, along with an assortment of SFP28 (the proper name for a 25Gbps SFP port) and QSFP28-to-SFP28 cables.

This cute little half-width addition to the mix means we can dedicate bandwidth specifically to a system under test; in the past, we would have had to pull a switch from the lab’s core for this and run what production systems we have without redundancy. With QSFP28-to-SFP28 fan-out cables, we get to use each of the 16 100Gbps ports as four 10/25Gbps ports. Mellanox claims 300ns latency between 100Gbps ports, which should be just right for testing NVMe over Fabrics and other tier 0 solutions.
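For a sense of scale, here’s a quick, purely illustrative tally (Python) of the 10/25Gbps port count the lab now has to play with, assuming, hypothetically, that every SN2100 port gets broken out four ways:

```python
# Back-of-the-envelope port tally from the counts above (illustrative only;
# in practice not every SN2100 port will be broken out with a fan-out cable).
nexus_sfp28_ports = 2 * 48       # two Nexus 92160YC-X switches, 48 SFP28 ports each
sn2100_qsfp28_ports = 16         # Mellanox SN2100
sfp28_per_fanout = 4             # one QSFP28 port fans out to four SFP28 ports

fanout_ports = sn2100_qsfp28_ports * sfp28_per_fanout
total_ports = nexus_sfp28_ports + fanout_ports

print(f"Nexus SFP28 ports:       {nexus_sfp28_ports}")
print(f"SN2100 fanned-out ports: {fanout_ports}")
print(f"Total 10/25Gbps ports:   {total_ports}")
print(f"Aggregate at 25Gbps:     {total_ports * 25 / 1000:.1f} Tbps")
```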

The SN2100 is an open Ethernet switch that supports ONIE, the Open Network Install Environment. That means I could install Cumulus Linux or an SDN solution like Big Switch. It’s been a while since I’ve been excited about networking, but the next step after today’s HCI is going to be integrating the switches, and now I get to play with that too.

This new triumvirate of switches replaces the pair of Brocade 8000s that have been the core of the lab network for years. The Brocades have found new homes in the labs of my influencer friends, and I’ll miss the FCoE functionality, but we haven’t had a need for FCoE in a while.

I said at the beginning that the promise of 25Gbps Ethernet is that it delivers two and a half times as much bandwidth at basically the same price. On the switch side, I found the 92160 online for under $11,000, which is actually on the low end of the price range for enterprise 10Gbps switches and about 50% less than Cisco’s own Catalyst 3850-48XS-S. The Mellanox cards are $375 or so online, compared to about $300 for a 10Gbps CNA from Emulex or QLogic.

When it comes to connecting things together, you’re fine as long as you’re happy with DACs. Three-meter 25Gbps DACs are just under $80 from Amphenol’s Cables on Demand site and $60 at my favorite direct-from-China knockoff supplier, FiberStore (FS.com). The 10Gig versions are $55 from Cables on Demand and $15 from China. That makes the incremental cost of 25Gig under $100 a port. At those prices, why not?
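To show where that per-port figure comes from, here’s a minimal sketch (Python, using only the street prices quoted above; they were current at the time of writing and will certainly drift):

```python
# Per-port premium of 25Gbps over 10Gbps, using the street prices quoted above.
# Switch ports are left out of the sum because the 92160 priced out at or below
# comparable enterprise 10Gbps switches, which offsets the NIC and cable premium.
prices_10g = {"nic": 300, "dac_amphenol_3m": 55, "dac_fs_3m": 15}
prices_25g = {"nic": 375, "dac_amphenol_3m": 80, "dac_fs_3m": 60}

for part in prices_10g:
    premium = prices_25g[part] - prices_10g[part]
    print(f"{part:16} 10G ${prices_10g[part]:>3}  25G ${prices_25g[part]:>3}  premium ${premium}")

# One NIC port plus one 3m DAC: roughly a $100-$120 premium for 2.5x the
# bandwidth, before counting the switch-side savings noted above.
for dac in ("dac_amphenol_3m", "dac_fs_3m"):
    delta = (prices_25g["nic"] + prices_25g[dac]) - (prices_10g["nic"] + prices_10g[dac])
    print(f"NIC + {dac}: ${delta} more per port")
```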

Unfortunately, demand from customers like Amazon Web Services has sucked up all the 25Gbps optics the vendors can crank out, and the cheapest knockoff optics I’ve found online are over $350, compared to $20 for the equivalent 10Gbps version. At this point, if you’re married to fiber, I’d install 25Gbps switches and NICs with 10Gbps optics and save the performance boost for when 25Gbps optics can be had for a hundred or two hundred dollars a pop.

Disclosure: Cisco and Mellanox provided gear for use in DeepStorage Labs as described above. We are very grateful.