How much latency does NSX add? [Part 2 – VXLAN & Distributed Logical Routers]

Introduction

This article is the second in a series addressing questions around how much latency the various functions of NSX add when compared to a purely physical network design.  The last article focused on the NSX Distributed Firewall.  This time, we’ll be looking at VTEPs/VXLANs and Distributed Logical Routers from an East/West traffic perspective.  We will hit north/south traffic (i.e. traffic going out through a Perimeter Edge) and other stuff like Bridging in future articles.

As an aside, I apologize for the delay in getting to this second article.  The problem was that I lost access to the 10Gb switches and NICs I was using before.  I just lucked into another set, so I’m jumping on this while I have the chance 🙂

Testing Harness

The testing harness for this test differs from the last article, due to the equipment issues mentioned above.  It’s still pretty comparable, though – the servers are two Cisco UCS C200 rackmount servers running vSphere 6.0U2.  As before, these are 3-4 year old servers and aren’t anything special.  They represent the low end of the vSphere hosts you’d find in a production datacenter.  The physical setup is as follows:

 

[Figure: physical test setup]

 

ESXi host specifications

These servers are a couple of generations old.

Cisco R200-1120402W
Dual X5650 CPUs – 2.67GHz 6C
96GB DDR3 1333MHz RAM
Intel C600 Chipset
Qlogic QLE3442 CU 10GbE Dual Port NIC

Physical Network

Alcatel-Lucent OS6900-X20 10GbE L2/L3 switch

Virtual Machines

3 VMs per host used in testing, all configured as follows:

4 vCPU
4GB vRAM
1 VMXNET3 vNIC
40GB vdisk
Windows 2012 R2

Items of note

  1. I disabled all the power-saving features in the BIOS.  This is a personal thing really, because I’ve had tests like this get polluted by weird power-saving behavior.
  2. I used the async driver for the NICs – I’ve seen the in-box drivers that come with vSphere do weird things like disable RSS even when the NIC supports it.

Testing Protocol

The main goal of the testing was to figure out how much latency the various NSX features would add in a real production scenario.  I tested this using netperf against the raw interfaces of multiple VMs, as well as with a script that repeatedly ran wget against an ASP.NET page on the source VM, which in turn ran a simple T-SQL query against a small database on the target VM.
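For what it’s worth, the HTTP side of the test looked conceptually like the sketch below.  This is a minimal illustration, not the actual harness – the URL, page name, and request count are placeholders:

```python
# Hypothetical sketch of the HTTP load loop (not the actual test script).
# The URL and request count below are placeholders.
import time
import urllib.request

URL = "http://source-vm/app/query.aspx"   # ASP.NET page that runs the T-SQL query
REQUESTS = 1000

latencies = []
for _ in range(REQUESTS):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    latencies.append((time.perf_counter() - start) * 1_000_000)  # microseconds

latencies.sort()
print(f"median: {latencies[len(latencies) // 2]:.0f} usec, "
      f"p99: {latencies[int(len(latencies) * 0.99)]:.0f} usec")
```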

General netperf procedure

On the target VMs:  netserver

On the source VMs:  netperf -H <target server ip> -t TCP_RR -l 120 -- -r 1024,1024

This test is a rather simple measurement of the round-trip time (RTT) between two VMs.  In each transaction, 1024 bytes are transferred from the source to the target, then 1024 bytes from the target back to the source, and then the next transaction begins (TCP_RR performs all of its transactions over a single established TCP connection).

RTT in milliseconds [aka ms] = (1 / # of transactions per second reported by netperf) * 1000

RTT in microseconds [aka usec] = (RTT in ms) * 1000
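If you want to turn netperf’s output into an RTT figure yourself, the conversion above is trivial to script; here’s a small Python helper (the 20,000 transactions/sec in the example is an illustrative number, not one of my results):

```python
def netperf_rtt_usec(transactions_per_sec: float) -> float:
    """Convert a netperf TCP_RR transactions/sec result into RTT in microseconds."""
    rtt_ms = (1.0 / transactions_per_sec) * 1000.0
    return rtt_ms * 1000.0

# Example: 20,000 transactions/sec -> 50 usec RTT (illustrative number only)
print(netperf_rtt_usec(20000))  # 50.0
```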

 

Scenarios Tested

To figure out the latency introduced by the features in question, we have to check the overall latency of a few different scenarios.  The three that were tested are illustrated in the diagrams below.

Notes on the scenarios

  1. Scenario 1 – Each host has its VTEP on a different subnet, and thus requires some kind of routing to reach the other host.  While this may not be the case in all production designs, it definitely happens in a lot of them.  The local cache was primed on both hosts, so there was no need to do any multicast flood/learn or unicast controller queries for discovery.  Since that discovery only occurs the first time two VMs talk, and isn’t something constantly going on, including it would have skewed the results.  If there’s interest, I could test that specifically – but remember it happens VERY infrequently between VMs that talk to each other regularly.
  2. Scenario 2 – This scenario is included so that we can separate the latency contributed by the physical routing itself (which is also present in Scenario 1) from the latency added by VXLAN and the DLR.
  3. Scenario 3 – Finally, we test a typical situation where both VMs are on the same VLAN – no routing at all, virtual or physical, is happening.  This represents the baseline, or legacy, design.

 

Scenario 1 – VXLAN + DLR + Physical L3 + Physical L2

 

[Figure: Scenario 1 topology]

Scenario 2 – Physical L3 + Physical L2

 

[Figure: Scenario 2 topology]

Scenario 3 – Physical L2 Only

 

[Figure: Scenario 3 topology]

 

Raw Test Data for all 3 East/West Scenarios

 

[Figure: raw netperf results for all three scenarios]

 

Full Path Latency Breakdown in Microseconds

 

[Figure: full path latency breakdown in microseconds]

Bottom line

  1. Distributed Logical Routing adds 8-14 microseconds to the RTT
  2. VXLAN encapsulation adds less – roughly 3-10 microseconds to the RTT
  3. As we learned in the previous article, the Distributed Firewall adds 13-16 microseconds to the RTT

 

This is what it looks like relative to the other sources of latency:

 

[Figure: full path latency comparison]

Conclusion

Because these three things represent all the NSX features that are relevant to East/West traffic, it is reasonable to say that with everything turned on, NSX adds between 24 and 40 microseconds to the overall RTT between two VMs.  Unless you’re doing high-frequency trading or something similar, 40 microseconds is nothing.  It is 0.04 milliseconds – completely negligible and undetectable in 99% of production environments.
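That 24-40 microsecond figure is simply the sum of the three per-feature ranges from the bottom line above; a quick sanity check:

```python
# Sum of the per-feature RTT ranges measured above (values in microseconds)
dlr   = (8, 14)   # Distributed Logical Routing
vxlan = (3, 10)   # VXLAN encapsulation
dfw   = (13, 16)  # Distributed Firewall (from Part 1)

low  = dlr[0] + vxlan[0] + dfw[0]
high = dlr[1] + vxlan[1] + dfw[1]
print(f"{low}-{high} usec total")  # 24-40 usec
```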

 

Footnote / Edit

As noted above, netperf was not the only test I used to work out these ranges.  I also ran an actual .NET/MSSQL application between two VMs, did a boatload of transactions of various sizes, and then used merged pcaps taken at various points (pre-VTEP, post-VTEP, etc.) and their timestamps to work out the latency.

I’m focusing on netperf in this article because it is the easiest thing for someone else to replicate or understand.
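If you’re curious how the pcap comparison works mechanically, here’s a rough sketch of the idea.  It assumes each capture point has already been exported to a CSV of packet timestamps keyed by something that identifies the same packet at both points (the inner TCP sequence number, for example); the file and column names are made up for illustration:

```python
# Rough illustration of the pcap-diff approach (not the actual analysis scripts).
# Assumes each capture point was exported to a CSV with a header row "seq,timestamp",
# where "seq" identifies the same packet at both capture points.
import csv

def load_timestamps(path):
    """Return {seq: epoch_timestamp} from a two-column CSV: seq,timestamp."""
    with open(path) as f:
        return {row["seq"]: float(row["timestamp"]) for row in csv.DictReader(f)}

pre_vtep  = load_timestamps("pre_vtep.csv")    # hypothetical file names
post_vtep = load_timestamps("post_vtep.csv")

# Latency of the segment between the two capture points, per matched packet
deltas_usec = [
    (post_vtep[seq] - pre_vtep[seq]) * 1_000_000
    for seq in pre_vtep.keys() & post_vtep.keys()
]
if deltas_usec:
    print(f"avg segment latency: {sum(deltas_usec) / len(deltas_usec):.1f} usec")
```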

 

Author: sean@nsxperts.com
