Jumbo Frames
In this post I will be discussing jumbo frames: what they are, their benefits and drawbacks, and when to use them. By the end of this article you should be able to identify the use cases for jumbo frames and determine when they make sense. As a prerequisite, I assume you have an understanding of the OSI model, specifically layers 1-3.
Understanding Ethernet Framing
In the post linked here I describe Ethernet framing and its structure in detail. If you do not already have a solid grasp of Ethernet framing, I suggest reviewing that article before continuing.
What Makes a Jumbo Frame
Now that we understand the structure of an Ethernet frame and what each field means, how do we keep a large payload (>1500 bytes) from being fragmented across multiple Ethernet frames?
Assuming your networking equipment is configured with the proper MTU, from the frame's perspective nothing structural changes: the payload section simply carries more data. One nuance worth noting is that for IP traffic the Length/EtherType field carries an EtherType (0x0800 for IPv4) rather than a payload length, since any value above 1535 is interpreted as an EtherType, so the maximum frame size is enforced by the configured MTU rather than by that field. So let's say you are sending a TCP/IP packet with a total size of 9000 bytes; the entire packet goes into the payload section of a single Ethernet frame, provided every device in the path is configured to accept a frame that large.
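To make that concrete, here is a minimal Python sketch of the byte math (not real networking code): given an IP packet size and an interface MTU, it reports how many frames the packet ends up in. The frames_needed helper is purely illustrative; it assumes a minimal 20-byte IPv4 header and ignores the 8-byte fragment-offset alignment rule.

```python
import math

IPV4_HEADER = 20  # minimal IPv4 header with no options (an assumption for this sketch)

def frames_needed(ip_packet_size: int, mtu: int) -> int:
    """How many Ethernet frames an IP packet of this total size ends up in."""
    if ip_packet_size <= mtu:
        return 1  # fits in a single frame, no fragmentation
    # Otherwise the packet is IP-fragmented: each fragment gets its own IPv4
    # header. The 8-byte fragment-offset alignment rule is ignored here to
    # keep the math simple.
    payload_per_fragment = mtu - IPV4_HEADER
    original_payload = ip_packet_size - IPV4_HEADER
    return math.ceil(original_payload / payload_per_fragment)

print(frames_needed(9000, 1500))  # 7 frames on a standard 1500-byte MTU
print(frames_needed(9000, 9000))  # 1 frame on a jumbo 9000-byte MTU
```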
Do I Account for Frame Header When Setting MTU
TL;DR - No
Because the Ethernet header and trailer belong to layer 2, you do not need to account for Ethernet overhead when setting the MTU; the MTU describes the largest payload an Ethernet frame will carry. You do, however, need to account for the headers of the data being encapsulated, assuming you are using IP: the IP and TCP headers count against the MTU. This means that if you want each frame to carry 1500 bytes of application data, your MTU would need to be at least 1540 bytes to cover the 1500 bytes of data plus the 20-byte IPv4 header and the 20-byte TCP header (more if either header carries options).
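As a quick sanity check on that accounting, here is a small Python sketch, assuming a minimal 20-byte IPv4 header, a minimal 20-byte TCP header, and no VLAN tag: the MTU covers the IP packet, while the 14-byte Ethernet header and 4-byte FCS sit outside it.

```python
ETH_HEADER = 14   # destination MAC + source MAC + EtherType (no VLAN tag assumed)
ETH_FCS = 4       # frame check sequence (CRC-32) in the trailer
IPV4_HEADER = 20  # minimal IPv4 header, no options
TCP_HEADER = 20   # minimal TCP header, no options

for mtu in (1500, 9000):
    app_bytes = mtu - IPV4_HEADER - TCP_HEADER  # application data per frame
    on_wire = ETH_HEADER + mtu + ETH_FCS        # total bytes on the wire per frame
    print(f"MTU {mtu}: {app_bytes} bytes of TCP payload, {on_wire} bytes on the wire")
```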
Why Use a Jumbo Frame
To understand why you would want to use jumbo framing, you must understand what happens when a frame is received by a switch, which I have outlined at a high level below (a toy code sketch follows the list).
- Compute the CRC for the frame and compare it to the CRC received in the frame trailer
- Look up the destination MAC address from the frame in the switch's MAC address table
- Transmit the frame to its next destination
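To illustrate those three steps, here is a toy Python sketch of the forwarding decision. Real switches do this in hardware at line rate; zlib.crc32 merely stands in for the Ethernet FCS check, a plain dict stands in for the MAC address table, and the forward function and its port numbers are made up for the example.

```python
import zlib

# A plain dict stands in for the switch's MAC address table: MAC -> egress port.
mac_table = {
    "aa:bb:cc:dd:ee:01": 1,
    "aa:bb:cc:dd:ee:02": 2,
}

def forward(dst_mac: str, frame_body: bytes, received_crc: int) -> int | None:
    """Toy model of the per-frame work a switch has to do."""
    # 1. Recompute the CRC and compare it to the one carried in the trailer.
    if zlib.crc32(frame_body) != received_crc:
        return None  # corrupted frame, drop it

    # 2. Look up the destination MAC in the MAC address table.
    egress_port = mac_table.get(dst_mac)
    if egress_port is None:
        return None  # unknown destination (a real switch would flood the frame)

    # 3. Transmit the frame out the chosen port (here we just return the port).
    return egress_port

body = b"some payload bytes"
print(forward("aa:bb:cc:dd:ee:02", body, zlib.crc32(body)))  # -> 2
```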
What should stand out from those three steps is that each one takes compute time, and that is the key point: the more frames we send, the more CRCs we have to compute and the more MAC table lookups we have to do, which adds time overhead. For most traffic on a network this is fine, because the amount of data being sent and received is not large enough to benefit from larger frame sizes.
Now let's think of something like a storage network running iSCSI between a NAS/SAN and a VM host. You are sending hard drive block data over the network, which is not only high bandwidth but also needs as low a latency as possible. In this scenario, as you add more virtual machines, and therefore more virtual hard drives talking over the storage network, the per-frame time overhead starts to degrade performance, so this is a common scenario for jumbo frames. In addition, the nature of the traffic is not a single device talking to many servers, as with an end-user PC, but a few VM hosts exchanging large amounts of data with just a few NAS/SAN devices.
Seeing the Benefit of Jumbo Frames
Let's take the scenario of transmitting 1 megabyte of data. If we send it in frames with a 1500-byte payload, we would need to send 667 Ethernet frames to transmit the data completely.
Take that same 1 megabyte of data and send it in jumbo frames with a 9000-byte payload, and we would now only need to send 112 frames to transmit the data completely.
Now let's make an assumption: say it takes 1 microsecond for a switch to process a single frame. That is absurdly slow compared to real forwarding rates, but for the sake of the math, follow along. Using our first example of 667 Ethernet frames, that works out to 667 microseconds, or about two-thirds of a millisecond, to forward our entire 1 MB of data. Now imagine the data needs to traverse 3 switches to reach the destination device; that brings us to 2001 microseconds, or roughly 2 milliseconds, to move our 1 MB of data just across the network!
Below is a table for our two example scenarios showing the time it would take to transmit our 1 MB of data in standard vs. jumbo frames, based on our assumption of 1 microsecond (us) of processing overhead per frame per switch.
|  | 1 Switch Hop | 2 Switch Hops | 3 Switch Hops |
| --- | --- | --- | --- |
| 1500-byte payload, 1 MB of data, 667 frames | 667 us (0.7 ms) | 1334 us (1.3 ms) | 2001 us (2.0 ms) |
| 9000-byte payload, 1 MB of data, 112 frames | 112 us (0.1 ms) | 224 us (0.2 ms) | 336 us (0.3 ms) |
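The table falls straight out of the 1 us per-frame assumption; here is a short Python sketch that reproduces the numbers above.

```python
import math

DATA_BYTES = 1_000_000        # 1 MB of data to move
US_PER_FRAME_PER_SWITCH = 1   # the deliberately slow 1 us per-frame assumption

for payload in (1500, 9000):
    frames = math.ceil(DATA_BYTES / payload)
    for hops in (1, 2, 3):
        total_us = frames * hops * US_PER_FRAME_PER_SWITCH
        print(f"{payload}-byte payload, {hops} hop(s): {frames} frames, "
              f"{total_us} us ({total_us / 1000:.1f} ms)")
```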
As the table shows, the overhead grows at a much higher rate with the smaller payload size.
To clarify how exaggerated the time estimates here are: modern switches measure their forwarding rate for standard frame sizes in millions of packets per second, somewhere in the neighborhood of 10-70 million frames per second. At 10 million frames per second, a switch processes a frame in a tenth of a microsecond.
All this is to say that you won't see overhead causing absurd lag like in the table above, but as you scale out an application that could benefit from jumbo frames, you will start to see performance degradation at the standard frame size as the amount of data being sent grows.
Where Will I See Jumbo Frames Practically
You will likely only see jumbo frames used in data center networks, typically for storage networks or something ancillary to them. You likely won't have end users on a network that supports jumbo frames, because it is simply not needed and configuring it will cause more headaches than you care to deal with. I most often see jumbo frames used on an iSCSI network between VM hosts and a NAS/SAN.
I hope this cleared up what jumbo frames are, how they are used, and the scenarios in which they are useful. If you have additional questions, leave a comment below so I can clear things up!