Jumbo Frames and Multi-NIC vMotion Performance over 10GbE – Part 2

A week ago I wrote a post as a follow-on to Chris Wahl’s post on performance gains (or lack thereof) when using jumbo frames for multi-NIC vMotion. A fellow blogger, Josh Odgers (blog / twitter), posted a comment suggesting that it would be interesting to see more testing with the same workload across all tests. In my previous testing I performed three separate tests, and in the final test I used a different workload due to resource constraints.

The tests below all run the same workload, and five tests were performed instead of three. If you’d like to know more about the equipment or the test setup/script, see my original post: Jumbo Frames and Multi-NIC vMotion Performance over 10GbE

  • The only difference in setup is that all VMs are configured with 16GB of RAM instead of the 24GB used in the previous tests

The Tests

For testing, I used the same tool as Chris: prime95. All VMs were running at 100% CPU and using 13GB of memory, 12288MB allocated in prime95 with the rest used by the OS. The following five tests were performed using a PowerCLI script (see my previous post for the full script; a simplified sketch also follows the list below):

  • Test 1  — vMotion of 1 powered-on VM loaded with prime95
  • Test 2  — vMotion of 2 powered-on VMs loaded with prime95
  • Test 3  — vMotion of 4 powered-on VMs loaded with prime95
  • Test 4  — vMotion of 6 powered-on VMs loaded with prime95
  • Test 5  — vMotion of 8 powered-on VMs loaded with prime95
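
For those who don’t want to dig through the old post, here is a minimal sketch of what a timed, concurrent vMotion run looks like in PowerCLI. This is not the exact script I used; the vCenter address, host name, and VM names are placeholders:

    # Kick off all vMotions at once and time how long the batch takes
    Connect-VIServer -Server "vcenter.lab.local"

    $vms        = Get-VM -Name "prime95-01","prime95-02"   # adjust per test (1, 2, 4, 6, or 8 VMs)
    $targetHost = Get-VMHost -Name "esx02.lab.local"

    $timer = [System.Diagnostics.Stopwatch]::StartNew()
    $tasks = $vms | Move-VM -Destination $targetHost -RunAsync
    Wait-Task -Task $tasks
    $timer.Stop()

    Write-Host ("{0} VM(s) migrated in {1:N2} seconds" -f $vms.Count, $timer.Elapsed.TotalSeconds)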

Each test was performed using two configurations with regard to the MTU (a sample command for changing the vmk MTU follows the list):

  • Configuration 1:
    • vmk interfaces: 1500
    • 1000v uplink port profile: 9000
  • Configuration 2:
    • vmk interfaces: 9000
    • 1000v uplink port profile: 9000
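
Switching the vmk side between the two configurations can be done with a short PowerCLI pipeline along these lines (the host and vmk names are assumptions for my lab; the 1000v uplink port profile MTU is configured on the switch side and stays at 9000 throughout):

    # Raise the vMotion vmk interfaces to 9000 MTU (placeholder host/vmk names)
    Get-VMHost -Name "esx01.lab.local" |
        Get-VMHostNetworkAdapter -VMKernel -Name "vmk1","vmk2" |
        Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false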

The Results

Once all tests were complete I removed the highest and lowest times from each test’s set of runs and averaged the remainder. All times below are in seconds. The results are broadly similar to the previous round of testing:
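
This is just a trimmed mean. As a quick illustration in PowerShell, with made-up run times rather than my raw data:

    # Drop the fastest and slowest run, then average what's left
    $times   = 19.2, 18.4, 18.9, 18.5, 18.8   # illustrative values only
    $sorted  = $times | Sort-Object
    $trimmed = $sorted[1..($sorted.Count - 2)]
    ($trimmed | Measure-Object -Average).Average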

Test 1 — 1 VM, 13GB mem workload

  • 1500 MTU: 18.75
  • 9000 MTU: 18.10

In test 1, jumbo frames are only marginally faster; the 3.46% gain is barely noticeable.

Test 2 — 2 VMs, 13GB mem workload

  • 1500 MTU: 34.50
  • 9000 MTU: 21.95

In test 2, jumbo frames are NOTICEABLY faster: 36.37%.

Test 3 — 4 VMs, 13GB mem workload

  • 1500 MTU: 56.94
  • 9000 MTU: 47.19

In test 3, jumbo frames are still faster at 17.12%, though the gain is roughly half of what test 2 showed.

Test 4 — 6 VMs, 13GB mem workload

  • 1500 MTU: 109.43
  • 9000 MTU: 107.69

In test 4, just as in test 1, jumbo frames are only marginally faster: 1.59%.

Test 5 — 8 VMs, 13GB mem workload

  • 1500 MTU: 203.38
  • 9000 MTU: 207.84

In test 5, jumbo frames are actually slower, though just barely: 2.14%.

Conclusion

So, what do you think? Are jumbo frames worth it? If you combine all of the 1500 MTU times and all of the 9000 MTU times and calculate the overall difference, jumbo frames are ONLY 4.72% faster than the traditional MTU size. Scott Lowe commented on my previous testing in one of his recent Technology Short Takes, and, paraphrasing, he advised to consider the complexity jumbo frames add to your design and to weigh the pros and cons. While I agree with Scott on this, I will say: if you’re already running jumbo frames as part of the design, why not use the 9000 MTU for vMotion as well? In that case I think it’s a no-brainer; change the MTU size to 9000. Overall there is an increase in performance depending on the workload, and, based on my testing, you can see upwards of a 36% improvement.

Comments

  1. Very informative post, but I would like to look at the numbers somewhat differently…
    If we look at multi-NIC vMotion with jumbo frames as a process that needs to be optimized from a time perspective, that is, reducing the time it takes for a VM to be migrated over, then the best course of action would be to just limit the number of concurrent vMotion tasks to 2 to 4.
    That would give the best possible migration time when migrating multiple VMs (I think the migration time of multiple VMs is more interesting, as a few seconds for migrating a single VM is not a critical issue).
    According to your numbers, the best possible vMotion time would be around 11 seconds/VM with jumbo frames, and 15 seconds/VM with 1500 MTU, a major improvement over the time for single VMs, which is around 15 seconds/VM.
