
I just got my hands on a few DL380 G5s, and I thought I would use them in my home lab to test out the creation of a Hyper-V cluster using an iSCSI storage setup. I have installed Server 2012 R2 on all three, and have created a couple of iSCSI disks/LUNs on one host, which has all 8 disks running in RAID 10.
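
In case it's useful, the LUNs on the storage host were created roughly like this (a minimal sketch using the iSCSI Target Server cmdlets; the path, size, and initiator IQNs are placeholders rather than my actual values):

    # Install the iSCSI Target Server role on the storage host
    Install-WindowsFeature FS-iSCSITarget-Server

    # Create a VHDX-backed virtual disk on the RAID 10 volume
    New-IscsiVirtualDisk -Path "D:\iSCSI\ClusterLUN1.vhdx" -SizeBytes 500GB

    # Create a target and allow the Hyper-V node initiators to connect to it
    New-IscsiServerTarget -TargetName "HyperVCluster" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv-node1","IQN:iqn.1991-05.com.microsoft:hyperv-node2"

    # Map the virtual disk to the target so it is presented as a LUN
    Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSI\ClusterLUN1.vhdx"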

All three servers have at least 6 NICs, so I decided my best option was to use 4 on the storage server for iSCSI, 1 for host management, and 1 as standby. On the Hyper-V nodes, I would use 2 for iSCSI, 2 for the VM LAN, 1 for management, and 1 as standby.

I separated my management and storage traffic onto separate switches to start with, for best performance. The storage host uses 2012's built-in NIC teaming feature to combine the NICs into a single interface/IP (according to this article, that teaming setup is supported on the target side). On the Hyper-V hosts, I kept the NICs separate and installed MPIO instead (using this chap's guide), setting up a path from each NIC to the storage IP.
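
For reference, the two sides are configured roughly like this (a sketch; the adapter names, team name, and IP addresses are placeholders for my actual ones):

    # Storage host: combine the four iSCSI NICs into a single team/IP using the built-in teaming
    New-NetLbfoTeam -Name "iSCSI-Team" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Hyper-V hosts: no teaming - enable MPIO and claim iSCSI devices (needs a reboot)
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # One iSCSI session per initiator NIC, all pointing at the storage host's teamed IP
    New-IscsiTargetPortal -TargetPortalAddress "10.0.10.10"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
        -InitiatorPortalAddress "10.0.10.21" -TargetPortalAddress "10.0.10.10"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
        -InitiatorPortalAddress "10.0.10.22" -TargetPortalAddress "10.0.10.10"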

My query basically revolves around this: when doing a disk test on the storage host, I get around 250MB/s read/write (both on the physical volume and on the mounted VHDX that my iSCSI target points to). When I use a single NIC on the Hyper-V hosts and attach that iSCSI LUN, I get around 95-100MB/s (expected, given a single gigabit interface). When I then set up the second NIC, my reads and writes go up to about 150MB/s, which I would have expected to be closer to 200MB/s. When adding a third NIC to the mix, my reads and writes still sit at around the 150MB/s mark.
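
In case it matters, this is roughly how I'm checking that both paths are actually in use and that the load-balancing policy is round robin (a sketch; I'm not certain this is related to the cap):

    # Confirm there is one connected iSCSI session per initiator NIC
    Get-IscsiSession | Select-Object InitiatorPortalAddress, TargetNodeAddress, IsConnected

    # Show the MPIO disks and the load-balancing policy each one is using
    mpclaim -s -d

    # Make round robin the default policy for iSCSI disks so traffic spreads across both paths
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR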

I know that I shouldn't expect the same result as the test I did on the host itself, but I find it odd that it is capping at 150MB/s. I have jumbo frames enabled on the switch and on all NICs, but I don't seem to be able to overcome this cap. Are there any other steps I should be performing here, or would this be the expected transfer rate in this kind of setup?
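
For completeness, jumbo frames were set and verified along these lines (the adapter name and target IP are placeholders; the registry keyword can vary by driver):

    # Enable ~9k jumbo frames on each iSCSI NIC
    Set-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

    # Verify an unfragmented jumbo frame makes it end to end (8972 = 9000 minus IP/ICMP headers)
    ping 10.0.10.10 -f -l 8972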

Eds
  • Are you teaming across onboard and expansion NICs? This could be a bus bottleneck. – Linef4ult Dec 20 '15 at 23:12
  • 1
    The teaming on the storage host is on a single card, which is an HP NC364T 4 port gigabit NIC. Same is true on the Hyper-V side, using the same card. Hadn't thought of spreading across multiple physical adapters, so will have a go. – Eds Dec 20 '15 at 23:31
  • Well, different physical NICs seemed to make no difference. For the sake of it, I tried a simple file copy rather than a CrystalDiskMark test, and seemed to get closer to 220MB/s, so I am now starting to suspect that the iSCSI MPIO was fine and it was the benchmark that was off. Am going to go with iSCSI for my test cluster and see what happens. – Eds Dec 21 '15 at 19:55
  • Even more unusual, after I see the file copy operation complete, I still see background network traffic on both the Hyper-V node and the storage server, still capped at 100MB/s. This is really starting to frustrate me! – Eds Dec 21 '15 at 20:12
  • Try disabling Virtual Machine Queues, rebooting the bare metal and testing again. – Linef4ult Dec 21 '15 at 20:16
  • The only place I could find VMQ enabled was on the NIC team adapter. Disabled that and restarted. Copying to the storage machine shows background traffic after the file operation completes; copying from it shows no network traffic on either end?!?! Haha – Eds Dec 21 '15 at 20:53
  • The VMQ issue is tied to Broadcom adapters. DLs have those if you order them with them. You disable VMQ on the adapter. Then, rebind your NIC to a different virtual switch. You can change the binding back to the original, but you have to change virtual switches once. You may find that your traffic monitoring works now. – Citizen May 28 '16 at 03:59
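
A minimal sketch of the VMQ-disable and switch-rebind steps suggested in the comments above (the adapter and vSwitch names are placeholders, not the actual ones in this setup):

    # Turn VMQ off on the physical adapters behind the VM switch
    Disable-NetAdapterVmq -Name "VM-NIC1","VM-NIC2"

    # Rebind the virtual switch to a different adapter and then back again
    Set-VMSwitch -Name "VM-LAN" -NetAdapterName "VM-NIC2"
    Set-VMSwitch -Name "VM-LAN" -NetAdapterName "VM-NIC1"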

0 Answers