XenSrvr 6.0.2 Jumbo Frames in Windows 2008R2 server not working

Started by Chuck Veillon , 25 August 2012 - 03:43 PM
22 replies to this topic

Chuck Veillon Members

Chuck Veillon
  • 9 posts

Posted 25 August 2012 - 03:43 PM

I've enabled Jumbo frames (MTU 9000) on my SAN device, XenServer, and on the interface going into my Windows 2008R2 VM. The switch is jumbo enabled and tests fine.

192.168.30.10 - SAN
192.168.30.20 - XenServer console
192.168.30.30 - Win Server

I'm able to test jumbo frames successfully between the XenServer console and my SAN, and between the XenServer console and my Windows server.

To test this, I did the following from the XenServer console:
ping 192.168.30.10 -M do -s 8900
ping 192.168.30.30 -M do -s 8900

The packets come back fine, with no fragmentation errors. Just to make sure, I ran the same test with a packet size of 9100, and it gets fragmentation errors.

But if I do ping tests from the Windows server console, they fail with frag errors.

To test this, I do the following from the Windows server:
ping 192.168.30.10 -f -l 8900
ping 192.168.30.20 -f -l 8900
If I do a ping 192.168.30.10 -f -l 1400, it works fine.
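A note on the payload sizes above: they aren't arbitrary. An unfragmented ICMP echo must fit a 20-byte IPv4 header plus an 8-byte ICMP header inside the MTU, so the largest don't-fragment payload at MTU 9000 is 8972 bytes, which is why 8900 succeeds while 9100 draws fragmentation errors. A minimal sketch of the arithmetic (the helper name is mine):

```shell
#!/bin/sh
# Largest ICMP echo payload that fits in a single frame at a given
# MTU: the MTU minus the 20-byte IPv4 header and 8-byte ICMP header.
max_icmp_payload() {
    echo $(( $1 - 20 - 8 ))
}

max_icmp_payload 9000   # 8972: 8900 fits, 9100 does not
max_icmp_payload 1500   # 1472: the standard-frame ceiling
```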

Within Windows it's running the Citrix PV Ethernet Adapter driver, driver date 7/19/2011, driver version 5.9.960.49119.
Under the Advanced tab of the network properties there are only a few options to choose from, and none of them looks like a frame-size adjustment.

So if Windows can't ping with larger frames, I'm thinking jumbo frames aren't really working, even though the ping test against the WinSrvr IP address succeeds when run from the XS console.

Ideas? Fixes?

Thanks!
Chuck



Venugopal Busireddy Members

Venugopal Busireddy
  • 2 posts

Posted 19 December 2012 - 05:53 PM

This question was posted almost four months ago, and I am surprised to see this is still not answered!

I am using XenServer 6.1, with a Windows 2008 R2 VM. The Citrix PV Network Adapter driver version is 7.0.0.65 dated 9/10/2012 (seems recent enough!), and I still have the same problem. I do not see the option to enable jumbo frames!

Venu



Ron Abt Members

Ron Abt
  • 24 posts

Posted 19 December 2012 - 07:04 PM

To enable jumbo frames in Win2008R2 from an admin-context command prompt:

netsh int ipv4 set subint "<interface name>" mtu=9000 store=persistent

To confirm the change:

netsh int ip show int



Venugopal Busireddy Members

Venugopal Busireddy
  • 2 posts

Posted 19 December 2012 - 07:37 PM

I already tried that. That makes it worse...

Without running that command, things work as follows: an outgoing large ping packet is split into smaller frames (to fit the 1500 MTU). When the far end receives this large ping and replies with one large packet, XenServer appears to split it into smaller frames and pass them upstream. The net result is that the communication occurs, but not at the full MTU size on the outgoing side.

With MTU set to 9000 as described, nothing goes out on the wire!!!

Did you verify that the jumbo frames actually go out, or are you just mentioning the Windows commands?
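The splitting described above is ordinary IPv4 fragmentation at the default MTU: each 1500-byte frame carries at most 1480 bytes of IP payload (fragment data travels in multiples of 8 bytes), so an 8900-byte ping, 8908 bytes with its ICMP header, leaves as seven fragments rather than one jumbo frame. A rough sketch of that count (the helper name is mine):

```shell
#!/bin/sh
# Fragments needed for an ICMP echo of a given payload size at a given
# MTU. Each fragment spends 20 bytes on the IPv4 header, and non-final
# fragments carry data in multiples of 8 bytes.
fragment_count() {
    data=$(( $1 + 8 ))                   # payload plus ICMP header
    per_frag=$(( ($2 - 20) / 8 * 8 ))    # usable data bytes per frame
    echo $(( (data + per_frag - 1) / per_frag ))
}

fragment_count 8900 1500   # 7 fragments on a standard-MTU path
fragment_count 8900 9000   # 1 frame when jumbo frames work end to end
```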



Ron Abt Members

Ron Abt
  • 24 posts

Posted 20 December 2012 - 02:04 PM

It turns out I was just posting the commands. Back in October I specifically tested jumbo frames in a Win2008R2 SP1 guest on a fresh load of XenServer 6.1 (Intel i350 NICs, vSwitch, PowerConnect 6248s, and an EqualLogic PS6000X) and was successful. I used the same ping parameters you did to test. My ATTO before/after was around 10% better with jumbo frames.

In my quest to make 6.1 useful for production (which it still isn't), I have installed hotfixes, downgraded XenTools in the guest, added and removed NICs on the hosts, added and removed hosts from the pool, changed between vSwitch and bridge and back, changed dom0 memory, and so on.

I booted it all back up and tested again, and now I have the same results you do. Jumbo frames are working fine for the host iSCSI, but not in the guest.

We may be wasting our time. The Citrix 'Designing XenServer Network Configurations' document (CTX134880) says this: "Currently, jumbo frames are only supported for storage networks with iSCSI HBAs and the vSwitch configured as the networking bridge." I read this to mean they don't support jumbo frames in the guest. They don't support my host jumbo-frame config either, since I'm using NICs and not iSCSI HBAs.



Jesus SAN MIGUEL Members

Jesus SAN MIGUEL
  • 22 posts

Posted 22 December 2012 - 04:32 PM

I can confirm your results in my setup. Jumbo packets get lost in the vSwitch. It seems a good time to say goodbye to XenServer and start trying Hyper-V :-(



Tobias Kreidl Members

Tobias Kreidl
  • 12,370 posts

Posted 22 December 2012 - 05:42 PM

I have a couple of Linux (RHEL 5) VM guests under XS 6.0.2 and use jumbo frames for a direct iSCSI connection to them (independent of any XenServer SR). If you mean whether an MTU of 9000 is supported for general network traffic, I'm not as sure, though I do have a VLAN set up that way for a management interface underneath which iSCSI VLANs are defined, and that seems to work as well. The setup is not so trivial for any of this, though. This is all running under Open vSwitch.
--Tobias



Jesus SAN MIGUEL Members

Jesus SAN MIGUEL
  • 22 posts

Posted 22 December 2012 - 06:33 PM

Our configurations seem to be quite similar... In my case the VM is Windows 2008R2.
The Xen host has 5 physical interfaces, 2 of which are dedicated to storage with MTU 9000 against an iSCSI SR.
The Windows guest has 3 virtual NICs, 2 of which are bridged to the storage-dedicated interfaces on the Xen host, also with MTU 9000. The problem is that when a packet is bigger than 1500 bytes, it gets lost. Here is the guest configuration. The iSCSI storage networks are 3 and 5 (192.168.201.0/24 and 192.168.202.0/24).

C:\Users\administrator>netsh interface ipv4 show interfaces

Idx  Met         MTU  State      Name
---  ---  ----------  ---------  ---------------------------
  1   50  4294967295  connected  Loopback Pseudo-Interface 1
 16   10        9000  connected  Local Area Connection 3
 17   10        1496  connected  Local Area Connection 4
 19   10        9000  connected  Local Area Connection 5

C:\Users\administrator.MEDIOS>netsh interface ipv4 show addresses

Configuration for interface "Local Area Connection 5"
DHCP enabled: No
IP Address: 192.168.202.164
Subnet Prefix: 192.168.202.0/24 (mask 255.255.255.0)
InterfaceMetric: 10

Configuration for interface "Local Area Connection 4"
DHCP enabled: No
IP Address: 192.168.200.164
Subnet Prefix: 192.168.200.0/24 (mask 255.255.255.0)
Default Gateway: 192.168.200.107
Gateway Metric: 256
InterfaceMetric: 10

Configuration for interface "Local Area Connection 3"
DHCP enabled: No
IP Address: 192.168.201.164
Subnet Prefix: 192.168.201.0/24 (mask 255.255.255.0)
InterfaceMetric: 10

Configuration for interface "Loopback Pseudo-Interface 1"
DHCP enabled: No
IP Address: 127.0.0.1
Subnet Prefix: 127.0.0.0/8 (mask 255.0.0.0)
InterfaceMetric: 50

C:\Users\administrator>ping -l 1000 -f 192.168.201.243

Pinging 192.168.201.243 with 1000 bytes of data:
Reply from 192.168.201.243: bytes=1000 time<1ms TTL=255
Reply from 192.168.201.243: bytes=1000 time<1ms TTL=255
Reply from 192.168.201.243: bytes=1000 time<1ms TTL=255
Reply from 192.168.201.243: bytes=1000 time<1ms TTL=255

Ping statistics for 192.168.201.243:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\administrator.>ping -l 2000 -f 192.168.201.243

Pinging 192.168.201.243 with 2000 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 192.168.201.243:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
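One way to spot trouble in a listing like the one above is to scan it for interfaces whose MTU is still below the jumbo target (note "Local Area Connection 4" sitting at 1496). A small sketch, using sample lines copied from the output above; on a real box you would pipe the live netsh output in instead:

```shell
#!/bin/sh
# Flag interfaces from "netsh interface ipv4 show interfaces"-style
# output whose MTU is below the jumbo target. The sample lines are
# copied from the listing above.
netsh_output='16 10 9000 connected Local Area Connection 3
17 10 1496 connected Local Area Connection 4
19 10 9000 connected Local Area Connection 5'

printf '%s\n' "$netsh_output" | awk -v target=9000 '
    $3 ~ /^[0-9]+$/ && $3 + 0 < target {
        name = $5
        for (i = 6; i <= NF; i++) name = name " " $i
        print name " has MTU " $3
    }'
```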



Jesus SAN MIGUEL Members

Jesus SAN MIGUEL
  • 22 posts

Posted 22 December 2012 - 06:56 PM

Hi Tobias,

Just for fun I created a RHEL 6 VM with the same setup as the windows VM and you are right; the jumbo packets work just fine:

[root@localhost ~]# ifconfig eth1 192.168.201.141 netmask 255.255.255.0 mtu 9000
[root@localhost ~]# ifconfig eth2 192.168.202.141 netmask 255.255.255.0 mtu 9000

[root@localhost ~]# ping 192.168.201.243 -s 2000 -M dont
PING 192.168.201.243 (192.168.201.243) 2000(2028) bytes of data.
2008 bytes from 192.168.201.243: icmp_seq=1 ttl=255 time=0.656 ms
2008 bytes from 192.168.201.243: icmp_seq=2 ttl=255 time=0.349 ms
2008 bytes from 192.168.201.243: icmp_seq=3 ttl=255 time=0.366 ms
2008 bytes from 192.168.201.243: icmp_seq=4 ttl=255 time=0.332 ms

--- 192.168.201.243 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3767ms
rtt min/avg/max/mdev = 0.332/0.425/0.656/0.135 ms
[root@localhost ~]#
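A side note on the commands above: ifconfig settings do not survive a reboot, and on newer distributions the iproute2 tools are preferred. A runtime equivalent plus a persistence hint, assuming RHEL-style network scripts and the same interface name:

```shell
# iproute2 equivalent of the first ifconfig line above (runtime only):
ip addr add 192.168.201.141/24 dev eth1
ip link set dev eth1 mtu 9000

# To persist across reboots on RHEL, add "MTU=9000" to the interface
# config file, e.g. /etc/sysconfig/network-scripts/ifcfg-eth1
```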

So the problem lies within the paravirtualized Windows drivers!

Regards,
Jesus



Tobias Kreidl Members
  • #10

Tobias Kreidl
  • 12,370 posts

Posted 22 December 2012 - 08:59 PM

Jesus,
That would be my conclusion, as well. I wonder if Windows 2012 would work with jumbo frames? I assume the network ports are all properly set for flow control in both cases.
--Tobias



Jesus SAN MIGUEL Members
  • #11

Jesus SAN MIGUEL
  • 22 posts

Posted 23 December 2012 - 12:02 AM

It is not a problem with the Windows stack... I just confirmed the latest PV drivers are buggy: avoiding XenTools 6.1 and sticking to the reliable old 5.6 tools allows jumbo frames.
As a bonus, I also discovered that Windows Server 2012 will not install as a guest in XenServer 6.1 with the supplied experimental template.

Regards,
Jesus



Jesus SAN MIGUEL Members
  • #12

Jesus SAN MIGUEL
  • 22 posts

Posted 26 December 2012 - 12:11 PM

Just for completeness: don't waste your time with the legacy 6.1 drivers. They do not work either.

Regards
Jesus



Brian Collins Members
  • #13

Brian Collins
  • 4 posts

Posted 07 February 2013 - 08:29 PM

Hello, Citrites?
This is something many of us really need an answer and a resolution for. No jumbos for Windows guests? Come on!
The whole universe is moving to 10GbE.



Jesus SAN MIGUEL Members
  • #14

Jesus SAN MIGUEL
  • 22 posts

Posted 08 February 2013 - 01:00 PM

MTU-gate is still present after XS61E009 and XS61E010.
Citrix should focus on important issues like this one, or on VSS snapshots, and postpone new niceties that are rarely used or needed.

Regards,
Jesus



Sander Revenboer Members
  • #15

Sander Revenboer
  • 28 posts

Posted 14 March 2013 - 03:46 PM

Another victim here! I have the same issue at one of my customers. Citrix just removed jumbo frames support without announcing it. I do not want to use the older 5.6 tools on a 6.x server! Anyone have any news on this one?

Best regards,
Sander



Jesus SAN MIGUEL Members
  • #16

Jesus SAN MIGUEL
  • 22 posts

Posted 14 March 2013 - 04:02 PM

Beware of the 5.6 tools on XenServer 6.1. I have been seeing lockups in VMs with several vCPUs in this scenario.

Regards,
Jesus



Rachel Berry Citrix Employees
  • #17

Rachel Berry
  • 391 posts

Posted 14 March 2013 - 08:28 PM

Hi Sander,

To the best of my knowledge, XenServer has only ever supported jumbo frames on storage networks, and not in the guest. I believe this remains the case. http://support.citrix.com/article/CTX134880

If you are experiencing otherwise, please raise this with support: if an issue is raised through the appropriate channels, an engineer can triage and escalate it. We have not had this issue raised to us.

Best wishes,
Rachel

-----
http://support.citrix.com/article/CTX134880
Currently, jumbo frames are only supported for storage networks with iSCSI HBAs and the vSwitch configured as the networking bridge. This means:
1. If you want to configure jumbo frames for your storage traffic, you must use a storage device that can use an HBA, such as an iSCSI hardware SAN or a Fibre Channel SAN.
2. You must configure the vSwitch as the networking bridge. You can choose to configure the Distributed Virtual Switch solution or just configure the vSwitch as the bridge. See "Deciding to Use the Distributed Virtual Switch" on page 53.
3. You must configure end-to-end support for your jumbo frames, including switches and NICs that are compatible with them. For more information, see "Configuring Networks with Jumbo Frames" on page 85.



Tobias Kreidl Members
  • #18

Tobias Kreidl
  • 12,370 posts

Posted 14 March 2013 - 11:43 PM

FYI, it works fine on Linux guests (the independent iSCSI connectivity, as well as jumbo frames), but it takes some -- shall we say -- rearranging of the rc command order to make it work. In our case, the connectivity is to iSCSI arrays. I can see why it's not directly supported out of the box -- it's a bit complicated.
However, HA still works totally fine, and performance is much better since the SR mechanism is bypassed altogether and all one is talking to is an LVM volume. I think this is similar to the concept of a "raw" device as discussed as a connection option for EqualLogic arrays.
--T.



Jesus SAN MIGUEL Members
  • #19

Jesus SAN MIGUEL
  • 22 posts

Posted 15 March 2013 - 09:57 AM

Hi Rachel,

That is not very precise. Please consider that the document you posted mentions (on page 33) the use of jumbo frames on VM VIFs on cross-server private networks.
In fact, from the VM's point of view it is completely transparent, as long as the underlying network is jumbo-frames enabled.
In my experience it is a rock-solid feature for Linux guests (even in XenServer 6.1), while for Windows guests it worked until the arrival of the infamous 6.0 guest tools.

Kind regards,
Jesus



Rachel Berry Citrix Employees
  • #20

Rachel Berry
  • 391 posts

Posted 17 March 2013 - 04:18 PM

The official support matrix for jumbo frames has only ever covered the conditions specified above, on storage networks.

Officially supported features are fully regression tested, and customers can rely on them continuing to work. There are things you can do with XenServer that may well work; if they are not fully supported, though, you do them at your own risk. PV-on-HVM is not officially supported (i.e. use of the other template).

If you believe something officially supported has regressed, please do raise a support issue to highlight this fact - that is the official channel to get these issues investigated, as any feature a customer is paying for support on should continue to work.

I can ask documentation to consider removing/rewriting the VIF usage to ensure people don't get the impression guest support exists for jumbo frames. Would that make it clearer?

Best wishes,
Rachel