
Dell R710 with PERC H700 or H200: Are SATA HDDs supported via these RAID controllers in XenServer 7.6?


Xen See

Question

Homelab noob here, thanks in advance for your help!

I need to know if SATA HDDs are compatible with XenServer 7.6CR when running in RAID on an H700 or H200 controller in a Dell R710 server.

According to XenServer's Hardware Compatibility List, the H700 and H200 only support SAS, not SATA. Am I to understand that SATA HDDs are not compatible with XenServer using Dell's H700 or H200 RAID controllers?

 

 

 

I'm building a homelab PoE security camera server / general virtualization server, and I have the following hardware:

  • Dell R710 w/ 2x Xeon 5680 CPUs, 128GB ECC RAM (16x8GB), H200 RAID Controller, 6x 3.5" hotswap bays
  • 3x WD 300GB 2.5" SATA HDD (old, questionable reliability, but free)
  • 3x WD Purple 3TB 3.5" SATA Surveillance HDD ($$$)
  • 1x WD Blue 1TB 2.5" SATA HDD
  • also have a few 2.5" SSDs on hand
  • might swap the H200 for an H700 RAID controller because I thought it was more powerful, and the seller offered a free "upgrade"; but now I'm worried about compatibility more than specs. Should I consider a different RAID controller other than these two?
  • ***EDIT: I returned the three Western Digital Purple and Blue SATA HDDs after deciding that SAS HDD or any SSD would be preferable to SATA HDD if acquired cheaply enough
  • 6x Dell (Seagate ES.2) 3TB 7.2k RPM 6Gb/s 3.5" SAS HDD; just found these for $37 ea. on Amazon...

 


So as far as SATA vs. SAS compatibility with storage controllers in XenServer goes: this applies only to the disk where you install the hypervisor, correct? XenServer would still be compatible with, for instance, reading data from an array of SATA drives on a NAS?

 

To rephrase: if the XenServer Hardware Compatibility List says both SATA and SAS are compatible with XenServer for a given RAID controller, then either type of HDD may be used for the XenServer installation volume; but if only SAS is listed for a particular RAID controller, then XenServer must be installed on an array of only SAS drives?

Is my understanding correct? And if so, should I use an array of SAS drives in my R710 hotswap bays for my hypervisor and VM installation disk, and then, say, a NAS RAID array for storage of my security camera footage?

 

 


When planning this build, I had imagined that either

  • A: my hypervisor, VMs, and camera footage would all be stored on the same really large RAID volume, or
  • B: Two arrays in the hotswap bay on the same RAID controller, one for hypervisor / VMs and one for camera footage, or
  • C: The hotswap array is just for camera footage; install hypervisor and VMs on an SSD or array of SSDs? Use two separate RAID controllers in the same server? Or an M.2 NVMe SSD in PCIe?

 

 

But now, realizing the compatibility issues, and since I require a larger volume to store the footage than SAS is capable of, it seems I'll probably need to buy a NAS for my camera footage RAID and SAS drives for the hypervisor RAID. Unless there is some exotic solution that would make this all work without increasing the cost of the system?

 

I bought the WD Purple HDDs because they are the recommended drive for storing surveillance footage: they're designed to handle the abuse of 24/7 writes with high reliability. I hope my choices aren't too outdated. Are folks doing camera footage RAIDs with SSDs now?

 


Thanks again folks if you are taking the time to read this and help a noob!
Peace.

 

 

 


Hardware Compatibility List results for Dell PERC storage controllers (SAS only, no SATA support):
http://hcl.xenserver.org/storagecontrollers/47/Dell_EMC_PERC_H700
http://hcl.xenserver.org/storagecontrollers/39/Dell_EMC_PERC_H200


22 answers to this question


RAID5 with just 3 disks is not going to give you good performance at all, in particular with SATA and especially with writes. You need way more disks than that to get halfway good performance as well as total throughput; even on some of our SAS systems with 8 or more spinning disks, we had to go to RAID10 to get good performance. I have a pure SSD RAID5 array with just 4 disks, and even with the fast SSDs it doesn't do all that great for writes, not as well as a RAID10 setup with a total of around 20 spinning disks, in fact. For reads, it's super fast (20-40 µsec typically).
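To put rough numbers on that write penalty, here's a quick sketch. The per-drive IOPS figures and per-level write penalties below are textbook ballpark assumptions, not measurements of any particular hardware:

```python
# Rough front-end IOPS estimate for a RAID array under a mixed workload.
# WRITE_PENALTY = back-end I/Os generated per front-end write.
WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def effective_iops(n_drives, drive_iops, raid_level, write_fraction):
    """Approximate front-end IOPS; reads cost 1 back-end I/O, writes cost more."""
    raw = n_drives * drive_iops
    penalty = WRITE_PENALTY[raid_level]
    return raw / ((1 - write_fraction) + write_fraction * penalty)

# 3x 7.2k SATA drives (~75 IOPS each) in RAID5, 30% writes:
print(round(effective_iops(3, 75, 5, 0.30)))     # ~118 IOPS
# 20 spinning drives (~140 IOPS each) in RAID10, same mix:
print(round(effective_iops(20, 140, 10, 0.30)))  # ~2154 IOPS
```

The gap between those two numbers is why a 3-disk SATA RAID5 struggles with write-heavy loads.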

 

As to overall performance, the H700 RAID controller itself is, I think, good for at least 1.5 GB/sec, but with typical storage it's probably going to yield more like 150-200 MB/sec for you, way below the maximum throughput of the controller. We run almost all our XS dom0 instances on an associated RAID1 setup with 300 GB or smaller drives, so that part of it should be fine. I'd consider more, smaller disks for your main SR storage, though, if you have enough bays free to hold them.

 

-=Tobias


I always run the hypervisor (dom0) on a separate set of disks (usually in a small RAID1 configuration) so that it's not interfering with where the VMs are stored. My philosophy is to put VMs on volumes that are used for similar purposes; the OS and VMs have very different roles and I/O behavior, hence I split them up.

 

As to SATA vs. SAS, part of it will be determined by the limit of the overall I/O rate, which depends on the drive speed and I/O capabilities, how many drives you have in your RAID array, and what RAID configuration (5, 6, 10, 50, etc.) you have implemented. You can judge the overall IOPS based on the number of drives and their I/O capabilities plus the RAID configuration; with a controller like an H700, the drives are more likely to be the limiting factor. Honestly, a lot depends on what you do with this. If you have a lot of writes vs. reads (say 30% or more of the I/O), then faster drives and SAS will probably matter. If not, then SATA will probably be OK. A big concern to me would be the use of just 3 drives in a RAID configuration, since that's almost certain to be quite slow and you're not left with much failover flexibility; plus, with big drives, rebuilding a failed drive will take a very long time (many hours) and of course impact performance of your system overall during that entire time.
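On the rebuild-time point, a back-of-envelope estimate. The 100 MB/sec sustained rebuild rate is an optimistic assumption; a busy array will rebuild slower:

```python
# Minimum time to rebuild one failed drive: every byte of the
# replacement must be rewritten, so capacity / rebuild rate is a floor.
def rebuild_hours(capacity_tb, rebuild_mb_per_s=100):
    total_mb = capacity_tb * 1_000_000   # TB -> MB (decimal, as drives are sold)
    return total_mb / rebuild_mb_per_s / 3600

print(f"{rebuild_hours(3):.1f} h")   # a 3 TB drive: ~8.3 hours, best case
```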

 

Using different volumes on the same RAID card is unlikely to be much of a bottleneck given the number of drives you have overall. I have run 20+ drives through a SAS controller, and the controller itself (capable of 6 GB/sec) was never the limitation.

 

-=Tobias


That'd certainly be better: the more disks, the more throughput, and I would think for your camera data there's going to be a lot of I/O. If you're willing to risk a single disk failing, you could do 1 disk for the OS and 5 for the camera storage. You should not, IMO, do RAID10 with fewer than 6 disks, so RAID5 would be your best bet if you only have 4 or 5 disk bays available. Alternatively, you could buy a small external drive enclosure and hook it up, but there's of course additional cost involved there, plus you'd need another controller, an H800 for example.
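For comparing those layouts, the usable-capacity arithmetic is simple. A sketch (hot spares, if you add them, would subtract further from these numbers):

```python
# Usable capacity for common RAID levels with N identical drives.
def usable_tb(n_drives, drive_tb, raid_level):
    if raid_level == 5:
        return (n_drives - 1) * drive_tb   # one drive's worth of parity
    if raid_level == 6:
        return (n_drives - 2) * drive_tb   # two drives' worth of parity
    if raid_level == 10:
        return (n_drives // 2) * drive_tb  # mirrored pairs
    raise ValueError(f"unsupported RAID level: {raid_level}")

print(usable_tb(5, 3, 5))    # 5x 3TB for cameras in RAID5: 12 TB usable
print(usable_tb(6, 3, 10))   # 6x 3TB in RAID10: 9 TB usable
```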

 

Note added: 4 disks is indeed the absolute minimum required for RAID10, but you should have extra hot spare drives in case of a drive failure.

 

-=Tobias


XenServer doesn't support software RAID (though XCP-ng does).

 

More, smaller, and faster disks will get you better performance overall, in general. If you can afford some sort of external iSCSI unit, I'd do that if possible; there are some affordable ones that should work. Nearline SAS will be slower than full SAS, for sure.

 

I would tend to avoid putting the OS on the same array as your VMs unless you have very few of them as they will have contention with both reads and writes, in particular if the log file is active.

 

Bare drive probably means without any mounting bracket.

 

-=Tobias


I'd put the storage where it needs to be fastest on the fastest storage!

And, yes, the absolute minimum number of disks for RAID10 is four, but I figured you need at least one, or better two, hot spares. Also, with a lot of I/O you need more spindles, which means more physical disks to spread out the I/O.

And, yes, 2.5" disks are way more popular now than 3.5" drives. You can even get some decently priced 2.5" SSDs now, from Crucial, for example.


So my plan now, after learning that the HCL isn't necessarily 100% inclusive of compatible hardware, is to use a RAID5 array of 3x 300GB WD 2.5" SATA HDDs in the first three hotswap bays for the hypervisor and VMs; then, on the same RAID card (in the other three hotswap bays), I'll have a RAID5 array of 3x 3TB WD Purple 3.5" surveillance SATA HDDs for the camera footage, which will be recording 24/7, constantly overwriting itself.

I'm also considering a setup like RAID 1 for the hypervisor / VMs on just 2x 300GB drives, leaving 4 bays for the surveillance array, so I could use RAID 10.

 

Am I going to run into performance issues running both RAID volumes on the same H700? I've been warned against this as it may be sub-optimal, but I can optimize the configuration with more hardware or more exotic RAID setups later. Right now I just want to get the server spun up and running my cameras. Later I'll hopefully be adding VMs for other purposes, as I've spec'd this server to be overkill for a home surveillance server; I should have plenty of resources available to do other cool things...


I bought the three 3TB Western Digital Purple 3.5" surveillance HDDs because I read that they were the standard for recording security camera footage 24/7, so I figured I'd use those to store my footage. Is that information outdated? The camera footage array doesn't need to max out any specs except reliability, as long as it can keep up with the cameras (only 1 camera currently, a 4K Hikvision, but I plan to add a few more lower-res cameras, say up to 4MP).
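Running the numbers for my own sanity (the 16 Mbps bitrate for the 4K camera is my guess; actual rates depend on codec, frame rate, and scene complexity):

```python
# Days of 24/7 retention for a given usable array size and camera load.
def retention_days(usable_tb, total_mbps):
    bytes_per_day = total_mbps / 8 * 1e6 * 86400   # Mbps -> bytes/day
    return usable_tb * 1e12 / bytes_per_day

# One 4K camera at ~16 Mbps onto 6 TB usable (3x 3TB Purple in RAID5):
print(f"{retention_days(6, 16):.0f} days")   # ~35 days before overwrite
```

So even one camera fills a 6 TB array in about a month; adding cameras shrinks that proportionally.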

And I happened to have those three 300GB WD 2.5" HDDs lying around not being used, so I figured why not put the hypervisor on them? I have six total hotswap bays and two sets of three HDDs. So my chosen RAID config was based on resources at hand and naiveté. Three drives for Hypervisor. Three for cameras.

But taking into account what you are saying, should I be using different drives (SSD or SAS HDD instead of SATA HDDs)? Should I break the piggy bank once more to get a NAS for the camera array? SAS performance compared to SATA wasn't a huge concern for me originally, because I was trying to keep the budget tight. But I think I'll have quite a bit of RAM and CPU overhead, so I don't want file I/O bottlenecking my rig. I want a balanced setup that can do more than just a surveillance server VM, something that will take advantage of the surpluses I've tried to spec into this build.

 

Also, what are the merits and faults of
a) putting Hypervisor and camera footage on the same RAID volume, or
b) separating Hypervisor and camera into two different RAID volumes?

 

If b), should I use different volumes on the same RAID card, or different RAID cards entirely?


Anyway, thanks for your help Tobias!


Trying not to have to buy a NAS...

What if I did two of the six hot swap bays as RAID1 for the hypervisor and the other four in RAID6 or RAID10 for the OS's and camera footage?

 

I understand there are limitations, but for me cost is the biggest one. So without expanding to another box or machine for more drive bays, is the above a pretty good solution or what would be more optimal? Is there some fancy software RAID trick I could do with some PCIe mounted M.2 SSDs for the hypervisor, leaving all six bays for OS / cameras?


Returned the Western Digital Purple surveillance drives, and the Blue as well. Now looking for deals on SAS drives to get solid performance with decent storage capacity for my needs.

What's the cheap way to fill up 6 drive bays with SAS HDDs or SSDs and a lot of space? How do I get the most bang for my buck? Looks like SAS SSDs are prohibitively expensive...


Revised Plan

 

I want to put my Hypervisor on 2x 300GB 15k SAS 6Gb/s 2.5" HDD in RAID 1 and put camera footage on 4x 4TB 7200 RPM SAS 6Gb/s HDD in RAID 10. Probably put OS's on the same virtual disk as the Hypervisor for now, until I find a better solution. Sound reasonable?

 

For storing the operating systems / virtual machines: if I have to pick between a) the faster-RPM hypervisor virtual disk array or b) the slower-RPM camera footage virtual disk array, the smaller, faster array would be the better choice, right? Even though this puts the OSes/VMs on the same virtual disk as the hypervisor, it would still give better performance than the other option, wouldn't it? Otherwise, to separate the OS storage from the hypervisor, I'm considering storage expansion via the PCI slots or some external solution, whichever can be done cheaper.

 

Also, I read about Linux MD RAID 10, where you can create a RAID 10 volume with only two physical drives using some software RAID voodoo built into the Linux kernel. Is that worth pursuing for the 2x 300GB 15k SAS array? Are performance gains possible with a software RAID 10 on just two physical drives?

 

Hard Drives that I'm Considering:

  • Dell 300GB 15k SAS 6Gb/s 2.5" HDD, Mfg# 0H8DVC available on Amazon for $34
  • Seagate 4TB 7200 RPM SAS 6Gb/s 128MB cache Internal Bare Drive, Mfg# ST4000NM0023 available on Amazon for $68 (my top choice for the 4TB HDD since it's SAS, not NL-SAS)
  • Dell 4TB 7200 RPM SAS 6Gb/s HDD, Mfg# 529FG available on Amazon for $58 (but this is nearline SAS, which I think I should avoid)


Does "bare drive" in the Seagate listing mean that this drive just has exposed internals, as opposed to a full metal case?


I should have bought the 8-bay 2.5" R710 instead of this 6-bay 3.5" one. I probably saved money getting this one, but now it looks like I need a NAS too.

 

Would it be better to put a) the camera footage or b) the VMs on a NAS? Sounds like only one of the two can fit in my drive bays if the Hypervisor is using the first two.


Dang it, I think I'll return those drives I just bought, because I found a good deal on Seagate 15k.7 3.5" 450GB SAS drives on eBay. Amazon's hard drive deals just aren't as good.

 

Anyway, Tobias, you've already been so helpful, but I wonder if you have a moment to consider this weird issue I'm having with this TechMikeNY R710 I bought?

Their customer service representative has not determined the cause of the issue yet, and this is not specifically Citrix or XenServer related, but my R710 iDRAC logs show several instances of the following weird error message from syslogd:

cannot resolve hostname 'xxxxxx-xx-xx.serverpod.net', giving up

I changed the first part of that address, but it's just some random-looking letters followed by -xx-xx.serverpod.net, and the logs located at iDRAC6 WebGUI > iDRAC Settings > Logs show an attempt to connect to this address every so often, with no 100% replicable trigger.

I have some ideas about how certain events in the iDRAC log, and the actions I was taking prior to the weird error message, might have triggered it: namely power cycling, resetting any of the server controllers (iDRAC, RAID, USC; am I forgetting any?), or updating/re-flashing firmware. It could all be coincidence, but these are the kinds of things I was doing when the error message appeared.

I see the first instance of the message on a date when the server was still in the seller's possession, but it has since occurred several times, usually while I was attempting to wipe any previous configs by re-flashing firmware (doesn't work; configs carry over) and resetting controllers (hasn't worked yet, even though settings do get wiped).

Any thoughts before I pull my hair out, douse it with gasoline, and use it to light this server on fire?

edit: I should mention the server is not connected to the internet, only to administrative machines: XenServer on a NIC, and iDRAC on the dedicated out-of-band NIC. So if the weird address I'm seeing is some benign system thing, some LAN service, then it would make sense that it's failing. But that address does not look benign to me; it looks like a previous owner's remote access server.

double edit: AHA! I think it was the USC reset that needed doing. I thought I already did that, but maybe I forgot to check the box for "Reset Lifecycle Controller" in the Delete Hardware Configuration and Reset Defaults wizard. This is sort of a "succeed silently" scenario though. I won't know that the mysterious external server address configuration is purged until my iDRAC logs have gone a sufficiently long time with no recurrence of the error message.

On 6/22/2019 at 0:05 PM, Tobias Kreidl said:

Maybe you need to reset your server's host name? See: https://www.citrix.com/blogs/2009/06/29/how-to-change-hostname-of-xenserver/

Seems like it's not resolving somehow. Always something, isn't it?

 

-=Tobias


No, the customer support rep at TechMikeNY (an excellent used-server seller, imo) reminded me to check the box to "Reset Lifecycle Controller" in the "Delete Hardware Configuration and Reset Defaults" wizard, which got rid of the remote server configuration that was attempting to establish a connection but failing, since the server is currently air-gapped.

On 6/20/2019 at 1:56 PM, Xen See said:

cannot resolve hostname 'xxxxxx-xx-xx.serverpod.net', giving up

double edit: AHA! I think it was the USC reset that needed doing. I thought I already did that, but maybe I forgot to check the box for "Reset Lifecycle Controller" in the Delete Hardware Configuration and Reset Defaults wizard. This is sort of a "succeed silently" scenario though. I won't know that the mysterious external server address configuration is purged until my iDRAC logs have gone a sufficiently long time with no recurrence of the error message.

 


Also, I posted a new question here:

https://discussions.citrix.com/topic/404643-xenserver-windows-10-vm-is-droping-files-with-write-errors-in-blue-iris-ip-cam-software-how-do-i-eliminate-raid-readwrite-lag-to-optimize-vm-for-blue-iris-surveillance-software/

Seems like I have XenServer running fine without errors. Although, come to think of it, there are some errors on startup, but I had assumed those were due to the server being air-gapped and unable to reach NTP or whatever else it needs an internet connection for. I'll attempt to record those errors next startup; they scroll by so quickly! But yeah, I think my write error is due to the 4-disk RAID10 array not always being able to keep up with BOTH Windows 10 AND writing the video footage. Let me know if you think my admittedly amateur solution is viable or not. Thanks Tobias!

