I know my setup isn't officially supported with this kind of hardware, but I'm still really astonished at how badly it behaves. I'm running an ML350 Gen9 with 128GB of RAM and dual Xeon E5-2630 v3, using a P440ar to run 8 Crucial MX500 4TB SSDs and a P840 with 14 Seagate Barracuda SATA HDDs. Both hardware RAID cards are on the latest firmware, as are the SAS expander cards and the rest of the server (I used the latest firmware ISO image available from HP). I configured the 8 SSDs in RAID 10, tried several stripe sizes, disabled SSD Smart Path, and tried with and without the physical drive cache enabled, and I still get really bad performance with fio:
Read 4K: 80MB/s, Write 4K: 65MB/s, Read 1M: 3140MB/s, Write 1M: 750MB/s
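To give an idea of the test shape, the runs looked something like this (the block sizes, file size, and read/write split match what I did; the ioengine, iodepth, and random-vs-sequential choices below are just illustrative, and they do influence the 4K numbers a lot):

```
# Example fio runs: 10G test file, direct I/O, average bandwidth reported
# at the end of each run. These exact parameters are illustrative.
fio --name=read4k --filename=/mnt/test/fio.bin --size=10G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=120 --time_based --group_reporting

fio --name=write1m --filename=/mnt/test/fio.bin --size=10G \
    --rw=write --bs=1M --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=120 --time_based --group_reporting
```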
With 8 drives and two 12Gb/s lanes linking to the RAID card, I assume I should have roughly 24Gb/s (about 3GB/s) of bandwidth, so the 1M reads look like they're performing well. I ran fio with a 10GB file size, which is more than the 2GB of cache RAM on the controller, so the numbers shouldn't just be the cache talking. However, the write and 4K performance don't look right to me for a RAID 10 of 8 drives.
I plan to install Proxmox to run VMs and containers for various purposes. I still need to do more tests on the RAID 5 volume, but from what I recall it performed even worse than this. What am I missing? I understand these are consumer drives, but with this many of them it should still perform better, shouldn't it? I'm not an expert, so if you need additional information, just let me know.
Edit: I ran the tests on the HDDs in RAID 5, here are the results:
Read 4K: 40.5MB/s, Write 4K: 7MB/s, Read 1M: 420MB/s, Write 1M: 330MB/s
So, the same terrible performance, especially at 4K, that we saw with the RAID 10 above. As suggested by a member below, I switched my Proxmox install to ZFS after putting both controllers in HBA mode (a rough sketch of the commands is further down). Here are some benchmarks now that I'm on ZFS with the SSDs:
Read 4K: 823MB/s, Write 4K: 537MB/s, Read 1M: 1792MB/s, Write 1M: 1892MB/s
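For reference, putting the Smart Array controllers into HBA mode and laying out the pool went roughly like this (slot numbers and device names are placeholders for mine, and the exact ssacli syntax can vary between versions):

```
# List the controllers and their slots (slot numbers below are examples)
ssacli ctrl all show

# Switch each controller to HBA mode so the OS sees the raw disks
# (takes effect after a reboot)
ssacli ctrl slot=0 modify hbamode=on
ssacli ctrl slot=3 modify hbamode=on

# ZFS equivalent of RAID 10 for the 8 SSDs: a pool of 4 mirrored pairs.
# sda..sdh are placeholders; /dev/disk/by-id paths are the safer choice.
zpool create -o ashift=12 ssdpool \
    mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
```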
Each benchmark ran for several minutes, to make sure I wasn't just measuring RAM, and I took the average bandwidth reported by fio. I also ran some tests inside an OpenMediaVault VM to cross-check these numbers, and apart from a few percent of overhead from the VM environment, they follow the same trend. I still need to test the RAIDZ (RAID 5-like) variant and will share the results.
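The rough plan for that HDD pool, as a ZFS analogue of the RAID 5 I tested above (device names are placeholders and the vdev layout isn't final):

```
# One wide raidz1 vdev over the 14 HDDs, mirroring the single RAID 5 array;
# splitting them into two smaller raidz1/raidz2 vdevs is another option.
zpool create -o ashift=12 hddpool raidz1 \
    sdi sdj sdk sdl sdm sdn sdo sdp sdq sdr sds sdt sdu sdv
```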