
I have tried to work through this problem on my own, but I have reached the point where I need your help, and/or encouragement.

I have a small home network set up. The main components are

  1. 2 Macs running OS X Yosemite (10.10.4)
    • 1 iMac has a Gigabit ethernet port
    • 1 MBP has a Thunderbolt Ethernet adapter that can run as Gigabit ethernet
    • Both Macs also have wireless network cards that run 802.11n
  2. 1 FiOS Gateway (Fiber Optic Internet)/router/wireless router
    • Ethernet ports are Gigabit; however, Verizon's firmware caps the MTU on each port at 1500 bytes
    • WiFi is dual band 2.4GHz/5GHz antennas; the 5GHz can handle 802.11ac
  3. 1 Synology DS1010+ set up as RAID 6
    • The NAS has two 1Gb ethernet ports that can be set for Jumbo Packets
    • All 5 drives are 7200RPM
    • This is serving large media files such as Raw digital files, movies, iTunes Media Library, etc.
  4. An ethernet-connected printer and various wireless devices that don't really factor into this question, as I am concerned mostly with the connectivity and performance between the NAS and the Macs.
  5. All Ethernet connections are made with short run Cat 6A cables (6 or 8 feet is the longest, most are 3 feet runs), which should easily handle the bandwidth.

With the NAS and the 2 Macs attached to the ethernet ports on the router, I am seeing pretty poor performance. Anecdotally, at its best I don't think I have seen transfer rates above 10MB/s, and a lot of the time it runs in the hundreds of KB/s range. Memory usage on the NAS performance monitor doesn't appear taxed, so that shouldn't be an issue. A quick Google search turned up a Tom's Hardware benchmark that puts the average read transfer rate for a five-disk RAID 6 array at around 220MB/s, though that is not the same setup as mine... I would be thrilled with half that speed right now, as it would be an order of magnitude increase over what I am currently seeing.

I was hoping to use jumbo frames by setting the MTU to 9000 to see if I could improve transfer rates, but because the FiOS Gateway limits the MTU to 1500, setting the MTU to 9000 on the Macs and on the DS1010+ (which I am able to do) causes problems with normal internet traffic: packets get dropped because of the mismatched MTUs.
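
From what I understand, one way to check whether jumbo frames would actually survive end-to-end is to ping with the don't-fragment bit set and a payload just under the target MTU (the flags below are OS X's ping; the gateway and NAS addresses are placeholders):

    # 1472 bytes of ICMP payload + 28 bytes of headers = a full 1500-byte packet; this should work through the gateway:
    $ ping -D -s 1472 -c 3 192.168.1.1

    # 8972 + 28 = a 9000-byte packet; this only succeeds if every device in the path honors jumbo frames:
    $ ping -D -s 8972 -c 3 192.168.1.150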

As I only have 25Mb up/down internet service, I figure I would not be sacrificing any noticeable performance if the Macs talked to the FiOS Gateway wirelessly, and I could then look for an ethernet solution where the Macs and the NAS talk to each other directly. If the wireless link became a bottleneck for web traffic, I was thinking I could leverage Thunderbolt: add two Thunderbolt-to-Ethernet adapters, keep the ethernet connections I have now for regular traffic, and reserve the wireless bandwidth strictly for the wireless-only devices.

The idea I had was to get a Netgear ProSAFE GS108Tv2 Gigabit Smart Switch and see if I could connect the Macs and the NAS as a VLAN (which I am not exactly sure how to do), set those ports to 1000BASE-T and MTU 9000, and route all disk I/O through that VLAN on the switch. I thought I could put the ethernet interfaces of the three devices on a different subnet and then mount the NAS volumes using the static IP of the port that was set to MTU 9000. But now I am second-guessing myself, and I am not sure if this is feasible or the way to proceed.
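
In case it helps to see what I mean, here is a rough command-line sketch of that addressing. The 10.0.9.0/24 storage subnet, the en4 interface name, and the share name are all made up; the persistent equivalent would be done in System Preferences > Network (IPv4 set manually with the Router field left blank, and a custom MTU on the Hardware tab):

    # Find the actual interface and service names first; en4 below is just a guess:
    $ networksetup -listallhardwareports

    # Temporary (cleared on reboot) static address on a separate storage-only subnet,
    # plus a 9000-byte MTU on that interface only:
    $ sudo ifconfig en4 inet 10.0.9.2 netmask 255.255.255.0
    $ sudo ifconfig en4 mtu 9000

    # Then mount the NAS by the static address it would have on the same subnet:
    $ open "smb://10.0.9.10/volume_name"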

Here is what I would like to find out

  1. Does anyone think this idea might work and that I could see an improvement in disk I/O between the NAS and the Macs, or do I just not understand how these things fit together?
  2. Are there better solutions out there without having to go to a very expensive option?
    • My current budget for this solution is pretty much tapped out and I would like to try and find a solution that works with my current hardware. I already have the switch, so that is factored into the calculus.
  3. I wanted to see if there is a way to have the switch uplink to the router so that the Macs and the NAS could send 1500 MTU packets to the router for network I/O and 9000 MTU packets to each other for disk I/O through the same port, or whether I have to use separate ports to segregate the traffic?
    • If I went with the additional Thunderbolt-to-Ethernet adapters, could I have all six ports (two for each Mac and the two on the NAS) pass through the switch, setting three ports to the 9000 MTU subnet and three ports to the 1500 MTU subnet, and then have the router uplinked to the switch so that all traffic flows through the switch even though differently sized packets are passing through it? (A rough sketch of how I would check that kind of split is below.)
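
From what I can tell, checking that kind of split would come down to looking at the routing table and the per-interface MTUs; this sketch assumes the made-up 10.0.9.0/24 storage subnet from above and guessed interface names:

    # The default route (internet traffic) should point at the FiOS gateway,
    # while the storage subnet should be reached directly over the 9000 MTU port:
    $ netstat -rn -f inet

    # Per-interface MTUs, to confirm only the storage-facing ports are set to 9000:
    $ ifconfig en0 | grep mtu
    $ ifconfig en4 | grep mtu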

I am pretty much beyond the limits of my networking knowledge at this point, and I am not sure what is and is not possible, or, if it is possible, how to implement it. I am not afraid of rolling up my sleeves and tweaking system settings; I have set up static DHCP leases and static IPs on the computers and implemented MAC address filtering, but at this point I am not sure whether what I think should be doable actually is. Any advice will be greatly appreciated.

Thank you

Update

This is the output of a test run with iperf 3.0.11. It was run directly through the gateway router's ports; I haven't set up the switch yet, so it was easier to just run the test on the network as is.

192.168.1.100$ iperf3 -s -p 5201
192.168.1.102$ iperf3 -c 192.168.1.100 -i 1 -t 20 -w 2M -p 5201
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.102, port 59693
[  5] local 192.168.1.100 port 5201 connected to 192.168.1.102 port 59694
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   111 MBytes   932 Mbits/sec                  
[  5]   1.00-2.00   sec   111 MBytes   934 Mbits/sec                  
[  5]   2.00-3.00   sec   111 MBytes   935 Mbits/sec                  
[  5]   3.00-4.00   sec   111 MBytes   935 Mbits/sec                  
[  5]   4.00-5.00   sec   111 MBytes   935 Mbits/sec                  
[  5]   5.00-6.00   sec   111 MBytes   935 Mbits/sec                  
[  5]   6.00-7.00   sec   111 MBytes   935 Mbits/sec                  
[  5]   7.00-8.00   sec   112 MBytes   937 Mbits/sec                  
[  5]   8.00-9.00   sec   111 MBytes   935 Mbits/sec                  
[  5]   9.00-10.00  sec   111 MBytes   935 Mbits/sec                  
[  5]  10.00-11.00  sec   111 MBytes   935 Mbits/sec                  
[  5]  11.00-12.00  sec   111 MBytes   934 Mbits/sec                  
[  5]  12.00-13.00  sec   112 MBytes   937 Mbits/sec                  
[  5]  13.00-14.00  sec   111 MBytes   935 Mbits/sec                  
[  5]  14.00-15.00  sec   111 MBytes   935 Mbits/sec                  
[  5]  15.00-16.00  sec   112 MBytes   936 Mbits/sec                  
[  5]  16.00-17.00  sec   112 MBytes   937 Mbits/sec                  
[  5]  17.00-18.00  sec   111 MBytes   935 Mbits/sec                  
[  5]  18.00-19.00  sec   111 MBytes   935 Mbits/sec                  
[  5]  19.00-20.00  sec   111 MBytes   935 Mbits/sec                  
[  5]  20.00-20.01  sec   872 KBytes   954 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-20.01  sec  2.18 GBytes   935 Mbits/sec                  sender
[  5]   0.00-20.01  sec  2.18 GBytes   935 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

So, as Spiff said, the bottleneck is likely not ethernet. That leaves the NAS as the probable culprit... And of course the Synology support pages basically blame network traffic and don't really address how to get better performance out of their servers or how to kill all of the junky processes that run unnecessarily and eat up memory. Or it could be the WD Green drives... Still no solution, but at least it likely isn't ethernet.
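
(If I do end up poking at those processes: DSM lets you enable SSH, under Control Panel > Terminal if I remember the menus right, after which something like this would at least show what is consuming CPU and memory on the box; the hostname and user below are placeholders.)

    # SSH into the DiskStation, then watch per-process CPU and memory usage with top:
    $ ssh admin@diskstation.local
    top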

Update 2

Here is some additional testing information with the setup above. A 2GB test file was created, and the transfer was run from the command line both over the volume mounted via smb and over an ftp session to the NAS.

Using smb

Load to NAS
$ mkfile -n 2g largetestfile
$ mv -v largetestfile /Volumes/network_attached_storage    # 2.15GB file
- 336s, Average Transfer Rate: 6.4MB/s or 51.2Mbps

Download from NAS
$ mv -v /Volumes/network_attached_storage/largetestfile ./Downloads/    # 2.15GB file
- 40s, Average Transfer Rate: 53.75MB/s or 430Mbps

Using ftp

Load to NAS
$ mkfile -n 2g largetestfile
ftp> bin
ftp> hash
ftp> put largetestfile
2147483648 bytes sent in 01:06 (30.74 MiB/s) or ~246Mbps

Download from NAS
Test 1 (forgot to enter bin command prior to download)
ftp> get largetestfile
2147483648 bytes received in 00:42 (48.01 MiB/s) or 384.08Mbps

Test 2 (Using bin command)
ftp> bin
ftp> get largetestfile
2147483648 bytes received in 00:21 (93.97 MiB/s) or 751.73Mbps

While the smb download rate is adequate, the upload rate leaves a lot to be desired. I thought it might have something to do with how the data is written to the RAID, but uploading via FTP is roughly five times faster, though still a good bit slower than the download rates.
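
For something more repeatable than eyeballing mv, timing the copies directly and dividing the 2048MiB file size by the elapsed seconds gives the average rate; this uses the same mount point and file name as above:

    # Time a straight copy onto the SMB-mounted share (write direction):
    $ mkfile -n 2g largetestfile
    $ time cp largetestfile /Volumes/network_attached_storage/

    # And back again for the read direction:
    $ time cp /Volumes/network_attached_storage/largetestfile ./largetestfile.copy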

AMR
  • You can get about 943 Megabits/sec of TCP over IPv4 throughput on Gigabit Ethernet with standard frames. So jumbo frames just let you squeeze another maybe 5% efficiency out of the medium. If you're not already over 100 MebiBytes/sec of throughput, put jumbo frames at the very bottom of your list of things to look at for now. – Spiff Aug 05 '15 at 02:25
  • @Spiff Added test data for command line move and ftp copy. I was wondering why setting bin halved the download time in ftp. Also wondering if you have any idea what is going on with smb upload? This becomes important for things like iTunes apps and media updates, as those are stored on the NAS. Thank you again for your help. Also looking at CP was a local backup from NAS to USB through my Mac for an offsite copy... I had asked CP about a different issue (It only addresses a single processor core) and their response about bandwidth was that users should only expect on avg to upload 10GB/day!!! – AMR Aug 07 '15 at 12:33

1 Answer


Plug your Macs into a gigabit switch (the LAN ports on your router should be fine). Run IPerf 2.0.x between the two of them, and see what throughput you get. It should be 930+ Megabits/sec without even really trying.
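
(For reference, a minimal pair of invocations would look something like the following, assuming iperf 2.0.x is installed on both Macs via Homebrew or MacPorts and using the addresses from the question; the -w 2M window option is the one mentioned in the comments below.)

    # On one Mac, start the server side:
    $ iperf -s -w 2M

    # On the other Mac, run a 20-second TCP test with per-second reporting:
    $ iperf -c 192.168.1.100 -w 2M -i 1 -t 20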

If you do get IPerf TCP throughput in that range, then you've shown that the problem is above the Ethernet level. The problem could be the file transfer protocol (or remote filesystem protocol) that you're using, or a poor implementation of the client or server code for that protocol.

Apple has said that SMB2 (and later...it's now v3.x) is the future. Make sure your NAS supports that, and mount it over that protocol (not AFP or the old flavor of SMB).
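
(To confirm which protocol a share actually negotiated once mounted, OS X's smbutil can report it; the NAS address and share name below are placeholders.)

    # Lists each mounted SMB share along with the negotiated SMB dialect and other session attributes:
    $ smbutil statshares -a

    # Mounting via an smb:// URL (Finder's "Connect to Server" or the command line) forces SMB rather than AFP:
    $ open "smb://192.168.1.150/volume_name"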

Spiff
  • Thank you! smb was definitely a step in the right direction. Synology supports smb3, and when I remounted I got sustained rates of about 10MB/s, which, while still well below what I was hoping for, never dropped into the KB range, and navigating folders in Finder seemed peppier. I haven't had a chance to run the iPerf analysis yet, though I am still wondering if I wouldn't benefit from segregating disk I/O traffic on a VLAN. – AMR Aug 05 '15 at 16:08
  • BTW, I just read this and **[it was a great answer](http://superuser.com/questions/270489/about-mtu-settings-in-machines-and-switch?rq=1)** you gave and really helped me to understand what is going on. Thank you for your contributions. – AMR Aug 05 '15 at 16:35
  • The DiskStation Manager software has a tab under the network section where you can set rules for minimum guaranteed and maximum bandwidth for a service. For Windows File Service I have now set the guarantee to 200MB/s (apparently the maximum allowed) and the maximum to 0, which means unlimited. I don't know if this will help, but are there any other services you can think of that might benefit from this guarantee if I am mounting with smb3.0? – AMR Aug 05 '15 at 20:50
  • Also, does it make sense to set up different subnets? I was thinking one subnet would be for disk I/O and one would be for internet traffic and peer-to-peer connections, if I were to, say, do a screen share of the other computer on the LAN? – AMR Aug 05 '15 at 20:55
  • @AMR By my calculations, your 10MiB/sec is less than a tenth of what your network should be capable of (112 MiB/sec for GigE). I wouldn't bother complicating things with more VLANs or more network links until I'd found and fixed the ridiculous bottleneck. Run the IPerf 2.0.x test to start getting to the bottom of the problem. Use Homebrew or MacPorts to help get it installed if needed. Oh, and add `-w 2M` to both IPerf invocations. BTW, what brand is your Thunderbolt Ethernet dongle? I can vouch for Apple's model being good, but can't say for anyone else. – Spiff Aug 06 '15 at 00:20
  • Apple Thunderbolt adapter. Updated my post with the results of the iperf tests... Not the Macs or the router. Pushed over 2.1GB through in 20 seconds... Thanks for your help. If you know anyone who knows how to optimize a Synology NAS, can you send them my way? Trying to institute CrashPlan backup, and I am going to have a lot of data to pump from the NAS to the cloud over what now appears like it will be the next several months... – AMR Aug 06 '15 at 06:21
  • @AMR Glad to see your GigE is working at full speed. BTW, WD Green drives have sustained throughput faster than GigE's 112 MiB/sec. Your DS1010+'s SATA-II is 3Gb/sec (roughly 348MiB/sec). So the drives and SATA bus aren't likely to be bottlenecks. I still suspect the filesystem protocol (SMB). I see the DS1010+ supports FTP and HTTP file transfers. I'd try the unencrypted flavors of each of those (using a multi-GiB file) and see if they get close to 112MiBytes/sec. FTP and HTTP spew files as fast as raw TCP can carry them. SMB may read only a chunk at a time, adding overhead. – Spiff Aug 06 '15 at 07:10
  • I tried out FTP and it was a bit slower than SMB. The switch from AFP to SMB was an improvement, so yeah! At this point I think the ultimate bottleneck is CrashPlan. It uses 448-bit encryption when creating a backup, even when it is to a local external drive, so it likely can't go much faster than 15-20MB/s. I upped the memory allocation for the app to about 4GB, but I can go higher... that might help. I'll have to create a large test file when I have more time to see what transfer rates are like just transferring to and from the NAS through the file system. Thanks again for the help. – AMR Aug 06 '15 at 15:47
  • @AMR. Wait. You've been using CrashPlan as your performance benchmark this whole time, and didn't say so in your original question?? You weren't doing straightforward file copies (Finder drag, or `cp` command) this whole time?? When you did the FTP performance test, did you use the `ftp` command (which is what I would have expected), or did you do something silly like mount the NAS via FTP from the Finder (having Finder pretend an FTP server is a disk is slow as heck) and then do a CrashPlan backup to that?? – Spiff Aug 06 '15 at 15:58
  • No, no. I have had poor performance forever, but I have ignored it until now, because I have to pump all of that data through to CrashPlan, and if disk I/O was going to be less than my internet bandwidth, then what was the point? CrashPlan made me try and do something about it. I just haven't had the chance to test a straightforward file copy. I think you hit it initially: the original bottleneck was AFP. It would sometimes take a minute for a change of directory to render. I think SMB will be better. – AMR Aug 06 '15 at 16:22
  • I have 25Mbit/s up/down broadband service, so even if I only get 100Mbit/s of disk I/O from the NAS, I figure that will be enough to max out the pipe going out the door. I have wanted to improve the performance for a long time, but this is the first time I have needed to, so I am investing the time to get it working properly. I can't do much about the broadband rate without paying an additional premium for service, but I can set the LAN up optimally. If you have instructions on how I would alias FTP as a mounted drive so CP will see it, I'd appreciate it. Thanks again! – AMR Aug 06 '15 at 17:40