26

When copying large files or testing write speed with dd, the maximum write speed I can get is about 12-15MB/s on drives using the NTFS filesystem. I tested multiple drives (all connected via SATA), which all reached write speeds of 100MB/s+ on Windows or when formatted with ext4, so it's not an alignment or drive issue.
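For reference, a write test along these lines shows the problem (the mount point and file name are placeholders; conv=fdatasync makes dd flush before reporting, so the page cache doesn't inflate the number):

dd if=/dev/zero of=/mnt/ntfs/testfile bs=1M count=1024 conv=fdatasync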

top shows high CPU usage for the mount.ntfs process.

AMD dual core processor (2.2 GHz)
Kernel version: 3.5.0-23-generic
Ubuntu 12.04
ntfs-3g version: both 2012.1.15AR.1 (Ubuntu default version) and 2013.1.13AR.2

How can I fix the write speed?

Zulakis
  • Have you tried testing dd with raw drive access (on the drive or partition, doesn't matter)? Note that testing that way will destroy the filesystem and you will lose any data on it. It bypasses the NTFS drivers entirely. – Bob Jun 30 '13 at 17:17
  • Yep I just did, the result is `149MB/s`. – Zulakis Jun 30 '13 at 17:20
  • Just out of curiosity I have to ask if this drive is one of those 4k drives and if therefore your filesystem might be unaligned somehow?! – Waxhead Jun 30 '13 at 18:05
  • Try bonnie++. Also, what kernel are you using? (uname -r) – cybernard Jun 30 '13 at 18:25
  • bonnie++ produced similar results. Read speed is faster than write (about `60MB/s`), still not nearly the possible `150MB/s` though. I added kernel and `ntfs-3g` versions to my question. – Zulakis Jun 30 '13 at 18:47
  • What options did you try for dd? The block size should be at least 65536, with a large count. If dd runs in less than 2 min the sample size is too low. Try: dd if=/dev/sda of=/dev/null bs=65536 count=10000 – cybernard Jun 30 '13 at 19:23
  • dd if=/dev/random of=/dev/sda bs=65536 count=10000 – cybernard Jun 30 '13 at 19:25
  • `655360000 bytes (655 MB) copied, 49.2048 s, 13.3 MB/s` – Zulakis Jun 30 '13 at 19:32
  • If you double the block size to 131072 and 262144 do the speeds increase at all? – cybernard Jun 30 '13 at 19:50
  • Yes, it increases a little bit, but only by about 1-3 MB/s. Increasing the block size further doesn't increase the speed anymore though. – Zulakis Jun 30 '13 at 20:14
  • How large are the files you're moving? The overhead of file creation will dominate when transferring small files. – HABO Jul 01 '13 at 12:51
  • I am copying single files with sizes between 10-15GB. No small files overhead. Also, when testing with `dd` (which ultimately writes one single file), the write rates are as bad as when copying. – Zulakis Jul 01 '13 at 12:53
  • 4
    I believe that the free version of NTFS-3G is crippled so that it uses 4 KiB writes with no caching, causing extremely slow write performance on SSDs and USB drives. The company behind the driver suggests buying the commercial version for better performance. Apparently no-one cares enough to actually fix (and if necessary, fork) the open source version because this problem has been around for almost a decade, ever since NTFS-3G was first released. – Tronic Mar 09 '14 at 02:23
  • 1
    With the same Ubuntu 15.04 laptop, I formatted a 320GB external hard disk and a 32GB USB stick to NTFS. Copying 2GB of pictures to the first one was taking forever (6 hours left estimated after 30 minutes), but to the second one (the USB stick) it only took a minute or two. I did not change any settings between the two. – Nicolas Raoul Dec 15 '15 at 17:45

7 Answers

22

Update: use a newer version of Ubuntu. Newer versions, e.g. 22.04+, should perform better and use bigger writes by default. Thanks to @dmitry-grigoryev, who clarifies in an answer below that

big_writes was deprecated

If your version of fuse/libfuse is older than 3, the original solution provided here still applies. To check the version:

fusermount -V

Solution for older Ubuntu systems (e.g. 20.04, supported until April 2025): simply add the big_writes option, e.g.

sudo mount -o big_writes /dev/<device> /media/<mount_dir>
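To make this persistent across reboots, the same option can go into /etc/fstab. A minimal sketch, assuming a hypothetical UUID and mount point:

UUID=XXXXXXXXXXXXX  /media/mydisk  ntfs-3g  defaults,big_writes  0  0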

My Linux NAS with a low-spec CPU now manages NTFS large-file writes about three times faster: it improved from ~17MB/s to 50MB/s+. I've even seen it peak at about 90MB/s in iotop, which is probably near the external drive's capability (a 2.5" USB3 HDD).

From the NTFS-3G man page:

big_writes
       This option prevents fuse from splitting write buffers into 4K
       chunks, enabling big write buffers to be transferred from the
       application in a single step (up to some system limit, generally
       128K bytes).

A previous post was on the right track with the reference provided:

Perhaps check here for ideas on what could be causing it: http://www.tuxera.com/community/ntfs-3g-faq/#slow

The original question mentions noticing the issue with large file transfers. In my experience with copying media files or doing backups, the key option in the above FAQ was:

Workaround: using the mount option “big_writes” generally reduces the CPU usage, provided the software requesting the writes supports big blocks.

Closing notes:

  • Paragon also offers an alternative: @hi-angel's answer below provides more info if you have a newer Ubuntu version with kernel 5.15 and the ntfs3 driver as an option.
  • Tuxera reserved the pro NTFS driver for embedded-system partners, and the open-source alternative wasn't as performant.
JPvRiel
9

big_writes was deprecated in 2016; the corresponding behavior is always enabled when using libfuse version 3.0.0 or later. On a modern Linux system, poor NTFS performance usually means that:

  • the disk is fragmented
  • NTFS disk compression is enabled
  • inadequate mount options such as sync are used (a quick check is sketched below)
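To rule out the last item, you can inspect the options a mounted filesystem is actually using. A minimal check, assuming /media/mydisk is the NTFS mount point:

findmnt -no OPTIONS /media/mydisk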
Dmitry Grigoryev
2

Perhaps check here for ideas on what could be causing it: https://github.com/tuxera/ntfs-3g/wiki/NTFS-3G-FAQ

This sounds a bit like the 'old days' when file I/O did not use DMA by default. It's unlikely these days, but is the BIOS using IDE emulation for the SATA drives? Because if it is emulating IDE, it may be emulating non-DMA mode as well.

Another potential slowdown is NTFS file compression. Is compression enabled on the folder you are writing to? If it is, any new files created in that folder will be compressed as well.
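One way to check the negotiated DMA mode is hdparm (assuming the drive is /dev/sda; the active mode is marked with an asterisk in the output):

sudo hdparm -I /dev/sda | grep -i dma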

Uschi
BeowulfNode42
  • How can I test if it is using DMA? Apart from this, I have already tried out all the suggestions on the page. – Zulakis Jul 01 '13 at 13:53
  • 1
    Uhm, from what I have read, DMA is only relevant for IDE drives? I am only using SATA drives. – Zulakis Jul 01 '13 at 13:58
  • According to http://en.wikipedia.org/wiki/Serial_ATA#Transport_layer it sounds like DMA is the only option for SATA. Let's find out if his BIOS is using IDE emulation. – BeowulfNode42 Jul 02 '13 at 03:11
1

Another likely reason is that the ntfs-3g driver you're using is slow by virtue of being a userspace driver; a kernel driver would be faster. So if you want better speed, try the new ntfs3 driver (not to be confused with the older one called ntfs, without the 3 suffix). It was contributed to the 5.15 Linux kernel by Paragon Software, so make sure you're using 5.15 or later.

To make use of ntfs3 when mounting, execute:

sudo mount /dev/sdX mnt/ -t ntfs3

Alternatively, if you want it in fstab, you may create an entry similar to this:

UUID=XXXXXXXXXXXXX   /path/to/mnt    ntfs3    defaults,uid=1000,gid=1000,force    0 0
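Before relying on such an entry, you can verify the driver is actually available in your kernel (modprobe also succeeds for built-in modules):

sudo modprobe ntfs3 && grep ntfs3 /proc/filesystems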
Hi-Angel
0

This patch improves write performance for embedded devices: https://www.lysator.liu.se/~nietzsche/ntfs/

0

This is an old thread, but for people looking for a solution to the same problem: do you have cpuspeed active? ntfs-3g is CPU-hungry, and in my case cpuspeed mistakenly detected a low load for processes with lots of I/O waits, eventually throttling down the core and starving the driver.

Try disabling cpuspeed (if e.g. it is running as a service) and test again.
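If cpuspeed isn't running as a service, the same throttling can be inspected and worked around through the cpufreq sysfs interface. A sketch, assuming the standard cpufreq layout:

# show the current frequency governor for each core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# temporarily pin all cores at full speed while testing
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor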

irisx
-1

NTFS isn't as fast as EXT4.

You will always face the fragmentation problems that the Windows filesystem brings with it.

EXT4 supports individual files up to 16 terabytes and volumes up to one exabyte in size. One aspect of EXT4 that contributes to better performance is that it can handle larger extents (ranges of contiguous physical blocks of data). This allows it to work better with large files and reduces drive fragmentation.

Other factors include the allocate-on-flush technique used by EXT4. By delaying the allocation of data blocks until the data is ready to be written to disk, EXT4 improves performance and reduces fragmentation compared to file systems that allocate blocks earlier.

If you are not a Windows user, EXT4 is strongly recommended; OS X has extFS, so there is no need to follow the Windows standard just because Windows supports it.

Unless you regularly (daily) move the external USB drive between Windows and Linux systems, I would reformat the drive to EXT4.
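If you do reformat, it's a one-liner (the device name is a placeholder, and this destroys all data on the partition):

sudo mkfs.ext4 -L mydisk /dev/sdX1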

Seandex