For those stumbling on this question in 2016... Use ext4. I tried btrfs and the difference is substantial. Over a 10-day period, write IOs to ext4 amounted to 17,800 sectors. Btrfs? 490,400 sectors. Same SSD, same data, different partitions. Basically the same workload.
Both ext4 and btrfs go "quiet" when there is zero write activity on the drive. That's good.
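If you want a quick sanity check that your own drive really does go quiet, the per-device counters in vmstat -d are enough; this is just a sketch, and sda is a placeholder for whatever device your SSD shows up as (column 8 is total sectors written for that device):

# snapshot the sectors-written counter, wait a minute, snapshot again
before=$(vmstat -d | awk '$1=="sda" {print $8}')
sleep 60
after=$(vmstat -d | awk '$1=="sda" {print $8}')
echo "sectors written in 60s: $((after - before))"

On an idle system that difference should sit at or near zero for both filesystems.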
Ext4 will write the modified data, plus some overhead, and the overhead scales with the amount of data written. A 4K write (1 block) pushes about 50-80 blocks of overhead at the next commit. (The ext4 journal is fully enabled.)
Modify a single 4K block on btrfs and you'll push between 4,000 and 5,000 blocks of overhead at the next commit. The default commit interval is 30 seconds, I believe; I used 120.
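For reference, the commit interval is just a mount option on both filesystems (btrfs grew its commit= option in reasonably recent kernels). A sketch of what I mean in /etc/fstab terms, with placeholder UUIDs and mount points:

# /etc/fstab sketch: stretch the commit interval to 120 seconds
# (UUIDs and mount points below are placeholders, adjust to your setup)
UUID=xxxx-xxxx  /      ext4   defaults,commit=120   0  1
UUID=yyyy-yyyy  /data  btrfs  defaults,commit=120   0  0

A longer interval batches more changes into each commit; the trade-off is more data at risk if the box dies between commits.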
Now, it depends on how you use the SSD. Used as the root filesystem, there is typically a fairly constant, low-level stream of writes going on: log files, ntp drift files, man-db rebuilds, opensm topology updates, and so on. Each such event will hammer a btrfs drive with another 4,000-5,000 block writes.
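If you want to see which of those background writers are active on your own box before running the full script further down, the same find trick it uses works on its own. A minimal sketch; /tmp/mark is just a scratch file name picked for the example:

# drop a timestamp file, wait a while, then list what changed on the
# root filesystem since then (-mount keeps find on this one filesystem)
touch /tmp/mark
sleep 600
find / -mount -newer /tmp/mark -print | grep -v "^/tmp"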
The 10-day numbers above are for my "write-limited" SSD. The bulk of those 17,800 sectors were the result of a smallish system update, one the btrfs copy did not go through. My regular writers are exactly those: ntp drift, opensm topology, and nightly man-db updates. Nothing else hits that disk, except things I actively initiate, like system upgrades, vim /etc/whatever, etc.
On the whole, SSDs will suffer a lot of writes, really. I just can't see the point in wasting them just 'cuz the news media is chasing bunnies and rainbows. If you want to pay this price for COW, go for it. For "performance", not so much. It's an SSD; you could probably put the worst "file system" known to man on it and still get some level of performance, just by brute force. Ext4 is, by far, not the worst file system known to man.
No monthly fs check. Try the script below. It's a 100% hack and won't work for md mount points.
#!/bin/bash
# Usage: ./stat.sh <mountpoint>   (needs root for smartctl and find)
# 100% hack: assumes a single, plain block device behind the mountpoint.

# Block device backing the given mountpoint, and its parent disk name.
dev=$(grep " $1 " /proc/mounts | awk '{print $1}')
vmnam=$(lsblk "$dev" -o MOUNTPOINT,PKNAME | grep "$1" | awk '{print $2}')

# Starting counters: sectors written per vmstat, LBAs written per the drive.
vmx=$(vmstat -d | grep "$vmnam" | awk '{print $8}')
lbax=$(smartctl -a "$dev" | grep LBA | awk '{print $10}')

# Scratch file used as a "last checked" timestamp for find -newer.
tmpnam=$(mktemp XXX)

echo "Tracking device: $dev, mounted on $1 (vmstat on $vmnam)"
tim=$(date +%s)
timx=$(date +%s)

while true
do
    vm=$(vmstat -d | grep "$vmnam" | awk '{print $8}')
    lba=$(smartctl -a "$dev" | grep LBA | awk '{print $10}')
    if [ "$vm" != "$vmx" ]
    then
        # Something was written: report the deltas since the last report.
        tim=$(date +%s)
        dif=$(dc <<< "$vm $vmx - p")
        lbad=$(dc <<< "$lba $lbax - p")
        timd=$(dc <<< "$tim $timx - p")
        echo "$(date) (sec=$timd) writes=$vm (dif=$dif) (lba=$lbad)"
        vmx="$vm"
        lbax="$lba"
        timx="$tim"
        # List the files touched since the last check, then reset the marker.
        find "$1" -mount -newer "$tmpnam" -print | grep -v "/tmp"
        touch "$tmpnam"
    fi
    sleep 1
done
It will tell you how many sectors (LBAs) were written, according to the drive itself, and exactly which files were updated. It needs root privs. See for yourself. I run the SSD as my root filesystem and call the script stat.sh, so: sudo ./stat.sh /
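If you only care about the drive's own lifetime write counter and not the per-file breakdown, the SMART attribute the script greps for can be read directly. Treat this as a sketch: the attribute name varies by vendor (Total_LBAs_Written is common but not universal), and /dev/sda is a placeholder:

# raw value is the last column; on many SSDs it counts LBAs (sectors) written
sudo smartctl -A /dev/sda | grep -i 'Total_LBAs_Written'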