Network filesystem performance on MacBook Air

Just after Christmas, I bought myself a new 11in MacBook Air (MB Air) to replace the 12in PowerBook G4 (PB) that I've kept running entirely too long. I selected the 4GB RAM, 128GB SSD configuration, as the system is not upgradable.

I use the MB Air as my primary system for email, chat, web surfing, document writing, and so on. As such, it has quite a bit of data on it that I care about, so I need to have backups.

Backup Server

The backup server is a CentOS 5.5 box with a 1TB SATA disk dedicated to backups. It backs up the mirrored boot drives as well as the four VMs I have running in VMware Workstation. The system is configured to serve this disk via NFS and Samba. I also store backup copies of this WordPress blog, several MySQL databases, and a Subversion repository I manage.
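
For reference, the server side of that is just an NFS export plus a Samba share of the same directory. A minimal sketch of the two entries, with the subnet a placeholder (the secure export option, which is the Linux default, is what later forces the client to mount from a privileged port):

  # /etc/exports -- subnet shown here is a placeholder
  /export/backups  192.168.1.0/24(rw,sync,secure)

  # smb.conf share for the same disk
  [backups]
      path = /export/backups
      read only = no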

Wired vs Wireless

The first question I had with my new Air was whether the wireless was going to be fast enough to back up all my data. Backing up the PB, I had to use a wired connection, as I could not back up its 60GB of data overnight over wireless.

So I set out to create a network performance test and started by dusting off the ttcp(1) program that I still have lying around. I tested a few different setups: loopback, 100Mb wired, and 802.11g wireless. The loopback test is a good judge of the TCP stack and processor on the system. It is not useful for backups, but at least you know you are CPU bound when your on-wire transfer is close to the loopback speed.

The 100Mb test on the MB Air was accomplished by buying the $20 USB Ethernet Adapter. Interestingly, the MB Air numbers are nearly the same as those from the on-board Ethernet adapter on the PB.

Network Speed (MB/sec)

          PB (OSX 10.4)   MB Air (OSX 10.6)   MB Air (OSX 10.8.5)
loopback  283.53          1,192.57            998.19
100Mb     11.16           11.20               11.25
802.11g   2.55            2.27                0.38

ttcp(1) test run with default options to the backup server (sans loopback test)
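
For reference, a default-options ttcp(1) run is just a receiver on one end and a transmitter on the other; roughly the following, though flags vary a little between ttcp builds and the server name is a placeholder:

  ttcp -r -s                  # on the backup server: receive and discard the data
  ttcp -t -s $BACKUPSERVER    # on the laptop: transmit a generated test pattern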

NFS vs CIFS

The next question was NFS or CIFS. From a previous job, I had first-hand knowledge that Mac OS X NFS performance was subpar. There were no “tunables” (until 10.6) that allowed us to change any NFS client-side parameters.

Again I dusted off another disk performance tool I had lying around, lmdd(1). All of these tests were run over the 100Mb wired network.

Disk Speed (MB/sec)

            PB (OSX 10.4)      MB Air (OSX 10.6)
            Read     Write     Read     Write
local disk  31.21    30.54     200.44   29.76
NFS         10.54    8.42      10.68    6.58
CIFS        5.36     0.79      5.56     0.64

The tests were as follows: the write test was done first, followed by the read test; then the file was removed and the filesystem unmounted. The write creates a 10GB file (20,971,520 blocks of 512 bytes). The mounts were done with default mount options, except for the -P option on mount_nfs, as the server required a privileged port to accept the mount.

Write: lmdd if=/dev/zero of=junk count=20971520 bs=512
Read:  lmdd if=junk of=/dev/null
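
For completeness, the mounts themselves were plain default mounts, roughly like the following (the share name and user are placeholders):

  mount_nfs -P $BACKUPSERVER:/export/backups /mnt
  mount_smbfs //$USER@$BACKUPSERVER/backups /mnt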

NFS tuning

The clear winner in NFS vs CIFS is NFS, but the write numbers leave room for improvement. I set out to run some NFS write performance tests, adjusting some of the NFS parameters.

The first thought was to try all combinations of version 2 vs 3 and TCP vs UDP, adding the noac and async mount options, and turning on the following sysctl(8) parameters (see the sketch after this list for how they are enabled):

  • vfs.generic.nfs.client.allow_async
  • net.inet.tcp.always_keepalive
  • net.inet.tcp.strict_rfc1948
  • net.inet.tcp.rfc1644
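
These are turned on with sysctl(8); a minimal sketch, assuming the usual 1-means-enabled convention for these flags:

  sudo sysctl -w vfs.generic.nfs.client.allow_async=1
  sudo sysctl -w net.inet.tcp.always_keepalive=1
  sudo sysctl -w net.inet.tcp.strict_rfc1948=1
  sudo sysctl -w net.inet.tcp.rfc1644=1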

These sysctl(8) parameters were left on for all the tests. Below is a sample of the results, with many uninteresting or irrelevant runs removed.

Protocol  Version  Read/Write Size  Special Mount Options  Speed (MB/sec) (10.6)  Speed (MB/sec) (10.8.5)
TCP       3        32768            (none)                 6.04                   11.2636
TCP       3        32768            noac                   6.09                   11.1929
TCP       3        32768            noac,async             7.09                   11.3629
UDP       3        32768            (none)                 8.56                   11.6410
UDP       3        32768            noac                   8.67                   11.6194
UDP       3        32768            noac,async             10.93                  11.7399
UDP       3        65536            noac,async             10.94                  11.7330
UDP       3        131072           noac,async             10.97                  11.7357
UDP       3        262144           noac,async             10.86                  11.7335
UDP       3        524288           noac,async             10.82                  11.7341

Full Results

The first thing I found out is that NFS v2 is not 64-bit file compatible (the v2 protocol uses 32-bit file offsets and sizes). The lmdd test errored out at the 4GB boundary with “write: File too large”.

The second thing I found is that my network is not very congested, given that the UDP tests ran faster than TCP. This holds true only as long as there are not many packet retransmits, as there would be on a congested network.

So the optimal mount options are these:

mount_nfs -P -o udp,vers=3,noac,async $BACKUPSERVER:/export/backups /mnt

The rwsize=32768 is the default. There does not seem to be much performance difference when changing the rwsize from 32K all the way up to 512K; all of those tests were in the 10.9 MB/sec range.
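
For anyone who wants to experiment with it anyway, the size is just another mount option on the same command line; a sketch using the rwsize spelling from these tests:

  mount_nfs -P -o udp,vers=3,noac,async,rwsize=65536 $BACKUPSERVER:/export/backups /mnt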

tar vs hdiutil

There are two different ways I have found to back up data on my MB Air. The first and “old school” way is to use tar(1). On the PB, I originally had to compile hfstar(1), but after 10.3 (I believe) the system tar(1) supported handling resource forks. Eventually I found out about the hdiutil(1) command and was able to do backups on the PB with that. It creates a .dmg file of my entire disk, which is very useful for browsing backups to restore a single file, especially when you do not know the location of the file.
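
A whole-disk image of that sort is a single hdiutil(1) invocation, something like the following, with the output path a placeholder and root needed to read everything (-nocrossdev keeps it from descending into other mounted filesystems):

  sudo hdiutil create -format UDBZ -nocrossdev -srcfolder / $FILE.dmg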

This test was attempted only on the MB Air, as there is not much data to back up on the PB. There is a problem with hdiutil(1) on 10.6: it would not complete successfully, failing in several different ways:

  • The system would hang and not be responsive, requiring a reboot using the power button.
  • The system would error out with a message if using the -debug flag.

To get around this, I wound up just running the test backing up my ~/src directory, using the following commands:

  • tar -cjf $FILE.tbz ~/src
  • hdiutil create -quiet -format UDBZ -nocrossdev -srcfolder ~/src $FILE.dmg

Backup Program  Time (MM:SS.sss)
tar             0:49.808
hdiutil         3:15.078

Conclusions

The results show the backups are network bound on both the PB and the MB Air. 10.6 also provides enough tunables to get NFS writes to near wire speed.

Another setting I had to change was in the Energy Saver preferences: not putting the computer to sleep while on the power adapter, otherwise the backups would fail as the system went to sleep. I guess with 10.6, command line programs do not keep the system awake.
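
On 10.8 and later there is also caffeinate(8), which keeps the system awake for the duration of a command, so the backup itself can hold off sleep; a sketch reusing the tar backup from above:

  caffeinate -i tar -cjf $FILE.tbz ~/src   # -i prevents idle sleep while tar runs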
