In one of the builds I'm doing I have opted to use new WD EARS drives, which have 4 KB physical sectors reported as 512 B logical sectors. Many of you will have already read about this in popular articles. I'm here to give a warning and some information about using these drives in a Linux environment.

If you are using a single drive as a primary or storage drive on its own, keep a lookout for whether it is using its IntelliPark feature excessively. This feature, its pitfalls, and the solution are explained in a post here. The gist of it is that the drive parks its heads to conserve power after 8 seconds of inactivity, while Linux writes out its cache roughly every 20 seconds. This causes a lot of excess actuations / parkings. As the drive is rated for 1,000,000 load/unload cycles, it may reach that figure rather quickly. My drives do 16 parks per minute, so about 43 days of continuous uptime would take the drive to the manufacturer's rated maximum. If possible, set the park timeout to something much longer - see the article linked above.

To check how many parks your drive has done, run this in a terminal window in Linux: "sudo smartctl -a /dev/sda | grep -i load" (you may need to install smartctl first - it's in the smartmontools package, so run "sudo apt-get install smartmontools" if so). Note also to substitute /dev/sdX, where X is the device letter for your drive. You should then check the hours it's been powered on: "sudo smartctl -a /dev/sda | grep -i hours". This will give you the power-on hours. Take the number of load cycles, divide by the number of hours, and you get how many parks it does per hour.

Secondly, if you're trying to create a software RAID-5 array, make sure each partition is aligned to begin on the 64th sector, i.e. on a 4 KB boundary. My RAID performance jumped 3x in speed just from getting the alignment right.
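The parks-per-hour arithmetic above can be scripted. Here is a rough sketch - the parks_per_hour helper is my own invention, the smartctl commands in the comments are the same ones described above, and the numbers in the example call are made up:

```shell
#!/bin/sh
# Hypothetical helper: given a load-cycle count and power-on hours, print the
# park rate and how long until the 1,000,000-cycle rating is reached.
# You would pull the real numbers from smartctl, e.g.:
#   cycles=$(sudo smartctl -a /dev/sda | awk '/Load_Cycle_Count/ {print $NF}')
#   hours=$(sudo smartctl -a /dev/sda | awk '/Power_On_Hours/ {print $NF}')

parks_per_hour() {
    cycles=$1
    hours=$2
    awk -v c="$cycles" -v h="$hours" 'BEGIN {
        rate = c / h
        printf "%.1f parks/hour\n", rate
        # days of continuous uptime left before hitting the 1,000,000 rating
        printf "%.1f days to 1,000,000 cycles\n", (1000000 - c) / rate / 24
    }'
}

# Example with made-up numbers: 16 parks/minute = 960 parks/hour
parks_per_hour 96000 100
```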
Check your alignment with "sudo fdisk -l -u". If any of the drives you're planning to use with RAID-5 have a partition start listed at 63, you need to change it. For example:

Run fdisk: "sudo fdisk -u /dev/sda"
Delete the existing partitions: "d"
Create a new partition: "n"
Make it a primary partition: "p"
Make it partition 1: "1"
Tell it to start at sector 64: "64"
Change the partition type (if you want the drive as part of your mdadm RAID array): "t"
Select Linux raid autodetect: "fd"
Write and quit: "w"

Repeat for all your other RAID disks / partitions. Then run:

"mdadm --create /dev/md0 --chunk=xyz --level=5 --raid-devices=n /dev/sda1 /dev/sdb1 /dev/sdc1..."

where xyz is your chunk size in kB (for lots of big files/movies use a minimum of 64, though I go for 1024) and n is the number of devices (minimum 3 for RAID-5; add more if there are more drives in your array). It should then start initialising the array at a much faster pace than if you tried to do it through Palimpsest Disk Utility (the default on Ubuntu 9.10). ZFS-Fuse gives me terrible performance because I have yet to figure out how to align it properly.

When I have time I'll reformat the above post to make it more user friendly. Hopefully it helps someone save some time.
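For reference, the alignment rule boils down to simple arithmetic: with 512 B logical sectors, a start sector is 4 KB-aligned when it is divisible by 8 (4096 / 512 = 8). A quick sketch (the aligned_4k helper is just for illustration):

```shell
#!/bin/sh
# Check whether a partition start sector is 4 KB-aligned, assuming
# 512 B logical sectors: aligned when (start % 8) == 0.
aligned_4k() {
    if [ $(( $1 % 8 )) -eq 0 ]; then
        echo "sector $1: aligned"
    else
        echo "sector $1: MISALIGNED"
    fi
}

aligned_4k 63    # old DOS-style fdisk default -> MISALIGNED
aligned_4k 64    # the start used above        -> aligned
aligned_4k 2048  # default in newer tools      -> aligned
```

This is why the start-at-63 default is the performance killer: every 4 KB filesystem block then straddles two physical sectors, forcing read-modify-write cycles on the drive.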