Oh Woe is Me: Slow Computer

I recently decided it was probably time to retire my old desktop Linux PC, which is a, um, dual 500MHz Pentium II. (With two 18GB SCSI drives in RAID-1, 1 gigabyte of ECC RAM, and an 8x Nvidia video card with 128MB of memory. What, me, outdated?) I wanted something smaller, quieter, and less power-hungry, so I picked up a 1.8GHz Mini-ITX box online. It’s small, it’s quiet, it only draws something like 65 watts of power. It also only had room for a single hard drive, and I really wanted RAID-1.

My solution? (Don’t laugh!) An IDE-to-Compact Flash adapter, and two CF cards in software RAID-1.

This is proving somewhat suboptimal…

I’m using the latest distribution of Debian, and setting up the software RAID was a breeze. Everything works fine, but even though hdparm reports very good speed – buffered disk reads of around 36MB/second – the system is slooooow; Firefox – whoops, Iceweasel – takes ten or fifteen seconds to start, and brings the load up to around 1.4 – of which about 0.9 is iowait.
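
Those numbers come from the usual suspects – something along these lines, with /dev/md0 standing in for whatever the RAID array is actually called on your system:

hdparm -t /dev/md0   # buffered sequential read test
vmstat 1             # the "wa" column shows the iowait percentage
uptime               # load average while Iceweasel is starting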

The machine has a gigabyte of RAM, and isn’t swapping; noatime is set for the filesystem; I’ve done all the performance tuning of Iceweasel that can be done, and while it’s reasonably fast once running, everything screeches to a halt while it loads, even with preload installed. Similar iowait is experienced when I run other fairly large programs – OpenOffice, the GIMP – but Iceweasel is far and away the worst of the bunch… and the program I use the most.
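
For the record, the noatime bit is just the standard fstab option – something like this line, with the device and mount point being whatever yours are:

/dev/md0  /  ext2  defaults,noatime  0  1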

I’ve reinstalled the system a couple times, now – using new, identical CF cards, either both on the same IDE channel, or one on each, and using new, identical microdrives on separate channels; DMA is always enabled, both by the BIOS and the OS, and I always use EXT2 for the filesystem, because a non-journalling FS should mean longer flash memory life. (Using the microdrives, I tried once each with EXT3 and XFS; no difference.) Each time, everything works fine, hdparm gives great numbers for the disk I/O speed… and every time, trying to load anything fairly large from the disks results in 90-95% iowait, for several seconds. I haven’t tried a single CF card or microdrive yet; might as well just use a 3.5″ hard drive if I can’t have RAID.
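
For the curious, the RAID setup itself is nothing exotic – the Debian installer handles it, but done by hand it would amount to something like this, with the device names being placeholders for whatever the cards show up as:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdd1
mkfs.ext2 /dev/md0
hdparm -d /dev/hdc /dev/hdd   # should report "using_dma = 1 (on)" for each card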

Am I just spoiled, coming from 7200RPM SCSI drives on a high-end RAID controller? Does Linux software RAID (md) just flat-out hate compact flash and microdrives? Or is there something else wrong, here?

Published in: General | on March 26th, 2009 | 9 Comments


9 Comments

  1. On 3/26/2009 at 7:38 pm chris Said:

    you are probably just suffering from the monstrous slowness of cf drives as disks. I had a similar experience with one of the drives in my eee 900.

    what do you get from “hdparm -t”?

  2. On 3/26/2009 at 8:52 pm Nemo Said:

    Generally, I get 36.nnMB/sec from “hdparm -t”, but I noticed that after the third or so time of doing it, back to back, the speed drops dramatically – to 3.4MB/sec, and eventually ~992KB/sec. That kind of made me wonder whether md was doing some kind of read/write caching that was having a negative impact on performance here.
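
    For what it’s worth, the back-to-back test was just repeated runs of something like this, with /dev/md0 standing in for the array:

    for i in 1 2 3 4 5; do hdparm -t /dev/md0; done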

    I thought CF disks and microdrives were supposed to be reasonably fast?

  3. On 3/27/2009 at 6:45 pm Tim Said:

    Linux is evil. It does extensive write caching by default:

    http://www.westnet.com/%7Egsmith/content/linux-pdflush.htm

    You will not see the real write performance of the drives until you either crank down some of the sysctls that you see in that document, or until you write enough data to blow out the OS and drive caches.
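
    The knobs in question are the vm.dirty_* sysctls covered on that page; the values here are just examples, not recommendations:

    sysctl -w vm.dirty_ratio=10
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_expire_centisecs=1000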

    For read performance, this should be less of a problem, though Linux is very aggressive about caching data, so unless you are doing large reads, you may not be able to see the true performance of the drives. If you want to get the real numbers, get iozone and tell it to do its thing on files that are substantially larger than the size of all the memory that might be caching data.
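
    Something along these lines should do it, with the file size picked to be several times your 1GB of RAM and the path being wherever you have room:

    iozone -s 4g -r 64k -i 0 -i 1 -f /home/you/iozone.tmp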

    I’m willing to bet that this is just caching trickiness, and that the real read/write speeds of CF and your microdrives are not so hot. Microdrives have several things working against them: limited # of platters, high rotational latency, usually slower seek times, smaller caches. CF cards have limitations as well: very slow write speeds (though newer ones are supposedly much better for this), limited # of writes. Though they can be pretty fast for reads, from what I have read. I guess it depends on how new/awesome your CF cards are. Plus, you have the additional latency of something that converts from IDE to CF. All in all, you probably took a step down from a performance standpoint.

    What you might think about doing is to create a tmpfs filesystem at boot and then start using it for I/O-hungry temporary things that cause you pain, like Firefox’s disk cache or scratch space for CVS (if you use that) or whatever. For stuff that gets read a lot, you could create a tmpfs, copy that data onto it, and then mount it over the top of the original, like /usr/local/bin, so long as you don’t write to it. Clearly this burns up memory, and adds complexity, but it’s something you could play with. 🙂 Or you could try out one of these: http://en.wikipedia.org/wiki/List_of_file_systems#File_systems_optimized_for_flash_memory_.2F_solid_state_media
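
    A rough sketch of the tmpfs idea, with the size and mount point being whatever suits you:

    mount -t tmpfs -o size=256m tmpfs /mnt/fastcache
    # then point Iceweasel’s disk cache at it, e.g. set
    # browser.cache.disk.parent_directory to /mnt/fastcache in about:config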

    Sorry, I’m procrastinating at work and so this turned into kind of a long response. Hope it helps.

  4. On 3/28/2009 at 2:09 am Mark Said:

    I also get poor performance using my 233x speed Compact Flash card with Fedora 10 installed on it. I get 35MB/s as the hdparm read speed, which is the speed advertised. Flash file systems like the ones Tim suggested are only for raw flash memory; they shouldn’t be used with CF cards or USB flash sticks since they already contain the wear levelling and block mapping that flash file systems are designed to provide.

    It’d be nice if someone knew the true bottleneck when using CF drives in Linux, but I haven’t been able to find anything helpful.

  5. On 3/28/2009 at 4:36 pm Nemo Said:

    Well, I’ve been playing around, and I’m strongly leaning towards suspecting software RAID (i.e. md) as the culprit in my case. For kicks, I reinstalled Debian onto the two CF cards, but not as RAID; /boot and / on one card, /home and /usr on the other, all formatted as ext2. The performance isn’t earth-shattering, but it’s easily thirty times faster than any of the RAID setups I tried; the base system installs in several minutes, rather than several hours, and desktop performance in KDE is basically how it should be on a 1.8GHz machine, i.e. plenty fast. I/O wait has, under most circumstances, largely disappeared, as well.

    I wonder if part of the problem is that while CF cards have an IDE interface, they don’t support the full IDE/EIDE command set. The kernel itself talks to them okay, but md… doesn’t?

    There’s been some discussion in recent years that Linux development is being performed on and for (very!) high-end machines, and is sometimes not as supportive of not-so-bleeding-edge hardware as people would like. I wonder if this is the case here; does md assume that if you’re using RAID today, in 2009, you’re doing so on high-speed EIDE/SCSI/SATA drives that fully support the extended command sets and have big onboard read/write caches?

    I expect a performance hit for using software RAID, but it shouldn’t, IMO, be more than, at worst, 25%…

  6. On 3/28/2009 at 7:40 pm Mark Said:

    I’m not even using RAID and it goes slow. I’m using a 16GB CF card on my laptop via an ExpressCard adapter. I doubt md has anything to do with it; my guess would be that it has to do with the way writes and reads are queued and executed on the device. I made some changes based on the first link in Tim’s post, and that helped increase the responsiveness of applications, but it doesn’t seem like the time it takes to perform reads or writes has changed. Linux does, however, boot in 30 seconds and only takes a few more seconds to load my desktop once I log in. It seems like only write performance is suffering.

    Have you compared an hdparm -t on each drive to an hdparm -t on the RAID array? I have a feeling you’ll find that the RAID array is twice as fast, as it should be. Try using dd to benchmark both read and write performance to the file systems on your CF cards. Here are my results; use the same commands and let me know yours. If possible, can you try the file read/writes using various file systems on one of your CF disks?

    Write to file:

    $ dd if=/dev/zero of=test count=10000 conv=fdatasync
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 2.16445 s, 2.4 MB/s

    Read from file (using direct IO is slower, but I don’t know any other way to bypass caching):

    $ dd if=test of=/dev/null iflag=direct
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 5.21797 s, 981 kB/s

    Read directly from drive /dev/sdb (my CF disk; I’m modifying skip so that I read uncached areas of the drive, so only use a particular value once):

    $ sudo dd if=/dev/sdb of=/dev/null count=10000 skip=10000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.681693 s, 7.5 MB/s

    $ sudo dd if=/dev/sdb of=/dev/null count=10000 skip=20000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 1.24874 s, 4.1 MB/s

    Reads directly from the device look slower than they should be.

  7. On 3/29/2009 at 1:32 pm Nemo Said:

    Actually, my individual cards are always faster than the RAID array. Strange.

    I’m currently running in non-RAID, so these should be fairly comparable to your situation:

    dd if=/dev/zero of=test count=10000 conv=fdatasync
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.888413 s, 5.8 MB/s

    dd if=test of=/dev/null iflag=direct
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 1.62937 s, 3.1 MB/s

    sudo dd if=/dev/hdd of=/dev/null count=10000 skip=10000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.13225 s, 38.7 MB/s

    sudo dd if=/dev/hdd of=/dev/null count=10000 skip=20000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.131822 s, 38.8 MB/s

    I wonder if your performance is because of the ExpressCard adapter? Isn’t that basically a USB interface?

    I’m using 133X Transcend CF cards, BTW.

    (Oh, and I tried on a Hitachi Microdrive, by the way: direct reads varied between 4.6MB/sec and 8.5MB/sec; reading/writing a file were not much worse than on the regular CF card:

    dd if=/dev/zero of=test count=10000 conv=fdatasync
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 1.17217 s, 4.4 MB/s

    dd if=test of=/dev/null iflag=direct
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 1.65727 s, 3.1 MB/s

    …)

  8. On 3/29/2009 at 5:54 pm Mark Said:

    I’m using a RiDATA 233x 16GB CF card. You could definitely be correct that the slowdown is in the ExpressCard adapter; however, ExpressCard operates at 480 Mb/s in USB mode and 2.5 Gb/s in PCI Express mode, so that shouldn’t be the bottleneck unless the adapter is poorly designed. It’s the 2.0 (newer) model out of the two adapters I could have purchased. I’d give you the name of it, but I kind of need it at the moment. 😉

    In your disk tests, it looks like you were reading from cached parts of the disk, since I get those same results if I re-read the same section. Try stepping up the skip value some more and see if the speed drops at all.

  9. On 4/2/2009 at 3:11 pm Nemo Said:

    Doesn’t seem to matter much:

    sudo dd if=/dev/hdd of=/dev/null count=10000 skip=927000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.289381 s, 17.7 MB/s

    sudo dd if=/dev/hdd of=/dev/null count=10000 skip=814000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.128051 s, 40.0 MB/s

    sudo dd if=/dev/hdd of=/dev/null count=10000 skip=202000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.131031 s, 39.1 MB/s

    sudo dd if=/dev/hdd of=/dev/null count=10000 skip=001000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.128186 s, 39.9 MB/s

    sudo dd if=/dev/hdd of=/dev/null count=10000 skip=9876543
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.130148 s, 39.3 MB/s