Running Debian (Xen) with 4 GB of memory and four small, lightly loaded
VMs. The hardware is a Dell 2950 with internal disks mirrored as RAID1
and an external disk pack configured with hardware RAID5, on different
controllers (#0 is the SCSI tape; #1 is SAS, 4x 136 GB disks; #2 is
SATA, 7x 700 GB). On top of these two disk groups I have a pair of LVM
volume groups.
$ lsscsi
[0:0:4:0] tape IBM ULTRIUM-TD3 6B20 /dev/st0 # Tape LTO-3
[1:0:8:0] enclosu DP BACKPLANE 1.05 -
[1:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda # LVM vg1a
[1:2:1:0] disk DELL PERC 5/i 1.03 /dev/sdb # LVM vg1b
[2:0:1:0] enclosu DELL MD1000 A.03 -
[2:2:0:0] disk DELL PERC 5/E Adapter 1.03 /dev/sdc # LVM vg5
I've been trying to work out why my tape backups take so long. LTO-3
drives aren't slow; another one connected to a Solaris box streams at
around 60 MB/s. If I write to tape from disk I get only around 6 MB/s,
but when I read the resulting bytestream back from tape (without
reference to disk) I get 60 MB/s. If I could work out a reliable way of
generating a pseudo-random, non-compressible bytestream in memory I'd
try streaming that to tape, but I suspect I'd get the full 60 MB/s
there too.
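One way to get a non-compressible stream without touching disk is to
encrypt /dev/zero: cipher output is effectively random, and unlike
/dev/urandom (which is usually too slow to saturate an LTO-3 drive) it
can be generated faster than the tape can take it. A sketch, assuming
the tape is /dev/st0 as in the listing above and that your openssl
build has aes-128-cbc (the passphrase and the 4 GiB count are
arbitrary):

```shell
# Encrypt an endless stream of zeros; the ciphertext is pseudo-random,
# so the drive's hardware compression cannot inflate the apparent
# throughput.  dd reports the achieved MB/s when it finishes.
openssl enc -aes-128-cbc -pass pass:bench -nosalt </dev/zero 2>/dev/null \
    | dd of=/dev/st0 bs=1024k count=4096
```

If that streams at roughly 60 MB/s, the drive and SCSI path are fine
and the problem is entirely on the disk-read side.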
When I read from the RAID5 disk pack to /dev/null (cpio -H ustar |
dd bs=1024k) I get 6 MB/s, so the disk-read path seems pretty clearly
to be the bottleneck. Fortunately it's not an active filesystem, but
I'd still like to be able to do a 500 GB backup in a sensible time.
Reading from the internal RAID1 disk pack gives 16 MB/s, which also
doesn't seem that fast to me.
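To separate the filesystem/cpio path from the array itself, it may be
worth timing a plain sequential read of the block device and checking
the kernel read-ahead, which is often too small to keep a RAID5 stripe
busy. A sketch, assuming /dev/sdc is the RAID5 pack as in the listing
above (the 8192-sector value is just an example to try, not a
recommendation):

```shell
# Raw sequential read, bypassing filesystem and cpio (run as root);
# dd reports the achieved MB/s when it finishes.
dd if=/dev/sdc of=/dev/null bs=1024k count=2048

# If the raw read is also ~6 MB/s, look at read-ahead: sequential RAID5
# reads often need a larger setting to keep all spindles streaming.
blockdev --getra /dev/sdc        # current read-ahead, in 512-byte sectors
blockdev --setra 8192 /dev/sdc   # try 4 MB read-ahead, then re-run the dd
```

If the raw dd is fast but cpio is slow, the bottleneck is in the
filesystem (fragmentation, many small files); if the raw dd is also
around 6 MB/s, it's the controller or array configuration.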
Is there anything I can do about this? Rebuilding with software RAID
is feasible, but it would be very time-consuming, and I don't want to
go down that route unless someone can help me "prove" it will produce
a significantly faster subsystem.
Can any of you suggest improvements, please?
9/9/2008 8:59:27 AM