We've been testing a 3510 with dual controllers, the host is a 6800
with 4 HBA 2 Gb/s fibre PCI cards. The numbers from bonnie++ are pretty
disappointing. On block I/O reads we are only seeing about 70 MB/s.
We have an Ultra320 array which gets numbers in the 150 MB/s range. We
thought our large investment in the 3510s would blow the doors off the
Ultra320-based array we have.
My concern is that we are not utilizing all 4 paths to the array in an
active-active load balanced way. According to Sun docs the mpathadm
command should show you the state and config of the paths, but
apparently this command is not even released yet.
We've turned multipathing on with the "stmsboot -e" command. Is there
anything else we need to do to ensure we are getting load balanced I/O
across the paths? Is there a command to run to view this?
We're running 5.10 6/06.
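For reference, the kind of commands that should show the path state on
Solaris 10 look roughly like this (the long device name is only a
placeholder, and mpathadm may or may not exist on your release):

    # show how the old cXtYdZ names map onto the scsi_vhci devices
    stmsboot -L

    # per-LUN path listing for one of the MPxIO devices (placeholder name)
    luxadm display /dev/rdsk/c6t600C0FF00000000012345678d0s2

    # if mpathadm is present on your update level:
    mpathadm list lu
    mpathadm show lu /dev/rdsk/c6t600C0FF00000000012345678d0s2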
> We've been testing a 3510 with dual controllers, the host is a 6800
> with 4 HBA 2 Gb/s fibre PCI cards. The numbers from bonnie++ are pretty
> disappointing. On block I/O reads we are only seeing about 70 MB/s.
I don't think this is a multipathing issue: a single 2 Gb/s fibre link
should be able to handle well over 100 MB/s under good conditions (not
lots of really tiny reads, in other words). So if you're only getting
70 MB/s you probably have some other issue: fix that first.
One naive thing would be to try dd'ing one of the (block? raw? I never
know) devices exposed by the 3510 to /dev/null with a suitably large
block size (I don't know exactly what 'large' means; I try a meg or so,
which will presumably be large enough that the actual transfers are as
big as they can be) and see what throughput you get. This isn't meant
as a reasonable test of how the thing should perform in real life, but
it ought to give you some kind of peak figure for a completely
sequential transfer with no filesystem overhead. It should also give
some hint as to whether multipathing is doing anything: if it's less
than 100 MB/s then something is badly wrong; if it's a bit under
200 MB/s you're using a single path; and if it's over 200 MB/s you're
using more than one.
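To make that concrete, a minimal sketch of the dd run (placeholder
device name; slice 2 is conventionally the whole-disk slice, and timex
just reports elapsed time so you can work out MB/s afterwards):

    # sequential read straight off the raw device, 1 MB transfers, ~10 GB total
    timex dd if=/dev/rdsk/c6t600C0FF00000000012345678d0s2 \
        of=/dev/null bs=1024k count=10240

    # 10240 x 1 MB moved; divide by the elapsed seconds for MB/s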
I did have some figures for a 6800 with a faintly similar config (I
think a 3510 with only 2 paths), but I'm afraid I've lost them.
3510 2 server configuration

Hi all,
I want to connect a 3510 with 2 controllers to 2 different hosts. The
hosts each have two single-port FC HBAs installed. I want to make this as
redundant as possible by using the Solaris 10 multipathing software and
connecting each server to one port on each of the controllers. However,
I cannot seem to get the host LUN mappings right. I don't even know if
I have the cabling correct.
Host1 HBA 1 - connected to Ch. 0 on top controller.
Host1 HBA 2 - connected to Ch. 5 on bottom controller.
Host2 HBA 1 - connected to Ch. 1 on top controller.
Host2 HBA 2 - connected to Ch. 4 on bottom controller.
I can see a LUN presented to Host1 that I created on the controller,
but when I try to assign that LUN to Ch. 5 on the bottom controller it
says "No logical drive available".
What am I doing wrong?
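For what it's worth, a sketch of the standard Solaris 10 commands that
show what each host actually sees on its HBAs (the output depends
entirely on the cabling and the 3510's LUN maps; the array-side mapping
itself is done through sccli or the firmware menus, so check the 3000
family docs for that part):

    # what each HBA sees on its channel, and the HBA/port state
    cfgadm -al
    fcinfo hba-port

    # rescan FC devices, then check whether the LUN shows up as a disk
    luxadm probe
    format </dev/null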
> Hi all,
> I want to connect a 3510 with 2 controllers to 2 different hosts. The
> hosts each have two single-port FC HBAs installed. I want to make this as
> redundant as possible by using the Solaris 10 multipathing software and
> connecting each server to one port on each of the controllers. However,
> I cannot seem to get