
6120 Performance problems

Discussion in 'Sun Hardware' started by Konstantinos Agouros, Jan 1, 2005.

  1. Hi,

    at a customer site I have two 6120s, each one connected in a loop to an
    fc-card in a Sun480. Both are configured as RAID5, and using Veritas a
    1.8TB stripe is put across them. The whole thing runs a mysql database
    and is terribly slow (mysql running at 5% CPU and the system running
    between 50 to 90% iowait). At one point the batteries failed and the
    cache was off, but it was reactivated, at least according to what the
    fru commands on the raid tell me. Is there any other way to debug this?

    Konstantinos Agouros, Jan 1, 2005

  2. Konstantinos Agouros

    David Wade Guest

    I see the 6120 has FC disks so it should not be too bad, but the stats
    appear to indicate an i/o hold-up.
    It's been a while since I played with this sort of box, and I am not
    familiar with MySQL, but the following points come to mind...

    RAID-5 is generally very slow on write. What is the read/write ratio on
    the database?
    How big are the writes (SQL page size)? Can you change the chunk size
    on the 6120 to match?
    How many drives do you have in the R5?
    If there are many, can you configure as raid 0+5?
    Have you split the transaction logs onto a separate raid set to the
    actual data?
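The RAID-5 write concern above comes down to the small-write penalty: every small write costs a read-modify-write cycle of four disk I/Os (read old data, read old parity, write new data, write new parity), versus two for a mirrored layout. A back-of-envelope sketch; the 100 IOPS-per-spindle figure is an assumed illustration, not a measured number from this thread:

```python
# Back-of-envelope write-throughput comparison for RAID-5 vs RAID-10.
# Assumes every small write on RAID-5 triggers a full read-modify-write
# (4 physical I/Os); the per-disk IOPS figure is hypothetical.

def raid5_write_iops(disks, per_disk_iops):
    # Each logical small write costs 4 physical I/Os across the set.
    return disks * per_disk_iops / 4

def raid10_write_iops(disks, per_disk_iops):
    # Each logical write costs 2 physical I/Os (data + mirror copy).
    return disks * per_disk_iops / 2

# 14 drives (as in each 6120 here), assuming ~100 IOPS per FC spindle.
print(raid5_write_iops(14, 100))   # 350.0
print(raid10_write_iops(14, 100))  # 700.0
```

The exact numbers do not matter; the point is the factor-of-two gap in small-write capacity, which is why the read/write ratio question is the first thing to answer.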

    <<If I am telling you to "suck eggs" sorry, but sometimes we omit the
    David Wade, Jan 1, 2005

    Depending on whether someone is using the frontend, it's either mostly
    writes or half and half.
    Nope, all goes on the same stripe.
    Konstantinos Agouros, Jan 2, 2005
  4. Konstantinos Agouros

    McBofh Guest

    Are you using raid5 from Veritas' perspective, or just an ordinary
    stripe on top of the hardware raid?
    Are you mirroring across two ports on a single card, or across
    multiple cards? If multiple cards, are they both in 66MHz slots?

    Software raid5 is horrendous, hardware raid5 in the 6x20 line
    is pretty good.

    Cache status (writethrough/writebehind) will definitely affect
    your performance.

    If you check the output of 'sys stat' do you see an "rw" line
    or an "mpxio" line? "mpxio" is what it should usually be set to.

    Do you have error messages on the sf480 relating to the fc port
    that the 6120s are connected to?

    Are you using
    -- vxfs? (if so what version)
    -- ufs?
    -- is logging turned on?
    -- directio or qlog?

    Have you tuned /etc/system? what settings are there?

    Have you checked the lockstat output at a 60sec granularity
    while the system is showing the problem?

    What does the 'iostat -xn 2' output show? What about vxstat?

    If you want help from the group, please provide some data.

    McBofh, Jan 3, 2005
    The raid5 is realized inside the two 6120s: in each, 14 disks form a
    ~850GB raid. This is then put into a 1.8TB stripe using Veritas
    software.
    We have two FC cards (one for each 6120), and I am pretty sure
    that both are in the 66MHz slots.
    Writebehind is enabled (at least that's what it tells me).
    I get a:
    <1>sys stat
    Unit State Role Partner
    ----- --------- ------ -------
    1 ONLINE Master

    on both raids. If you mean sys list, I get:

    sys list
    controller : 2.5
    blocksize : 16k
    cache : auto
    mirror : auto
    mp_support : none
    naca : off
    rd_ahead : on
    recon_rate : med
    sys memsize : 256 MBytes
    cache memsize : 1024 MBytes
    fc_topology : auto
    fc_speed : 2Gb

    But since there is only one arbitrated loop each, I don't think I need
    mp_support turned on, or did I miss something?
    Yes, vxfs (not UFS); on Sol8 it's version 3.5 with a point patch.
    mount gives me:
    /usr/local/mysql on /dev/vx/dsk/mysqldg/mysqlvol read/write/setuid/delaylog/largefiles/ioerror=mwdisable/dev=39dc908 on Mon Jan 3 10:14:34 2005
    Only the entries vx-install added, and a little bit of turning up
    shared memory for mysql.
    Nope, but how shall I call it?
    iostat -xn 2
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0
    1.8 2.2 267.1 289.0 0.0 0.1 1.2 13.1 0 2 c1t1d0
    1.8 2.2 269.4 289.2 0.0 0.1 1.2 13.5 0 2 c1t0d0
    2.8 45.0 56.4 502.5 13.1 41.4 274.8 865.4 38 93 c2t1d0
    2.8 45.5 57.2 487.1 11.1 40.1 229.8 829.1 34 94 c3t1d0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 mysqlhost:vold(pid432)
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t1d0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
    0.0 49.9 0.0 591.2 3.8 48.0 76.9 961.1 39 100 c2t1d0
    0.0 50.4 0.0 678.1 1.5 44.2 30.1 877.3 26 100 c3t1d0

    # vxstat -g mysqldg
    vol mysqlvol 141194 2626821 7849416 68972845 235.5 1190.7
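For what it's worth, the symptom is visible in the iostat output above: c2t1d0 and c3t1d0 sit at ~100% busy with active service times near a second, while the internal disks idle. A small, hypothetical helper to flag such devices in `iostat -xn`-style lines (the thresholds are illustrative, not Sun-recommended values):

```python
# Hypothetical helper: flag saturated devices in `iostat -xn` output.
# A device is suspect when %b (busy) is high and asvc_t (active service
# time, ms) is far above what a healthy FC LUN should show.
# Thresholds are illustrative assumptions.

def flag_saturated(lines, busy_pct=90, asvc_ms=100):
    suspects = []
    for line in lines:
        f = line.split()
        if len(f) != 11 or f[0] == "r/s":
            continue  # skip headers and malformed lines
        asvc_t, pct_b, device = float(f[7]), int(f[9]), f[10]
        if pct_b >= busy_pct and asvc_t >= asvc_ms:
            suspects.append(device)
    return suspects

# Sample rows taken from the output quoted above.
sample = [
    "r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device",
    "0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0",
    "0.0 49.9 0.0 591.2 3.8 48.0 76.9 961.1 39 100 c2t1d0",
    "0.0 50.4 0.0 678.1 1.5 44.2 30.1 877.3 26 100 c3t1d0",
]
print(flag_saturated(sample))  # ['c2t1d0', 'c3t1d0']
```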

    I am happy to; I only need to know what to look for.
    Konstantinos Agouros, Jan 3, 2005
  6. Konstantinos Agouros

    McBofh Guest

    that seems ok iirc.

    should be set to mpxio, even for a single loop.

    I can't recall the lockstat params exactly at the moment,
    but something along the lines of

    for i in 1 2 3 4 5 6 7 8 9 10; do
        lockstat -kgAEbc sleep 60 >> lockstat.out 2>&1
    done

    If you log a call with Sun, ask for their GUDS script
    because it gathers all this data for you.

    You should be looking at the syslog file on the 6120s
    and the /var/adm/messages* files on the SP (if you've
    got an SP).

    Are the fc disks up to date on firmware?
    Are you running with current 6120 firmware (loop card,
    controller, etc)?

    best regards,
    McBofh, Jan 3, 2005
    Can I change that without destroying the raid?
    Some guy from Sun mentioned that we should change the blocksize
    to 24k, and that would mean resetting the whole raid. Also, can this
    be done while the raid is online?
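On the chunk-size point raised earlier in the thread: a write whose size and alignment match the array block size can be served in a single stripe unit, while a mismatched one straddles units and turns into extra read-modify-write work. A toy illustration (the sizes are hypothetical examples, not 6120 internals):

```python
# Toy illustration: how many stripe units a write of a given size and
# offset touches. All sizes in KB; values are hypothetical examples.

def stripe_units_touched(offset_kb, size_kb, unit_kb):
    first = offset_kb // unit_kb
    last = (offset_kb + size_kb - 1) // unit_kb
    return last - first + 1

# A 16k database page aligned on a 16k array block: one unit touched.
print(stripe_units_touched(32, 16, 16))  # 1
# A 24k write against the same 16k block size straddles two units.
print(stripe_units_touched(48, 24, 16))  # 2
```

This is why matching the array block size to the database page size (or a clean multiple of it) matters more than the absolute value chosen.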

    I will try this tomorrow.

    I also checked the syslog of both raids... nothing.
    As far as I heard, there's at least a firmware upgrade for the
    6120 available. We will get this after opening an official call
    with Sun.

    Thanks for the help,

    Konstantinos Agouros, Jan 3, 2005
    Well, and the solution:

    Firmware patches for the raid and the disks in it. Now it is as fast
    as one could expect considering the price :)


    Konstantinos Agouros, Jan 4, 2005
  9. Konstantinos Agouros

    Andre Guest

    Err, I'm not a DBA, but I know enough to know this is a Bad Thing on a
    busy database. Separate spindles for transaction/redo logs.
    Andre, Jan 6, 2005
  10. Konstantinos Agouros

    McBofh Guest

    Actually, when you have a hardware raid engine, this
    separate spindles thing is irrelevant.

    You create luns/volumes/pools on your storage array
    and export them (with lun mapping and lun masking)
    to the relevant host(s). The engine handles the distribution
    of read and write operations efficiently.

    What makes the difference over physical spindles?
    A whopping great cache. In the 6120 the cache is
    1GB. In the HDS 99[6789]0 series, the cache can be
    as large as 32GB.

    McBofh, Jan 6, 2005
  11. Konstantinos Agouros

    David Wade Guest

    David Wade, Jan 6, 2005
    Konstantinos Agouros, Jan 7, 2005
