Motherboard Forums


Gigaswift NICs only doing 5-10 MB/s?

 
 
bl8n8r
02-09-2009, 06:14 PM
Hello All,
I have two Sun V880s hooked up on ce2 with an unmanaged copper gigabit
switch in between. As near as I can tell, both NICs are linked at
1000 Mbps:

NOTICE: ce2: xcvr addr:0x01 - link up 1000 Mbps full duplex

The problem is that transferring files over scp between the two
interfaces yields only around 5-10 MB/s. CPU utilization (sar -u)
on one of the v880s averages around 54%, so maybe the V880 is already
pushing as much as it can. I'm stumped. Anyone know why this is?
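For scale, a quick back-of-envelope check (plain shell arithmetic, nothing V880-specific) of how far 5-10 MB/s falls below gigabit line rate:

```shell
# Gigabit line rate in MB/s: 1000 Mbit/s divided by 8 bits per byte
line_rate=$(( 1000 / 8 ))                 # 125 MB/s theoretical ceiling
observed=10                               # best observed scp rate, MB/s
pct=$(( observed * 100 / line_rate ))
echo "scp is using about ${pct}% of the link"
```

Even the best case here leaves over 90% of the link idle, which points at something other than link speed.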

(box 1)
# kstat ce:2
module: ce instance: 2
name: ce2 class: net
adv_cap_1000fdx 1
adv_cap_1000hdx 1
adv_cap_100T4 0
adv_cap_100fdx 1
adv_cap_100hdx 1
adv_cap_10fdx 1
adv_cap_10hdx 1
adv_cap_asmpause 0
adv_cap_autoneg 1
adv_cap_pause 0
alignment_err 0
brdcstrcv 9
brdcstxmt 407
cap_1000fdx 1
cap_1000hdx 1
cap_100T4 0
cap_100fdx 1
cap_100hdx 1
cap_10fdx 1
cap_10hdx 1
cap_asmpause 0
cap_autoneg 1
cap_pause 0
code_violations 0
collisions 0
crc_err 0
crtime 117.4819754
excessive_collisions 0
first_collision 0
ierrors 0
ifspeed 1000000000
ipackets 316937
ipackets64 316937
ipackets_cpu00 153754
ipackets_cpu01 43309
ipackets_cpu02 31432
ipackets_cpu03 88442
late_collisions 0
lb_mode 0
length_err 0
link_T4 0
link_asmpause 0
link_duplex 2
link_pause 0
link_speed 1000
link_up 1
lp_cap_1000fdx 1
lp_cap_1000hdx 1
lp_cap_100T4 0
lp_cap_100fdx 1
lp_cap_100hdx 1
lp_cap_10fdx 1
lp_cap_10hdx 1
lp_cap_asmpause 1
lp_cap_autoneg 1
lp_cap_pause 1
mac_mtu 1522
mac_reset 0
mdt_hdr_bind_fail 0
mdt_hdr_bufs 0
mdt_hdrs 0
mdt_pkts 0
mdt_pld_bind_fail 0
mdt_pld_bufs 0
mdt_plds 0
mdt_reqs 0
multircv 0
multixmt 0
norcvbuf 0
noxmtbuf 0
obytes 3245606
obytes64 3245606
oerrors 0
opackets 48640
opackets64 48640
pci_bad_ack_err 0
pci_bus_speed 33
pci_dmarz_err 0
pci_dmawz_err 0
pci_drto_err 0
pci_err 0
pci_parity_err 0
pci_rma_err 0
pci_rta_err 0
peak_attempts 0
promisc off
qos_mode 0
rbytes 449434520
rbytes64 449434520
rev_id 48
rx_allocb_fail 0
rx_hdr_drops 0
rx_hdr_pkts 1289
rx_inits 0
rx_jumbo_pkts 0
rx_len_mm 0
rx_msgdup_fail 0
rx_mtu_drops 0
rx_mtu_pkts 315648
rx_new_hdr_pgs 40
rx_new_mtu_pgs 78911
rx_new_nxt_pgs 0
rx_new_pages 78951
rx_no_buf 0
rx_no_comp_wb 0
rx_nocanput 0
rx_nxt_drops 0
rx_ov_flow 0
rx_pkts_dropped 0
rx_rel_bit 16
rx_rel_flow 0
rx_reused_pgs 78588
rx_split_pkts 0
rx_tag_err 0
rx_taskq_waits 0
snaptime 99936.4664675
taskq_disable 0
trunk_mode 0
tx_allocb_fail 0
tx_ddi_pkts 0
tx_dma_bind_fail 0
tx_dvma_pkts 230
tx_hdr_pkts 48682
tx_inits 0
tx_jumbo_pkts 0
tx_max_pend 32
tx_max_pkt_err 0
tx_msgdup_fail 0
tx_no_desc 0
tx_nocanput 0
tx_queue0 10560
tx_queue1 14411
tx_queue2 18748
tx_queue3 5194
tx_starts 48912
tx_uflo 0
xcvr_addr 1
xcvr_id 536894584
xcvr_inits 5
xcvr_inuse 1



(box 2)
# kstat ce:2
module: ce instance: 2
name: ce2 class: net
adv_cap_1000fdx 1
adv_cap_1000hdx 1
adv_cap_100T4 0
adv_cap_100fdx 1
adv_cap_100hdx 1
adv_cap_10fdx 1
adv_cap_10hdx 1
adv_cap_asmpause 0
adv_cap_autoneg 1
adv_cap_pause 0
alignment_err 0
brdcstrcv 380
brdcstxmt 9
cap_1000fdx 1
cap_1000hdx 1
cap_100T4 0
cap_100fdx 1
cap_100hdx 1
cap_10fdx 1
cap_10hdx 1
cap_asmpause 0
cap_autoneg 1
cap_pause 0
code_violations 0
collisions 0
crc_err 0
crtime 82.1649441
excessive_collisions 0
first_collision 0
ierrors 0
ifspeed 1000000000
ipackets 48567
ipackets64 48567
ipackets_cpu00 13095
ipackets_cpu01 5506
ipackets_cpu02 6551
ipackets_cpu03 23415
late_collisions 0
lb_mode 0
length_err 0
link_T4 0
link_asmpause 0
link_duplex 2
link_pause 0
link_speed 1000
link_up 1
lp_cap_1000fdx 1
lp_cap_1000hdx 1
lp_cap_100T4 0
lp_cap_100fdx 1
lp_cap_100hdx 1
lp_cap_10fdx 1
lp_cap_10hdx 1
lp_cap_asmpause 1
lp_cap_autoneg 1
lp_cap_pause 1
mac_mtu 1522
mac_reset 0
mdt_hdr_bind_fail 0
mdt_hdr_bufs 26317
mdt_hdrs 315543
mdt_pkts 315543
mdt_pld_bind_fail 0
mdt_pld_bufs 26365
mdt_plds 315591
mdt_reqs 26317
multircv 0
multixmt 0
norcvbuf 0
noxmtbuf 0
obytes 449416932
obytes64 449416932
oerrors 0
opackets 316663
opackets64 316663
pci_bad_ack_err 0
pci_bus_speed 33
pci_dmarz_err 0
pci_dmawz_err 0
pci_drto_err 0
pci_err 0
pci_parity_err 0
pci_rma_err 0
pci_rta_err 0
peak_attempts 0
promisc off
qos_mode 0
rbytes 3138060
rbytes64 3138060
rev_id 48
rx_allocb_fail 0
rx_hdr_drops 0
rx_hdr_pkts 48518
rx_inits 0
rx_jumbo_pkts 0
rx_len_mm 0
rx_msgdup_fail 0
rx_mtu_drops 0
rx_mtu_pkts 49
rx_new_hdr_pgs 1516
rx_new_mtu_pgs 12
rx_new_nxt_pgs 0
rx_new_pages 1528
rx_no_buf 0
rx_no_comp_wb 0
rx_nocanput 0
rx_nxt_drops 0
rx_ov_flow 0
rx_pkts_dropped 0
rx_rel_bit 380
rx_rel_flow 0
rx_reused_pgs 1407
rx_split_pkts 0
rx_tag_err 0
rx_taskq_waits 0
snaptime 99186.076518
trunk_mode 0
tx_allocb_fail 0
tx_ddi_pkts 96
tx_dma_bind_fail 0
tx_dvma_pkts 19
tx_hdr_pkts 1042
tx_inits 0
tx_jumbo_pkts 0
tx_max_pend 98
tx_max_pkt_err 0
tx_msgdup_fail 0
tx_no_desc 0
tx_nocanput 0
tx_queue0 14
tx_queue1 27446
tx_queue2 0
tx_queue3 0
tx_starts 1136
tx_uflo 0
xcvr_addr 1
xcvr_id 536894584
xcvr_inits 1
xcvr_inuse 1
 
Cydrome Leader
02-09-2009, 09:51 PM
bl8n8r <(E-Mail Removed)> wrote:
> Hello All,
> I have two sun v880s hooked up on ce2 with an unmanaged copper gig
> switch in between. As near as I can tell, both nics are linked at
> 1000mb:
>
> NOTICE: ce2: xcvr addr:0x01 - link up 1000 Mbps full duplex
>
> Problem is, transferring files over scp, between the two interfaces,
> results in transfer speeds at only around 5-10MB/sec. CPU utilization
> (sar -u) on one of the v880s averages around 54, so maybe the v880 is
> already pushing as much as it can. I'm stumped. Anyone know why this
> is?


What does netstat -i show?

Your switch is likely running at the wrong duplex if you see lots
of errors under Ierrs and Oerrs.
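A sketch of that check, using awk to flag any interface with non-zero Ierrs or Oerrs. The here-doc stands in for real output; on the box itself you would pipe netstat -i into the awk instead (field positions 6 and 8 for Ierrs/Oerrs are an assumption about the Solaris netstat -i layout):

```shell
# Flag interfaces with non-zero Ierrs (field 6) or Oerrs (field 8).
# On a live system, replace the here-doc with:  netstat -i | awk '...'
errs=$(awk 'NR > 1 && ($6 > 0 || $8 > 0) { print $1 }' <<'EOF'
Name  Mtu  Net/Dest  Address   Ipkts   Ierrs  Opkts  Oerrs  Collis Queue
lo0   8232 loopback  localhost 151     0      151    0      0      0
ce2   1500 box1      box1      316937  0      48640  0      0      0
EOF
)
echo "interfaces with errors: ${errs:-none}"
```

With the sample data above, no interface is flagged; real duplex mismatches usually show up as rapidly climbing Ierrs on one side.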




 
DoN. Nichols
02-10-2009, 05:34 AM
On 2009-02-09, Cydrome Leader <(E-Mail Removed)> wrote:
> bl8n8r <(E-Mail Removed)> wrote:
>> Hello All,
>> I have two sun v880s hooked up on ce2 with an unmanaged copper gig
>> switch in between. As near as I can tell, both nics are linked at
>> 1000mb:
>>
>> NOTICE: ce2: xcvr addr:0x01 - link up 1000 Mbps full duplex
>>
>> Problem is, transferring files over scp, between the two interfaces,
>> results in transfer speeds at only around 5-10MB/sec. CPU utilization
>> (sar -u) on one of the v880s averages around 54, so maybe the v880 is
>> already pushing as much as it can. I'm stumped. Anyone know why this
>> is?

>
> what's netstat -i show?
>
> it's likely your hub thing is running at the wrong duplex- if you see lots
> of errors under Ierrs and Oerrs


Also -- how much CPU is the scp chewing up in the encryption at
one end and decryption at the other end?

Just for the fun of it -- try making a transfer via rcp instead
of scp -- if you can enable in.rshd for long enough without exposing
your systems to hazards from the outside.

Do your v880s have hardware encryption installed? Does scp know
how to use it?
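One way to put a number on the CPU question is to average the %idle column from sar -u. A sketch, with a here-doc standing in for sample sar output (the column layout here is an assumption; pipe the real sar -u output in its place):

```shell
# Average the %idle column (field 5) from sar -u style output.
# The header row is skipped because its second field is not numeric.
avg_idle=$(awk '$2 ~ /^[0-9]+$/ { sum += $5; n++ } END { print int(sum / n) }' <<'EOF'
00:00:01 %usr %sys %wio %idle
00:10:00  40   14    9    37
00:20:00  42   12    8    38
EOF
)
echo "average %idle: $avg_idle"
```

If %idle stays low while scp runs, the cipher is a plausible bottleneck; if it stays high, look at the network path instead.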

Enjoy,
DoN.

--
Email: <(E-Mail Removed)> | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---
 
Doug McIntyre
02-10-2009, 01:32 PM
"DoN. Nichols" <(E-Mail Removed)> writes:
>On 2009-02-09, Cydrome Leader <(E-Mail Removed)> wrote:
>> bl8n8r <(E-Mail Removed)> wrote:
>>> [original question snipped]

>>
>> what's netstat -i show?
>>
>> it's likely your hub thing is running at the wrong duplex- if you see lots
>> of errors under Ierrs and Oerrs


> Also -- how much CPU is the scp chewing up in the encryption at
>one end and decryption at the other end.


That's most likely it right there.
scp has a huge overhead and is a poor test of network throughput.

I just did 7 MB/s scp transfers between two (non-Sun) machines. These
machines can regularly do 680 Mbps between the two of them in iperf.
FTP transfers can get 220-250 Mbps, which is more likely the limit of
the disk systems in either of these machines (i.e., not super new,
super fast disk systems).
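Putting those figures in one unit makes the gap obvious (iperf reports Mbps, while the scp rate was in MB/s):

```shell
# Same-unit comparison of the numbers above: Mbps <-> MB/s
iperf_mbps=680
iperf_mbs=$(( iperf_mbps / 8 ))   # raw TCP: ~85 MB/s
scp_mbs=7
scp_mbps=$(( scp_mbs * 8 ))       # scp: only 56 Mbps of a 680 Mbps pipe
echo "iperf ~${iperf_mbs} MB/s vs scp ${scp_mbs} MB/s"
```

So on the same hardware, scp delivered well under a tenth of what the network itself could carry.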
 
bl8n8r
02-11-2009, 05:37 PM
On Feb 10, 7:32 am, Doug McIntyre <(E-Mail Removed)> wrote:
> "DoN. Nichols" <(E-Mail Removed)> writes:


> >On 2009-02-09, Cydrome Leader <(E-Mail Removed)> wrote:
> >> what's netstat -i show?


Hmm.. Ierrs and Oerrs are both 0

>
> THats most likely it right there.
> scp has a huge overhead and is a poor test of network throughput.
>


Thanks for the replies..

I set up rsync this morning and transferred without ssh: got 13 MB/s
one time and 6 MB/s the next. A third try yielded 15 MB/s. sar is
reporting CPU %idle at 37. Maybe the v880 is just too heavily utilized?

# rsync -av --progress /tmp/foo.bin rsync://host/slash/tmp/
building file list ...
1 file to consider
foo.bin
104857600 100% 13.55MB/s 0:00:07 (xfer#1, to-check=0/1)

sent 104870488 bytes received 38 bytes 13982736.80 bytes/sec
total size is 104857600 speedup is 1.00


# rsync -av --progress /tmp/foo.bin rsync://host/slash/tmp/
building file list ...
1 file to consider
foo.bin
104857600 100% 6.82MB/s 0:00:14 (xfer#1, to-check=0/1)

sent 104870488 bytes received 38 bytes 6765840.39 bytes/sec
total size is 104857600 speedup is 1.00
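As a cross-check, the rate rsync prints can be recomputed from the byte count and elapsed time it shows (here the 7-second run):

```shell
# 104857600 bytes in 7 seconds, expressed in binary MB/s (1 MB = 1048576 bytes)
bytes=104857600
seconds=7
rate=$(awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.1f", b / s / 1048576 }')
echo "${rate} MB/s"
```

That lands close to the 13.55 MB/s rsync reported (rsync's own figure times only the transfer phase), so the printed rates are consistent with the wall-clock numbers.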
 