
SB 2000 or SB 2500: Which would you buy?

Discussion in 'Sun Hardware' started by Nemo ad Nusquam, Sep 14, 2009.

  1. SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    one. Specs aside, which one would gentle readers choose?

    Thank you.

    --
    Posted via NewsDemon.com - Premium Uncensored Newsgroup Service
     

  2. Nemo ad Nusquam <> writes:
    >SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    >one. Specs aside, which one would gentle readers choose?



    The thing that would probably push it one way or the other for me is
    that the SB2000 uses FC-AL hard drives, while the SB2500 switched back
    to SCSI (apart from the SB2500 generally having newer, bigger CPUs, etc.).
     

  3. DoN. Nichols

    DoN. Nichols Guest

    On 2009-09-14, Nemo ad Nusquam <> wrote:
    > SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    > one. Specs aside, which one would gentle readers choose?


    Well ... I *already* have a SB-2000, and enough FC disk drives
    so I would like to keep it.

    However, if I were starting from scratch, I would go for the
    SB-2500, since the FC (Fibre Channel) disk drives are harder to find at
    reasonable prices -- except occasionally at hamfests. And the SB-2000
    uses two FC drives internally, and has external 68-pin SCSI and FC (as
    well as slow USB 1.1 and rather slow FireWire built in), while the
    SB-2500 comes with a PCI card which offers several USB 2.0 ports and a
    couple of faster FireWire ports. (I can testify that the card works in
    the SB-2000 as well, as I am using one that way.)

    Other than that -- the SB-2500 seems to only be marginally
    faster (in CPU speed) than the SB-2000.

    I know that the SB-2000 is far from featherweight, and I presume
    that the SB-2500 is similar in weight.

    Enjoy,
    DoN.

    --
    Email: <> | Voice (all times): (703) 938-4564
    (too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
    --- Black Holes are where God is dividing by zero ---
     
  4. Dave

    Dave Guest

    Nemo ad Nusquam wrote:
    > SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    > one. Specs aside, which one would gentle readers choose?
    >
    > Thank you.


    I noticed on eBay the other day someone selling a Silver Blade 2500
    rather than a red one. They implied this was a newer model. I know there
    are two colours, but am unsure of the difference between them.

    I have a Blade 2000 which I really liked, but it was damaged recently by
    lightning. I am in fact going to replace it with an x86 box.

    As far as I can ascertain, there is no electrical difference between a
    Blade 1000 and a 2000. The motherboards are the same.

    As others have said, FC-AL disks are somewhat rarer than SCA. Whilst my
    machine has a pair of 147 GB disks, I can't pick them up and put them in
    any other machine, or in any external enclosure I own.

    --
    I respectfully request that this message is not archived by companies as
    unscrupulous as 'Experts Exchange'. In case you are unaware,
    'Experts Exchange' takes questions posted on the web and tries to find
    idiots stupid enough to pay for the answers, which were posted freely
    by others. They are leeches.
     
  5. Hi,

    Dave wrote:
    > Nemo ad Nusquam wrote:
    >> SB 2000 and SB 2500 are available on ebay at PC prices. I am going to
    >> buy
    >> one. Specs aside, which one would gentle readers choose?
    >>
    >> Thank you.

    >
    > I noticed on eBay the other day someone selling a Silver Blade 2500
    > rather than a red one. They implied this was a newer model. I know there
    > are two colours, but are unsure of the difference between them.

    Red is up to 1.28GHz and Silver is 1.6GHz, which is the fastest IIIi CPU.
    >
    > I have a Blade 2000 which I really liked, but it was damaged recently by
    > lightning. I am in fact going to replace it by an x86 box.
    >
    > As far as I can ascertain, there is no electrical difference between a
    > blade 1000 and a 2000. The motherboards are the same.
    >

    SCSI (1000) vs. FC-AL (2000), but I would assume that the motherboard
    has been revved since the first 650MHz 1000!


    > As others have said, FC-AL disks are somewhat rarer than SCA. Whilst my
    > machine has a pair of 147 GB disks, I can't pick them up and put them in
    > any other machine, or in any external enclosure I own.
    >


    /michael
     
  6. Hi,

    Nemo ad Nusquam wrote:
    > SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    > one. Specs aside, which one would gentle readers choose?
    >
    > Thank you.
    >
    >

    The Blade 1000/2000 uses the server CPU UltraSPARC III in various
    speeds and cache sizes; the Blade 1500/2500 uses the desktop CPU
    UltraSPARC IIIi. The IIIi runs cooler, but I think that when loaded
    with a lot of processes the III will outperform the IIIi.

    The IIIi can also take more RAM (8GB per CPU) and cheaper RAM.


    /michael
     
  7. DoN. Nichols

    DoN. Nichols Guest

    On 2009-09-16, Dave <> wrote:
    > Nemo ad Nusquam wrote:
    >> SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    >> one. Specs aside, which one would gentle readers choose?
    >>
    >> Thank you.

    >
    > I noticed on eBay the other day someone selling a Silver Blade 2500
    > rather than a red one. They implied this was a newer model. I know there
    > are two colours, but are unsure of the difference between them.


    Hmm ... visit the FEH at the Australian site:

    <http://www.sunshack.org/data/sh/2.1.8/infoserver.central/data/syshbk/index.html>

    and navigate around until you find the man pages for the two systems.
    Routing is going in a loop for me right now, so I can't post links to the
    specific pages, but hopefully by the time you try this, the routing will
    be fixed.

    IIRC, the primary difference is that the later version comes
    with faster CPUs, but I may be mis-remembering, since I don't have any
    of these beyond the SB-2K.

    > I have a Blade 2000 which I really liked, but it was damaged recently by
    > lightning. I am in fact going to replace it by an x86 box.
    >
    > As far as I can ascertain, there is no electrical difference between a
    > blade 1000 and a 2000. The motherboards are the same.


    There are several system boards, some of which overlap between
    the two systems, and the primary difference in these is the version of
    the SCHIZO modules -- and I don't really know what difference these make
    anyway. :)

    > As others have said, FC-AL disks are somewhat rarer than SCA. Whilst my
    > machine has a pair of 147 GB disks, I can't pick them up and put them in
    > any other machine, or in any external enclosure I own.


    There are various external boxes for multiple FC-AL drives. EMC
    makes them, I'm using a set of Eurologic ones which I particularly like,
    Sun makes some -- and I *think* that there is even a FC-AL version of the
    Multipack.

    Or -- you could get a spare drive cage for a SB-[12]000, dig
    through the foam rubber on the back of the backplane, and jumper one of
    the ID wires high which is currently low, to make the drive addresses no
    longer conflict with the internal ones (1&2 for the SB-[12]K,
    0&1 for the Sun Fire 280R). Also -- the Sun Fire 280R cage will only
    accept 1" high drives, not 1.6" ones. You can get 146 GB 1" FC-AL
    drives, and I have two in one of my SF-280R machines.

    I'm not sure how many traces you'll need to cut to make this
    jumpering possible.

    Good Luck,
    DoN.

     
  8. * Nemo ad Nusquam:
    > SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    > one. Specs aside, which one would gentle readers choose?


    Well, the SB2000 has a hampered memory interface (only the primary CPU
    has directly connected main memory; the second processor has to go over
    the first one to access RAM, which in memory-intensive applications
    affects performance). I'm not sure if this got better with the SB2500.
    As others have said, the SB2000 uses FC-AL disks, which limits the
    choice of available disks quite a lot.

    To be honest, if you don't really *need* a SPARC system I would go for
    an x86 (x64) PC instead, which will give you much more performance than
    any of those old machines at the same cost. Heck, even a Sun W2100z (the
    first-generation dual-Opteron workstation with AMD AGP chipset) will
    very likely run circles around any SB2x00 system.

    Benjamin
     
  9. Nemo ad Nusquam wrote:
    > SB 2000 and SB 2500 are available on ebay at PC prices. I am going to buy
    > one. Specs aside, which one would gentle readers choose?


    Thank you for your comments. I think I will go for the SB 2500.

     
  10. Bart

    Bart Guest

    > Well, the SB2000 has a hampered memory interface (only the primary CPU
    > has directly connected main memory, the second processor has to go over
    > the first one to access RAM which in memory-intensive applications
    > affects performance),


    Can you point me to any documentation that explains this any better?
    I've never heard of this about the 1k/2k blades. Almost any
    multiprocessor system architecture I am familiar with has some overhead
    in the memory controller.

    Thanks
     
  11. * Bart:
    > Can you point me to any documentation that explains this any better?
    > I've never heard of this about the 1k/2k blades. Most any multiprocess
    > system architecture I am familiar with has some overhead with the
    > memory controller.


    Standard multiprocessor computers have a common memory controller and
    common memory. This is called UMA (Uniform Memory Access) architecture.
    It looks like this:

    [I/O]
    |
    [CPU0]---[CPU1]
    |
    [MEMORY CONTROLLER]-[MEMORY]

    Good examples of UMA machines are most servers and workstations with
    older Intel Xeon processors (pre-Xeon 5500 series). These Xeon
    processors have a common (or, with the Xeon 5000 series, separate) FSB
    to communicate with an external memory controller (Northbridge). This
    basically means that the performance of each CPU accessing memory is
    always the same, no matter which CPU does the access and no matter
    which area of the system memory is accessed. However, because the
    memory controller is outside the CPU, accessing memory takes time
    (higher latency), and especially with older Xeons with a common single
    FSB, the FSB limits the actual bandwidth available to the system
    memory.


    However, the UltraSPARC III-based Sun machines like the SB1000/2000/2500
    as well as AMD Opteron-based computers are NUMA[1] (Non-Uniform Memory
    Access) architectures. NUMA means that every processor has its own
    memory controller (which in the case of UltraSPARC III and AMD Opteron
    is built into the CPU) and its own local memory. NUMA looks like this:

    [I/O]
    |
    [CPU0]-[MEMORY CONTROLLER]-[MEMORY]
    |
    [CPU1]-[MEMORY CONTROLLER]-[MEMORY]
    |
    [I/O]

    The advantage is that every CPU has very fast access to its local RAM
    (low latency), and it doesn't have to share the bandwidth with the other
    processor. However, as soon as a CPU has to access memory connected to
    another CPU, things get much slower, as it has to go over the other
    processor to access its memory. Now the memory performance depends on
    which part of the system memory has to be accessed: if it is local it
    is fast, if it is connected to another processor it is slow. Therefore
    NUMA needs a NUMA-aware OS (like Solaris, Windows or Linux) which
    distributes processes and assigns memory in a way that processes use
    system RAM connected to the processor they run on.

    As to the Sun Blade 1000/2000/2500, it is a crippled NUMA system which
    basically looks like this:

    [I/O]
    |
    [CPU0]-[MEMORY CONTROLLER]-[MEMORY]
    |
    [CPU1]-[MEMORY CONTROLLER]

    While both processors do have memory controllers, only the first CPU can
    actually have physical RAM. This means all processes running on the
    second processor have to go over the primary one to access RAM as the
    second CPU doesn't have local memory. This has quite a huge impact on
    memory-intensive multiprocessor applications.

    Ben





    [1] http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access
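    The asymmetry described above can be put into numbers with a toy
    latency model. This is a sketch only: the cycle counts are assumed
    placeholders, not measured UltraSPARC III figures.

```python
# Toy model of average memory-access latency for the three layouts above.
# LOCAL/REMOTE cycle counts are illustrative assumptions, not real
# UltraSPARC III measurements.

LOCAL = 100   # cycles: CPU accessing RAM behind its own controller
REMOTE = 250  # cycles: CPU reaching RAM behind the *other* CPU

def avg_latency(has_local_ram, fraction_local):
    """Average access latency seen by one CPU.

    fraction_local: share of accesses that hit this CPU's own RAM
    (irrelevant if the CPU has no local RAM at all).
    """
    if not has_local_ram:
        return float(REMOTE)  # every single access crosses to the other CPU
    return fraction_local * LOCAL + (1.0 - fraction_local) * REMOTE

# Proper NUMA, with a NUMA-aware OS keeping ~90% of accesses local:
numa = avg_latency(True, 0.9)    # ~115 cycles

# SB1000/2000-style layout: CPU0 owns all the RAM, CPU1 owns none:
cpu0 = avg_latency(True, 1.0)    # 100 cycles, best case
cpu1 = avg_latency(False, 0.0)   # 250 cycles, every access is remote
```

    Under this model the second CPU pays the remote penalty on every
    access, which is the "huge impact" on memory-intensive multiprocessor
    jobs described above.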
     
  12. ChrisQ

    ChrisQ Guest

    Benjamin Gawert wrote:

    > While both processors do have memory controllers, only the first CPU can
    > actually have physical RAM. This means all processes running on the
    > second processor have to go over the primary one to access RAM as the
    > second CPU doesn't have local memory. This has quite a huge impact on
    > memory-intensive multiprocessor applications.
    >
    > Ben
    >


    That's interesting, but I doubt that would have much of an impact on the
    sort of work done by many users, other than for seriously
    compute-intensive applications. A B1000 is used here both as the lab
    server and also for software development. I recently removed one of the
    CPUs to save power and notice no difference at all in terms of
    interactive response or compilation times. Juice is expensive now, so
    why pay to use more if there's little benefit? The Blade 1 and 2k series
    are quite power-hungry (~200 watts) compared to earlier SPARC
    workstation-class machines.

    On Windows, using Task Manager on a dual-Xeon machine, the only apps
    that really make use of MT or the two CPUs are those like Photoshop or
    Lightroom, but even then the processors are really just loafing along.
    Might just as well remove one of the CPUs there as well...

    Regards,

    Chris
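    For what it's worth, on Solaris a second CPU can also be taken out of
    service in software rather than physically pulled. A sketch using the
    standard Solaris commands, assuming a two-way box where the second
    processor has ID 1:

```shell
# Show processors and their current online/offline status
psrinfo -v

# Take processor 1 offline (needs root); the scheduler stops using it
psradm -f 1

# Bring it back online later if the workload grows again
psradm -n 1
```

    An offlined CPU may still draw some power, so the savings are likely
    smaller than physically removing the module, but it makes the
    experiment easily reversible.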
     
  13. * erik magnuson:

    > Are you sure that applies to the SB-2500? The max RAM for the SB-2500
    > was twice that of the SB-1500. Bear in mind that the SB1k/2k used the
    > US-III processor, while the SB-2500 used the US-IIIi processor.


    I checked again and you are right; this doesn't apply to the SB2500,
    which has both CPUs connected to local memory (with one CPU, only half
    of the memory slots can be used).

    Benjamin
     
  14. * ChrisQ:

    > That's interesting, but I doubt that would have much of an impact on the
    > sort of work done by many users, other than for seriously compute
    > intensive applications.


    It affects all multiprocessor (multithreaded) applications that make use
    of a lot of memory, and it affects single-threaded applications as
    well if the scheduler shifts them to the second processor.

    But considering that these machines are now close to a decade old and
    really slow, I doubt this is a problem any more, as the overall
    performance of these critters is low. Also, the memory interface
    of the USIII is not great.

    > A B1000 is used here both as the lab server and
    > also for software development. Recently removed one of the cpu's to save
    > power and notice no difference at all in terms of interactive response
    > or compilation times. Juice is expensive now, so why pay to use more if
    > there's little benefit ?. The blade 1 and 2k series are quite power
    > hungry (~200 watts) compared to earlier Sparc workstation class machines.


    If power is of concern then a SB1000/2000 is probably a bad choice.

    > On windows, using task manager on a dual xeon machine, the only apps
    > that really make use of mt or the two cpu's are those like photoshop or
    > lightroom, but even then the processors are really just loafing along.
    > Might just as well remove one of the cpu's there as well...


    Don't know about Lightroom but IIRC Photoshop only uses multiple
    processors for certain filters.

    It doesn't seem any of your applications make use of multiple
    processors, so you might as well just use single processor machines
    which use less power.

    Benjamin
     
  15. ChrisQ

    ChrisQ Guest

    Benjamin Gawert wrote:

    >
    > But considering that these machines are now becoming close to a decade
    > old and being really slow now I doubt this is a problem any more as the
    > overall performance of these critters is low. Also, the memory interface
    > of the USIII is not great.


    It's still the fastest SPARC-based machine here. At last, a SPARC
    machine that equals my old 1997-vintage Alpha box in terms of
    interactive response and compile times. It's just a real shame Sun
    stopped doing SPARC-based workstations. Diversity improves the breed,
    and the CPU gene pool is shrinking fast.

    > It doesn't seem any of your applications make use of multiple
    > processors, so you might as well just use single processor machines
    > which use less power.
    >


    It can always be refitted if there is the need, and for present work I
    would probably get more benefit from upgrading to a single 1200MHz CPU
    rather than a dual 900MHz config.

    What other options are there, without mentioning x86, of course? :)...

    Regards,

    Chris
     
  16. Bart

    Bart Guest

    On Sep 23, 8:01 am, ChrisQ <> wrote:
    > Benjamin Gawert wrote:
    >
    > > But considering that these machines are now becoming close to a decade
    > > old and being really slow now I doubt this is a problem any more as the
    > > overall performance of these critters is low. Also, the memory interface
    > > of the USIII is not great.


    Here is a document I found that provides the details on the Sun Blade
    1000/2000 memory and CPU architecture (starts on page 30).

    http://ru.sun.com/products/workstations/blade1000/pics/sb1000wp.pdf

    Apparently the US IIIcu has a built-in memory controller, and
    multiprocessor systems use a bus arbitration system. I can see how
    this might have a small impact on performance, but the obvious benefit
    is for scaling, so I can also see why Sun chose this path. It provides
    direct access for one CPU (because the USIII has a memory controller
    built in) while subsequent processors use the shared interconnect bus
    (Sun called it Fireplane, I guess).
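    Whether a shared interconnect helps or hurts scaling comes down to
    contention. A toy bandwidth model makes the question concrete; the
    4.8 GB/s figure is an assumed placeholder, not a Fireplane spec:

```python
# Toy model: effective per-CPU bandwidth on a shared memory interconnect.
# SHARED_BW is an assumed placeholder value, not a real Fireplane number.

SHARED_BW = 4.8  # GB/s total across the interconnect (assumption)

def per_cpu_bandwidth(n_cpus, demand_per_cpu):
    """Bandwidth each CPU actually gets when all CPUs demand at once."""
    if n_cpus * demand_per_cpu <= SHARED_BW:
        return demand_per_cpu       # no contention: demand is satisfied
    return SHARED_BW / n_cpus       # saturated: fair share of the link

one_cpu = per_cpu_bandwidth(1, 3.0)   # one CPU: the link is not the limit
two_cpus = per_cpu_bandwidth(2, 3.0)  # two CPUs: the link saturates
```

    Under this model one CPU never notices the shared link, but two CPUs
    hitting memory at once each get less than they ask for, which is
    exactly the scaling question argued in this subthread.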
     
    Benjamin Gawert wrote:
    > As to the Sun Blade 1000/2000/2500, it is a crippled NUMA system which
    > basically looks like this:
    >
    > [I/O]
    > |
    > [CPU0]-[MEMORY CONTROLLER]-[MEMORY]
    > |
    > [CPU1]-[MEMORY CONTROLLER]
    >
    > While both processors do have memory controllers, only the first CPU can
    > actually have physical RAM. This means all processes running on the
    > second processor have to go over the primary one to access RAM as the
    > second CPU doesn't have local memory. This has quite a huge impact on
    > memory-intensive multiprocessor applications.



    If you take a look at the block diagram of the Blade 2500 in
    817-5117-11.pdf, page C-3, you'll see that both CPUs are connected to
    two memory banks of their own and communicate with two I/O bridges over
    a common J-Bus.

    - Thomas
     
  18. * Bart:

    > Here is a document I found that provides the details on the Sunblade
    > 1000/2000 memory and CPU architecture (Starts on Page 30).
    >
    > http://ru.sun.com/products/workstations/blade1000/pics/sb1000wp.pdf
    >
    > Apparently the US IIIcu has a built in memory controller and
    > multiprocessor systems use a buss arbitration system. I can see how
    > this might have a small impact in performance but the obvious benefit
    > is for scaling


    Nope, it has no benefit "for scaling". In fact, it is a huge bottleneck
    in multiprocessing and scales like crap.

    > so I can also see why Sun chose this path.


    I can, too. It was very likely a cost cutting measure and nothing else.

    > It provides
    > direct access for 1 cpu (because the USIII has a memory controller
    > built-in) while subsequent processors use the shared interconnect bus
    > (Sun called it Fireplane I guess).


    It's not a bus, it's a crossbar switch, which means it adds noticeable
    latency to the I/O.

    For a workstation in this price range (when it was new; when I got my
    first SB1000 the machine cost around 30kEUR) I would have expected a
    design that doesn't cut corners in important areas.

    Of course today this is probably irrelevant, as even the fastest SB1000
    (and SB2000) is slow as hell by today's standards.

    Benjamin
     
  19. * Thomas Maier-Komor:

    > If you take a look at the block diagram of the Blade 2500 in
    > 817-5117-11.pdf, page C-3, you'll see that both cpus are connected to
    > two memory banks of their own and communicate with two IO bridges over a
    > common J-Bus.


    And if you read my posting from September 23rd (almost a month ago!) you
    will find out that I already corrected myself re. Blade 2500 memory.

    Benjamin
     
  20. Benjamin Gawert wrote:
    > * Thomas Maier-Komor:
    >
    >> If you take a look at the block diagram of the Blade 2500 in
    >> 817-5117-11.pdf, page C-3, you'll see that both cpus are connected to
    >> two memory banks of their own and communicate with two IO bridges over a
    >> common J-Bus.

    >
    > And if you read my posting from September 23rd (almost a month ago!) you
    > will find out that I already corrected myself re. Blade 2500 memory.
    >
    > Benjamin


    Sorry Benjamin - I overlooked your reply to Erik in the other branch...
     
