This forum section is a read-only archive which contains old newsgroup posts.

U.S. files antitrust suit against Intel - unfair tactics used against rivals

Discussion in 'Intel' started by Intel Guy, Dec 23, 2009.

  1. Intel Guy

    Intel Guy Guest

    http://www.washingtonpost.com/wp-dyn/content/article/2009/12/16/AR2009121601121.html

    U.S. files antitrust suit against Intel, alleges unfair tactics used
    against rivals

    By Cecilia Kang and Steven Mufson
    Washington Post Staff Writer
    Thursday, December 17, 2009

    The Obama administration sued chip giant Intel on Wednesday over a
    decade-long run of actions allegedly designed to stifle competition,
    opening a new front in the battle that big technology firms have been
    waging for years against antitrust challenges in Asia and Europe.

    The Federal Trade Commission lawsuit resembles past cases brought
    against Intel by Japanese, Korean and European Union regulators over
    rival Advanced Micro Devices, and it adds new allegations that Intel
    rigged its microprocessors in a way that made it difficult for a
    competitor, Nvidia, to provide consumers with superior graphics
    abilities for computer games and video.

    Intel denied the allegations, saying that it "competed fairly and
    lawfully" and that "its actions have benefited consumers."

    The lawsuit marks a major step for President Obama toward fulfilling
    his 2008 presidential campaign promise to "reinvigorate antitrust
    enforcement." At the time, he criticized the Bush administration for
    "what may be the weakest record of antitrust enforcement of any
    administration in the last half century."

    Other key antitrust tests lie ahead. The power of Google, Comcast's
    proposed takeover of NBC, and the market share of the makers of mobile
    phone handsets are all under examination by the Justice Department,
    the Federal Communications Commission or the FTC.

    The technology industry, which has also been wooed by Obama, has been
    striving to resolve a string of antitrust actions in the United States
    and abroad. On Wednesday, the European Union ended its decade-long
    antitrust investigation of Microsoft after Microsoft agreed to market
    rival browsers as well as its own Internet Explorer. On Nov. 12, Intel
    paid $1.25 billion to rival AMD to drop antitrust and patent lawsuits
    as well as complaints filed with agencies, including the FTC. Many
    technology analysts were also cheered by the administration's decision
    to let software giant Oracle acquire Sun Microsystems, despite
    Oracle's dominant position in business software.

    But the FTC on Wednesday alleged that Intel had used bullying tactics
    and payments to get computer makers such as Dell and Hewlett-Packard
    to use Intel chips instead of those made by AMD. The FTC complaint,
    the culmination of a one-year investigation, said "that Intel fell
    behind in the race for technological superiority in a number of
    markets and resorted to a wide range of anticompetitive conduct,
    including deception and coercion, to stall competitors until it could
    catch up."

    The agency added that "Intel's anticompetitive tactics were designed
    to put the brakes on superior competitive products that threatened its
    monopoly."

    The FTC isn't seeking monetary damages from Intel. "We are frankly
    more focused on conduct," Richard Feinstein, director of the FTC's
    bureau of competition, said in a news conference. Such remedies could
    include forcing Intel to share intellectual property with competitors.

    The case could become a key test of antitrust law. Forged during the
    Progressive Era a century ago, antitrust legislation was designed to
    tame steel and oil monopolies, and was later applied to shoe and beer
    makers.

    But under the influence of University of Chicago economists and
    others, courts began to worry about the harm antitrust enforcement
    actions could do to innovation and ultimately to the consumers they
    were supposed to protect. Over the past 15 years, federal courts have
    made it harder to show abuse of monopoly power and to win suits for
    treble damages. Judges taking a more skeptical view of antitrust
    actions have ranged from federal appeals court judges Richard A.
    Posner and Frank H. Easterbrook to Supreme Court Justice Stephen G.
    Breyer.

    Applying antitrust to the tech sector has been particularly thorny
    because of falling prices, constant innovation and technology that
    often changes faster than it takes to litigate an antitrust case. Yet
    rarely have so few companies stayed so dominant in their fields as
    Microsoft, Intel, Oracle and now Google.

    "Concern over class actions, treble damages awards, and costly jury
    trials have caused many courts in recent decades to limit the reach of
    antitrust," FTC Chairman Jon Leibowitz and commissioner J. Thomas
    Rosch said in a statement. "The result has been that some conduct
    harmful to consumers may be given a 'free pass.' "

    The Intel lawsuit relies on the rarely used Section 5 of the law that
    established the FTC in 1914. Antitrust lawyers saw this as an effort
    to speed up the litigation and sidestep obstacles federal courts have
    erected in recent years.

    Unlike lawsuits filed by the Justice Department's antitrust division,
    which are tried by juries, the FTC suit goes to an administrative
    judge and cannot be used by private plaintiffs seeking treble damages.
    The FTC said it expects the Intel trial to start in nine months and
    conclude within 20 months.

    Three years ago, Leibowitz, then an FTC board member, argued for
    resurrecting the use of Section 5. At the time, Joe Sims, an antitrust
    lawyer at Jones Day, wrote that reviving the use of Section 5 "would
    benefit no one other than antitrust lawyers." He said it "would signal
    the potential for a retreat to the antitrust of the past or, perhaps,
    the rather less bounded 'competition' policy that is applied by many
    non-U.S. regulators less constrained by statutes and case law (and
    sometimes common sense)."

    But on Wednesday, Leibowitz reasserted that Section 5's "broad reach
    is beyond dispute."

    The case is also important to Intel, whose stock fell 42 cents, or 2.1
    percent, to close at $19.38 a share on Wednesday.

    The FTC case "shows that the strategy being followed by Intel applies
    not only to AMD but to Nvidia," said Albert A. Foer, president of the
    American Antitrust Institute. "A small competitor in a highly
    concentrated market comes up with a new technology that looks better
    than Intel's, so Intel comes up with strategies using primarily
    pricing, including loyalty discounts, but also deception to hurt the
    competitor as much as possible and at least delay it from having
    success with its new technology until Intel can catch up."

    He said the FTC could force Intel to license its processors to anyone
    willing to pay for it. "That would be a bombshell remedy that could
    open up the global market in computers," Foer said.

    Intel said "the highly competitive microprocessor industry . . . has
    kept innovation robust and prices declining at a faster rate than any
    other industry." Intel General Counsel Doug Melamed said in a
    statement that "this case could have, and should have, been settled."
    He said the FTC insisted on "unprecedented remedies" that "would make
    it impossible for Intel to conduct business."
     

  2. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel - unfair tactics used against rivals

    On Dec 23, 5:35 pm, Intel Guy <> wrote:
    > http://www.washingtonpost.com/wp-dyn/content/article/2009/12/16/AR200...
    >
    > U.S. files antitrust suit against Intel, alleges unfair tactics used
    > against rivals
    >


    There is a solution. Intel should just buy everyone out, starting
    with Obama, who clearly has a price (just ask big pharma).

    Then Intel can work its way downward, all the way to the whining
    Internet trolls who have used up so much bandwidth.

    When everyone has been bought out, ARM can take over as Intel's rival,
    AMD can go out of business as the just recompense for whiners, and we
    can live the rest of our lives in peace, while Intel and ARM slug it
    out. By that point, gamers will be so knowledgeable and so advanced
    that they can build their own custom GPU's. Apple? IBM? Hell, I
    don't know. Maybe China can buy them out.

    Robert.
     

  3. Yousuf Khan

    Yousuf Khan Guest

    Re: U.S. files antitrust suit against Intel - unfair tactics used against rivals

    Robert Myers wrote:
    > There is a solution. Intel should just buy everyone out, starting
    > with Obama, who clearly has a price (just ask big pharma).
    >
    > Then Intel can work its way downward, all the way to the whining
    > Internet trolls who have used up so much bandwidth.
    >
    > When everyone has been bought out, ARM can take over as Intel's rival,
    > AMD can go out of business as the just recompense for whiners, and we
    > can live the rest of our lives in peace, while Intel and ARM slug it
    > out. By that point, gamers will be so knowledgeable and so advanced
    > that they can build their own custom GPU's. Apple? IBM? Hell, I
    > don't know. Maybe China can buy them out.
    >
    > Robert.


    Feeling a little under siege there, big guy? :)

    Yousuf Khan
     
  4. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel - unfair tactics used against rivals

    On Dec 26, 3:23 pm, Yousuf Khan <> wrote:
    > Robert Myers wrote:
    > > There is a solution.  Intel should just buy everyone out, starting
    > > with Obama, who clearly has a price (just ask big pharma).

    >
    > > Then Intel can work its way downward, all the way to the whining
    > > Internet trolls who have used up so much bandwidth.

    >
    > > When everyone has been bought out, ARM can take over as Intel's rival,
    > > AMD can go out of business as the just recompense for whiners, and we
    > > can live the rest of our lives in peace, while Intel and ARM slug it
    > > out.  By that point, gamers will be so knowledgeable and so advanced
    > > that they can build their own custom GPU's.  Apple?  IBM?  Hell, I
    > > don't know.  Maybe China can buy them out.

    >
    > > Robert.

    >
    > Feeling a little under siege there, big guy? :)


    Not especially, no.

    I don't think anyone should feel safe right now. AMD won't cooperate
    in the US investigations by agreement with Intel. I'm not a lawyer,
    but, as I understand things, AMD can cooperate only to the extent that
    it is forced to. No more running to the Principal's office for AMD.

    Recent movements in AMD and NVidia stock (nearly parallel) are because
    Intel dropped the ball on Larrabee. At this point, every player I
    know of has some combination of low power/graphics/installed base/
    yield/gross margin problem. In other words, no one is safe. Well,
    IBM is safe, but IBM employees are not, nor is IBM's presence in any
    hardware business other than mainframes and businesses that are wired
    into that culture so that they'll be interested in other p-series
    offerings. Intel has tons of money and probably a long run yet before
    it realizes that it's in the same boat IBM was in (no lock on any
    technology works forever).

    I'm sort of hoping that Arm will succeed in stirring the waters. I
    hope that NVidia makes Fermi work, but, along with lots of others, I'm
    skeptical. As usual, I am *not* rooting for AMD, which has the same
    threat from ARM that Intel does, only AMD has lots less money than
    does Intel.

    In short, I'm pretty happy because

    1. The future will almost inevitably involve vector/stream processors,
    probably in a form that was beyond my wildest dreams.

    2. The x86 monoculture is severely threatened.

    3. The architecture focus is going to move away from the CPU, which
    has been beaten to death and toward heterogeneous processing and
    connectivity issues. What's left of CPU microarchitecture is going to
    go toward low power/massively parallel operation, all good news for
    scientific computing.

    4. I foresee the end of supposedly clever but completely useless
    programming models, most of which are a step back from Fortran,
    because bolted-on concurrency won't work any more.

    I gather that RHEL 6 won't support Itanium, so one assumes that's it
    for Itanium, even on HP. I have no idea what HP plans in its place.
    Maybe just a featured-up x86. Anything I ever wanted from Itanium can
    be better supplied now from other sources, anyway. It looks like IBM
    has salvaged what's left of its mainframe business, so I assume
    they're going to be much less interested in pumping up AMD.

    Whatever it takes, so long as I don't have to listen to AMD whining
    any more.

    Robert.
     
  5. Yousuf Khan

    Yousuf Khan Guest

    Re: U.S. files antitrust suit against Intel - unfair tactics used against rivals

    Robert Myers wrote:
    > On Dec 26, 3:23 pm, Yousuf Khan <> wrote:
    >> Feeling a little under siege there, big guy? :)

    >
    > Not especially, no.


    Sounds like the same kind of "no, I'm fine" that a boxer who's being
    pounded into the mat would give.

    > I don't think anyone should feel safe right now. AMD won't cooperate
    > in the US investigations by agreement with Intel. I'm not a lawyer,
    > but, as I understand things, AMD can cooperate only to the extent that
    > it is forced to. No more running to the Principal's office for AMD.


    AMD now has its own arbitration process with Intel, so it doesn't have
    to go to the governments anymore, theoretically. Still, AMD would be a
    fool to trust Intel to live up to its agreements, so it's likely to put
    up no resistance whatsoever if asked for documentation against Intel.
    It just won't gloat about it now.

    > Recent movements in AMD and NVidia stock (nearly parallel) are because
    > Intel dropped the ball on Larrabee. At this point, every player I
    > know of has some combination of low power/graphics/installed base/
    > yield/gross margin problem. In other words, no one is safe. Well,
    > IBM is safe, but IBM employees are not, nor is IBM's presence in any
    > hardware business other than mainframes and businesses that are wired
    > into that culture so that they'll be interested in other p-series
    > offerings. Intel has tons of money and probably a long run yet before
    > it realizes that it's in the same boat IBM was in (no lock on any
    > technology works forever).


    I'm not sure how you segued in from graphics chips to IBM. Not even sure
    why IBM is relevant to the entire discussion let alone graphics chips.

    > I'm sort of hoping that Arm will succeed in stirring the waters. I
    > hope that NVidia makes Fermi work, but, along with lots of others, I'm
    > skeptical. As usual, I am *not* rooting for AMD, which has the same
    > threat from ARM that Intel does, only AMD has lots less money than
    > does Intel.


    I agree that AMD is in the same boat with Intel with regards to ARM.
    What I don't agree is that ARM is any sort of threat to either of them.
    ARM will never succeed in going into any sort of general-purpose
    computer, higher than a smartphone. ARM processors get seriously winded
    just trying to run background tasks in smartphones -- that's why
    smartphones don't do background tasks -- multitasking is outside the
    scope of ARM.

    > In short, I'm pretty happy because
    >
    > 1. The future will almost inevitably involve vector/stream processors,
    > probably in a form that was beyond my wildest dreams.


    Not that it's replacing x86. Vector processors are just merging into x86
    processors, in the form of GPUs.

    > 2. The x86 monoculture is severely threatened.


    Not while people have their Windows addictions. Hell, even Linux is
    better supported under x86 than anything else.

    > 3. The architecture focus is going to move away from the CPU, which
    > has been beaten to death and toward heterogeneous processing and
    > connectivity issues. What's left of CPU microarchitecture is going to
    > go toward low power/massively parallel operation, all good news for
    > scientific computing.


    Sure, but that's all still based around x86. The heterogeneous
    processors will all be subprocessors of an x86 main processor. If
    anything, the x86 will gain popularity as the manager of other processors.

    > 4. I foresee the end of supposedly clever but completely useless
    > programming models, most of which are a step back from Fortran,
    > because bolted-on concurrency won't work any more.


    Okay, whatever you say.

    > I gather that RHEL 6 won't support Itanium, so one assumes that's it
    > for Itanium, even on HP. I have no idea what HP plans in its place.
    > Maybe just a featured-up x86. Anything I ever wanted from Itanium can
    > be better supplied now from other sources, anyway.


    HP-UX.

    > It looks like IBM
    > has salvaged what's left of its mainframe business, so I assume
    > they're going to be much less interested in pumping up AMD.


    Again, another segue into IBM. What's IBM got to do with anything? Since
    when was IBM pumping up AMD? AMD was an IBM customer in fab development,
    that's about all.

    The closest any company got to pumping AMD up was HP, your previous
    subject matter. HP was its most reliable customer.

    > Whatever it takes, so long as I don't have to listen to AMD whining
    > any more.



    All of the whining came from Intel, "Oh, <insert government entity here>
    doesn't know how to apply its own anti-trust laws, here's how it
    should've done it".

    Yousuf Khan
     
  6. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel - unfair tactics used against rivals

    On Dec 26, 11:12 pm, Yousuf Khan <> wrote:
    > Robert Myers wrote:
    > > On Dec 26, 3:23 pm, Yousuf Khan <> wrote:
    > >> Feeling a little under siege there, big guy? :)

    >
    > > Not especially, no.

    >
    > Sounds like the same kind of "no, I'm fine" that a boxer who's being
    > pounded into the mat would give.
    >

    I'm not a fighter. I'm a lover.

    > > I don't think anyone should feel safe right now.  AMD won't cooperate
    > > in the US investigations by agreement with Intel.  I'm not a lawyer,
    > > but, as I understand things, AMD can cooperate only to the extent that
    > > it is forced to.  No more running to the Principal's office for AMD.

    >
    > AMD now has its own arbitration process with Intel, so it doesn't have
    > to go to the governments anymore, theoretically. Still, AMD would be a
    > fool to trust Intel to live up to its agreements, so it's likely to put
    > up no resistance whatsoever if asked for documentation against Intel.
    > It just won't gloat about it now.
    >
    > > Recent movements in AMD and NVidia stock (nearly parallel) are because
    > > Intel dropped the ball on Larrabee.  At this point, every player I
    > > know of has some combination of low power/graphics/installed base/
    > > yield/gross margin problem.  In other words, no one is safe.  Well,
    > > IBM is safe, but IBM employees are not, nor is IBM's presence in any
    > > hardware business other than mainframes and businesses that are wired
    > > into that culture so that they'll be interested in other p-series
    > > offerings.  Intel has tons of money and probably a long run yet before
    > > it realizes that it's in the same boat IBM was in (no lock on any
    > > technology works forever).

    >
    > I'm not sure how you segued in from graphics chips to IBM. Not even sure
    > why IBM is relevant to the entire discussion let alone graphics chips.
    >

    Cell? And the Intel of today has to worry that it is like the IBM of
    1990.

    > > I'm sort of hoping that Arm will succeed in stirring the waters.  I
    > > hope that NVidia makes Fermi work, but, along with lots of others, I'm
    > > skeptical.  As usual, I am *not* rooting for AMD, which has the same
    > > threat from ARM that Intel does, only AMD has lots less money than
    > > does Intel.

    >
    > I agree that AMD is in the same boat with Intel with regards to ARM.
    > What I don't agree is that ARM is any sort of threat to either of them.
    > ARM will never succeed in going into any sort of general-purpose
    > computer, higher than a smartphone. ARM processors get seriously winded
    > just trying to run background tasks in smartphones -- that's why
    > smartphones don't do background tasks -- multitasking is outside the
    > scope of ARM.
    >

    The computer architects I talk to don't seem to feel that way. If the
    software model moves toward running on lots of fairly wimpy
    processors, then ARM is a player. Multitasking is easy. You add more
    processors. The world has passed you by. The idea that a processor
    has to be really agile so that it can be shared among many processes
    because it is so expensive is like the idea that you have to be really
    clever about managing memory because it is so expensive.

    > > In short, I'm pretty happy because

    >
    > > 1. The future will almost inevitably involve vector/stream processors,
    > > probably in a form that was beyond my wildest dreams.

    >
    > Not that it's replacing x86. Vector processors are just merging into x86
    > processors, in the form of GPUs.
    >

    We don't know that at this point. The people who seem competent say
    that the future might not need much in the way of a general purpose
    CPU. IBM built Blue Gene around embedded processors. ARM was a
    player in the RFP from LLNL that IBM "won" (cough, cough). In a world
    where performance/watt is the key metric, ARM looks like a player,
    and, if it weren't for IBM's wired-in status at LLNL, we might already
    be seeing ARM in the Top 500. One fairly wimpy something or other to
    deal with process scheduling and communication, and a stream processor
    to do the heavy lifting. You didn't believe it the first time we
    talked about it and you don't believe it now, but I don't really care.

    > > 2. The x86 monoculture is severely threatened.

    >
    > Not while people have their Windows addictions. Hell, even Linux is
    > better supported under x86 than anything else.
    >

    Yes, but ARM is ubiquitous. Cell never took off because it never got
    the developer support. ARM won't have that problem.

    > > 3. The architecture focus is going to move away from the CPU, which
    > > has been beaten to death and toward heterogeneous processing and
    > > connectivity issues.  What's left of CPU microarchitecture is going to
    > > go toward low power/massively parallel operation, all good news for
    > > scientific computing.

    >
    > Sure, but that's all still based around x86. The heterogeneous
    > processors will all be subprocessors of an x86 main processor. If
    > anything, the x86 will gain popularity as the manager of other processors.
    >

    Or subprocessors for ARM.

    > > 4. I foresee the end of supposedly clever but completely useless
    > > programming models, most of which are a step back from Fortran,
    > > because bolted-on concurrency won't work any more.

    >
    > Okay, whatever you say.
    >

    It matters because, if the world is to move to a different model of
    CPU architecture, the languages that are so widespread now will have
    to undergo heavy duty changes. It matters little if the world remains
    x86. The ISA doesn't matter all that much any more because the action
    is going to be outside the individual CPU. Maybe x86 will morph into
    something that handles the world of hundreds of cores deftly, or maybe
    it won't.

    > > I gather that RHEL 6 won't support Itanium, so one assumes that's it
    > > for Itanium, even on HP.  I have no idea what HP plans in its place.
    > > Maybe just a featured-up x86.  Anything I ever wanted from Itanium can
    > > be better supplied now from other sources, anyway.  

    >
    > HP-UX.
    >

    VMS has to move onto something, and IBM seems really determined to
    move virtualized Linux onto big iron.

    > > It looks like IBM
    > > has salvaged what's left of its mainframe business, so I assume
    > > they're going to be much less interested in pumping up AMD.

    >
    > Again, another segue into IBM. What's IBM got to do with anything? Since
    > when was IBM pumping up AMD? AMD was an IBM customer in fab development,
    > that's about all.
    >

    Oh, yes, IBM was heartbroken over the fate of Itanium and it had
    nothing whatever to do with the timing of the (temporary) success of
    AMD--temporary success that killed Itanium.

    > The closest any company got to pumping AMD up was HP, your previous
    > subject matter. HP was its most reliable customer.
    >

    There was a window where no one, including HP, could ignore Opteron.
    HP didn't create the window.

    > > Whatever it takes, so long as I don't have to listen to AMD whining
    > > any more.

    >
    > All of the whining came from Intel, "Oh, <insert government entity here>
    > doesn't know how to apply its own anti-trust laws, here's how it
    > should've done it".
    >

    That isn't the whining I've been listening to on these forums, where
    you have been a big contributor.

    Robert.
     
  7. Yousuf Khan

    Yousuf Khan Guest

    Re: U.S. files antitrust suit against Intel - unfair tactics used against rivals

    Robert Myers wrote:
    > On Dec 26, 11:12 pm, Yousuf Khan <> wrote:
    >> Robert Myers wrote:
    >>> Recent movements in AMD and NVidia stock (nearly parallel) are because
    >>> Intel dropped the ball on Larrabee. At this point, every player I
    >>> know of has some combination of low power/graphics/installed base/
    >>> yield/gross margin problem. In other words, no one is safe. Well,
    >>> IBM is safe, but IBM employees are not, nor is IBM's presence in any
    >>> hardware business other than mainframes and businesses that are wired
    >>> into that culture so that they'll be interested in other p-series
    >>> offerings. Intel has tons of money and probably a long run yet before
    >>> it realizes that it's in the same boat IBM was in (no lock on any
    >>> technology works forever).

    >> I'm not sure how you segued in from graphics chips to IBM. Not even sure
    >> why IBM is relevant to the entire discussion let alone graphics chips.
    >>

    > Cell? And the Intel of today has to worry that it is like the IBM of
    > 1990.


    Well then, Cell is already dead, killed off by its GPU competition. So
    how does that demonstrate your little theory that "IBM is safe"?

    >> I agree that AMD is in the same boat with Intel with regards to ARM.
    >> What I don't agree is that ARM is any sort of threat to either of them.
    >> ARM will never succeed in going into any sort of general-purpose
    >> computer, higher than a smartphone. ARM processors get seriously winded
    >> just trying to run background tasks in smartphones -- that's why
    >> smartphones don't do background tasks -- multitasking is outside the
    >> scope of ARM.
    >>

    > The computer architects I talk to don't seem to feel that way. If the
    > software model moves toward running on lots of fairly wimpy
    > processors, then ARM is a player. Multitasking is easy. You add more
    > processors. The world has passed you by. The idea that a processor
    > has to be really agile so that it can be shared among many processes
    > because it is so expensive is like the idea that you have to be really
    > clever about managing memory because it is so expensive.


    The world has hardly passed me by; you're just living in your own world,
    that's all. As usual.

    If, as you say, all you need to do to get ARM properly multitasking is
    to add lots of little ARM processors, then where does that leave ARM's
    much-vaunted power consumption? You've basically created a chip that
    consumes power like an x86 but isn't as well supported by software in
    this end of the market. If the idea is to do multitasking on a
    smartphone, then that smartphone will have only a few minutes of battery
    life, with a regular cellphone-sized battery. If you want the smartphone
    to last all day, then it won't be able to do the multitasking.

    >>> In short, I'm pretty happy because
    >>> 1. The future will almost inevitably involve vector/stream processors,
    >>> probably in a form that was beyond my wildest dreams.

    >> Not that it's replacing x86. Vector processors are just merging into x86
    >> processors, in the form of GPUs.
    >>

    > We don't know that at this point. The people who seem competent say
    > that the future might not need much in the way of a general purpose
    > CPU. IBM built Blue Gene around embedded processors. ARM was a
    > player in the RFP from LLNL that IBM "won" (cough, cough). In a world
    > where performance/watt is the key metric, ARM looks like a player,
    > and, if it weren't for IBM's wired-in status at LLNL, we might already
    > be seeing ARM in the Top 500. One fairly wimpy something or other to
    > deal with process scheduling and communication, and a stream processor
    > to do the heavy lifting. You didn't believe it the first time we
    > talked about it and you don't believe it now, but I don't really care.


    So far all hybrid supercomputers have usually involved pairing an x86
    processor to something else. For example, the IBM Roadrunner
    supercomputer at Los Alamos was a pairing of Opteron with Cell processors.
    There's a Chinese supercomputer that paired Xeons with AMD GPUs. They're
    designing some kind of computer with Nvidia Fermi GPUs at Oak Ridge
    which also has x86 processors. Etc.

    The current trend towards heterogeneous hybrid solutions in
    supercomputing is only temporary. It's being done to make up for
    performance deficiencies in homogeneous systems. And if the trend
    continues, then I agree ARM has a chance to become the management
    processor of such hybrid systems. However, I don't think the trend
    will continue beyond the next few years. That's when x86-GPU processors
    will come online. So unless ARM can integrate a GPU into itself, its
    window of opportunity in supercomputing is very limited.

    >>> 2. The x86 monoculture is severely threatened.

    >> Not while people have their Windows addictions. Hell, even Linux is
    >> better supported under x86 than anything else.
    >>

    > Yes, but ARM is ubiquitous. Cell never took off because it never got
    > the developer support. ARM won't have that problem.


    ARM is ubiquitous in everything but general-purpose computing. Most
    software that exists for ARM is proprietary; often you can't even share
    software between two different products based on the same ARM
    processor.

    > It matters because, if the world is to move to a different model of
    > CPU architecture, the languages that are so widespread now will have
    > to undergo heavy duty changes. It matters little if the world remains
    > x86. The ISA doesn't matter all that much any more because the action
    > is going to be outside the individual CPU. Maybe x86 will morph into
    > something that handles the world of hundreds of cores deftly, or maybe
    > it won't.


    They're approaching it rapidly. Processors with six cores are already
    common (in servers). The next steps are 8 and 12. The design time needed
    to reach higher core counts is shrinking.

    >>> It looks like IBM
    >>> has salvaged what's left of its mainframe business, so I assume
    >>> they're going to be much less interested in pumping up AMD.

    >> Again, another segue into IBM. What's IBM got to do with anything? Since
    >> when was IBM pumping up AMD? AMD was an IBM customer in fab development,
    >> that's about all.
    >>

    > Oh, yes, IBM was heartbroken over the fate of Itanium and it had
    > nothing whatever to do with the timing of the (temporary) success of
    > AMD--temporary success that killed Itanium.


    IBM was one of the original Itanium backers, and came to AMD later than
    everybody else except Dell. They kept their AMD hardware choices
    purposely limited. Hardly a big supporter of AMD, much more an Intel
    supporter. Of course, we all know now that that was because of Intel's
    treats and threats.

    The earliest supporters of AMD servers were HP, Sun, and Cray.

    >> The closest any company got to pumping AMD up was HP, your previous
    >> subject matter. HP was its most reliable customer.
    >>

    > There was a window where no one, including HP, could ignore Opteron.
    > HP didn't create the window.


    No one could ignore the Opteron, yet they did, very purposefully. And
    yes, it wasn't HP that created the window, it was Sun.

    >>> Whatever it takes, so long as I don't have to listen to AMD whining
    >>> any more.

    >> All of the whining came from Intel, "Oh, <insert government entity here>
    >> doesn't know how to apply its own anti-trust laws, here's how it
    >> should've done it".
    >>

    > That isn't the whining I've been listening to on these forums, where
    > you have been a big contributor.


    It's not whining if it was proven true. So far, all of the trials have
    come to the conclusion that AMD was right. Whining is what Intel is
    doing now that it's lost all of the cases and all of its excuses have
    been disproved, but it still keeps using those excuses.

    Yousuf Khan
     
  8. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    On Dec 27, 11:24 pm, Yousuf Khan <> wrote:
    > Robert Myers wrote:


    > > Cell?  And the Intel of today has to worry that it is like the IBM of
    > > 1990.

    >
    > Well then, Cell is already dead, killed off by its GPU competition. So
    > how does that demonstrate your little theory that "IBM is safe"?
    >

    We don't know if Cell is dead or not and we don't really know that it
    was killed off by GPU competition. More likely, it was killed off
    (if, indeed, it has been killed off) by a lack of developer support.
    I think the cost to manufacture it (because of yield problems) was
    also a factor, but there I'm guessing against denials from people who
    could conceivably know.

    Except for mainframes and business tied to mainframes, IBM doesn't
    want to be in the hardware business any more. Why should they? It's
    easier to make money in software and services and they've learned new
    lockin techniques.

    > The world has hardly passed me by, you're just living in your own world
    > that's all. As usual.
    >
    > If as you say all you need to do to get ARM properly multitasking, is to
    > add lots of little ARM processors, then where does that leave ARM's
    > much-vaunted power consumption? You've basically created a chip that
    > consumes power like an x86 but isn't as well supported by software in
    > this end of the market. If the idea is to do multitasking on a
    > smartphone, then that smartphone will have only a few minutes of battery
    > life, with a regular cellphone-sized battery. If you want the smartphone
    > to last all day, then it won't be able to do the multitasking.
    >

    Teensy little consumer devices have certainly helped to drive ARM and
    partly driven the obsession with performance per watt. The obsession
    with performance per watt and the potential for devices that do well
    on that measure goes well beyond smartphones. All kinds of devices
    that need to do multitasking will also need very low power
    consumption.

    The reasons for putting many small cores onto a larger die rather than
    trying to improve on yesterday's single-core performance are:

    1. Significant improvements in single core performance and power
    management seem unlikely.

    2. If you can't add performance by adding transistors without an
    unacceptable power penalty, the only way you can improve performance
    is to add cores.

    3. A physically big core consumes lots of power just in moving data
    around. One of the drivers behind many cores on a die is to reduce
    the power consumed in data movement--you put data on a small area (a
    core), do stuff with it, all in that small area, then move it back to
    cache--a much shorter net path for data, with a resulting decrease in
    power consumption.

    4. If you can do tasks in parallel, it's much more energy efficient to
    have two cores running at half speed than one core running at full
    speed. If you run many simpler cores at lower speed, the cost of
    abandoning power-hungry features like OoO can also be reduced to the
    point where you can do without them.
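    Point 4 can be made concrete with the standard first-order CMOS
    dynamic-power model (P proportional to C*V^2*f, with supply voltage
    assumed to scale roughly linearly with frequency, so P goes as f^3).
    The sketch below is a textbook illustration of that model, not anything
    from the original post; the function name and numbers are invented for
    the example.

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f.
# Assuming supply voltage scales roughly linearly with clock frequency
# (a common simplification), dynamic power scales as f^3 up to a constant.

def relative_power(freq_fraction, cores=1):
    """Power of `cores` cores, each at `freq_fraction` of full clock,
    relative to one core at full clock (perfect scaling assumed)."""
    return cores * freq_fraction ** 3

one_core_full = relative_power(1.0)            # baseline: 1.0
two_cores_half = relative_power(0.5, cores=2)  # 2 * (1/2)^3 = 0.25

# Same nominal throughput (2 cores * 0.5 speed = 1.0 core-equivalent),
# but only a quarter of the dynamic power.
print(two_cores_half / one_core_full)  # 0.25
```

    On this simplified model, halving the clock and doubling the cores
    quarters the dynamic power at the same nominal throughput, which is the
    whole argument for many small cores, provided the workload parallelizes.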

    As to who is living in what world, you can be certain that I didn't
    pluck any of this out of the aether.

    > So far, hybrid supercomputers have usually involved pairing an x86
    > processor with something else. For example, the IBM Roadrunner
    > supercomputer at Los Alamos was a pairing of Opteron with Cell processors.
    > There's a Chinese supercomputer that paired Xeons with AMD GPUs. They're
    > designing some kind of computer with Nvidia Fermi GPUs at Oak Ridge
    > which also has x86 processors. Etc.
    >

    Thanks for educating me, Yousuf. The ink was barely dry on the Blue
    Gene press releases when I said publicly they should have built it
    with Cell. I can't remember anyone else saying so, but I learned long
    ago that the national labs think of everything first.

    > The current trend towards heterogeneous hybrid solutions in
    > supercomputing is only a temporary trend. It's being done to make up for
    > performance deficiencies in homogeneous systems. And if the trend
    > continued, then I agree ARM has a chance to become the management
    > processor of such hybrid systems. However, I don't think the trend
    > will continue beyond the next few years. That's when x86-GPU processors
    > will come online. So unless ARM can integrate a GPU into itself, its
    > window of opportunity in supercomputing is very limited.
    >

    ARM doesn't have to integrate anything. Only *someone* does. There
    will be a chip for applications other than graphics that integrates
    stream processing capabilities. That chip might use x86 or it might
    not. If someone with enough money comes along, it could use Power
    (some version of Cell, essentially). Or it could use ARM.

    > ARM is ubiquitous in everything but general-purpose computing. Most
    > software that exists for ARM is proprietary, to the point where you can't
    > even share software between two different products based on the same ARM
    > processor.
    >

    You got linux, you got gcc, you got software. You already got
    developers up the wazoo.

    > > It matters because, if the world is to move to a different model of
    > > CPU architecture, the languages that are so widespread now will have
    > > to undergo heavy duty changes.  It matters little if the world remains
    > > x86.  The ISA doesn't matter all that much any more because the action
    > > is going to be outside the individual CPU.  Maybe x86 will morph into
    > > something that handles the world of hundreds of cores deftly, or maybe
    > > it won't.

    >
    > They're approaching it rapidly. Processors with six cores are already
    > common (in servers). The next steps are 8 and 12. The design time needed
    > to reach higher core counts is shrinking.
    >

    The problem, as I understand it, is system processes that scale as N^2
    and the desire (need) to do finer scale concurrency.
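    The N^2 concern above can be illustrated with an Amdahl-style toy model
    that adds a system-overhead term growing quadratically in the core count
    (think all-pairs coordination among cores). This is a sketch with
    invented coefficients, not a measurement of any real system.

```python
def speedup(n_cores, serial_frac=0.02, overhead_coeff=1e-5):
    """Amdahl-style speedup with an extra O(N^2) system-overhead term,
    normalized so single-core runtime is 1.0. Coefficients are arbitrary."""
    t = serial_frac + (1.0 - serial_frac) / n_cores   # Amdahl terms
    t += overhead_coeff * n_cores ** 2                # N^2 system processes
    return 1.0 / t

# Speedup rises, peaks, then collapses once the N^2 term dominates.
for n in (1, 8, 64, 256, 1024):
    print(n, round(speedup(n), 2))
```

    With any quadratically growing overhead, adding cores eventually makes
    the machine slower, which is why finer-grained concurrency (shrinking
    the overhead constant) matters as much as the core count itself.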

    > IBM was one of the original Itanium backers, and came to AMD later than
    > everybody else except Dell. They kept their AMD hardware choices
    > purposely limited. Hardly a big supporter of AMD, much more an Intel
    > supporter. Of course we all know now that was because of Intel treats
    > and threats.
    >

    Somewhere in there, a consent decree ceased to be in effect. Wouldn't
    you know it? IBM was *much* less interested in Itanium. Who is a
    whiner and who has a legitimate beef depends on where you sit, and I
    think I know where you sit.

    > > There was a window where no one, including HP, could ignore Opteron.
    > > HP didn't create the window.

    >
    > No one could ignore the Opteron, yet they did very purposefully. And
    > yes, it wasn't HP that created the window, it was Sun.
    >

    Nope. 'Twas Big Blue. Without Big Blue, there would have been no
    Opteron and probably no AMD.

    > > That isn't the whining I've been listening to on these forums, where
    > > you have been a big contributor.

    >
    > It's not whining if it was proven true. So far all of the trials have
    > come to the conclusion that AMD was right. Whining is what Intel is
    > doing now that it's lost all of the cases, and all of its excuses were
    > disproved, but it still keeps using those excuses.
    >

    Listen. Here's the deal. I like smart technical guys. I like smart
    marketers and managers. I don't like technical guys who only think
    they're smart. I don't like marketers and managers who shoot
    themselves in the foot, say, by needlessly pissing off technical
    guys. And I don't like lawyers of any kind, nor do I like people who
    need lawyers even to play. You have your standards, and I have mine.

    Robert.
     
  9. Yousuf Khan

    Yousuf Khan Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    Robert Myers wrote:
    > On Dec 27, 11:24 pm, Yousuf Khan <> wrote:
    >> Robert Myers wrote:

    >
    >>> Cell? And the Intel of today has to worry that it is like the IBM of
    >>> 1990.

    >> Well then, Cell is already dead, killed off by its GPU competition. So
    >> how does that demonstrate your little theory that "IBM is safe"?
    >>

    > We don't know if Cell is dead or not and we don't really know that it
    > was killed off by GPU competition. More likely, it was killed off
    > (if, indeed, it has been killed off) by a lack of developer support.
    > I think the cost to manufacture it (because of yield problems) was
    > also a factor, but there I'm guessing against denials from people who
    > could conceivably know.


    They're paying some lip-service to "the concepts of Cell may be applied
    in some future products". Which means to me it's dead.

    >> If as you say all you need to do to get ARM properly multitasking, is to
    >> add lots of little ARM processors, then where does that leave ARM's
    >> much-vaunted power consumption? You've basically created a chip that
    >> consumes power like an x86 but isn't as well supported by software in
    >> this end of the market. If the idea is to do multitasking on a
    >> smartphone, then that smartphone will have only a few minutes of battery
    >> life, with a regular cellphone-sized battery. If you want the smartphone
    >> to last all day, then it won't be able to do the multitasking.
    >>

    > Teensy little consumer devices have certainly helped to drive ARM and
    > partly driven the obsession with performance per watt. The obsession
    > with performance per watt and the potential for devices that do well
    > on that measure goes well beyond smartphones. All kinds of devices
    > that need to do multitasking will also need very low power
    > consumption.
    >
    > The reasons for putting many small cores onto a larger die rather than
    > trying to improve on yesterday's single-core performance are:


    <snip>

    Yeah, thanks for telling me all of the advantages of multi-cores for
    multitasking. That's not the point. How will a multi-core ARM keep its
    power consumption advantage with so many cores? In order to match the
    multitasking capabilities of x86, ARM's power consumption will go up;
    it may even match the consumption of x86.

    And let's not forget that x86 was able to multitask long before it had
    multiple cores. It had enough performance to time-slice, and therefore it
    had high power consumption. Even a single-core ARM that could match the
    multitasking capability of x86 would consume power like an x86.

    It doesn't matter whether you add cores or increase single-core
    performance: in order to multitask, power consumption will go up.

    >> The world has hardly passed me by, you're just living in your own world
    >> that's all. As usual.
    >>

    > As to who is living in what world, you can be certain that I didn't
    > pluck any of this out of the aether.


    No, you plucked it out of the Internet, which is just the Ether, I guess.

    >> So far, hybrid supercomputers have usually involved pairing an x86
    >> processor with something else. For example, the IBM Roadrunner
    >> supercomputer at Los Alamos was a pairing of Opteron with Cell processors.
    >> There's a Chinese supercomputer that paired Xeons with AMD GPUs. They're
    >> designing some kind of computer with Nvidia Fermi GPUs at Oak Ridge
    >> which also has x86 processors. Etc.
    >>

    > Thanks for educating me, Yousuf. The ink was barely dry on the Blue
    > Gene press releases when I said publicly they should have built it
    > with Cell. I can't remember anyone else saying so, but I learned long
    > ago that the national labs think of everything first.
    >
    >> The current trend towards heterogeneous hybrid solutions in
    >> supercomputing is only a temporary trend. It's being done to make up for
    >> performance deficiencies in homogeneous systems. And if the trend
    >> continued, then I agree ARM has a chance to become the management
    >> processor of such hybrid systems. However, I don't think the trend
    >> will continue beyond the next few years. That's when x86-GPU processors
    >> will come online. So unless ARM can integrate a GPU into itself, its
    >> window of opportunity in supercomputing is very limited.
    >>

    > ARM doesn't have to integrate anything. Only *someone* does. There
    > will be a chip for applications other than graphics that integrates
    > stream processing capabilities. That chip might use x86 or it might
    > not. If someone with enough money comes along, it could use Power
    > (some version of Cell, essentially). Or it could use ARM.


    No, actually ARM does have to. Putting heterogeneous processors in a
    system is a major pain in the ass. You need to design special physical
    connections for each type of processor, not to mention that you have to
    design special software for each type. If you have a chip that
    integrates two different types of processors in a single package, it
    reduces your costs considerably, saving both hardware and software costs.
    It's the main reason why more supercomputers have not adopted the
    heterogeneous route.

    >> ARM is ubiquitous in everything but general-purpose computing. Most
    >> software that exists for ARM is proprietary, to the point where you can't
    >> even share software between two different products based on the same ARM
    >> processor.
    >>

    > You got linux, you got gcc, you got software. You already got
    > developers up the wazoo.


    Linux is hardly the most popular software platform for ARM. The most
    popular platform for ARM is no platform at all: most apps that exist on
    ARM are bare-metal apps, running on no OS whatsoever. It's because of
    these one-off bare-metal apps that ARM is said to have so much software
    written for it.

    In the world of Linux on ARM, software support still lags that of the
    world of Linux on x86.

    >> They're approaching it rapidly. Processors with six cores are already
    >> common (in servers). The next steps are 8 and 12. The design time needed
    >> to reach higher core counts is shrinking.
    >>

    > The problem, as I understand it, is system processes that scale as N^2
    > and the desire (need) to do finer scale concurrency.


    Actually, they're going overboard with multi-cores now. They need to
    find a use for all of these cores, and there aren't enough tasks to
    go around most of the time. Even when an app is multithreaded, it still
    can't make full use of all of the resources.

    >> IBM was one of the original Itanium backers, and came to AMD later than
    >> everybody else except Dell. They kept their AMD hardware choices
    >> purposely limited. Hardly a big supporter of AMD, much more an Intel
    >> supporter. Of course we all know now that was because of Intel treats
    >> and threats.
    >>

    > Somewhere in there, a consent decree ceased to be in effect. Wouldn't
    > you know it? IBM was *much* less interested in Itanium. Who is a
    > whiner and who has a legitimate beef depends on where you sit, and I
    > think I know where you sit.


    There's been plenty written about how many OEMs were bullied by Intel
    into cancelling or delaying products based on non-Intel parts.
    Certain OEMs caved to it more readily than others. IBM certainly wasn't
    the biggest caver, but it was average on the scale. Top of the scale was
    Dell.

    >>> There was a window where no one, including HP, could ignore Opteron.
    >>> HP didn't create the window.

    >> No one could ignore the Opteron, yet they did very purposefully. And
    >> yes, it wasn't HP that created the window, it was Sun.
    >>

    > Nope. 'Twas Big Blue. Without Big Blue, there would have been no
    > Opteron and probably no AMD.


    You give IBM much too much credit for being AMD's guardian angel. The
    semiconductor folks certainly helped out -- for a price. The server
    folks were very meek.

    >>> That isn't the whining I've been listening to on these forums, where
    >>> you have been a big contributor.

    >> It's not whining if it was proven true. So far all of the trials have
    >> come to the conclusion that AMD was right. Whining is what Intel is
    >> doing now that it's lost all of the cases, and all of its excuses were
    >> disproved, but it still keeps using those excuses.
    >>

    > Listen. Here's the deal. I like smart technical guys. I like smart
    > marketers and managers. I don't like technical guys who only think
    > they're smart. I don't like marketers and managers who shoot
    > themselves in the foot, say, by needlessly pissing off technical
    > guys. And I don't like lawyers of any kind, nor do I like people who
    > need lawyers even to play. You have your standards, and I have mine.



    So you're saying that because AMD needed lawyers to assert its rights,
    it was whining? Similarly, all of those software companies killed by
    Microsoft over the years, were they also whiners? They certainly had to
    get the anti-trust authorities after Microsoft to get anywhere. Does any
    company that has an elephant stepping on its neck deserve what's coming
    to it?

    Yousuf Khan
     
  10. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    On Dec 29, 4:07 am, Yousuf Khan <> wrote:
    > Robert Myers wrote:


    > > We don't know if Cell is dead or not and we don't really know that it
    > > was killed off by GPU competition.  More likely, it was killed off
    > > (if, indeed, it has been killed off) by a lack of developer support.
    > > I think the cost to manufacture it (because of yield problems) was
    > > also a factor, but there I'm guessing against denials from people who
    > > could conceivably know.

    >
    > They're paying some lip-service to "the concepts of Cell may be applied
    > in some future products". Which means to me it's dead.
    >

    IBM will be using Cell technology. It has killed off one particular
    discrete product. That's all.

    >
    > Yeah, thanks for telling me all of the advantages of multi-cores for
    > multitasking. That's not the point. How will a multi-core ARM keep its
    > power consumption advantage with so many cores? In order to match the
    > multitasking capabilities of x86, power consumption of ARM will go up,
    > it may match the consumption of x86 too.
    >
    > And let's not forget that x86 was able to multitask long before it had
    > multicores. It had enough performance to time-slice, therefore it had
    > high power consumption. Even a single-core ARM that can match that
    > multitasking capability of x86 would consume like an x86.
    >
    > It doesn't matter if you add cores or if you increase single-core
    > performance, in order to multitask power consumption will go up.
    >

    One last time, and then I'm going to stop trying.

    The metric that matters is performance/watt, and many smaller, simpler
    cores win big on that metric for reasons I explained--if you can make
    the software take advantage of the parallelism.

    > > As to who is living in what world, you can be certain that I didn't
    > > pluck any of this out of the aether.

    >
    > No, you plucked it out of the Internet, which is just the Ether, I guess.
    >

    If I do just pluck things out, at least I get it right when I do.
    I'll offer the hint, which I know you won't take, that not everyone
    gets their ideas from someone else.

    >
    > No, actually ARM does have to. Putting heterogeneous processors on a
    > system is a major pain in the ass. You need to design special physical
    > connections for each type of processor, not to mention that you have to
    > design special software for each type. If you have a processor that
    > integrates two different types of processors inside a single package, it
    > reduces your costs considerably. Saves hardware and software costs. It's
    > the main reason why more supercomputers have not adopted the
    > heterogeneous route.
    >

    I guess you don't know about nVidia Tegra.

    >
    > Linux is hardly the most popular software platform for ARM. The most
    > popular platform for ARM is: no platform at all. Most apps that exist on
    > ARM are bare-metal apps, running on no OS whatsoever. It's because
    > of these one-off bare-metal apps that ARM is said to have so much
    > software written for it.
    >
    > In the world of Linux on ARM, software support still lags that of the
    > world of Linux on x86.
    >

    Big deal. If ARM gets a significant power/performance advantage,
    watch the x86 diehards move on. Intel, at least, knows that, even if
    you don't. ARM has a small but possibly significant and probably
    fundamental advantage over x86 in the power/performance department.

    >
    > > The problem, as I understand it, is system processes that scale as N^2
    > > and the desire (need) to do finer scale concurrency.

    >
    > Actually, they're going overboard with multi-cores now. They need to
    > find a use for all of these cores, and there aren't enough tasks to
    > go around most of the time. Even when an app is multithreaded, it still
    > can't make full use of all of the resources.
    >

    Everyone who matters understands that. The world of software will
    have to change.

    > There's been plenty written about how many OEMs were bullied by Intel
    > into cancelling or delaying products based on non-Intel products.
    > Certain OEMs caved to it more readily than others. IBM certainly wasn't
    > the biggest caver, but it was average on the scale. Top of the scale was
    > Dell.


    Dell wasn't bullied. It was bought. It was a good deal for Intel, a
    good deal for Dell, and a good deal for consumers. I don't see
    anything immoral about it. The Intel emails that have been discovered
    only make obvious what everyone with half a brain knew to be true.
    Rather than building its own distribution channel, Intel obtained the
    exclusive services of one. If there is some law against that, the law
    is wrong.

    Intel's "bullying" of other distributors is a different matter. Can
    an HP or an IBM be bullied? Maybe they can be. Those are the kinds
    of facts you need to put in front of a jury. I'm sure that other,
    smaller players could easily have been bullied and some of it might
    have been immoral or illegal or both. How much it amounts to is
    another matter.

    > You give IBM much too much credit for being AMD's guardian angel. The
    > semiconductor folks certainly helped out -- for a price. The server
    > folks were very meek.
    >

    The help was absolutely essential. IBM didn't have to, and they could
    have exacted a much higher price.

    Of course the semiconductor people were unenthusiastic. Physical
    scientists and hardware types at IBM all have to be unhappy giving
    their jobs and their business to others. In fact, it's hard for me to
    know how IBM keeps domestic employees at all, as their jobs usually
    turn out to be facilitating the export of their own livelihood. As a
    company, IBM is doing great.

    >
    > So you're saying because AMD needed lawyers to assert its rights, it was
    > whining? Similarly, all of those software companies killed by Microsoft
    > over the years, were they also whiners? They certainly had to get the
    > anti-trust authorities after Microsoft to get anywhere. Any company that
    > has an elephant stepping on its neck deserves what's coming to it?


    Intel isn't remotely like Microsoft. Microsoft has been like the
    Asian Carp of software. As one venture capitalist put it, Microsoft
    takes other people's ideas and turns them into bad products.
    Microsoft and AMD are both "me-too" companies. AMD whines that they
    don't get to eat enough of Intel's lunch. Too bad for them, I say.
    Intel is ruthless in protecting its lunch? Tell me a business that
    *doesn't* work that way.

    Whether the ubiquity of microprocessors is a good thing or not is a
    separate question. IBM didn't make it happen. Wintel did, and, yes,
    Microsoft's ruthlessness played a big role in making that happen.
    Apple deserves some big points, too. AMD is just a parasite, although
    they could have been much, much more. Just my opinion, of course.

    Robert.
     
  11. Yousuf Khan

    Yousuf Khan Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    Robert Myers wrote:
    > IBM will be using Cell technology. It has killed off one particular
    > discrete product. That's all.


    We'll see. Or perhaps we won't see, I guess. At some point, IBM will
    have stayed away from Cell technology for long enough that you'll have
    to say IBM has abandoned it. However, by that point, who will care?

    > One last, time, and then I'm going to stop trying.
    >
    > The metric that matters is performance/watt, and many smaller, simpler
    > cores win big on that metric for reasons I explained--if you can make
    > the software take advantage of the parallelism.


    "If you can make the software take advantage of the parallelism" is
    exactly why your thesis is faulty. If there isn't enough parallelism
    there, then you have a lot of little cores sitting idle, doing nothing,
    drawing a minuscule amount of power, but drawing power nonetheless.
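    The objection here is essentially Amdahl's law: the serial fraction of a
    workload caps the speedup no matter how many small cores you add. A
    minimal sketch of that bound (the 90%-parallel figure is an arbitrary
    example, not a claim about any real workload):

```python
def amdahl_speedup(n_cores, parallel_frac):
    """Amdahl's law: speedup when only `parallel_frac` of the work
    scales perfectly across `n_cores`; the rest stays serial."""
    return 1.0 / ((1.0 - parallel_frac) + parallel_frac / n_cores)

# Even with 90% of the code parallel, 256 cores deliver under 10x,
# and the limit as cores -> infinity is 1 / (1 - 0.9) = 10x.
print(round(amdahl_speedup(256, 0.90), 2))    # 9.66
print(round(amdahl_speedup(10**6, 0.90), 2))
```

    Past a certain core count, the extra cores buy almost nothing; they
    mostly sit idle, exactly as described above.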

    >> No, actually ARM does have to. Putting heterogeneous processors on a
    >> system is a major pain in the ass. You need to design special physical
    >> connections for each type of processor, not to mention that you have to
    >> design special software for each type. If you have a processor that
    >> integrates two different types of processors inside a single package, it
    >> reduces your costs considerably. Saves hardware and software costs. It's
    >> the main reason why more supercomputers have not adopted the
    >> heterogeneous route.
    >>

    > I guess you don't know about nVidia Tegra.


    That would be a step in the right direction, except that Nvidia has
    paired it with a low-performance GPU, wholly inadequate for the task
    of supercomputing. Tegra is a good solution for a consumer device,
    however, which is what it was meant to be.

    The type of GPUs required for supercomputing are what are being planned
    by AMD for its x86/ATI Fusion strategy.

    >> Linux is hardly the most popular software platform for ARM. The most
    >> popular platform for ARM is: no platform at all. Most apps that exist on
    >> ARM are bare-metal apps, running on no OS whatsoever. It's because
    >> of these one-off bare-metal apps that ARM is said to have so much
    >> software written for it.
    >>
    >> In the world of Linux on ARM, software support still lags that of the
    >> world of Linux on x86.
    >>

    > Big deal. If ARM gets a significant power/performance advantage,
    > watch the x86 diehards move on. Intel, at least, knows that, even if
    > you don't. ARM has a small but possibly significant and probably
    > fundamental advantage over x86 in the power/performance department.


    As has been told to you over and over again, ARM does have a fundamental
    advantage over x86, in the consumer electronics devices. Nowhere else.

    >> There's been plenty written about how many OEMs were bullied by Intel
    >> into cancelling or delaying products based on non-Intel products.
    >> Certain OEMs caved to it more readily than others. IBM certainly wasn't
    >> the biggest caver, but it was average on the scale. Top of the scale was
    >> Dell.

    >
    > Dell wasn't bullied. It was bought. It was a good deal for Intel, a
    > good deal for Dell, and a good deal for consumers. I don't see
    > anything immoral about it. The Intel emails that have been discovered
    > only make obvious what everyone with half a brain knew to be true.


    Dell was also bullied: when it wanted to go with AMD products, it was
    threatened with the loss of its "discounts". Discounts should only be
    based on how much of your products somebody sells, not on how much of
    somebody else's products they don't sell. At that time, Intel's money
    was over half of Dell's profits.

    > Rather than building its own distribution channel, Intel obtained the
    > exclusive services of one. If there is some law against that, the law
    > is wrong.


    I'm sure when you're allowed to write your own laws, you'll have some
    doozies for laws. All your anti-trust laws will say, "applies double to
    Microsoft, doesn't apply to Intel." :)

    > Intel's "bullying" of other distributors is a different matter. Can
    > an HP or an IBM be bullied? Maybe they can be. Those are the kinds
    > of facts you need to put in front of a jury. I'm sure that other,
    > smaller players could easily have been bullied and some of it might
    > have been immoral or illegal or both. How much it amounts to is
    > another matter.


    Well, as a matter of fact they were put before a jury (several separate
    ones actually) and it was proven true. That's why Intel lost all of
    those anti-trust cases. So much so that Intel couldn't go through with
    the private case with AMD, it was just too obvious where things were going.

    >> You give IBM much too much credit for being AMD's guardian angel. The
    >> semiconductor folks certainly helped out -- for a price. The server
    >> folks were very meek.
    >>

    > The help was absolutely essential. IBM didn't have to, and they could
    > have exacted a much higher price.
    >
    > Of course the semiconductor people were unenthusiastic. Physical
    > scientists and hardware types at IBM all have to be unhappy giving
    > their jobs and their business to others. In fact, it's hard for me to
    > know how IBM keeps domestic employees at all, as their jobs usually
    > turn out to be facilitating the export of their own livelihood. As a
    > company, IBM is doing great.


    IBM's business is now built on outsourcing, hopefully where they are on
    the receiving end of outsourcing contracts. I worked for a division of
    IBM that did just that. So I don't see why IBM's semiconductor people
    would have any trouble being contracted to other companies.

    >> So you're saying because AMD needed lawyers to assert its rights, it was
    >> whining? Similarly, all of those software companies killed by Microsoft
    >> over the years, were also whiners? They certainly had to get the
    >> anti-trust authorities after Microsoft to get anywhere. Any company that
    >> has an elephant stepping on its neck deserves what's coming to it?

    >
    > Intel isn't remotely like Microsoft. Microsoft has been like the
    > Asian Carp of software. As one venture capitalist put it, Microsoft
    > takes other people's ideas and turns them into bad products.
    > Microsoft and AMD are both "me-too" companies. AMD whines that they
    > don't get to eat enough of Intel's lunch. Too bad for them, I say.
    > Intel is ruthless in protecting its lunch? Tell me a business that
    > *doesn't* work that way.


    Intel is definitely a "me-too" company. All of the innovation in the
    last decade has been from AMD, with Intel following along. 64-bit,
    virtualization, multi-core, etc.

    > Whether the ubiquity of microprocessors is a good thing or not is a
    > separate question. IBM didn't make it happen. Wintel did, and, yes,
    > Microsoft's ruthlessness played a big role in making that happen.


    What the hell are you talking about? The whole Wintel PC started out as
    the IBM PC, or have you forgotten? For the first 10 years of the PC
    business it was IBM creating the rules, until it was deposed by Intel
    and Microsoft. They are in the process of being deposed now too.


    > Apple deserves some big points, too. AMD is just a parasite, although
    > they could have been much, much more. Just my opinion, of course.



    Your opinion isn't worth much, just my opinion, of course. Apple
    deserves points? For what, how to keep prices high?

    Yousuf Khan
     
  12. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics usedagainst rivals

    On Dec 30, 3:36 pm, Yousuf Khan <> wrote:
    > Robert Myers wrote:
    > > IBM will be using Cell technology.   It has killed off one particular
    > > discrete product.  That's all.

    >
    > We'll see. Or perhaps we won't see, I guess. At some point IBM will
    > have gone so long without returning to Cell technology that you'll have
    > to call it abandoned. However, by that point who will care?
    >

    We may never see another discrete Cell product, but the technology has
    so many advantages that there is no way IBM is just going to dump it
    into the ocean. After all, IBM has to compete with the GPU crowd,
    too.

    > > One last time, and then I'm going to stop trying.

    >
    > > The metric that matters is performance/watt, and many smaller, simpler
    > > cores win big on that metric for reasons I explained--if you can make
    > > the software take advantage of the parallelism.

    >
    > "If software can take advantage of the parallelism" is exactly why your
    > thesis is faulty. If there isn't enough parallelism there, then you have
    > a lot of little cores sitting idle doing nothing, taking up a minuscule
    > amount of power, but taking up power nonetheless.
    >

    At this point, no one really knows. If there is no way to take
    meaningful advantage of massive parallelism for a large enough class
    of applications, then computer architecture appears to be at a dead
    end. For lots of reasons that I don't really want to go into here, I
    think the future of massive parallelism is bright. You are entitled
    to a different opinion.
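    To put rough numbers on the small-cores argument: dynamic power scales
    roughly as C*V^2*f, and along the voltage/frequency curve V tracks f, so
    per-core power grows close to f^3. A quick sketch, with invented
    illustrative numbers rather than measurements of any real chip:

```python
# Toy model of the many-small-cores argument.
# Dynamic power ~ C * V^2 * f; assume V scales with f on the DVFS curve,
# so per-core power grows roughly as f^3. Numbers are illustrative only.

def dynamic_power(freq_ghz, cap=1.0):
    """Approximate dynamic power in arbitrary units, assuming V tracks f."""
    voltage = freq_ghz  # crude proxy: voltage proportional to frequency
    return cap * voltage**2 * freq_ghz

fast = dynamic_power(3.0)      # one core at 3.0 GHz
slow = 4 * dynamic_power(1.5)  # four cores at 1.5 GHz, same aggregate GHz

print(f"one 3.0 GHz core:   {fast:.1f} power units")
print(f"four 1.5 GHz cores: {slow:.1f} power units")
```

    In this toy model the four slow cores deliver the same aggregate clock
    ticks for half the power -- but only if the workload actually spreads
    across them, which is exactly the "if" in the paragraph above.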

    > >> No, actually ARM does have to. Putting heterogeneous processors on a
    > >> system is a major pain in the ass. You need to design special physical
    > >> connections for each type of processor, not to mention that you have to
    > >> design special software for each type. If you have a processor that
    > >> integrates two different types of processors inside a single package, it
    > >> reduces your costs considerably. Saves hardware and software costs. It's
    > >> the main reason why not more supercomputers have adopted the
    > >> heterogeneous route.

    >
    > > I guess you don't know about nVidia Tegra.

    >
    > That would be a step in the right direction, except Nvidia has installed
    > a low-performance GPU with it, and it's wholly inadequate for the task
    > of supercomputing. Tegra is a good solution for a consumer device
    > however, which is what it was meant to be.
    >
    > The type of GPUs required for supercomputing are what are being planned
    > by AMD for its x86/ATI Fusion strategy.
    >

    The point was that nVidia did the integration, not ARM. Or, rather,
    whatever ARM had to do is done. The idea that they lack the resources
    to do it is proven incorrect by example.


    >
    > > Big deal.  If ARM gets a significant power/performance advantage,
    > > watch the x86 diehards move on.  Intel, at least, knows that, even if
    > > you don't.  ARM has a small but possibly significant and probably
    > > fundamental advantage over x86 in the power/performance department.

    >
    > As you have been told over and over again, ARM does have a fundamental
    > advantage over x86 in consumer electronics devices. Nowhere else.
    >

    You know, Yousuf, if you actually had something to back up your
    snottiness, I could handle it.

    I asked this particular question of people who are actually competent
    to answer it. Parity of x86 with ARM, say, in a Blue Gene type
    configuration, with respect to energy efficiency is, at this point,
    entirely theoretical. How far Intel could push it if they wanted to
    badly enough is a question that gets answers from no difference to a
    small (say, ten percent) fundamental difference. An architect whom I
    trust, and who certainly has an impressive resume, has pointed out
    that, of the candidates on the table, x86 (I infer, because of its
    CISC instruction set) has a fundamental disadvantage in unavoidable
    gate delays, which I interpret to mean a fundamental disadvantage in
    power consumption. More gates = more power consumption--maybe you'll
    give me that one.

    > >> There's been plenty written about how many OEMs were bullied by Intel
    > >> into cancelling or delaying products based on non-Intel products.
    > >> Certain OEMs caved to it more readily than others. IBM certainly wasn't
    > >> the biggest caver, but it was average on the scale. Top of the scale was
    > >> Dell.

    >
    > > Dell wasn't bullied.  It was bought.  It was a good deal for Intel, a
    > > good deal for Dell, and a good deal for consumers.  I don't see
    > > anything immoral about it.  The Intel emails that have been discovered
    > > only make obvious what everyone with half a brain knew to be true.

    >
    > Dell was also bullied: when it wanted to go with AMD products, it was
    > threatened with the loss of its "discounts". Discounts should only be based
    > on how much of your products somebody sells, not on how much of somebody
    > else's products you don't sell. At that time, Intel's money was over
    > half of Dell's profits.
    >

    Who knows how the actual facts will come out. Intel funneled a big
    chunk of its advertising budget through Dell. If I were Intel buying
    Dell advertisements and sales brochures as marketing vehicles for my
    own product, I wouldn't be happy to be sharing advertising space with
    my chief competitor. How the law looks upon that sort of arrangement
    is beyond my knowledge and competence. As it is, Dell was just *one*
    vendor, and nothing about the arrangement seemed even remotely sneaky
    to me. "We'll pay for your advertising if you'll use our chips."

    > > Intel's "bullying" of other distributors is a different matter.  Can
    > > an HP or an IBM be bullied?  Maybe they can be.  Those are the kinds
    > > of facts you need to put in front of a jury.  I'm sure that other,
    > > smaller players could easily have been bullied and some of it might
    > > have been immoral or illegal or both.  How much it amounts to is
    > > another matter.

    >
    > Well, as a matter of fact they were put before a jury (several separate
    > ones actually) and it was proven true. That's why Intel lost all of
    > those anti-trust cases. So much so that Intel couldn't go through with
    > the private case with AMD, it was just too obvious where things were going.
    >

    As far as I know, nothing has ever been put in front of an
    American-style jury. So far, all the judgments have been made by what
    American law would refer to as an administrative law judge.

    > >> You give IBM much too much credit for being AMD's guardian angel. The
    > >> semiconductor folks certainly helped out -- for a price. The server
    > >> folks were very meek.

    >
    > > The help was absolutely essential.  IBM didn't have to, and they could
    > > have exacted a much higher price.

    >
    > > Of course the semiconductor people were unenthusiastic.  Physical
    > > scientists and hardware types at IBM all have to be unhappy giving
    > > their jobs and their business to others.  In fact, it's hard for me to
    > > know how IBM keeps domestic employees at all, as their jobs usually
    > > turn out to be facilitating the export of their own livelihood.  As a
    > > company, IBM is doing great.

    >
    > IBM's business is now built on outsourcing, hopefully where they are on
    > the receiving end of outsourcing contracts. I worked for a division of
    > IBM that did just that. So I don't see why IBM's semiconductor people
    > would have any trouble being contracted to other companies.
    >

    What? And have IBM know all your secrets? No, thank you. AMD did it
    because it had no other option.

    > >> So you're saying because AMD needed lawyers to assert its rights, it was
    > >> whining? Similarly, all of those software companies killed by Microsoft
    > >> over the years, were also whiners? They certainly had to get the
    > >> anti-trust authorities after Microsoft to get anywhere. Any company that
    > >> has an elephant stepping on its neck deserves what's coming to it?

    >
    > > Intel isn't remotely like Microsoft.  Microsoft has been like the
    > > Asian Carp of software.  As one venture capitalist put it, Microsoft
    > > takes other people's ideas and turns them into bad products.
    > > Microsoft and AMD are both "me-too" companies.  AMD whines that they
    > > don't get to eat enough of Intel's lunch.  Too bad for them, I say.
    > > Intel is ruthless in protecting its lunch?  Tell me a business that
    > > *doesn't* work that way.

    >
    > Intel is definitely a "me-too" company. All of the innovation in the
    > last decade has been from AMD, with Intel following along. 64-bit,
    > virtualization, multi-core, etc.
    >

    Intel's business model has been: fund lots of work by smart people in
    academia. Listen to the academics, but make decisions based on
    business considerations.

    AMD's business model has been: get a free ride on work funded by Intel
    and others. Take everyone else's ideas and claim them as your own, as
    if no one else had thought of it. Little wonder that you admire AMD.

    > > Whether the ubiquity of microprocessors is a good thing or not is a
    > > separate question.  IBM didn't make it happen.  Wintel did, and, yes,
    > > Microsoft's ruthlessness played a big role in making that happen.

    >
    > What the hell are you talking about? The whole Wintel PC started out as
    > the IBM PC, or have you forgotten? For the first 10 years of the PC
    > business it was IBM creating the rules, until it was deposed by Intel
    > and Microsoft. They are in the process of being deposed now too.
    >

    You're missing a big chunk of the history. Geeks could use command
    line interfaces and cope with kludged multi-tasking, but everyone else
    wanted a Mac. Macs were expensive, not customizable, not impressively
    reliable, and available from only a single vendor. When Microsoft
    turned the PC into something that could sort of be used like a Mac is
    when the dam broke. Companies not well positioned to cope with the
    attack of the killer micros fell like dominoes, but only after Windows
    became available.

    Bill Gates understood that getting ordinary people to depend on the PC
    was the key, and much of what Microsoft did (like Encarta) was aimed
    at making that happen. Microsoft understood that making lots of money
    meant someone selling lots of PC's. That they were buggy and sluggish
    and unreliable was less important than making everyone think that they
    had to have one.


    > > Apple deserves some big points, too.  AMD is just a parasite, although
    > > they could have been much, much more.  Just my opinion, of course.

    >
    > Your opinion isn't worth much, just my opinion, of course. Apple
    > deserves points? For what, how to keep prices high?


    Another big chunk of history you're missing. The Apple II and
    VisiCalc were the first dam break. The IBM PC was a hurry-up response
    to the rapid uptake of Apples by businesses that wanted to use them
    for little more than spreadsheets. IBM was the me-too company.

    Apple was also the first to put out a computer with a desktop and
    icons, ideas that it got from Xerox PARC. Xerox has only itself to
    blame for dawdling with its good ideas.

    Robert.
     
  13. Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    Yousuf Khan wrote:
    > Robert Myers wrote:
    >> IBM will be using Cell technology. It has killed off one particular
    >> discrete product. That's all.

    >
    > We'll see. Or perhaps we won't see, I guess. At some point IBM will
    > have gone so long without returning to Cell technology that you'll have
    > to call it abandoned. However, by that point who will care?
    >
    >> One last time, and then I'm going to stop trying.
    >>
    >> The metric that matters is performance/watt, and many smaller, simpler
    >> cores win big on that metric for reasons I explained--if you can make
    >> the software take advantage of the parallelism.

    >
    > "If software can take advantage of the parallelism" is exactly why your
    > thesis is faulty. If there isn't enough parallelism there, then you have
    > a lot of little cores sitting idle doing nothing, taking up a minuscule
    > amount of power, but taking up power nonetheless.
    >

    There is a whole class of problems, real-world problems, which cannot be
    solved other than serially. I'm tired and not about to look it up tonight,
    but not all problems we wish to solve are going to benefit from parallelism.
    The new Intel CPUs which run one core at a higher clock when others are
    idle reflect this. There is both room and need for parallel processors with
    high performance-to-power ratios, and for high-performance single cores.
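    The serial-problem point is Amdahl's law: if a fraction s of the work
    cannot be parallelized, no number of cores can speed it up by more than
    1/s. A small sketch (the serial fractions are made up for illustration):

```python
# Amdahl's law: with serial fraction s, the speedup on n cores is
#   S(n) = 1 / (s + (1 - s) / n),
# and the limit as n grows without bound is 1/s.

def amdahl_speedup(s, n):
    """Speedup of a workload with serial fraction s on n cores."""
    return 1.0 / (s + (1.0 - s) / n)

for s in (0.05, 0.25, 0.50):
    print(f"serial fraction {s:.0%}: "
          f"{amdahl_speedup(s, 16):.1f}x on 16 cores, "
          f"at most {1.0 / s:.0f}x on any number of cores")
```

    Even a 25% serial fraction caps the speedup below 4x forever, which is
    the room-and-need-for-both argument in a single formula.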

    > Intel is definitely a "me-too" company. All of the innovation in the
    > last decade has been from AMD, with Intel following along. 64-bit,
    > virtualization, multi-core, etc.


    64-bit? Really? When do you think Itanium came out vs. the first AMD 64-bit?
    And when did the first AMD hyperthreaded CPU come out? Depending on how you view
    IBM POWER SMT, and if you count shipment in a very few servers the same way as
    end user desktop availability, you might claim that IBM shipped SMT before Intel
    shipped HT, but Intel was clearly the innovator in shipping to any kind of mass
    market. Like every other company in the field Intel is doing a mix of innovation
    and trying to do better engineering on existing tech from someone else.

    >
    > Yousuf Khan
     
  14. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    On Jan 3, 12:10 am, Bill Davidsen <> wrote:

    > There is a whole class of problems, real-world problems, which cannot be
    > solved other than serially. I'm tired and not about to look it up tonight,
    > but not all problems we wish to solve are going to benefit from parallelism.
    > The new Intel CPUs which run one core at a higher clock when others are
    > idle reflect this. There is both room and need for parallel processors with
    > high performance-to-power ratios, and for high-performance single cores.


    Yes, but we've sort of run out of places to go. There are things that
    *could* be done that could improve single-thread performance that
    probably won't be done in hardware because of the low-power mantra.
    Not only that, but cleverness in software can give you most of what
    the hardware could give you through cleverness, anyway.

    If there were all that much urgent demand for single-thread
    performance, you'd see a substantial market for heroic cooling. If
    there is such a demand, I don't see it.

    If we want significant progress on single-thread performance, we
    should be praying for physics miracles, because that's what it's going
    to take now.

    Robert.
     
  15. Yousuf Khan

    Yousuf Khan Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    Bill Davidsen wrote:
    > Yousuf Khan wrote:
    >> Intel is definitely a "me-too" company. All of the innovation in the
    >> last decade has been from AMD, with Intel following along. 64-bit,
    >> virtualization, multi-core, etc.

    >
    > 64-bit? Really? When do you think Itanium came out vs. the first AMD
    > 64-bit? And when did the first AMD hyperthreaded CPU come out? Depending
    > on how you view IBM POWER SMT, and if you count shipment in a very few
    > servers the same way as end user desktop availability, you might claim
    > that IBM shipped SMT before Intel shipped HT, but Intel was clearly the
    > innovator in shipping to any kind of mass market. Like every other
    > company in the field Intel is doing a mix of innovation and trying to do
    > better engineering on existing tech from someone else.



    We are of course talking about 64-bit x86, since that is the topic we're
    limiting ourselves to.

    Yousuf Khan
     
  16. Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    Yousuf Khan wrote:
    > Bill Davidsen wrote:


    >
    > We are of course talking about 64-bit x86, since that is the topic we're
    > limiting ourselves to.


    The topic of the group isn't really "Windows Game Hardware," nor is the topic of
    the thread. You may restrict your discussion as you wish. Intel had both 32 bit
    and 64 bit early on, and AMD did nothing but "me too" for a long time before
    they developed competitive new technology of their own (I would put that
    around the Athlon release).
     
  17. Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    Robert Myers wrote:
    > On Jan 3, 12:10 am, Bill Davidsen <> wrote:
    >
    >> There is a whole class of problems, real-world problems, which cannot be
    >> solved other than serially. I'm tired and not about to look it up tonight,
    >> but not all problems we wish to solve are going to benefit from parallelism.
    >> The new Intel CPUs which run one core at a higher clock when others are
    >> idle reflect this. There is both room and need for parallel processors with
    >> high performance-to-power ratios, and for high-performance single cores.

    >
    > Yes, but we've sort of run out of places to go. There are things that
    > *could* be done that could improve single-thread performance that
    > probably won't be done in hardware because of the low-power mantra.
    > Not only that, but cleverness in software can give you most of what
    > the hardware could give you through cleverness, anyway.
    >

    The problems which need serial solutions are not likely to be solved on a
    desktop, and I doubt that power usage is important in theoretical physics
    or in certain calculations related to weapons systems.

    > If there were all that much urgent demand for single-thread
    > performance, you'd see a substantial market for heroic cooling. If
    > there is such a demand, I don't see it.
    >

    Again, you're thinking desktop stuff, I believe. And the fact that Intel does
    speed one core when others are idle indicates that they see a benefit even on a
    desktop.

    > If we want significant progress on single-thread performance, we
    > should be praying for physics miracles, because that's what it's going
    > to take now.
    >

    I agree that's where the solution will lie, but I doubt miracles will be
    needed. Physics has a way of finding new things; engineers find a way first
    to make it useful, then to make it affordable. Recent CPUs seem to produce
    more work per cycle, so the push for clock speed has cooled, but marketers
    need some new feature to sell, so things will go faster. Unfortunately,
    vendors make what they can sell, even if it isn't what the customer needs.
    I say that in general, not related to the usefulness of more single-thread
    performance.
     
  18. Yousuf Khan

    Yousuf Khan Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    Bill Davidsen wrote:
    > Yousuf Khan wrote:
    >> Bill Davidsen wrote:

    >
    >>
    >> We are of course talking about 64-bit x86, since that is the topic
    >> we're limiting ourselves to.

    >
    > The topic of the group isn't really "Windows Game Hardware," nor is the
    > topic of the thread. You may restrict your discussion as you wish. Intel
    > had both 32 bit and 64 bit early on, and AMD did nothing but "me too"
    > for a long time before they developed competitive new technology of
    > their own (I would put that around the Athlon release).


    Where do you come up with "Windows game hardware"?

    I would put their branching away from Intel designs in the K5 and K6
    days. Prior to that, they were of course Intel's second-source
    manufacturer, so by definition they had to be a "me-too" manufacturer at
    that time. Intel was happy to have them as a second source until about
    the 286, and not so happy about it during and after the 386. But that's
    of course irrelevant, since a second source has to be identical to the
    primary source.

    Yousuf Khan
     
  19. Robert Myers

    Robert Myers Guest

    Re: U.S. files antitrust suit against Intel -unfair tactics used against rivals

    On Jan 5, 3:30 pm, Bill Davidsen <> wrote:
    > Robert Myers wrote:
    > > On Jan 3, 12:10 am, Bill Davidsen <> wrote:

    >
    > >> There is a whole class of problems, real-world problems, which cannot be
    > >> solved other than serially. I'm tired and not about to look it up tonight,
    > >> but not all problems we wish to solve are going to benefit from parallelism.
    > >> The new Intel CPUs which run one core at a higher clock when others are
    > >> idle reflect this. There is both room and need for parallel processors with
    > >> high performance-to-power ratios, and for high-performance single cores.

    >
    > > Yes, but we've sort of run out of places to go.  There are things that
    > > *could* be done that could improve single-thread performance that
    > > probably won't be done in hardware because of the low-power mantra.
    > > Not only that, but cleverness in software can give you most of what
    > > the hardware could give you through cleverness, anyway.

    >
    > The problems which need serial solutions are not likely to be solved on a
    > desktop, I doubt that power usage is important in theoretical physics and
    > certain calculations related to weapons systems.
    >

    Power consumption has become a huge issue for the bomb labs. That's
    how IBM managed to sell its first Blue Gene (to LLNL).

    If the NSA or the bomb labs were using heroic cooling for some single-
    threaded application, it would not surprise me.

    The power of this desktop vastly exceeds the power of the Cray-1,
    where I did an awful lot of physics. Right now, the problems that
    interest me most can use, at most, a mid-sized cluster because of
    bandwidth limitations, which, again, come down to a power issue. I can
    test all those clustering issues with the handful of such desktops I
    have, at least from the point of view of software development.

    > > If there were all that much urgent demand for single-thread
    > > performance, you'd see a substantial market for heroic cooling.  If
    > > there is such a demand, I don't see it.

    >
    > Again, you're thinking desktop stuff, I believe. And the fact that Intel does
    > speed one core when others are idle indicates that they see a benefit even on a
    > desktop.
    >

    Yes, Intel does speed up one core by about 10% in Turbo Boost mode. If I
    were a marketer, I'd be ecstatic. I don't know why anyone else would
    care.

    > > If we want significant progress on single-thread performance, we
    > > should be praying for physics miracles, because that's what it's going
    > > to take now.

    >
    > I agree that's where the solution will lie, but I doubt miracles will be
    > needed. Physics has a way of finding new things; engineers find a way first
    > to make it useful, then to make it affordable. Recent CPUs seem to produce
    > more work per cycle, so the push for clock speed has cooled, but marketers
    > need some new feature to sell, so things will go faster. Unfortunately,
    > vendors make what they can sell, even if it isn't what the customer needs.
    > I say that in general, not related to the usefulness of more single-thread
    > performance.


    I'm sure that manufacturers will take every advantage in speed they
    can find that doesn't unreasonably compromise clock speed. Memory
    issues, both latency and bandwidth, are more important for most
    applications than clock speed, and the effects of improvements in
    those, not clock speed, are what made i7 such a killer.
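    A back-of-the-envelope illustration of why memory beat clock speed:
    suppose a core retires 2 instructions per cycle when not stalled and
    misses to memory once every 200 instructions. The numbers are invented
    for illustration, not measurements of any actual i7:

```python
# Effective throughput with memory stalls: per block of instructions we
# spend instrs/IPC cycles computing plus one miss penalty waiting.

def effective_ipc(core_ipc, instrs_per_miss, miss_latency_cycles):
    """Average instructions per cycle once memory stalls are counted."""
    cycles = instrs_per_miss / core_ipc + miss_latency_cycles
    return instrs_per_miss / cycles

old = effective_ipc(2.0, 200, 200)  # 200-cycle trips to memory
new = effective_ipc(2.0, 200, 100)  # halve the latency (say, with an
                                    # on-die memory controller)
print(f"old: {old:.2f} IPC, new: {new:.2f} IPC, gain: {new / old - 1:.0%}")
```

    Halving the trip to memory buys 50% in this toy model, which dwarfs
    any plausible clock-speed bump on a workload that stalls this often.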

    Robert.
     