Motherboard Forums



U.S. files antitrust suit against Intel - unfair tactics used against rivals

 
 
Intel Guy
Guest
Posts: n/a
 
      12-23-2009, 10:35 PM
http://www.washingtonpost.com/wp-dyn...121601121.html

U.S. files antitrust suit against Intel, alleges unfair tactics used
against rivals

By Cecilia Kang and Steven Mufson
Washington Post Staff Writer
Thursday, December 17, 2009

The Obama administration sued chip giant Intel on Wednesday over a
decade-long run of actions allegedly designed to stifle competition,
opening a new front in the battle that big technology firms have been
waging for years against antitrust challenges in Asia and Europe.

The Federal Trade Commission lawsuit resembles past cases brought
against Intel by Japanese, Korean and European Union regulators over
rival Advanced Micro Devices, and it adds new allegations that Intel
rigged its microprocessors in a way that made it difficult for a
competitor, Nvidia, to provide consumers with superior graphics
abilities for computer games and video.

Intel denied the allegations, saying that it "competed fairly and
lawfully" and that "its actions have benefited consumers."

The lawsuit marks a major step for President Obama toward fulfilling
his 2008 presidential campaign promise to "reinvigorate antitrust
enforcement." At the time, he criticized the Bush administration for
"what may be the weakest record of antitrust enforcement of any
administration in the last half century."

Other key antitrust tests lie ahead. The power of Google, Comcast's
proposed takeover of NBC, and the market share of the makers of mobile
phone handsets are all under examination by the Justice Department,
the Federal Communications Commission or the FTC.

The technology industry, which has also been wooed by Obama, has been
striving to resolve a string of antitrust actions in the United States
and abroad. On Wednesday, the European Union ended its decade-long
antitrust investigation of Microsoft after Microsoft agreed to market
rival browsers as well as its own Internet Explorer. On Nov. 12, Intel
paid $1.25 billion to rival AMD to drop antitrust and patent lawsuits
as well as complaints filed with agencies, including the FTC. Many
technology analysts were also cheered by the administration's decision
to let software giant Oracle acquire Sun Microsystems, despite
Oracle's dominant position in business software.

But the FTC on Wednesday alleged that Intel had used bullying tactics
and payments to get computer makers such as Dell and Hewlett-Packard
to use Intel chips instead of those made by AMD. The FTC complaint,
the culmination of a one-year investigation, said "that Intel fell
behind in the race for technological superiority in a number of
markets and resorted to a wide range of anticompetitive conduct,
including deception and coercion, to stall competitors until it could
catch up."

The agency added that "Intel's anticompetitive tactics were designed
to put the brakes on superior competitive products that threatened its
monopoly."

The FTC isn't seeking monetary damages from Intel. "We are frankly
more focused on conduct," Richard Feinstein, director of the FTC's
bureau of competition, said in a news conference. Such remedies could
include forcing Intel to share intellectual property with competitors.

The case could become a key test of antitrust law. Forged during the
Progressive Era a century ago, antitrust legislation was designed to
tame steel and oil monopolies, and was later applied to shoe and beer
makers.

But under the influence of University of Chicago economists and
others, courts began to worry about the harm antitrust enforcement
actions could do to innovation and ultimately to the consumers they
were supposed to protect. Over the past 15 years, federal courts have
made it harder to show abuse of monopoly power and to win suits for
treble damages. Judges taking a more skeptical view of antitrust
actions have ranged from federal appeals court judges Richard A.
Posner and Frank H. Easterbrook to Supreme Court Justice Stephen G.
Breyer.

Applying antitrust to the tech sector has been particularly thorny
because of falling prices, constant innovation and technology that
often changes faster than it takes to litigate an antitrust case. Yet
rarely have so few companies stayed so dominant in their fields as
Microsoft, Intel, Oracle and now Google.

"Concern over class actions, treble damages awards, and costly jury
trials have caused many courts in recent decades to limit the reach of
antitrust," FTC Chairman Jon Leibowitz and commissioner J. Thomas
Rosch said in a statement. "The result has been that some conduct
harmful to consumers may be given a 'free pass.' "

The Intel lawsuit relies on the rarely used Section 5 of the law that
established the FTC in 1914. Antitrust lawyers saw this as an effort
to speed up the litigation and sidestep obstacles federal courts have
erected in recent years.

Unlike lawsuits filed by the Justice Department's antitrust division,
which are tried by juries, the FTC suit goes to an administrative
judge and cannot be used by private plaintiffs seeking treble damages.
The FTC said it expects the Intel trial to start in nine months and
conclude within 20 months.

Three years ago, Leibowitz, then an FTC board member, argued for
resurrecting the use of Section 5. At the time, Joe Sims, an antitrust
lawyer at Jones Day, wrote that reviving the use of Section 5 "would
benefit no one other than antitrust lawyers." He said it "would signal
the potential for a retreat to the antitrust of the past or, perhaps,
the rather less bounded 'competition' policy that is applied by many
non-U.S. regulators less constrained by statutes and case law (and
sometimes common sense)."

But on Wednesday, Leibowitz reasserted that Section 5's "broad reach
is beyond dispute."

The case is also important to Intel, whose stock fell 42 cents, or 2.1
percent, to close at $19.38 a share on Wednesday.

The FTC case "shows that the strategy being followed by Intel applies
not only to AMD but to Nvidia," said Albert A. Foer, president of the
American Antitrust Institute. "A small competitor in a highly
concentrated market comes up with a new technology that looks better
than Intel's, so Intel comes up with strategies using primarily
pricing, including loyalty discounts, but also deception to hurt the
competitor as much as possible and at least delay it from having
success with its new technology until Intel can catch up."

He said the FTC could force Intel to license its processors to anyone
willing to pay for it. "That would be a bombshell remedy that could
open up the global market in computers," Foer said.

Intel said "the highly competitive microprocessor industry . . . has
kept innovation robust and prices declining at a faster rate than any
other industry." Intel General Counsel Doug Melamed said in a
statement that "this case could have, and should have, been settled."
He said the FTC insisted on "unprecedented remedies" that "would make
it impossible for Intel to conduct business."
 
 
 
 
 
Robert Myers
Guest
Posts: n/a
 
      12-23-2009, 11:43 PM
On Dec 23, 5:35 pm, Intel Guy <(E-Mail Removed)> wrote:
> http://www.washingtonpost.com/wp-dyn...09/12/16/AR200...
>
> U.S. files antitrust suit against Intel, alleges unfair tactics used
> against rivals
>


There is a solution. Intel should just buy everyone out, starting
with Obama, who clearly has a price (just ask big pharma).

Then Intel can work its way downward, all the way to the whining
Internet trolls who have used up so much bandwidth.

When everyone has been bought out, ARM can take over as Intel's rival,
AMD can go out of business as the just recompense for whiners, and we
can live the rest of our lives in peace, while Intel and ARM slug it
out. By that point, gamers will be so knowledgeable and so advanced
that they can build their own custom GPU's. Apple? IBM? Hell, I
don't know. Maybe China can buy them out.

Robert.
 
 
 
 
 
Yousuf Khan
Guest
Posts: n/a
 
      12-26-2009, 08:23 PM
Robert Myers wrote:
> There is a solution. Intel should just buy everyone out, starting
> with Obama, who clearly has a price (just ask big pharma).
>
> Then Intel can work its way downward, all the way to the whining
> Internet trolls who have used up so much bandwidth.
>
> When everyone has been bought out, ARM can take over as Intel's rival,
> AMD can go out of business as the just recompense for whiners, and we
> can live the rest of our lives in peace, while Intel and ARM slug it
> out. By that point, gamers will be so knowledgeable and so advanced
> that they can build their own custom GPU's. Apple? IBM? Hell, I
> don't know. Maybe China can buy them out.
>
> Robert.


Feeling a little under siege there, big guy?

Yousuf Khan
 
 
Robert Myers
Guest
Posts: n/a
 
      12-26-2009, 11:48 PM
On Dec 26, 3:23 pm, Yousuf Khan <(E-Mail Removed)> wrote:
> Robert Myers wrote:
> > There is a solution. Intel should just buy everyone out, starting
> > with Obama, who clearly has a price (just ask big pharma).

>
> > Then Intel can work its way downward, all the way to the whining
> > Internet trolls who have used up so much bandwidth.

>
> > When everyone has been bought out, ARM can take over as Intel's rival,
> > AMD can go out of business as the just recompense for whiners, and we
> > can live the rest of our lives in peace, while Intel and ARM slug it
> > out. By that point, gamers will be so knowledgeable and so advanced
> > that they can build their own custom GPU's. Apple? IBM? Hell, I
> > don't know. Maybe China can buy them out.

>
> > Robert.

>
> Feeling a little under siege there, big guy?


Not especially, no.

I don't think anyone should feel safe right now. AMD won't cooperate
in the US investigations by agreement with Intel. I'm not a lawyer,
but, as I understand things, AMD can cooperate only to the extent that
it is forced to. No more running to the Principal's office for AMD.

Recent movements in AMD and NVidia stock (nearly parallel) are because
Intel dropped the ball on Larrabee. At this point, every player I
know of has some combination of low power/graphics/installed base/
yield/gross margin problem. In other words, no one is safe. Well,
IBM is safe, but IBM employees are not, nor is IBM's presence in any
hardware business other than mainframes and businesses that are wired
into that culture so that they'll be interested in other p-series
offerings. Intel has tons of money and probably a long run yet before
it realizes that it's in the same boat IBM was in (no lock on any
technology works forever).

I'm sort of hoping that Arm will succeed in stirring the waters. I
hope that NVidia makes Fermi work, but, along with lots of others, I'm
skeptical. As usual, I am *not* rooting for AMD, which has the same
threat from ARM that Intel does, only AMD has lots less money than
does Intel.

In short, I'm pretty happy because

1. The future will almost inevitably involve vector/stream processors,
probably in a form that was beyond my wildest dreams.

2. The x86 monoculture is severely threatened.

3. The architecture focus is going to move away from the CPU, which
has been beaten to death and toward heterogeneous processing and
connectivity issues. What's left of CPU microarchitecture is going to
go toward low power/massively parallel operation, all good news for
scientific computing.

4. I foresee the end of supposedly clever but completely useless
programming models, most of which are a step back from Fortran,
because bolted-on concurrency won't work any more.

I gather that RHEL 6 won't support Itanium, so one assumes that's it
for Itanium, even on HP. I have no idea what HP plans in its place.
Maybe just a featured-up x86. Anything I ever wanted from Itanium can
be better supplied now from other sources, anyway. It looks like IBM
has salvaged what's left of its mainframe business, so I assume
they're going to be much less interested in pumping up AMD.

Whatever it takes, so long as I don't have to listen to AMD whining
any more.

Robert.
 
 
Yousuf Khan
Guest
Posts: n/a
 
      12-27-2009, 04:12 AM
Robert Myers wrote:
> On Dec 26, 3:23 pm, Yousuf Khan <(E-Mail Removed)> wrote:
>> Feeling a little under siege there, big guy?

>
> Not especially, no.


Sounds like the same kind of "no, I'm fine" that a boxer who's being
pounded into the mat would give.

> I don't think anyone should feel safe right now. AMD won't cooperate
> in the US investigations by agreement with Intel. I'm not a lawyer,
> but, as I understand things, AMD can cooperate only to the extent that
> it is forced to. No more running to the Principal's office for AMD.


AMD now has its own arbitration process with Intel, so it doesn't have
to go to the governments anymore, theoretically. Still, AMD would be a
fool to trust Intel to live up to its agreements, so it's likely to put
up no resistance whatsoever if asked for documentation against Intel.
It just won't gloat about it now.

> Recent movements in AMD and NVidia stock (nearly parallel) are because
> Intel dropped the ball on Larrabee. At this point, every player I
> know of has some combination of low power/graphics/installed base/
> yield/gross margin problem. In other words, no one is safe. Well,
> IBM is safe, but IBM employees are not, nor is IBM's presence in any
> hardware business other than mainframes and businesses that are wired
> into that culture so that they'll be interested in other p-series
> offerings. Intel has tons of money and probably a long run yet before
> it realizes that it's in the same boat IBM was in (no lock on any
> technology works forever).


I'm not sure how you segued from graphics chips to IBM. I'm not even sure
why IBM is relevant to the entire discussion, let alone graphics chips.

> I'm sort of hoping that Arm will succeed in stirring the waters. I
> hope that NVidia makes Fermi work, but, along with lots of others, I'm
> skeptical. As usual, I am *not* rooting for AMD, which has the same
> threat from ARM that Intel does, only AMD has lots less money than
> does Intel.


I agree that AMD is in the same boat with Intel with regards to ARM.
What I don't agree is that ARM is any sort of threat to either of them.
ARM will never succeed in going into any sort of general-purpose
computer, higher than a smartphone. ARM processors get seriously winded
just trying to run background tasks in smartphones -- that's why
smartphones don't do background tasks -- multitasking is outside the
scope of ARM.

> In short, I'm pretty happy because
>
> 1. The future will almost inevitably involve vector/stream processors,
> probably in a form that was beyond my wildest dreams.


Not that it's replacing x86. Vector processors are just merging into x86
processors, in the form of GPUs.

> 2. The x86 monoculture is severely threatened.


Not while people have their Windows addictions. Hell, even Linux is
better supported under x86 than anything else.

> 3. The architecture focus is going to move away from the CPU, which
> has been beaten to death and toward heterogeneous processing and
> connectivity issues. What's left of CPU microarchitecture is going to
> go toward low power/massively parallel operation, all good news for
> scientific computing.


Sure, but that's all still based around x86. The heterogeneous
processors will all be subprocessors of an x86 main processor. If
anything, the x86 will gain popularity as the manager of other processors.

> 4. I foresee the end of supposedly clever but completely useless
> programming models, most of which are a step back from Fortran,
> because bolted-on concurrency won't work any more.


Okay, whatever you say.

> I gather that RHEL 6 won't support Itanium, so one assumes that's it
> for Itanium, even on HP. I have no idea what HP plans in its place.
> Maybe just a featured-up x86. Anything I ever wanted from Itanium can
> be better supplied now from other sources, anyway.


HP-UX.

> It looks like IBM
> has salvaged what's left of its mainframe business, so I assume
> they're going to be much less interested in pumping up AMD.


Again, another segue into IBM. What's IBM got to do with anything? Since
when was IBM pumping up AMD? AMD was an IBM customer in fab development,
that's about all.

The closest any company got to pumping AMD up was HP, your previous
subject matter. HP was its most reliable customer.

> Whatever it takes, so long as I don't have to listen to AMD whining
> any more.



All of the whining came from Intel: "Oh, <insert government entity here>
doesn't know how to apply its own anti-trust laws, here's how it
should've done it."

Yousuf Khan
 
 
Robert Myers
Guest
Posts: n/a
 
      12-27-2009, 07:26 AM
On Dec 26, 11:12 pm, Yousuf Khan <(E-Mail Removed)> wrote:
> Robert Myers wrote:
> > On Dec 26, 3:23 pm, Yousuf Khan <(E-Mail Removed)> wrote:
> >> Feeling a little under siege there, big guy?

>
> > Not especially, no.

>
> Sounds like the same kind of "no, I'm fine" that a boxer who's being
> pounded into the mat would give.
>

I'm not a fighter. I'm a lover.

> > I don't think anyone should feel safe right now. AMD won't cooperate
> > in the US investigations by agreement with Intel. I'm not a lawyer,
> > but, as I understand things, AMD can cooperate only to the extent that
> > it is forced to. No more running to the Principal's office for AMD.

>
> AMD now has its own arbitration process with Intel, so it doesn't have
> to go to the governments anymore, theoretically. Still, AMD would be a
> fool to trust Intel to live up to its agreements, so it's likely to put
> up no resistance whatsoever if asked for documentation against Intel.
> It just won't gloat about it now.
>
> > Recent movements in AMD and NVidia stock (nearly parallel) are because
> > Intel dropped the ball on Larrabee. At this point, every player I
> > know of has some combination of low power/graphics/installed base/
> > yield/gross margin problem. In other words, no one is safe. Well,
> > IBM is safe, but IBM employees are not, nor is IBM's presence in any
> > hardware business other than mainframes and businesses that are wired
> > into that culture so that they'll be interested in other p-series
> > offerings. Intel has tons of money and probably a long run yet before
> > it realizes that it's in the same boat IBM was in (no lock on any
> > technology works forever).

>
> I'm not sure how you segued from graphics chips to IBM. I'm not even sure
> why IBM is relevant to the entire discussion, let alone graphics chips.
>

Cell? And the Intel of today has to worry that it is like the IBM of
1990.

> > I'm sort of hoping that Arm will succeed in stirring the waters. I
> > hope that NVidia makes Fermi work, but, along with lots of others, I'm
> > skeptical. As usual, I am *not* rooting for AMD, which has the same
> > threat from ARM that Intel does, only AMD has lots less money than
> > does Intel.

>
> I agree that AMD is in the same boat with Intel with regards to ARM.
> What I don't agree is that ARM is any sort of threat to either of them.
> ARM will never succeed in going into any sort of general-purpose
> computer, higher than a smartphone. ARM processors get seriously winded
> just trying to run background tasks in smartphones -- that's why
> smartphones don't do background tasks -- multitasking is outside the
> scope of ARM.
>

The computer architects I talk to don't seem to feel that way. If the
software model moves toward running on lots of fairly wimpy
processors, then ARM is a player. Multitasking is easy. You add more
processors. The world has passed you by. The idea that a processor
has to be really agile so that it can be shared among many processes
because it is so expensive is like the idea that you have to be really
clever about managing memory because it is so expensive.
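The many-wimpy-cores model described above can be sketched with a short Python example (purely illustrative, not from the thread; the task function and workloads are invented): independent jobs go to a pool of worker processes, and the OS spreads those processes across however many small cores exist, instead of time-slicing one expensive, agile core.

```python
# Illustrative sketch only: multitasking by adding processes/cores
# rather than cleverly sharing one fast core. The task and its
# workloads are invented for the example.
from multiprocessing import Pool

def task(n):
    # stand-in for an independent background job
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # "Multitasking is easy. You add more processors."
    with Pool(processes=4) as pool:
        results = pool.map(task, [1000, 2000, 3000, 4000])
    print(results)
```

Each job runs to completion on its own worker; nothing here depends on a single core being fast enough to juggle all four at once.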

> > In short, I'm pretty happy because

>
> > 1. The future will almost inevitably involve vector/stream processors,
> > probably in a form that was beyond my wildest dreams.

>
> Not that it's replacing x86. Vector processors are just merging into x86
> processors, in the form of GPUs.
>

We don't know that at this point. The people who seem competent say
that the future might not need much in the way of a general purpose
CPU. IBM built Blue Gene around embedded processors. ARM was a
player in the RFP from LLNL that IBM "won" (cough, cough). In a world
where performance/watt is the key metric, ARM looks like a player,
and, if it weren't for IBM's wired-in status at LLNL, we might already
be seeing ARM in the Top 500. One fairly wimpy something or other to
deal with process scheduling and communication, and a stream processor
to do the heavy lifting. You didn't believe it the first time we
talked about it and you don't believe it now, but I don't really care.
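The division of labor described above, one wimpy core for process scheduling and communication plus a stream processor for the heavy lifting, can be caricatured in a few lines of Python (an invented sketch, not anyone's actual design; both function names are made up):

```python
# Invented sketch: a "wimpy" host that only schedules work, plus a
# "stream processor" that applies one operation across whole arrays.
def stream_process(data, scale):
    # data-parallel heavy lifting: the same op on every element
    return [x * scale for x in data]

def host_schedule(jobs):
    # the host core only decides what runs, and in what order
    return [stream_process(data, scale) for data, scale in jobs]

out = host_schedule([([1, 2, 3], 10), ([4, 5], 2)])
print(out)  # [[10, 20, 30], [8, 10]]
```

The point of the split is that the host's work is tiny and sequential while the stream processor's work is bulk and uniform, which is where the performance/watt argument lives.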

> > 2. The x86 monoculture is severely threatened.

>
> Not while people have their Windows addictions. Hell, even Linux is
> better supported under x86 than anything else.
>

Yes, but ARM is ubiquitous. Cell never took off because it never got
the developer support. ARM won't have that problem.

> > 3. The architecture focus is going to move away from the CPU, which
> > has been beaten to death and toward heterogeneous processing and
> > connectivity issues. What's left of CPU microarchitecture is going to
> > go toward low power/massively parallel operation, all good news for
> > scientific computing.

>
> Sure, but that's all still based around x86. The heterogeneous
> processors will all be subprocessors of an x86 main processor. If
> anything, the x86 will gain popularity as the manager of other processors.
>

Or subprocessors for ARM.

> > 4. I foresee the end of supposedly clever but completely useless
> > programming models, most of which are a step back from Fortran,
> > because bolted-on concurrency won't work any more.

>
> Okay, whatever you say.
>

It matters because, if the world is to move to a different model of
CPU architecture, the languages that are so widespread now will have
to undergo heavy duty changes. It matters little if the world remains
x86. The ISA doesn't matter all that much any more because the action
is going to be outside the individual CPU. Maybe x86 will morph into
something that handles the world of hundreds of cores deftly, or maybe
it won't.

> > I gather that RHEL 6 won't support Itanium, so one assumes that's it
> > for Itanium, even on HP. I have no idea what HP plans in its place.
> > Maybe just a featured-up x86. Anything I ever wanted from Itanium can
> > be better supplied now from other sources, anyway.

>
> HP-UX.
>

VMS has to move onto something, and IBM seems really determined to
move virtualized Linux onto big iron.

> > It looks like IBM
> > has salvaged what's left of its mainframe business, so I assume
> > they're going to be much less interested in pumping up AMD.

>
> Again, another segue into IBM. What's IBM got to do with anything? Since
> when was IBM pumping up AMD? AMD was an IBM customer in fab development,
> that's about all.
>

Oh, yes, IBM was heartbroken over the fate of Itanium and it had
nothing whatever to do with the timing of the (temporary) success of
AMD--temporary success that killed Itanium.

> The closest any company got to pumping AMD up was HP, your previous
> subject matter. HP was its most reliable customer.
>

There was a window where no one, including HP, could ignore Opteron.
HP didn't create the window.

> > Whatever it takes, so long as I don't have to listen to AMD whining
> > any more.

>
> All of the whining came from Intel: "Oh, <insert government entity here>
> doesn't know how to apply its own anti-trust laws, here's how it
> should've done it."
>

That isn't the whining I've been listening to on these forums, where
you have been a big contributor.

Robert.
 
 
Yousuf Khan
Guest
Posts: n/a
 
      12-28-2009, 04:24 AM
Robert Myers wrote:
> On Dec 26, 11:12 pm, Yousuf Khan <(E-Mail Removed)> wrote:
>> Robert Myers wrote:
>>> Recent movements in AMD and NVidia stock (nearly parallel) are because
>>> Intel dropped the ball on Larrabee. At this point, every player I
>>> know of has some combination of low power/graphics/installed base/
>>> yield/gross margin problem. In other words, no one is safe. Well,
>>> IBM is safe, but IBM employees are not, nor is IBM's presence in any
>>> hardware business other than mainframes and businesses that are wired
>>> into that culture so that they'll be interested in other p-series
>>> offerings. Intel has tons of money and probably a long run yet before
>>> it realizes that it's in the same boat IBM was in (no lock on any
>>> technology works forever).

>> I'm not sure how you segued from graphics chips to IBM. I'm not even sure
>> why IBM is relevant to the entire discussion, let alone graphics chips.
>>

> Cell? And the Intel of today has to worry that it is like the IBM of
> 1990.


Well then, Cell is already dead, killed off by its GPU competition. So
how does that demonstrate your little theory that "IBM is safe"?

>> I agree that AMD is in the same boat with Intel with regards to ARM.
>> What I don't agree is that ARM is any sort of threat to either of them.
>> ARM will never succeed in going into any sort of general-purpose
>> computer, higher than a smartphone. ARM processors get seriously winded
>> just trying to run background tasks in smartphones -- that's why
>> smartphones don't do background tasks -- multitasking is outside the
>> scope of ARM.
>>

> The computer architects I talk to don't seem to feel that way. If the
> software model moves toward running on lots of fairly wimpy
> processors, then ARM is a player. Multitasking is easy. You add more
> processors. The world has passed you by. The idea that a processor
> has to be really agile so that it can be shared among many processes
> because it is so expensive is like the idea that you have to be really
> clever about managing memory because it is so expensive.


The world has hardly passed me by; you're just living in your own world,
that's all. As usual.

If, as you say, all you need to do to get ARM properly multitasking is to
add lots of little ARM processors, then where does that leave ARM's
much-vaunted power consumption? You've basically created a chip that
consumes power like an x86 but isn't as well supported by software in
this end of the market. If the idea is to do multitasking on a
smartphone, then that smartphone will have only a few minutes of battery
life, with a regular cellphone-sized battery. If you want the smartphone
to last all day, then it won't be able to do the multitasking.

>>> In short, I'm pretty happy because
>>> 1. The future will almost inevitably involve vector/stream processors,
>>> probably in a form that was beyond my wildest dreams.

>> Not that it's replacing x86. Vector processors are just merging into x86
>> processors, in the form of GPUs.
>>

> We don't know that at this point. The people who seem competent say
> that the future might not need much in the way of a general purpose
> CPU. IBM built Blue Gene around embedded processors. ARM was a
> player in the RFP from LLNL that IBM "won" (cough, cough). In a world
> where performance/watt is the key metric, ARM looks like a player,
> and, if it weren't for IBM's wired-in status at LLNL, we might already
> be seeing ARM in the Top 500. One fairly wimpy something or other to
> deal with process scheduling and communication, and a stream processor
> to do the heavy lifting. You didn't believe it the first time we
> talked about it and you don't believe it now, but I don't really care.


So far, hybrid supercomputers have usually involved pairing an x86
processor with something else. For example, the IBM Roadrunner
supercomputer at Los Alamos was a pairing of Opterons with Cell
processors. There's a Chinese supercomputer that paired Xeons with AMD
GPUs. They're designing some kind of computer at Oak Ridge with Nvidia
Fermi GPUs, which also uses x86 processors. Etc.

The current trend towards heterogeneous hybrid solutions in
supercomputing is only a temporary trend. It's being done to make up for
performance deficiencies in homogeneous systems. And if the trend
continued, then I agree ARM has a chance to become the management
processor of such hybrid systems. However, I don't think the trend
will continue beyond the next few years. That's when x86-GPU processors
will come online. So unless ARM can integrate a GPU into itself, its
window of opportunity in supercomputing is very limited.

>>> 2. The x86 monoculture is severely threatened.

>> Not while people have their Windows addictions. Hell, even Linux is
>> better supported under x86 than anything else.
>>

> Yes, but ARM is ubiquitous. Cell never took off because it never got
> the developer support. ARM won't have that problem.


ARM is ubiquitous in everything but general-purpose computing. Most
software that exists for ARM is proprietary; often you can't even
share software between two different products based on the same ARM
processor.

> It matters because, if the world is to move to a different model of
> CPU architecture, the languages that are so widespread now will have
> to undergo heavy duty changes. It matters little if the world remains
> x86. The ISA doesn't matter all that much any more because the action
> is going to be outside the individual CPU. Maybe x86 will morph into
> something that handles the world of hundreds of cores deftly, or maybe
> it won't.


They're approaching it rapidly. Processors with six cores are already
common (in servers). The next steps are 8 and 12 cores. The design time
needed to reach higher core counts keeps shrinking.

>>> It looks like IBM
>>> has salvaged what's left of its mainframe business, so I assume
>>> they're going to be much less interested in pumping up AMD.

>> Again, another segue into IBM. What's IBM got to do with anything? Since
>> when was IBM pumping up AMD? AMD was an IBM customer in fab development,
>> that's about all.
>>

> Oh, yes, IBM was heartbroken over the fate of Itanium and it had
> nothing whatever to do with the timing of the (temporary) success of
> AMD--temporary success that killed Itanium.


IBM was one of the original Itanium backers, and came to AMD later than
everybody else except Dell. They kept their AMD hardware choices
purposely limited. Hardly a big supporter of AMD, much more an Intel
supporter. Of course, we all know now that was because of Intel's treats
and threats.

The earliest supporters of AMD servers were HP, Sun, and Cray.

>> The closest any company got to pumping AMD up was HP, your previous
>> subject matter. HP was its most reliable customer.
>>

> There was a window where no one, including HP, could ignore Opteron.
> HP didn't create the window.


No one could ignore the Opteron, yet they did, very purposefully. And
yes, it wasn't HP that created the window, it was Sun.

>>> Whatever it takes, so long as I don't have to listen to AMD whining
>>> any more.

>> All of the whining came from Intel, "Oh, <insert government entity here>
>> doesn't know how to apply its own anti-trust laws, here's this is how it
>> should've done it".
>>

> That isn't the whining I've been listening to on these forums, where
> you have been a big contributor.


It's not whining if it was proven true. So far all of the trials have
come to the conclusion that AMD was right. Whining is what Intel is
doing now that it's lost all of the cases, and all of its excuses were
disproved, but it still keeps using those excuses.

Yousuf Khan
 
 
Robert Myers
Guest
Posts: n/a
 
      12-28-2009, 09:19 PM
On Dec 27, 11:24 pm, Yousuf Khan <(E-Mail Removed)> wrote:
> Robert Myers wrote:


> > Cell? And the Intel of today has to worry that it is like the IBM of
> > 1990.

>
> Well then, Cell is already dead, killed off by its GPU competition. So
> how does that demonstrate your little theory that "IBM is safe"?
>

We don't know if Cell is dead or not and we don't really know that it
was killed off by GPU competition. More likely, it was killed off
(if, indeed, it has been killed off) by a lack of developer support.
I think the cost to manufacture it (because of yield problems) was
also a factor, but there I'm guessing against denials from people who
could conceivably know.

Except for mainframes and business tied to mainframes, IBM doesn't
want to be in the hardware business any more. Why should they? It's
easier to make money in software and services and they've learned new
lockin techniques.

> The world has hardly passed me by, you're just living in your own world
> that's all. As usual.
>
> If, as you say, all you need to do to get ARM properly multitasking is to
> add lots of little ARM processors, then where does that leave ARM's
> much-vaunted power consumption? You've basically created a chip that
> consumes power like an x86 but isn't as well supported by software in
> this end of the market. If the idea is to do multitasking on a
> smartphone, then that smartphone will have only a few minutes of battery
> life, with a regular cellphone-sized battery. If you want the smartphone
> to last all day, then it won't be able to do the multitasking.
>

Teensy little consumer devices have certainly helped to drive ARM and
partly driven the obsession with performance per watt. The obsession
with performance per watt and the potential for devices that do well
on that measure goes well beyond smartphones. All kinds of devices
that need to do multitasking will also need very low power
consumption.

The reasons for putting many small cores onto a larger die rather than
trying to improve on yesterday's single-core performance are:

1. Significant improvements in single core performance and power
management seem unlikely.

2. If you can't add performance by adding transistors without an
unacceptable power penalty, the only way you can improve performance
is to add cores.

3. A physically big core consumes lots of power just in moving data
around. One of the drivers behind many cores on a die is to reduce
the power consumed in data movement--you put data on a small area (a
core), do stuff with it, all in that small area, then move it back to
cache--a much shorter net path for data, with a resulting decrease in
power consumption.

4. If you can do tasks in parallel, it's much more energy efficient to
have two cores running at half speed than one core running at full
speed. If you run many simpler cores at lower speed, the cost of
abandoning power-hungry features like OoO can also be reduced to the
point where you can do without them.
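Point 4 can be put in rough numbers with a toy model (my own back-of-envelope sketch, assuming dynamic power follows P ≈ C·V²·f and that supply voltage scales roughly linearly with frequency under DVFS, so P scales as f³):

```python
# Toy DVFS model: dynamic power P ~ C * V^2 * f, with the simplifying
# assumption that voltage V scales linearly with frequency f, so P ~ f^3.
# All figures are relative (full speed = 1.0), not real watts.

def dynamic_power(freq, capacitance=1.0):
    """Relative dynamic power of one core at the given relative frequency."""
    voltage = freq                       # simplifying assumption: V ~ f
    return capacitance * voltage ** 2 * freq

one_core_full = dynamic_power(1.0)       # one core at full speed
two_cores_half = 2 * dynamic_power(0.5)  # two cores at half speed, same throughput

print(one_core_full)    # 1.0
print(two_cores_half)   # 0.25 -- same work, roughly a quarter of the power
```

That factor of four is the idealized best case: real chips leak static power and voltage can't scale down indefinitely, but the direction of the argument holds.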

As to who is living in what world, you can be certain that I didn't
pluck any of this out of the aether.

> So far all hybrid supercomputers have usually involved pairing an x86
> processor to something else. For example, the IBM Roadrunner
> supercomputer at LLNL was a pairing of Opteron with Cell processors.
> There's a Chinese supercomputer that paired Xeons with AMD GPUs. They're
> designing some kind of computer with Nvidia Fermi GPUs at Oak Ridge
> which also has x86 processors. Etc.
>

Thanks for educating me, Yousuf. The ink was barely dry on the Blue
Gene press releases when I said publicly they should have built it
with Cell. I can't remember anyone else saying so, but I learned long
ago that the national labs think of everything first.

> The current trend towards heterogeneous hybrid solutions in
> supercomputing is only a temporary trend. It's being done to make up for
> performance deficiencies in homogeneous systems. And if the trend
> continued, then I agree ARM has a chance to become the management
> processor of such hybrid systems. However, I don't think the trend
> will continue beyond the next few years. That's when x86-GPU processors
> will come online. So unless ARM can integrate a GPU into itself, its
> window of opportunity in supercomputing is very limited.
>

ARM doesn't have to integrate anything. Only *someone* does. There
will be a chip for applications other than graphics that integrates
stream processing capabilities. That chip might use x86 or it might
not. If someone with enough money comes along, it could use Power
(some version of Cell, essentially). Or it could use ARM.

> ARM is ubiquitous in everything but general-purpose computing. Most
> software that exists for ARM is usually proprietary, where you can't
> even share software between two different products based on the same ARM
> processor.
>

You got linux, you got gcc, you got software. You already got
developers up the wazoo.

> > It matters because, if the world is to move to a different model of
> > CPU architecture, the languages that are so widespread now will have
> > to undergo heavy duty changes. It matters little if the world remains
> > x86. The ISA doesn't matter all that much any more because the action
> > is going to be outside the individual CPU. Maybe x86 will morph into
> > something that handles the world of hundreds of cores deftly, or maybe
> > it won't.

>
> They're approaching it rapidly. Processors with six cores are already
> common (in servers). Next steps are 8 and 12. The design time to come up
> with higher numbers of cores is getting smaller.
>

The problem, as I understand it, is system processes that scale as N^2
and the desire (need) to do finer scale concurrency.
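That N^2 worry can be made concrete with a toy count (an illustrative sketch, not a model of any real coherence protocol): if every core must coordinate with every other core, the number of core pairs grows quadratically with the core count.

```python
# All-to-all coordination among N cores touches N*(N-1)/2 distinct pairs,
# so coordination work grows quadratically with the core count.

def core_pairs(n):
    """Distinct core-to-core pairs among n cores (n choose 2)."""
    return n * (n - 1) // 2

for n in (2, 6, 12, 64):
    print(n, core_pairs(n))   # 2->1, 6->15, 12->66, 64->2016
```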

> IBM was one of the original Itanium backers, and came to AMD later than
> everybody else except Dell. They kept their AMD hardware choices
> purposely limited. Hardly a big supporter of AMD, much more an Intel
> supporter. Of course we all know now that was because of Intel treats
> and threats.
>

Somewhere in there, a consent decree ceased to be in effect. Wouldn't
you know it? IBM was *much* less interested in Itanium. Who is a
whiner and who has a legitimate beef depends on where you sit, and I
think I know where you sit.

> > There was a window where no one, including HP, could ignore Opteron.
> > HP didn't create the window.

>
> No one could ignore the Opteron, yet they did very purposefully. And
> yes, it wasn't HP that created the window, it was Sun.
>

Nope. 'Twas Big Blue. Without Big Blue, there would have been no
Opteron and probably no AMD.

> > That isn't the whining I've been listening to on these forums, where
> > you have been a big contributor.

>
> It's not whining if it was proven true. So far all of the trials have
> come to the conclusion that AMD was right. Whining is what Intel is
> doing now that it has lost all of the cases and had all of its excuses
> disproved, yet it keeps using those excuses.
>

Listen. Here's the deal. I like smart technical guys. I like smart
marketers and managers. I don't like technical guys who only think
they're smart. I don't like marketers and managers who shoot
themselves in the foot, say, by needlessly ****ing off technical
guys. And I don't like lawyers of any kind, nor do I like people who
need lawyers even to play. You have your standards, and I have mine.

Robert.
 
 
Yousuf Khan
 
      12-29-2009, 09:07 AM
Robert Myers wrote:
> On Dec 27, 11:24 pm, Yousuf Khan <(E-Mail Removed)> wrote:
>> Robert Myers wrote:

>
>>> Cell? And the Intel of today has to worry that it is like the IBM of
>>> 1990.

>> Well then, Cell is already dead, killed off by its GPU competition. So
>> how does that demonstrate your little theory that "IBM is safe"?
>>

> We don't know if Cell is dead or not and we don't really know that it
> was killed off by GPU competition. More likely, it was killed off
> (if, indeed, it has been killed off) by a lack of developer support.
> I think the cost to manufacture it (because of yield problems) was
> also a factor, but there I'm guessing against denials from people who
> could conceivably know.


They're paying some lip-service to "the concepts of Cell may be applied
in some future products". Which means to me it's dead.

>> If, as you say, all you need to do to get ARM properly multitasking is to
>> add lots of little ARM processors, then where does that leave ARM's
>> much-vaunted power consumption? You've basically created a chip that
>> consumes power like an x86 but isn't as well supported by software in
>> this end of the market. If the idea is to do multitasking on a
>> smartphone, then that smartphone will have only a few minutes of battery
>> life, with a regular cellphone-sized battery. If you want the smartphone
>> to last all day, then it won't be able to do the multitasking.
>>

> Teensy little consumer devices have certainly helped to drive ARM and
> partly driven the obsession with performance per watt. The obsession
> with performance per watt and the potential for devices that do well
> on that measure goes well beyond smartphones. All kinds of devices
> that need to do multitasking will also need very low power
> consumption.
>
> The reasons for putting many small cores onto a larger die rather than
> trying to improve on yesterday's single-core performance are:


<snip>

Yeah, thanks for telling me all of the advantages of multi-cores for
multitasking. That's not the point. How will a multi-core ARM keep its
power consumption advantage with so many cores? In order to match the
multitasking capabilities of x86, ARM's power consumption will go up;
it may even match that of x86.

And let's not forget that x86 was able to multitask long before it had
multicores. It had enough performance to time-slice, therefore it had
high power consumption. Even a single-core ARM that could match that
multitasking capability of x86 would consume power like an x86.

It doesn't matter whether you add cores or increase single-core
performance: in order to multitask, power consumption will go up.

>> The world has hardly passed me by, you're just living in your own world
>> that's all. As usual.
>>

> As to who is living in what world, you can be certain that I didn't
> pluck any of this out of the aether.


No, you plucked it out of the Internet, which is just the Ether, I guess.

>> So far all hybrid supercomputers have usually involved pairing an x86
>> processor to something else. For example, the IBM Roadrunner
>> supercomputer at LLNL was a pairing of Opteron with Cell processors.
>> There's a Chinese supercomputer that paired Xeons with AMD GPUs. They're
>> designing some kind of computer with Nvidia Fermi GPUs at Oak Ridge
>> which also has x86 processors. Etc.
>>

> Thanks for educating me, Yousuf. The ink was barely dry on the Blue
> Gene press releases when I said publicly they should have built it
> with Cell. I can't remember anyone else saying so, but I learned long
> ago that the national labs think of everything first.
>
>> The current trend towards heterogeneous hybrid solutions in
>> supercomputing is only a temporary trend. It's being done to make up for
>> performance deficiencies in homogeneous systems. And if the trend
>> continued, then I agree ARM has a chance to become the management
>> processor of such hybrid systems. However, I don't think the trend
>> will continue beyond the next few years. That's when x86-GPU processors
>> will come online. So unless ARM can integrate a GPU into itself, its
>> window of opportunity in supercomputing is very limited.
>>

> ARM doesn't have to integrate anything. Only *someone* does. There
> will be a chip for applications other than graphics that integrates
> stream processing capabilities. That chip might use x86 or it might
> not. If someone with enough money comes along, it could use Power
> (some version of Cell, essentially). Or it could use ARM.


No, actually ARM does have to. Putting heterogeneous processors on a
system is a major pain in the ass. You need to design special physical
connections for each type of processor, not to mention that you have to
design special software for each type. If you have a processor that
integrates two different types of processors inside a single package, it
reduces your costs considerably, in both hardware and software. It's
the main reason why more supercomputers haven't adopted the
heterogeneous route.

>> ARM is ubiquitous in everything but general-purpose computing. Most
>> software that exists for ARM is usually proprietary, where you can't
>> even share software between two different products based on the same ARM
>> processor.
>>

> You got linux, you got gcc, you got software. You already got
> developers up the wazoo.


Linux is hardly the most popular software platform for ARM. The most
popular platform for ARM is no platform at all: most apps that exist on
ARM aren't running on any OS whatsoever; they're bare-metal apps. It's
because of these one-off bare-metal apps that ARM is said to have so
much software written for it.

In the world of Linux on ARM, software support still lags that of the
world of Linux on x86.

>> They're approaching it rapidly. Processors with six cores are already
>> common (in servers). Next step are 8 and 12. The design time to come up
>> with higher numbers of cores is getting smaller.
>>

> The problem, as I understand it, is system processes that scale as N^2
> and the desire (need) to do finer scale concurrency.


Actually, they're going overboard with multi-cores now. They need to
find a use for all of these cores, and there aren't enough tasks to
go around most of the time. Even when an app is multithreaded, it still
can't make full use of all of the resources.

>> IBM was one of the original Itanium backers, and came to AMD later than
>> everybody else except Dell. They kept their AMD hardware choices
>> purposely limited. Hardly a big supporter of AMD, much more an Intel
>> supporter. Of course we all know now that was because of Intel treats
>> and threats.
>>

> Somewhere in there, a consent decree ceased to be in effect. Wouldn't
> you know it? IBM was *much* less interested in Itanium. Who is a
> whiner and who has a legitimate beef depends on where you sit, and I
> think I know where you sit.


There's been plenty written about how many OEMs were bullied by Intel
into cancelling or delaying products based on non-Intel products.
Certain OEMs caved to it more readily than others. IBM certainly wasn't
the biggest caver, but it was average on the scale. Top of the scale was
Dell.

>>> There was a window where no one, including HP, could ignore Opteron.
>>> HP didn't create the window.

>> No one could ignore the Opteron, yet they did very purposefully. And
>> yes, it wasn't HP that created the window, it was Sun.
>>

> Nope. 'Twas Big Blue. Without Big Blue, there would have been no
> Opteron and probably no AMD.


You give IBM much too much credit for being AMD's guardian angel. The
semiconductor folks certainly helped out -- for a price. The server
folks were very meek.

>>> That isn't the whining I've been listening to on these forums, where
>>> you have been a big contributor.

>> It's not whining if it was proven true. So far all of the trials have
>> come to the conclusion that AMD was right. Whining is what Intel is
> doing now that it has lost all of the cases and had all of its excuses
> disproved, yet it keeps using those excuses.
>>

> Listen. Here's the deal. I like smart technical guys. I like smart
> marketers and managers. I don't like technical guys who only think
> they're smart. I don't like marketers and managers who shoot
> themselves in the foot, say, by needlessly ****ing off technical
> guys. And I don't like lawyers of any kind, nor do I like people who
> need lawyers even to play. You have your standards, and I have mine.



So you're saying because AMD needed lawyers to assert its rights, it was
whining? Similarly, all of those software companies killed by Microsoft
over the years, were they also whiners? They certainly had to get the
anti-trust authorities after Microsoft to get anywhere. Any company that
has an elephant stepping on its neck deserves what's coming to it?

Yousuf Khan
 
 
Robert Myers
 
      12-29-2009, 10:57 PM
On Dec 29, 4:07 am, Yousuf Khan <(E-Mail Removed)> wrote:
> Robert Myers wrote:


> > We don't know if Cell is dead or not and we don't really know that it
> > was killed off by GPU competition. More likely, it was killed off
> > (if, indeed, it has been killed off) by a lack of developer support.
> > I think the cost to manufacture it (because of yield problems) was
> > also a factor, but there I'm guessing against denials from people who
> > could conceivably know.

>
> They're paying some lip-service to "the concepts of Cell may be applied
> in some future products". Which means to me it's dead.
>

IBM will be using Cell technology. It has killed off one particular
discrete product. That's all.

>
> Yeah, thanks for telling me all of the advantages of multi-cores for
> multitasking. That's not the point. How will a multi-core ARM keep its
> power consumption advantage with so many cores? In order to match the
> multitasking capabilities of x86, power consumption of ARM will go up,
> it may match the consumption of x86 too.
>
> And let's not forget that x86 was able to multitask long before it had
> multicores. It had enough performance to time-slice, therefore it had
> high power consumption. Even a single-core ARM that can match that
> multitasking capability of x86 would consume like an x86.
>
> It doesn't matter if you add cores or if you increase single-core
> performance, in order to multitask power consumption will go up.
>

One last time, and then I'm going to stop trying.

The metric that matters is performance/watt, and many smaller, simpler
cores win big on that metric for reasons I explained--if you can make
the software take advantage of the parallelism.

> > As to who is living in what world, you can be certain that I didn't
> > pluck any of this out of the aether.

>
> No, you plucked it out of the Internet, which is just the Ether, I guess.
>

If I do just pluck things out, at least I get it right when I do.
I'll offer the hint, which I know you won't take, that not everyone
gets their ideas from someone else.

>
> No, actually ARM does have to. Putting heterogeneous processors on a
> system is a major pain in the ass. You need to design special physical
> connections for each type of processor, not to mention that you have to
> design special software for each type. If you have a processor that
> integrates two different types of processors inside a single package, it
> reduces your costs considerably. Saves hardware and software costs. It's
> the main reason why more supercomputers haven't adopted the
> heterogeneous route.
>

I guess you don't know about nVidia Tegra.

>
> Linux is hardly the most popular software platform for ARM. The most
> popular platform for ARM is: no platform at all. Most apps that exist on
> ARM aren't running on any OS whatsoever; they're bare-metal apps. It's because
> of these one-off bare-metal apps that ARM is said to have so much
> software written for it.
>
> In the world of Linux on ARM, software support still lags that of the
> world of Linux on x86.
>

Big deal. If ARM gets a significant power/performance advantage,
watch the x86 diehards move on. Intel, at least, knows that, even if
you don't. ARM has a small but possibly significant and probably
fundamental advantage over x86 in the power/performance department.

>
> > The problem, as I understand it, is system processes that scale as N^2
> > and the desire (need) to do finer scale concurrency.

>
> Actually, they're going overboard with multi-cores now. They need to
> find a use for all of these cores, and there aren't enough tasks to
> go around most of the time. Even when an app is multithreaded, it still
> can't make full use of all of the resources.
>

Everyone who matters understands that. The world of software will
have to change.

> There's been plenty written about how many OEMs were bullied by Intel
> into cancelling or delaying products based on non-Intel products.
> Certain OEMs caved to it more readily than others. IBM certainly wasn't
> the biggest caver, but it was average on the scale. Top of the scale was
> Dell.


Dell wasn't bullied. It was bought. It was a good deal for Intel, a
good deal for Dell, and a good deal for consumers. I don't see
anything immoral about it. The Intel emails that have been discovered
only make obvious what everyone with half a brain knew to be true.
Rather than building its own distribution channel, Intel obtained the
exclusive services of one. If there is some law against that, the law
is wrong.

Intel's "bullying" of other distributors is a different matter. Can
an HP or an IBM be bullied? Maybe they can be. Those are the kinds
of facts you need to put in front of a jury. I'm sure that other,
smaller players could easily have been bullied and some of it might
have been immoral or illegal or both. How much it amounts to is
another matter.

> You give IBM much too much credit for being AMD's guardian angel. The
> semiconductor folks certainly helped out -- for a price. The server
> folks were very meek.
>

The help was absolutely essential. IBM didn't have to, and they could
have exacted a much higher price.

Of course the semiconductor people were unenthusiastic. Physical
scientists and hardware types at IBM all have to be unhappy giving
their jobs and their business to others. In fact, it's hard for me to
know how IBM keeps domestic employees at all, as their jobs usually
turn out to be facilitating the export of their own livelihood. As a
company, IBM is doing great.

>
> So you're saying because AMD needed lawyers to assert its rights, it was
> whining? Similarly, all of those software companies killed by Microsoft
> over the years, were they also whiners? They certainly had to get the
> anti-trust authorities after Microsoft to get anywhere. Any company that
> has an elephant stepping on its neck deserves what's coming to it?


Intel isn't remotely like Microsoft. Microsoft has been like the
Asian Carp of software. As one venture capitalist put it, Microsoft
takes other people's ideas and turns them into bad products.
Microsoft and AMD are both "me-too" companies. AMD whines that they
don't get to eat enough of Intel's lunch. Too bad for them, I say.
Intel is ruthless in protecting its lunch? Tell me a business that
*doesn't* work that way.

Whether the ubiquity of microprocessors is a good thing or not is a
separate question. IBM didn't make it happen. Wintel did, and, yes,
Microsoft's ruthlessness played a big role in making that happen.
Apple deserves some big points, too. AMD is just a parasite, although
they could have been much, much more. Just my opinion, of course.

Robert.
 