
so Jobs gets screwed by IBM over game consoles, thus Apple-Intel ?

Discussion in 'Intel' started by Guest, Jun 10, 2005.

  1. *Tries to imagine how a TARDIS clocks up -- is that fast or slow? ;-) *
    Kristoffer Lawson, Jun 15, 2005

  2. Guest

    Yousuf Khan Guest

    Well a TARDIS can do anything it likes with its clock, fast or slow,
    backwards or forwards. I assume a G5's clock system would only be one
    subset of the TARDIS's clock system. :)

    Yousuf Khan
    Yousuf Khan, Jun 15, 2005

  3. Guest

    keith Guest

    East Fishkill is *not* a research facility. Other than that, I won't
    Aren't press-releases wunnerful?
    You think this is novel? Processes are tweaked on the fly to produce
    what's needed all the time.
    What I wonder is what's going to stop Apple x86 applications from running
    on Winwhatever (or Linux). Without applications...
    keith, Jun 16, 2005
  4. Guest

    keith Guest

    You have the prices of the G5 (aka 970)? The Athlon64 can't do SMP,
    right? ...the others do. I think the comparison between the 970 and
    Opteron is fair.
    keith, Jun 16, 2005
  5. Guest

    Del Cecchi Guest

    Well, EF has the pilot line and SRC in addition to the 300mm line that
    Howie got outbid on. He didn't want all those hazardous chemicals in his
    state anyway. :)


    Del Cecchi, Jun 16, 2005
  6. Guest

    imouttahere Guest

    Opteron: 12 int / 17 fp stages; 970FX: 16 / 21


    The problem with the G5, I gather, is heat dissipation at higher clocks.
    imouttahere, Jun 16, 2005
  7. Hasn't the ASTC been closed down?
    YEEHAWW Howie wasn't invited to the auction. No water. No
    electricity. No land. No workers. Lotsa taxes buys lotsa bureaucracy
    though. It's taken forty years to build a two-lane road, and it'll
    likely be another forty before it's finished.
    If people only knew... You shoulda seen people turn white when there
    was a b*mb scare on the main site some ten years back (long before
    Keith R. Williams, Jun 16, 2005
  8. Guest

    YKhan Guest

    It has a research component. No one is going to do experiments on a
    working production line.

    But each manufacturer has its own history to draw upon when it creates
    its own automated process management scheme. AMD's history (including
    all of its failures) is particularly relevant to the manufacture of x86
    microprocessors. Most of this is learned experience, not theoretical.
    AMD has much more experience at producing high-speed processors at good
    yields than IBM. This is borne out by IBM's severe yields problem at
    Fishkill compared to AMD's relative lack of it, using nearly identical
    equipment in Dresden.

    AMD needed some help from IBM about integrating new materials into its
    processes. But once that knowledge was gained, it would be AMD that
    would have the better chance at getting it going on a big scale.

    Yousuf Khan
    YKhan, Jun 16, 2005
  9. Guest

    TravelinMan Guest

    Sorry to disappoint you, but it happens all the time in the real world.
    TravelinMan, Jun 16, 2005
  10. Guest

    keith Guest

    Wrong. One doesn't duplicate $billion lines. Experiments are run on
    production lines all the time.
    ....and IBM has never made an x86 processor? Also, perhaps you can
    clarify why you think x86 is so special. Processor technologies are
    processor technologies. IBM has different goals, sure.

    Most of this is learned experience, not theoretical.
    You're on drugs, Yousuf!
    Geez! What a load of horse-hockey!
    keith, Jun 17, 2005
  11. Guest

    YKhan Guest

    Yet, no production silicon came out of there.
    No, IBM hasn't made an x86 processor in the longest time. But it's not
    really the fact that it's an x86 processor, that makes it an issue.
    It's the fact that IBM doesn't have enough experience making massive
    amounts of processors in a long long time, so it doesn't have the
    background anymore. Sure it can feed its own needs with Power
    processors, but that's not a large quantity of processors.

    Nvidia was about to use Fishkill as its centre of manufacture for
    GeForce chips, but IBM was completely unprepared for the task. Nvidia
    is back with TSMC again. Cray wanted Fishkill to produce some router
    chips for its XT3/Red Storm computers, and IBM couldn't even handle
    that small job, ending up delaying a supercomputer project at Sandia.
    And to top it off, IBM was able to produce its own supercomputers, which
    pipped Red Storm to surpassing the Earth Simulator. So Cray gave
    the job to TI to complete it.

    So far, it's been a dismal record of incompetence. IBM might have some
    great theoreticians around, but not enough practitioners.
    If I am, then you should get on them right away too; it'll bring you
    back to planet Earth. You're not seeing the big commercial picture
    here: IBM is getting a reputation for manufacturing incompetence, pure
    and simple.
    Yes, sometimes the blunt truth hurts.

    Yousuf Khan
    YKhan, Jun 17, 2005
  12. Guest

    websnarf Guest

    When they disclosed the software, hardware, specifications of their
    testing, and when people on the outside have been able to reproduce
    their results.

    If you have BIAS problems (*cough* ads from nVidia *cough*) with Tom's,
    that's fine, but you have all the other sites I listed which
    keep him in check.

    Compare this to Apple, which hired Veritest (formerly ZD Labs, notorious
    for bias towards their sponsors and just generally bad benchmarking in
    the past) who told Apple that they *LOST* on SPEC CPU Int (even after
    rigging the whole test scenario in Apple's favor, and ignoring the
    faster Opteron/Athlon based CPUs), and then Apple interpreting this to
    mean that their computer was the fastest on the planet.

    There are two completely different standards here.
    websnarf, Jun 17, 2005
  13. Guest

    websnarf Guest

    I thought you "knew" that greater pipelining was how you got higher
    clock rate.
    Really? By who?

    Intel may have given up on the P4, but that, I think, is just a
    reflection that the P4 didn't have as much life in it as Intel
    originally thought, and they didn't have a next generation technology
    ready to go. But they did have a re-implemented P-Pro architecture
    that seems to have fit their current product needs.

    Trust me, the next Intel processor (after the Pentium-M) will be very
    deeply pipelined. Just going by their previous generation design
    times, I would say that in about 12-24 months they should be
    introducing a completely new core design. (On the other hand, the tech
    bubble burst and the time they wasted on Itanium may have truly thrown
    things off kilter over at Intel. Who knows, we'll see.)
    It was meant as background details to help you understand what is
    really going on. But as an Apple-head, I should have realized you
    don't care about details.
    They did, but their intention was not to have low IPC. They just
    couldn't design around this. Their real goal was to deliver high
    overall performance. A goal they did achieve. Just not quite as well
    as AMD achieved it. If Intel could have figured out how to truly
    deliver higher IPC with their half-width double pumped integer ALU
    architecture, they would have ruled the CPU universe and have
    completely rewritten the rules on CPU design.

    But AMD had been putting too much pressure on Intel, and Prescott did
    not impress. Furthermore, Intel seems to be shifting to Pentium-M. So
    presumably it's not an easy (if not impossible) problem to solve.

    The reason why I bring up AMD is that you cannot understand where
    Intel's technology has been going without understanding what their
    strongest competitor is doing.
    And you must be one of those who still believe there are WMDs in Iraq.
    Apple does not disclose sufficient details to reproduce the benchmark
    numbers. Furthermore, there are no third parties auditing their
    results. The few benchmarks where they did give sufficient details, I
    have personally debunked here:


    Scroll down to the 08/20/03, 07/18/03 entries. Other useful debunking
    includes the 06/15/02 entry.
    Right. Because dual processor Athlons don't exist. Oh wait a sec?
    What's this:


    Barbarically throwing a whole CPU's worth of extra computation power to
    win in *some* benchmarks with yet more provisos was a nice trick, but
    ultimately, AMD (and thus, eventually a desperately following Intel)
    will win that one too.
    You are extremely trusting of authority figures aren't you? You should
    never take the statements of a CEO or PR person at face value. Only
    statements made for which there is legal accountability should be
    believed from such people.
    It's called marketing and positioning. I thought I posted this already.
    I don't know what "RDF" stands for (I'm not a regular in this group)
    but I'll just read that as "challenged". There is no credible sense in
    which IBM can be considered ahead of Intel (let alone AMD). Except
    with the Power architecture on SPEC CPU FP, which is really a memory
    bandwidth test, but that's a different matter altogether.
    Ok, so they have copper. Look, there is a reason why AMD made a big
    deal about announcing a fab technology sharing agreement with IBM --
    the multiple billions IBM spends on research develop techniques and
    technologies that lesser fabricators like TSMC cannot match.

    If TSMC is making a highly clocked microprocessor with 3 cores, it
    means they are using a low gate count core design and tweaking the hell
    out of it at the circuit level for higher clock rates. Maybe these are
    those G3's that ran at 1 GHz but had no AltiVec units. That would
    make the most sense (one of the lessons of the video game industry is
    that SIMD instruction sets don't matter if you have a fast graphics
    chip).
    I don't even know what that means. You mean by having a steady
    customer, they will be able to fund future process technology? That's
    great, but their problem is what they can deliver with their *current*
    technology. And they have other steady customers, like ATI and nVidia
    who will pay just as well as Microsoft for fab capacity.
    *Sigh* ... Microsoft doesn't *OWN* the design.
    For licensing a core? That's not the way it works. Beyond a nominal
    up-front cost, they pay a percentage royalty on each chip shipped.
    For a design like this, I'm guessing about $5 per chip.
    Exactly ... this is the "deep reason" so many people are missing. The
    stress at Apple over IBM/Mot/Freescale being unable to deliver must be
    driving them nuts on a constant basis. By going with Intel, they get
    to benefit from the all out war between AMD and Intel (both from a
    price and technology perspective) without even having to deal with more
    than one CPU supplier.

    The reason why you see Dell openly calling on Apple to license OS X to
    them is that Dell realizes that ultimately Apple will come out of
    this much stronger (something they could have done a long time ago, BTW.)
    Of course, Dell may be scared for no reason at all -- Apple still has
    the problem of proving that their value add is superior to that of a
    typical Windows machine; something they have so far only been able to
    translate into 3% marketshare. The other 97% are not *ALL* just
    obsessed with benchmark performance.
    Ok, that's nice that Apple doesn't feel the need to emphasize it. That
    has nothing to do with whether or not a developer will use the endian
    sensitive tricks described there.
    Yes, but whose fault it is doesn't change the reality. Software which
    makes endianness assumptions exists in essentially equal quantity with
    software that is not cross platform.
    One thing has nothing to do with the other. You are too quick to defer
    to perceived authority. It is in fact your statement which is dubious,
    because you have no evidence to back it up.

    On the other hand, one thing we know for sure is that reams of raw x86
    assembly code linked up with an OS X front end doesn't exist in any
    form anywhere. Photoshop ported to OS X on x86 will probably be the
    first (and possibly only) application to do this ever.
    Codewarrior on the PC? Pulease! For people with even the most minor
    concerns about performance on Windows/x86, your choices are basically
    gcc, MSVC, and Intel C/C++.
    God, you really don't have any clue at all do you? The assembly is
    used to make it run with high performance. Pixel correctness isn't the
    issue -- "does it even compile" is the issue.
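    The "does it even compile" problem can be illustrated with a toy byte-swap routine (GCC-style inline assembly assumed; the function is invented for the example): the x86 fast path simply fails to build on any other architecture unless a portable fallback exists:

```c
#include <stdint.h>

/* Sketch of why raw x86 assembly is a porting problem: without the #else
   branch, this file would not compile at all on a non-x86 target. */
uint32_t bswap32(uint32_t x) {
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    __asm__("bswap %0" : "+r"(x));   /* x86-only fast path */
    return x;
#else
    /* Portable fallback: swap the four bytes with shifts and masks. */
    return (x >> 24) | ((x >> 8) & 0xff00u)
         | ((x << 8) & 0xff0000u) | (x << 24);
#endif
}
```

    An application with "reams of raw x86 assembly" is the extreme case of this: every such routine needs either a rewrite or a C fallback before the port can even link.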
    websnarf, Jun 17, 2005
  14. Guest

    Randy Howard Guest

    It would be nice to see them do something tangible to approach
    Hypertransport on the high end. Maybe Apple is even asking
    for this in return for not entertaining AMD proposals.
    It's the very same "reality distortion field" that you mention in long
    form on the link above.
    Dell answered an email inquiry with a bit of political correctness, and
    probably just to yank Gates' chain a bit. I'm sure he doesn't really
    want to train another 5000 Indians in Bangalore to support it.

    It was a far cry from "calling for" Apple to license it to them.
    Randy Howard, Jun 17, 2005
  15. Guest

    YKhan Guest

    They've announced the next two generations of desktop processors
    already, they will be Conroe and Merom, both Pentium-M derivatives.
    Intel has given up on deep pipelining; it's now gun-shy about it. It'll
    slowly increase pipeline depth a little bit at a time from now on (like
    it should've been doing all along). We probably won't see Intel trying
    to hit 4 GHz again for another five years.

    Intel's got more pressing issues to deal with now rather than
    pipelines. Like how to match AMD's Direct Connect Architecture
    (Hypertransport and internal memory controller).

    Yousuf Khan
    YKhan, Jun 17, 2005
  16. Guest

    YKhan Guest

    Still don't see the difference between them and Tom's. Have you seen
    their latest attempt at "stress testing" AMD and Intel dual-core
    systems? At one point the Intel system burned through three
    motherboards until they finally found one that was stable. Stable is
    relative in this case, as all it means is that it hasn't burned up yet;
    however, it suffered 4 reboots. At the same time, the AMD had no
    reboots. So, to make it fairer for Intel, they decided to reboot both
    systems and start the tests over again. Again the reboots started
    happening for Intel. So eventually they just decided to call it an
    "unfair" test because the "AMD systems were production systems", while
    the "Intel systems were preproduction". Now, is it my imagination, or
    didn't Intel introduce its dual-core desktop processor at least a month
    before AMD? So why are Intel's systems still pre-production? Tom's
    doesn't have an answer for that.

    Yousuf Khan
    YKhan, Jun 17, 2005
  17. Guest

    Tony Hill Guest

    Err.. IBM is currently fabbing VIA's x86 processors... do those not count?
    They do manage to turn out a fair share of power processors for the
    higher end of the embedded market, and then they have things like the
    PPC970 and the Power4/Power5 for the high-end. They do have an
    interesting hole in the middle where Intel and AMD tend to live, but I
    don't think it's THAT much of a stretch for them.

    Still, I DO believe that IBM has a fair bit to learn from AMD, the
    information flow between the two companies is definitely not a one-way
    street.
    Tony Hill, Jun 18, 2005
  18. Guest

    Tony Hill Guest

    The P4 was introduced in 2000. We're now in 2005 and the P4 is
    Intel's mainstream core for this year and well into next year at the
    very least. Their next generation core isn't expected until at least
    late-2006 and probably not until well into 2007.

    A 7-year lifespan for a processor core is EXTREMELY good.
    Calling the Pentium-M a "re-implemented P-Pro" is a VERY large
    stretch. Basically every component of the processor has been
    significantly modified from the original PPro. It may be a more
    evolutionary design than the P4, but it's still worlds away from the original.

    For comparison sake, AMD's Opteron is much more closely related to the
    original Athlon than the Pentium-M is to the P-Pro.
    My view is that Intel created some sort of think-tank group to come up
    with what would be the dominant application that people were really
    going to need processing power for during the P4's lifetime. This
    think-tank came back and said "streaming media is it!". Intel then
    went about creating a processor with a significant focus on streaming
    media performance. The resulting P4 processor really IS quite good at
    streaming media, but unfortunately it can be a bit ho-hum at some
    other tasks. The real problem here is that streaming media just isn't
    the real killer-app that Intel thought it would be. Sure, we do
    stream some media here and there, but more often than not we end up
    being limited by something other than the processor.
    Also, as I've mentioned in other threads, companies like Dell and HP
    pay significantly less than the above-mentioned prices due to the
    quantities they buy in. I wouldn't be at all surprised that Dell
    could pick up a 3.2GHz Pentium-D for about $150.

    As for the comparison of the 3-core PPC CPU of the XBox360 vs. the
    dual-core Pentium-D, the two chips really aren't in the same ballpark.
    IBM really stripped out a lot of features from their PPC core to allow
    it to clock higher. Clock for clock and core-for-core performance of
    the chip will most likely be lower than that of the Pentium-D. Not
    that it really matters of course; the console industry is quite
    different from the PC world and it's difficult to make direct comparisons.
    Of just about any other high-end server benchmark. SPEC, TPC,
    Linpack, you name it. The Power5 is a beast of a processor. It also
    costs a boatload to produce and really isn't competing in the same
    sort of market as Intel and AMD for the most part (though IBM does
    have some rather interesting 2 and 4 processor servers at attractive prices).
    TSMC has some very advanced technology, but their business is rather
    different than that of AMD, IBM and Intel. Where those companies
    might concentrate on high-performance above all else, TSMC has to
    focus more on low-cost of production. Manufacturing process can be
    tweaked in many different ways, but performance and cost are two of
    the main variables that can be optimized for.

    TSMC is quite capable of making damn near anything that you throw at
    them, but they might not be able to hit the same clock frequencies as
    Intel, IBM or AMD. That being said, they'll get higher yields and
    lower costs, at least as compared to IBM and AMD (Intel can probably
    match them on a cost basis due to sheer volume if nothing else).
    ATI and nVidia are hardly their only customers. TSMC is a BIG
    manufacturing company. For 2004 they had a total capacity of 5
    million 8-inch equivalent wafers, split between their 9 fabs. For
    comparison, once AMD gets their new 12-inch, 300mm Fab36 up and
    running at full steam, combined with their current 8-inch, 200mm
    Fab30, they will have a total capacity of about 650,000 8-inch
    equivalent wafers per year.

    Intel is the only company in the world with capacity to match TSMC.
    Or perhaps because Dell sees the opportunity for themselves to come
    out of this much stronger? I know I'm not the only one who would love
    to run MacOS X on a (non-Apple) PC.
    Tony Hill, Jun 18, 2005
  19. Guest

    YKhan Guest

    I think that only starts with the new C7 processor, I think the current
    C3's are being fabbed at TSMC still.

    It would be worthwhile actually seeing how quickly IBM can ramp up to
    create C7's. With its problems creating GeForce chips and Cray
    routers, it's getting a bad reputation for manufacturing.
    Possibly, but the embedded PPC's would be the equivalent of AMD's
    Geode/Alchemy processors or Intel's XScale: low-frequency parts not
    requiring a lot of hard work, where even a slightly wonky yield problem
    would still churn out working processors.

    Yousuf Khan
    YKhan, Jun 18, 2005
  20. Guest

    Yousuf Khan Guest

    Not all of their 9 fabs are state-of-the-art. Some of them are pretty
    old, used mainly for creating very cheap components. Similarly, AMD has
    got considerably more than just the Dresden fabs; it's also got less
    state-of-the-art fabs in Austin (fab 28), and various fabs in Japan
    through its Spansion subsidiary.

    Yousuf Khan
    Yousuf Khan, Jun 18, 2005
