Laptops, wait for Intel Centrino Core Duo?

Discussion in 'Dell' started by Kevin K. Fosler, Feb 12, 2006.

  1. I'm just curious. Why are you on a Dell newsgroup touting AMD? Do
    you own a Dell? I guess I don't understand why you're here since Dell
    doesn't use AMD.

    Kevin
     
    Kevin K. Fosler, Feb 14, 2006
    #21

  2. me Guest

    Is that true of the Pentium M chip as well... is it P4 based?
     
    me, Feb 14, 2006
    #22

  3. NoNoBadDog! Guest

    If you read my posts (as you obviously have), then you have seen that I do
    not post benchmarks, which are rarely applicable to the real world. I do not
    run a 6600GT on any of my machines, rarely play a game, and do not use
    anything derivative of the synthetic benchmarks used in most reviews. I
    prefer to base my judgments on my own real-world experience, and on the few
    reviews that do not use benchmarking as a comparison. Overall, the Turion is
    a better chip, with the exception of battery life. I back AMD because at
    least AMD is trying to move its technology forward. I have nothing against
    Intel. My last Intel was a 3.06 GHz. When Intel gets around to making a new
    chip, with an on-die memory controller so it does not have to be married to
    a chipset on the motherboard, I'll give them serious consideration.

    I also do not jump into threads where name-calling and character
    assassination creep in. I don't mind being called a fanboy, but when I see
    threads like your ref #2, I just leave them alone. Were I to respond, it
    would just degenerate into a childish name-calling contest, and I do not
    have time for that.

    Bobby
     
    NoNoBadDog!, Feb 14, 2006
    #23
  4. NoNoBadDog! Guest

    Kevin;

    Yes, I have owned Dells. I have nothing against Dell. The OP asked for an
    opinion about processors, and I gave my opinion, which happened to be about
    an AMD processor.

    I am hoping that Dell will one day offer AMD processors in their computers.

    Bobby
     
    NoNoBadDog!, Feb 14, 2006
    #24
  5. NoNoBadDog! Guest

    Yes. The underlying architecture of the Pentium M is still P4.

    Bobby
     
    NoNoBadDog!, Feb 14, 2006
    #25
  6. NoNoBadDog! Guest

    Ooops...

    I was wrong on my first response. The Pentium M is *NOT* a P4. It is
    actually a heavily modified Pentium III. Here is the link:

    http://en.wikipedia.org/wiki/Pentium_M

    As you can see, it is a low-voltage variant of the Tualatin core, hence a
    P3.

    Bobby
     
    NoNoBadDog!, Feb 14, 2006
    #26
  7. That is incorrect; although I'm sure there are traces of NetBurst (P-4)
    architecture in the P-M, its much closer ancestor is the Tualatin
    (P-III) architecture.

    "The Pentium M couples the execution core of the Pentium III with a
    Pentium 4 compatible bus interface, an improved instruction
    decoding/issuing front end, improved branch prediction, SSE, SSE2, and
    (from Yonah onwards) SSE3 support, and a much larger cache."
    (http://en.wikipedia.org/wiki/Pentium-M)
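
    If you want to see which of those SSE generations a given chip actually
    reports, here's a quick sketch for Linux (it just scans the "flags" line of
    /proc/cpuinfo; note the kernel names the SSE3 flag "pni", for Prescott New
    Instructions):

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            char line[4096];
            FILE *f = fopen("/proc/cpuinfo", "r");
            if (!f) { perror("cpuinfo"); return 1; }
            while (fgets(line, sizeof line, f)) {
                /* the first "flags" line lists the CPU feature bits */
                if (strncmp(line, "flags", 5) == 0) {
                    printf("SSE : %s\n", strstr(line, " sse ")  ? "yes" : "no");
                    printf("SSE2: %s\n", strstr(line, " sse2 ") ? "yes" : "no");
                    printf("SSE3: %s\n", strstr(line, " pni ")  ? "yes" : "no");
                    break;
                }
            }
            fclose(f);
            return 0;
        }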
     
    Nicholas Andrade, Feb 14, 2006
    #27
  8. NoNoBadDog! Guest

    See my retraction of the above posting... ironically, I posted the same
    link that you posted here...

    Basically, the Pentium M is a step backward in order to meet a lower power
    envelope....


    Bobby
     
    NoNoBadDog!, Feb 14, 2006
    #28
  9. Tom Scales Guest

    Backward? I think not.
     
    Tom Scales, Feb 14, 2006
    #29
  10. NoNoBadDog! Guest

    Please enlighten me as to how using a PIII core versus a NetBurst/P4 is not
    a step backward.

    A PIII with a faster system bus is still much slower than a P4/NB, and the
    issues with a single FPU and limited LBP are definitely not a step in a
    forward direction.

    Bobby
     
    NoNoBadDog!, Feb 14, 2006
    #30
  11. Tom Scales Guest

    Have you actually USED one? I couldn't care less about what transistors are
    on the chip. They're blisteringly fast with fantastic battery life. I
    realize that you're very pro AMD, and that's fine, but let's face it, these
    are just tools.

    Tom
     
    Tom Scales, Feb 14, 2006
    #31
  12. I highly suggest you take a course in modern computer architecture (we
    offer some excellent ones at the Jacobs School of Engineering @ UCSD,
    where I now work), and no, what you read in AMD press releases doesn't
    count. The NetBurst architecture was designed mainly to inflate clock
    rates (which is what Intel had been marketing on at the time). They made
    sacrifices to reach those high speeds (lengthening the pipeline, etc.),
    and they found out (what I believe they knew all along) that the
    performance gain from a faster clock was less than that from using each
    cycle more efficiently. Just because an architecture is older doesn't mean
    that it's worse -- just look at the DEC Alpha, which came out in 1992 and
    saw derivatives all the way to 2004.
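
    To make the clock-versus-efficiency point concrete, here's a
    back-of-the-envelope sketch (the IPC and clock figures below are made up
    purely for illustration, not measured values):

        /* effective throughput = clock rate * instructions retired per clock */
        #include <stdio.h>

        int main(void) {
            /* hypothetical figures for illustration only */
            double netburst_ghz  = 3.0, netburst_ipc  = 0.8;  /* deep pipeline    */
            double pentium_m_ghz = 1.6, pentium_m_ipc = 1.7;  /* shorter pipeline */

            printf("NetBurst : %.2f G instructions/s\n",
                   netburst_ghz * netburst_ipc);
            printf("Pentium M: %.2f G instructions/s\n",
                   pentium_m_ghz * pentium_m_ipc);
            return 0;
        }

    With those (invented) numbers, the 1.6 GHz chip retires 2.72 G
    instructions/s versus 2.40 for the 3.0 GHz chip, despite the lower sticker
    speed.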

    On a side note, your retraction post never made it through to my usenet
    server (SBC).
     
    Nicholas Andrade, Feb 14, 2006
    #32
  13. NoNoBadDog! Guest

    Agreed, they are tools. I'll see if I can get my hands on a P-M for
    testing...
    From my limited experience of trying them on retail shelves, they do not
    seem any faster than earlier Pentium based laptops. One other
    question...does the P-M address the heat issue? My last Pentium based
    laptop, a Toshiba, would get so hot that thermal protection would kick in.
    It was a Satellite 5005-S504, and was involved in the class action
    settlement where Toshiba was ordered to buy back the units because of the
    heat problems. When it did operate, it sounded like a 747 on a take-off
    roll (LOL). Battery life was mediocre at best. I spent a lot of money on
    the unit, based upon reviews I had read, and the unit was absolutely
    horrible. I was glad to sell it back to Toshiba, even though they did not
    buy it back for the original purchase price.

    My Dell notebook also moonlights as a space heater. I cannot use it as a
    "laptop" without danger of burning my thighs. My brother has a Sony, and it
    gets just as hot.
    At work, we are occasionally given laptops to use for various projects, and
    every one that I have used was P4 based and HOT!

    Bobby
     
    NoNoBadDog!, Feb 14, 2006
    #33
  14. User N Guest

    What isn't clear is the degree to which you are basing your definition simply
    on AMD-specific implementation details (marketing tactic, engineering sin)
    vs indirectly referring to functionality that is provided by AMD's crossbar
    and SRI. Perhaps the easiest way to ferret that out would be to consider
    a fundamentally different implementation. Conceptually, one could design
    cores which just have a serial interface to a multi-drop network. The cores
    could communicate with each other via internal network and communicate
    with peripherals via gateway. Given a chip with two cores, would it be
    "dual core" in your book? If not, specifically what *functionality* would
    be missing?
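
    To make the hypothetical concrete, here's a toy sketch of that design (it
    models nothing real; the "cores" are just threads and the "serial links"
    are just pipes): core 0 hands work over the internal network, core 1 does
    the work and sends the result back, with no shared data between them.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        static int to_b[2], to_a[2];   /* two one-way "links" built from pipes */

        static void *core_b(void *unused) {
            (void)unused;
            int v;
            read(to_b[0], &v, sizeof v);   /* receive work over the link */
            v *= 2;                        /* do the "work" */
            write(to_a[1], &v, sizeof v);  /* send the result back */
            return NULL;
        }

        int main(void) {
            pipe(to_b);
            pipe(to_a);
            pthread_t b;
            pthread_create(&b, NULL, core_b, NULL);

            int v = 21, r;
            write(to_b[1], &v, sizeof v);  /* core 0 hands off work */
            read(to_a[0], &r, sizeof r);   /* and waits for the reply */
            printf("core 0 got back: %d\n", r);
            pthread_join(b, NULL);
            return 0;
        }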
     
    User N, Feb 14, 2006
    #34
  15. NoNoBadDog! Guest

    Interesting proposal! However, for the sake of the preceding thread, I
    would have to say that the ability to communicate on die is essential before
    a chip can be referred to as "dual core". Imagine two people having to
    share the workload on an assembly line, and that the task requires a high
    degree of cooperation and communication between the two workers in order to
    keep up with the workload. If the two workers can see and hear each other,
    it is very easy for them to see what the other is doing and what has yet to
    be done. It is easy for the two to communicate and to share the workload,
    or to divide tasks as necessary. Now imagine the same scenario, but now the
    two workers are placed inside individual cubicles, with no way to know what
    the other one is doing. In order for worker "A" to determine what worker
    "B" is doing at any given time, he has to stop what he is doing, leave his
    cubicle, go to the cubicle of worker "B", and observe and communicate with
    worker "B". Then he has to return to his own cubicle and adapt what he saw
    to his own work, hoping that the workload has not changed since he visited
    the cubicle of worker "B". The latter scenario is what Intel is using on
    its "Core Duo" chips. The former is what AMD uses. The next proposed step
    in the Intel multi-core line is to add communication across the L2 cache,
    which would be analogous to giving the workers in the second scenario
    walkie-talkies. Now, although they cannot see what the other is doing, they
    can at least communicate without having to go outside their cubicles. It is
    better, but still not efficient.
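
    For anyone who wants to feel the cost of those hand-offs, here's a toy
    sketch (it measures nothing architecture-specific by itself; the point is
    that every round trip below is cross-core traffic, so run it under "time"
    on different chips): two threads play ping-pong with one shared flag, like
    the two workers passing status back and forth.

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        #define ROUNDS 1000000
        static atomic_int turn = 0;          /* whose move it is: 0 or 1 */

        static void *worker(void *arg) {
            int me = *(int *)arg;
            for (int i = 0; i < ROUNDS; i++) {
                while (atomic_load(&turn) != me)
                    ;                         /* spin until it's our turn */
                atomic_store(&turn, 1 - me);  /* hand the "work" back */
            }
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            int ids[2] = {0, 1};
            pthread_create(&a, NULL, worker, &ids[0]);
            pthread_create(&b, NULL, worker, &ids[1]);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            puts("done");
            return 0;
        }

    The slower the cores' path to each other, the longer the million hand-offs
    take.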

    Sorry for the long post....
     
    NoNoBadDog!, Feb 14, 2006
    #35
  16. User N Guest

    OK, sounds reasonable. However it would be good to clarify what is meant
    by "communicate". At the highest level, it seems to me that the objective is
    simply to allow threads in both cores to operate on the same data without
    going off die.

    You'd have to elaborate on what you mean by that. Core Duo has a 2M
    shared L2 cache. Data residing there can be read and written by both
    cores. That could be called "communicating across the L2 cache".

    My understanding of how things work in AMD dual cores is quite limited,
    but I'm under the impression that the SRI (alone I think, but feel free to clear
    that up) provides cross-core communications and cache coherency. More
    specifically, I'm under the impression that it effectively acts as an on demand
    L2<->L2 transfer mechanism.

    It seems to me that both approaches are meeting your requirement of on
    die communication, only in a different way. In one case there is a shared L2,
    in the other case there is a mechanism to keep both L2's coherent.

    FWIW, I think the next step may be Merom, due later this year IIRC. In
    any case, that supposedly has a larger (4M) shared L2 cache as well as
    the ability to perform L1<->L1 transfers of some kind.
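
    If you're on Linux and want to see which cores share which caches on a
    given chip, here's a small sketch (assuming your kernel exposes the usual
    sysfs cache directories): it walks cpu0's cache levels and prints the
    logical CPUs sharing each one. On a Core Duo you'd expect the L2 entry to
    list both cores; on a chip with private L2s, each core's L2 lists only
    itself.

        #include <stdio.h>
        #include <string.h>

        /* read one small sysfs attribute into buf; returns 0 on success */
        static int read_attr(int idx, const char *name, char *buf, int n) {
            char path[128];
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu0/cache/index%d/%s", idx, name);
            FILE *f = fopen(path, "r");
            if (!f) return -1;
            if (!fgets(buf, n, f)) buf[0] = '\0';
            fclose(f);
            buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
            return 0;
        }

        int main(void) {
            char level[16], type[32], shared[128];
            for (int i = 0; ; i++) {
                if (read_attr(i, "level", level, sizeof level) != 0)
                    break;                     /* no more cache indices */
                read_attr(i, "type", type, sizeof type);
                read_attr(i, "shared_cpu_list", shared, sizeof shared);
                printf("L%s %-12s shared by CPUs: %s\n", level, type, shared);
            }
            return 0;
        }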
     
    User N, Feb 15, 2006
    #36
  17. User N Guest

    It just dawned on me that you might be thinking of something different than
    what I was thinking about. Possibly not, and not that it really matters, but
    in the scenario I described above, I was thinking in terms of the "internal
    network" being on die and the "peripherals" being anything external to the
    CPU package. Thus core-to-core communications for keeping caches
    coherent or whatnot *would* remain on die.
     
    User N, Feb 15, 2006
    #37
