1. This forum section is a read-only archive which contains old newsgroup posts. If you wish to post a query, please do so in one of our main forum sections (here). This way you will get a faster, better response from the members on Motherboard Point.

When will 90nm technology be "ready for prime time"?

Discussion in 'Intel' started by AJ, Dec 18, 2004.

  1. AJ

    Yousuf Khan Guest

    But certainly the Hyperthreading identification mechanism will be around
    long after Hyperthreading itself. For example, even AMD will identify
    its dual-core chips through the HT identification flags when those
    chips arrive.

    But the concept behind Hyperthreading is a unique situation that only
    the Pentium 4 could have exploited. Hyperthreading is Simultaneous
    Multithreading (SMT): it exploits the fact that there are so many
    wasted cycles kicking about inside a P4 processor; in other words, it
    makes use of the P4's inefficiency. Most other processors, such as the
    Pentium M, don't have that kind of inefficiency to exploit. They are
    continuously busy inside, with very few cycles to spare.

    Designing SMT into other processors (besides the P4) is almost as
    difficult as designing dual cores, possibly more difficult. You'd have
    to add extra integer and/or floating-point units to the core, and beef
    up the scheduling unit, all to allow an extra stream of instructions to
    flow through the processor, since such processors are already fully
    busy with one stream of instructions. This is how SMT works inside all
    of the other SMT-capable processors from IBM and Sun. Intel would have
    to redesign the Pentium M in the same way to accommodate SMT; it's
    easier to simply graft a second core in than to rearrange the core to
    accommodate all of these extra units.

    Yousuf Khan
     
    Yousuf Khan, Dec 24, 2004
    #21
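The wasted-cycle argument above can be illustrated with a toy model (my own sketch, not anything from the posts, and not a cycle-accurate simulator): treat each thread as a stream of cycle slots that either have work ready or stall, and let a 2-way SMT core fill one thread's stall with the other thread's work. A "P4-like" core with many stalls gains a lot; a "Pentium M-like" core with few spare cycles gains almost nothing.

```python
# Toy model of why SMT pays off on an inefficient pipeline. The stall
# probabilities (40% vs 5%) are illustrative assumptions, not measured data.
import random

def make_thread(n_slots, stall_prob, seed):
    """A thread as a list of booleans: True = has work ready this cycle."""
    rng = random.Random(seed)
    return [rng.random() >= stall_prob for _ in range(n_slots)]

def utilization(threads):
    """Fraction of cycles the core retires work, issuing from any thread
    that has work ready (a crude SMT issue policy)."""
    cycles = len(threads[0])
    busy = sum(1 for i in range(cycles) if any(t[i] for t in threads))
    return busy / cycles

N = 100_000
# "P4-like": long pipeline, many wasted cycles -> high stall probability.
p4_single = utilization([make_thread(N, 0.40, 1)])
p4_smt    = utilization([make_thread(N, 0.40, 1), make_thread(N, 0.40, 2)])
# "Pentium M-like": continuously busy inside -> low stall probability.
pm_single = utilization([make_thread(N, 0.05, 3)])
pm_smt    = utilization([make_thread(N, 0.05, 3), make_thread(N, 0.05, 4)])

print(f"P4-like:        {p4_single:.2f} -> {p4_smt:.2f} with SMT")
print(f"Pentium M-like: {pm_single:.2f} -> {pm_smt:.2f} with SMT")
```

With these numbers the stall-heavy core's utilization jumps by roughly a quarter under SMT, while the efficient core barely moves, which is the post's point: SMT only pays where there is slack to reclaim.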

  2. Not quite. The principle behind SMT from Sun or Intel is that there are
    parts of the CPU which are not in use serving the current thread.
    Typically there are at least two integer arithmetic units, for example.
    By adding some instruction-decode splitting and another register set,
    it's possible to use some of the capability which would otherwise sit
    idle. And any CPU will have some idle time, regardless of design,
    because the CPU is much faster than memory, and in some cases will
    simply starve for data or bog down writing it.

    In a word, no. If you don't have at least two integer processing units,
    every register+constant address calculation is serialized with user
    integer processing. Likewise, the BIU (bus interface unit, or memory
    interface) will have idle cycles while code executes which keeps all
    its data in registers.

    Having idle resources is not bad design; it's necessary to allow the
    CPU to run user instructions at full speed.
     
    Bill Davidsen, Jan 5, 2005
    #22
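The serialization point above can be sketched in a few lines (again my own toy model, with made-up instruction mixes, not anything from the post): if an instruction that touches memory needs both an address calculation and its own integer work, a core with one integer unit must run them back to back, while a core with two can overlap them.

```python
# Sketch of why a single integer unit serializes register+constant address
# calculation with user integer work. Each op is the number of integer
# micro-tasks it needs: 1 = pure ALU op, 2 = ALU op plus an address calc.
def cycles_needed(ops, n_int_units):
    """Total cycles to run the stream with n_int_units working in parallel."""
    total = 0
    for tasks in ops:
        # ceil division: an op's tasks spread across the available units
        total += -(-tasks // n_int_units)
    return total

# An assumed stream where half the instructions touch memory.
stream = [2, 1] * 1000   # 1000 load/store ops, 1000 pure-ALU ops

one_unit  = cycles_needed(stream, 1)  # address calc serialized
two_units = cycles_needed(stream, 2)  # address calc overlapped
print(one_unit, two_units)  # 3000 2000
```

A third of the cycles disappear just by letting the address calculation proceed alongside the user's arithmetic, which is the "otherwise idle capability" an SMT design reclaims for a second thread as well.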

  3. AJ

    Robert Myers Guest

    Soon, apparently, but not yet:

    http://www.computer.org/computer/homepage/1003/entertainment/

    http://www.gpgpu.org/

    and probably not limited to 32-bit precision. As to what they are
    actually computing, who knows:

    http://www.cs.unc.edu/~ibr/projects/paranoia/gpu_paranoia.pdf

    The plots, though, will look good, and that's become the accepted
    figure of merit for FP-intensive computing, anyway.

    RM
     
    Robert Myers, Jan 15, 2005
    #23
  4. AJ

    MatoXD99

    Joined:
    Jul 6, 2019
    Messages:
    2
    Likes Received:
    0
    Just want to say to everyone... It's 9nm, not 90nm
     
    MatoXD99, Jul 6, 2019
    #24
