This forum section is a read-only archive which contains old newsgroup posts.

How many x86 instructions?

Discussion in 'Intel' started by Yousuf Khan, Feb 20, 2014.

  1. Yousuf Khan

    BillW50 Guest

    On 2/23/2014 4:41 PM, charlie wrote:
    > The front panel on many of the old mainframes and minicomputers allowed
    > direct entry of machine code, and was usually used to manually enter
    > such things as a "bootstrap", or loader program.


    The way I recall is any computer only understands machine code and
    nothing else. Anything else must be converted to machine code at some point.

    --
    Bill
    Gateway M465e ('06 era) - Thunderbird v24.3.0
    Centrino Core2 Duo T7400 2.16 GHz - 4GB - Windows 8 Pro w/Media Center
     
    BillW50, Feb 23, 2014
    #21

  2. On 2/23/2014, J. P. Gilliver (John) posted:
    > In message <5foOu.2965$>, charlie <>
    > writes:
    > []
    >>At the time, the only out we had in order to meet contract
    >> requirements was to write a combination of assembly code, compiled
    >> code, and horrors,
    >>machine code. If that wasn't bad enough, we then had to
    >> "disassemble"
    >>the machine code to see if there was a way to duplicate it at the
    >> highest level possible, without writing compiler extensions.


    > What's machine code (as opposed to assembly code) in this context?
    > How did you write it?


    This might help:

    When I owned an Apple ][, for a long time I didn't own an assembler
    program. I wrote some code in hex...

    Let me tell you, "a small change" was a complete oxymoron.

    "Machine code" means the actual bits or bytes that go into memory.
    "Assembly code" is a *symbolic* language. Assembly language code, for
    various reasons, might not even be a perfect 1 to 1 match to what goes
    into the machine.
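
    Gene's distinction can be made concrete: an assembler is, at bottom, a
    table from symbolic names to byte encodings. A toy sketch in Python (the
    three encodings are genuine 8086 ones; the one-table "assembler" is
    purely illustrative):

```python
# Hand-assembling a few 8086 instructions: the symbolic (assembly) form
# on the left becomes the literal bytes (machine code) on the right.
# The encodings are the real 8086 ones; the "assembler" itself is a toy.

OPCODES = {
    "nop":      bytes([0x90]),              # NOP
    "mov ax,1": bytes([0xB8, 0x01, 0x00]),  # MOV AX, imm16 (B8 iw)
    "ret":      bytes([0xC3]),              # near RET
}

def assemble(lines):
    """Translate symbolic lines into the raw bytes a CPU executes."""
    return b"".join(OPCODES[line.strip()] for line in lines)

code = assemble(["mov ax,1", "nop", "ret"])
print(code.hex(" "))  # b8 01 00 90 c3
```

    The same bytes could be typed in directly, which is exactly the
    "machine code without an assembler" situation described above.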

    --
    Gene E. Bloch (Stumbling Bloch)
     
    Gene E. Bloch, Feb 23, 2014
    #22

  3. Yousuf Khan

    BillW50 Guest

    On 2/23/2014 5:45 PM, Gene E. Bloch wrote:
    > On 2/23/2014, J. P. Gilliver (John) posted:
    >> In message <5foOu.2965$>, charlie <>
    >> writes:
    >> []
    >>> At the time, the only out we had in order to meet contract
    >>> requirements was to write a combination of assembly code, compiled
    >>> code, and horrors,
    >>> machine code. If that wasn't bad enough, we then had to "disassemble"
    >>> the machine code to see if there was a way to duplicate it at the
    >>> highest level possible, without writing compiler extensions.

    >
    >> What's machine code (as opposed to assembly code) in this context? How
    >> did you write it?

    >
    > This might help:
    >
    > When I owned an Apple ][, for a long time I didn't own an assembler
    > program. I wrote some code in hex...
    >
    > Let me tell you, "a small change" was a complete oxymoron.
    >
    > "Machine code" means the actual bits or bytes that go into memory.
    > "Assembly code" is a *symbolic* language. Assembly language code, for
    > various reasons, might not even be a perfect 1 to 1 match to what goes
    > into the machine.


    +1

    --
    Bill
    Gateway M465e ('06 era) - Thunderbird v24.3.0
    Centrino Core2 Duo T7400 2.16 GHz - 4GB - Windows 8 Pro w/Media Center
     
    BillW50, Feb 23, 2014
    #23
  4. Yousuf Khan

    Yousuf Khan Guest

    On 23/02/2014 6:15 PM, BillW50 wrote:
    > On 2/23/2014 4:41 PM, charlie wrote:
    >> The front panel on many of the old mainframes and minicomputers allowed
    >> direct entry of machine code, and was usually used to manually enter
    >> such things as a "bootstrap", or loader program.

    >
    > The way I recall is any computer only understands machine code and
    > nothing else. Anything else must be converted to machine code at some point.


    I know what Charlie is talking about. When he talks about directly
    entering machine code, it means typing in the binary codes directly,
    even without the niceness of an assembler to translate it into something
    partially English-readable. This would be entering numbers into memory directly,
    like 0x2C, 0x01, 0xFB, etc., etc.
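
    A sketch of what that amounts to, using a Python bytearray as stand-in
    RAM (the load address is illustrative; the three bytes are the ones
    from the post, not a real program):

```python
# Simulating direct machine-code entry: no assembler, just addresses and
# raw byte values, the way a monitor program or front panel works.
# The address and byte values here are illustrative only.

memory = bytearray(65536)          # 64 KiB of "RAM"

def store(addr, value):
    """Deposit one byte at an address, as a front panel's Store would."""
    memory[addr] = value

entry = 0x0100
for offset, byte in enumerate([0x2C, 0x01, 0xFB]):  # the bytes from the post
    store(entry + offset, byte)

print(memory[0x0100:0x0103].hex(" "))  # 2c 01 fb
```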

    Yousuf Khan
     
    Yousuf Khan, Feb 24, 2014
    #24
  5. Yousuf Khan

    Guest

    On Sun, 23 Feb 2014 17:15:24 -0600, BillW50 <> wrote:

    >On 2/23/2014 4:41 PM, charlie wrote:
    >> The front panel on many of the old mainframes and minicomputers allowed
    >> direct entry of machine code, and was usually used to manually enter
    >> such things as a "bootstrap", or loader program.

    >
    >The way I recall is any computer only understands machine code and
    >nothing else. Anything else must be converted to machine code at some point.


    That's sorta the meaning of the word "machine" in "machine code". ;-)

    The issue is how the programs are stored, in the meantime. If the
    machine code is never "seen" in the wild, it's an interpreter. If the
    machine code is stored somewhere it's either "assembled" or
    "compiled". The major difference being that an "assembled" program
    has a 1:1 correspondence to its machine code, a "compiled" program
    will not. Of course a "macro" assembler confuses this point some.
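
    The interpreter case can be sketched in a few lines: the program stays
    in its symbolic form and is decoded on every run, so no machine code
    for it is ever generated or stored. The three-instruction "language"
    here is invented for illustration:

```python
# A toy interpreter: the host CPU's machine code runs the decoding loop,
# but the interpreted program itself never becomes machine code.
# The three "instructions" are invented for illustration.

def interpret(program):
    acc = 0
    for op, arg in program:
        if op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        else:
            raise ValueError(f"unknown op {op!r}")
    return acc

print(interpret([("load", 2), ("add", 3), ("mul", 4)]))  # 20
```

    An assembler or compiler would instead emit stored machine code for
    those three steps, which is the distinction drawn above.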
     
    , Feb 24, 2014
    #25
  6. Yousuf Khan

    Jason Guest

    On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
    <> wrote in article <le7ng5$jfq$
    >
    > Hate to break it to you, but you are behind the times. Compilers
    > are passe' -- "modern" systems use interpreters like JIT Java.
    >
    > How else do you think Android gets Apps to run on the dogs-breakfast
    > of ARM processors out there? It is [nearly] all interpreted Java.
    > So much so that Dell can get 'roid Apps to run on its x86 tablet!
    > (AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
    >
    >
    > -- Robert


    Compilers are NOT passe'

    The performance penalty for interpreted languages is a large factor. It's
    fine in many situations - scripting languages and the like - and the
    modern processors are fast enough to make the performance hit tolerable.
    Large-scale applications are still compiled and heavily optimized. Time
    is money.
     
    Jason, Feb 24, 2014
    #26
  7. Yousuf Khan

    charlie Guest

    > machine code is stored somewhere it's either "assembled" or
    >> "compiled".


    There's more! A "Loader" can take a binary type file and add it to
    memory. If the loader has a system level "map" of memory usage, and
    resident code entries and exits, it can load the code at a relative or
    absolute memory location, and inform the system level software where it
    is. Or it might do a "load and go" so that when the loader is finished,
    the processor goes to and starts executing at an address provided by the
    loader. A system might tell the loader where in memory to put the code.
    A programmer's nightmare is intermixed code and data, with self
    modifying code added, just for giggles! Some compilers/assemblers used
    to generate machine code had/have detectable signatures tracing back to
    the particular development software that was used. This allowed authors
    to check to see if they were being properly paid for use of their
    development software. (Freeware or student development software, pay for
    commercial use) I'd suggest that you don't consider use of "student" or
    "educational" development software to develop a commercial program!
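
    The "load and go" idea above can be sketched like so; the two-byte
    entry-point header is an invented image format, not any real loader's:

```python
# A minimal "load-and-go" loader sketch: copy a binary image into memory
# at a load address, then "go" to the entry point the image declares.
# The image format (2-byte little-endian entry offset + code) is invented.

memory = bytearray(4096)

def load_and_go(image, load_addr):
    """Place the image in memory and return the address to start at."""
    entry_offset = int.from_bytes(image[:2], "little")  # header: entry point
    code = image[2:]
    memory[load_addr:load_addr + len(code)] = code      # relocate to load_addr
    return load_addr + entry_offset                     # "go" address

image = bytes([0x02, 0x00]) + bytes([0x90, 0x90, 0xC3])  # entry at offset 2
start = load_and_go(image, 0x0200)
print(hex(start))  # 0x202
```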


    On 2/23/2014 7:34 PM, wrote:
    > On Sun, 23 Feb 2014 17:15:24 -0600, BillW50 <> wrote:
    >
    >> On 2/23/2014 4:41 PM, charlie wrote:
    >>> The front panel on many of the old mainframes and minicomputers allowed
    >>> direct entry of machine code, and was usually used to manually enter
    >>> such things as a "bootstrap", or loader program.

    >>
    >> The way I recall is any computer only understands machine code and
    >> nothing else. Anything else must be converted to machine code at some point.

    >
    > That's sorta the meaning of the word "machine" in "machine code". ;-)
    >
    > The issue is how the programs are stored, in the meantime. If the
    > machine code is never "seen" in the wild, it's an interpreter. If the
    > machine code is stored somewhere it's either "assembled" or
    > "compiled". The major difference being that an "assembled" program
    > has a 1:1 correspondence to its machine code, a "compiled" program
    > will not. Of course a "macro" assembler confuses this point some.
    >
     
    charlie, Feb 24, 2014
    #27
  8. Yousuf Khan

    Guest

    On Sun, 23 Feb 2014 23:21:52 -0500, Jason <>
    wrote:

    >On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
    ><> wrote in article <le7ng5$jfq$
    >>
    >> Hate to break it to you, but you are behind the times. Compilers
    >> are passe' -- "modern" systems use interpreters like JIT Java.
    >>
    >> How else do you think Android gets Apps to run on the dogs-breakfast
    >> of ARM processors out there? It is [nearly] all interpreted Java.
    >> So much so that Dell can get 'roid Apps to run on its x86 tablet!
    >> (AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
    >>
    >>
    >> -- Robert

    >
    >Compilers are NOT passe'
    >
    >The performance penalty for interpreted languages is a large factor. It's
    >fine in many situations - scripting languages and the like - and the
    >modern processors are fast enough to make the performance hit tolerable.
    >Large-scale applications are still compiled and heavily optimized. Time
    >is money.


    Time may be money but transistors are free. ;-)
     
    , Feb 24, 2014
    #28
  9. Yousuf Khan

    Jason Guest

    On Mon, 24 Feb 2014 13:02:02 -0500 "" <> wrote
    in article <>
    >
    > On Sun, 23 Feb 2014 23:21:52 -0500, Jason <>
    > wrote:
    >
    > >On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
    > ><> wrote in article <le7ng5$jfq$
    > >>
    > >> Hate to break it to you, but you are behind the times. Compilers
    > >> are passe' -- "modern" systems use interpreters like JIT Java.
    > >>
    > >> How else do you think Android gets Apps to run on the dogs-breakfast
    > >> of ARM processors out there? It is [nearly] all interpreted Java.
    > >> So much so that Dell can get 'roid Apps to run on its x86 tablet!
    > >> (AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
    > >>
    > >>
    > >> -- Robert

    > >
    > >Compilers are NOT passe'
    > >
    > >The performance penalty for interpreted languages is a large factor. It's
    > >fine in many situations - scripting languages and the like - and the
    > >modern processors are fast enough to make the performance hit tolerable.
    > >Large-scale applications are still compiled and heavily optimized. Time
    > >is money.

    >
    > Time may be money but transistors are free. ;-)


    Well, not exactly free. Visit a National Lab sometime to get an idea of
    the magnitude of the expenditures for "free" transistors. I've been
    there. Those people do everything to wring out every droplet of
    performance that they can, even on petaflops machines.
     
    Jason, Feb 24, 2014
    #29
  10. Yousuf Khan

    Guest

    On Mon, 24 Feb 2014 13:38:40 -0500, Jason <>
    wrote:

    >On Mon, 24 Feb 2014 13:02:02 -0500 "" <> wrote
    >in article <>
    >>
    >> On Sun, 23 Feb 2014 23:21:52 -0500, Jason <>
    >> wrote:
    >>
    >> >On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
    >> ><> wrote in article <le7ng5$jfq$
    >> >>
    >> >> Hate to break it to you, but you are behind the times. Compilers
    >> >> are passe' -- "modern" systems use interpreters like JIT Java.
    >> >>
    >> >> How else do you think Android gets Apps to run on the dogs-breakfast
    >> >> of ARM processors out there? It is [nearly] all interpreted Java.
    >> >> So much so that Dell can get 'roid Apps to run on its x86 tablet!
    >> >> (AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
    >> >>
    >> >>
    >> >> -- Robert
    >> >
    >> >Compilers are NOT passe'
    >> >
    >> >The performance penalty for interpreted languages is a large factor. It's
    >> >fine in many situations - scripting languages and the like - and the
    >> >modern processors are fast enough to make the performance hit tolerable.
    >> >Large-scale applications are still compiled and heavily optimized. Time
    >> >is money.

    >>
    >> Time may be money but transistors are free. ;-)

    >
    >Well, not exactly free. Visit a National Lab sometime to get an idea of
    >the magnitude of the expenditures for "free" transistors. I've been
    >there. Those people do everything to wring out every droplet of
    >performance that they can, even on petaflops machines.
    >

    Now, divide that expenditure by the number manufactured. I worked in
    high-end microprocessor design for seven or eight years. Transistors
    are indeed treated as free, and getting cheaper every year. If you
    look at how programmers write, they think they're free, too. ;-)
     
    , Feb 24, 2014
    #30
  11. On 2/23/2014, Yousuf Khan posted:
    > On 23/02/2014 6:15 PM, BillW50 wrote:
    >> On 2/23/2014 4:41 PM, charlie wrote:
    >>> The front panel on many of the old mainframes and minicomputers
    >>> allowed
    >>> direct entry of machine code, and was usually used to manually
    >>> enter
    >>> such things as a "bootstrap", or loader program.

    >>
    >> The way I recall is any computer only understands machine code and
    >> nothing else. Anything else must be converted to machine code at some
    >> point.


    > I know what Charlie is talking about. When he talks about directly
    > entering machine code, it means typing in the binary codes directly,
    > even without the niceness of an assembler to translate it into something
    > partially English-readable. This would be entering numbers into memory
    > directly, like 0x2C, 0x01, 0xFB, etc., etc.


    > Yousuf Khan


    Not so recently, when I worked on what were then called minicomputers,
    the boot process went like this:

    Set the front panel data switches to the bits of the first loader
    instruction (in machine language, of course)
    Set the front panel address switches to the first location of the
    loader
    Enter the data into memory by pressing the Store button.

    Set the data switches to the second instruction and the address
    switches to the second address. Press Store.

    Repeat a dozen or two times to get the entire bootstrap loader into
    memory

    Load the main loader paper tape into the paper tape reader

    Set the address switches to the starting location of the bootstrap
    loader

    Press the Go button

    When the main loader is in, load the paper tape of the program you want
    to run into the reader

    Set the starting address to the main loader's first address

    Press Go

    That loader will load the final paper tape automatically, thank Silicon

    Over time the process was streamlined a bit, for example by letting the
    storage address autoincrement after each Store operation.

    Maybe you can guess how happy I was when BIOSes started to appear :)
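
    The switch-and-Store procedure above, including the later autoincrement
    refinement, can be simulated in a few lines (the octal "bootstrap"
    words below are placeholders, not any real machine's loader):

```python
# Simulating the front-panel procedure: set the address switches, set the
# data switches, press Store; later panels autoincremented the address so
# the address switches didn't have to be reset for every word.
# The "bootstrap" words are placeholders, not a real loader.

memory = {}

class FrontPanel:
    def __init__(self):
        self.address = 0

    def set_address(self, addr):
        """Set the address switches."""
        self.address = addr

    def store(self, word):
        """Deposit the data switches at the current address, then autoincrement."""
        memory[self.address] = word
        self.address += 1

panel = FrontPanel()
panel.set_address(0o100)                      # starting location of the bootstrap
for word in [0o016701, 0o000026, 0o012702]:   # toggle in each instruction
    panel.store(word)

print(sorted(memory))  # [64, 65, 66]
```

    Repeat a dozen or two times, then set the address switches back to the
    start and press Go, exactly as in the procedure above.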

    --
    Gene E. Bloch (Stumbling Bloch)
     
    Gene E. Bloch, Feb 24, 2014
    #31
  12. Yousuf Khan

    Jason Guest

    On Mon, 24 Feb 2014 14:09:02 -0500 "" <> wrote
    in article <>
    >
    > On Mon, 24 Feb 2014 13:38:40 -0500, Jason <>
    > wrote:
    >
    > >On Mon, 24 Feb 2014 13:02:02 -0500 "" <> wrote
    > >in article <>
    > >>
    > >> On Sun, 23 Feb 2014 23:21:52 -0500, Jason <>
    > >> wrote:
    > >>
    > >> >On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"

    > Now, divide that expenditure by the number manufactured. I worked in
    > high-end microprocessor design for seven or eight years. Transistors
    > are indeed treated as free, and getting cheaper every year. If you
    > look at how programmers write, they think they're free, too. ;-)


    Ok, transistors are indeed free in that regard. But as we've learned
    there are limits to absolute performance that can be had even with an
    unlimited transistor budget - hence multi-core machines. Programmers
    would be very happy if we could have figured out how to continuously
    boost uniprocessor performance, but it cannot happen, at least with
    silicon. Taking advantage of parallel processors is, for most tasks, very
    hard.
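
    The limit described here is usually stated as Amdahl's law: if some
    fraction of a task is inherently serial, adding processors runs into a
    hard ceiling. A quick sketch (the 90% parallel fraction is just an
    illustrative number):

```python
# Amdahl's law: the speedup from n processors when a fraction p of the
# work can be parallelized. Even infinite cores cap out at 1 / (1 - p).

def amdahl_speedup(p, n):
    """Overall speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# 90% parallel code: 16 cores give ~6.4x, and no core count beats 10x.
for n in (1, 4, 16, 1_000_000):
    print(n, round(amdahl_speedup(0.9, n), 2))
```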
     
    Jason, Feb 24, 2014
    #32
  13. Yousuf Khan

    Jason Guest

    On Mon, 24 Feb 2014 12:11:05 -0800 "Gene E. Bloch"
    <> wrote in article <leg90r$nln$1
    @news.albasani.net>
    >
    > On 2/23/2014, Yousuf Khan posted:
    > > On 23/02/2014 6:15 PM, BillW50 wrote:
    > >> On 2/23/2014 4:41 PM, charlie wrote:
    > >>> The front panel on many of the old mainframes and minicomputers
    > >>> allowed
    > >>> direct entry of machine code, and was usually used to manually
    > >>> enter
    > >>> such things as a "bootstrap", or loader program.
    > >>
    > >> The way I recall is any computer only understands machine code and
    > >> nothing else. Anything else must be converted to machine code at some
    > >> point.

    >
    > > I know what Charlie is talking about. When he talks about directly
    > > entering machine code, it means typing in the binary codes directly,
    > > even without niceness of an assembler to translate it partially into
    > > English readable. This would be entering numbers into memory
    > > directly, like 0x2C, 0x01, 0xFB, etc., etc.

    >
    > > Yousuf Khan

    >
    > Not so recently, when I worked on what were then called minicomputers,
    > the boot process went like this:
    >
    > Set the front panel data switches to the bits of the first loader
    > instruction (in machine language, of course)
    > Set the front panel address switches to the first location of the
    > loader
    > Enter the data into memory by pressing the Store button.
    >
    > Set the data switches to the second instruction and the address
    > switches to the second address. Press Store.
    >
    > Repeat a dozen or two times to get the entire bootstrap loader into
    > memory
    >
    > Load the main loader paper tape into the paper tape reader
    >
    > Set the address switches to the starting location of the bootstrap
    > loader
    >
    > Press the Go button
    >
    > When the main loader is in, load the paper tape of the program you want
    > to run into the reader
    >
    > Set the starting address to the main loader's first address
    >
    > Press Go
    >
    > That loader will load the final paper tape automatically, thank Silicon
    >
    > Over time the process was streamlined a bit, for example by letting the
    > storage address autoincrement after each Store operation.
    >
    > Maybe you can guess how happy I was when BIOSes started to appear :)


    lol I'm sure you were! The first computer I used had the boot record on a
    single tab card. It used up about 75 of the 80 columns. We whippersnappers
    memorized the sequence and could type it in on the console
    teletypewriter. It was faster than tracking down the boot card sometimes.
     
    Jason, Feb 24, 2014
    #33
  14. In comp.sys.ibm.pc.hardware.chips Jason <> wrote in part:

    > On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
    > <> wrote in article <le7ng5$jfq$

    In comp.sys.ibm.pc.hardware.chips Yousuf Khan <> wrote in part:
    >>> But it goes to show why the age of compilers is well and
    >>> truly upon us, there's no human way to keep track of these
    >>> machine language instructions. Compilers just use a subset,
    >>> and just repeat those instructions over and over again.

    >>
    >> Hate to break it to you, but you are behind the times. Compilers
    >> are passe' -- "modern" systems use interpreters like JIT Java.
    >>
    >> How else do you think Android gets Apps to run on the dogs-breakfast
    >> of ARM processors out there? It is [nearly] all interpreted Java.
    >> So much so that Dell can get 'roid Apps to run on its x86 tablet!
    >> (AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)

    >
    > Compilers are NOT passe'


    I feel quoted-out-of-context. I was replying to Mr Khan (restored above)
    that compiled languages were in turn being supplanted by interpreted.

    > The performance penalty for interpreted languages is a large
    > factor. It's fine in many situations - scripting languages and
    > the like - and the modern processors are fast enough to make the
    > performance hit tolerable. Large-scale applications are still
    > compiled and heavily optimized. Time is money.


    I am well aware of the performance penalty of interpreted languages
    (I once programmed in APL/360) and that compiling has been
    preferable for HPC. However, the differences between compilers
    are reducing to the quality of their libraries, especially SIMD and
    multi-threading. The flexibility of interpreters might have value.


    -- Robert
     
    Robert Redelmeier, Feb 25, 2014
    #34
  15. Yousuf Khan

    Jim Guest

    Jim, Feb 27, 2014
    #35
  16. Yousuf Khan

    Guest

    On Tuesday, February 25, 2014 2:02:02 AM UTC+8, wrote:
    >
    >
    > Time may be money but transistors are free. ;-)



    That is how AMD made a mess of Bulldozer
     
    , Mar 29, 2014
    #36
  17. Yousuf Khan

    Yousuf Khan Guest

    On 28/03/2014 10:50 PM, wrote:
    > On Tuesday, February 25, 2014 2:02:02 AM UTC+8, wrote:
    >>
    >>
    >> Time may be money but transistors are free. ;-)

    >
    >
    > That is how AMD made a mess of Bulldozer


    I think that was for the opposite reason. They were too stingy with
    their transistor budget (shared FPU between cores).

    Yousuf Khan
     
    Yousuf Khan, Apr 2, 2014
    #37
  18. Yousuf Khan

    John Doe Guest

    Robert Redelmeier <redelm ev1.net.invalid> wrote:

    > Jason <jason_warren ieee.org> wrote in part:
    >> "Robert Redelmeier" <redelm ev1.net.invalid> wrote
    >>> Yousuf Khan <bbbl67 spammenot.yahoo.com> wrote:


    >>>> But it goes to show why the age of compilers is well and
    >>>> truly upon us, there's no human way to keep track of these
    >>>> machine language instructions. Compilers just use a subset,
    >>>> and just repeat those instructions over and over again.
    >>>
    >>> Hate to break it to you, but you are behind the times.
    >>> Compilers are passe' -- "modern" systems use interpreters like
    >>> JIT Java.
    >>>
    >>> How else do you think Android gets Apps to run on the
    >>> dogs-breakfast of ARM processors out there? It is [nearly]
    >>> all interpreted Java. So much so that Dell can get 'roid Apps
    >>> to run on its x86 tablet! (AFAIK, iOS still runs compiled Apps
    >>> prob'cuz Apple _hatez_ Oracle)

    >>
    >> Compilers are NOT passe'

    >
    > I feel quoted-out-of-context. I was replying to Mr Khan
    > (restored above) that compiled languages were in turn being
    > supplanted by interpreted.
    >
    >> The performance penalty for interpreted languages is a large
    >> factor. It's fine in many situations - scripting languages and
    >> the like - and the modern processors are fast enough to make
    >> the performance hit tolerable. Large-scale applications are
    >> still compiled and heavily optimized. Time is money.

    >
    > I am well aware of the perfomance penalty of interpreted
    > languages (I once programmed in APL/360) and that compiling has
    > been preferable for HPC. However, the differences between
    > compilers are reducing to the quality of their libraries,
    > especially SIMD and multi-threading. The flexibility of
    > interpreters might have value.


    Not talking about commercial stuff, but...

    I use speech and VC++. Speech activated scripting involves (what I
    think is) an interpreted scripting language (Vocola) hooked into
    NaturallySpeaking (DNS) speech recognition. Additionally, I'm
    using a Windows system hook written in C++ that is compiled. The
    systemwide hook is for a few numeric keypad key activated short
    SendInput() scripts. The much more involved voice-activated
    scripting is for a large number of longer scripts. It's a great
    combination for making Windows dance. I would say it's cumbersome,
    but I have the editors working efficiently here. Currently using
    that to play Age of Empires 2 HD. Speech is on the one extreme. I
    suppose assembly language would be on the other, but C++ is at
    least compiled.

    That has nothing to do with any mass of programmers, but it's
    useful here and is a very wide range mess of programming for one
    task.
     
    John Doe, Apr 25, 2014
    #38
  19. On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan
    <> wrote:

    > On 20/02/2014 11:21 PM, Paul wrote:
    >> At one time, a compiler would issue instructions
    >> from about 30% of the instruction set. It would mean
    >> a compiled program would never emit the other 70% of
    >> them. But a person writing assembler code, would
    >> have access to all of them, at least, as long as
    >> the mnemonic existed in the assembler.

    >
    > I think the original idea of the x86's large instruction count was to
    > make an assembly language as full-featured as a high-level language. x86
    > even had string-handling instructions!
    >
    > I remember I designed an early version of the CPUID program that ran
    > under DOS. The whole executable including its *.exe headers was
    > something like 40 bytes! Got it down to under 20 bytes when I converted
    > it to *.com (which had no headers)! Most of the space was used to store
    > strings, like "This processor is a:" followed by generated strings like
    > 386SX or 486DX, etc. :)
    >
    > You could make some really tiny assembler programs on x86. Of course,
    > compiled programs ignored most of these useful high-level instructions
    > and stuck with simple instructions to do everything.
    >
    > Yousuf Khan

    Did you cater for all the early CPUs?

    ;This code assembles under nasm as 105 bytes of machine code, and will
    ;return the following values in ax:
    ;
    ;AX CPU
    ;0 8088 (NMOS)
    ;1 8086 (NMOS)
    ;2 8088 (CMOS)
    ;3 8086 (CMOS)
    ;4 NEC V20
    ;5 NEC V30
    ;6 80188
    ;7 80186
    ;8 286
    ;0Ah 386 and higher

    code segment
    assume cs:code,ds:code
    .radix 16
    org 100

    mov ax,1
    mov cx,32
    shl ax,cl
    jnz x186

    ;pusha
    db 060h
    stc
    jc nec

    mov ax,cs
    add ax,01000h
    mov es,ax
    xor si,si
    mov di,100h
    mov cx,08000h
    ;rep es movsb
    rep es:movsb
    or cx,cx
    jz cmos
    nmos:
    mov ax,0
    jmp x8_16
    cmos:
    mov ax,2
    jmp x8_16
    nec:
    mov ax,4
    jmp x8_16
    x186:
    push sp
    pop ax
    cmp ax,sp
    jz x286

    mov ax,6
    x8_16:
    xor bx,bx
    mov byte [a1],043h
    a1 label byte
    nop
    or bx,bx
    jnz t1
    or bx,1
    t1:
    jmp cpuid_end
    x286:
    pushf
    pop ax
    or ah,070h
    push ax
    popf
    pushf
    pop ax
    and ax,07000h
    jnz x386

    mov ax,8
    jmp cpuid_end
    x386:
    mov ax,0Ah

    cpuid_end:


    code ends

    end


    --
    It's a money /life balance.
     
    Stanley Daniel de Liver, Apr 25, 2014
    #39
  20. Yousuf Khan

    Yousuf Khan Guest

    On 25/04/2014 5:54 AM, Stanley Daniel de Liver wrote:
    > On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan
    > <> wrote:
    >> I remember I designed an early version of the CPUID program that ran
    >> under DOS. The whole executable including its *.exe headers was
    >> something like 40 bytes! Got it down to under 20 bytes when I
    >> converted it to *.com (which had no headers)! Most of the space was
    >> used to store strings, like "This processor is a:" followed by
    >> generated strings like 386SX or 486DX, etc. :)
    >>
    >> You could make some really tiny assembler programs on x86. Of course,
    >> compiled programs ignored most of these useful high-level instructions
    >> and stuck with simple instructions to do everything.
    >>
    >> Yousuf Khan

    > Did you cater for all the early cpus?
    >
    > ;This code assembles under nasm as 105 bytes of machine code, and will
    > ;return the following values in ax:
    > ;
    > ;AX CPU
    > ;0 8088 (NMOS)
    > ;1 8086 (NMOS)
    > ;2 8088 (CMOS)
    > ;3 8086 (CMOS)
    > ;4 NEC V20
    > ;5 NEC V30
    > ;6 80188
    > ;7 80186
    > ;8 286
    > ;0Ah 386 and higher


    I don't know if I still have my old program anymore, but I do remember
    at that time it could distinguish 386SX from DX and 486SX from DX as well.

    Yousuf Khan
     
    Yousuf Khan, Apr 26, 2014
    #40
