
Re: Floating point format for Intel math coprocessors

Discussion in 'Embedded' started by Jonathan Kirwan, Jun 27, 2003.

  1. On Fri, 27 Jun 2003 17:58:50 GMT, Jonathan Kirwan
    <> wrote:

    >Hmm. Are you *THE* Jack Crenshaw? The "Let's Build A Compiler"
    >and "Math Toolkit for Real-Time Programming" Jack _W._ Crenshaw?


    Actually, I think I've answered my own question, here. You
    really *are* that Jack.

    What clues me in is your use of "phantom" here:

    > mantissa -- 23 bits + phantom bit in bit 24.


    The same term used on page 50 in "Math toolkit..."

    In my own experience, even that predating the Intel 8087 or the
    IEEE standardization, it was called a "hidden bit" notation. I
    don't know where "phantom" comes from, as my own reading managed
    to completely miss it.

    So, a hearty "Hello" from me!

    Jon
     
    Jonathan Kirwan, Jun 27, 2003
    #1

  2. Jonathan Kirwan wrote:
    >
    > On Fri, 27 Jun 2003 17:58:50 GMT, Jonathan Kirwan
    > <> wrote:
    >
    > >Hmm. Are you *THE* Jack Crenshaw? The "Let's Build A Compiler"
    > >and "Math Toolkit for Real-Time Programming" Jack _W._ Crenshaw?

    >
    > Actually, I think I've answered my own question, here. You
    > really *are* that Jack.


    Grin! Yep, I really am.

    > What clues me in is your use of "phantom" here:
    >
    > > mantissa -- 23 bits + phantom bit in bit 24.

    >
    > The same term used on page 50 in "Math toolkit..."
    >
    > In my own experience, even that predating the Intel 8087 or the
    > IEEE standardization, it was called a "hidden bit" notation. I
    > don't know where "phantom" comes from, as my own reading managed
    > to completely miss it.
    >
    > So, a hearty "Hello" from me!


    Hello. Re the term, phantom bit: I've been using that term since I
    can remember -- and that's a looooonnnngggg time. Then again, I
    still sometimes catch myself saying "cycles" or "kilocycles," or
    "B+". I first heard the term in 1975. Not sure when it became
    Politically Incorrect. Maybe someone objected to the implied occult
    nature of the term, "phantom"? Who knows? But as far as I'm
    concerned, the term "hidden bit" is a Johnny-come-lately on the
    scene.

    Back to the point. I want to thank you and everyone else who
    responded (except the guy who said "stop it") for helping to
    straighten out my warped brain.

    It's nice that you have my book. Thanks for buying it. As a matter
    of fact, I first ran across this "peculiarity" three years ago, when
    I was writing it. I needed to twiddle the components of the
    floating-point number -- separate the exponent from mantissa -- to
    write the fp_hack structure for the square root algorithm. I looked
    at the formats for float, double, and long double, and found the
    second two formats easy enough to grok. But when I looked at the
    format for floats, I sort of went, "Gag!" and quickly decided to use
    doubles for the book.

    It's funny how an idea, once formed, can persist. Lo those many
    years ago, I didn't have a lot of time to think about it -- had to
    get the chapter done. I just managed to convince myself that the
    format used this peculiar convention, what with base-4 exponents,
    and all. I had no more need of it at the time, so never went back
    and revisited the impression. It's persisted ever since.

    All of the folks who responded are absolutely right. Once I got my
    head screwed on straight, it was quite obvious that the format has
    no mysteries. It is indeed the IEEE 754 format, plain and simple.
    The thing that had me confused was the exponents: 3f8, 400, 408,
    etc. With one bit for the sign and eight for the exponent, it's
    perfectly obvious that the exponent has to bleed down one bit into
    the next lower hex digit. That's what I was seeing, but somehow in
    my haste, I didn't recognize it as such, and formed this "theory"
    that it was using a base-4 exponent.

    Wanna hear the funny part? After tinkering with it for a while, I
    worked out the rules for my imagined format, that worked just fine.
    At work, I've got a Mathcad file that takes the hex number, shifts
    it two bits at a time, diddles the "phantom" bit, and produces the
    right results. I can go from integer to float and back nicely,
    using this cockamamie scheme.

    Needless to say, the conversion is a whole lot easier if one uses
    the real format! My Mathcad file just got a lot shorter.

    Thanks again to everyone who responded, and my apologies for bothering
    y'all with this imaginary problem.

    Jack
     
    Jack Crenshaw, Jul 1, 2003
    #2

  3. On Tue, 01 Jul 2003 13:03:19 GMT, Jack Crenshaw
    <> wrote:

    >Jonathan Kirwan wrote:
    >>
    >> On Fri, 27 Jun 2003 17:58:50 GMT, Jonathan Kirwan
    >> <> wrote:
    >>
    >> >Hmm. Are you *THE* Jack Crenshaw? The "Let's Build A Compiler"
    >> >and "Math Toolkit for Real-Time Programming" Jack _W._ Crenshaw?

    >>
    >> Actually, I think I've answered my own question, here. You
    >> really *are* that Jack.

    >
    >Grin! Yep, I really am.


    Hehe. Nice to know one of my antennas is still sharp.

    >> What clues me in is your use of "phantom" here:
    >>
    >> > mantissa -- 23 bits + phantom bit in bit 24.

    >>
    >> The same term used on page 50 in "Math toolkit..."
    >>
    >> In my own experience, even that predating the Intel 8087 or the
    >> IEEE standardization, it was called a "hidden bit" notation. I
    >> don't know where "phantom" comes from, as my own reading managed
    >> to completely miss it.
    >>
    >> So, a hearty "Hello" from me!

    >
    >Hello. Re the term, phantom bit: I've been using that term since I can
    >remember -- and that's a looooonnnngggg time.


    I think my first exposure to hidden-bit as a term dates to about
    1974. But I could be off, by a year, either way.

    >Then again, I still sometimes catch myself saying "cycles" or
    >"kilocycles," or "B+".


    Hehe. Now those terms aren't so "hidden" to me. I learned my
    early electronics on tube design manuals. One sticking point I
    remember bugging me for a long time was exactly, "How do they
    size those darned grid leak resistors?" I just couldn't figure
    out where they got the current from which to figure their
    magnitude. So even B+ is old hat to me.

    >I first heard the term in 1975.


    Well, that's about the time for "hidden bit," too. Probably, at
    that time the term was still in a state of flux. I just got my
    hands on different docs, I imagine.

    >Not sure when it became Politically Incorrect.


    Oh, it's fine to me, anyway. I knew what was meant the moment I
    saw the term. It's pretty clear. I just think time has settled
    more on one term than another.

    But to take your allusion and run with it a bit... I don't know of
    anyone who was part of some conspiracy to set the term -- in any
    case, setting terms is usually propagandistic, designed to set
    agendas in people's minds, and here is a case where everyone would
    want the same agenda.

    >Maybe someone objected to the implied occult nature of the term,
    >"phantom"?


    Oh, geez. I've never known a geek to care about such things. I
    suppose they must exist, somewhere. I've just never met one
    willing to let me know they thought like that. But that's an
    interesting thought. It would fit the weird times in the US we
    live in, with about 30% aligning themselves as fundamentalists.

    Nah... it just can't be.

    >Who knows?


    I really think it was more the IEEE settling on a term. But
    then, this isn't my area so I could be dead wrong about that --
    I'm only guessing.

    >But as far as I'm concerned, the term "hidden bit" is a
    >Johnny-come-lately on the scene.


    Hehe. I've no problem if that's true.

    >Back to the point. I want to thank you and everyone else who responded
    >(except the guy who said "stop it") for helping to straighten out my
    >warped brain.


    No problem. It was really pretty easy to recall the details.
    Like learning to ride a bicycle, I suppose.

    >It's nice that you have my book. Thanks for buying it.


    Oh, there was no question. I've a kindred interest in physics
    and engineering, I imagine. I'm currently struggling through
    Robert Gilmore's books, one on Lie groups and algebras and the
    other on catastrophe theory for engineers as well as polytropes,
    packing spheres, and other delights. There were some nice
    insights in your book, which helped wind me on just enough of a
    different path to stretch me without losing me.

    By the way!! I completely agree with you about MathCad! What a
    piece of *&!@&$^%$^ it is, now. I went through several
    iterations, loved at first the slant or approach in using it,
    but absolutely hate it now because, frankly, I can't run it for
    more than an hour before I don't have any memory left and it
    crashes out. Reboot time every hour is not my idea of a good
    thing. And that's only if I don't type and change things too
    fast. When I work quick on it, I can go through what's left
    with Win98 on a 256Mb RAM machine in a half hour! No help from
    them and two versions later I've simply stopped using it. I
    don't even want to hear from them, again. Hopefully, I'll be
    able to find an old version somewhere. For now, I'm doing
    without.

    >As a matter of fact, I first ran across this
    >"peculiarity" three years ago, when I was writing it. I needed to
    >twiddle the components of the
    >floating-point number -- separate the exponent from mantissa -- to write
    >the fp_hack structure for
    >the square root algorithm. I looked at the formats for float, double,
    >and long double, and found the
    >second two formats easy enough to grok. But when I looked at the format
    >for floats, I sort of went,
    >"Gag!" and quickly decided to use doubles for the book.


    Yes. But that's fine, I suspect. I've taught undergrad classes
    and most folks just go "barf" when confronted with learning
    floating point. In class evaluations, I think having to learn
    floating point was the bigger source of complaints about the
    classes. You probably addressed everything anyone "normal"
    could reasonably care about and more.

    >It's funny how an idea, once formed, can persist. Lo those many years
    >ago, I didn't have a lot of time
    >to think about it -- had to get the chapter done. I just managed to
    >convince myself that the format
    >used this peculiar convention, what with base-4 exponents, and all. I
    >had no more need of it at the time,
    >so never went back and revisited the impression. It's persisted ever
    >since.


    No problem.

    >All of the folks who responded are absolutely right. Once I got my head
    >screwed on straight, it was
    >quite obvious that the format has no mysteries. It is indeed the IEEE
    >754 format, plain and simple.
    >The thing that had me confused was the exponents: 3f8, 400, 408, etc.
    >With one bit for the sign and
    >eight for the exponent, it's perfectly obvious that the exponent has to
    >bleed down one bit into the next
    >lower hex digit. That's what I was seeing, but somehow in my haste, I
    >didn't recognize it as such, and
    >formed this "theory" that it was using a base-4 exponent.


    In any case, it's clear that your imagination is able to work
    overtime, here! Maybe that's a good thing.

    >Wanna hear the funny part? After tinkering with it for a while, I
    >worked out the rules for my imagined format, that worked just fine.
    >At work, I've got a Mathcad file that takes the hex number, shifts
    >it two bits at a time, diddles the "phantom" bit, and produces the
    >right results. I can go from integer to float and back nicely,
    >using this cockamamie scheme.


    Hmm. Then you should be able to construct a function to map
    between these, proving the consistent results. I've a hard time
    believing there is one. But who knows? Maybe this is the
    beginning of a new facet of mathematics, like the investigation
    into fractals or something!

    >Needless to say, the conversion is a whole lot easier if one uses the
    >real format! My Mathcad file just
    >got a lot shorter.


    Hehe!! When you get things right, they *do* tend to become a
    little more prosaic, too. Good thing for those of us with
    feeble minds, too.

    >Thanks again to everyone who responded, and my apologies for bothering
    >y'all with this imaginary problem.


    hehe. Best of luck. In the process, I did notice that you are
    entertaining thoughts on a revised "Let's build a compiler."
    Best of luck on that and if you feel the desire for unloading
    some of the work, I might could help a little. I've written a
    toy C compiler before, an assembler, several linkers, and a
    not-so-toy BASIC interpreter. I can, at least, be a little bit
    dangerous. Might be able to shoulder something, if it helps.

    Jon
     
    Jonathan Kirwan, Jul 1, 2003
    #3
  4. Bob Guest

    Hi Jack!

    I've always used the term "implied bit". I think I saw it in the 80186
    programmer's reference section on using an 80187 coprocessor.

    BTW: thanks for the articles (and the Math toolkit book) on
    interpolating functions - I use that stuff over and over again. In
    fact, I'm simulating an antilog interpolation routine (4 terms by 4
    indices) right now that will eventually run on a PIC (no hardware
    multiply; not even an add-with-carry instruction). Those forward and
    backward difference operators make it all pretty easy! With no
    hardware support from the PIC, it will end up taking nearly 10 ms
    from 24-bit ADC to final LED output, but it will be better than
    14-bit accurate over the 23-bit output dynamic range.
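
    For readers new to the technique, here is a minimal C sketch of
    4-term Newton forward-difference interpolation, the general kind of
    routine being described. It is an illustration, not Bob's code:
    double math is used for clarity, where a PIC version would use
    scaled integers and shifts.

        /* Cubic (4-term) Newton forward-difference interpolation.
           y[] holds four consecutive, equally spaced table entries;
           s in [0,1) is the fractional position past y[0], in units
           of the table spacing. */
        double interp4(const double y[4], double s)
        {
            double d1[3], d2[2], d3;
            for (int i = 0; i < 3; i++) d1[i] = y[i+1] - y[i];   /* 1st differences */
            for (int i = 0; i < 2; i++) d2[i] = d1[i+1] - d1[i]; /* 2nd differences */
            d3 = d2[1] - d2[0];                                  /* 3rd difference  */

            return y[0]
                 + s * d1[0]
                 + s * (s - 1.0) / 2.0 * d2[0]
                 + s * (s - 1.0) * (s - 2.0) / 6.0 * d3;
        }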

    all the best,
    Bob
     
    Bob, Jul 2, 2003
    #4
  5. Yousuf Khan Guest

    "Jack Crenshaw" <> wrote in message
    news:...
    > Back to the point. I want to thank you and everyone else who responded
    > (except the guy who said
    > "stop it") for helping to straighten out my warped brain.


    Yeah, sometimes the ones who have the most details about a subject
    fail to see the overall big picture (i.e. the old forest for the
    trees argument). I've sometimes had revelations about things when I
    go to teach someone something that I thought I already knew, but now
    I understand it better. :)

    Yousuf Khan
     
    Yousuf Khan, Jul 2, 2003
    #5
  6. On Wed, 02 Jul 2003 03:39:51 GMT, "Yousuf Khan"
    <> wrote:

    >"Jack Crenshaw" <> wrote in message
    >news:...
    >> Back to the point. I want to thank you and everyone else who responded
    >> (except the guy who said
    >> "stop it") for helping to straighten out my warped brain.

    >
    >Yeah, sometimes the ones who have the most details about a subject
    >fail to see the overall big picture (i.e. the old forest for the
    >trees argument). I've sometimes had revelations about things when I
    >go to teach someone something that I thought I already knew, but now
    >I understand it better. :)


    Teaching *is* one of the better ways to learn something.

    Jon
     
    Jonathan Kirwan, Jul 2, 2003
    #6
  7. On Fri, 27 Jun 2003 19:14:09 GMT, Jonathan Kirwan
    <> wrote:


    >In my own experience, even that predating the Intel 8087 or the
    >IEEE standardization, it was called a "hidden bit" notation. I
    >don't know where "phantom" comes from, as my own reading managed
    >to completely miss it.


    I have never heard about phantom bits before, but the PDP-11
    processor handbooks talked about hidden bit normalisation when
    describing the floating point processor (FPP) instructions in the
    mid-70's. It might even be older, since the same format was used on
    the FIS instruction set extensions on some early PDP-11s.

    Paul
     
    Paul Keinanen, Jul 2, 2003
    #7
  8. Bob wrote:
    >
    > Hi Jack!
    >
    > I've always used the term "implied bit". I think I saw it in the 80186
    > programmer's reference section on using an 80187 coprocessor.


    I think I got the term "phantom bit" from Intel's f.p. library for
    the 8080, ca. 1975. Then again, I've been doing my own ASCII-binary
    and binary-ASCII conversions since way before that, on big IBM iron.
    We pretty much had to, since the old Fortran I/O routines were so
    incredibly confining. It had to be around 1960-62. But the old 7094
    format didn't use the "phantom" bit, as I recall.

    You just haven't lived until you've twiddled F.P. bits in Fortran <g>.

    > BTW: thanks for the articles (and the Math toolkit book) on
    > interpolating functions - I use that stuff over and over again. In
    > fact, I'm simulating an antilog interpolation routine (4 terms by 4
    > indices) right now that will eventually run on a PIC (no hardware
    > multiply; not even an add-with-carry instruction). Those forward and
    > backward difference operators make it all pretty easy! With no
    > hardware support from the PIC, it will end up taking nearly 10 ms
    > from 24-bit ADC to final LED output, but it will be better than
    > 14-bit accurate over the 23-bit output dynamic range.


    Sounds neat. I'm glad I could help.

    FWIW, there's a fellow in my office who has a Friden calculator
    sitting on his credenza. He's restoring it.

    Jack
     
    Jack Crenshaw, Jul 2, 2003
    #8
  9. Jonathan Kirwan wrote:
    >
    > On Tue, 01 Jul 2003 13:03:19 GMT, Jack Crenshaw
    > <> wrote:

    <snip>
    > >Hello. Re the term, phantom bit: I've been using that term since I can
    > >remember -- and that's a looooonnnngggg time.

    >
    > I think my first exposure to hidden-bit as a term dates to about
    > 1974. But I could be off, by a year, either way.
    >
    > >Then again, I still sometimes catch myself saying "cycles" or
    > >"kilocycles," or "B+".

    >
    > Hehe. Now those terms aren't so "hidden" to me. I learned my
    > early electronics on tube design manuals. One sticking point I
    > remember bugging me for a long time was exactly, "How do they
    > size those darned grid leak resistors?" I just couldn't figure
    > out where they got the current from which to figure their
    > magnitude. So even B+ is old hat to me.


    Then you definitely ain't one of the young punks, are you? <g>
    Re grid leak: I think it must be pretty much trial and error. No
    doubt _SOMEONE_ has a theory for it, but I would think the grid
    current must vary a lot from tube to tube.

    FWIW, I sit here surrounded by old Heathkit tube electronics. I
    collect them. Once I started buying them, I realized I couldn't just
    drive down to the local drugstore and test the tubes. Had to buy a
    tube tester, VTVM, and all the other accoutrements to be able to
    work on them.

    > >Maybe someone objected to the implied occult nature of the term,
    > >"phantom"?

    >
    > Oh, geez. I've never known a geek to care about such things. I
    > suppose they must exist, somewhere. I've just never met one
    > willing to let me know they thought like that. But that's an
    > interesting thought. It would fit the weird times in the US we
    > live in, with about 30% aligning themselves as fundamentalists.
    >
    > Nah... it just can't be.


    I agree; I was mostly kidding about the PC aspects. One never knows,
    tho. FYI, I have been known to be called a "fundie" on talk.origins
    and others of those insightful and respectful sites. I'm not, but
    they are not noted for their discernment or subtleties of
    observation.

    One of my favorite atheists is Stan Kelly-Bootle of "Devil's DP
    Dictionary" fame. Among others of his many myriad talents, he's one
    of the world's leading experts on matters religious. He and I have
    had some most stimulating and rewarding discussions, on the rare
    occasions when we get together. The trick is a little thing called
    mutual respect. Most modern denizens of the 'net don't get the
    notion of respecting a person's opinion, even while disagreeing
    with it.

    > Oh, there was no question. I've a kindred interest in physics
    > and engineering, I imagine. I'm currently struggling through
    > Robert Gilmore's books, one on Lie groups and algebras and the
    > other on catastrophe theory for engineers as well as polytropes,
    > packing spheres, and other delights. There were some nice
    > insights in your book, which helped wind me on just enough of a
    > different path to stretch me without losing me.


    Glad to help.

    > By the way!! I completely agree with you about MathCad! What a
    > piece of *&!@&$^%$^ it is, now. I went through several
    > iterations, loved at first the slant or approach in using it,
    > but absolutely hate it now because, frankly, I can't run it for
    > more than an hour before I don't have any memory left and it
    > crashes out.


    Don't get me started on Mathcad!

    As some old-time readers might know, I used to recommend Mathcad to
    everyone. In my conference papers, I'd say, "If you are doing
    engineering and don't have Mathcad, you're limiting your career."
    After Version 7 came out, I had to say, "Don't buy Mathcad at any
    price; it's broken." Here at home I've stuck at Version 6. Even 6
    has its problems -- 5 was more stable -- but it's the oldest I could
    get (from RecycledSoftware, a great source). The main reason I
    talked my company into getting Matlab was as a refuge from Mathcad.

    Having said that, truth in advertising also requires me to say that
    I use it almost every day. The reason is simple: It's the only game
    in town. It's the only Windows program that lets you write both math
    equations and text, lets you generate graphics, and also does
    symbolic algebra, in a WYSIWYG interface. Pity it's so unstable.

    Come to that, my relationship with Mathcad is very consistent, and
    much the same as my relationship with Microsoft Office and Windows.
    I use it every day, and curse it every day. I've learned to save
    early and often. Even that doesn't always help, but it's the best
    policy. I had one case where saving broke the file, but the Mathcad
    support people (who can be really nice, sometimes) managed to
    restore it.

    I stay in pretty constant contact with the Mathcad people. As near
    as I can tell, they are trying hard to get the thing under control.
    Their goal is to get the program to such a point that it's
    reasonable to use as an Enterprise-level utility, and a means of
    sharing data across organizations. I'm also on their power users'
    group, and theoretically supposed to be telling them where things
    aren't working.

    Even so, when I report problems, which is often, the two most
    common responses I get are:

    1) It's not a bug, it's a feature, and
    2) Sorry, we can't reproduce that problem.

    I think Mathsoft went through a period where all the original
    authors were replaced by maintenance programmers -- programmers with
    more confidence than ability. They seemingly had no qualms about
    changing things around and redefining user interfaces, with little
    regard for what they might break. Mathsoft is trying to turn things
    around now, but it's not going to be easy. IMO.

    > Reboot time every hour is not my idea of a good
    > thing. And that's only if I don't type and change things too
    > fast. When I work quick on it, I can go through what's left
    > with Win98 on a 256Mb RAM machine in a half hour! No help from
    > them and two versions later I've simply stopped using it. I
    > don't even want to hear from them, again. Hopefully, I'll be
    > able to find an old version somewhere. For now, I'm doing
    > without.


    See RecycledSoftware as mentioned above. BTW, have you _TOLD_
    Mathsoft how you feel? Sometimes I think I'm the only one
    complaining.

    I'm using Version 11 with all the upgrades, and it's still
    thoroughly broken. Much less stable than versions 7, 8, etc.

    > Yes. But that's fine, I suspect. I've taught undergrad classes
    > and most folks just go "barf" when confronted with learning
    > floating point. In class evaluations, I think having to learn
    > floating point was the bigger source of complaints about the
    > classes. You probably addressed everything anyone "normal"
    > could reasonably care about and more.


    F.P. is going to be in my next book. I have a format called "short
    float" which uses a 24-bit form factor; 16-bit mantissa. I first
    used it back in '76 for an embedded 8080 problem (Kalman filter on
    an 8080!). Used it again, 20 years later, on a '486. Needless to
    say, it's not very accurate, but 16 bits is about all we can get
    out of an A/D converter anyway, so it's reasonable for embedded use.
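
    The thread doesn't give the short float's exact bit layout, so the
    C sketch below assumes one plausible arrangement purely for
    illustration -- 1 sign bit, a 7-bit exponent with bias 63, and a
    16-bit mantissa with a hidden leading 1. Jack's actual format may
    well differ, and no rounding or overflow handling is attempted:

        #include <stdint.h>
        #include <math.h>

        typedef uint32_t sfloat24;   /* only the low 24 bits are used */

        /* Pack a double into the hypothetical 24-bit short float.
           Zero packs to all-zero bits; the mantissa is truncated. */
        sfloat24 pack24(double x)
        {
            if (x == 0.0) return 0;
            uint32_t sign = (x < 0.0);
            int e;
            double f = frexp(fabs(x), &e);           /* f in [0.5, 1) */
            uint32_t man = (uint32_t)(f * (1UL << 17)) & 0xFFFF; /* drop hidden bit */
            uint32_t exp = (uint32_t)(e + 63) & 0x7F;
            return (sign << 23) | (exp << 16) | man;
        }

        /* Unpack back to a double, restoring the hidden bit. */
        double unpack24(sfloat24 s)
        {
            if (s == 0) return 0.0;
            uint32_t man = s & 0xFFFF;
            int      exp = (int)((s >> 16) & 0x7F) - 63;
            double   f   = 0.5 + (double)man / (1UL << 17);
            double   x   = ldexp(f, exp);
            return (s >> 23) ? -x : x;
        }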

    > >Wanna hear the funny part? After tinkering with it for a while, I
    > >worked out the rules for my imagined format, that worked just fine.
    > >At work, I've got a Mathcad file that takes the hex number, shifts
    > >it two bits at a time, diddles the "phantom" bit, and produces the
    > >right results. I can go from integer to float and back nicely,
    > >using this cockamamie scheme.

    >
    > Hmm. Then you should be able to construct a function to map
    > between these, proving the consistent results. I've a hard time
    > believing there is one. But who knows? Maybe this is the
    > beginning of a new facet of mathematics, like the investigation
    > into fractals or something!


    Grin! I don't know about that, but there is indeed a connection. I
    suppose that, with enough effort, I could work out a scheme for
    using base 16, and still get the same bit patterns. Epicycles upon
    epicycles, don'cha know.

    > hehe. Best of luck. In the process, I did notice that you are
    > entertaining thoughts on a revised "Let's build a compiler."
    > Best of luck on that and if you feel the desire for unloading
    > some of the work, I might could help a little. I've written a
    > toy C compiler before, an assembler, several linkers, and a
    > not-so-toy BASIC interpreter. I can, at least, be a little bit
    > dangerous. Might be able to shoulder something, if it helps.


    Thanks for the offer. I'm thinking that perhaps an open-source sort
    of approach might be useful. Several people have offered to help. My
    intent is to use Delphi, and there are lots of folks out there who
    know it better than I. Of course, I'll still have to do the prose,
    but help with the software is always welcome.

    Jack
     
    Jack Crenshaw, Jul 2, 2003
    #9
  10. Jack Crenshaw <> writes:

    > You just haven't lived until you've twiddled F.P. bits in Fortran <g>.


    Or any other HLL, for that matter!

    > FWIW, there's a fellow in my office who has a Friden calculator sitting
    > on his credenza. He's restoring it.


    He has a sliderule for backup?


    Speaking of hardware math mysteries, Dr. Crenshaw, et al,
    does anyone know how the (very few) computers that have
    the capability perform BCD multiply and divide? Surely
    there's a better way than repeated adding/subtracting n
    times per multiplier/divisor digit. Converting arbitrary
    precision BCD to binary, performing the operation, and
    then converting back to BCD wouldn't seem to be the way
    to go (in hardware).
     
    Everett M. Greene, Jul 2, 2003
    #10
  11. On Wed, 02 Jul 2003 09:11:38 +0300, Paul Keinanen
    <> wrote:

    >On Fri, 27 Jun 2003 19:14:09 GMT, Jonathan Kirwan
    ><> wrote:
    >
    >>In my own experience, even that predating the Intel 8087 or the
    >>IEEE standardization, it was called a "hidden bit" notation. I
    >>don't know where "phantom" comes from, as my own reading managed
    >>to completely miss it.

    >
    >I have never heard about phantom bits before, but the PDP-11
    >processor handbooks talked about hidden bit normalisation when
    >describing the floating point processor (FPP) instructions in the
    >mid-70's. It might even be older, since the same format was used on
    >the FIS instruction set extensions on some early PDP-11s.


    Thanks for that. I think I still have a PDP-11 book or two
    around here... yes! There it is. 1976, PDP 11/70 Processor
    Handbook, and yes... they talk about the hidden bit.

    Yup, I was working on PDP-11's (and PDP-8's as well) from about
    1972, on. PDP-8's first, though. So I'm pretty sure that's
    where I got it and it probably *was* circa 1974, my guess.

    Damn, my memory is good in spots!

    Thanks,
    Jon
     
    Jonathan Kirwan, Jul 2, 2003
    #11
  12. On Wed, 2 Jul 2003 07:48:49 PST, (Everett M.
    Greene) wrote:

    >Jack Crenshaw <> writes:
    >
    >> You just haven't lived until you've twiddled F.P. bits in Fortran <g>.


    What is the problem ?

    IIRC the IAND and IOR are standard functions in FORTRAN and in many
    implementations .AND. and .OR. operators between integers actually
    produced bitwise and and bitwise or results.

    >Or any other HLL, for that matter!


    Doing bit manipulation in COBOL is a bit messy :).



    >Speaking of hardware math mysteries, Dr. Crenshaw, et al,
    >does anyone know how the (very few) computers that have
    >the capability perform BCD multiply and divide? Surely
    >there's a better way than repeated adding/subtracting n
    >times per multiplier/divisor digit. Converting arbitrary
    >precision BCD to binary, performing the operation, and
    >then converting back to BCD wouldn't seem to be the way
    >to go (in hardware).


    If you have BCD hardware, why on earth should the values be
    converted to binary for multiply or divide? Do it just as if you
    were doing it on paper. To make a BCD computer (or a four-function
    calculator) you just need a BCD adder and some circuitry to make the
    9's complement of a value. Some hardware doing BCD x BCD producing a
    two-digit BCD value (or a table search) will speed up some
    operations quite a lot.
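
    As a concrete sketch of that paper-and-pencil method (illustrative
    C, not from the thread): one table lookup per digit pair, then
    decimal carry propagation:

        #include <stdint.h>

        /* a*b for single digits, re-encoded as packed BCD: e.g.
           7*8 = 56 is stored as 0x56.  In hardware this would be
           the ROM table Paul mentions.  Call init_tab() once. */
        static uint8_t mul_tab[10][10];

        static void init_tab(void)
        {
            for (int a = 0; a < 10; a++)
                for (int b = 0; b < 10; b++)
                    mul_tab[a][b] = (uint8_t)(((a * b / 10) << 4) | (a * b % 10));
        }

        /* Multiply two BCD digit arrays (least significant digit
           first), exactly as on paper.  r must hold na+nb digits. */
        static void bcd_mul(const uint8_t *a, int na,
                            const uint8_t *b, int nb, uint8_t *r)
        {
            for (int i = 0; i < na + nb; i++) r[i] = 0;
            for (int i = 0; i < na; i++) {
                for (int j = 0; j < nb; j++) {
                    uint8_t p = mul_tab[a[i]][b[j]];
                    r[i + j]     += p & 0x0F;   /* low digit of partial product  */
                    r[i + j + 1] += p >> 4;     /* high digit of partial product */
                }
                for (int k = 0; k < na + nb - 1; k++) {  /* decimal carries */
                    r[k + 1] += r[k] / 10;
                    r[k]     %= 10;
                }
            }
        }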

    Paul
     
    Paul Keinanen, Jul 2, 2003
    #12
  13. Paul Keinanen wrote:
    >
    > On Wed, 2 Jul 2003 07:48:49 PST, (Everett M.
    > Greene) wrote:
    >
    > >Jack Crenshaw <> writes:
    > >
    > >> You just haven't lived until you've twiddled F.P. bits in Fortran <g>.

    >
    > What is the problem ?
    >
    > IIRC the IAND and IOR are standard functions in FORTRAN and in many
    > implementations .AND. and .OR. operators between integers actually
    > produced bitwise and and bitwise or results.


    Hmphh! In Fortran II, we were lucky to get add and subtract. No
    such thing as .AND. and .OR. there.

    Jack
     
    Jack Crenshaw, Jul 3, 2003
    #13
  14. Morris Dovey Guest

    Jonathan Kirwan wrote:

    > Oh, cripes. It's that goto process which reminded me of Fortran
    > II days on the IBM 1130. Now some more brain cells have been
    > restored. Bad news, as I have now probably forgotten yet
    > another important something. ;)


    That's what happens when you punch "//JOB T"
    --
    Morris Dovey
    West Des Moines, Iowa USA
    C links at http://www.iedu.com/c
     
    Morris Dovey, Jul 4, 2003
    #14
  15. Jack Crenshaw <> writes:
    > "Everett M. Greene" wrote:
    > > Jack Crenshaw <> writes:
    > >
    > > > You just haven't lived until you've twiddled F.P. bits in Fortran <g>.

    > >
    > > Or any other HLL, for that matter!
    > >
    > > > FWIW, there's a fellow in my office who has a Friden calculator sitting
    > > > on his credenza. He's restoring it.

    > >
    > > He has a sliderule for backup?

    >
    > Funny you should mention that. He has a whole _WALL_ covered with slide
    > rules, of all shapes, sizes, and descriptions. He has long ones, short
    > ones, circular ones, spiral ones. He has Pascal-based adding machines.
    > He has others of which I don't know the origin. He has Curta "pepper
    > grinders" like we used to use in sports car rallies. The guy's a collector.
    >
    > But I presume you mean that the Friden won't be reliable. You may be
    > right, since he's the one doing the restoring. But IMX, I have never,
    > ever seen one fail.


    I was just making a light-hearted comment. I just guessed that
    someone who has an interest in older technology would have some
    even older things. Does he have an abacus or two -- just in case?

    > > Speaking of hardware math mysteries, Dr. Crenshaw, et al,
    > > does anyone know how the (very few) computers that have
    > > the capability perform BCD multiply and divide? Surely
    > > there's a better way than repeated adding/subtracting n
    > > times per multiplier/divisor digit. Converting arbitrary
    > > precision BCD to binary, performing the operation, and
    > > then converting back to BCD wouldn't seem to be the way
    > > to go (in hardware).

    >
    > Here, I'm not too clear as to whether you mean mechanical or electronic
    > calculators. The Friden and Monroe mechanical calculators did indeed
    > do successive subtraction. But they moved the carriage so that you
    > always got one digit of result after, on average, 5 subtractions. 50
    > for 10 digits. That's not too bad.


    Ah, yes. I'd forgotten about the ol' divide by zero on the
    electro-mechanical calculators. And the only "reset" was to
    pull the plug.

    > Most electronic calculators used decimal arithmetic internally,
    > though I think the newer ones are binary.
    > As someone else said, they used multiplication tables stored in ROM.


    I'll have to think about the use of multiplication tables,
    especially for the case of packed values.

    > Since the advent of the microprocessor, ca. 1972, Intel and other mfrs
    > have included "decimal adjust" operations that correct addition and
    > subtraction operations if you want them to be decimal. Adding
    > numbers in decimal is the same as adding them in binary (or hex),
    > except that you must add six if the result is bigger than 9. They
    > use auxiliary carry bits to allow the addition of more than one
    > digit at a time.


    Addition and subtraction, as you say, are accommodated by the DAA
    instruction. Most of the micros only have a decimal adjust for
    addition, so you have to learn how to subtract by adding the ten's
    complement of the value being subtracted.
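
    A short illustrative C sketch (not from the thread) of both halves
    of that idea -- the "add six" decimal adjust for packed-BCD
    addition, and subtraction on an add-only machine by adding the
    ten's complement:

        #include <stdint.h>

        /* Add two packed-BCD bytes, doing in software what DAA does:
           fix up any nibble that exceeds 9. */
        static uint8_t bcd_add(uint8_t a, uint8_t b, int *carry)
        {
            unsigned lo = (a & 0x0F) + (b & 0x0F) + (unsigned)*carry;
            unsigned hi = (a >> 4) + (b >> 4);
            if (lo > 9) { lo -= 10; hi++; }        /* the "add six" fix-up */
            if (hi > 9) { hi -= 10; *carry = 1; }
            else        { *carry = 0; }
            return (uint8_t)((hi << 4) | lo);
        }

        /* Subtract by adding the ten's complement: 99 - b gives the
           nine's complement per digit; the +1 arrives through the
           incoming carry.  No carry out means a borrow occurred. */
        static uint8_t bcd_sub(uint8_t a, uint8_t b, int *borrow)
        {
            uint8_t nines = (uint8_t)(((9 - (b >> 4)) << 4) | (9 - (b & 0x0F)));
            int carry = 1 - *borrow;
            uint8_t r = bcd_add(a, nines, &carry);
            *borrow = 1 - carry;
            return r;
        }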
     
    Everett M. Greene, Jul 4, 2003
    #15
  16. Jonathan Kirwan wrote:
    >
    > On Thu, 03 Jul 2003 17:28:20 GMT, "Tauno Voipio"
    > <_REMOVE.invalid> wrote:
    >
    > >"Jack Crenshaw" <> wrote in message
    > >news:...
    > >>
    > >> Paul Keinanen wrote:
    > >> >
    > >> > On Wed, 2 Jul 2003 07:48:49 PST, (Everett M.
    > >> > Greene) wrote:
    > >> >
    > >> > >Jack Crenshaw <> writes:
    > >> > >
    > >> > >> You just haven't lived until you've twiddled F.P. bits in Fortran <g>.
    > >> >
    > >> > What is the problem ?
    > >> >
    > >> > IIRC the IAND and IOR are standard functions in FORTRAN and in many
    > >> > implementations .AND. and .OR. operators between integers actually
    > >> > produced bitwise and and bitwise or results.
    > >>
    > >> Hmphh! In Fortran II, we were lucky to get add and subtract. No
    > >> such thing as .AND. and .OR. there.

    > >
    > >Right. The logic operations were performed by strings of three-way GOTO's.
    > >
    > >IIRC, the IBM 1620 did not have the logical operations in machine code,
    > >either.

    >
    > Oh, cripes. It's that goto process which reminded me of Fortran
    > II days on the IBM 1130. Now some more brain cells have been
    > restored. Bad news, as I have now probably forgotten yet
    > another important something. ;)


    Grin!! I did a _LOT_ of work on the 1130. That's where I learned a
    lot of my Fortran skills. IMO the 1130 was one of IBM's very few
    really good computers. Ours had 16k of RAM <!>, and one 512k,
    removable HD. And it supported 100 engineers, plus the accounting
    dept.

    The 1130 OS provided all kinds of neat tricks (remember LOCAL?) to
    save RAM. I was generating trajectories to the Moon and Mars on it.
    Its Fortran compiler was designed to be fully functional with 8k of
    RAM, total. Let's see Bill Gates try _THAT_!!!

    Jack
     
    Jack Crenshaw, Jul 5, 2003
    #16
  17. "Everett M. Greene" wrote:
    >
    > Jack Crenshaw <> writes:
    > > "Everett M. Greene" wrote:
    > > > Jack Crenshaw <> writes:
    > > >
    > > > > You just haven't lived until you've twiddled F.P. bits in Fortran <g>.
    > > >
    > > > Or any other HLL, for that matter!
    > > >
    > > > > FWIW, there's a fellow in my office who has a Friden calculator sitting
    > > > > on his credenza. He's restoring it.
    > > >
    > > > He has a sliderule for backup?

    > >
    > > Funny you should mention that. He has a whole _WALL_ covered with slide
    > > rules, of all shapes, sizes, and descriptions. He has long ones, short
    > > ones, circular ones, spiral ones. He has Pascal-based adding machines.
    > > He has others of which I don't know the origin. He has Curta "pepper
    > > grinders" like we used to use in sports car rallies. The guy's a collector.
    > >
    > > But I presume you mean that the Friden won't be reliable. You may be
    > > right, since he's the one doing the restoring. But IMX, I have never,
    > > ever seen one fail.

    >
    > I was just making a light-hearted comment. I just guessed that
    > someone who has an interest in older technology would have some
    > even older things. Does he have an abacus or two -- just in case?


    Actually, he does -- several. He seems to be collecting every
    possible example of mechanical ways to do math. His office wall is
    a work of art.

    > > > Speaking of hardware math mysteries, Dr. Crenshaw, et al,
    > > > does anyone know how the (very few) computers that have
    > > > the capability perform BCD multiply and divide? Surely
    > > > there's a better way than repeated adding/subtracting n
    > > > times per multiplier/divisor digit. Converting arbitrary
    > > > precision BCD to binary, performing the operation, and
    > > > then converting back to BCD wouldn't seem to be the way
    > > > to go (in hardware).

    > >
    > > Here, I'm not too clear as to whether you mean mechanical or electronic
    > > calculators. The Friden and Monroe mechanical calculators did indeed
    > > do successive subtraction. But they moved the carriage so that you
    > > always got one digit of result after, on average, 5 subtractions. 50
    > > for 10 digits. That's not too bad.

    >
    > Ah, yes. I'd forgotten about the ol' divide by zero on the
    > electro-mechanical calculators. And the only "reset" was to
    > pull the plug.
    >
    > > Most electronic calculators used decimal arithmetic internally,
    > > though I think the newer ones are binary.
    > > As someone else said, they used multiplication tables stored in ROM.

    >
    > I'll have to think about the use of multiplication tables,
    > especially for the case of packed values.
    >
    > > Since the advent of the microprocessor, ca. 1972, Intel and other mfrs
    > > have included "decimal adjust" operations that correct addition and
    > > subtraction operations if you want them to be decimal. Adding
    > > numbers in decimal is the same as adding them in binary (or hex),
    > > except that you must add six if the result is bigger than 9. They
    > > use auxiliary carry bits to allow the addition of more than one
    > > digit at a time.

    >
    > Addition and subtraction, as you say, are accommodated by the DAA
    > instruction. Most of the micros only have a decimal adjust for
    > addition, so you have to learn how to subtract by adding the ten's
    > complement of the value being subtracted.


    Agreed, although some (I think the Z80 was one) worked for
    subtraction as well. As for multiplication and division, forget it.
    But if you do the Friden trick of shifting, successive subtraction
    works pretty well. Still not nearly as fast as binary, of course,
    but if you have to convert back & forth for I/O anyway, it may still
    be more efficient to do it in BCD. Plus, you don't have to deal with
    the bother of a 1 that becomes 0.99999999.
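
    The Friden trick translates directly to software. Below is an
    illustrative C sketch of plain restoring decimal division: subtract
    the shifted divisor until it no longer fits, record the count as
    one quotient digit, shift right one digit, and repeat. Ordinary
    unsigned arithmetic stands in for the BCD registers to keep it
    short:

        /* Requires num < den * 10^digits, so each quotient digit
           needs at most nine subtractions, and den * 10^(digits-1)
           must fit in an unsigned long. */
        static unsigned long fdiv(unsigned long num, unsigned long den,
                                  int digits, unsigned long *rem)
        {
            unsigned long q = 0, shifted = den;
            for (int i = 1; i < digits; i++) shifted *= 10;  /* carriage left */
            for (int i = 0; i < digits; i++) {
                unsigned long d = 0;
                while (num >= shifted) { num -= shifted; d++; }
                q = q * 10 + d;          /* count of subtractions = digit */
                shifted /= 10;           /* carriage right */
            }
            *rem = num;
            return q;
        }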

    Calculators do everything completely differently. In a calculator
    (at least the old ones -- modern things like PDA's have RAM to
    burn), time isn't an issue. The CPU is always waiting for the user
    anyway, so efficiency of computation isn't required. Saving ROM
    space is lots more important.

    Jack
     
    Jack Crenshaw, Jul 5, 2003
    #17
  18. On Sat, 05 Jul 2003 15:47:56 GMT, Jack Crenshaw
    <> wrote:

    >Jonathan Kirwan wrote:
    >>

    ><snip>
    >> >Re grid leak: I think it must be pretty much trial and error. No doubt
    >> >_SOMEONE_ has a theory for it, but I would think the grid current must
    >> >vary a lot from tube to tube.

    >>
    >> I'll tell you, I sure beat myself to death on that one. I
    >> didn't have anyone smart enough to ask this, so it was just me
    >> and the libraries. Finally, it did dawn on me about the
    >> practical knowledge one simply needed to have. And I relaxed a
    >> little. But then I started looking for physics models of vacuum
    >> tubes.
    >>
    >> I've only recently realized that the full understanding
    >> (allowing for rarified gases in the tube) requires at least 2
    >> spatial and 1 time dimension of PDEs coupled to at least
    >> 6-dimensional ODEs to understand, along with radiation transport
    >> and atomic interactions. I've lost some hubris and gained some
    >> humility from such realizations. This stuff ain't just for
    >> anyone!

    >
    >I think you'll appreciate this story, which came from my major prof
    >in college. He was a wonder -- designed and built the Calutrons for
    >separating Uranium at Oak Ridge, later went on to design analog
    >fire-control systems for the Navy. Nobody knew vacuum tubes like
    >him.
    >
    >You remember photomultiplier tubes? You have a bunch of electrodes
    >all curved and arranged in strange patterns. Each is held at a
    >different potential. The idea is that once an electron is released
    >at the first sensor, they cascade down from plate to plate,
    >increasing the current each time.


    I use PMTs in my work, so yes.

    >Dr. Carr asked me if I could guess how they designed the shape of
    >those electrodes? Or the potentials. I assumed, as you did, that
    >they used PDE's and lots of math. Not so.
    >
    >He said they'd build a large mechanical analog of the tube, with
    >huge models of each electrode. The height of each model was set at
    >the potential it would be at.
    >
    >Then they'd stretch a sheet of rubber over the whole thing, and
    >roll marbles down it <!!!!>. They'd play with shapes and heights
    >(potentials) until they got most marbles rolling the right way.
    >Neat, eh?


    Excellent. Of course, there might be some subtlety that they
    didn't include in the physical model which becomes dominant in
    the real McCoy. But odds are the physical approximation will
    get you far, if you are wise about modeling the dimensions (and
    I don't just mean length here.)

    I just like having the theory from which to make such deductions
    to the specific, as well. Sometimes, interesting ideas can
    suggest themselves from that. More, it's hard to apply your
    dimensional analysis to the physical modeling without at least
    some theory as your guide.

    But one uses all the tools available at reasonable cost, I
    imagine. I'm sure they did, too.

    >> >FWIW, I sit here surrounded by old Heathkit tube electronics. I collect
    >> >them. Once I started buying them, I realized I couldn't just drive down to the
    >> >local drugstore and test the tubes. Had to buy a tube tester, VTVM, and all
    >> >the other accoutrements to be able to work on them.

    >>
    >> I collected boxes and boxes of tubes for the longest time. From
    >> VR-150's I removed from radar systems of WW II, to 12AX7s, to
    >> whatever. Finally, I selected a radio club and just gave the
    >> entire lot of them away. I had to balance my memories of
    >> thinking about these things against my hording a collection of
    >> them and I decided to find some folks who would make some honest
    >> use of them. I had no business keeping them.
    >>
    >> Took me a while to let them go, though. And there are times of
    >> slight regret, when I want to play with one in a circuit again.

    >
    >Yep, I know. We had a junk room at the Physics dept, with all kinds
    >of tubes in it. I raided it and had a big box of tubes which I
    >carried around from house to house, and job to job. I might have
    >used, like, 4. Like you, I finally ended up junking them.
    >
    >But now I have more. Here's a box right here with the cream of the
    >crop -- 5881's, KT66's, 6V6's, etc. Somewhere else I've got the
    >12AX7's, etc. Tubes are coming back into style, and tube audio amps
    >bring $100's on eBay.


    It wasn't about money, for me. So that's no incentive for me.
    I just enjoyed the learning experience.

    >> >> >Maybe someone objected to the implied occult nature of the term,
    >> >> >"phantom"?
    >> >>
    >> >> Oh, geez. I've never known a geek to care about such things. I
    >> >> suppose they must exist, somewhere. I've just never met one
    >> >> willing to let me know they thought like that. But that's an
    >> >> interesting thought. It would fit the weird times in the US we
    >> >> live in, with about 30% aligning themselves as fundamentalists.
    >> >>
    >> >> Nah... it just can't be.
    >> >
    >> >I agree; I was mostly kidding about the PC aspects. One never knows,
    >> >tho. FYI, I have been known to be called a "fundie" on talk.origins and
    >> >others of those insightful and respectful sites. I'm not, but they are not
    >> >noted for their discernment or subtleties of observation.

    >>
    >> Hehe. I was raised Catholic and I attended both Catholic and
    >> public schools -- including a few years at the University of
    >> Portland, which is a Catholic university. I'm an atheist (a
    >> conclusion I've come to out of what I feel is being honest about
    >> the preponderance of the evidence), but I've quite a collection
    >> of religious materials -- including "parallels" and facsimiles
    >> of various fragments and as much raw source materials as I can
    >> muster. Fills many shelves in my library. So, as you might
    >> imagine, I can manage some debate on the subject.
    >>
    >> >One of my favorite atheists is Stan Kelly-Bootle of "Devil's DP
    >> >Dictionary" fame. Among others of his many myriad talents, he's one of
    >> >the world's leading experts on matters religious. He and I have had
    >> >some most stimulating and rewarding discussions, on the rare occasions
    >> >when we get together. The trick is a little thing called mutual respect.
    >> >Most modern denizens of the 'net don't get the notion of respecting a
    >> >person's opinion, even while disagreeing with it.

    >>
    >> Frankly, I think this is one area of physics and physics
    >> training that most lay people simply do NOT understand well. I
    >> can stand up in front of a group and propose my thoughts on the
    >> board. They can tear into me like a pack of lions (in fact,
    >> there would be something really, really wrong if they didn't)
    >> and abuse me with all manner of criticism. Any outsider looking
    >> in would imagine that I must feel pretty darned bad after such
    >> attacks. But only a half hour later we are out in the lunch
    >> room talking like the best of friends, which we are.
    >>
    >> I *need* criticism. Without it, I simply don't get better.
    >> This is such an important process and most people I know don't
    >> seem to understand how one's friends can appear to go for the
    >> jugular in one moment and try and make you feel like a worm, it
    >> seems, and in the next be your great pal. They seem to think
    >> that friends support friends in their ideas.
    >>
    >> But there is an opposing facet in physics. One where peoples'
    >> respect is demonstrated by their willingness to put in time and
    >> criticize. And the critiquing processes is fundamental for all
    >> of us becoming better. And when each of us is better, we are
    >> all made better.
    >>
    >> Underlying this, as you say, is *respect* and a willingness to
    >> deal with the objections made, and not just ignore them.
    >>
    >> Sadly, this is something which most outsiders do not well fathom
    >> or appreciate. And personally, I think we'd all be better off
    >> if people would learn to separate the concepts of challenge from
    >> respect. One can be quite respectful, while challenging the
    >> hell out of another. And just a few minutes later be the best
    >> of pals. And that's a healthy way to be, in my mind.
    >>
    >> In personal and business relationships, I have to be careful to
    >> continually reinforce my respect and appreciation whenever I'm
    >> also challenging ideas, to diffuse natural reactions. It feels
    >> almost smarmy and fawning to me, but it seems what's expected in
    >> "polite society." But among physicists? Nah. There's no need
    >> for such blandishments -- it's assumed. And the respect is
    >> demonstrated all the more by the heat and energy which goes into
    >> the debate.
    >>
    >> A final note about this, though. Respect is *earned*, not given
    >> away. One earns this through due diligence and mental effort.
    >> It's not some free-bee. Perhaps this is what bothers
    >> non-scientists more -- the work that is required in order to
    >> earn the respect which endures through any processes of
    >> challenge. No one is going to think the less of you, if you've
    >> done your work and applied yourself well -- even if you are
    >> wrong. And winnowing away wrong ideas is part and parcel,
    >> anyway. It's what science is largely about. The better of us
    >> aren't judged so much by the conclusions reached, as by the
    >> careful application of intellect and diligence.

    >
    >To which, all I can say is, "Amen." Sharing ideas with people who don't
    >agree is how we learn. Calling them nasty names is not helping.


    Yes, in general.

    I think there may be a place for name calling if other
    avenues of goading fail and it's perceived that there might be
    some hope with a dash of cold water in the face. Sometimes, it
    just takes a slap to get a response. If it's done with respect,
    even if they don't really realize you honestly care about them
    caring about themselves, then it may work out for the better.

    Of course, there's always the risk of a broken relationship as a
    result. But sometimes it's already broken by that time and this
    is the only remaining possibility for restoring it. One takes
    one's chances.

    >> >> Oh, there was no question. I've a kindred interest in physics
    >> >> and engineering, I imagine. I'm currently struggling through
    >> >> Robert Gilmore's books, one on Lie groups and algebras and the
    >> >> other on catastrophe theory for engineers as well as polytropes,
    >> >> packing spheres, and other delights. There were some nice
    >> >> insights in your book, which helped wind me on just enough of a
    >> >> different path to stretch me without losing me.
    >> >
    >> >Glad to help.
    >> >
    >> >> By the way!! I completely agree with you about MathCad! What a
    >> >> piece of *&!@&$^%$^ it is, now. I went through several
    >> >> iterations, loved at first the slant or approach in using it,
    >> >> but absolutely hate it now because, frankly, I can't run it for
    >> >> more than an hour before I don't have any memory left and it
    >> >> crashes out.
    >> >
    >> >Don't get me started on Mathcad!
    >> >
    >> >As some old-time readers might know, I used to recommend Mathcad to
    >> >everyone. In my conference papers, I'd say, "If you are doing engineering and
    >> >don't have Mathcad, you're limiting your career." After Version 7 came out, I had
    >> >to say, "Don't buy Mathcad at any price; it's broken." Here at home I've stuck
    >> >at Version 6. Even 6 has its problems -- 5 was more stable -- but it's the
    >> >oldest I could get (from RecycledSoftware, a great source). The main reason I
    >> >talked my company into getting Matlab was as a refuge from Mathcad.

    >>
    >> You are in Arizona, now? I seem to recall you worked at some
    >> business in Florida -- perhaps even one related to the area I
    >> work in. But my memory is fading.

    >
    >Yep. I used to work at ATK in Clearwater. Now I'm with Spectrum
    >Astro in the Phoenix area (111 degrees, yesterday!). Good memory.
    >Both jobs require a lot of simulation skills.


    Hehe. Looks like very interesting work with very interesting
    people. Excellent!

    >> (I remember you mentioning a Kaypro somewhere. My first
    >> personal purchase of a PC was the Kaypro 286i, which was the
    >> first truly IBM PC compatible. Before it, they were "90%"
    >> compatible, or so.)

    >
    >Right again. I still have my two original machines (the pre-DOS,
    >CP/M boxes), plus five or six more I got from eBay. Nice little
    >boxes, in their day.


    Yes, Kaypro did good and with reasonable pricing at the time.

    >FWIW, I still miss the reliability of the CP/M machines. Crude?
    >Yes, indeed. Limited? For sure. _BUT_ I could edit a file without
    >fear of crashing, and I never found the need for ScanDisk or
    >Defrag. Certainly not Norton Disk Doctor!! I did my columns and
    >stuff in Wordstar for years; also programming in Turbo Pascal.
    >During that time, I had exactly _ZERO_ crashes, or bugs of any kind
    >(except when a solder joint failed once, because of thermal
    >stress). Wordstar and TP simply did what I told them, when I told
    >them, every time.
    >
    >Someone recently asked me how many times I had a disk failure with
    >CP/M. It was exactly once, and that was when a floppy got so worn,
    >the oxide began flaking off the vinyl.


    I used CP/M a fair amount, too. In general, my only problems
    were with PerSci floppies. Voice coil drive, fast, and I had a
    few fail on me. The Shugarts just kept working.

    I rely on DOS, similarly. Of course, if you've done *any*
    assembly programming on a DOS x86 with .COM files or have
    programmed with the early DOS 1.0 function calls, you *know*
    about the many similarities (identical, sometimes) with CP/M.
    But there are times when Windows won't boot and that DOS is
    still ticking away just fine, that I can jump in and use it to
    get Windows restarted. Another reason I'm still on Win98, by
    the way.

    >> >Having said that, truth in advertising also requires me to say that I
    >> >use it almost every day. The reason is simple: It's the only game in town.
    >> >It's the only Windows program that lets you write both math equations and text, lets
    >> >you generate graphics, and also does symbolic algebra, in a WYSIWYG interface. Pity
    >> >it's so unstable.

    >>
    >> Yes. I really need to get an older version, I guess, or else
    >> put myself into deep freeze to be awakened when they "get it
    >> right."

    >
    >Whatever you do, do _NOT_ get Version 11. It's even worse than the
    >predecessors.


    Okay. It just makes me angry with them, having paid them well
    and received absolutely NOTHING of value for it. And I keep
    seeing that carrot dangling in front of my face that I can't
    quite reach.
    I just have to close my eyes, I guess.

    >> >Come to that, my relationship with Mathcad is very consistent, and much
    >> >the same as my relationship with Microsoft Office and Windows. I use it every day,
    >> >and curse it every day. I've learned to save early and often. Even that doesn't
    >> >always help, but it's the best policy. I had one case where saving broke the file, but
    >> >the Mathcad support people (who can be really nice, sometimes) managed to restore
    >> >it.

    >>
    >> Well, I can't compare my experiences with Mathcad with Windows
    >> or Office. Frankly, I've never seen a single product so
    >> potentially useful and at the same time so totally useless as
    >> Mathcad!

    >
    >There you go. It's a great pity, and I do hope they manage to get it
    >back on track.
    >If ever there were a need for an open source version, this is it.


    Hehe. Noted. I'll remember this example when others rail at
    the idea of open source, and stuff it in their faces. It's a
    classic, for sure.

    >> >I stay in pretty constant contact with the Mathcad people. As near as I
    >> >can tell, they are trying hard to get the thing under control. Their goal is to get the
    >> >program to such a point that it's reasonable to use as an Enterprise-level utility,
    >> >and a means of sharing data across organizations. I'm also on their power users'
    >> >group, and theoretically supposed to be telling them where things aren't working.

    >>
    >> Well, I talked to them at length and even told several people
    >> there to look at your book to read about someone's experiences
    >> in print.
    >>
    >> >Even so, when I report problems, which is often, the two most common
    >> >responses I get are:
    >> >
    >> >1) It's not a bug, it's a feature, and
    >> >2) Sorry, we can't reproduce that problem.

    >>
    >> Yes, that's a good summary!
    >>
    >> >I think Mathsoft went through a period where all the original authors
    >> >were replaced by maintenance programmers -- programmers with more confidence than
    >> >ability. They seemingly had no qualms about changing things around and redefining user
    >> >interfaces, with little regard for what they might break. Mathsoft is trying to
    >> >turn things around now, but it's not going to be easy. IMO.

    >>
    >> That would fit the facts. The top-dogs in the business probably
    >> just thought that the technical resources were "replaceable"
    >> when they weren't, really. Of course, the top-dogs also think
    >> they are *not* replaceable. Hypocrisy in action.

    >
    >I think that's exactly right. Someone made decisions based on profit
    >rather than quality.


    Like that's anything new. In the US, at least, it's not just
    profit but short-term-profit-in-the-next-three-months which
    drives almost all decisions.

    >> >> Reboot time every hour is not my idea of a good
    >> >> thing. And that's only if I don't type and change things too
    >> >> fast. When I work quickly on it, I can go through what's left
    >> >> with Win98 on a 256Mb RAM machine in half an hour! No help from
    >> >> them, and two versions later I've simply stopped using it. I
    >> >> don't even want to hear from them again. Hopefully, I'll be
    >> >> able to find an old version somewhere. For now, I'm doing
    >> >> without.
    >> >
    >> >See RecycledSoftware as mentioned above. BTW, have you _TOLD_ Mathsoft
    >> >how you feel? Sometimes I think I'm the only one complaining.

    >>
    >> Oh, yes. I've told them on numerous occasions and in no
    >> uncertain terms. I'll look at recycledsoftware and elsewhere.
    >> But it's my opinion that Mathsoft should *give* me a copy of
    >> version 5 and be done with it. It's not as if I haven't paid
    >> them their due, only to find their later products unusable.
    >> They should make every attempt within their reasonable power to
    >> satisfy their customers, and providing version 5 is reasonable,
    >> in my estimation. I've come to the conclusion that they aren't
    >> interested enough in a satisfied customer base.

    >
    >That's absolutely true, IMO. There was a time, around versions 7-8,
    >when they were
    >changing the user interface radically, with each version. We "power
    >users" complained
    >mightily, to no avail. We finally figured out, they were responding to
    >neophyte users who
    >couldn't figure out the interface, and called in because it didn't
    >behave like Word.
    >Instead of saying RTFM, they set out to make the interface more
    >Wordlike, thereby blowing
    >away all of us who _HAD_ learned the interface.
    >
    >Here's one you'll appreciate. It's a minor nit, but indicative, I
    >think, of something-or-other.
    >
    >All recent versions of the interface have a function called
    >insert/delete lines. With older versions,
    >it used to be the _ONLY_ way to deal with white space. In newer
    >versions, you can just enter or delete
    >carriage returns, but the insert/delete is still there. I use the
    >function a lot when I want to
    >insert extra prose or math in the middle of a file. I just make lots of
    >white space, make my inserts,
    >then delete what white space is left over.
    >
    >I'd do something like insert 500 lines, enter my new stuff, then delete
    >500 lines. It wouldn't let
    >me delete all of them, but through version 6, it would change my 500 to,
    >say, 367, and give me the
    >chance to approve that number.
    >
    >After version 6, it won't do that any more. It says, "please enter a
    >number between 1 and 367." Then I
    >have to type the 367 manually.
    >
    >Now, why on earth would you _CHANGE_ a thing like that? Clearly the
    >routine knows how many lines can
    >be deleted; it's telling me so. But it won't just put the correct limit
    >in, even though it used to do
    >exactly that.
    >
    >Why would someone take out working code and replace it with something
    >dumber? And why, oh why, have they
    >let the software progress through six more major releases, plus
    >countless updates, without fixing it?


    What can I say?? I'm as frustrated as you are, and I haven't even
    been exposed to it the way you have. It all just baffles me.

    >> Hehe. Kalman!! And continuous time Kalman-Bucy. Oh, the joys.
    >> Now there's a subject which could use a good writer for the
    >> embedded programming masses!

    >
    >I know; I'm working on it. Just about the time I think I'm ready to
    >write that article, someone else has already written their own version.


    hehe. Well, keep at it! I used it for sensor fusion with
    various phased-array radar and other sensor systems, all with
    varying characteristics to model. It's worth knowing about!

    >> >Used it again, 20 years later,
    >> >on a '486. Needless to say, it's not very accurate, but 16 bits is
    >> >about all we can get out of an A/D converter anyway, so it's reasonable
    >> >for embedded use.

    >>
    >> I tend to look very closely at folks who just apply floating
    >> point without thinking. Even in cases where the dynamic range
    >> is crucial (and I've worked on systems measuring current, for
    >> example, over 12 orders of magnitude), I usually find far fewer
    >> stumbling blocks to working code without using floating point.
    >> Most programmers just do NOT understand its limitations and
    >> pitfalls. And they will use it unwisely. It's a testimonial to
    >> the design of floating point that they can get away with it so
    >> often without really knowing why. But they are still largely
    >> ignorant when applying it.
    >>
    >> So I sometimes just summarize this to another programmer by
    >> saying, "Well, you get your data from an integer ADC and you put
    >> out your results on an integer DAC, so why are you using
    >> floating point?" Sometimes, that's enough of a prod at least to
    >> get them to think about it.

    >
    >Well, let me give you a counterpoint. The huge advantage to f.p. is the
    >dynamic range.


    Of course. I gave a swipe at that point above, in fact. But I
    added that even then, I've often found better ways -- even in
    the face of 12 orders of mag. The key to my point isn't that
    floating point should be entirely avoided. It's that it should
    be applied with understanding -- and more particularly, in the
    case of most embedded systems. If Excel crashes out, "Oh, gee.
    I guess I'll just reboot." You live with it. But in an
    embedded system, subtle errors crop up if you aren't careful.

    >It's virtually impossible to write a Kalman filter any other way, for
    >that very reason
    >(hence our use of f.p. in that 8080 version). In integer arithmetic,
    >you have to check
    >every single operation for overflow. I can't tell you how many times
    >I've used a debugger
    >to read a fixed-point value in hex, and convert it to real using my
    >trusty Sharp calculator.


    Or you can do the analysis to verify that it is impossible for
    overflow to occur. Which is what I've done in many cases. One
    should be careful, no matter, I suppose.
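
    Just to make that concrete, here's a minimal sketch of the kind
    of per-operation guard being described -- my own hypothetical
    illustration in C, assuming Q16.16 fixed point, not code from
    any project mentioned here:

        #include <stdint.h>

        /* Hypothetical helper: multiply two Q16.16 fixed-point
           values, saturating rather than wrapping on overflow, so
           the check lives in one place instead of at every multiply
           in the source.  Assumes arithmetic right shift for signed
           values, as on most targets. */
        static int32_t fxmul_sat(int32_t a, int32_t b)
        {
            int64_t p = (int64_t)a * (int64_t)b;  /* exact 64-bit product */
            p >>= 16;                             /* drop extra fraction bits */
            if (p > INT32_MAX) return INT32_MAX;  /* clamp positive overflow */
            if (p < INT32_MIN) return INT32_MIN;  /* clamp negative overflow */
            return (int32_t)p;
        }

    Saturating isn't always the right policy -- sometimes you'd
    rather trap and log -- but either way the analysis is done once,
    in one spot, rather than rechecked by hand at every operation.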

    >It's a very, very tedious process, and fraught with the possibility of
    >error. One of the
    >Mars missions was lost for this very reason -- overflow in a fixed-point
    >computation.
    >
    >Of course, these things are supposed to be caught during V & V, but
    >clearly they are not, always.
    >
    >Sometimes it can be pretty scary, realizing that all those nuke-tipped
    >missiles sitting in
    >silos and other places, have software written in a way that multiplies
    >can overflow and go
    >negative. Brrrr!!! ("Return to sender, address unknown...")
    >
    >If nothing else, floating point can eliminate that worry. I tell my
    >readers, if your CPU has f.p. capability, USE IT! It may be a crutch,
    >as you suggest,
    >but it sure speeds up the development process. Makes the software more
    >robust, too.


    Well, you've given me a story early on, about modeling PMTs.
    Let me tell you one.

    Calculating a standard deviation is often done by "smarty pants"
    programmers with a modified version of the "standard equation"
    where only one pass through the data is required. You know the
    one, where you accumulate both a SUM(x) and SUM(x^2). At the
    end, a difference calculation is used. But in this case, the
    magnitudes of the two parts are often quite similar, leaving only
    the least significant bits in the result.

    When this happens, preserving those bits during accumulation can
    be very important. For example, what often isn't realized is
    that it is important to pre-sort the data before accumulation so
    that the smaller numbers can accumulate to larger values
    *before* getting swamped out by the accumulation of one of the
    larger values. If the largest value, for example, were added
    first, the smaller values might very well truncate out
    completely as they are accumulated and never get a chance to
    impact the least significant bits in the summed result, before
    the difference is taken and they become crucial for the final
    calculation.
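
    To put a sketch behind that -- my own illustration in C, not
    code from anyone's project -- here is the fragile one-pass form
    next to Welford's running update, a well-known one-pass
    alternative that avoids the cancellation without pre-sorting:

        #include <math.h>
        #include <stddef.h>

        /* Naive one-pass form: accumulate SUM(x) and SUM(x^2), then
           take the difference.  When the mean dwarfs the spread, the
           two terms nearly cancel and only low-order bits survive. */
        double stddev_naive(const double *x, size_t n)
        {
            double s = 0.0, s2 = 0.0;
            for (size_t i = 0; i < n; i++) {
                s  += x[i];
                s2 += x[i] * x[i];
            }
            return sqrt((s2 - s * s / (double)n) / (double)(n - 1));
        }

        /* Welford's update: carry the running mean and the sum of
           squared deviations from it, so no near-equal large
           quantities are ever subtracted. */
        double stddev_welford(const double *x, size_t n)
        {
            double mean = 0.0, m2 = 0.0;
            for (size_t i = 0; i < n; i++) {
                double d = x[i] - mean;
                mean += d / (double)(i + 1);
                m2   += d * (x[i] - mean);
            }
            return sqrt(m2 / (double)(n - 1));
        }

    Feed the naive form values like 1.0e8 plus noise in the last few
    digits and the quantity under its square root can even come out
    negative; the second form stays well-behaved on the same data.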

    This is only one of many subtle examples. And the analysis is
    sometimes rather difficult to shepherd well, without training
    and practice. On the other hand, analyzing integer math is, by
    comparison, much more of an "undergrad" kind of thing. The
    issues are more tractable to more people, as a rule.

    And in the end, it *is* helpful to remember that it's integer
    in, integer out, for many embedded systems, and it's worth
    analyzing the data flows throughout. My belief is that
    floating point should be justified by its proponents. But so
    should integer.

    In other words, someone should be paying attention and it should
    be clear from the record why either integer or floating point is
    chosen for a particular application. But to be honest, the
    issues of floating point are more subtle and the skills required
    to properly analyze it are greater, I think.

    In any case, it's good to question someone and make them think
    about it.

    >The other scary part is, as nearly as I can tell, desk checking and hand
    >checking have become
    >a lost art. With all too many programmers nowadays, if it compiles, it's
    >ready to ship.


    Tell me about it. It's quite common for me to prepare a 10 or
    20 page analysis, complete with timing diagrams and mathematical
    derivations. I'll include error budgets/tolerances in that
    analysis and show how I got them. Sometimes, people just want
    me to roll up my sleeves and get the task out. But I need
    confidence, even if they don't. So I do the work, anyway.

    Originally, I'd hoped that others would actually take the chance
    to point out my errors and help me improve the documents. But
    most of my target readers just ignore them, assuming I am
    getting things right, or unable to challenge my points, or
    unwilling to put in the time. No matter. Now, I just do it
    mostly for my own sake -- just to help me be sure that I've
    covered the issues and to provide myself with something to look
    back on at a later time. It's turned out to help me a lot to
    get back into the right mindset, when having to return to a
    project.

    So the point often isn't anymore to get input from others. It's
    more for me. I can live with that.

    Sadly, too few programmers have learned numerical methods for
    analysis -- for example, power functions applied to recurrences.
    Who today reads through each page of Knuth's 3-vol set, as I did
    when it came out? (Or his "Concrete Mathematics," published
    recently, or Chapra and Canale's "Numerical Methods for
    Engineers" or your own book or a host of others worth studying.)

    Times have changed, I suppose.

    Jon
     
    Jonathan Kirwan, Jul 5, 2003
    #18
  19. On Sat, 05 Jul 2003 15:00:37 GMT, Jack Crenshaw
    <> wrote:

    >Jonathan Kirwan wrote:
    >>
    >> On Thu, 03 Jul 2003 17:28:20 GMT, "Tauno Voipio"
    >> <_REMOVE.invalid> wrote:
    >>
    >> >"Jack Crenshaw" <> wrote in message
    >> >news:...
    >> >>
    >> >> Paul Keinanen wrote:
    >> >> >
    >> >> > On Wed, 2 Jul 2003 07:48:49 PST, (Everett M.
    >> >> > Greene) wrote:
    >> >> >
    >> >> > >Jack Crenshaw <> writes:
    >> >> > >
    >> >> > >> You just haven't lived until you've twiddled F.P. bits in Fortran
    >> ><g>.
    >> >> >
    >> >> > What is the problem ?
    >> >> >
    >> >> > IIRC the IAND and IOR are standard functions in FORTRAN and in many
    >> >> > implementations .AND. and .OR. operators between integers actually
    >> >> > produced bitwise and and bitwise or results.
    >> >>
    >> >> Hmphh! In Fortran II, we were lucky to get add and subtract. No such
    >> >> thing
    >> >> as .AND. and .OR. there.
    >> >
    >> >Right. The logic operations were performed by strings of three-way GOTO's.
    >> >
    >> >IIRC, the IBM 1620 did not have the logical operations in machine code,
    >> >either.

    >>
    >> Oh, cripes. It's that goto process which reminded me of Fortran
    >> II days on the IBM 1130. Now some more brain cells have been
    >> restored. Bad news, as I have now probably forgotten yet
    >> another important something. ;)

    >
    >Grin!! I did a _LOT_ of work on the 1130. That's where I learned a lot
    >of my Fortran skills.
    >IMO the 1130 was one of IBM's very few really good computers. Ours had
    >16k of RAM <!>, and
    >one 512k, removable HD. And it supported 100 engineers, plus the
    >accounting dept.
    >
    >The 1130 OS provided all kinds of neat tricks (remember LOCAL?) to save
    >RAM. I was generating
    >trajectories to the Moon and Mars on it. Its Fortran
    >compiler was designed to be fully functional with 8k of RAM, total.
    >Let's
    >see Bill Gates try _THAT_!!!


    hehe. We had a 16k system, too! The timesharing system I wrote
    provided timeshared BASIC for 32 users, by the way, and lived in
    16k RAM -- 6k for the interpreter and 10k for the swapped user
    page. Included Chebyshev and mini-max methods for the
    transcendentals -- something Intel failed to use for their x87
    floating point units until the advent of the Pentium, many years
    later.

    Oh, well.
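
    For anyone who hasn't played with these methods, here's a minimal
    sketch of the basic Chebyshev machinery -- my own illustration in
    C, obviously nothing like the 16k original: compute the series
    coefficients of a function on [-1,1] by sampling at the Chebyshev
    nodes, then evaluate with Clenshaw's recurrence:

        #include <math.h>
        #include <stdio.h>

        #ifndef M_PI
        #define M_PI 3.14159265358979323846
        #endif

        #define N 12  /* series length; plenty for exp() on [-1,1] */

        /* Chebyshev coefficients of f on [-1,1], by the standard
           cosine-sum formula at the Chebyshev nodes. */
        static void cheb_coeffs(double (*f)(double), double c[N])
        {
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int k = 0; k < N; k++) {
                    double t = M_PI * (k + 0.5) / N;
                    sum += f(cos(t)) * cos(j * t);
                }
                c[j] = 2.0 * sum / N;
            }
        }

        /* Clenshaw's recurrence: evaluates c[0]/2 + sum c[j]*T_j(x)
           without ever forming the polynomials T_j explicitly. */
        static double cheb_eval(const double c[N], double x)
        {
            double b1 = 0.0, b2 = 0.0;
            for (int j = N - 1; j >= 1; j--) {
                double b0 = 2.0 * x * b1 - b2 + c[j];
                b2 = b1;
                b1 = b0;
            }
            return x * b1 - b2 + 0.5 * c[0];
        }

        int main(void)
        {
            double c[N];
            cheb_coeffs(exp, c);
            printf("%.15f vs %.15f\n", cheb_eval(c, 0.5), exp(0.5));
            return 0;
        }

    A true mini-max fit (Remez exchange) shaves the error a little
    further, but the Chebyshev fit lands within a whisker of it for
    a fraction of the effort.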

    Jon
     
    Jonathan Kirwan, Jul 5, 2003
    #19
  20. In article <>,
    Jack Crenshaw <> wrote:
     
    Cameron Laird, Jul 6, 2003
    #20