
FPU vs soft library vs. fixed point

Discussion in 'Embedded' started by Don Y, May 25, 2014.

  1. Don Y

    Don Y Guest

    Hi,

    Yes, most cheaper "general purpose" processors don't (though
    there seems to be some movement in this direction, of late).
    You can also *only* support the floating point operators that you
    really *need*. E.g., type conversion, various rounding modes, etc.
    But, most significantly, you can adjust the precision to what
    you'll need *where* you need it. And, "unwrap" the various
    routines so that you only do the work that you need to do, *now*
    (instead of returning a "genuine float/double" at the end of each
    operation).
    Yes, I've also been exploring use of rationals in some parts of
    the computation. As I said, tweak the operations to the specific
    needs of *this* data (instead of trying to handle *any* possible
    set of data).

    I suspect something more akin to arbitrary (though driven) precision
    will work -- hopefully without having to code too many variants.
    I've written several floating point packages over the past 35+
    years so I'm well aware of the mechanics -- as well as a reasonable
    set of tricks to work around particular processor shortcomings
    (e.g., in the 70's, even an 8x8 MUL was a fantasy in most processors).

    You can gain a lot by rethinking the operations you *need* to
    perform and alternative (equivalent) forms for them that are
    more tolerant of reduced precision, less prone to cancellation,
    etc. E.g., the resonators in the speech synthesizer are
    particularly vulnerable at lower frequencies or higher bandwidths
    in a "classic" IIR implementation. So, certain formants have been
    reimplemented in alternate/equivalent (though computationally
    more complicated -- but not *expensive*) forms to economize there.

    [I.e., do *more* to "cost less"]
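    A sketch of the kind of trade I mean -- the "classic" two-pole direct
    form next to the textbook coupled-form (Gold-Rader) resonator, which
    spends two extra multiplies per sample to keep its poles well placed
    when the coefficients get quantized at low center frequencies.
    (Illustration only -- not necessarily the exact forms used here.)

    #include <cmath>

    // Double precision here just to show the two structures; the point in
    // practice is how each behaves once its coefficients are quantized.
    const double kPi = 3.14159265358979323846;

    struct DirectFormResonator {          // "classic" y = A*x + B*y1 + C*y2
        double A, B, C, y1 = 0.0, y2 = 0.0;
        DirectFormResonator(double f, double bw, double fs) {
            const double r = std::exp(-kPi * bw / fs);
            C = -r * r;
            B = 2.0 * r * std::cos(2.0 * kPi * f / fs); // ~2r as f -> 0: poles crowd z = 1
            A = 1.0 - B - C;                            // unity gain at DC
        }
        double step(double x) {
            const double y = A * x + B * y1 + C * y2;
            y2 = y1; y1 = y;
            return y;
        }
    };

    struct CoupledFormResonator {         // Gold-Rader: rotate the state by (c, s)
        double c, s, g, u = 0.0, v = 0.0;
        CoupledFormResonator(double f, double bw, double fs) {
            const double r = std::exp(-kPi * bw / fs);
            c = r * std::cos(2.0 * kPi * f / fs);       // both coefficients stay well
            s = r * std::sin(2.0 * kPi * f / fs);       //   scaled even for small f
            g = 1.0 - r;                                // crude gain trim only
        }
        double step(double x) {                         // two extra multiplies per sample
            const double u1 = c * u - s * v + g * x;
            const double v1 = s * u + c * v;
            u = u1; v = v1;
            return v;
        }
    };

    int main() {
        DirectFormResonator  f1(250.0, 60.0, 10000.0);  // e.g. a low first formant
        CoupledFormResonator f2(250.0, 60.0, 10000.0);
        double a = 0.0, b = 0.0;
        for (int n = 0; n < 100; ++n) {                 // impulse response of each
            const double x = (n == 0) ? 1.0 : 0.0;
            a = f1.step(x);
            b = f2.step(x);
        }
        return (a == a && b == b) ? 0 : 1;              // just exercise both forms
    }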

    But, to date, I have been able to move all FP operations out of the
    time-critical portions of code. So, my FP implementations could
    concentrate on being *small* (code/data) without having to worry
    about execution speed.

    Now, I'd (ideally) like NOT to worry about speed and still avail
    myself of their "ease of use" (wrt support efforts).

    Thx,
    --don
     
    Don Y, May 26, 2014
    #21

  2. Don Y

    Don Y Guest

    Hi Theo,

    Not sure I follow. :< It's a FP library? Overloaded operators?
    How does *it* know what's optimal? Or, is it an arbitrary precision
    library that *maintains* the required precision, dynamically (so
    you never "lose" precision).

    Is this something homegrown or is it commercially available?
    (I imagine it isn't particularly fast?)

    [One of the drool factors of C++'s syntax IMO is operator overloading.
    It's *so* much nicer to be able to read something in infix notation
    even though the underlying data representation may be completely
    different from what the user thinks!]
    Exactly. Find a way to move things into the fixed point realm (not
    necessarily integers) just to take advantage of the native integer
    operations in all processors.
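
    A minimal sketch of what I mean by the infix payoff (made up here, not
    any particular library): a thin fixed-point wrapper whose overloaded
    operators let the call site read as ordinary math while everything
    underneath is scaled-integer arithmetic.

    #include <cstdint>
    #include <iostream>

    template <int FRAC>                     // FRAC = number of fractional bits
    struct Fixed {
        int32_t raw;                        // value * 2^FRAC
        static Fixed from(double d) { return Fixed{int32_t(d * (1 << FRAC))}; }
        double to_double() const { return double(raw) / (1 << FRAC); }

        Fixed operator+(Fixed o) const { return Fixed{raw + o.raw}; }
        Fixed operator-(Fixed o) const { return Fixed{raw - o.raw}; }
        Fixed operator*(Fixed o) const {    // widen, multiply, rescale
            return Fixed{int32_t((int64_t(raw) * o.raw) >> FRAC)};
        }
    };

    int main() {
        using Q16 = Fixed<16>;
        Q16 gain = Q16::from(1.2), x = Q16::from(3.5), offset = Q16::from(0.25);
        Q16 y = gain * x + offset;          // reads as math; runs as integer ops
        std::cout << y.to_double() << "\n"; // ~4.45 (1.2*3.5 + 0.25)
        return 0;
    }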

    But, often this requires a fair bit of magic and doesn't stand up
    to "casual" maintenance (because folks don't bother to understand
    the code they are poking with a stick to "maintain").

    How do *you* avoid these sorts of screwups? Or, do you decompose
    the algorithms in such a way that there are no sirens calling to
    future maintainers to "just tweek this constant" (in a way that
    they don't full understand)?

    Thx,
    --don
     
    Don Y, May 26, 2014
    #22

  3. It's a fixed point library that follows the precision through the
    calculations (so if you multiply a 4.12 number by a 10.6 number you get a
    14.18 number, which then propagates through), and takes note of where any
    over/underflows occur. Each variable is constrained (eg voltage = 0 to
    100mV, decided by the limits of the physical system being modelled) so we
    know what 'sensible' values are.

    So you can run your algorithm in testing mode (slowly) and see what happens
    to the arithmetic, and then flip the switch that turns off all the checks
    when you want to run for real.
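
    (Roughly the flavour of the thing -- this sketch is mine, not the actual
    library, and the names and ranges below are invented -- but it shows the
    shape: the Q format rides along in the type, so a 4.12 times a 10.6
    comes out as a 14.18, and the range checks compile away when you flip
    the switch.)

    #include <cassert>
    #include <cstdint>

    #ifndef FIXED_CHECKS        // define as 0 to "flip the switch" for the real run
    #define FIXED_CHECKS 1
    #endif

    template <int INT_BITS, int FRAC_BITS>
    struct Q {
        int64_t raw;            // value * 2^FRAC_BITS

        static Q from(double d, double lo, double hi) {
    #if FIXED_CHECKS
            assert(d >= lo && d <= hi);   // physical limits, e.g. 0 to 100mV
    #else
            (void)lo; (void)hi;           // checks switched off
    #endif
            return Q{int64_t(d * double(int64_t(1) << FRAC_BITS))};
        }

        // Q(a.b) * Q(c.d) -> Q(a+c . b+d): the format follows the arithmetic.
        template <int I2, int F2>
        Q<INT_BITS + I2, FRAC_BITS + F2> operator*(Q<I2, F2> o) const {
            static_assert(INT_BITS + I2 + FRAC_BITS + F2 <= 62,
                          "result no longer fits the 64-bit carrier");
            return Q<INT_BITS + I2, FRAC_BITS + F2>{raw * o.raw};
        }

        double to_double() const {
            return double(raw) / double(int64_t(1) << FRAC_BITS);
        }
    };

    int main() {
        auto v = Q<4, 12>::from(3.3, 0.0, 8.0);      // hypothetical variable + limits
        auto i = Q<10, 6>::from(250.0, 0.0, 500.0);  //   (not from the real system)
        auto p = v * i;                              // deduced as Q<14, 18>, ~825.0
        return p.to_double() > 0.0 ? 0 : 1;
    }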
    I think it's intended to be open source, but he's currently away so I'm
    unclear of current status.
    Our answer is to save the scientists (in this case) from writing low-level
    code. Give them a representation they understand (eg differential equations
    with units), give them enough knobs to tweak (eg pick the numerical
    methods), and then generate code for whatever target platform it's intended
    to run on. CS people do the engineering of that, scientists do their
    science (which we aren't domain experts in). That means both ends of the
    toolchain are more maintainable.

    Theo
     
    Theo Markettos, May 26, 2014
    #23
  4. Don Y

    Don Y Guest

    Hi Theo,

    So, each variable is tagged with its Q notation? This is evaluated by
    run-time (conditionally enabled?) code? Or, compiler conditionals?
    So, the "package" is essentially examining how the limits on
    each value/variable -- in the stated Q form -- are affected by
    the operations? E.g., a Q7.9 value of 100 can't be SAFELY doubled
    (without moving to Q8.8 or whatever).
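
    (Spelling that example out, assuming an unsigned 16-bit container for
    the Q7.9 value -- other conventions move the numbers around, but not
    the conclusion.)

    #include <cassert>
    #include <cstdint>

    int main() {
        const uint16_t x_q7_9  = 100u << 9;              // 100.0 in Q7.9 (max ~127.998)
        const uint32_t doubled = uint32_t(x_q7_9) * 2u;  // 200.0, held in a wider type
        assert(doubled > UINT16_MAX);                    // does NOT fit back into Q7.9
        const uint16_t y_q8_8  = uint16_t(doubled >> 1); // 200.0 re-tagged as Q8.8 fits
        assert((y_q8_8 >> 8) == 200);                    // integer part reads back as 200
        return 0;
    }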

    The developer, presumably, frowns when told of these things and
    makes adjustments, accordingly (to Q form, limits, operators, etc.)
    to achieve what he needs?
    "Slowly" because you just want to see the "report" of each operation
    (without having to develop yet another tool that collects that data
    from a run and reports it ex post facto).
    OK. I'd be curious to see even a "specification" of its functionality,
    if he consents (if need be, via PM).
    So, you expose the wacky encoding formats to the "scientists" (users)?
    Or, do you have a set of mapping functions on the front and back ends
    that make the data "look nice" for the users (so they don't have to
    understand the formats)?

    But, you still have the maintenance issue -- how do you keep "CS folks"
    from making careless changes to the code without fully understanding
    what's going on? E.g., *forcing* them to run their code in this
    "test mode" (to ensure no possibility of overflow even after they
    changed the 1.2 scalar to 1.3 in the example above)?

    [Or, do you just insist on competence?]
     
    Don Y, May 26, 2014
    #24
  5. Temps are just getting back to normal (60-70F) - it's been a cold
    Spring. Unlike most of the East, we're running about average for wet
    weather here: predictably rains every trash day and most weekends :cool:

    The discussion typically starts with "IEEE-754 is broken because ..."
    and revolves around the design of the FPU and the memory systems to
    feed it: debating implementing an all-up FPU with fixed operation vs
    exposing the required sub-units and allowing software to use them as
    desired - and in the process getting faster operation, better result
    consistency, permitting different rounding and error modes, etc.

    This is a frequently recurring topic in c.a - if you poke around a bit
    you'll find quite a number of posts about it.

    George
     
    George Neuner, May 27, 2014
    #25
