This forum section is a read-only archive of old newsgroup posts.

Worst case execution time problem

Discussion in 'Embedded' started by Christian Christmann, Dec 29, 2006.

  1. Hi,

    In real-time systems, the worst-case execution time (WCET) is an
    important issue. In the literature I found the statement that its
    calculation is undecidable in general. Why? I would appreciate a
    detailed explanation.

    Regards,
    Chris
     
    Christian Christmann, Dec 29, 2006
    #1

  2. Because _in_general_, it's even impossible to decide whether a given
    program will *ever* finish. This is known as the Halting Problem, and
    its impossibility was proven by Turing long before most of us were born.

    If you have to be able to calculate WCET and be sure about the result,
    that requirement restricts the choice of programs you can write. I.e.
    it's a loss of generality.
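
    For illustration, here is a minimal C sketch of the connection: the
    iteration count of this loop depends on the Collatz conjecture, so
    bounding its WCET for arbitrary input would settle an open
    mathematical problem, which no general-purpose tool can do.

        #include <stdint.h>

        /* Collatz iteration: nobody has proven that this loop
           terminates for every n >= 1, let alone how many iterations
           it takes, so its WCET cannot be bounded in general. */
        uint32_t collatz_steps(uint64_t n)
        {
            uint32_t steps = 0;
            if (n == 0)
                return 0;                  /* guard the degenerate input */
            while (n != 1) {
                n = (n & 1u) ? 3u * n + 1u : n / 2u;
                steps++;
            }
            return steps;
        }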
     
    Hans-Bernhard Bröker, Dec 29, 2006
    #2

  3. Because it is not yet even known generally if Saddam Hussein is in U.S.
    or Iraqi custody.
     
    larwe, Dec 29, 2006
    #3
  4. I suspect that the OP posed a more general question than he intended ...

    Hans, ... in general the problem is undecidable, but for most specific
    practical examples in embedded work you can just examine all the branch
    paths and loops and find a maximum.

    The undecidability result doesn't apply to most practical applications
    that occur in embedded systems.

    http://en.wikipedia.org/wiki/Worst_case_execution_time

    http://en.wikipedia.org/wiki/Halting_problem

    http://en.wikipedia.org/wiki/Alan_Turing

    ---------

    P.S.--Turing and von Neumann are legends.

    http://en.wikipedia.org/wiki/Von_Neumann

    There is an anecdotal story (a variation of it is here:

    http://thesciencepundit.blogspot.com/2006/07/john-von-neumann-and-mathematicians.html

    ) that a reporter posed the classic bee-and-trains problem to von Neumann
    at a party. von Neumann was said to have thought for less than ten seconds
    before replying with the answer. The reporter asked, "I guess you used
    the shortcut solution?" von Neumann answered, "What shortcut solution?"
    (i.e. he had evaluated the infinite series in his head, and it never
    occurred to him to just take the time before collision and multiply it
    by the bee's speed).
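
    For the curious, a short C sketch that computes the answer both ways,
    using illustrative numbers (trains 100 miles apart at 50 mph each,
    bee at 75 mph); the series and the shortcut agree:

        #include <stdio.h>

        int main(void)
        {
            double v_train = 50.0, v_bee = 75.0;
            double pos_a = 0.0, pos_b = 100.0;   /* train positions  */
            double bee = 0.0, dist = 0.0;        /* bee starts on A  */
            int toward_b = 1;

            /* Sum the infinite series of back-and-forth legs. */
            while (pos_b - pos_a > 1e-9) {
                double gap = toward_b ? pos_b - bee : bee - pos_a;
                double t = gap / (v_bee + v_train);  /* time to meet */
                dist += v_bee * t;
                pos_a += v_train * t;
                pos_b -= v_train * t;
                bee = toward_b ? pos_b : pos_a;
                toward_b = !toward_b;
            }

            /* The shortcut: time to collision times the bee's speed. */
            double shortcut = 100.0 / (2.0 * v_train) * v_bee;

            printf("series:   %.6f miles\n", dist);
            printf("shortcut: %.6f miles\n", shortcut);
            return 0;
        }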
     
    David T. Ashley, Dec 30, 2006
    #4
  5. Oo, ouch.

    Steve
    (applauding the quick wit while wincing)
    http://www.fivetrees.com
     
    Steve at fivetrees, Dec 30, 2006
    #5
  6. If one wants to create structured, modular code, timing IS
    deterministic. But I code it into the kernel, so the kernel
    calculates it... it knows exactly when to run every task...

    Even in the von Neumann memory module of an ARM7, it is
    deterministic, if you discipline yourself to structured code.

    But you must not do this yourself; let the kernel calculate and
    control all the idle time. Computers aren't like humans; they
    won't "create" idle time, like a Trade Unionist!!

    BTW, coming soon to a theater near you: a free OpSys for ARM.
    It's so structured that it takes minutes to figure out everything
    in the kernel, and then you can use those patterns to disassemble
    any code!!
     
    werty, Dec 30, 2006
    #6
  7. I don't think so. He asked why he found what he did in the literature.
    And the reason for that is what I told him: it's because the statement
    he found is correct; it's a corollary of the Halting Theorem.

    There's a fiercely dangerous dragon hiding behind that little word
    "just". There will often be way more branch paths and loops to examine
    than expected, turning this innocent-looking "just" into one heck of a
    lot of work.
     
    Hans-Bernhard Bröker, Dec 30, 2006
    #7
  8. However, if you keep functions short and understandable (meaning
    simple) it often is fairly easy to characterize them for max
    execution time, possibly in terms of their parameters. Then this
    can be used, in turn, for the calling functions. This process will
    expose the 'undecidable' routines and will often suggest
    modifications to make such routines decidable. The use of global
    variables can easily defeat this effort.
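
    A minimal sketch of that bottom-up style, with purely illustrative
    cycle counts (the names and numbers here are hypothetical, not from
    any particular part):

        #include <stdint.h>

        #define LOOP_BODY_CYCLES     12u   /* measured, or from datasheet */
        #define CALL_OVERHEAD_CYCLES  8u

        /* A short, simple function: its loop is bounded by the
           parameter n, so its WCET is easy to characterize. */
        static uint32_t checksum(const uint8_t *buf, uint32_t n)
        {
            uint32_t sum = 0;
            for (uint32_t i = 0; i < n; i++)
                sum += buf[i];
            return sum;
        }

        /* WCET of checksum(), expressed in terms of its parameter. */
        static uint32_t checksum_wcet(uint32_t n)
        {
            return CALL_OVERHEAD_CYCLES + n * LOOP_BODY_CYCLES;
        }

        /* A caller that never passes n > 64 inherits a bound by
           simple composition. */
        static uint32_t frame_handler_wcet(void)
        {
            return CALL_OVERHEAD_CYCLES + checksum_wcet(64);
        }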
     
    CBFalconer, Dec 30, 2006
    #8
  9. Tim Wescott, Dec 30, 2006
    #9
  10. And oddly enough, nothing else in the world has changed, for better or
    worse.
     
    larwe, Dec 30, 2006
    #10
  11. Ulf Samuelsson, Dec 30, 2006
    #11
  12. Regardless of that, it looks like the time of execution can't be any worse.

    VLV
     
    Vladimir Vassilevsky, Dec 30, 2006
    #12
  13. True, unlike the U.S. judicial system, wherein EVERY non-masked event
    is a high-priority interrupt and every single task contains
    interminable, nonpreemptible busy-waits with interrupts disabled.
     
    larwe, Dec 30, 2006
    #13
  14. ...and then come interrupts, delays waiting for hardware devices,
    unpredictable cache effects, ...
     
    Guest, Jan 1, 2007
    #14
  15. Everything is undecidable in general. But a real hunk of well-designed
    embedded code will have a fairly obvious worst-case path, and its
    execution time can be calculated or measured. If its execution time is
    undecidable, it's bad code and doesn't belong in an embedded system.

    I generally use an oscilloscope. Pull up a test point at the start of
    a routine, drop it at the end, set the scope to infinite persistence,
    and go away for a while. Do whatever things cause all the various
    paths to be taken, and observe the various execution times.
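
    A minimal sketch of the test-point idiom, assuming a memory-mapped
    GPIO with set/clear registers (the addresses and bit position are
    illustrative; they depend entirely on the MCU):

        #include <stdint.h>

        #define GPIO_SET  (*(volatile uint32_t *)0x40020018u)
        #define GPIO_CLR  (*(volatile uint32_t *)0x4002001Cu)
        #define TESTPOINT (1u << 5)

        void routine_under_test(void)
        {
            GPIO_SET = TESTPOINT;   /* pin high at entry */

            /* ... the code being measured ... */

            GPIO_CLR = TESTPOINT;   /* pin low at exit: the pulse width
                                       on the scope is the execution
                                       time of this path */
        }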

    John
     
    John Larkin, Jan 1, 2007
    #15
  16. I disagree. Counter-example: a thermostat. Design it with a
    timer interrupt, and a watchdog for the interrupt. On interrupt,
    it examines the setpoint and the actual temperature, and turns the
    heater on/off. So far all this code has worst case execution
    times. However, it is normally in the 'set' state, awaiting keypad
    input for the setpoint. The timing of this routine is totally
    unknown; it depends on external happenings. This may be a state
    machine (await 1st dig, 2nd dig, ..., backup, cancel, execute).

    This can be a very solid design, and execute in a few hundred bytes
    of object code, and use very little storage, i.e. ideal for a
    PIC. The problems will be in the keypad reading code, which may
    need another state machine. It can easily be gussied up to display
    current setpoint, current temp, partial input value. Or to use
    up/down setpoint adjustment buttons, for a simpler but more awkward
    interface.
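
    A sketch of that split, with hypothetical I/O helpers standing in
    for the real hardware access:

        #include <stdbool.h>
        #include <stdint.h>

        extern int16_t read_temperature(void);   /* hypothetical */
        extern void    heater_enable(bool on);   /* hypothetical */
        extern void    watchdog_kick(void);      /* hypothetical */

        static volatile int16_t setpoint = 200;  /* tenths of a degree */

        /* Timer interrupt: every path through here is short and has
           a worst-case execution time. */
        void timer_isr(void)
        {
            watchdog_kick();
            heater_enable(read_temperature() < setpoint);
        }

        /* The 'set' state machine, by contrast, advances one bounded
           step per keypress; its wall-clock timing is unknowable. */
        enum set_state { AWAIT_DIG1, AWAIT_DIG2, AWAIT_DIG3 };

        void keypad_step(enum set_state *s, uint8_t digit)
        {
            static int16_t pending;
            pending = (int16_t)(pending * 10 + digit);
            if (*s == AWAIT_DIG3) {        /* sequence complete */
                setpoint = pending;
                pending  = 0;
                *s = AWAIT_DIG1;
            } else {
                *s = (enum set_state)(*s + 1);
            }
        }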
     
    CBFalconer, Jan 1, 2007
    #16
  17. Au contraire, my friend.
    When I have done this type of code,
    I connect the keypad to a multi-input wakeup interrupt
    and, after debouncing, send a message to a keyboard decoding task
    which implements a state machine.
    If nothing happens, the CPU goes to sleep.
    All code in the state machine exhibits a worst-case execution time.
    The CPU spends most of its time asleep.

    You may argue that I still do not know how long
    it takes for a complete key sequence to be entered,
    but if you think about WHY you need to calculate
    the worst-case execution time, you realize
    that in a well-behaved thermostat system you should not have to worry
    about the timing of user input.
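
    A sketch of that structure, with hypothetical queue and sleep
    primitives (queue_get, queue_put, mcu_sleep and debounce are
    illustrative names, not a real API):

        #include <stdbool.h>
        #include <stdint.h>

        extern void    mcu_sleep(void);          /* wait for interrupt */
        extern bool    queue_get(uint8_t *key);  /* false when empty   */
        extern void    queue_put(uint8_t key);
        extern uint8_t debounce(uint8_t raw);    /* 0 until stable     */

        /* Wakeup interrupt: short and bounded, it just posts the
           debounced key to the decoding task. */
        void keypad_wakeup_isr(uint8_t raw)
        {
            uint8_t key = debounce(raw);
            if (key)
                queue_put(key);
        }

        /* Decoding task: every transition has a worst-case execution
           time, even though a complete key sequence takes an unknown
           amount of wall-clock time. */
        void keypad_task(void)
        {
            enum { IDLE, DIGIT, CONFIRM } state = IDLE;
            uint8_t key;

            for (;;) {
                while (!queue_get(&key))
                    mcu_sleep();        /* nothing pending: sleep */

                switch (state) {        /* each case is short and bounded */
                case IDLE:    state = DIGIT;   break;
                case DIGIT:   state = CONFIRM; break;
                case CONFIRM: state = IDLE;    break;
                }
            }
        }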

    --
    Best Regards,
    Ulf Samuelsson

    This message is intended to be my own personal view and it
    may or may not be shared by my employer Atmel Nordic AB
     
    Ulf Samuelsson, Jan 1, 2007
    #17
  18. Either approach will work, but you are making some added
    assumptions. First, that power consumption is important. Second,
    that something is present to create that keypad interrupt. The
    system may need to be polling a set of lines for changes and
    keeping a history (via the one timer interrupt) to detect
    keyup/down/debounce etc. The up/down setpoint version needs only
    two input bits. I can see the thing needing only 5 pins, plus
    another for programming (2 input, 1 output, 2 power) before output
    display control. Your version will be complicated if an output
    display showing actual temp and setpoint is needed.

    In fact, my version can dispense with any interrupt, by simply
    polling the output of a timer. Less to go wrong = added
    reliability.
     
    CBFalconer, Jan 1, 2007
    #18
  19. You have a choice in situations like this: either do the keypad input
    as the mainline program and do the temperature control as an
    interrupt-driven state machine, or do both as interrupt-driven (or
    otherwise periodically executed) state machines. In the first case,
    the mainline routine literally runs forever (is "forever" decidable?)
    and we don't mind a bit. In the second case, each state machine runs
    for a predictable (and probably short) amount of time and then gets
    the hell out.

    My embedded products often follow a common structure: a mainline loop
    that does serial i/o (or some similar function which likes to be
    classic procedural code) and a timer-driven interrupt that dispatches
    to various state machine snippets on some sort of rotating basis,
    doing all the realtime stuff. You can scope the IRQ execution time and
    see all the state machines executing in progression, and see each
    execution time jump around depending on the path through each
    snippet... fun to watch. A robust design won't care if an occasional
    snippet runs too long and a periodic interrupt is missed, although
    most designs try to avoid this situation.
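
    A bare-bones sketch of that structure (the snippet functions are
    hypothetical placeholders):

        #include <stdint.h>

        static void uart_poll(void)  { /* classic procedural serial i/o */ }
        static void heater_sm(void)  { /* one short, bounded step */ }
        static void keypad_sm(void)  { /* one short, bounded step */ }
        static void display_sm(void) { /* one short, bounded step */ }

        static void (*const snippets[])(void) =
            { heater_sm, keypad_sm, display_sm };

        /* Periodic timer interrupt: run one state-machine snippet per
           tick, rotating through them; scope this ISR to watch each
           snippet's execution time jump around. */
        void timer_isr(void)
        {
            static uint8_t i;
            snippets[i]();
            i = (uint8_t)((i + 1u) % (sizeof snippets / sizeof snippets[0]));
        }

        int main(void)
        {
            for (;;)
                uart_poll();   /* the mainline runs forever, happily */
        }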

    Most deep-embedded things work very well this way and don't need an
    RTOS, queues, semaphores, or any of that fancy stuff that tends to
    make WCET hard or impossible to predict. WCET must be considered, but
    is often not a big deal if you keep the state blocks
    rip-right-through-simple and don't mind wasting a bit of available CPU
    power.

    Small state machines can do complex stuff. They tend to be more
    reliable than classic procedural code where the system "state" is the
    program counter's current location inside a maze of branches and
    subroutines. A state machine, essentially, keeps coming up for air.

    John
     
    John Larkin, Jan 1, 2007
    #19
  20. Yep, absolutely agreed.

    Just to add a bit of emphasis - nowadays I never have code that just waits
    for something to happen. It seems odd to me now that anybody would, in the
    same way that the use of globals seems odd to me now. Instead I use state
    machines and keep right on going.

    Steve
    http://www.fivetrees.com
     
    Steve at fivetrees, Jan 1, 2007
    #20
