Interrupt Management in an Embedded System



In this article, I describe generic interrupts as well as the NMI, which is the most crucial interrupt in any responsive real-time (as well as non-real-time) system. This article is not a guidebook on interrupt management in an embedded system; rather, it is an attempt to highlight some important aspects of interrupt handling. The article does not endorse or favour any particular system configuration. It should be useful for developers working on soft embedded systems such as mobile communication and digital multimedia.


Generic Interrupts

Every interrupt is an exquisite ballet of hardware and software, each interacting in a complex way to produce a very simple result. The interrupt controller generally manages multiple interrupt priorities, resolving conflicts among competing service requests, and presents the most important one to the processor.

The processor responds to a level-sensitive interrupt only if it is asserted during the window in which the microcode samples the signal; a very short pulse falling outside that window will be missed. With edge-sensitive interrupts, the CPU latches the rising or falling edge of the pulse, saving the transition until the machine is ready for it.

Every edge-sensitive interrupt will therefore be serviced, no matter how short its duration.
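As a purely illustrative example, the fragment below configures one line of a hypothetical memory-mapped interrupt controller for edge triggering; the register names and addresses are invented for the sketch and will differ on any real part.

    #include <stdint.h>

    /* Hypothetical memory-mapped interrupt controller registers. */
    #define INTC_BASE       0x40010000u
    #define INTC_TRIG_MODE  (*(volatile uint32_t *)(INTC_BASE + 0x04)) /* 1 = edge, 0 = level */
    #define INTC_ENABLE     (*(volatile uint32_t *)(INTC_BASE + 0x08))

    /* Configure interrupt line 'irq' for edge triggering. An edge-
     * triggered line latches even a short pulse; a level-triggered
     * line must still be asserted when the CPU samples it. */
    static void irq_config_edge(unsigned irq)
    {
        INTC_TRIG_MODE |= (1u << irq);   /* select edge sensitivity */
        INTC_ENABLE    |= (1u << irq);   /* unmask the line */
    }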

In the case of 80x86 processors, when an interrupt is asserted the processor completes the current instruction and pushes the flag register and the program counter (consisting of a 16-bit segment and a 16-bit offset). The interrupting device supplies an 8-bit vector which, when multiplied by 4, forms a pointer into the interrupt table. The processor disables all maskable interrupts and branches to the ISR whose address is found in the corresponding table entry.
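To make the arithmetic concrete: the interrupt table occupies the first 1 KB of memory, 256 entries of 4 bytes each (the 16-bit offset followed by the 16-bit segment). The small sketch below just performs the lookup computation described above.

    #include <stdint.h>

    /* Address of an 8086/8088 interrupt table entry: the 8-bit
     * vector multiplied by 4. Vector 0x08, for example, points
     * to the entry at physical address 0x0020. */
    uint32_t vector_table_address(uint8_t vector)
    {
        return (uint32_t)vector * 4u;
    }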



Non-Maskable Interrupt

An NMI will interrupt the CPU at any time. Power failure is such an important system event that the NMI is the only option for notifying the software of its impending demise. The edge-sensitive nature of the NMI, however, renders it susceptible to every stray bit of electrical noise.
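A power-fail NMI handler must do the bare minimum before the supply collapses. The sketch below assumes a hypothetical save_critical_state() routine and is illustrative only.

    /* Hypothetical routine that copies vital variables to
     * nonvolatile storage while the supply capacitors hold up. */
    extern void save_critical_state(void);

    /* Power-fail NMI handler: stash state, then spin and wait
     * for the power-monitoring circuit to clamp the reset line. */
    void nmi_power_fail_handler(void)
    {
        save_critical_state();
        for (;;)
            ;
    }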

No power-monitoring circuit is fail-safe unless it clamps the reset line whenever the supply drops below the magic 4.75 V level.

Most problems with interrupts lie in one of four areas: completion, latency, stack corruption, and reentrancy. With good reentrant design, interrupts should never be disabled for more than a few tens of microseconds. The key to minimizing latency in any system is to minimize the time during which interrupts are disabled; instead of disabling them, the software developer can use semaphores, which, while admittedly slower, keep interrupts enabled.
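As a minimal sketch, assuming a generic RTOS semaphore API (sem_wait() and sem_post() here are placeholders for your kernel's own primitives), a shared structure can be guarded without ever touching the interrupt mask:

    #include <stdint.h>

    typedef struct semaphore semaphore_t;    /* kernel-defined */
    extern void sem_wait(semaphore_t *s);
    extern void sem_post(semaphore_t *s);

    extern semaphore_t *log_lock;
    static uint32_t log_buf[64];
    static unsigned log_count;

    /* Guard shared data with a semaphore rather than by disabling
     * interrupts: slower, but interrupt latency is unaffected. */
    void log_sample(uint32_t sample)
    {
        sem_wait(log_lock);                  /* interrupts stay enabled */
        if (log_count < sizeof log_buf / sizeof log_buf[0])
            log_buf[log_count++] = sample;
        sem_post(log_lock);
    }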

Another option is a forked interrupt system, which can be used when an interrupt starts some very complex activity. Each ISR is split into two parts. The first part handles only what must be taken care of as soon as the interrupt is issued: acknowledging the interrupt, handling the device, and the like. The second part takes care of lower-tier activities, such as processing the input from the device. In Linux these are referred to as the top half and the bottom half, respectively.
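A minimal sketch of the split, for a hypothetical UART (the register address and the process_input() routine are invented for the example): the top half runs in interrupt context and only captures the data, while the bottom half is polled from the main loop.

    #include <stdbool.h>
    #include <stdint.h>

    #define UART_DATA (*(volatile uint8_t *)0x40020000u)  /* invented address */

    extern void process_input(uint8_t b);    /* hypothetical application routine */

    static volatile uint8_t rx_byte;
    static volatile bool    rx_pending;

    /* Top half: runs in interrupt context and does only what cannot
     * wait -- reading the data register also acknowledges the device. */
    void uart_isr(void)
    {
        rx_byte    = UART_DATA;
        rx_pending = true;
    }

    /* Bottom half: called from the main loop; handles the slow work.
     * A real driver would use a ring buffer to avoid losing bytes. */
    void uart_bottom_half(void)
    {
        if (rx_pending) {
            uint8_t b  = rx_byte;            /* copy before clearing the flag */
            rx_pending = false;
            process_input(b);
        }
    }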

Interrupts generally affect only the system stack and automatically drive the processor into system mode. A more insidious form of stack corruption plays havoc with fast, miscoded interrupts: if they arrive so quickly that no ISR has a chance to run to completion, the stack will eventually overflow as service requests back up. Usually the mainline code can be non-reentrant. These rules define the reentrancy requirements (the sketch after the list illustrates rule 1):
  1. No routine can call itself or be called by multiple threads unless it is reentrant.

  2. Do not call a non-reentrant subroutine from more than one place if it is possible that an interrupt or recursive call could make two or more calls active at one time.

  3. Never use self modifying code, unless you can guarantee that only one incarnation of the code will be active at a time.
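
The sketch below (a made-up hex-formatting routine) shows the difference rule 1 is guarding against: the first version shares one static buffer among all callers, while the second lets each caller supply its own storage.

    #include <stdint.h>

    /* Non-reentrant: every caller shares the same static buffer, so a
     * second activation (from an ISR or another thread) corrupts the
     * first caller's result mid-conversion. */
    char *to_hex_bad(uint8_t v)
    {
        static char buf[3];
        static const char digits[] = "0123456789ABCDEF";
        buf[0] = digits[v >> 4];
        buf[1] = digits[v & 0x0F];
        buf[2] = '\0';
        return buf;
    }

    /* Reentrant: the caller supplies the storage, so every
     * activation works on its own private buffer. */
    void to_hex_good(uint8_t v, char out[3])
    {
        static const char digits[] = "0123456789ABCDEF";
        out[0] = digits[v >> 4];
        out[1] = digits[v & 0x0F];
        out[2] = '\0';
    }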
