A tuned application will reach a balance between the number/rate of interrupts processed and the amount of real work that gets done, for example by processing multiple packets per interrupt rather than one. Even spinning applications can benefit from the occasional interrupt. For example, when a spinning thread has been de-scheduled from a CPU, a timeout interrupt will prod the thread back into action after 250 µs.
```
# onload_stackdump lots | grep ^interrupt
```
| Counter | Description |
|---|---|
| Interrupts | Total number of interrupts received for the stack. |
| Interrupt polls | Number of times the stack was polled as a result of an interrupt. |
| Interrupt evs | Number of events processed during interrupt-invoked polls. |
| Interrupt wakes | Number of times the application was woken by an interrupt. |
| Interrupt primes | Number of times interrupts were re-enabled (after spinning or polling the stack). |
| Interrupt no events | Number of interrupt-invoked stack polls that found no events to recover. |
| Interrupt lock contends | Number of times an interrupt fired while the application was already polling the stack and holding the stack lock. |
| Interrupt budget limited | Number of times an interrupt-invoked poll was stopped because the NAPI budget was reached. Any remaining events are then processed on the stack workqueue. |
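To judge the interrupt rate rather than the absolute totals, the counters can be sampled twice a fixed interval apart and compared; the 10 s interval below is illustrative.

```
# Sample the interrupt counters twice, 10 s apart, and compare the values to
# estimate the interrupt rate for the stack.
onload_stackdump lots | grep ^interrupt
sleep 10
onload_stackdump lots | grep ^interrupt
```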
Solution
If an application is observed taking a large number of interrupts, it might be beneficial to increase the spin time with the EF_POLL_USEC variable or to set a higher interrupt moderation value for the net driver using ethtool.
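As a sketch, assuming an application binary named my_app and an interface named eth2 (both illustrative), spinning and interrupt moderation could be adjusted as follows; the values shown are starting points to be tuned for the workload.

```
# Spin for up to 100000 us before blocking, rather than taking an interrupt
# for every wakeup (value is illustrative).
EF_POLL_USEC=100000 onload ./my_app

# Raise the adapter's interrupt moderation so that interrupts are batched
# (interface name and value are illustrative).
ethtool -C eth2 rx-usecs 60
```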
You should also ensure that the application's CPU cores are isolated to avoid de-scheduling. If it is not possible to isolate the cores, consider switching to interrupt mode.
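A minimal sketch of both options, again with illustrative core numbers and application name:

```
# Isolate cores 2 and 3 from the general scheduler (kernel boot parameter;
# requires a reboot), then pin the Onload application onto them.
#   isolcpus=2,3
taskset -c 2,3 onload ./my_app

# If isolation is not practical, disable spinning so the stack runs in
# interrupt-driven mode instead.
EF_POLL_USEC=0 onload ./my_app
```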
The number of interrupts on the system can also be identified from /proc/interrupts.
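For example (interface name is illustrative):

```
# Show per-CPU interrupt counts for the adapter's queues; re-run after a few
# seconds to see how quickly they grow.
grep eth2 /proc/interrupts
```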