Dynamic Tick Mode

Dynamic Tick Mode is a power-saving feature that was first introduced in µC/OS-III V3.05.00 and later revised in µC/OS-III V3.07.00. It is a compromise between the approaches outlined in the sections on Tickless Mode and Periodic Tick Mode in that it gives low-power applications the option to utilize timing services without the drawback of waking the system up on every tick.

Benefits

  • The full set of time services is available for use.
  • Power consumption is reduced by allowing the system to stay asleep through idle periods.

Implications

  • Support for this mode is more involved than for Periodic Tick Mode.
  • Round-robin scheduling cannot be used. A build error will be generated if it is enabled.

System Behavior

Compared to Periodic Tick Mode, the system behavior changes drastically while using Dynamic Tick. It more closely resembles an application using Tickless Mode with a hardware timer, as was covered in that section. The major difference between the two cases is that Dynamic Tick operates alongside the kernel's timing API. For code making use of those APIs, there should be no difference between a properly-implemented Dynamic Tick BSP and the simpler Periodic Tick BSP.

Figure - Small Delay in Dynamic Tick Mode

(1) The Tick interrupt now represents the passage of one or more ticks.

(2) Our system now sleeps throughout the duration of its idle period. It is only woken up when we have to ready the task. The power consumption of our system is no longer tied to the tick rate, but is instead determined by the frequency of task activity and other peripheral interrupts.

Dynamic Tick

Dynamic Tick requires a programmable hardware timer which can be configured to count a specified number of ticks. A down-counting timer can be used, but we will assume an up-counter for the remainder of this document. The OS needs some way to read and write to the timer, so it defines a new API to do this: OS_DynTickSet() and OS_DynTickGet(). However, this API is somewhat unique in that it is not implemented by the kernel, but instead left to the BSP. This gives the BSP developer some flexibility in the choice of time source, provided that certain restrictions are followed.

The timer must have the following characteristics:

  • A programmable ceiling value which:
    • Generates an interrupt on match
    • Sets a status bit on match
  • A counter that can be read while the timer is running
  • A frequency that is an integer multiple of the OS Tick rate.

For an optimal implementation of Dynamic Tick, the following constraints should be added, for reasons which will be discussed later in this section:

  • A timer frequency which matches the OS Tick frequency.
  • A register width which matches the width of the OS_TICK type (32-bit by default).

There is one other addition to the API, OSTimeDynTick(), which must be used in place of OSTimeTick(). The main difference between the two is that the new function allows the ISR to process multiple ticks at once.

Listing - OSTimeDynTick() pseudocode
void  OSTimeDynTick (OS_TICK  ticks)                        (1)
{
    OSTimeTickHook();                                       (2)
                                                            (3)
    Update the tick list by processing ticks;               (4)
}

(1) Unlike OSTimeTick() in Periodic Tick Mode, in the Dynamic Tick case our tick ISR may need to process several ticks per interrupt. The number of ticks to be processed is provided by the calling ISR code.

(2) The dynamic time tick ISR starts by calling a hook function, OSTimeTickHook(). The hook function allows the implementer of the µC/OS-III port to perform additional processing when a tick interrupt occurs. In turn, the tick hook can call a user-defined tick hook if its corresponding pointer, OS_AppTimeTickHookPtr, is non-NULL. The reason the hook is called first is to give the application immediate access to this time source.

(3) Another difference as compared to OSTimeTick() in Periodic Tick Mode is that we do not run any round-robin scheduling code since it is not supported with Dynamic Tick Mode.

(4) µC/OS-III calls OS_TickUpdate() to process several ticks. The tick list is updated, and any tasks whose timeouts or delays expire on this tick are made ready to run.
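The calling ISR described in (1) can be sketched as a small C example. This is a simplified illustration, not the actual port code: OSTimeDynTick() is replaced here by a stub so the sketch is self-contained, and the BSP-side names (BSP_TickISR, TmrElapsedTicks, ProcessedTicks) are our own.

```c
#include <stdint.h>

typedef uint32_t OS_TICK;

static OS_TICK  ProcessedTicks;              /* stand-in for the kernel's tick accounting */

static void  OSTimeDynTick (OS_TICK  ticks)  /* stub for the real kernel function */
{
    ProcessedTicks += ticks;
}

static OS_TICK  TmrElapsedTicks;             /* whole ticks counted by the (simulated) timer */

/* Sketch of a BSP tick ISR in Dynamic Tick Mode: report how many
 * whole ticks elapsed since the last interrupt to the kernel.      */
void  BSP_TickISR (void)
{
    /* A real handler would first acknowledge the timer's match interrupt. */
    OSTimeDynTick(TmrElapsedTicks);          /* report the elapsed ticks, N* */
    /* OS_TickUpdate(), invoked from OSTimeDynTick(), then issues the
     * next timeout request via OS_DynTickSet().                     */
}
```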

Dynamic Tick Model

The function shown above, OSTimeDynTick(), is only a small part of the Dynamic Tick API. There are many more details to cover, so we've decided to lay out the rest through a number of different cases. One detail is fundamental to the remainder of the examples, so we state it here: the Dynamic Tick API is always invoked either with interrupts disabled or from interrupt context. This prevents race conditions on the kernel's data structures and the hardware timer.

Case 1 - Requesting a Timeout

Dynamic tick is a best-effort service which fulfills timeout requests from the kernel. The service can only handle one request at any given time, so the kernel must ensure that the timeout it requests corresponds to the nearest task delay expiration or pend timeout.

Figure - Dynamic Tick - Case 1

(1) The kernel requests that the Dynamic Tick service issue a timeout after N ticks by invoking OS_DynTickSet().

(2) Unlike the generic kernel code, OS_DynTickSet() lies within the BSP and therefore knows the frequency and register width of the hardware timer. Based on those two properties, the hardware timer may not be capable of handling the entire timeout in one request. If that is the case, the timer is programmed to count as many ticks as it is able. This is why Dynamic Tick is described as a best-effort service.

(3) The timer is programmed to start counting up to T. The time this operation takes to complete will be less than or equal to N ticks.

(4) While the timer is counting, task execution proceeds normally. If there is no work to do at this time, the kernel can switch to the Idle Task and optionally enter low-power mode.

(5) When the interrupt finally triggers, the Tick ISR reports the number of ticks which have elapsed, N*, by calling OSTimeDynTick(). If our timer was capable of handling the entire timeout in one request, N* should be equal to N and OSTickUpdate() should free up the task which was set to expire at this time. If not, N* is less than N and the kernel's next request will take into account what was reported in OSTimeDynTick().

(6) With the previous request now complete, OSTickUpdate() implicitly generates the next request so that we avoid additional drift. This requires another call to OS_DynTickSet(), this time from interrupt context. Because OS_DynTickSet() is invoked from both kernel and interrupt context, it should always service the interrupt, configure the timer, and start the new countdown when called.
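The best-effort clamping described in (2) can be illustrated with a short C sketch. The constants and the simulated match register below are hypothetical BSP details (a 16-bit timer at 1 MHz with a 1 kHz tick rate); only OS_DynTickSet()'s role comes from the text, and a real implementation would also clear, acknowledge, and restart the hardware timer.

```c
#include <stdint.h>

typedef uint32_t OS_TICK;

/* Hypothetical BSP constants: 16-bit up-counter at 1 MHz, 1 kHz tick rate. */
#define TMR_MAX_COUNT        0xFFFFu
#define TMR_COUNTS_PER_TICK  1000u                   /* 1 MHz / 1 kHz       */
#define TMR_MAX_TICKS        (TMR_MAX_COUNT / TMR_COUNTS_PER_TICK)

static uint32_t  TmrMatchReg;                        /* simulated match register */

/* Best-effort timeout request: program as many ticks as the hardware
 * allows and report back how many were actually set. A request of 0
 * means "indefinite", so we count the maximum (see Case 3).         */
OS_TICK  OS_DynTickSet (OS_TICK  ticks)
{
    OS_TICK  set;

    if ((ticks == 0u) || (ticks > TMR_MAX_TICKS)) {
        set = TMR_MAX_TICKS;                         /* clamp to hardware limit */
    } else {
        set = ticks;
    }

    TmrMatchReg = set * TMR_COUNTS_PER_TICK;         /* whole ticks only        */
    /* A real port would clear the counter, acknowledge any pending
     * match interrupt, and restart the count here.                  */
    return (set);
}
```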


Case 2 - Overriding the Current Request

There are situations where the current request must be overridden before it completes. For example, assume Task1 delays for 50 ticks, then Task2 runs for 10 ticks and delays for 20 ticks. This situation is illustrated below.

Figure - Dynamic Tick - Case 2

(1) For the sake of discussion, let's say the system time is currently 0. Task1 wants to delay for 50 ticks from the current time, which causes the kernel to request a 50 tick timeout and place Task1 in the delayed state. While Task1 is delayed, the kernel schedules Task2 to run.

(2) After processing data for 10 ticks of time, Task2 wants to delay for 20 ticks. This means that Task2 needs to wake up at the 30 tick mark, which is earlier than Task1's wake-up. This makes the current timeout request invalid, and the kernel is forced to override it by simply calling OS_DynTickSet() with the new timeout value. In our example, the new request is for a 20 tick timeout.

(3) With both tasks delayed, our system can idle until the next timeout at the 30 tick mark. This is 20 ticks after the delay, which is what we wanted. Task2 is given control and, for the sake of discussion, pends indefinitely on a semaphore. With Task2 out of the way, we still need to satisfy the original request to delay Task1 until the 50 tick mark. As we saw with Case 1, OSTickUpdate() implicitly generates the next timeout request from interrupt context. While examining the tick list to process the elapsed ticks, it takes the opportunity to find the proper timeout value for the next task in the list and use it for the next call to OS_DynTickSet().

(4) With Task2 pending and Task1 delayed, our system is idle once again. The timer is counting another 20 ticks in the meantime.

(5) Finally, Task1 is made ready to run by the tick interrupt. Once again, OSTickUpdate() generates the next timeout request for any tasks in the tick list. If there are no more tasks in the list, it will request an indefinite timeout which is covered in Case 3.


One final note on this case: there are situations where an override is required because we've removed a task from the tick list prematurely. This usually happens when a task pending with timeout receives the object before time is up, but it may also be due to a PendAbort or a call to OSTimeDlyResume(). In any case, the situation is handled in a similar fashion. If, after the task is removed, the current timeout is too short, the kernel will override it with a longer timeout request.

Case 3 - Indefinite Timeouts

One function in the Dynamic Tick API has been left out of the discussion so far. OS_DynTickGet() allows the kernel to read the number of ticks counted towards the current timeout request. For example, if a 50 tick timeout request is issued, OS_DynTickGet() would return 0 at the start of the request and 50 at the end of the request. It should never return a value greater than 50.

The simplest use for OS_DynTickGet() is to read the System Time. In Periodic Tick Mode, the current system time was represented by a global variable, OSTickCtr. As its name suggests, it is a count of the number of ticks that the OS has processed since it started running. Dynamic Tick Mode throws a wrench into this scheme because we now have ticks that have elapsed (i.e. been counted by the timer) but have not been processed by the ISR. To remedy this situation, OSTimeGet() reads the number of ticks in this in-between state by calling OS_DynTickGet(). The value it reads is added to OSTickCtr to give the correct system time.
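The arithmetic above can be sketched in a few lines of C. This is a simplified stand-in, not the real kernel code: OS_DynTickGet() is stubbed to return a fixed value, and the real OSTimeGet() also guards the read with a critical section.

```c
#include <stdint.h>

typedef uint32_t OS_TICK;

static OS_TICK  OSTickCtr = 100u;   /* ticks already processed by the tick ISR */

/* Stub for the BSP's OS_DynTickGet(): ticks counted by the timer
 * toward the current request but not yet processed by the ISR.    */
static OS_TICK  OS_DynTickGet (void)
{
    return (7u);                    /* pretend 7 ticks are in the in-between state */
}

/* Simplified sketch of how OSTimeGet() forms the system time in
 * Dynamic Tick Mode: processed ticks plus unprocessed ticks.      */
OS_TICK  OSTimeGet (void)
{
    return (OSTickCtr + OS_DynTickGet());
}
```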

Figure - Dynamic Tick - Case 3 : OS_DynTickGet()

(1) The kernel assumes that the timer was started at some point and is currently counting. That assumption should always hold because, similar to the Periodic Tick, the Dynamic Tick must be started in the first task. In order to avoid losing time between starting the timer and our first delay, the initial timer configuration should be an indefinite delay, which is covered in the next example.

(2) When OS_DynTickGet() is called, it quickly reads the timer's count.

(3) It is possible that the count has overflowed (i.e. the timeout has been exceeded). In order for the kernel's timing logic to work correctly, we must never return a number of ticks which exceeds the full timeout. If extra time has passed unaccounted for, it contributes to the drift.

(4) If no overflow occurred, we are OK to convert the count back to ticks and return it to the kernel.
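Steps (2) through (4) can be sketched as a small C function. The timer state variables, the status flag, and the counts-per-tick constant are all illustrative assumptions about the hardware; only the clamping rule, never report more ticks than were requested, comes from the text.

```c
#include <stdint.h>

typedef uint32_t OS_TICK;

/* Simulated timer state; all names are illustrative. */
static uint32_t  TmrCount;           /* current counter value            */
static uint32_t  TmrMatch;           /* programmed ceiling (match) value */
static uint8_t   TmrMatchFlag;       /* status bit set on match          */

#define TMR_COUNTS_PER_TICK  1000u   /* assumed 1 MHz timer, 1 kHz tick  */

/* Sketch of OS_DynTickGet(): report elapsed whole ticks, but never
 * more than the ticks that were requested. Counts beyond the match
 * value go unaccounted for and simply contribute to drift.          */
OS_TICK  OS_DynTickGet (void)
{
    uint32_t  count;

    count = TmrCount;                     /* snapshot the running counter */
    if (TmrMatchFlag != 0u) {             /* overflowed past the ceiling? */
        count = TmrMatch;                 /* clamp to the full timeout    */
    }
    return (count / TMR_COUNTS_PER_TICK); /* counts -> whole ticks        */
}
```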


However, OS_DynTickGet() is not only used for reading the System Time; its primary purpose is to maintain it. Internally, the system's tick counter is updated whenever we process the tick ISR. If we look at the scenario outlined in Case 2 once again, we'll realize that the 10 ticks which elapsed before Task2 requested a delay were never added to our tick counter because the first ISR will service the new request of 20 ticks instead. We can confirm this by counting the ticks reported by the ISRs in the diagram; we should see 50, but the total is 40. To ensure that the system time is properly maintained, the kernel uses OS_DynTickGet() to make up the difference whenever it issues an override. While performing the second delay for Task2, the kernel internally calls OS_DynTickGet() and learns that 10 ticks have elapsed. It ensures that the tick counter is adjusted suitably.
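The override bookkeeping just described might look like the following sketch. The function name TickOverride and the simulated timer state are hypothetical; the essential step is crediting the already-elapsed ticks via OS_DynTickGet() before the replacement request is issued.

```c
#include <stdint.h>

typedef uint32_t OS_TICK;

static OS_TICK  OSTickCtr;            /* processed ticks (system time)     */
static OS_TICK  ElapsedTicks;         /* ticks counted toward the current  */
                                      /* request (simulated)               */
static OS_TICK  ProgrammedTicks;      /* last value handed to the timer    */

static OS_TICK  OS_DynTickGet (void)
{
    return (ElapsedTicks);
}

static OS_TICK  OS_DynTickSet (OS_TICK  ticks)
{
    ProgrammedTicks = ticks;          /* restart the (simulated) timer     */
    ElapsedTicks    = 0u;
    return (ticks);
}

/* Hypothetical sketch of the override path: before replacing the
 * current request, credit the ticks that already elapsed so the
 * system time does not fall behind.                                 */
void  TickOverride (OS_TICK  new_timeout)
{
    OSTickCtr += OS_DynTickGet();     /* account for elapsed ticks         */
    (void)OS_DynTickSet(new_timeout); /* issue the replacement request     */
}
```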


There is one more case to consider, which is the culmination of our discussion surrounding OS_DynTickGet(). How does Dynamic Tick Mode handle the situation when no tasks are waiting for a timeout (i.e. the tick list is empty)? It turns out that OS_DynTickSet() sets aside 0 for just this case: an indefinite timeout. However, this tells us nothing about how an indefinite timeout is actually handled.

One possibility would be to simply disable the timer until we get a new request. The problem is that we would lose time-keeping altogether, so our tick counter would no longer reflect the actual number of ticks that have elapsed. What we actually need is for our timer to keep counting until we're ready to issue a new timeout request. Since that period could, in theory, exceed the capacity of any real timer, this solution cannot be realized on actual hardware. However, this is of little concern because Dynamic Tick is a best-effort service, and we've already seen how it handles requests that are too large to realize on the timer; this is just a special case of the same idea.

Figure - Dynamic Tick - Case 3

(1) With our tick list empty, a request is made for an indefinite timeout by passing 0 to OS_DynTickSet().

(2) Our best effort to count indefinitely is to count as many ticks as our hardware allows. We need to explain another detail which was only hinted at earlier in the discussion. There is an important difference between the maximum count of a timer and the maximum tick count of a timer. This is best illustrated through example.

Suppose we have a 16-bit timer which runs at 1 MHz and a kernel tick rate of 1 kHz. The maximum count of that timer is 65,535, but this only represents 65 whole ticks because we need 1000 timer counts to match the period of a single tick. The 535 additional counts would not equate to a complete tick and would be truncated due to integer division when converting from timer counts to ticks. However, if 65,535 was the value we placed in our timer at the start of the indefinite timeout then our ISR would only fire after the timer reaches that value. The ISR can only report an integral number of ticks to the kernel, so time is only advanced by 65 ticks and the fractional tick is disregarded. This has the undesirable effect of extending the time value of the final tick, in this case by about 50%. In order to avoid this, we must always configure the timer with a value that represents an integral number of ticks. The diagram provides a simple formula for calculating the maximum number of ticks that a timer can hold. It works by subtracting the fractional portion out of the maximum count. For the case outlined here, the maximum count should be 65,000.

(3) The count value from Step 2 is loaded into the timer.

(4) While the timer is running, we may execute other tasks or idle. If a task were to request a delay or timed pend during this period, it would most likely cause the kernel to override the indefinite timeout. In that case, OS_DynTickGet() would be used to record the number of ticks that elapsed during the indefinite timeout. For our purposes, we will assume that no tasks requested a delay or a pend with timeout.

(5) Once we've reached the timer's maximum tick count, we have no choice but to service the interrupt as we normally would. Even though no tasks will be woken up, the ISR ensures that the tick counter is maintained.

(6) Since the tick list is still empty, the kernel's next request is for another indefinite timeout. The indefinite timeouts will continue until a task is inserted into the tick list by means of a kernel API call.
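The maximum-tick-count formula worked through in (2) can be checked with a few lines of C. The function name and parameters below are ours, not part of the kernel or BSP API.

```c
#include <stdint.h>

/* Largest timer count that represents an integral number of ticks:
 * integer division discards the fractional tick, and multiplying
 * back restores a count that is a whole number of ticks.           */
uint32_t  MaxTickCount (uint32_t  max_count, uint32_t  counts_per_tick)
{
    return ((max_count / counts_per_tick) * counts_per_tick);
}
```

For the example in the text (a 16-bit timer at 1 MHz with a 1 kHz tick rate), MaxTickCount(65535, 1000) yields 65,000 counts, i.e. 65 whole ticks.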


This last case is interesting because it imposes a limit on the power efficiency of Dynamic Tick Mode. If we have a perfectly idle system with the Dynamic Tick timer running, we may still be forced to wake up periodically. However, instead of waking up every tick, we only wake up each time the timer reaches its maximum tick count. Depending on the application and the timer configuration, it is more likely that another event will wake up the system before the indefinite timeout triggers an interrupt.

It is because of the behavior outlined above that we recommend a 32-bit timer running at exactly the tick frequency. Assuming OS_TICK is defined to be a 32-bit integer, as is the default, such a timer can handle any request issued by the kernel with a single ISR, since the number of ticks requested will equal the number of timer counts. Also, a 64-bit timer running at the tick rate would provide no improvement because OSTimeDynTick() would not be capable of processing more than what OS_TICK can hold. If, in theory, we allowed this larger timer to exceed a 32-bit integer value on an indefinite delay, the value provided to OSTimeDynTick() would not represent the amount of time which has actually passed due to the implicit cast. In conclusion, a timer that runs at the Tick Frequency and matches the size of OS_TICK is optimal.