Lowering the system power, and some thoughts on time.

Originally posted May 5, 2017

Reposted June 18, 2024

Keep the system sleepy

Let’s think about a simple system with only one list of tasks. How long does it take to execute all the tasks in the list?

If you have a GPIO, or even better a bank of GPIOs that are accessible, it is easy to find out. In every loop, before a task runs, either toggle the GPIO or output the task number on the GPIO bank. When a task returns, output NUM_TASKS on the GPIO. Now you are monitoring the time it takes to run each task, plus the loop overhead not spent in any task.
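As a sketch of the idea (gpio_write(), the task names, and NUM_TASKS = 3 are placeholders here, not the original code), the instrumented loop could look like:

```c
#include <stdint.h>

#define NUM_TASKS 3

/* Hypothetical GPIO write: on real hardware this would hit a port
 * register; here it just records the last value for illustration. */
static volatile uint8_t gpio_out;
static void gpio_write(uint8_t v) { gpio_out = v; }

static void task_a(void) { /* check for UART data, etc. */ }
static void task_b(void) {}
static void task_c(void) {}

static void (*tasks[NUM_TASKS])(void) = { task_a, task_b, task_c };

/* Mark each task's entry and exit on the GPIO bank: the task number
 * goes out before the task runs, NUM_TASKS when it returns. A logic
 * analyzer on the pins then shows per-task time and loop overhead. */
void taskrunner(void)
{
    for (uint8_t i = 0; i < NUM_TASKS; i++) {
        gpio_write(i);
        tasks[i]();
        gpio_write(NUM_TASKS);
    }
}
```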

From experience, in most designs the loop does not take long: a few tens of milliseconds. In most applications the tasks are not doing much, just waiting for data or interrupts. If a modern processor is busy for more than half a second (500 ms) just running tasks, the rule of thumb says get a more powerful processor. The one you have is not enough.

Most modern microcontrollers support a sleep mode. The processor and non-essential systems get shut down or put into a low-power mode. When a timer times out, it interrupts the processor and wakes it.

So, first we set up a timer. In Linux it would be called the jiffy timer, and I will use the same term here. We will rely on the “wake on interrupt” feature of the processor. Every time the timer “times out” (counts to 0 or to a predetermined value based on the clock period provided by the hardware), the system gets an interrupt. On some systems the timer must be reset; on others it keeps going and interrupts again after exactly the same number of clocks. The interrupts happen periodically at a well-defined interval. It is useful to have a timeout every 10 ms, 50 ms, or 100 ms. For a jiffies timer we count the timeouts and get a reasonably accurate clock and timer.

The simple way, setting the timer and going to sleep after each call to taskrunner(), will work. But the task time is variable, depending on interrupts, work to be done, and data to be processed. So the period between timeouts includes the task time and is also variable. Fine, if you don’t care about keeping time accurately.

How do I keep accurate time with variable task times?

Easy! The timer init task runs and sets up the timer. The timer interrupt resets the timeout (depending on the hardware) and increments a jiffies counter. Tick-tock: we now have a periodic timer independent of whatever is executing in the foreground. The timer is completely interrupt driven.
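A minimal sketch of that interrupt handler, with the hardware acknowledge stubbed out (hw_timer_ack() and the function names are placeholders, since the real register writes depend entirely on the part):

```c
#include <stdint.h>

/* Tick count: written only by the timer ISR, read by the foreground. */
static volatile uint32_t jiffies;

/* Hypothetical hardware hook: acknowledge and/or reload the timer.
 * On some parts this is automatic; on others it is a register write. */
static void hw_timer_ack(void) {}

/* Timer interrupt handler: ack the hardware, count the tick. */
void jiffy_timer_isr(void)
{
    hw_timer_ack();
    jiffies++;
}

uint32_t jiffies_now(void)
{
    /* A 32-bit aligned read is atomic on most 32-bit MCUs; on 8- or
     * 16-bit parts, read this with interrupts briefly disabled. */
    return jiffies;
}
```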

After the main loop calls taskrunner() and the tasks complete, put the processor to sleep. Sleep is very hardware dependent, but the two things to remember are: wake on an interrupt, and don’t disable the timers. Other interrupts for hardware and devices can also wake the processor.
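The shape of the main loop, then, is just run-then-sleep. In this sketch cpu_sleep() stands in for whatever the hardware provides (on a Cortex-M it would be __WFI()), and it only sets a flag here so the fragment is self-contained:

```c
#include <stdbool.h>

static bool slept;  /* for illustration only */

/* Hypothetical sleep hook: the real one halts the processor until
 * any enabled interrupt fires, e.g. __WFI() on a Cortex-M. */
static void cpu_sleep(void) { slept = true; }

static void taskrunner(void) { /* run tasks whose runNow flag is set */ }

/* One pass of the main loop; the real loop wraps this in for (;;). */
void main_loop_once(void)
{
    taskrunner();   /* finish any pending work */
    cpu_sleep();    /* sleep until the jiffy timer or a device wakes us */
}
```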

This is where using the runNow variable becomes key. Tasks that don’t need to run, don’t. The check in taskrunner() avoids calling a task and forcing the task code to figure out whether there is data to process or anything useful to do. If it was just a timer timeout and there is no work to do, a quick run through the task list won’t even call any task functions, and back to sleep we go.
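One way that check could look (the task_t layout, task names, and call counter are illustrative assumptions, not the original code):

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_TASKS 2

typedef struct {
    volatile bool runNow;  /* set by an ISR or by the timer code */
    void (*fn)(void);
} task_t;

static int uart_calls;     /* counts calls, for illustration */
static void task_uart(void) { uart_calls++; /* drain the RX buffer */ }
static void task_leds(void) { /* update blink state */ }

static task_t tasks[NUM_TASKS] = {
    { false, task_uart },
    { false, task_leds },
};

/* Only call tasks that asked to run. A timer tick with no pending
 * work walks the list without calling anything, and we sleep again. */
void taskrunner(void)
{
    for (uint8_t i = 0; i < NUM_TASKS; i++) {
        if (tasks[i].runNow) {
            tasks[i].runNow = false;  /* consume the request */
            tasks[i].fn();
        }
    }
}
```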

By setting up the interrupts carefully, the system will still be very responsive and feel quick. It is also sleepy and saves power. Adjust the sleep time to be as long as possible and the power savings can be significant.

Time is on your side

I think that is pretty cool. A side effect of building the sleep behavior on a periodic timer is that the system now keeps (reasonably) accurate time. What can I do with that? Time in a system, especially a small system without timekeeping hardware, can be tricky. Keeping track of long time spans is especially difficult. I will include the code later, but for this post, let’s just run through the theory.

Let’s make a new task called “task_real_time”. I am open to suggestions for a new, better name.

Inside task_real_time() the code keeps track of timers for other tasks. When their time comes, the other tasks can either be run (by setting runNow) or invoked through a callback function. The real time task should probably be the first task in the list. It forces other tasks to run, so it is pretty important. How would this get implemented?

First we define a linked list. This is C code, so don’t fear the pointers. A linked list is simply a structure that includes a pointer to another structure of the same type. Yuck! Fodder for another post. Anyway we have this list, see. We can add things to the list anyplace we want.

The list will hold jiffies values that are in the future. Does that make sense?

Why not use an array? I used an array for the Task List, why not here? It is simpler to use a linked list. The important idea is to put things into the list anyplace we want, and remove them without having to re-sort the list. Adding anyplace lets the code keep the list sorted in order. Because it is sorted, the code does not read the whole list every time. It only has to read until the next timeout is greater than the current jiffies count. There is a big savings in overhead by doing it this way.
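A sketch of the node and the sorted insert, assuming each node carries an absolute jiffies value and a pointer to the task’s runNow flag (the names here are my placeholders, not the code from the book):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One pending timeout: an absolute jiffies value, the task flag to
 * set when it expires, and the pointer that makes this a list. */
typedef struct timer_node {
    uint32_t expires;           /* jiffies count when this fires */
    volatile bool *runNow;      /* flag to set on expiry */
    struct timer_node *next;
} timer_node_t;

static timer_node_t *timer_list;  /* head, kept sorted by expires */

/* Insert in sorted order so the expiry check can stop at the first
 * node that has not yet timed out. */
void timer_insert(timer_node_t *node)
{
    timer_node_t **pp = &timer_list;
    while (*pp != NULL && (*pp)->expires <= node->expires)
        pp = &(*pp)->next;
    node->next = *pp;
    *pp = node;
}
```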

An example may help here. Assume a task (task_A) in our system wants to wait 3.7 seconds before sending another message. (Why? Picked a number, does not matter for the example.) The task wants to “sleep”, or not be run for 3.7 seconds. When the timeout happens, it will be 3.7 seconds in the future, and the task will get run again.

The task uses a call from the timer API. task_A calls timer_runmelater(sec_in_jiffies(3.7), &task_A_runnow). The notation “&task_A_runnow” means a pointer to the variable task_A_runnow.

The function timer_runmelater() will create a new node in the timer list. The timer list is private to the real time code. The new node is placed in the list, in order, based on the current jiffies count + sec_in_jiffies(3.7). Assume the current count is 8935 and the jiffies timeout happens every 10 ms. Then sec_in_jiffies(3.7) = 3.7 s * 1000 ms/s / 10 ms/jiffy = 370 jiffies. So the jiffies count when the task should be run is 8935 + 370 = 9305. That 9305 is the value used to place the new node into the list. It would go after 9300 and before 9310, get it?
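The conversion itself is one line. A sketch, assuming the 10 ms periodic jiffy timeout from the example (the JIFFY_MS constant is my placeholder):

```c
#include <stdint.h>

#define JIFFY_MS 10u  /* assuming a 10 ms periodic jiffy timeout */

/* Convert seconds to jiffies: 3.7 s * 1000 ms/s / 10 ms/jiffy = 370. */
uint32_t sec_in_jiffies(double sec)
{
    return (uint32_t)(sec * 1000.0 / (double)JIFFY_MS);
}
```

With the current count at 8935, the node’s value is 8935 + sec_in_jiffies(3.7) = 9305, matching the example above.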

The call returns and the task completes. Other tasks run, but nothing sets the runNow variable for task_A. After the jiffies timer has counted 370 ticks, task_real_time() is run due to the jiffies timer interrupt. It compares the current value of the jiffies counter to the first item on the list; since that item is less than or equal to the current count, it sets the runNow flag for task_A. It walks the list checking for tasks whose timeouts have expired. When it hits the first node with a greater jiffies value, the real time code knows it is done and exits.
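That walk could be sketched like this, assuming a sorted list of nodes that each hold an absolute expires value and a pointer to the task’s runNow flag (names are placeholders; the simple `<=` comparison here also ignores the rollover caveat discussed below):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed node shape: absolute expiry, the task's flag, next link. */
typedef struct tnode {
    uint32_t expires;
    volatile bool *runNow;
    struct tnode *next;
} tnode_t;

/* Pop every expired node off the (sorted) list and set its task's
 * runNow flag. Stops at the first node still in the future. */
void timers_expire(tnode_t **list, uint32_t now)
{
    while (*list != NULL && (*list)->expires <= now) {
        tnode_t *n = *list;
        *list = n->next;        /* remove without re-sorting */
        if (n->runNow != NULL)
            *n->runNow = true;  /* task runs on the next pass */
    }
}
```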

As the rest of the tasks get run by the task runner, task_A will be called this time around.

Now, is this super accurate? No, we are not able to do microsecond timing with this method. It is good for timeouts of seconds, or tenths of seconds. You always seem to need that range of timeout to flash LEDs nicely or update a display.

There are some things to be careful with. Always use unsigned math and pay attention to the number of bits in the jiffies counter and the timers. Addition will roll over the jiffies count. You have to pay attention to the rollover and insert the timer node at the end of the list when it happens. If you are not familiar with unsigned math and rollover, don’t worry. If you want to implement this sort of timer, it is based on the code in the great book Programming Embedded Systems by Michael Barr.
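Inserting wrapped nodes at the end of the list is one fix; another common approach, the unsigned-math trick behind the Linux kernel’s time_after() macros, makes the comparison itself wrap-safe. A sketch (the function name is my own):

```c
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "has the deadline passed?" check. The unsigned
 * subtraction (now - deadline) wraps back into range, so this works
 * as long as timeouts are shorter than half the counter range
 * (2^31 ticks for a 32-bit jiffies counter). */
bool jiffies_expired(uint32_t now, uint32_t deadline)
{
    return (int32_t)(now - deadline) >= 0;
}
```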

Winding up…

Again, unsigned math is a topic for another post. I am avoiding hardware-specific details here; this post is about the theory. When I have nice sleep and timer code to show, I will post it on GitHub and as a Gist on the blog.

For next time, we can clear up any questions from the comments. If you have questions, please ask below.






