Neil Armstrong

There have been many people who have had a positive influence on my life. This week, one of them passed away. While I have been fortunate enough to meet members of the Canadian astronaut corps, I never met Neil Armstrong. In spite of this, he had a profound influence on my life. An engineer, test pilot, astronaut, explorer, and pioneer, he led the way in many fields. It was fitting that he was the first human being to set foot on another heavenly body.

Still, there is a lot more to this story than just that. The Apollo spacecraft carried perhaps the first ever example of an embedded computer system. Who other than NASA would put two very expensive computers into a vehicle when only one would be recovered? The engineer in Neil would have appreciated that aspect. Twice, while training for and piloting the Lunar Module, quick action was required; the test pilot in Neil remained calm and did the right thing in time to save the day. Once on the moon, a carefully planned course of action led to the accumulation of valuable scientific samples in a short time. As a trained observer, he provided the eyes and hands of those who could not be there themselves. As a pioneer, he led the way into the unknown, facing terrible risks and dangers.

Neil Armstrong was my childhood hero. He inspired me to work harder in school, to go beyond the lessons in my own studies, and to keep looking at life as the greatest, only opportunity anyone could ever have. Mr. Armstrong is gone now, but he will live on in my heart, with gratitude for how he inspired me and thanks for the sacrifices he made for the benefit of mankind.

As always, your comments and thoughts are invited.

Peter Camilleri (aka Squidly Jones)

Modules: The Callback Manager

A recent series of postings has dealt with modular programming and the transmission of events and data from one module to another. The push and pull strategies were both examined and compared. This article deals with a special case of data flow: the case where there is a single event/data source and an unknown number (zero or more) of event/data sinks. This is illustrated below:

In this case we can see that there is a single data source and an unknown multitude of sinks. To handle this task, an array of pointers to functions, maintained by the CallBack Manager, is employed. The CallBack Manager is a small module that provides the functionality associated with the callback list data structure. Since this data structure is never allocated as a global, static, or automatic (stack based) structure, its type definition only specifies a pointer to a structure and not the structure itself. It is shown below:

// The call back list data structure.
typedef struct
{
    uint8_t         capacity;   // The capacity of the list.
    uint8_t         count;      // The occupancy of the list.
    uint8_t         rfu1;       // Reserved. Padding for now.
    uint8_t         rfu2;       // Reserved. Padding for now.
    emPCCALLBACK    list[0];    // The list of call backs.
} *emPCALLBACKLIST;

Since only a pointer is defined, some means of allocating the structure must be provided. In the CallBack manager this is the emCreateCallBackList function:

emPCALLBACKLIST emCreateCallBackList(uint8_t capacity);

This function definition reveals the first compromise. While the number of event sinks may not be known, the upper limit on the number of sinks must be. This function creates a callback list with the specified capacity and returns a pointer to it. If the list could not be allocated, a NULL pointer is returned instead.
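Although the post does not show the allocator's implementation, a minimal sketch might look like the following. The use of malloc, the C99 flexible array member, and the added struct name emCALLBACKLIST are my assumptions, not taken from the actual module:

```c
#include <stdint.h>
#include <stdlib.h>

typedef void *PV;
typedef void (*emPCCALLBACK)(PV rarg, PV sarg);

typedef struct
{
    uint8_t      capacity;   // The capacity of the list.
    uint8_t      count;      // The occupancy of the list.
    uint8_t      rfu1;       // Reserved. Padding for now.
    uint8_t      rfu2;       // Reserved. Padding for now.
    emPCCALLBACK list[];     // C99 flexible array member.
} emCALLBACKLIST, *emPCALLBACKLIST;

// Create a callback list able to hold "capacity" entries,
// or return NULL if the memory could not be allocated.
emPCALLBACKLIST emCreateCallBackList(uint8_t capacity)
{
    emPCALLBACKLIST pList =
        malloc(sizeof(emCALLBACKLIST) + capacity * sizeof(emPCCALLBACK));

    if (pList != NULL)
    {
        pList->capacity = capacity;
        pList->count    = 0;

        for (uint8_t i = 0; i < capacity; i++)
            pList->list[i] = NULL;   // Start with all slots free.
    }

    return pList;
}
```

The header and the callback slots are allocated in a single block, so one free() releases the whole list.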

The functions used by the CallBack Manager have a standard signature. This is:

typedef void (*emPCCALLBACK)(PV rarg, PV sarg);

Where rarg and sarg are generic void pointer arguments that represent parameters determined by the receiver and the sender respectively.

After the list is created, it must be populated and manipulated. This is done with two functions:

void emAddCallBack   (emPCALLBACKLIST list, 
                      emPCCALLBACK cb, 
                      emLISTDIRECTION dir);

void emRemoveCallBack(emPCALLBACKLIST list, 
                      emPCCALLBACK cb);

The first is used to add functions to the callback list. The dir parameter determines if the callback is added to the first free slot (emFront) or the last free slot (emBack). This allows a crude sort of priority in the callback list. The second function removes the specified entry from the list.
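A sketch of how these two might work follows. The emLISTDIRECTION values and the fixed-size list are my own simplifications (the real module allocates the list dynamically), and removed entries are assumed to leave a NULL hole that later additions may reuse:

```c
#include <stddef.h>
#include <stdint.h>

typedef void *PV;
typedef void (*emPCCALLBACK)(PV rarg, PV sarg);
typedef enum { emFront, emBack } emLISTDIRECTION;

// A fixed-size stand-in for the dynamically allocated callback list.
typedef struct
{
    uint8_t      capacity;
    uint8_t      count;
    emPCCALLBACK list[8];
} emCALLBACKLIST, *emPCALLBACKLIST;

// Add cb to the first free slot (emFront) or the last free slot (emBack).
void emAddCallBack(emPCALLBACKLIST pList, emPCCALLBACK cb, emLISTDIRECTION dir)
{
    if (pList->count >= pList->capacity)
        return;   // List full; the real module may report an error here.

    if (dir == emFront)
    {
        for (uint8_t i = 0; i < pList->capacity; i++)
            if (pList->list[i] == NULL)
            {
                pList->list[i] = cb;
                pList->count++;
                return;
            }
    }
    else
    {
        for (uint8_t i = pList->capacity; i > 0; i--)
            if (pList->list[i - 1] == NULL)
            {
                pList->list[i - 1] = cb;
                pList->count++;
                return;
            }
    }
}

// Remove the first matching entry, leaving a free (NULL) slot behind.
void emRemoveCallBack(emPCALLBACKLIST pList, emPCCALLBACK cb)
{
    for (uint8_t i = 0; i < pList->capacity; i++)
        if (pList->list[i] == cb)
        {
            pList->list[i] = NULL;
            pList->count--;
            return;
        }
}

// A do-nothing callback used for demonstration.
void demoCallBack(PV rarg, PV sarg)
{
    (void)rarg;
    (void)sarg;
}
```

Since execution walks the array from the front, slots nearer the front are called first; that is the crude priority mentioned above.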

Finally, there must be some way to invoke the call back list. There are four available functions for this task:

void emExecuteCallBack       (emPCALLBACKLIST list);

void emExecuteCallBackPV     (emPCALLBACKLIST list, 
                              PV sarg);

void emExecuteTimedCallBack  (emPCALLBACKLIST list, 
                              uint32_t timeOutNS);

void emExecuteTimedCallBackPV(emPCALLBACKLIST list, 
                              uint32_t timeOutNS, 
                              PV sarg);

These four functions provide for two orthogonal choices:

  1. To send a void pointer sarg or not.
  2. To impose a time limit (in nanoseconds) on the call back list or not.

The sarg is useful in cases like an interrupt service routine, where it can be used to pass a pointer to a task-awoken flag. The time out is useful to prevent the callback list from consuming too much time. Note that individual called functions all run to completion; it is the processing of the array that stops when time has expired. This time parameter relies on the timing facilities discussed in A Brief Time Out.
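As a sketch of that run-to-completion behavior, here is a simplified timed execute. The list is passed as a bare array, rarg handling is omitted by passing NULL, and emElapsedNS is a stand-in of my own for the timing facilities of A Brief Time Out (faked here so the sketch is self-contained):

```c
#include <stddef.h>
#include <stdint.h>

typedef void *PV;
typedef void (*emPCCALLBACK)(PV rarg, PV sarg);

// A stand-in for the nanosecond timer from A Brief Time Out: each call
// pretends that 100 ns have elapsed since the previous one.
static uint32_t fakeNowNS = 0;
uint32_t emElapsedNS(void) { return fakeNowNS += 100; }

// Walk the list, invoking each callback in turn. Each called function
// always runs to completion; only the walk itself stops once the time
// budget is exhausted.
void executeTimedCallBackPV(emPCCALLBACK *list, uint8_t count,
                            uint32_t timeOutNS, PV sarg)
{
    uint32_t startNS = emElapsedNS();

    for (uint8_t i = 0; i < count; i++)
    {
        if (list[i] != NULL)
            list[i](NULL, sarg);   // rarg omitted in this sketch.

        if ((emElapsedNS() - startNS) >= timeOutNS)
            break;                 // Out of time: skip the remaining entries.
    }
}

// A demonstration sink that counts its invocations.
int timedHits = 0;
void countingCallBack(PV rarg, PV sarg)
{
    (void)rarg;
    (void)sarg;
    timedHits++;
}
```

With the fake timer advancing 100 ns per check, a 150 ns budget lets only the first two of four callbacks run; a generous budget runs them all.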

This small number of simple functions summarizes the entire CallBack Manager. It is a small module that provides for flexible connections between modules, promoting loosely bound architectures without a great deal of overhead. Soon to come will be an examination of a module that makes strong use of the CallBack Manager to distribute data; look forward to a posting on the A2D Manager.

As always, comments, replies, and suggestions are welcomed.

Peter Camilleri (aka Squidly Jones)

It takes all types… [Updated]

All programming languages have data types. This article focuses on integer data types in statically typed environments, like C, C++, and Java. When examining integers, two characteristics are important:

  1. The signedness of the value: signed vs unsigned. Signed integers may represent negative values through the two's-complement representation. [Note that other signed integer representations exist; they just are no longer used.]
  2. The number of bits in the value; most typically 8, 16, 32, or 64 bits. [Note that many odd lengths like 12, 18, and 36 bits were once common; the machines that supported them are pretty much extinct.]

In classical (pre-C99) C, the signedness of integers was specified while the size was implementation dependent. I suppose the idea was that if the programmer specified that a variable was an “int”, the compiler would select the most appropriate size for the system in question. It is true that certain minimums were required. An “int” was required to be at least 16 bits in size, but beyond that you never could be sure. The only hard rule could be summarized as:

sizeof(short) ≤ sizeof(int) ≤ sizeof(long)

and that’s it. All three could be 64 bits and it would still be legal under the ANSI rules. In embedded systems, this sort of ambiguity is not acceptable, primarily because hardware registers need to be mapped precisely to a data structure laid out exactly the same, to the bit. This can be difficult when the specification says so little about what “int” means.

Other languages have taken different approaches. In Java, four integer data types are defined:

Name    Width (bits)    Value Range
byte    8               -128 to 127
short   16              -32,768 to 32,767
int     32              -2,147,483,648 to 2,147,483,647
long    64              -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

This is better, as far as it goes, but there are serious flaws. First, Java does not have any support for unsigned data; a serious omission. Second, Java is generally unavailable for embedded systems. Together, these make it quite difficult indeed to use Java. To its credit, the Java spec does pin these sizes down exactly (they are not mere minimums, as C’s are), but that precision is of little consolation without unsigned types or a way to map structures onto hardware registers.

So what’s to be done to solve this dilemma? The traditional approach has been for projects to have a common include file named something like “GenericTypes.h” that contains a series of typedefs that specify the integer types supported by that compiler. The problem is that there is absolutely no standardization of the names used by these user defined integer types. An 8 bit unsigned integer could be UINT8, UINT_8, uint8, U8, or any one of a myriad of possible names. There has to be a better way, and with the ANSI C99 standard there is. C99 introduced the stdint.h header file, which defines standard sized integer data types. Our unsigned 8 bit example is now the standard type “uint8_t”! This is great!
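For illustration, such a “GenericTypes.h” might have contained typedefs along these lines. The names here are purely made up, and each underlying type had to be re-verified against the compiler in question (the choices below suit common 32 and 64 bit desktop compilers):

```c
// A typical pre-C99 "GenericTypes.h" (names purely illustrative).
// Every project picked different names, and each typedef had to be
// checked against the target compiler by hand.
typedef signed char    INT8;
typedef unsigned char  UINT8;
typedef signed short   INT16;
typedef unsigned short UINT16;
typedef signed int     INT32;
typedef unsigned int   UINT32;

// The C99 <stdint.h> equivalents need no such per-compiler checking:
// int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t.
```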

There’s only one problem. To be useful, code needs to

#include <stdint.h>

and then exclusively use the new types, or types derived from them. I am as guilty as any. My library code contains an emTypes.h file that defines my own peculiar names for integer data types. The time has come to abandon the proprietary scheme and fully adopt the standard conventions. Over time I plan to shift my code over to the C99 stdint types wherever possible and to build upon stdint types when needed.

[UPDATE] My proposal is this: In cases where a general purpose integer is required, without concern for space, alignment in a structure, or other special concerns, use plain old “int”, as this is supposed to be the “native” integer data type. In all other cases, use the types of <stdint.h>. I call on library writers to adopt the same approach. It will promote interoperability and portability of code and encourage standardization of data types.
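To illustrate the proposal, consider the two situations side by side. The register layout below is a made-up example, not any real device:

```c
#include <stdint.h>

// Exact widths matter when mirroring a hardware register map,
// so the <stdint.h> types are used here.  (This layout is a
// made-up example, not any real peripheral.)
typedef struct
{
    uint8_t  status;    // 8 bit status register.
    uint8_t  control;   // 8 bit control register.
    uint16_t baudDiv;   // 16 bit baud rate divisor.
    uint32_t rxData;    // 32 bit receive data register.
} UART_REGS;

// A general purpose computation with no layout concerns:
// plain old "int" is the natural choice for the count.
int countOnes(uint32_t value)
{
    int count = 0;

    while (value != 0)
    {
        count += (int)(value & 1u);
        value >>= 1;
    }

    return count;
}
```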

What are your thoughts? Good idea? As always, your views, comments, suggestions and ideas are most welcomed.

Peter Camilleri (aka Squidly Jones)

Impressions of Masters 2012 [Updated]

Welcome to Masters!

First a disclaimer. There is no way that the Microchip 2012 Masters Conference can be summarized here. It is simply far too vast in scale and extent to do it justice. Instead, consider this to be a series of first person impressions, notes, recollections, and anecdotes.

To put things in perspective, this was my seventh time attending the Masters conference. Previous years attended were 2000, 2001, 2002, 2004, 2009, and 2011.


The front door!

I’m not quite sure how long it’s been there, but this year the conference was held at the luxurious J.W. Marriott. Here’s the grand front resort entrance. It goes without saying that the hotel, its rooms, conference halls, and public spaces were all luxurious and well appointed. The conference included three sumptuous meals a day with beverages, fruit, and “treats” for those who got the munchies in between study courses.


The “swag” of the year!

As is common at such events, all attendees went home with a little “swag” to remember their participation. In past years, little technology demos were often given out. The last few years this has been replaced by a notebook, polo shirt, and course-notes thumb drive combo. This is a little disappointing as the gadgets were always fun, but to be honest, other than some fun playing with them, not much came of them, so the cost savings is sort of understandable, even if it is a bit of a downer.

As I said earlier, this is my seventh Masters. Each year I have the same problem: there are many more courses that I would like to take than there are time slots in which to take them. This year there were 36 courses offered covering a wide range of topics. That’s too many to list, so instead here is the Masters 2012 Event Guide. The class list is on pages 11 through 31. As for myself, I was kept busy learning about:

  • Upcoming developments in Microcontrollers and Development Tools.
  • Learning more details of the FreeRTOS software. I was fortunate enough to win the programming challenge and got a copy of the book Microchip PIC32 Edition of the FreeRTOS user’s manual. I’ll be doing a review of that soon.
  • Using the Microchip Graphics Display Designer (GDD-X)
  • Robust I2C communications with the PIC I2C peripherals.
  • USB: Both the Generic Driver and CDC class devices.
  • Bootloading, Application Mapping and Loading Techniques on a PIC32.
  • An examination of Advanced Touch Screen Technologies: Projected Capacitance Touch.

As if that were not enough, the time slot between the end of studies and dinner was also utilized each evening.

The Keynote Address (Picture from Facebook)

On Wednesday there was the interesting (OK, not THAT interesting, but where did I have to go?) keynote address by Steve Sanghi, the CEO of Microchip. Mr. Sanghi has a direct, down-to-earth approach that is most refreshing, and while not edge-of-your-seat stuff, it was still very informative to hear the corporate perspective.

On Thursday, I was delighted to see the return of Don McMillan! Don is the self-proclaimed “Funniest Engineer in the World!” Even though this was his second time at Masters, the show was mostly new material and I had a great time. With all the serious engineering work going on, some levity was welcomed.

And then there’s Friday… On Friday, there was a Casino Night. During this event, I made the earth shattering discovery that I truly suck at poker. Thank goodness the $1,000 I lost was only play money (Unlike some 2012 USA Presidential candidates, I do NOT make casual $10,000 bets!).

Finally, I have left what is possibly the best part for last. After dinner each evening, there were semi-formal discussion groups with various microchip teams in attendance to answer questions. I attended the XC16 and XC32 discussions. I learned a great deal of useful info. Here’s a sampling of some tasty tid-bits:

  • XC32++ is ready! That’s right, C++, with a FULL library by Dinkumware, a highly respected library specialist vendor. XC16++ is still in the very early phases and is much further off.
  • New PIC32s are coming that will support new modes that are faster and more compact than the current MIPS16 mode. Support is coming for higher speeds, much larger memory, and an external memory interface. However, no target dates were given. Sigh… Marketing!
  • One surprise was getting more clarity on the Errata Issues that I have written about before. It now seems that the first issue (with the prefetch cache) has only ever been seen with highly unusual configurations, not used by Microchip tools. Most developers need not worry about it. The second issue (with double writes to I/O registers) is more widespread and needs to be taken seriously. In the meantime, new silicon is coming out very soon (weeks?) that addresses these issues. We should be seeing an Errata Update in a little while and I will post it here too.
  • As the session was closing, somehow the discussion turned to the topic of the use of assembly language by C programmers. To make a long story short, I indicated that the ONLY time I have ever used assembly language was to provide the special interrupt prologue and epilogue required by FreeRTOS to save memory. At this point it was suggested that this was better done by the compiler itself and that Richard Barry (of FreeRTOS fame) should be contacted to see how progress could be made on this front. We’ll see what happens, but that would be a cool addition to XC32.

There was one important thing that was missing though: There was no break out session for the XC8 compiler group. Perhaps they were too busy?

It seems that Santa works for Fedex!

Oh, and I almost forgot… The really final, even better thing about all the Masters conferences is the deep discounting of developer tools. I felt like a kid in a candy shop with all those wonderful toys to contemplate! I admit that I had to restrain myself to a few choice selections. I’m sure I will be writing about them more very soon.

Perhaps I should start doing a mail bag segment on a YouTube channel where I open my (interesting) mail? Then again, perhaps not.


As always, your thoughts, comments, suggestions and encouragement are most welcome!

Peter Camilleri (aka Squidly Jones)

PS: I was giving some thought to the idea of FreeRTOS prologue/epilogue support in XC32, and it seems that it need not be complex. One idea would be to support an “Interrupt Stack” which would be used by ISRs. This option would support FreeRTOS and provide benefit in any case where multiple stacks are in use. This would be an easily set up option for the compiler, and the linker could allocate the space in a manner similar to the way the heap is currently allocated.

[Update] Shortly after the conference, I exchanged emails with Joe Drzewiecki of Microchip regarding this idea. This correspondence follows:

Hello Joe;

I want to thank you again for a wonderful Masters Conference and especially for the very informative break out session on the XC32 compiler. The discussions covered a lot of ground and I learned a lot of new stuff.

I was thinking about the prologue and epilogue code for ISRs in FreeRTOS, and it occurs to me that what is involved here is the use of a special “Interrupt Stack” area. A stack set aside for use by ISRs. Implementation of this concept would be of use beyond FreeRTOS and include any system that runs multiple stacks.

I have updated my web site with a review of Masters 2012 here: http://teuthida-technologies.com/ and included a bit about Interrupt Stack support in XC32.

 I look forward to future developments;

 Best regards; Peter Camilleri

And here was Joe’s reply:

Hi Peter!

Thanks for the props and the suggestions. We’ve had a very spirited discussion about your prologue/epilogue suggestion (go figure! :).

We’ll continue to discuss this, but may not act on it right away. There are many subtleties to it.

 Hope to see you next MASTERs,


Thanks Joe; It’s good to know I can still stir things up! 😉

Modules: Pull vs Push

A previous posting on modular programming dealt with tight vs loose coupling of modules. In this post, a common special case of module interaction is examined. Consider the case where one module (called the source) needs to transmit an event or some data to another module (called the sink), a situation represented in the following illustration:

Module Dependency

This scenario occurs quite often in embedded systems. Examples include received serial data, completion of an analog data conversion, and receipt of a USB command packet, to name just a few.

The Source may be an interrupt service routine, a task in a multi-tasking system, or a simple I/O or device library. The Sink can be The Great Loop in simple systems or a task in larger ones. In spite of all this diversity, the approaches taken can be summarized as belonging to one of two basic camps: Pull and Push.


With the pull strategy, the sink module is responsible for pulling in the data by polling for it. This is a simple and popular approach and is typified in the following code snippet:

    if (uartRxRdy())
    {
        nextChar = uartRxRead();
        // Processing of data omitted.
    }

    // The balance of The Great Loop omitted.

In the above code, the “main loop” code calls the uartRxRdy() function to see if there is any data to be processed. On the plus side, this code is simple; on the down side, a lot of calls to uartRxRdy() are needed, and if the main loop is busy elsewhere, polling can be neglected, possibly leading to data loss.

Of course there are more sophisticated versions of this strategy; for example, the sink module can employ a task to do the polling, but this does nothing to avoid wasted processor cycles polling for no data. A variant that does have the potential to avoid this waste is to use a blocking call to obtain the data and simply wait. This approach only yields savings, though, if the blocking call actually puts the task to sleep when no data is present. If it instead goes into a polling spin loop, the problem is merely hidden rather than solved.


With the push strategy, the source module is responsible for pushing out the data by some means. To get the attention of the sink module, there are basically two possible approaches:

1) Modify a variable visible to the sink so that it can be detected. This is such a bad idea: not only is the source meddling with the internals of the sink, the sink still has to poll the variable to act on it.

2) Call a function in the sink to process the data or event. This is much better than the above in that it avoids meddling or polling, but there is still a potential problem. When the call to the sink is made, execution is still in the thread of the source. If the source is calling from an interrupt service routine, this places severe restrictions on what may be done in the called routine.

There’s another problem with the function call approach. The source has to know which function to call. This tightens the binding or coupling between the source and the sink. Ideally, the source would require no such knowledge. So, is it possible to call a function without knowing its name? The answer is YES! The mechanism involved is the pointer to a function. Pointers to functions are often avoided or shunned due to the perception that they are complex or difficult to work with. While the syntax in C can be daunting, it need not be so. Consider:

// Generic function pointer types
typedef void (*emPFNV)(void);
typedef void (*emPFNPV)(PV arg);

These simple typedefs create definitions for emPFNV, a pointer to a function taking no arguments, and emPFNPV, a pointer to a function taking a pointer to void (called arg) as an argument. Pointers to other types of functions are easily created by cloning and modifying the above templates. So the source module simply defines a variable to hold the pointer and exports it in its dot H file:

extern emPFNV SomeEventDetected;

In its initialization, the sink makes the connection with:

SomeEventDetected = superEventHandler;

and when the event needs to be triggered by the source it simply does:

if (SomeEventDetected != NULL)
    SomeEventDetected();

This results in the source calling the sink’s function (superEventHandler) without the source having a code dependency on the sink. Note that the source needs to check for the pointer being NULL in case no sink exists for the event. In this case, an else clause may be added to perform a default action. This is of course entirely optional; if omitted, the event or data would simply be ignored.
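Putting the pieces together, a complete miniature example of this pattern might look like the following. PV is assumed to be a generic void pointer typedef, and the source and sink module boundaries are collapsed into one file for brevity:

```c
#include <stddef.h>
#include <stdio.h>

typedef void *PV;                  // Assumed: a generic void pointer.
typedef void (*emPFNV)(void);

// Source module: the exported event hook (normally in its dot H file).
emPFNV SomeEventDetected = NULL;

// Source module: fires the event, with an optional default action.
void sourceDetectEvent(void)
{
    if (SomeEventDetected != NULL)
        SomeEventDetected();       // Call the sink, whoever it is.
    else
        puts("Event ignored.");    // Optional default action.
}

// Sink module: a handler and the registration done at initialization.
int eventsSeen = 0;

void superEventHandler(void)
{
    eventsSeen++;
}

void sinkInit(void)
{
    SomeEventDetected = superEventHandler;   // The sink makes the connection.
}
```

Before sinkInit() runs, events fall through to the default action; afterwards, the source calls superEventHandler without ever naming it.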

So far, the scenarios examined have assumed one source and one sink. What if there are many, or even an unknown number of, sinks? In that case we need to apply management algorithms to a list or array of pointers to functions. This is done by the CallBack Manager and is the topic of a future posting, Modules: The Callback Manager. Conversely, there is the case where there are many or even an unknown number of sources. In this case data needs to be funneled into the sink module. This situation was made famous by the message queue of Windows fame (infamy?) and is the topic of the post Modules: The Message Pump.

As always; remarks, opinions and suggestions are welcomed. Feel free to leave your comments. Some encouragement wouldn’t hurt either 😉

Peter Camilleri (aka Squidly Jones)

Modules: Keeping Things Loose

A recurrent theme in this blog has been the concept of modular programming: the systematic deconstruction of large, complex applications into more manageable chunks known as modules. Each module operates as an abstraction of a key concept in the application. These in turn interact with each other to create the desired total application. It is the interaction of these modules that is the topic of this discussion.

Whenever modules interact, there is a sharing of information that must happen. For example, if module “A” calls a function in module “B”, then module “B” must share with module “A” the signature of the function in question. In addition, the purpose of the function must be clear so that it may be called appropriately. Another example would be module “A” accessing a variable in module “B”. This causes a great deal of shared information, including: the signature of the variable, the meaning of the data it contains, how module “B” reads the variable, how module “B” updates the variable, how module “A” and all other modules read and update the variable, what actions are based on the variable, what events affect the variable; and the list goes on, seemingly without end.

In the first case, the function call, where little information is shared, the modules are referred to as loosely bound or coupled; the analogy being that of an arm’s length business relationship. In the second case, where a great deal of information is shared, they are called tightly bound or coupled; the human analogy being that of a romantic couple.

Now, before the orthodox view is examined, it must be pointed out that tightly coupled modules do have their good points. When modules work very closely together, a great deal of “protocol” overhead may be omitted and high levels of performance can be achieved. This is especially so in highly resource constrained systems. The down side is clear in the above statement though. By working as one, the two modules have in fact become one big module. Taken to the limit, the application becomes one huge mass of code where changes to one area have undesirable side effects in seemingly unrelated areas. Again, highly resource constrained systems are often quite small, and thus have smaller and simpler applications. In these cases this complexity may be manageable.

The predominant view is that loose coupling is better. Most systems are NOT hyper-constrained and applications are large and complex. By keeping the inner-working of modules hidden and private, many benefits are observed.

  • Clarity; when modules interact along clear, well defined lines, the code becomes easier to understand, write, and debug. Further, loose coupling generally results in simpler module interfaces that are easier (and thus more likely) to document clearly. Tightly coupled modules require an extensive documentation effort that is often never done.
  • Maintainability; by limiting access, the odds of a change in one module having an undesired side-effect on another are greatly reduced.
  • Reuse; when modules are loosely bound, it is possible to reuse modules in different applications without having to “fix” all of the “back-door” interactions.
  • Abstraction; having a limited, published interface means that the module is able to provide an abstract view of the services provided, shielding the consumer modules from the details of operations.
  • Team Programming; when modules have well defined interactions, it becomes easier to divide the work among a number of coders. Since each need only concern themselves with the published interfaces they use, it is no longer necessary to memorize the entire application, and programmers are less likely to “step” on each other’s work.

The downside of loosely coupled systems is that some performance is lost “going through channels” rather than taking short-cuts. In my career, I have sometimes seen this argument put forth as a reason to abandon the discipline of loosely coupled modules with well defined interfaces. These factors will all have to be considered in the design of each interface; still, the overwhelming advantages of loose coupling should make it the default choice, one that is only set aside when circumstances leave no other option.

This article has dealt with module interaction in general. An important special case is where one module (the source) needs to transmit data or an event to another module (the sink). That is the topic of the upcoming post “Modules: Push or Pull”.

After a long hiatus, this web site is back! I want to take this time to apologize to my readers for the long pause in postings. A lot of water has gone under the bridge (and some even over it) in my life, leaving no time for my duties here. Hopefully things will be better now. As always, your comments, thoughts, and suggestions are most welcomed.

Peter Camilleri (aka Squidly Jones)