Embedded Systems

Vending Machine Controller

A comparatively new medium for social interaction is the idea of a “meetup”: a gathering of like-minded people at a public event, usually moderated by a web service like meetup.com or its equivalent. I attend these as a way to relax and meet interesting people.

There is, however, a sort of downside: most of the people attending are 30 years younger than me. The result is this rather odd snippet of conversation:

Young Person: What do you do?
Old Guy: I’m a programmer.
Young Person: What kind of web programming do you do?
Old Guy: I do embedded systems programming.
Young Person: Huh?

From there the conversations diverge a lot, but the same difficult question always follows: what is an embedded system?

While Wikipedia has an excellent article on this subject, I’ll give you my answer to this question:

An embedded (computer) system is a computer that is part of a product, device, or system that is not itself considered to be a computer.

Here are a few examples:

  • Cars: cars contain many embedded systems: fuel control, anti-lock braking, transmission control, dashboard display, ignition timing, and many more. I once read that the average car contains more than a dozen embedded systems.
  • Dumb cell phones: the cell phone needs a computer to interact with the over-the-air digital data network used to make phone calls. I exclude smart cell phones because these are so powerful and flexible (with applications) that they are closer to handheld computers than to the dumb cell phones of old.
  • Vending machines: Like the car above, these are actually a number of embedded systems in one package. The devices that accept coins, bills, and credit cards for payment are little embedded modules within the whole, and the vending machine controller is an embedded system too. I can say that I have personally worked on each of these components over the years. A prototype of the controller is pictured above.
  • Computer components: Many storage sub-systems are in fact embedded systems that work to make mass storage devices work. Hard drives and memory “sticks” contain their own computers hidden away in them. They can even be hacked for good or ill.

This web site is devoted to the joys and frustrations of programming these embedded systems. Very often, programmers must deal with complex protocols, demanding real-time performance requirements, difficult debugging, and limited or non-existent user interfaces. Yet it is precisely these challenges that make this type of programming so interesting. It was fun, yes, but maybe not so relevant any more.

Embedded systems used to use a lot of 4 bit microcontrollers. These are all gone now: the poor performance, lack of features, and high cost of programming those relics have long since overshadowed any cost savings. So too, lower end, more primitive systems are being pushed into smaller and smaller niches. When those niches dry up, so will the sales of those parts.

In the pages of this blog, we have reviewed the chips, their defects, the math behind how things work, the crazy things that the vendors do, and the many other things that affect programming at the bare metal level. Yet embedded programming IS changing. New products like the Arduino, Raspberry Pi, and the Beagle are bringing a new level of power, integration, and ease of use to the embedded world.

The world is changing, and so must this blog. I have taken the plunge and obtained a number of these new embeddable systems and plan to focus on this new direction. This will mean a new emphasis on Linux, higher level languages and protocols, and yes even some web programming. I have a great deal of learning and exploring to do and I hope I can share this with you, my readers.

As always, your comments and suggestions are most welcome.

Best regards;

Peter Camilleri (aka Squidly Jones)

Wow! Did they really do that?

I once heard that the intelligence of any group of people is that of the person with the lowest IQ divided by the number of people involved. A recent case seems to be a supporting data point for this point of view.

Consider if you will the company FTDI. This company is (was?) the world leader in USB bridge chips, which make it easy to add USB access to all sorts of gadgets that would otherwise be difficult, expensive, and problem prone to connect. The FTDI chips work well and come with easy to use driver software distributed for free for Windows, Mac, Linux, and many other sorts of computers. The chips and drivers were so good that many customers chose FTDI over other vendors. The result is that these chips are expensive and often hard to find.

So FTDI gets to laugh all the way to the bank and otherwise all is well, right? Well, not quite. Counterfeits! With the real McCoy so hard to get, many legitimate vendors were tricked into buying fake chips. Do these fakes work? Mostly; it’s not as if the USB bridging function is that hard to do. FTDI, however, was not amused. They were losing some sales and lots of that delicious money! They had to do something, and they did!

To stem the loss of revenue, a new software driver was released that detects some subtle difference between genuine and fake chips and then erases a vital data entry called the PID (Product ID) if a fake is detected. After that, the fake chip and the product it was embedded in will no longer function. The consumer is now the proud owner of a new paper weight or brick as it is sometimes called. The new driver was released through the Microsoft Windows update mechanism where it was incorporated into countless millions of computers all around the world.

So: is this a nasty thing to do? Consider this bit on computer trespass (my emphasis):

In Virginia, computer trespass consists of, with malicious intent, copying, altering, or erasing data from a computer, causing a computer to malfunction, causing an electronic funds transfer, etc.[Wikipedia]

Yes, this IS a very nasty thing to do! It hurts consumers at random. Some have gone so far as to label it cyber-terrorism! While I am not sure about that, one thing is certain: this action has caused serious, if not fatal, damage to the faith in and value of the FTDI brand. Engineers chose and specified FTDI chips over others because they were trusted, reliable, and dependable. That perception is gone, and it will be very difficult to recover that goodwill.

I cannot even begin to imagine the legal fall-out from what would happen if a large number of products suddenly stopped working and the victim decided to let loose the dogs (lawyers) of war in revenge!

I am a big fan of the EEVBLOG and this posting does a good job of summarizing people’s feelings about this issue and the lameness of the company’s responses:

EEVblog #676 – RANT: FTDI Bricking Counterfeit Chips!

As for me, I am looking for other vendors. This sort of behaviour is so unacceptable that it makes me seriously wonder what kind of crazy people are in charge at FTDI.

What do you think? I’d really like to hear what you think about this issue!

Best regards;

Peter Camilleri (aka Squidly Jones)

Project: Sequencer – Part 1 [Updated]

This is the first in a number of posts covering the development of a fairly simple sequencer board. This board is designed to be easy to put together, and could possibly be the basis for a simple kit. It is intended from the outset, that this be an open source hardware and software development.

This is not my first crack at designing a sequencer board; see the picture to the left. In 1999, Bret Walters and I came up with our first sequencer design for a fireworks show. This early design reflected our desire to keep things simple. I wrote the firmware for the PIC16F84 in CCS C, and I developed the PC based scripting and control language, but I’ll be darned if I can remember much beyond that.

This development was quite a learning experience. A lot of time has passed, so my troubles with the CCS compiler should not color your view of it now. What I really do recall was that my grasp of the requirements of a sequencer left a lot to be desired. For instance, the 1999 board lacked a continuity check capability. This crucial feature allows the operator to determine if all of the circuits are connected properly to the controller. We left this out on the assumption that it was simpler to just wire things up correctly in the first place. Wrong! On a recent Discovery Channel show called “Pyros”, professional pyrotechnicians showed time and time again how important it is for a show to pass its continuity tests. We noticed as much when our test show pretty much suffered a massive failure to launch.

This new board addresses this serious failing. The 1999 board also used two power supplies. A 12 volt supply for the micro (regulated on board down to 5 volts) and a 48 volt supply for the ignitors. The new board has a single, flexible supply to meet multiple needs. Other changes are that this board is designed using the open source Kicad package which did not exist way back then and the PIC will be programmed with Microchip’s brand new XC8 compiler.

Schematic Diagram

Enough of the old and onto the new! To the left is the Kicad generated schematic of the new sequencer. Using Kicad for schematic capture was easy, with one caveat. The parts library is the sum of what its developers wanted. They (unlike me) were crazy about Atmel, less so about Microchip. The result is that if you use a more modern part, prepare to create your own symbols.

The design itself is fairly typical of simple PIC designs: a microcontroller surrounded by I/O circuitry of all sorts. In spite of this, there are some aspects of the design that should be highlighted:

  • Unlike the 1999 version, there’s no need for a crystal oscillator and its tricky layout and twitchy capacitors. Many modern PIC chips supply a robust internal oscillator.
  • The debug signals, Vpp, IcspDat, and IcspClk are dedicated to their role in debugging.
  • The 8 outputs are from Q2 through Q9. These are all tied to Q10 which is in parallel with R2, a 22K resistor. If Q10 is off, it does not conduct and firing current must flow through R2. This current is low enough to NOT fire the end effect, but can still be measured to confirm continuity. When Q10 is on, a large amount of current may flow, firing the end effect. Thus Q10 is the “Arming” transistor.
  • Like classic TV’s channel 1, Q1 was removed and the other transistors were not renumbered. Thus there is no Q1.
  • The ten LEDs are connected to six PIC I/O lines in a simplified Complementary Drive configuration using a single current limiting resistor.
  • The Test, Arm, and Fire switches have hardware interlocks to prevent a fire signal from triggering an end effect in the wrong mode. This is in addition to software checks to prevent this from happening. When firing things off, you can never be too careful.
  • The board is designed to be daisy chained to allow for more than 8 end effects. Various parts of the design may be omitted based on where in the chain the board is installed (at the head, or in the tail).
  • The power is derived from a simple 7805, 5 volt voltage regulator. This is perfectly adequate for the standard 12 volt input. For higher input voltages, however, the 7805 is replaced by a drop-in buck regulator module that is more efficient.
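As a rough illustration, the continuity check implied by the R2/Q10 arrangement might look like the sketch below: with the arming transistor Q10 OFF, each output channel is enabled in turn so that only a tiny sense current flows through the 22K resistor, and a measurable reading means the igniter circuit is intact. The adc_read stub, threshold value, and function names are all my own inventions, not the actual firmware:

```c
#include <stdint.h>

#define CONT_THRESHOLD 100  // ADC counts; an assumed value, not from the real design

static uint16_t fake_level[8];  // test stub standing in for the real sense line

// Stub; real firmware would read the PIC's A2D here.
static uint16_t adc_read(uint8_t channel)
{
    return fake_level[channel];
}

// Returns a bitmask: bit n set means channel n has continuity.
static uint8_t continuity_scan(void)
{
    uint8_t mask = 0;
    for (uint8_t ch = 0; ch < 8; ch++)
    {
        // Real firmware would drive output ch on (with Q10 OFF!), let the
        // sense reading settle, then sample it.
        if (adc_read(ch) >= CONT_THRESHOLD)
            mask |= (uint8_t)(1u << ch);
    }
    return mask;
}
```

The mask form makes it easy for a head-of-chain board to report all eight circuits in a single byte.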

Once the schematic was captured and verified carefully, a netlist was generated. Then, a footprint was selected for each part. As with the parts library, the footprint library is a bit chaotic. Several custom footprints had to be created even for this simple design.

Partially routed.

Once this was done, the data was imported into the PCB layout software. Well, that is a bit of a simplification. The reality was that the custom footprints required several passes and a lot of painstaking scrutiny to get right. This is most certainly an error prone step that must not be rushed.

Still, eventually the import was fully completed and the focus shifted to the layout and how the board would be used in the field. Connectors were moved and parts dragged about until a harmonious arrangement was found. Then the routing of analog signals, power, and control signals was reviewed, corrected, and verified.

The current PCB layout.

Last came a final stage of proofreading and checking. The true proof will be in the testing, but that has yet to happen. The next step is to order prototype PCBs and populate, program, and test them. That should be easy, right? 😉

As always, your comments and suggestions are most welcomed.

Peter Camilleri (aka Squidly Jones)

[Updated] The sharp eyed will have noticed that the schematic does not quite match the PCB. That’s because that schematic was just a bit out of date. For an up-to-date schematic click on Pyro Seq Schematic.

Magnus Errata Part 3

This article is the fourth in a two part series about some serious errata published regarding the Microchip PIC32 family of embedded microcontrollers. Previous articles have been Magnus Errata Part 1, Magnus Errata Part 2, and Magnus Errata Update. There has been a significant passage of time and, with the arrival of new errata data, a new chapter in the story is called for. At the same time, a goal for this article is to clarify the application of errata data beyond the PIC32 designation. This is to reflect the fact that the PIC32 family is implemented in multiple varieties, each with unique characteristics.

The PIC32 family can be broadly divided into three main branches:

  1. Smaller parts: PIC32MX1xx/2xx
  2. Mid sized parts: PIC32MX3xx/4xx
  3. Larger parts: PIC32MX5xx/6xx/7xx

The first condition was an issue with spurious Data Bus Exceptions when data constants were read from flash memory with interrupts turned on. The respective, current errata notes for this issue are:

  • 1xx/2xx: Not applicable to these parts.
  • 3xx/4xx: Affected; no corrected silicon yet.
  • 5xx/6xx/7xx: Affected; corrected silicon exists.

It is noteworthy that corrected silicon exists for the 5xx/6xx/7xx family but not the 3xx/4xx, and that this issue does not apply to the 1xx/2xx family. I had hoped the errata would say more, but it does not. At the 2012 Masters Conference I learned that this issue has never been observed when the cache and prefetch are set up normally. This explains why I have never seen this issue and why the vast majority never see it. If you are really paranoid, then implement the work-around, but for the majority of cases, I don’t see the point. (PS: That’s my opinion, NOT engineering advice!)

The second condition was an issue with a double data write with interrupts turned on. This was indicated as mostly harmless except when the write target changed its state based on a write, like a UART, SPI port, etc. The errata were:

  • 1xx/2xx: Affected; no corrected silicon yet.
  • 3xx/4xx: Affected; no corrected silicon yet.
  • 5xx/6xx/7xx: Affected; corrected silicon exists.

Again, only the 5xx/6xx/7xx family of parts has a note regarding a corrected revision, so this fix must still be forthcoming in the other two families. Also note that turning off interrupts during the write is now a listed work-around, in addition to using DMA. One correction to the errata: the I/O port Toggle registers were left out as a potential trouble spot. Double writing a Toggle register would create a very short spike rather than toggling the desired data bits.
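For those who want the work-around, it boils down to a tiny critical section around the state-changing write. The sketch below stubs out the register and the interrupt controls so it stands alone on any compiler; on a real PIC32 under XC32 the stubs would be replaced with something like __builtin_disable_interrupts() / __builtin_enable_interrupts() and the actual peripheral register (names here are mine, not Microchip's):

```c
#include <stdint.h>

// Stand-in for a state-changing peripheral register (e.g. a UART TX register).
static volatile uint32_t fake_txreg;

// Stubs for the interrupt mask; on a PIC32 these would save/restore the
// interrupt enable state around the write.
static inline uint32_t critical_enter(void) { return 0; }
static inline void     critical_exit(uint32_t state) { (void)state; }

// Work-around sketch: perform the write with interrupts masked so the
// erratum's spurious double write cannot straddle an interrupt.
static void safe_write(volatile uint32_t *reg, uint32_t value)
{
    uint32_t state = critical_enter();
    *reg = value;
    critical_exit(state);
}
```

The DMA alternative avoids the CPU write path entirely, at the cost of tying up a DMA channel.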

One final word is that this article can never be the final word. I urge everyone working with a microcontroller to go to that chip’s web page and download and carefully study the published errata on that chip. As always; it’s what you don’t know that can hurt you.

Don’t forget that your comments and suggestions are most welcomed.

Peter Camilleri (aka Squidly Jones)

A Character Reference [Updated]

In an earlier posting, portable integer data types were examined in It takes all types… This posting continues that thought with an examination of character data in the realm of embedded systems programming.

The “C” programming language has supported the character data type since the infancy of the language in 1972. Well, sort of. In a fashion typical of the extreme minimalism of early “C” development, the “char” data type was made to serve three distinct and often incompatible uses. Char is (was) used for:

  • The coding of a single character of data. A pointer to characters became synonymous with a pointer to an array of characters terminated with a null character. This is what passed for strings in “C”.
  • The smallest integer data type supported by the processor. This small data type was attractive when the range of values was known to be very limited and space was tight (as it always was)!
  • The smallest unit of addressable storage. As such, pointers to “char” were treated as interchangeable with pointers of any type. This usage of “char” has been obsolete for quite some time now, with pointers to “void” fulfilling this role.

There are two main issues with this state of affairs.

The first is that in “C”, by default, all integer data types are signed. For integers this makes sense, as it matches the traditional view of integer variables as a subset of the integers of mathematics. Thus the integer usage of “char” suggests that it too should default to signed. This is in direct conflict with “char” for character data. Textual data does not possess a sign; while the meaning of the words may be negative, the text of those words cannot be negative!

Despite common sense, “C” leaves the signedness of plain “char” implementation-defined, and most compilers default it to signed. Over the years, many compilers have given the programmer the option to make it unsigned instead, but this is hardly universal.
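A tiny example shows the trap in action. The function names are my own; the behaviour is standard C integer promotion:

```c
#include <stdint.h>

// The byte value 0xFF stored in a signed char holds -1; promoted to int
// for the comparison, it does NOT equal the constant 0xFF (255).
static int matches_as_signed(void)
{
    signed char c = (signed char)0xFF;
    return c == 0xFF;   // -1 == 255: false
}

// The same byte in an unsigned char holds 255 and compares as expected.
static int matches_as_unsigned(void)
{
    unsigned char c = 0xFF;
    return c == 0xFF;   // 255 == 255: true
}
```

This is exactly the sort of bug that bites when raw byte data flows through plain “char” buffers.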

The second issue is the mapping of character glyphs, like ‘C’, to a code, like 67. Early in the history of computers, mappings were in chaos. Then the ANSI committee came up with the ASCII code (pictured to the left, labeled in octal) in 1963 and order was restored, briefly!

ASCII is a seven bit code that handles English text: letters, numbers, plus some punctuation. Soon the demand arose for accented characters, new punctuation, and much more. Most processors use eight bit bytes, so most systems had 128 unused codes available for other uses (assuming you got around the signedness nonsense). The big problem was that while 128 free codes were enough for a few languages, they were not enough for all of them. Thus was born the Codepage: a number that allowed the selection of one character mapping from many. In embedded systems, Codepages were not generally used; the application hardwired the mapping of codes to glyphs. As applications become more sophisticated, this option becomes less attractive.

And then there is the problem of Asian character sets. Here, 128 extra codes are quite inadequate; many thousands of codes are required. Over the years, many strange and bizarre coding schemes have been created to handle Japanese, Chinese, Korean, and other Asian languages. I am glad to say that I won’t waste time discussing them, as they are all obsolete. They have been swept aside by Unicode!

Unicode defines a huge character space of over one million codes and a smaller 65,536 code subset. These characters may be encoded through a number of schemes; some of these are:

  1. UTF-32: this massive four byte format can encode the full character set. It is seldom used due to its space inefficiency and the fact that codes greater than 65,535 are seldom required. There are two flavors of UTF-32: Big Endian and Little Endian (see below).
  2. UTF-16: this format uses 2 bytes for most glyphs and 4 for seldom used (or supported) ones. There are two flavors of UTF-16: Big Endian and Little Endian (see below).
  3. UTF-8: this variable length encoding includes traditional seven bit ASCII. That is, any valid seven bit ASCII string is also a UTF-8 string. In addition, the coding is such that data may be processed one byte at a time. This means that UTF-8 strings can work with most standard “C” library routines. This compatibility is a huge advantage! Further, UTF-8 is the most compact encoding, with glyphs ranging from 1 to 4 bytes long, or 1 to 3 bytes long for the 65,536 code subset. The downside is that the encoding is variable: to find the Nth character in a string, it is necessary to scan the string. It also complicates the allocation of buffers and such.
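To make the variable length encoding concrete, here is a minimal sketch of a UTF-8 encoder. The function name and error handling are my own, but the bit patterns are straight from the encoding rules:

```c
#include <stdint.h>
#include <stddef.h>

// Encode one Unicode code point as UTF-8. Returns the number of bytes
// written (1..4), or 0 for an invalid code point (surrogates, > 0x10FFFF).
static size_t utf8_encode(uint32_t cp, unsigned char out[4])
{
    if (cp < 0x80) {                      // 7 bit ASCII passes through unchanged
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp < 0x800) {              // 2 byte form: 110xxxxx 10xxxxxx
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp < 0x10000) {            // 3 byte form: 1110xxxx 10xxxxxx 10xxxxxx
        if (cp >= 0xD800 && cp <= 0xDFFF)
            return 0;                     // surrogate range is not valid UTF-8
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    } else if (cp < 0x110000) {           // 4 byte form for the upper planes
        out[0] = (unsigned char)(0xF0 | (cp >> 18));
        out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[3] = (unsigned char)(0x80 | (cp & 0x3F));
        return 4;
    }
    return 0;
}
```

Note how an ASCII ‘A’ encodes to the single byte 0x41, exactly as in a classic “C” string.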

So what is to be done in an embedded system? I think it is foolish to imagine that one designer can select one coding scheme in advance for all others. It is for that reason that the emSystem software library provides the choice of four possible encodings:

// Advanced character support configuration
#define CHAR_ASCII_7    0   // Standard 7 bit ASCII.
#define CHAR_ASCII_8    1   // The 8 bit extension to ASCII.
#define CHAR_UTF_8      2   // The var len extension to ASCII.
#define CHAR_UTF_16     3   // The 16 bit extension to ASCII.

// What sorts of characters are to be supported?

How support for these four options is implemented is the topic of a future posting. For more information on ASCII and Unicode encoding, please visit the ASCII Table. As always, your comments and suggestions are most welcomed.

Peter Camilleri (aka Squidly Jones)

Note: Big Endian vs Little Endian refers to a holy war in computer science about how bytes in multibyte data should be ordered. The Big Endian camp favors the most significant byte being first (in the lowest address) while the Little Endian camp favors the least significant byte being first. To help avoid chaos, a byte order mark may be placed at the start of a text to indicate the byte ordering in use. The names given these byte orderings trace back over 200 years to the Jonathan Swift book Gulliver’s Travels.

[Update 1] And now for Unicode the Movie! Starring all 109,242 characters of the Unicode Version 6.0 specification (a cast of 0.1 million), it is amazing how many forms of written/printed language exist on this planet!

[Update 2] I don’t know how I could have missed adding a link to the Unicode Consortium.

Modules: The A2D Manager

The examination of modular programming so far has included a look at module coupling, module event propagation, and the CallBack Manager mechanism. This posting delves into the operation of an actual data source module, the A2D Manager. First a little background:

In this space, a number of articles have dealt with getting the most from an embedded, resistive touch screen. In all of the articles that have examined the touch screen, the topic that has been omitted until now is: how are the touch screen voltages actually measured? The answer lies with the PIC32’s capable Analog to Digital Converter (A2D or A/D Converter). A glance at the A2D reference manual gives some idea of the power of this peripheral. A common complication is the fact that the A2D is often required to service multiple analog devices. In this case those devices are “unspecified” analog sensors.

A typical solution to the requirement to share a device is to use a mutual exclusion (or mutex) semaphore to enforce serial reuse of the resource. The problem with this approach is that it has poor timing consistency: the rate of analog measurements would be subject to variation due to the order of task execution. Instead, the approach used here is to place control of the A2D in the hands of an A2D Manager, which handles the reading of analog data and serves up this data to client tasks. The goals of the A2D Manager were to read all the analog data possible at a constant, predictable rate, to make that data available to tasks that need it, and to use as few resources as possible.

The diagram to the left summarizes the operation of the A2D Manager. After initialization in the emInitA2DManager function, the A2D manager does not use a task. It does use a programmable timer and an interrupt service routine (ISR). All 16 analog input channels are scanned once per millisecond, under the control of the timer. At the end of the millisecond, the ISR is triggered by the A2D and copies the data from the A2D to a buffer with the following definition:

typedef struct
{
    uint16_t seq;      // A counter. 1 count per cycle.
    uint16_t analog;   // Per-pin flags: 0 == analog pin, 1 == digital pin.
    uint16_t data[16]; // A2D data.
} emA2DDATA;           // (type name assumed; the original listing omitted it)

To share data with client tasks, the ISR then calls a callback list to inform interested parties that data is available. The callback list itself is one of those rare instances of global data that is required. This is declared as:

extern emPCALLBACKLIST emA2DIsrCallBackList;

The “Isr” in the name reminds coders of client code that this callback occurs in the context of an interrupt service routine, so brevity is a requirement. The A2D Manager initializes the list with:

emA2DIsrCallBackList = emCreateCallBackList(A2D_CALLBACK_LIST_ENTRIES);

Callback lists are managed by the Callback Manager; previously discussed. An example of a client adding a callback to the list is:

emAddCallBack(emA2DIsrCallBackList, &touchCB, emFront);

In which the function touchCB is added to the emA2DIsrCallBackList in the first free slot. When the A2D Manager is ready to signal an event, the code for this is simply:

// Execute the A2D callback list. (The flag name is assumed here.)
emExecuteCallBackPV(emA2DIsrCallBackList, &taskWoken);

In which the first parameter is the callback list and the second is a pointer to a flag used by the FreeRTOS to indicate that a higher priority task is now ready to run and that a task switch is needed when the interrupt service routine concludes.

It goes without saying that the A2D Manager is a compromise. It does, however, meet its goals: it provides a constant 1 kHz data rate; it reads all 16 possible channels; and it does not tie up a task or use a lot of resources. What resources are used then?

  1. Timer 3 is used to generate Start Conversion pulses at a rate of 16,000 Hz. This consumes no CPU bandwidth.
  2. The A2D module is used to read all 16 possible channels and buffer them.  This consumes no CPU bandwidth.
  3. An interrupt is triggered every 16th sample, or 1,000 Hz. At this point, the A2D buffers are full of data that are copied to a buffer.
  4. A Callback list is used to update clients of the A2D manager.
  5. 36 bytes of RAM for the shared data structure; the callback list uses an additional 4+8*A2D_CALLBACK_LIST_ENTRIES bytes.
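Stripped of hardware details, the heart of the ISR is just a copy loop plus a sequence bump. The sketch below stubs the A2D result buffer registers as an array so it stands alone; the names are my own assumptions, not taken from the emSystem source:

```c
#include <stdint.h>

// The shared data snapshot, matching the structure described above.
typedef struct
{
    uint16_t seq;       // A counter. 1 count per cycle.
    uint16_t analog;    // Per-pin analog/digital flags.
    uint16_t data[16];  // A2D data.
} a2d_shot;             // name assumed

static volatile uint16_t fake_adc_buf[16];  // stands in for the A2D buffer registers
static a2d_shot shot;

// Called once per millisecond, after all 16 channels have been scanned.
static void a2d_isr_body(void)
{
    for (int i = 0; i < 16; i++)
        shot.data[i] = fake_adc_buf[i];     // snapshot the hardware buffers
    shot.seq++;                             // clients can detect missed cycles
    // ...the real ISR would now run the Isr callback list to notify clients...
}
```

The seq counter is a cheap but effective way for a slow client to notice that it has skipped one or more 1 ms cycles.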

In review, the Callback Manager is a good mechanism for supporting the case where one data/event source needs to reach multiple data/event sinks. In the examination of the Message Queue or Message Pump the case of multiple data/event sources and a single data/event sink will be discussed.

As always, your comments and thoughts are encouraged and welcomed.

Peter Camilleri (aka Squidly Jones)

Modules: The Callback Manager

A recent series of postings has dealt with modular programming and the transmission of events and data from one module to another. The push and pull strategies were both examined and compared. This article deals with a special case of data flow: the case where there is a single event/data source and an unknown number (zero or more) of event/data sinks. This is illustrated below:

In this case we can see that there is a single data source and an unknown multitude of sinks. To handle this task, an array of pointers to functions, maintained by the CallBack Manager, is employed. The CallBack Manager is a small module that provides the functionality associated with the callback list data structure. Since this data structure is never allocated as a global, static, or automatic (stack based) structure, client code only ever deals with a pointer to the structure. It is shown below:

// The call back list data structure.
typedef struct
{
    uint8_t         capacity;   // The capacity of the list.
    uint8_t         count;      // The occupancy of the list.
    uint8_t         rfu1;       // Reserved. Padding for now.
    uint8_t         rfu2;       // Reserved. Padding for now.
    emPCCALLBACK    list[0];    // The list of call backs.
} emCALLBACKLIST;               // (struct name assumed; emPCALLBACKLIST is a
                                // pointer to this structure)

Since only a pointer is defined, some means of allocating the structure must be provided. In the CallBack manager this is the emCreateCallBackList function:

emPCALLBACKLIST emCreateCallBackList(uint8_t capacity);

This function definition reveals the first compromise: while the number of event sinks may not be known, the upper limit on that number must be. This function creates a callback list with the specified capacity and returns a pointer to it. If the list could not be allocated, a NULL pointer is returned instead.

The functions used by the CallBack Manager have a standard signature. This is:

typedef void (*emPCALLBACKFN)(PV rarg, PV sarg);

Where rarg and sarg are generic void pointer arguments that represent parameters determined by the receiver and the sender respectively.

After the list is created, it must be populated and manipulated. This is done with two functions:

void emAddCallBack   (emPCALLBACKLIST list, 
                      emPCCALLBACK cb, 
                      emLISTDIRECTION dir);

void emRemoveCallBack(emPCALLBACKLIST list, 
                      emPCCALLBACK cb);

The first is used to add functions to the callback list. The dir parameter determines if the callback is added to the first free slot (emFront) or the last free slot (emBack). This allows a crude sort of priority in the callback list. The second function removes the specified entry from the list.

Finally, there must be some way to invoke the call back list. There are four available functions for this task:

void emExecuteCallBack       (emPCALLBACKLIST list);

void emExecuteCallBackPV     (emPCALLBACKLIST list, 
                              PV sarg);

void emExecuteTimedCallBack  (emPCALLBACKLIST list, 
                              uint32_t timeOutNS);

void emExecuteTimedCallBackPV(emPCALLBACKLIST list, 
                              uint32_t timeOutNS, 
                              PV sarg);

These four functions provide for two orthogonal choices:

  1. To send a void pointer sarg or not.
  2. To impose a time limit (in nanoseconds) on the call back list or not.

The sarg is useful in cases like an interrupt service routine, where it can be used to pass a pointer to a task-awoken flag. The time out is useful to prevent the callback list from consuming too much time. Note that individual called functions always run to completion; it is the processing of the array that stops when time has expired. This time parameter relies on the timing facilities discussed in A Brief Time Out.
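To see the pattern in miniature, here is a self-contained host-side sketch of a callback list. It mirrors the structure above, but it is an illustration of the technique, not the emSystem source, and all the names are my own:

```c
#include <stdlib.h>
#include <stdint.h>

// A callback takes a receiver argument and a sender argument, as above.
typedef void (*cb_fn)(void *rarg, void *sarg);

typedef struct
{
    uint8_t capacity;   // The capacity of the list.
    uint8_t count;      // The occupancy of the list.
    cb_fn   list[];     // C99 flexible array member for the entries.
} cb_list;

// Allocate a list; returns NULL on failure, as the article describes.
static cb_list *cb_create(uint8_t capacity)
{
    cb_list *l = calloc(1, sizeof(cb_list) + capacity * sizeof(cb_fn));
    if (l) l->capacity = capacity;
    return l;
}

// Add to the first free slot (the emFront behaviour, roughly).
static void cb_add(cb_list *l, cb_fn fn)
{
    if (l->count < l->capacity)
        l->list[l->count++] = fn;
}

// Invoke every registered callback with the sender's argument.
static void cb_execute(cb_list *l, void *sarg)
{
    for (uint8_t i = 0; i < l->count; i++)
        l->list[i](NULL, sarg);   // rarg handling omitted in this sketch
}

// A demo sink that accumulates the values it is sent.
static int hits;
static void count_cb(void *rarg, void *sarg)
{
    (void)rarg;
    hits += *(int *)sarg;
}
```

The source never needs to know how many sinks exist, or whether any exist at all: exactly the loose coupling the CallBack Manager is after.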

This small number of simple functions summarizes the entire CallBack Manager. It is a small module that provides flexible connections between modules, promoting loosely bound architectures without a great deal of overhead. Soon to come will be an examination of a module that makes strong use of the CallBack Manager to distribute data: look forward to a posting on the A2D Manager.

As always, comments, replies, and suggestions are welcomed.

Peter Camilleri (aka Squidly Jones)

It takes all types… [Updated]

All programming languages have data types. This article focuses on integer data types in statically typed environments; like C, C++ and Java. When examining integers, two characteristics are important:

  1. The signedness of the value: signed vs unsigned. Signed integers may represent negative values through the two's complement representation. [Note that other signed integer representations exist; they are just no longer used.]
  2. The number of bits in the value. Most typically 8, 16, 32, or 64 bits. [Note that many odd lengths like 12, 18, and 36 bits were once common; the machines that supported them are pretty much extinct.]

In classical (pre-C99) C, the signedness of integers was specified while the size was implementation dependent. I suppose the idea was that if the programmer specified that a variable was an “int”, the compiler would select the most appropriate size for the system in question. It is true that certain minimums were required; an “int” was required to be at least 16 bits in size, but beyond that you never could be sure. The only hard rule could be summarized as:

sizeof(short) ≤ sizeof(int) ≤ sizeof(long)

and that’s it. All three could be 64 bits and it would still be legal under the ANSI rules. In embedded systems, this sort of ambiguity is not acceptable, primarily because hardware registers need to be mapped precisely to a data structure laid out exactly the same, down to the bit. This is difficult when the specification says so little about what “int” means.
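Those guarantees can at least be checked at compile time. Here is a sketch using the classic negative-array-size trick (C11 code could use _Static_assert instead):

```c
#include <limits.h>

// Each typedef fails to compile (array of size -1) if the guarantee
// it names is violated by the compiler in use.
typedef char assert_short_le_int[(sizeof(short) <= sizeof(int))  ? 1 : -1];
typedef char assert_int_le_long [(sizeof(int)   <= sizeof(long)) ? 1 : -1];
typedef char assert_int_min_16  [(INT_MAX >= 32767)              ? 1 : -1];
```

These checks cost nothing at run time, but note that they can only confirm the loose ordering rule; they cannot conjure up exact sizes where the language defines none.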

Other languages have taken different approaches. In Java, four integer data types are defined:

Name   Width  Value Range
byte   8      -128 to 127
short  16     -32,768 to 32,767
int    32     -2,147,483,648 to 2,147,483,647
long   64     -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

This is better, as far as it goes, but there are two serious flaws. First, Java has no support for unsigned data; a serious omission. Second, Java is generally unavailable for embedded systems. These make it quite difficult indeed to use Java. (To its credit, the Java specification pins these sizes down exactly; unlike C, an implementation may not widen an int to 64 bits.)

So what’s to be done to solve this dilemma? The traditional approach has been for projects to have a common include file named something like “GenericTypes.h” that contains a series of typedefs specifying the integer types supported by that compiler. The problem is that there is absolutely no standardization of the names used for these user defined integer types. An 8 bit unsigned integer could be UINT8, UINT_8, uint8, U8 or any one of a myriad of possible names. There has to be a better way, and with the ANSI C99 standard there is. C99 introduced the stdint.h header file, which defines standard sized integer data types. Our unsigned 8 bit example is now the standard type “uint8_t”! This is great!

There’s only one problem. To be useful, code needs to

#include <stdint.h>

and then exclusively use the new types, or types derived from them. I am as guilty as any: my library code contains an emTypes.h file that defines my own peculiar names for integer data types. The time has come to abandon that proprietary scheme and fully adopt the standard conventions. Over time I plan to shift my code over to the C99 stdint types wherever possible and to build upon the stdint types where needed.

[UPDATE] My proposal is this: in cases where a general purpose integer is required, with no concerns about space, alignment, placement in a structure, or other special issues, use plain old “int”, as this is supposed to be the “native” integer data type. In all other cases, use the types of <stdint.h>. I call on library writers to adopt the same approach. It will promote interoperability and portability of code and promote standardization of data types.
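The proposal above can be sketched in a few lines. The UART register block here is hypothetical, but it shows the split: exact-width types where layout matters, plain int elsewhere:

```c
#include <stdint.h>

// Hypothetical UART register block: every field must map to the
// hardware down to the bit, so only exact-width types will do.
typedef struct
{
    uint8_t  status;     // exactly 8 bits, guaranteed
    uint8_t  data;
    uint16_t baudDiv;    // exactly 16 bits, guaranteed
} UartRegs;

// A general purpose computation with no layout concerns: the "native"
// plain int is the right choice for the counter.
int countOnes(uint32_t word)
{
    int n = 0;

    while (word != 0)
    {
        n += (int)(word & 1u);
        word >>= 1;
    }
    return n;
}
```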

What are your thoughts? Good idea? As always, your views, comments, suggestions and ideas are most welcomed.

Peter Camilleri (aka Squidly Jones)

Impressions of Masters 2012 [Updated]

Welcome to Masters!

First a disclaimer. There is no way that the Microchip 2012 Masters Conference can be summarized here. It is simply far too vast in scale and extent to do it justice. Instead, consider this to be a series of first person impressions, notes, recollections, and anecdotes.

To put things in perspective, this was my seventh time attending the Masters conference. Previous years attended were 2000, 2001, 2002, 2004, 2009, and 2011.


The front door!

I’m not quite sure how long it’s been there, but this year the conference was held at the luxurious J.W. Marriott. Here’s the grand front resort entrance. It goes without saying that the hotel, its rooms, conference halls and public spaces were all luxurious and well appointed. The conference included three sumptuous meals a day with beverages, fruit and “treats” for those who got the munchies in between study courses.


The “swag” of the year!

As is common at such events, all attendees went home with a little “swag” to remember their participation. In past years, little technology demos were often given out. In the last few years this has been replaced by a notebook, polo shirt and course-notes thumb drive combo. This is a little disappointing, as the gadgets were always fun; but to be honest, other than some fun playing with them, not much came of them, so the cost saving is sort of understandable, even if it is a bit of a downer.

As said earlier, this is my seventh Masters. Each year I have the same problem. There are many more courses that I would like to take than there are time slots in which to take them. This year there were 36 courses offered covering a wide range of topics. That’s too many to list so instead here is the Masters 2012 Event Guide. The class list is on pages 11 through 31. As for myself, I was kept busy learning about:

  • Upcoming developments in Microcontrollers and Development Tools.
  • Learning more details of the FreeRTOS software. I was fortunate enough to win the programming challenge and got a copy of the book Microchip PIC32 Edition of the FreeRTOS user’s manual. I’ll be doing a review of that soon.
  • Using the Microchip Graphics Display Designer (GDD-X)
  • Robust I2C communications with the PIC I2C peripherals.
  • USB: Both the Generic Driver and CDC class devices.
  • Bootloading, Application Mapping and Loading Techniques on a PIC32.
  • An examination of Advanced Touch Screen Technologies: Projected Capacitance Touch.

As if that were not enough, the time slot between the end of studies and dinner was also utilized each evening.

The Keynote Address (Picture from Facebook)

On Wednesday there was the interesting (OK, not THAT interesting, but where else did I have to go?) keynote address by Steve Sanghi, the CEO of Microchip. Mr. Sanghi has a direct, down-to-earth approach that is most refreshing, and while not edge-of-your-seat stuff, it was still very informative to hear the corporate perspective.

On Thursday, I was delighted to see the return of Don McMillan! Don is the self-proclaimed “Funniest Engineer in the World!” Even though this was his second time at Masters, the show was mostly new material and I had a great time. With all the serious engineering work going on, some levity was welcome.

And then there’s Friday… On Friday, there was a Casino Night. During this event, I made the earth shattering discovery that I truly suck at poker. Thank goodness the $1,000 I lost was only play money (Unlike some 2012 USA Presidential candidates, I do NOT make casual $10,000 bets!).

Finally, I have left what is possibly the best part for last. After dinner each evening, there were semi-formal discussion groups with various Microchip teams in attendance to answer questions. I attended the XC16 and XC32 discussions. I learned a great deal of useful info. Here’s a sampling of some tasty tidbits:

  • XC32++ is ready! That’s right, C++, with a FULL library by Dinkumware, a highly respected library specialist vendor. XC16++ is still in the very early phases and is much further off.
  • New PIC32s are coming that will support new modes that are faster and more compact than the current MIPS16 mode. Support is coming for higher speeds, much larger memories, and an external memory interface. However, no target dates were given. Sigh… Marketing!
  • One surprise was getting more clarity on the Errata Issues that I have written about before. It now seems that the first issue (with the prefetch cache) has only ever been seen with highly unusual configurations, not used by Microchip tools. Most developers need not worry about it. The second issue (with double writes to I/O registers) is more widespread and needs to be taken seriously. In the meantime, new silicon is coming out very soon (weeks?) that addresses these issues. We should be seeing an Errata Update in a little while, and I will post it here too.
  • As the session was closing, somehow the discussion turned to the use of assembly language by C programmers. To make a long story short, I indicated that the ONLY time I have ever used assembly language was to provide the special interrupt prologue and epilogue required by FreeRTOS to save memory. At this point it was suggested that this was better done by the compiler itself and that Richard Barry (of FreeRTOS fame) should be contacted to see how progress could be made on this front. We’ll see what happens, but that would be a cool addition to XC32.

There was one important thing that was missing though: There was no break out session for the XC8 compiler group. Perhaps they were too busy?

It seems that Santa works for Fedex!

Oh, and I almost forgot… The really final final, even better thing about all the Masters conferences is the deep discounting of developer tools. I felt like a kid in a candy shop with all those wonderful toys to contemplate! I admit that I had to restrain myself to a few choice selections. I’m sure I will be writing about them more very soon.

Perhaps I should start doing a mail bag segment on a YouTube channel where I open my (interesting) mail? Then again, perhaps not.


As always, your thoughts, comments, suggestions and encouragement are most welcome!

Peter Camilleri (aka Squidly Jones)

PS: I was giving some thought to the idea of FreeRTOS prologue/epilogue support in XC32, and it seems that it need not be complex. One idea would be to support an “Interrupt Stack” which would be used by ISRs. This option would support FreeRTOS and provide benefit in any case where multiple stacks are in use. It would be an easily set up option for the compiler, and the linker could allocate the space in a manner similar to the way the heap is currently allocated.

[Update] Shortly after the conference, I exchanged emails with Joe Drzewiecki of Microchip regarding this idea. This correspondence follows:

Hello Joe;

I want to thank you again for a wonderful Masters Conference and especially for the very informative break out session on the XC32 compiler. The discussions covered a lot of ground and I learned a lot of new stuff.

I was thinking about the prologue and epilogue code for ISRs in FreeRTOS, and it occurs to me that what is involved here is the use of a special “Interrupt Stack” area. A stack set aside for use by ISRs. Implementation of this concept would be of use beyond FreeRTOS and include any system that runs multiple stacks.

I have updated my web site with a review of Masters 2012 here: http://teuthida-technologies.com/ and included a bit about Interrupt Stack support in XC32.

 I look forward to future developments;

 Best regards; Peter Camilleri

 and here was Joe’s reply:

Hi Peter!

Thanks for the props and the suggestions. We’ve had a very spirited discussion about your prologue/epilogue suggestion (go figure! :).

We’ll continue to discuss this, but may not act on it right away. There are many subtleties to it.

 Hope to see you next MASTERs,


Thanks Joe; It’s good to know I can still stir things up! 😉

Modules: Pull vs Push

A previous posting on modular programming dealt with tight vs loose coupling of modules. In this post a common special case of module interaction is examined. Consider the case where one module (called the source) needs to transmit an event or some data to another module (called the sink). A situation represented in the following illustration:

Module Dependency

This scenario occurs quite often in embedded systems. Examples include received serial data, completion of an analog data conversion, and receipt of a USB command packet, to name just a few.

The Source may be an interrupt service routine, a task in a multi-tasking system, or a simple I/O or device library. The Sink can be The Great Loop in simple systems or a task in larger ones. In spite of all this diversity, the approaches taken can be summarized as belonging to one of two basic camps: Pull and Push.


With the pull strategy, the sink module is responsible for pulling in the data by polling for it. This is a simple and popular approach, typified in the following code snippet:

    if (uartRxRdy())
    {
        nextChar = uartRxRead();
        // Processing of data omitted.
    }

    // The balance of The Great Loop omitted.

In the above code, the “main loop” calls the uartRxRdy() function to see if there is any data to be processed. On the plus side, this code is simple; on the down side, a lot of calls to uartRxRdy() are needed, and if the main loop is busy elsewhere, polling can be neglected, possibly leading to data loss.

Of course there are more sophisticated versions of this strategy; for example, the sink module can employ a task to do the polling, but this does nothing to avoid wasting processor cycles polling when there is no data. A variant that does have the potential to avoid this waste is to use a blocking call to obtain the data, simply waiting until it arrives. This approach only yields savings, though, if the blocking call actually puts the task to sleep when no data is present. If it instead goes into a polling spin loop, the problem is merely hidden rather than solved.
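To make the distinction concrete, here is a sketch of a blocking read that truly sleeps, built on POSIX primitives (an RTOS would use its own queue or semaphore instead; all names here are hypothetical):

```c
#include <pthread.h>

// A one-character "mailbox" guarded by a mutex and condition variable.
static pthread_mutex_t uartLock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  uartReady = PTHREAD_COND_INITIALIZER;
static int  uartHaveChar = 0;
static char uartChar;

// Blocking read: the caller sleeps (consuming no CPU cycles) until a
// character arrives, instead of spinning in a polling loop.
char uartRxReadBlocking(void)
{
    pthread_mutex_lock(&uartLock);
    while (!uartHaveChar)
        pthread_cond_wait(&uartReady, &uartLock);   // sleep here
    char c = uartChar;
    uartHaveChar = 0;
    pthread_mutex_unlock(&uartLock);
    return c;
}

// Called by the source (a driver or another thread) when data arrives.
void uartRxDeliver(char c)
{
    pthread_mutex_lock(&uartLock);
    uartChar = c;
    uartHaveChar = 1;
    pthread_cond_signal(&uartReady);                // wake the sleeper
    pthread_mutex_unlock(&uartLock);
}
```

The key point is the pthread_cond_wait() call: the waiting task is descheduled entirely, which is exactly what a polling spin loop fails to do.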


With the push strategy, the source module is responsible for pushing out the data by some means. To get the attention of the sink module, there are basically two possible approaches:

1) Modify a variable visible to the sink so that the event can be detected. This is a bad idea: not only is the source meddling with the internals of the sink, but the sink still has to poll the variable to act on it.

2) Call a function in the sink to process the data or event. This is much better than the above in that it avoids meddling or polling, but there is still a potential problem. When the call to the sink is made, execution is still in the thread of the source. If the source is calling from an interrupt service routine, this places severe restrictions on what may be done in the called routine.

There’s another problem with the function call approach. The source has to know which function to call. This tightens the binding or coupling between the source and the sink. Ideally, the source would require no such knowledge. So, is it possible to call a function without knowing its name? The answer is YES! The mechanism involved is the pointer to a function. Pointers to functions are often avoided or shunned due to the perception that they are complex or difficult to work with. While the syntax in C can be daunting, it need not be so. Consider:

// Generic function pointer types
typedef void (*emPFNV)(void);
typedef void (*emPFNPV)(void *arg);

These simple typedefs define emPFNV, a pointer to a function taking no arguments, and emPFNPV, a pointer to a function taking a pointer to void (called arg) as its argument. Pointers to other types of functions are easily created by cloning and modifying the above templates. The source module then simply defines a variable to hold the pointer and exports it in its dot H file:

extern emPFNV SomeEventDetected;

In its initialization, the sink makes the connection with:

SomeEventDetected = superEventHandler;

and when the event needs to be triggered by the source it simply does:

if (SomeEventDetected != NULL)
    SomeEventDetected();

This results in the source calling the sink’s function (superEventHandler) without the source having any compile-time dependency on the sink. Note that the source needs to check for the pointer being NULL in case no sink exists for the event. An else clause may be added to perform a default action; this is entirely optional, and if omitted the event or data is simply ignored in that case.
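Putting the pieces together, here is a minimal end-to-end sketch with the two module boundaries collapsed into one file. Apart from the names taken from the snippets above, everything here (sourceEventOccurred, sinkInit, the counter) is hypothetical:

```c
#include <stddef.h>

typedef void (*emPFNV)(void);

/* --- Source module: detects the event, knows nothing of any sink --- */

emPFNV SomeEventDetected = NULL;    // exported in the source's dot H file

void sourceEventOccurred(void)
{
    if (SomeEventDetected != NULL)
        SomeEventDetected();        // push the event to whoever registered
    // else: no sink registered; the event is simply ignored.
}

/* --- Sink module: registers its handler at initialization --- */

static int eventCount = 0;

static void superEventHandler(void)
{
    eventCount++;                   // processing of the event
}

void sinkInit(void)
{
    SomeEventDetected = superEventHandler;   // make the connection
}

int sinkEventCount(void)            // demo accessor for the counter
{
    return eventCount;
}
```

Note that an event raised before sinkInit() runs is silently dropped, which is exactly the default-action decision discussed above.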

So far, the scenarios examined have assumed one source and one sink. What if there are many or even an unknown number of sinks? In that case we need to apply management algorithms to a list or array of pointers to functions. This is done by Modules: The Callback Manager and is the topic of a future posting. Conversely, there is the case where there are many or even an unknown number of sources. In this case data needs to be funneled into the sink module. This situation was made famous by the message queue of Windows fame (infamy?) and is the topic of the post Modules: The Message Pump.

As always; remarks, opinions and suggestions are welcomed. Feel free to leave your comments. Some encouragement wouldn’t hurt either 😉

Peter Camilleri (aka Squidly Jones)