Month: May 2011

PIC18F Software Project Survival Guide 3

This is the third installment in a series of posts about software programming on the PIC18F CPU family. You can find the first here and the second here.
Linker
The linker is expected to group together the code from all the different compilation units produced by the compiler and generate the binary image. Since the PIC18F architecture is what it is, this is not a trivial task.
The compiler groups data into data sections. These sections may be assigned to a specific region of the data memory via the linker script.
This file is defined for every project and you had better familiarize yourself with its syntax.
In fact some troubles arise from a bad linker script and can be fixed by changing it.
For example the compiler uses a temporary data section (named .tmpdata) – to store intermediate expression results – that is free to float around in the data memory. If the linker script is modified without care, this section may fall across a bank boundary, causing wrong computations in the best case and memory corruption in the worst.
The default project comes with a default linker script that prevents data objects from crossing bank boundaries. (Note that linker script banks are not data memory banks, but user-defined regions of memory; you may want to make linker script banks coincide with data memory banks to avoid bank switching problems.) So, by default, you are protected from this kind of fault (at the cost of some slack space, unless your code is lucky enough to fill every bank perfectly). But as the project size increases your data objects will grow as well, so you may be tempted (I was) to merge all the banks into one big bank.
I did, and then I ran into many unexpected troubles because of it (see the .tmpdata and C startup problems, for example). So I wrote a small awk script to run over the map file and spot these problems:

#!/usr/bin/gawk -f
# Scan the linker map file for idata/udata sections whose bytes cross a
# 256-byte bank boundary (strtonum() and and() are gawk extensions).

/<[iu]data>/ {
    len = strtonum($5)                      # section length
    if( len > 0 )
    {
        firstByte = strtonum($3)            # section start address
        lastByte  = firstByte + len - 1
        if( and(lastByte, 0xFFFFFF00) != and(firstByte, 0xFFFFFF00) )
        {
            print "Warning: file " $1 " spans over multiple pages" \
                  " (data size=" len ")"
        }
    }
}

From the results I selected the modules that have large data objects. I found three of them, of 360, 600 and 501 bytes respectively. So I modified the linker script to have 3 multi-page banks – 2 banks composed of 2 pages each and 1 spanning over 3.
In this way the linker is forced to put the big objects in those multi-page banks, but it keeps all the other objects within a single bank, as required.
The best option you have is to start with the default linker script and then merge adjacent banks as soon as you discover a large data object (this will be reported by an obscure linker error message pointing to a completely innocent module).
The linker is also very uninformative about errors: you are only allowed to know that you ran out of memory. To be more precise, you are allowed to know it only after some training, because the error message is very obscure, something along the lines of “section <a section name you are unaware of> cannot fit some other section”.

Assembler
Since the PIC18 is basically C-unfriendly, some assembly may be required. If you need a little bit of assembly you can write it directly in the C source code (at a price we’ll see later). If you need more than that, you want the separate assembler. In this case you can take full advantage of specific assembly directives and/or macros, but you lose the integration with the C language. In fact the assembler cannot fully understand C preprocessor directives, making it impossible to use the same header file for inclusion in both C and assembly.
There are two ways to work around this, neither very satisfying. First, you can write shared header files using only the common subset of preprocessor directives understood by both the assembler and the C compiler. Just keep in mind that the rules for locating header files differ.
The other way is to write a filter (more or less complex according to the complexity of your header files) for converting C headers into assembly includes.
I went the latter way because it seemed simpler – just convert C comments into assembly comments – then I modified the filter to resolve include files. I gave up when I tried to translate #if defined(X) into the old #ifdef X supported by the assembler.
Eventually I opted for very basic header files that are included directly from assembly and integrated into a more convoluted header file structure for C. I resorted to this solution only because it would have taken too much time to write a complete filter. If you do this, keep in mind that although comments are not compatible, you can use #if 0/#endif to bracket away parts from both the assembly and the C side.
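For what it’s worth, here is a minimal sketch of such a shared header. The names are made up, and the #ifndef include guard is an assumption – drop it if your assembler does not accept it; the descriptive text lives inside an #if 0 block precisely because the two comment syntaxes are not compatible.

#ifndef SHARED_CFG_H
#define SHARED_CFG_H

#if 0
    shared_cfg.h is included both from C and from assembly sources, so it
    sticks to object-like defines and simple conditionals. All the names
    below are just examples.
#endif

#define RX_BUFFER_SIZE  32
#define STATUS_LED_BIT  3

#endif
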
When you mix assembly and C in the same source file you may get surprising results. As I wrote before, I had defined an assert macro that executes a Sleep instruction in order to halt the debugger. My assert was something like:

#define ASSERT(X__) do { if( !(X__) ) { _asm Sleep _endasm } } while( false )

The effect is that this inserts an assembly fragment with the Sleep instruction wherever you assert something. I was running short on program memory, so I tried several combinations of debugging and optimization options, and I discovered a significant difference in memory usage depending on whether asserts were implemented with the assembly fragment or via a function call.
Apparently the optimizer has a hard time doing its work when an assembly block is inserted into a C function, no matter what the content of the block is (the Sleep instruction has no side effects that could disturb the C code execution).
I think the assert is one of the rare cases where you want assembly for reasons other than performance, which makes for a sort of contradiction – inline assembly fragments are usually there to improve speed, yet they kill the C optimizer.
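Just to make the alternative concrete, this is a sketch of the function-call variant I ended up measuring – the helper name is mine, everything else mirrors the macro above:

/* Keeping the _asm block inside a separate function leaves the callers
   free of inline assembly, so the optimizer can still work on them. */
void assert_halt(void)
{
    _asm
    Sleep               /* stops the core; the debugger halts right here */
    _endasm
}

/* The macro now costs a plain function call at every assertion point. */
#define ASSERT(X__) do { if( !(X__) ) { assert_halt(); } } while( 0 )
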
If you need assembly for performance, put it in a specific .asm source file.

Next time I’ll write about the IDE and debugging.

PIC18F Software Project Survival Guide 2

This is the second installment in a series of posts about software programming on the PIC18F CPU family. You can find the first here.
Tools
You can (and, for a non-trivial project, you really should if you care about your mental sanity) program the 18F using the C language.
Compiler
Basically there are two options – the first is the MCC18 compiler from Microchip and the other is the HiTech C compiler. MCC18 is cheap and crappy, HiTech C is expensive and optimizes better (I cannot say whether it is crappy or not since I never used it).
MCC18 is not fully C89 compliant; on the other hand you need some extensions to get your work done on this little devil. HiTech could be more ISO/ANSI compliant (I don’t know), but it is not compatible with MCC18 (compatibility is something they are planning to add in future releases – anyway, I wouldn’t hold my breath). For this reason you’d better choose early which compiler you want to go with, since they are not compatible. You can probably manage to write portable code, but be prepared to write a lot of wrapper layers. Either way, you have to sort this out before you start coding.
Just to give you a hint of the kind of compatibility problem I am talking about: apart from the way the two compilers provide access to the hardware registers, HiTech uses the “const” attribute to choose the storage for variables, while MCC18 relies on the non-standard storage qualifier keywords rom and (optionally) ram.
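Just as an illustration (a sketch, not tested with HiTech – the only predefined macro I rely on is MCC18’s __18CXX, and the table is invented), placing the same lookup table in program memory would look roughly like this with the two compilers:

/* Hypothetical portability wrapper for a table kept in program memory. */
#ifdef __18CXX                        /* building with MCC18              */
const rom unsigned char sine_table[4] = { 0, 50, 98, 142 };
#else                                 /* HiTech: const alone is supposed
                                         to select program-memory storage */
const unsigned char sine_table[4] = { 0, 50, 98, 142 };
#endif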

When I say that MCC18 is crappy, I have a number of arguments to support my point. Each point cost me at least a couple of hours to discover and work around, and sometimes days.
ISO/ANSI compliance is lacking from the preprocessor to the compiler. Not only does the preprocessor fail to properly expand nested macros, it also messes up line numbering when a function-like macro is invoked across multiple lines.
For the first problem I haven’t found any workaround other than hand-coding part of the preprocessor’s work. For line numbers I use backslashes to fool the preprocessor into believing the invocation is just one long line:

#define A(B,C,D) /* macro definition */

A( longParameterB, \
    longParameterC, \
    longParameterD );

Compiler warnings are inadequate at best. For example you don’t get any message if a function returning a non-void type has no return statement. On the other hand, when you compare an unsigned int to 0 (and not 0u) you get a warning. And you get warnings for correct code: for example you can’t pass a T* to a const void* parameter without getting a warning, even though the two pointers have the same size and the same internal representation.
This behavior makes your life hard if your programming guidelines require maximum warning level and no warnings, and it doesn’t help you with the real problems in your code. I use PC-Lint to spot real problems, but a run of gcc with some #defines to handle the non-standard constructs will catch most of them.
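Something along these lines is what I mean – a sketch of a shim that strips the MCC18-specific keywords so that gcc (or PC-Lint) can parse the sources just for the sake of the warnings; the header name is made up and __18CXX is the macro MCC18 defines:

/* mcc18_shim.h - hypothetical header, force-included when syntax checking
   with a desktop compiler (e.g. gcc -include mcc18_shim.h). It turns the
   non-standard storage qualifiers into no-ops. */
#ifndef MCC18_SHIM_H
#define MCC18_SHIM_H

#ifndef __18CXX          /* not compiling with MCC18 */
#define rom              /* program-memory qualifier becomes a no-op */
#define ram              /* data-memory qualifier becomes a no-op    */
#define near
#define far
#endif

#endif
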
Speaking of warnings, I had to fight back my loathing of useless casts and add them just to shut the compiler up.
Given the poor state of the tool, I haven’t been able to write a static assertion macro. Usually you write such a macro by turning a boolean condition into a compile-time construct that can be either valid or invalid (e.g. declaring an array with -1 or 0 elements, declaring an enum and assigning the first value to 1/0 or 1/1…). I haven’t found any way to get the compiler to reject any of these constructs.
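For reference, the usual idiom looks something like the sketch below (the macro name is arbitrary); it works with standard-compliant compilers, while on MCC18 the illegal array size sails through without a complaint:

/* Classic compile-time assertion: the array size evaluates to -1, hence
   illegal, whenever the condition is false. */
#define STATIC_ASSERT(cond, tag) \
    typedef char static_assert_##tag[(cond) ? 1 : -1]

STATIC_ASSERT(sizeof(unsigned int) == 2, int_is_16_bit);
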
One of the worst parts of the toolchain is that it can produce code that breaks some hardware limitation without a warning. For example the compiler relies on a global temporary area for computing numerical expressions (array access is one such case). The generated code expects that temporary area to be entirely contained in a single data memory bank. Neither the compiler nor the linker is able to detect when this area falls across a data memory bank boundary and alert the programmer. This is nasty because you can get subtle problems, or a failing program right after a recompilation.
Similarly the C startup code relies on an equivalent constraint for a group of variables: should they not fit in the same data memory bank, the initialization silently fails.
It took me a few minutes to rewrite the startup initialization routine and I can’t see any noticeable slowdown.
I would advise to:

  • rewrite the C startup code, keeping in mind the limitations of the compiler (it breaks on objects laid across a bank boundary, breaks on accessing structs larger than 127 bytes, breaks on accessing automatic variables if they take more than some tens of bytes);
  • use another tool (gcc/PC-Lint) to parse the source code and get meaningful warnings (missing returns, == instead of =, unused variables, uninitialized variables and so on);
  • enforce data structure invariants and consistency by using assertions;
  • if you find a way to implement static assertion, let me know.

Next time I’ll write about linker and assembler.

PIC18F Software Project Survival Guide

Now that I’m nearly through, I feel confident enough to post this series about my work experience on the PIC18F. Although my writing may sound a bit intimidating or professorial, I would like to receive your feedback and your thoughts on the matter. I got through, but I don’t claim to have universal solutions 🙂
So, at last you failed to defend your position. No use in all the documentation you provided, the articles from the internet and the blog posts where, beyond any doubt, the PIC was clearly depicted as the wrong choice.
But either your boss or your customer (at this point it makes little difference) imposed a PIC18F on your project. And she also gave a reason you can hardly argue with – Microchip never sends a CPU the way of the dodo… so, twenty years from now we could still be manufacturing the same device with the same hardware, avoiding the need for engineering maintenance.
Given that this device will be sold in billions of units, that makes a lot of sense.
Their problem is solved, but yours are just looming on a horizon crowded with dark clouds.
Good news first: you can do it – PIC18Fs (after some twiddling) have enough CPU power for most of the applications you can throw at them. I have just completed a device that acts as the central hub of a real-time network and provides information via a 128×64 pixel display.
Bad news: it won’t be easy. For anything more convoluted than a remote gate opener, due at most by yesterday (as most projects are nowadays), your life is going to be a little hell. I’ll try to describe, if not the safest path through this hell, at least the one where you cannot get hurt too badly.
So, let’s start with the architecture.

Architecture
The PIC18 architecture is described almost everywhere (checked the back of your cereal box recently?), but the first place you are going to look, the datasheet, will be mostly unhelpful. So I will try not to repeat anything and I will not go much into detail; instead I will try to give you a picture that shows the capabilities and the drawbacks of these gizmos.
First, these are 8-bit CPUs rooted in the RISC field – simple instructions, simple tasks, low code density.
The memory follows the so-called Harvard architecture – two distinct memories for data and for program instructions. Data memory is called the Register File, while program memory is called… Program Memory. Data memory is RAM, while program memory is flash.
Program memory is linear (no banks, no pages) and each word is 16 bits wide, but the memory can be accessed for reading (or writing) data one byte at a time. Current PIC18s have program memory sizes up to 128k, but nothing in their design prevents them from addressing up to 16 Mbytes (2^24).
You can erase and write the program memory from the PIC program itself (this is called self-programming), but there are some constraints – first, the memory is organized in pages of 1024 bytes each. In order to write the program memory you first have to erase it, and this can be done only one page at a time. Once the page has been erased you may write it all at once or just one byte at a time.
The worst part is that while the program memory is being erased or written the program memory bus is busy and therefore execution stalls. This stall can last several milliseconds.
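To make the sequence concrete, here is a sketch of a page erase under the usual assumptions (a classic PIC18F where self-programming goes through EECON1/EECON2 and the table pointer – register names and page size vary between sub-families, so check your datasheet):

#include <p18cxxx.h>   /* MCC18 device header with the SFR definitions */

/* Erase the 1024-byte program memory page containing addr.
   The CPU stalls on the WR bit until the erase completes. */
void flash_erase_page(unsigned long addr)
{
    TBLPTRU = (unsigned char)(addr >> 16);   /* point the table pointer  */
    TBLPTRH = (unsigned char)(addr >> 8);    /* at the page to erase     */
    TBLPTRL = (unsigned char)(addr);

    EECON1bits.EEPGD = 1;    /* select program memory (not data EEPROM)  */
    EECON1bits.CFGS  = 0;    /* not the configuration registers          */
    EECON1bits.WREN  = 1;    /* allow write/erase cycles                 */
    EECON1bits.FREE  = 1;    /* the next write starts a page erase       */

    INTCONbits.GIE = 0;      /* the unlock sequence must not be broken   */
    EECON2 = 0x55;           /* required unlock sequence                 */
    EECON2 = 0xAA;
    EECON1bits.WR = 1;       /* start the erase: execution stalls here   */
    INTCONbits.GIE = 1;

    EECON1bits.WREN = 0;     /* lock the flash again                     */
}
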
Data memory can be accessed either linearly or through banks of 256 bytes each, depending on the assembly instruction you use. Data memory for PIC18s goes up to 4k; again, there’s nothing in the design that prevents the CPU from addressing up to 64k of RAM. In the data memory there is a special section (the Special Function Registers) where the hardware registers can be accessed.
The PIC18 architecture becomes quite funny with the SFRs, since you find the usual timer, interrupt and peripheral control registers alongside CPU registers such as the status flags and the W register (a sort of accumulator). Furthermore there are registers that basically implement specific addressing modes. For example the PIC18 has no instruction for indirect addressing (i.e. reading from a location pointed to by a register); if you want to access a location indirectly you have to load its address into an SFR (say FSR0) and then read from another SFR (e.g. INDF0). If you want a post-increment you read from POSTINC0 instead.
That may sound elegant, but it is a nightmare for a C compiler: basically any function that accepts a pointer could thrash part of the CPU state, since most of the CPU state is memory mapped!
That’s also the reason why, conservatively, the C compiler pushes about 60 bytes of context onto the stack on entering a generic interrupt handler.
There is a third memory in every PIC18F – the hardware return stack. This is a LIFO memory with 31 entries; each entry is the return address stored every time a CALL (or RCALL) instruction is executed.
Still on the CPU side, the PIC18F features two levels of interrupts – high priority and low priority – and you can assign every interrupt source on the MCU to one level or the other.
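With MCC18 the two levels map onto something like the following sketch (the handler and function names are mine; 0x08 and 0x18 are the high and low priority vector addresses):

#include <p18cxxx.h>

#pragma interrupt high_isr          /* high priority handler */
void high_isr(void)
{
    /* clear the interrupt flag and service the time-critical event */
}

#pragma interruptlow low_isr        /* low priority handler: the one that
                                       pays the full context save */
void low_isr(void)
{
    /* service everything that can wait a little longer */
}

#pragma code high_vector=0x08       /* place a jump at the hardware vector */
void at_high_vector(void)
{
    _asm goto high_isr _endasm
}

#pragma code low_vector=0x18
void at_low_vector(void)
{
    _asm goto low_isr _endasm
}
#pragma code                        /* back to the default code section */
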
Talking about peripherals, you will find almost everything – from low pin count devices to 100-pin MCUs with a parallel port interface, external memory support and an Ethernet controller. Even in a 28-pin DIL package you find a number of digital I/Os, comparators, DA and AD converters, and PWMs. Every pin is multiplexed between two or three different functions. I2C and SPI are available on every chip, while a USB port is available only on a couple of sub-families.

Next time, I’ll talk about tools.

Referendum

Next June 12 and 13 there is a referendum grouping three questions on three completely different topics: nuclear power, water privatization and the “legittimo impedimento” (legitimate impediment). I don’t think I’m writing anything new by saying that under Italian law the referendum is abrogative, so if you want change you must vote YES, while if you want to keep things as they are you must vote NO. And it is no news that the referendum mechanism rewards the NO, because if the quorum is not reached (if I remember correctly, half of those entitled to vote plus one) the referendum is considered invalid. In other words, those who abstain count as NO votes.
I’m writing this post for two reasons: first, I believe these are important questions that deserve an answer from the Italian people; second, because it seems to me the topic is rather ignored and confused.
Nuclear power – it is true that the law has been amended and it may seem that this referendum is no longer needed, but, by our prime minister’s own admission, the law was changed precisely to avoid the referendum and thus to be able to put nuclear power back into the Italian energy plan within a year or two. As far as I understand, the referendum question should still be on the ballot, so it is important to vote on it if you want to keep nuclear power plants from being built in Italy.
Water privatization – although it gets less attention, this question is no less important than the first: you can die from a nuclear accident, but without water you cannot live at all. Unfortunately in Italy the private sector does not work when public contracts are involved, as those for aqueducts would be; see the Report episode on water. It seems this topic is taboo and cannot be talked about.
Legittimo impedimento – while for the other two questions the overriding interest of the community is clear and only needs to be restated to our representatives, for this one the debate is perhaps more open. The Constitutional Court has already ruled on the legittimo impedimento, reducing the scope intended by the legislator. Since we are all supposed to be equal before the law (the Constitution says so), either the legittimo impedimento applies to everyone or to no one – nobody is more equal than the others. Not to mention that current Italian law is already heavily tilted toward protecting the defendant, even at the expense of the injured party, so perhaps there is no need for this additional mechanism too.