Tag: programming

Anniversaries

Happy Birthday! BASIC just turned 50. It must have been 1981 or ’82 when I first saw a BASIC listing. The magazine was named Nuova Elettronica (New Electronics) and featured a series of columns about building a computer. I remember the writers were very proud they had managed to license an Italian version of BASIC. It was certainly weird (even more so in hindsight): some of it was understandable (LET a=3, in Italian “SIA a=3”), some of it was pretty obscure (FOR i=1 TO 10, “PER i=1 A 10”). I had no computer and the Internet wasn’t even in my wildest sci-fi dreams, so I wondered how those lines could produce 10 high-resolution (!) concentric rectangles. One rather puzzling statement was LET a=a+1. I understood equations (I was in my first year of high school), but that couldn’t possibly be an equation as I knew them. So I asked my math teacher, who ended up even more puzzled than I was: he stared at the line for a while, then muttered something about simulations and universe-changing semantics.

Luckily, shortly afterwards another magazine, “Elettronica 2000” (“Electronics 2000”), started a BASIC tutorial. I read those pages until I wore them out, and learned BASIC. For some years programming and BASIC were practically synonyms. The first thing you saw when you switched on a ZX or a Commodore was the BASIC prompt. The machine was ready to be programmed.

The BASIC era, for me, ended with the Amiga, years later (at that age, years are eons indeed). Microsoft BASIC for the Amiga was pretty unstable, and real performance could only be achieved with C. Maybe the tombstone was laid in my second year of university, in the first Computer Science lecture, when the professor wrote “BASIC” on the blackboard and then struck it out, saying: “This is the last time I want to hear about BASIC”.

Talking about anniversaries, I think it is noteworthy that the first message on my blog was posted 10 years ago. I would have liked to celebrate this with a new look for my website. I have been working on it for a while, but it is still not ready. I hope to have it online before my website comes of age.

Complexly Complicated

The Embedded Muse is always filled with interesting links and articles. In the latest issue I read “Compared to what?”, about the cost of firmware. In fact, I think the main problem we programmers face when trying to have our work recognized by non-programmers (usually those who sign the paychecks) is that it is very hard to describe to them what we do. Explaining in detail how hard it is may well be beyond our reach. That is to say, programming (real programming, not integration or recompiling demo projects) has a non-intuitive degree of complexity.
This reminds me of a recent project whose complexity I routinely failed to get upper management to appreciate, since they perceived it as a replacement for an electrical system (a couple of lights, buttons and wires). In fact, the firmware equivalent of that simple system was completed in a short time. The rest of the time (quite a large amount, actually) went into what a plain electrical system couldn’t do (graphical display, history, complex priority and communication management).

Necessary, but not Sufficient

“How can you say that 2+2 is not 5?! Never ever?! It is only because you believe this. For sure there will be someone out there, better than you at this, who will be able to make 5 out of 2+2.” That is roughly the transcript of a heated discussion I overheard in a nearby office. They were not talking about mathematics, though, but physics. Basically my colleague was claiming that a given result could not be achieved because of something that is physically impossible.
Just a few days ago I wrote about seeing atoms, and about how something considered impossible should (basically) be regarded as an opportunity in disguise. So I felt the urge to share a couple more thoughts.
It is important – I believe – to make the distinction clear. Photons do not hit a single atom (I’m sure this is not a formally correct statement, but it helps to get the hang of it), so it is true that you can’t see single atoms. And the human race lived with that for quite a long time. Then someone had the technology to map atoms onto a display screen and made the magic happen, but it is still true that you cannot watch them with your bare eyes or with any optical microscope, however powerful.
It doesn’t matter how much your boss yells at you, it is something that cannot be done. And it would still be impossible if you didn’t have the budget to build the complex machinery that does the job with special sensors. If your customer requirement is “I wanna see atoms with my bare eyes”, then it IS impossible.
If you want a significant advance, you have to mix lateral thinking with quite a lot of money. But, most of all, you have to be ready to accept that though these conditions are required, they do not guarantee any result.

You can’t see atoms

You can’t see atoms. That’s what I was taught. Well, everything you see is actually made of atoms, but you can’t see single atoms – that was the meaning of the teaching. It is simply not possible – it has to do with atom dimensions and light wavelength. I won’t delve into the scientific reasons, but every serious physics textbook will confirm this. You may imagine my surprise when I saw some pictures of actual molecules. In fact, it is true that you can’t see atoms (or molecules), but you can use specialized sensors to produce a visual representation of them. Yes, it is not seeing directly by light rays, but it is fully equivalent for all practical purposes.
That’s the idea – something is considered impossible until someone comes up with an idea to work around the limitations and voilà, what once was considered impossible is today within everyone’s reach.
This is what my friend Jimmy Playfield (the name is fake to protect his privacy) told me.
Some days ago Jimmy’s boss called him and all the senior programmers and senior hardware designers to assign them homework for the short vacation they were about to take.
The goal was to find a way to work in the company environment and successfully craft project after project. The constraint, though, was that they couldn’t change anything in the chaotic way projects were handled.
Here is a summary of the constraints –

  • budgets are fixed, no increase in the engineering workforce;
  • requirements are roughly defined at the beginning of the project, then they continue to pour in during the entire lifetime of the project;
  • resources are not statically assigned to a single project, but may be temporarily (and not so temporarily) shared among two (or more) projects;
  • contracts are naively written, no real fences and firewalls are set up to force the customer to behave constructively nor to protect the company from customer whims;
  • the project manager role has been declared useless, so the company charges the responsibilities of project management onto project specialists;
  • there are more projects than programmers;
  • no real policy for resource management, no motivational incentives, no benefits, no training (well, here I guess Jimmy was exaggerating somewhat).

Easy, ain’t it?
Well, it sounds like a hard problem. Let’s see what “project management 101” has to say about this situation.
First, the triangle of project variables. Basically, for every project there are three variables – Budget, Features and Time. Of these three variables you can fix two (any two), but not the third. E.g. you can fix Time and Features, but then Budget is a consequence; or you can fix Budget and Time, but then Features are a consequence (this is the agile methodology configuration).
Usually projects at Jimmy’s company have Budget and Time fixed, so the only chance is to work on Features.
The Features variable should be understood as a mix of features and their quality. A poorly implemented feature is likely to take less time/budget than a robust, top-quality implementation. So they could slash quality.
The problem with working in this direction – ethical aspects aside – is that quality is usually the topmost motivational driver. Taking pride in what one does helps involvement in the project. When a programmer in despair states – “tell me what I have to do and I’ll do it” – that’s the end of motivation, the end of involvement and the sunset of efficiency. The programmer will consider that project an accident, something not worth burning his or her neurons on.
The other problem with slashing quality is that the legal contracts have to be armored to protect the company against a customer complaining about the lack of quality.
I can’t see any solution for them, at least, not within the system, much like you can’t see the atom without moving to another level.
So, by moving to a meta-level, you can break out of the system, e.g. by hiring a contractor to write the code for the project. This programmer won’t do any better than Jimmy and his coworkers – sometimes he will complete the project, sometimes he will fail. But for the company it is a win-win: if the contractor succeeds, the company wins; if the contractor fails, the company can blame him, and that’s a win again.
The major problem I see with this is that it is a bit suicidal, i.e. Jimmy and his coworkers become redundant and disposable as soon as the contractor is hired. Good luck, Jimmy, and let me know if you find a better solution.

The Nature of Quality

In the last days before the vacations, discussions at the workplace raged over the subject of software quality. The topic is interesting and we easily got carried away, throwing flames at each other about programming techniques and programming mastery (we stopped just short of personal arguments). Eventually we didn’t reach any agreement, but the discussion was interesting and insightful. More precisely, some of us claimed that in software development quality always pays off, while others claimed that the same project goals may be achieved earlier (thus more cheaply) by employing quality-less programming (I am overemphasizing a bit).
In other words, the question the office tried to answer was – is the cost of enhancing quality worth paying?
It is hard to get to a final answer, and that’s possibly why there isn’t a clear consensus even among programmers. Take the Lua designers, for example. They were adamant about not limiting the language’s expressiveness in some questionable directions, because they targeted the language at programs of no more than a few hundred lines written by a single programmer. In other words, they sacrificed some forms of quality because, in their opinion, enforcing it was just overhead without benefit.
Detractors of software quality have their reasons – programmers are human and as such may “fall in love” with their own creation or with the process. This causes either endless polishing and improvement, or the creation of overwhelming designs well beyond the scope of the project. If you question these programmers about their choices, the answer usually involves design for change, portability or re-usability. Seldom has an assessment been made to check whether these are really needed for the project, its future evolution or the company.
It is clear that the point is finding the right compromise. “Anybody can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.”
Unfortunately, bridge engineering (and other older and more stable engineering fields) is not much help beyond witty sayings. Bridge engineering does not have to take into account staged deliveries, demos, versioning, evolution maintenance (“yeah, nice bridge, now grow it three stories, add a panoramic wheel and a skiing slope”), the customer changing their mind (“yeah, that’s exactly my bridge, now move it over the other river”), nor specification problems (“I said ‘bridge’, but I meant ferry-boat”).
When talking about quality there is an important distinction to make – quality perceived by the user, and internal quality (perceived only by the programmers working on the software).
User-perceived quality is the opinion of the software’s customers. As such, it is the most important quality of your software. As an engineer you should write the program so as to spend the minimum needed to reach the required user-perceived quality, or slightly exceed it. No more, no less. (Just keep in mind that these aspects strongly depend on your customers.) This claim holds as long as you consider the whole project without any maintenance. Since maintenance is usually the most expensive part of a project (even up to 75% of the whole project cost), you may want to lower its impact by employing maintenance-oriented development techniques; this means that you must spend more during development than what is strictly needed to match the customer’s expected quality.
Internal quality is what we programmers usually refer to when we say “quality” – good code, well insulated, possibly OO, with patterns clearly identified, documented, no spaghetti, readable and so on.
Unfortunately for software engineering supporters, programs do exist with fair, adequate or even good perceived quality that have bad or no internal quality at all.
And this is the main argument against the strive for internal quality.
I think that this reasoning has a flaw. It relies on the hypothesis that since you can have fair to good perceived quality without any internal quality, internal quality is an extra cost that can be saved. But usually this is not the case. Even if there were no direct relation, would it be simpler to write a program where you keep just a handful of variables in mind to get to your goal, or one where you need to practice Zen in order to get a holistic comprehension of the whole, so as not to mess anything up while treading your way?
And anyway, it is unlikely that no relation whatsoever exists between the two natures of quality.
On this point both the literature and common sense agree: from medium to large program sizes, internal quality affects several aspects of user-perceived quality. E.g. if the program crashes too often, perceived quality can’t be any good.
Moreover – in the same class of projects – the overall cost of the software across its entire life cycle is lower if internal quality is high. That is, desirable software properties really help you understand and modify the software more confidently and with reliable results.
Quality-less supporters – at this point – fall back on picking single aspects of internal quality. For example, they can target reuse and state that the practice of reuse is not worth it, since in this project you pay a greater cost, with the risk that you won’t reuse the code in another project, or that the context will be so different that it will be cheaper to rewrite the code from scratch.
In this case it is hard to provide good figures for the savings of designing and coding reusable components. This is unfortunate both for internal quality supporters in their advocacy, and for software architects deciding whether a component has to be reusable or not.
Unfortunate also because there is a large band of components between those that obviously have to be made reusable and those that obviously don’t.
It also has to be considered that the techniques adopted to make a component reusable are – generally speaking – techniques that improve internal quality. Pushing this thought to the extreme, you can’t rule reusability out of your tool chest, because all the other techniques that improve internal quality drive you in the reusability direction, to the point that reusability either comes for free or at a little cost.
Despite what I wrote, I think that a problem does exist, and it occurs when the designer starts overdesigning too many components and the programmer starts coding functions that won’t be used in this project, just because.
A little overdesign can be a good idea most of the time (after all, design takes a small percentage of the development time); nonetheless you should always keep in mind a) the scope of the project and b) the complexity you are introducing in the code with respect to the complexity of the problem to solve.
Since you are overdesigning, you are not required to implement everything. Just implement what is needed for the next iteration of the project.
At this point you shouldn’t rush into moving your reusable component into the shared bin; you should keep it in an “aging” area, granting it time to consolidate. Before being reused it should prove it can survive at least one project (or most of it). Then, once the component has been reused in at least two projects and properly documented, it can be moved into the shared folder.
What I can’t understand, and what leaves me wondering, is that in many workplaces firmware is considered an accidental cost, something everyone would happily do without if only it were possible. The logical consequence would be to scientifically attempt to lower the cost of development and maintenance by finding the proper conditions. Instead, in many places there is blind shooting at firmware development, attempting to shrink every cost, with the effect that the cost actually increases. E.g. you can save on tools and training, only to discover that your cost per LoC is higher. I am afraid that the reason is either mistrust of, or ignorance about, the whole body of research done on software development since the 60s.

xc8: see xc8

The project with my “beloved” PIC18 is nearly over; the one I’m starting is based on a Freescale ARM with enough horsepower to run a modest desktop. Nonetheless, one interface (to the proprietary field bus) is still… you guessed it, a glorious PIC18. This is one of the reasons I keep an eye on the Microchip tools strategy for these chips. The other one is that the nearly-over project is likely to be maintained in the coming years to implement small changes and requests from the customer. So, what has happened in their yard since the last time I wrote?
First, MPLAB has been driven into retirement and replaced by a customization of NetBeans, named MPLAB X. This I consider a good move, since I much prefer this IDE over the sadly ubiquitous Eclipse. The reason is long to tell and may be partly invalidated by the latest progress, but that’s matter enough for another post.
MPLAB X is still young, and that may be the reason for some clumsiness in migrating from MPLAB 8.
The other strategic move Microchip is making is the unification of its two families of compilers. Microchip had the low-cost, under-performing mcc18, later re-branded as mplabc18, and the acquired, high-cost, high-performance HTC.
Microchip decided to dump mplabc18 and to add some compatibility to HTC so that mplabc18 code could be smoothly recompiled. This new version has been called xc.
Given what I witnessed using mplabc18 for over 3 years, I thought this had to be a good idea.
Then I took a look at the xc8 manual and discovered that the compiler is not capable of generating re-entrant code. Despite this, Microchip claims that xc8 is ANSI C (C89) compliant. I could rant about the fact that C89 has not been the C standard for at least 12 years now, but I will concentrate on re-entrant code (or the lack thereof).
A function is re-entrant when it computes correctly even if it is invoked again one or more times before a previous invocation has returned.
The first case is recursive algorithms. Many algorithms on trees are formulated quite intuitively in a recursive form. Expression parsing is also inherently recursive.
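To make the recursive case concrete, here is a minimal sketch of my own (names made up, written in plain C-compatible code):

// Each pending invocation of nodeCount() needs its own copy of `node`,
// which is exactly what a compiler that cannot generate re-entrant code
// will not give you.
struct Node
{
    struct Node *left;
    struct Node *right;
};

unsigned nodeCount(const struct Node *node)
{
    if (node == 0)
    {
        return 0;                   // empty subtree
    }
    // The function calls itself before the outer invocation has returned.
    return 1u + nodeCount(node->left) + nodeCount(node->right);
}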
Do you need recursion on a PIC18? Maybe not, also because – as you may remember – the little devil has a hardware call stack of only 31 entries.
Is there any other case where you need re-entrancy? Yes, pre-emptive multitasking – a task executing a function may be pre-empted and another task switched in; if you are unlucky enough, the new task may enter the same function (and in multiprogramming, unluckiness is the norm).
You may correctly object that multitasking on an 8-bit micro is overkill. I agree, even if I don’t like to give up options before I have started to analyze the project requirements.
Anyway, there is a form of multitasking you can hardly escape even on 8 bits – interrupts.
Interrupts occur whenever the hardware feels like it, or – more precisely – asynchronously. And it is perfectly legal, and even desirable, to have some library code shared between the interrupt and non-interrupt contexts.
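Here is a sketch of my own (hypothetical names, vendor-specific interrupt syntax left out) of such shared library code. If the compiler places the parameter and tmp at fixed RAM addresses instead of on a stack, the interrupt can corrupt the main-loop call it interrupted:

#include <stdint.h>

// Library routine used from both contexts. With static allocation there is
// a single shared copy of `raw` and `tmp` for every caller.
uint16_t scaleToMillivolts(uint16_t raw)
{
    uint32_t tmp = (uint32_t)raw * 3300u;   // 10-bit ADC reading to mV
    return (uint16_t)(tmp / 1023u);
}

static volatile uint16_t lastMillivolts;

void adcInterruptBody(uint16_t sample)      // interrupt context
{
    lastMillivolts = scaleToMillivolts(sample);
}

uint16_t pollSensor(uint16_t sample)        // non-interrupt (main loop) context
{
    return scaleToMillivolts(sample);
}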
When I pointed this out, Microchip technical support answered that this case is solved through a technique they call “function duplication” (see section 5.9.5 of the compiler manual). In fact, if you don’t use a stack for passing parameters to a function, you can use an activation record (i.e. the parameters and the space for the return value) at a fixed position in static RAM. Of course this makes your code non-re-entrant (that’s why you usually want a stack). But you can provide two levels of re-entrancy by duplicating the activation record and picking the right one according to the context you run in.
Note that you don’t duplicate the code, just the activation records.
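Here is a rough hand-written illustration of the idea, mine and not the compiler’s actual output: one statically allocated activation record per context, selected by the caller.

#include <stdint.h>

struct ScaleFrame              // the activation record: parameters + return value
{
    uint16_t raw;
    uint16_t result;
};

static struct ScaleFrame scaleFrameMain;   // copy used by non-interrupt callers
static struct ScaleFrame scaleFrameIsr;    // copy used by interrupt callers

static void doScale(struct ScaleFrame *frame)
{
    frame->result = (uint16_t)(((uint32_t)frame->raw * 3300u) / 1023u);
}

uint16_t scaleFromMain(uint16_t raw)
{
    scaleFrameMain.raw = raw;              // fixed address, no stack involved
    doScale(&scaleFrameMain);
    return scaleFrameMain.result;
}

uint16_t scaleFromIsr(uint16_t raw)
{
    scaleFrameIsr.raw = raw;               // a second copy, so the interrupt does
    doScale(&scaleFrameIsr);               // not clobber an interrupted main call
    return scaleFrameIsr.result;
}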
Neat trick, at least until you realize that a) the PIC18 has two interrupt priority levels, so you would need three copies of the activation records, and b) this is going to take a lot of space.
In this sense the call stack is quite efficient, because it keeps only what is needed at the current execution point. You will not find, on the stack, anything that does not belong to the dynamic chain of function invocations. Instead, if you pre-allocate activation records and local variables, you need that space all the time.
Well, this may not be completely true, since the compiler can actually perform convoluted static analysis and discover which functions are mutually exclusive. Say that function a() first calls b() and then c(), that b() and c() are never called elsewhere, and that their addresses are never taken. In this case the same memory space for activation records and locals can be reused, first to host b()’s stuff and then c()’s.
Note that this kind of analysis is very delicate and sometimes not even possible. You may have function pointers, and you may also have assembly code calling at fixed addresses. So I don’t think this kind of optimization can be very effective, but I would need to run some tests to support my claim.
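Just to picture the call pattern I described (my own trivial example):

// b() and c() are called only from a(), one after the other, and their
// addresses are never taken: a compiler doing this analysis could overlay
// their statically allocated parameters and locals in the same RAM area.
static void b(void)
{
    // b()'s locals are live only while b() runs
}

static void c(void)
{
    // c()'s locals could reuse the very same addresses as b()'s
}

void a(void)
{
    b();    // b()'s storage is dead once this returns...
    c();    // ...so c() can take its place
}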
Let’s get back to my firmware projects. The most complex one counts about 1000 functions; nearly 700 of them take one or more parameters.
Almost every one of them uses local variables.
Even if we count just a couple of bytes per function (and I am underestimating), we are talking about 2k on a micro that has 3.5k of RAM.
Yes, I have to admit that the static approach is very fast, because addressing is fixed: at any point you know the address of a variable at compile time. You don’t have to walk the stack, and direct access is fast.
Anyway, what I wrote is enough to show that porting a complex code base from mplabc18 to xc8 is not trivial.
One last case where you need re-entrancy is when you have function wrappers that accept function pointers. If the wrapped code calls other code that uses the same wrapper, you are again in need of re-entrancy.
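A minimal, self-contained sketch of my own of this last case (the names are made up): if forEach() is not re-entrant, the nested use inside printRow() corrupts the outer call.

#include <stdio.h>
#include <stdint.h>

typedef void (*ItemFn)(uint8_t item);

// A wrapper that applies a callback to every item of a small table.
// The loop counter must not be shared between nested invocations.
void forEach(const uint8_t *items, uint8_t count, ItemFn fn)
{
    for (uint8_t i = 0; i < count; ++i)
    {
        fn(items[i]);
    }
}

static const uint8_t row[3] = { 1, 2, 3 };

void printItem(uint8_t item)
{
    printf("%u ", (unsigned)item);
}

void printRow(uint8_t group)
{
    printf("group %u: ", (unsigned)group);
    forEach(row, 3, printItem);            // forEach() is re-entered here
    printf("\n");
}

int main(void)
{
    static const uint8_t groups[2] = { 10, 20 };
    forEach(groups, 2, printRow);          // the outer call needs its own counter
    return 0;
}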

Ugly Code

Unfortunately, I’m a sort of purist when it comes to coding. Code that is not properly indented, global and local scopes garbled up, obscure naming, counter-intuitive interfaces… all conspire against my ability to read a source and cause me headaches, acid stomach, and buboes. “Unfortunately,” I wrote, meaning that it is most unfortunate for the poor souls who have to work with me, to whom I must appear as a sort of source code Taliban.

Recently my unholy-code alarm went off when a colleague – trying unsuccessfully to compile an application produced by a contractor – asked me for advice.

The more I delved into the code, the more my programmer’s survival instinct screamed. The code was supposedly C++, and the problem was related to a class that I will call – to protect the privacy and dignity (?) of the unknown author – SepticTank. This class interface was defined inside a .cpp and then again in a .h. Many methods were inlined by implementing them in the class interface (and this was possibly part of the problem).

After resolving some differences, the code refused to link because there was a third implementation of the SepticTank destructor in a linked library. I claimed that such code couldn’t possibly work (even after disabling the dreaded precompiled headers – I have never seen a Visual Studio project work fine with precompiled headers); even if we managed to get it compiled and linked, the mess was so widespread that nothing good could come of it.

My colleague tried to save the day by removing the implementation of the SepticTank destructor, so as to keep the one found in the linked library.

In the end he had to give up, because the code was broken beyond repair: even when it compiled and linked, it crashed on launch (not really surprising).

What struck me most, basically because it caused a slight fit of dizziness, was the sight of the mysterious operator below –

class SepticTank
{
    public:
        // code removed for your comfort
        operator SepticTank* ()
        {
            return this;
        }
        // other code removed, again for your comfort
};

My brain had some hard moments trying to decode the signals from my eyes. Then it figured out that the coder was redefining the implicit conversion to the class pointer, so as to use instances and references where pointers were expected… why on Earth would one want something like that?!

Implicit conversions, if not handled correctly, are a sure way to kick yourself in the nose, and that alone is a good enough reason to stay away from them. But… trying to enter the (criminal) mind that wrote that code, what’s the purpose? Just to avoid the need for the extra ‘&’? Or is it a Javism? Maybe it is better to stay out of such minds…
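To show why that operator is asking for trouble, here is a minimal sketch of my own (not taken from the incriminated source):

#include <iostream>

class SepticTank
{
    public:
        operator SepticTank* ()     // the mysterious operator
        {
            return this;
        }
};

void inspect(SepticTank*)
{
    std::cout << "got a pointer\n";
}

int main()
{
    SepticTank tank;

    inspect(tank);      // compiles: the instance silently converts to a pointer

    if (tank)           // also compiles: the pointer conversion makes any
    {                   // instance "true", hiding a likely logic error
        std::cout << "always taken\n";
    }

    // delete tank;     // this would compile too, happily deleting a stack object

    return 0;
}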

I’d like to introduce…

Say you have slightly more than one hour to talk about C++ to an audience of programmers that ranges from the self-taught C-only programmer to the yes-I-once-programmed-in-C++-then-switched-to-a-modern-language type. What would you focus your presentation on? I started composing slides along the lines of “C++ for C programmers”, but it is a huge task and for sure the result won’t fit into a single week.
Also, I must resist the temptation to teach, since a single hour is not enough to learn anything.
So I am planning to reshape my presentation into wow-slides. I mean, every slide (no more than 10-15 of them) should show a C++ idiom / technique / mechanism that would cause a wow reaction in a C programmer or in a C++98 programmer.
Advice is highly appreciated.

Having a Good Time

Once upon a time, there was this Unix thing powering departmental mainframes. When it came to keeping track of system time, it seemed like a perfect solution to have a virtual clock ticking once every second; counting the seconds elapsed since the Epoch was the way to keep track of the current date and time. Such a clock, stored in 32 bits, had several advantages – it was simple, easy to store and manipulate, and it was guaranteed to be monotonically increasing for such a long time that the day this guarantee broke would be someone else’s problem. Now switch context: you are working on a micro inside an embedded system. The micro has the standard present-day System-on-Chip parade of peripherals, including a hardware timer that can be programmed to interrupt your code at whatever frequency is convenient for you.
Using 1 ms as a clock base may seem a little aggressive, but that timespan makes a lot of sense – it is unnoticeable for us sluggish humans, but it is sort of an eon for a speedy electron. In other words, it is a good compromise on which to base your time engine. On top of this you can build some timer virtualization, so that you may employ multiple timers, either periodic or one-shot, in your application.
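The skeleton of such a time engine could look like this (a minimal sketch of my own, with hypothetical names and the vendor-specific interrupt syntax omitted):

#include <stdint.h>

static volatile uint32_t msTicks;   // the "virtual clock", one tick per ms

void timerIsr(void)                 // hooked to the hardware timer interrupt
{
    ++msTicks;
}

uint32_t now(void)                  // time base for all the virtual timers
{
    // (atomic access to the 32-bit counter is glossed over here)
    return msTicks;
}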
So far so good, but there’s a catch. Consider using a 32-bit variable to keep the time (like the glorious Unix time_t). Starting from 0 at every power-up (after all, it is not mandatory to keep full calendar information), it will take one-thousandth of the time it takes Unix to run out of seconds before it rolls over. Is that far enough away to be someone else’s problem? No, it is much closer to your paycheck: if the variable is unsigned, 2^32 ms is about 4,294,967 seconds, i.e. some 49.7 days.
At the rollover you lose one important property of your clock – monotonic increase. One millisecond you are at 0xFFFFFFFF and the next you are at 0x00000000. Too bad.
Every comparison on time in your code is going to fail.
Is there anything we can do about it? Let’s say we have code like this –

if( now > startTime + workingSpan )
{
    // timeout
}

It seems perfectly legit, but what happens when startTime approaches the rollover boundary? Let’s say that startTime is 0xFFFFFFF0, workingSpan is 0x10, and now is 0xFFFFFFF8. Now is within the working span, so we expect the timeout condition to be false. Instead it evaluates to true, because 0xFFFFFFF0 + 0x10 wraps around to 0x00000000, which is less than now (0xFFFFFFF8): the code signals a timeout that hasn’t happened.
Is there any way around this short of going to 64 bits?
You may try to check whether the operation is going to overflow and act accordingly, e.g.

uint32_t LIMIT = UINT32_MAX - workingSpan;   // UINT32_MAX comes from <stdint.h>
if( startTime > LIMIT )
{
    // handle the overflow case...
}

Or you may check whether the operation overflowed –

uint32_t timeLimit = startTime + workingSpan;
if( now < startTime || now > timeLimit )
{
    // handle the overflow case
}

You may have to trim the code a bit for corner cases, but I wouldn’t bother, because there is a simpler way. An interesting property of modular (i.e. unsigned) arithmetic is that

(a - b) mod n = a - b

as long as 0 <= a - b < n. Unsigned subtraction wraps around exactly like the counter does, so the difference now - startTime is the true elapsed time even across a rollover, provided the elapsed time is less than n (here 2^32 ms). That means that if we transform the comparison from absolute to relative, we can forget about the overflow and rollover problem. So the following code:

if( workingSpan < now - startTime )
{
    // timeout
}

is correct from an overflow/rollover point of view, does not require extra computing time or space, and is as readable as the original code.
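To convince yourself, here is a tiny self-contained test of mine (names made up) that replays the scenario above across the rollover:

#include <stdint.h>
#include <stdio.h>

// True when more than workingSpan milliseconds have elapsed since startTime.
// Unsigned subtraction wraps modulo 2^32, matching the counter itself.
static int timeoutExpired(uint32_t now, uint32_t startTime, uint32_t workingSpan)
{
    return (now - startTime) > workingSpan;
}

int main(void)
{
    uint32_t startTime   = 0xFFFFFFF0u;   // armed just before the rollover
    uint32_t workingSpan = 0x10u;

    // still inside the working span: prints 0
    printf("%d\n", timeoutExpired(0xFFFFFFF8u, startTime, workingSpan));

    // 0x18 ms have elapsed, past the rollover: prints 1
    printf("%d\n", timeoutExpired(0x00000008u, startTime, workingSpan));

    return 0;
}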

The ideal job

For quite some time now, videogame studio recruiters have been sending me occasional mail proposing jobs (for which I am mostly unqualified, but it is always nice to be considered). This morning I received an opening in the south of France with the following benefits list

  • Christmas Bonus for all employees in November = about 1500€ for one full working year and on prorata for a shorter period
  • Salary increase each year for good performance (as an example in 2009, salaries increased of 7%, in 2010, salaries increased of 12%, in 2011, salaries increased of 8%)
  • Profit sharing system: Above a certain threshold, we give back to employees 50% of bonus gained on video game sales
  • Gift vouchers at the end of the year = 140 €
  • 19 luncheon vouchers per month of 8,80 € each (we cover 60 % = 5,28€ ), you pay only 40%.
  • 50 % of your public transport card will be reimbursed.
  • Retirement and health care – we take 100% in charge (for you and your family). And you perhaps know that the French state guarantee for a high level of health care insurance too.
  • We contribute to an agency that can helps you financially in buying or renting a flat
  • We will help you to find a flat to rent – we have a partnership with relocating company based locally (they help the candidate to find a flat, they welcome him at his arrival, help him with stuff like opening a bank account, discovering the city, finding schools for children, finding a good place to practice sport…..).
  • If you’re not speaking French, we would offer you French lessons to facilitate your integration
  • Media library: you can borrow consoles, videogames, DVDs… for free (see the IT guys)
  • Free cakes and soft drinks dispenser.
  • As we are more than 50 employees, we work as a work council – Its goal is to propose you some discounted products such as theater tickets, trips, cinema…and another of its roles is to be the mediator between employees and employer if needed.
  • From our studio, you will be on the beach in 1 hour and you will be on ski slopes in 2,5 hours !

How could I not be tempted? Especially by the one-hour distance from the seaside?!