button down -> light on

Picture this project – you have a battery-operated gizmo with LEDs that has to light up at a given time and switch off at another. Reality is a bit more complex than this, but not by much.

So what could possibly go wrong? How much could the development of its firmware cost?

The difficulty may seem on par with “press button – light on”.

As usual, the devil is in the details.

Let’s take just one component from the whole. Since the gizmo is battery operated, you need to assess the charge of the battery quite precisely – after all, it is expected to do one thing and to do it reliably. The battery charger chip (the fuel gauge) comes with a 100+ page datasheet. You may dig a pseudo-driver for this chip out of the internet, only to realize that it uses undocumented registers. Eventually you get some support from the producer and receive a similar, but different, datasheet, plus an initialization-procedure leaflet that is close but doesn’t exactly match either of the two documents.

Now you may think that a battery is a straightforward component and that the voltage you measure at its terminals is proportional to the charge left in it. In fact it is not, because the voltage changes with temperature, humidity and the rate at which you drain the energy. So the voltage may fall and then rise even if you don’t recharge the battery.

To cope with this non-linear behavior the fuel-gauge chip keeps track of energy in and energy out and performs some battery-profile learning, so that it can provide a reliable charge status and sound predictions about residual discharge time and charging time. The chip features an impressive number of battery-specific parameters.

Since the chip is powered by the battery itself (or by the charger) and doesn’t feature persistent storage, you have to take care of reading the learned parameters, storing them in some persistent memory, and writing them back at the next reboot.
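
To give an idea of what that save/restore cycle looks like, here is a minimal sketch in C. The register addresses, the I2C helpers and the NVM functions are hypothetical placeholders made up for illustration – they do not correspond to any specific fuel-gauge chip or HAL.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical register map of the learned parameters. */
#define GAUGE_REG_FULL_CAP  0x10u   /* learned full capacity        */
#define GAUGE_REG_RCOMP     0x12u   /* learned compensation value   */
#define GAUGE_REG_CYCLES    0x17u   /* charge/discharge cycle count */

typedef struct {
    uint16_t full_cap;
    uint16_t rcomp;
    uint16_t cycles;
} gauge_learned_t;

/* Assumed to exist elsewhere in the firmware (bus and storage drivers). */
extern uint16_t i2c_read_u16(uint8_t reg);
extern void     i2c_write_u16(uint8_t reg, uint16_t value);
extern bool     nvm_load(gauge_learned_t *out);   /* false on first boot */
extern void     nvm_store(const gauge_learned_t *in);

/* Called periodically and before shutdown: capture what the gauge learned. */
void gauge_save_learned(void)
{
    gauge_learned_t p;
    p.full_cap = i2c_read_u16(GAUGE_REG_FULL_CAP);
    p.rcomp    = i2c_read_u16(GAUGE_REG_RCOMP);
    p.cycles   = i2c_read_u16(GAUGE_REG_CYCLES);
    nvm_store(&p);
}

/* Called once at boot, after the gauge initialization sequence. */
void gauge_restore_learned(void)
{
    gauge_learned_t p;
    if (nvm_load(&p)) {
        i2c_write_u16(GAUGE_REG_FULL_CAP, p.full_cap);
        i2c_write_u16(GAUGE_REG_RCOMP,    p.rcomp);
        i2c_write_u16(GAUGE_REG_CYCLES,   p.cycles);
    }
}
```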

By the way, this is just one part of a system that is described by some 2000 pages of programming manuals specific to the components used in this project.

In a past post I commented on an Embedded Muse article about comparing the cost of a firmware/software project to that of the alternatives. The original article basically stated that for many projects there is no viable mechanical or electrical alternative.

This specific project could have an electromechanical alternative – designing a clock with some gears, dials and a mechanical dimmer could possibly be feasible. I am not so sure it would be that much cheaper. But the point I’d like to make now is that you basically can’t ship an electromechanical product today – it is not what the customer wants. Even if the main function is the same, the consumer expects the battery charge meter, the PC connection, the interaction. That’s something we take for granted.

Even the cheapest gizmo from China has a battery indicator!

And this is the source of the distortion: we (and the internal customer who requested the project) have a hard time grasping that, regardless of how inexpensive the consumer device is, that feature may have taken programmer-years to develop.

This distortion is dangerous in two ways – first, it leads you to underestimate the cost and allocate an insufficient budget; second, and worse, it leads you to think that this is a simple project that can be done by low-cost hobbyists. Eventually you get asked to fix it, but by the time you realize how bad a shape the code (and the electronics) is in, you are already over budget, with furious customers waving pitchforks, torches and clubs knocking at your door.

First Sprint

The first sprint is over and it has been tough. As expected we are encountering some difficulties, mainly key roles without enough time to invest in Scrum.

Scrum, IMO, tends to be well balanced – the team has a lot of power, having the last word on what can’t be done, what can be done and how long it will take (well, it’s not really time, because you estimate effort and derive duration from it, but basically you can invert the function and play with the estimation rules to get what you want).

This great power is balanced by another great power – the Product Owner (PO), who defines what has to be done and in what order.

Talking, negotiating and bargaining over the tasks is the way the product progresses from stage to stage toward its shippable form.

In this scenario it is up to the PO to write the User Stories (brief use cases that can be implemented in a sprint) and define their priority. In the first sprint planning meeting, the team estimates one story at a time to set its Story Point value.

This is a straightforward process: just draw a Fibonacci series along a wall and then set a reference duration. We decided that a single person in one week can work through 8 story points. This is just arbitrary; you can set whatever value you consider sound. Having set a two-week sprint and being more or less 5 people, we estimated an initial velocity somewhere between 80 points per sprint and (halving that value) 40 points. The Scrum Master (SM) reads a User Story aloud and the team decides where to put it on the wall. Since the relationships between story points are very evident, it is quite easy and fast to find where to place new stories.
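
The arithmetic behind that initial capacity figure is trivial, but worth making explicit. The sketch below just multiplies the numbers quoted above; the halving factor is my own reading of the conservative end of the range (to account for meetings, interruptions and support work), not something prescribed by Scrum.

```c
#include <stdio.h>

int main(void)
{
    const int points_per_person_week = 8;  /* our arbitrary reference value */
    const int team_size              = 5;  /* "more or less 5 people"       */
    const int sprint_weeks           = 2;  /* two-week sprint               */

    int ideal        = points_per_person_week * team_size * sprint_weeks; /* 80 */
    int conservative = ideal / 2;                                         /* 40 */

    printf("initial velocity range: %d-%d story points per sprint\n",
           conservative, ideal);
    return 0;
}
```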

That meeting, which went well beyond the time-boxed 2 hours, was attended by the whole team, SM included, but not by the PO, who was ill (his very first sick day in years). We could have delayed the meeting, but the team would have been idling for some days… not exactly doing nothing, you know – there’s always something to refactor, some shiny new technology to try – but for sure we wouldn’t have gone the way our PO would have wanted us to go.

So we started, and in a few moments it became clear that we were too many. In a project that ranges from the firmware to the cloud, when a specific story was discussed many people were uninterested, became bored and started doing their own stuff. In the end about three of us were really involved in the estimation game.

The absence of the PO at that specific moment was critical – we missed the chance to set proper priorities on the tasks, since we had to rely on an email and couldn’t ask our questions and get proper feedback. In the end we discovered that the highest-priority User Story had been left out of the sprint.

The other error we made was not decomposing the User Stories into Tasks. This may seem redundant at first, but it is really needed, because a User Story may require different skills, and thus different people, to be implemented.

The sprint started and we managed to hold the daily scrum. This is a short meeting scheduled early each morning where each member of the team says what she/he did the day before, what she/he is going to do that day and whether she/he sees any obstacle to reaching the goal. The meeting is partly a daily assessment and partly a statement of intent where, at least ideally, everyone sets their goal for the day. Anyone may attend, but only the team may speak (in fact, has to speak).

The daily scrums were fine; we managed to host, via a Skype video call, the one or two programmers who were not co-located most of the time. The SM updated the planned/doing/done walls. The PO attended most of the time.

On the other hand, the team interacted only sparingly with the PO during the sprint, and only in part because his presence was often needed elsewhere. I also have to say that our PO is usually available via a phone call even when he’s not in the office.

This paired with the team underestimating integration testing and the Definition of Done. Many times user stories were considered done even though they had been tested only with mock-ups, not with the real physical systems. The deploy process failed silently just before the sprint review, leaving many implemented features undemonstrable. Also, our Definition of Done requires that the PO approve the user story implementation. This wasn’t done for any task before the sprint end; we relied on the sprint review to get the approval.

The starting plan included 70 Story Points and we collected nearly another 70 points in new User Stories during the sprint. These new stories appeared either because we found stuff that could be done together with the current activities, or because they needed to be done for the current activities to make sense.

Without considering the Definition of Done, we managed to crunch through about 70 points, which were nearly halved by the application of the Definition of Done (at the worst possible moment, i.e. during the Sprint Review).

Thinking about how to improve the process: working remotely is probably not that efficient – I noticed that a great deal of communication (mainly via Slack, but also via email and Skype) happened on the days those two programmers were off site.

The sprint end was adjusted to allow for an even split of sprints before a delivery, therefore we had somewhat less time than we had planned for.

The end/start sprint meetings (i.e. Sprint Review, Sprint Planning and Sprint Retrospective) didn’t fit very well in the schedule, mostly because the PO was too busy with other meetings and activities.

Am I satisfied with the achievements? Quite. I find that even just starting to implement the process exposes the workload and facilitates communication. The team’s pace is clear to everybody.

Is the company satisfied with the achievements? This is harder to say, and it should be the PO who says it. I fear that other factors affecting the speed at which the team implements the PO’s requests may get lumped in with Scrum, and the whole thing dumped. Any process has some overhead, and the illusion that a process is not really needed to get things done is always lurking close by.

Today we planned another sprint, but that is for another post.

Demotivational Poster

Yesterday my old friend Jimmy Playfield found a motivational poster on the wall beside his desk. It is usually quite hard to find a motivational poster that is not lame or, worse, demotivational, and the motto on the wall definitely belonged to the latter kind. An ancient Chinese proverb: “The person who says that it cannot be done should not interrupt the person doing it”. The intent, possibly, was to celebrate those who, despite the adverse situation and common sense, heroically work against all odds to achieve what is normally considered impossible.
Unfortunately reality is quite different. As a famous singer optimistically sang – one in a thousand makes it. That sounds like a more accurate estimate for those trying to attain the impossible. And likely, I would add, that one is the person who benefits from the advice and help of friends and co-workers.
Humankind didn’t reach the Moon just because someone was kept away from those who said it wasn’t possible. To achieve the impossible you need a great team, fully informed on why and how your goal is considered impossible. Usually great management and plenty of resources help, too.
This reminds me of a lesson at university. The teacher gave this example: “Microsoft, in its search for new features to add to its compiler line, may find that adding non-termination detection would be great. Imagine: you write the code and the compiler tells you that there is no way, under these conditions, your code will hang forever in some hidden loop. Wouldn’t it be really helpful? But this is not possible, and it has been proved by Turing.” But… according to the motivational poster, no one should be expected to tell the marketing department, or the poor programmer who volunteered to implement the feature, that no matter how hard he tries, that feature is simply not possible (at least on Turing-machine equivalents).
So that motivational sentence is basically against teamwork, informed decisions, information sharing and risk management.
A witty saying doesn’t prove anything [Voltaire], but sometimes it is just detrimental.
Well, I was about to close here, but I just read another quote by Voltaire: “Anything too stupid to be said is sung.” Well… that makes for a nice confrontation between Morandi and Voltaire.

The Lottery Syndrome & Other Hypotheses

Software projects, whether small or large, are about as likely to fail or to go over time/over budget today as they were 60 years ago. What is puzzling is that, over this period, academia and industry have defined, if not a complete theoretical system, at least a set of rules, best practices and “don’t” lists that, they claim, can provide accurate (or at least good enough) estimates of the effort required to complete a software project, while also providing a set of tools for managing such a project to reach the desired objectives. So the real question is why projects keep failing and management keeps ignoring the wealth of evidence that would lead to better handling of such projects.
Yesterday I watched the webinar “Software Estimation in an Agile World” by Steve McConnell. I “met” Steve years ago, almost by chance, reading his book “Software Project Survival Guide”. He is a great software engineer, so listening to what he has to say on the matter is always worth the time.
At a certain point in the presentation the point is made that progressing up the CMM levels not only allows a company to obtain very good estimates, but also reduces development costs.
So, what can prevent a wise company manager/owner/leader from climbing the CMM ladder? I have three hypotheses, and my guess is that the real reason is some specific combination of the three.

  1. Mystical Data. At CMM level 0 the company is in the wilderness of mystic evaluation. There is no scientific data, since there is no data collection and thus no data analysis. The company may, in good faith, reckon that the current practice is actually faster without the burden of a process.
  2. NAH Syndrome. Sometimes I hear this. It goes something like: “yes, CMM and a real process and the like are useful for large companies or for places like NASA and nuclear reactors [a favourite example], but CMM is Not Applicable Here, it would cost too much and it would slow us out of competitiveness”.
  3. Lottery Syndrome. This is very close to the reason you may want to play a lottery. You know that the chances of winning are low, BUT they are not zero. In the same way, the company knows that driving a project without CMM may be risky, but there is an actual chance that the outcome is a project delivered within the given time and budget, and in a way that makes the customer happy.

What do you think?

Reign of Terror

Thanks to LinkedIn Pulse I stumbled upon this excellent article: Why my employees don’t care. I wonder how many workplaces there are where bosses read this kind of article (and Peopleware, and Slack, and I could go on and on). In the end my impression is that those who read this kind of literature are not the people who have the real power to make things happen. I have always been uncomfortable with the “carrot and stick” method, not to mention the “Reign of Terror” one (project managers as watchdogs that must shout and menace to keep people doing what they must do). Looking back at my career, I expressed my profession and skills best when I was given (or managed to get) an “insulated” framework where I could work without a watchdog on my neck ready to bite at my first hesitation.

In the hope that, sooner or later, those in charge will read them too.

You can’t see atoms

You can’t see atoms. That’s what I was taught. Well, everything you see is actually made of atoms, but you can’t see single atoms – that was the meaning of the teaching. It simply is not possible – it has to do with atom dimensions and light wavelength. I won’t delve into the scientific reasons, but every serious physics textbook will confirm this. You may imagine my surprise when I saw some pictures of actual molecules. In fact it is true that you can’t see atoms (or molecules), but you can use specialized sensors to produce a visual representation of atoms and molecules. Yes, it is not seeing directly via light rays, but it is fully equivalent for all practical purposes.
That’s the idea – something is considered impossible until someone comes up with an idea to work around the limitations and, voilà, what was once considered impossible is now within everyone’s reach.
This is what my friend Jimmy Playfield (the name is fake to protect his privacy) told me.
Some days ago Jimmy’s boss called him and all the senior programmers and senior hardware designers to assign them homework for the short vacation they were about to take.
The goal was to find a way to work within the company environment and successfully craft project after project. The constraint, though, was that they couldn’t change anything in the chaotic way projects were handled.
Here is a summary of the constraints –

  • budgets are fixed, with no increase in the engineering workforce;
  • requirements are roughly defined at the beginning of the project, then continue to pour in during its entire lifetime;
  • resources are not statically assigned to a single project, but may be temporarily (and not so temporarily) shared among two (or more) projects;
  • contracts are naively written: no real fences and firewalls are set up to force the customer to behave constructively, or to protect the company from customer whims;
  • the project manager role has been declared useless, so the company charges the responsibilities of project management onto project specialists;
  • there are more projects than programmers;
  • there is no real policy for resource management, no motivational incentives, no benefits, no training (well, here I guess Jimmy was somewhat exaggerating).

Easy, ain’t it?
Well, it sounds like a hard problem. Let’s see what “project management 101” has to say about this situation.
First, the triangle of project variables. Basically, every project has three variables – Budget, Features and Time. Of these three you can fix two (any two), but not the third. E.g. you can fix Time and Features, but then the Budget is a consequence; or you can fix Budget and Time, but then the Features are a consequence (this is the agile-methodology configuration).
Usually projects at Jimmy’s company have Budget and Time fixed. So the only option would be to work on Features.
The Features variable has to be understood as a mix of features and their quality. A poorly implemented feature is likely to take less time/budget than a robust, top-quality implementation. So they could slash quality.
The problem with working in this direction – ethical aspects aside – is that quality is usually the topmost motivational driver. Taking pride in what one does helps involvement in the project. When a programmer in despair states – “tell me what I have to do and I’ll do it” – that’s the end of motivation, the end of involvement and the sunset of efficiency. The programmer will consider the project an accident, something not worth burning his/her neurons on.
The other problem with slashing quality is that the legal contracts would have to be armored to protect the company against a customer complaining about the lack of quality.
I can’t see any solution for them, at least not within the system, much like you can’t see the atom without moving to another level.
So, by moving to a meta-level you can break out of the system, e.g. by hiring an external contractor to write the code for the project. This programmer won’t do any better than Jimmy and his coworkers – sometimes he will complete the project, sometimes he will fail. But for the company it is a win-win solution: if the contractor succeeds, the company wins; if the contractor fails, the company can blame him, and that’s a win again.
The major problem I see with this is that it is a bit suicidal, i.e. Jimmy and his coworkers become redundant and disposable as soon as the contractor is hired. Good luck, Jimmy, and let me know if you find a better solution.

Managing a Project

The current project I am working on could be considered medium-sized. It involves 5 software engineers and 1 hardware engineer for about 8 months. I contributed to the planning of the software components, but I am quite critical of my own skill in predicting the future. Considering the pressure from top and middle management to complete the project by a given date, I would take my own planning predictions with great care. So I dared to ask the project manager if he was doing some sort of risk assessment. His answer – “What the heck! If I had to do _even_ risk assessment I would have no time at all for anything else”.

Slack

I hate wasting time. “If you have time, don’t wait for (more) time” is one of my favorite sayings. When I started working I had a short commute by car and some time got wasted waiting for traffic lights to turn green, so I started reading books during those waits.
With my MS-Windows machine becoming slower and slower at startup, I decided to spend the waiting time reading books. At work I started a copy of “Slack” by Tom De Marco. Being a book on efficiency, it seemed fitting to read it in recycled time.
If you work in software development (or if you are a “white collar” worker at large) you ought to know who Tom De Marco is. Or, at least, you should have heard of the book he wrote with Timothy Lister – Peopleware. It is one of the most referenced books: almost every book on project management written in recent years quotes Peopleware.
Back to Slack: this is a book intended to be read in a short time, conveying good and bad practices about efficiency, innovation and risk taking in today’s organizations.
The author states about an hour and a half as the reading time; I guess it is somewhat more, even if you read it from page 1 to the end in one go (and not scattered over a couple of months of PC boots). Nonetheless the thought-provoking content is very dense. Chapters are short and almost every one made me think.
It is possible I will write a summary of it in the future, but I strongly encourage you to read it yourself.
Tom hacks through myths, trying to grasp what an organization needs in order to survive in times of change and crisis. The first myth to fall is that of total efficiency. Total efficiency means total rigidity. An organization that maximizes efficiency cannot withstand change because it lacks the buffers and the spaces needed to react to change and adapt.
Then he digs through the (wrong) assumptions about making an organization more productive – pressure, competition, fear and so on.
Eventually he bashes Management by Objectives and illustrates what it really means to deal with risk.
For example, a plan should be an estimate and, as such, should offer a range: cannot be completed before X, will surely be completed by Z, with a best estimate at Y, where X <= Y <= Z. Now that’s a plan, and it is quite different from setting a goal. In my experience, someone defines a plan, someone else carves it in stone and says that’s the goal. This way of operating ignores risk, ignores that an estimate is just an estimate, not a commitment, and it is a sure way to be late on completion. In fact, even if the most likely date (Y in the example above) is taken, it is usually achievable only with a probability of about 33%.
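
To make the range idea concrete, here is a tiny sketch. The PERT-style weighted mean used below is not taken from De Marco’s book; it is just one common way to turn an optimistic (X), most likely (Y) and pessimistic (Z) figure into a single planning number, and the sample values are made up.

```c
#include <stdio.h>

int main(void)
{
    double x = 4.0;   /* optimistic: cannot be completed before 4 weeks */
    double y = 6.0;   /* most likely estimate                           */
    double z = 12.0;  /* pessimistic: surely completed within 12 weeks  */

    /* PERT weighted mean: pulls the planning value toward the long tail. */
    double expected = (x + 4.0 * y + z) / 6.0;

    printf("estimate range: %.1f to %.1f weeks, most likely %.1f\n", x, z, y);
    printf("expected planning value: %.1f weeks\n", expected);

    /* Committing on y alone ignores everything between y and z, which is
     * why the most likely date is hit only a minority of the time. */
    return 0;
}
```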
De Marco also marks Management by Objectives (MBO) as a “don’t”. I had already read criticisms of MBO, but more about the bonus system than about the objectives themselves. De Marco argues that it is hard, if not impossible, for the objectives defined for employees to lead the organization toward the company objectives. Also, in a time of change it is hard to hit an objective even when the actions taken are excellent for the company.
I found this particular opinion a bit forced when applied to software development. I think that an objective-and-bonus scheme over the course of a project, if properly done, could be helpful. Maybe MBO is just crap when applied to sales or other areas.
I liked the book a lot and I think it is not just for managers (regardless of their level). The more widespread the ideas it contains, the better the lives of fellow programmers.

Requirement baseline

I thought that by now two main schools of thought had established themselves with regard to requirements: the XP-ish “waste-of-time” school, which pretends this stuff belongs to NASA and similarly priced development and is clearly NAH (Not Applicable Here), and the “we-need-them-right” school, which believes in properly written and managed requirements documents. Of course I am biased – I belong to the second school, since I don’t buy the XP gibberish and I firmly believe that a sound methodology may not be The Solution to all software development troubles, but it is surely part of it.
So I was a bit surprised when I received a 35-page requirements document that is actual crap. Joel Spolsky wrote a sort of basic requirements-for-requirements in four parts. Below is my own list. I don’t want to compete with Joel (really, do read his post, including parts 1, 2 and 3 – it’s worth it), but mine is shorter and should fit even a tight schedule, should you be asked to write specifications.
Think of it as a baseline set of rules of thumb:

  • start by describing the system to which the specification refers: what it does, which kinds of existing systems do more or less the same thing;
  • don’t go further than that. These are specifications, not software design: leave out the “how”s;
  • define every acronym you use. Not everyone knows even the most basic acronyms, let alone the more exotic ones (I’m still wondering what “TBP” means). Also define terms that are meaningful only to those who already know what you are talking about;
  • list requirements in a way that lets you refer to them. Number them or, better, use textual tags, so that you can reference them in the same document or in the documentation that follows;
  • don’t use the following words: “some”, “etc.”, “good”, “bad”, “fine” – and no ellipses. The idea is that you have to be precise and define the entire system without leaving open holes;
  • use a spell checker. It’s easy: every modern word processor has one, just switch it on and fix the words underlined by the red squiggle (you should really do this for everything, not just the requirements document);
  • re-read everything. People are going to read and work on that document, so re-read and fix, and iterate until there is nothing left to fix.

As trivial as they seem, all these rules were broken at least once in the document I was handed.

Software project failures

Yesterday I stumbled upon an article about why software projects fail. Though you can hardly find any surprising claim in it, it is nonetheless a quick paper you can hand over to your colleagues and/or managers when trying to improve the chances of a successful project. What I find quite interesting is the table near the beginning, the one titled “Project Challenged Factors”. What drew my attention is that the “other” item weighs more than any single entry in the top ten. In other words, the causes of challenged projects are so widespread and so equally important that there is no single main culprit.
We programmers work on such a complex and delicate mechanism that any one of hundreds of factors could drive the entire thing crazy. On the other hand, the number of variables involved is so high that you cannot shoot at a single target and expect to ensure project reliability.
Take software requirements, for example (missing or incomplete requirements account for 12.3%): while I am not advocating entering a project without them, it is clear that in some circumstances the lack of such documents is not dooming per se. Take an expert team that knows all the ins and outs of the domain and has a crystal-clear idea of what is needed – in that case the project could perhaps get along with no SRS.
So one might be tempted to strictly follow every “best practice” to minimize the risk of failure. But this is not likely to work either – every practice has its cost and some practices (take code review) are extremely daunting.
Therefore it is all a matter of balance: picking the right decisions, having good reflexes and being willing to occasionally work harder to fix what has gone wrong.
Following this line of thought, the best insurance you can have for your projects is an experienced and well-jelled team with a) an experienced leader and b) a good track record. This isn’t rocket science either; you can find similar assertions in many software project management books (Peopleware is the first that comes to mind).
On the other hand, projects keep failing. This leads me to two considerations: first, even with the best premises a challenging project can fail – we have to accept this for today’s projects and try to minimize the impact of failure. Second, how is the failure rate changing over time? The first study quoted in the article is more than 13 years old, and that’s a huge time in the software industry.