It starts out innocently: a few documents, a few signatures. Before you know it, the product pipeline is enveloped in process, in chains of paperwork, in chasing phantom senior figures for signatures to approve features they know nothing about. In government, it’s known as bureaucracy. In IT, it’s a breakdown of trust, leading to the blame game and an accountability whirlpool.
There are two extremes in how you develop and deploy code:
A: Full Trust
An in-house development team with experienced developers who know the business. You give them the basic requirements and they rapidly break down the impact and the work required. With the system fresh in mind, they begin development, perform sufficient testing and, believe it or not, release straight to production. Unsurprisingly, there are issues, but the issues come straight back to the developers – they fix and redeploy. The cycle continues for a while until the product is stable.
Obviously, the developers are paid handsomely, but the planning, admin and validation cost savings are enormous.
Then one of the developers has to leave the business. A new junior is employed and set to work on a new feature. It’s a disaster. They miss key business requirements, make fundamental code mistakes, and the bugs that go live result in damaged data, damaged reputations, and broken trust.
B: Full Accountability
Imagine the opposite solution: rigorous, fine-tuned, comprehensive, complete. The requirements of the system are drafted by those who understand the system and know how to communicate it. The development team are given a clear picture of what is required, and they roll out the solution to a team of validators who are specialised in recognising common bugs. They have a detailed test plan and repeat it flawlessly. The key owners in each department sign off their contributions in satisfaction. When something goes wrong, it can always be traced back to its point of origin – there is accountability.
The system goes live and it’s perfect. There isn’t a single bug, the customer is happy, the business is happy. Right?
Let’s be honest, System A sounds rather raw, and System B seems to be far more professional. In theory it should cover all the angles, and yet here we are, decades after the invention of punch cards, and systems big and small are failing like clockwork. Why?
Let’s break down the issues:
System B requires quality at every level, because it’s a chain that’s only as strong as its weakest link.
- Requirements can exceed the limits of the technology
- Code can achieve requirements but perform poorly and be hell to maintain
- Validation can miss flaws that developers would think of
- Deployment can miss non-standard release steps
What is the answer to each failing? More processes, more structure, more paperwork, more meetings. And with each new step comes additional cost, while the benefit-to-cost ratio steadily deteriorates. In other words, if you have a team of two validators, adding a third might genuinely increase productivity by a third. Adding a tenth might only increase value by a thirtieth, and adding a twentieth could generate so much communication, documentation and meeting overhead that you start to lose value.
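The diminishing-returns claim above can be sketched with a toy model (every constant here is illustrative, not taken from any study): suppose each validator contributes one unit of work, while coordination overhead grows with the number of pairwise communication channels, n(n−1)/2, at a small per-channel cost.

```python
# Toy model of diminishing returns from team growth.
# Assumptions (made up for illustration): each person contributes
# 1.0 unit of work, and coordination overhead grows with the number
# of pairwise communication channels, n*(n-1)/2, at a cost of 0.05
# units per channel.

def team_value(n, per_person=1.0, channel_cost=0.05):
    """Net value produced by a team of n people."""
    channels = n * (n - 1) // 2  # pairwise communication channels
    return n * per_person - channels * channel_cost

for n in (2, 3, 10, 20, 30):
    marginal = team_value(n) - team_value(n - 1)
    print(f"{n:2d} people: value={team_value(n):6.2f}, marginal={marginal:+.2f}")
```

Under these made-up numbers the marginal value of each new hire shrinks as the team grows, and past a certain size (here, around twenty people) adding another person makes the total worse, which is the pattern the paragraph above describes.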
The worst outcome is the real possibility that funds simply run out and the project fails. The IT industry is a catalogue of over-budget projects that never saw the light of day.
In System A, you certainly have a much stronger chance of releasing a bug into production, but on average the cost of fixing that bug is relatively small, because the process is so streamlined. Once the bug is identified, it goes straight back to the developers, who know which features it impacts; they fix it, test those features, and release. Ultimately it’s a cost-saving to allow some bugs through but fix them rapidly. This approach works particularly well with release candidates, alphas, betas and pilot projects.
Staff in general want to know that their work is valued and purposeful, and if you hear “what’s the point of this?”, then you have a red flag. I personally remember the feeling of running around the “top floor”, looking for specific managers who had to sign off a feature so that we could release it. They had no idea what the feature was for or whether it worked. They began to sign off blindly, which rendered the whole exercise pointless. Alternatively, they could have asked for an explanation, which would have required yet another communication, often lengthy, to someone who didn’t actually need to know or who failed to grasp the technical workings of the system. Either way, they had to decide to trust the developer, which is what the system should have started with in the first place.
How did this happen? Because System B insists on traceability and accountability. When something fails, instead of fixing it and moving on, a finger needs to be pointed. Find the signature, the owner, and have them explain how the bug could have happened.
In theory, accountability prevents bugs from happening in the future; in reality, it reduces ownership, fuels the blame game and saps motivation. With the need to blame comes the need to defend and deflect, so you get a chain of redirection that puts everyone on edge and adds tension to the organisation. Staff then stick purely to what they were given official written instruction for, so that if something goes wrong they can defend their corner. What you end up with is: “It’s not my problem”, “I just work here”. Blame is intended to promote ownership, but it produces the opposite.
When motivation deteriorates, staff leave and take their training and product knowledge with them. System A obviously can’t function in that environment, because it depends heavily on staff loyalty and retention of expertise. And when you have the regular staff turnover of System B, you get caught in a cycle of re-training, where every new developer starts rewriting the system in their own vernacular, resulting in spaghetti code.
By now it should be easy to understand that System B products are very slow to market. In the modern era, by the time you’ve finished conceptualising and begin development, your System A competition already have something on the ground. Economic theory calls it “first-mover advantage”. Even if the early product has bugs, they have the efficiency to correct them. And even if there were obvious flaws in this competitor’s product, would System B have the speed to catch them off-guard?
“Too many chefs spoil the broth” – the more opinions you add, the more you have to debate and justify every feature, and so often the victors are the loudest rather than the cleverest. While it’s easy to understand that a one-man team can use extra input on features, that rarely needs to be more than one or two voices.
A common organisational practice is to have representation from each department. Work becomes “I have another meeting”, and the amount of genuine input that you might have is on average pretty small. Much of that conversation is either not needed at all, or can be handled “offline”. Too many opinions obviously increase your cost, but also result in product confusion. In a manner of speaking, dictatorship is more efficient than democracy, and often more correct, as long as you put the right people in charge. If you have the wrong people developing your product, then the answer is not to be more democratic, but to change your personnel.
It’s Called “Agile”
System B is called the “Waterfall” methodology; System A is called “Agile”, and it’s taking the market by storm. But… traditional organisations and customers need to understand the bug-repair cycle – expectations must be managed so that when incidents arise, they’re expected and handled calmly.
Let’s not kid ourselves, “lean” agile requires a high level of software quality, but that’s where the improvements need to be targeted. Up-skill your developers rather than impede them with processes. Don’t push for more accountability, push for more trust and rework whatever might be breaking down that trust.