Enterprises that need software which they can’t buy must build it. This is a notoriously risky and expensive undertaking. The right approach has been well understood since the late 1960s but has rarely been applied. It is undergoing a revival now, and you can have it too, if you choose.

From the earliest days at the University of Manchester, those who write programs for electronic computers have been astonished at how hard a thing that is to get right. Although the state of the art has advanced over the past fifty years, the business problems which developers address have also become hugely more complex.

Building software systems is still an undertaking not for the faint-hearted. Alongside the evolution of technical practice, those who manage software development have also sought to improve the way that programming work is planned, tracked and managed. Unfortunately, a lot of this thinking has been based upon false premises. Those mistaken ideas have been attractive for various reasons, but they have largely been unsuccessful in practice.

The new old thing

By the time of the first ‘Software Engineering’ conferences of 1968 and 1969 it was, after twenty years of practice, well known how to be successful at writing software: have the programmers in close and continual contact with users, begin testing before coding is complete, begin coding before design is complete, begin design before analysis is complete, and begin analysis before requirements gathering is complete.

Build a minimum set of functionality and put it in the hands of users early to gain feedback. Build a first cut of a system knowing that you will throw it away and build a new one. This is what most organisations that are successful at building software actually do.

It was also well known what characteristics tended to make development unsuccessful: trying to get everything right first time, trying to follow very detailed plans extending far into the future, having independent streams working in isolation until a big-bang integration near delivery time, compiling comprehensive requirements documents and handing them over to developers, and, worst of all, testing only towards the end of the project.

There are some perhaps surprising results here, and they don’t fit particularly neatly into a model of programming as a branch of mathematics (as some of the folks at those conferences argued), nor do they fit well with a project management approach that owes its origin to manufacturing.

Software development is not a completely novel activity, and there are valuable lessons to be drawn from other disciplines. Yet software does have a particular nature and particular properties, and attempts to manage it without due attention to them are unlikely to be successful.

Machine assistance

There was a time when processor cycles were much more expensive than developer thinking time. This is no longer true. The laptop I am using to write this can supply around four thousand million cycles per second, which is far, far more than I can expect to use, even though those cycles are exclusively at my disposal.

A premier machine of the era when much of the foundational thinking about software engineering was done might supply around fifteen million cycles per second, and it would be expected to meet the needs of a department or an entire enterprise.
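A rough back-of-the-envelope calculation makes the gap concrete. The figure of a hundred people sharing the older machine is purely an assumption for illustration, not a claim about any particular installation:

```python
# Rough illustration of the shift in the economics of processor cycles.
# The department size of 100 is an assumed figure, for illustration only.
modern_laptop_hz = 4_000_000_000   # ~4,000 million cycles per second, one programmer
old_mainframe_hz = 15_000_000      # ~15 million cycles per second, shared
department_size = 100              # assumed number of people sharing the old machine

cycles_per_person_then = old_mainframe_hz / department_size
ratio = modern_laptop_hz / cycles_per_person_then
print(f"Cycles per person, then: {cycles_per_person_then:,.0f} per second")
print(f"A programmer today has roughly {ratio:,.0f} times as many cycles to themselves")
```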

A quantitative difference this large becomes a qualitative difference. The severe economic constraint on system builders of the past, a drought of cycles, has been replaced with an embarrassment of riches. And yet programmers have not grown particularly smarter.

There are a lot more of them, but as with any other field of endeavour the range of performance is large, and highly skilled programmers are still rare. Organisations often find themselves trying to manage software development in a way that is mismatched with the economics of the situation.

Economically sensible programming

If cycles are scarce, we tend to want to use them for value-adding activities and not burn them up on supporting activities such as programming. This desire is one driver behind the ‘get it right first time’ school of thought. But if programmer thinking time is the scarce resource, we should try to make the best possible use of it, assisted by the abundant one.

So, rather than burn large amounts of developer effort on a ‘design’ activity trying to get it right, implement a design and run an experiment on the machine to see if it is good enough; if not, try another.
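As a sketch of what ‘running the experiment’ can look like, the snippet below times two hypothetical candidate designs rather than debating them on paper. The designs themselves (a list scan versus a set lookup) are placeholders chosen for illustration:

```python
import timeit

# Two candidate designs for a membership check; the point is to let
# the machine judge them rather than arguing about them in advance.
data_as_list = list(range(100_000))
data_as_set = set(data_as_list)

list_time = timeit.timeit(lambda: 99_999 in data_as_list, number=1_000)
set_time = timeit.timeit(lambda: 99_999 in data_as_set, number=1_000)

print(f"list scan:  {list_time:.4f}s for 1,000 lookups")
print(f"set lookup: {set_time:.4f}s for 1,000 lookups")
# If neither is good enough, implement another candidate and measure again.
```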

The problem with testing towards the end of a project is that when tests fail (a good thing; that is their job) the project has the fewest resources available to diagnose the failure, which uses scarce developer thinking time, and then to devise a fix, which uses more of it. We don’t know how long those activities will take, and once they are done we will have to test again.

This is very risky; the risk is built right into the plan. But if we can redirect some of our abundant processor cycles to intensive automated testing, we can adopt an approach in which tests are so small that diagnosing failures is easy (and cheap and fast), and tests are run so often that the ‘fix’ can be as simple as backing out the last five minutes’ worth of programming and trying something else.
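A minimal sketch of what such a small, frequently run test might look like, using Python’s standard unittest module; the function under test is invented purely for illustration:

```python
import unittest

def apply_discount(price, rate):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - rate), 2)

class DiscountTest(unittest.TestCase):
    # Each test exercises one small behaviour, so a failure points
    # straight at the few lines of code written since the last green run.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 0.10), 90.0)

    def test_zero_discount_leaves_price_alone(self):
        self.assertEqual(apply_discount(42.50, 0.0), 42.50)

if __name__ == "__main__":
    unittest.main()  # run after every small change; back out if it goes red
```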

And so on. The old, old observations about effective, fast, cheap, low-risk and high-quality development have been given a new lease of life by a new economics. It’s been reported that the teams who wrote the software for NASA’s Project Mercury and other U.S. government projects in the 1950s used a development process almost indistinguishable from what we would today call Extreme Programming.

The trick was that they could afford to put programmers in front of computers, which commercial development organisations could not. Today, it would be inconceivable not to give every programmer their own computer. Today we can all do what the government-funded research projects of fifty years ago did, if we choose: build software the right way.