Notes: Information Apocalypse

People-Power and the Mythic Man Month

Dog Bites Man Month. Or does it? The criticism is sharp, but I'm not sure how deep it cuts. This comes back to the argument over whether software development is an art or a science. However scientific it becomes, it will be almost impossible to clear every vestige of craft from the process. People learn from their own mistakes, and the industry as a whole can learn from its larger collective mistakes.

Here's the thing: there is widespread, almost unanimous agreement amongst experienced programmers regarding what The Mythical Man Month has to say about the process of software development. This is both concerning (is the response to the aforementioned criticism indicative of a widespread lack of critical thinking when it comes to analysis of this famous book?) and a very telling sign that perhaps Brooks is on to something that's really there, cold hard evidence or not.

One of the reasons the book is so well known and widely cited is that Brooks was one of the first people to identify and clearly describe the massive gap in communication between managers and developers that occurs on larger projects - in the 1970s this marked the beginning of a shift away from purely theoretical approaches to program design towards a cultural understanding of the environment in which program design takes place. While it may be very difficult to answer clearly how adding more people to a project affects its outcome, there is a more important takeaway from The Mythical Man Month, and perhaps it relates to the problems mentioned above.

In my view (which, admittedly, is warped and biased by my interests in sociology and anthropology), dependency problems are communication problems. The very act of writing software is an act of communication. The problem of waiting on deliverables from other people shows that divide and conquer approaches using people-power are subject to all the fickleness and unpredictability of the human condition. One thing Andy Singleton is right about here is that effectively responding to these problems probably involves a movement away from hierarchies of control. This seems obvious to us today, but it would have been unthinkable in the Cold War dominated military-industrial environment where Brooks learned these lessons the hard way. Perhaps the reason the Linux kernel has succeeded despite an ever-growing number of contributors is that it never used a strict divide and conquer approach to addressing arbitrary requirements. Instead, it started out with a clear and simple goal and snowballed from there. Socially, its development more closely resembles cultivation and gardening than engineering and bridge building.

Being a complex mixture of chaordic elements and socio-technical assemblages (not so dissimilar to the strange agglutinates that led to the spawning of actor network theory), the craft of software development is not as clear cut as traditional methods of design and material production. Even if Brooks' work does not definitively answer the question of whether large teams can effectively collaborate, it still points to an important issue that plagues the industry - traditional methods of top-down design fail extraordinarily frequently when applied to software development. Unlike a large scale engineering project working with stable laws of physics, in the social environment of software, decisions are based on uncertain knowledge, with results that are difficult to reliably predict. This means it is often more effective to build a prototype and work out the fine details through doing rather than planning. Not that planning has no place, but it's much easier to do the right thing once you've done the wrong thing first and identified why it was wrong. Software design differs from other design disciplines in this regard (see the law of leaky abstractions and the last possible moment for design). This is the essence of build one to throw away, and it is a very important lesson, one that petty managerial approaches to agile methods often underestimate or get wrong.

As for the missing empirical evidence - that's a great idea. More research into how large software teams actually work would be very useful to a lot of people. But let's not forget that intuition based on experience ain't a bad thing when it comes to crafting good software.