Silver Bullets

Paradigms matter. The Pythagoreans reputedly killed the discoverer of irrational numbers because such numbers didn’t fit within their worldview. The Romans had a numerical system that was actively antagonistic to arithmetic, and made no significant mathematical discoveries. The Arabs, on the other hand, used much the same numerical writing system as we do today, developed the basis of modern mathematical notation, and were responsible for many advances over hundreds of years (algebra, induction, the binomial theorem, and more).

Over 60 years ago, Richard Feynman, Nobel Prize-winning physicist and all-around hilarious guy, came up with an innovative method for calculating sub-atomic particle interactions. Even for the simplest interactions, however, the resulting equations could run to thousands of terms, making them practically impossible to calculate. Enter the amplituhedron, a geometric object whose volume encodes the same calculation, dramatically reducing the complexity of the problem.

…Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like “amplituhedron,” which yields an equivalent one-term expression.

“The degree of efficiency is mind-boggling,” said Jacob Bourjaily, a theoretical physicist at Harvard University and one of the researchers who developed the new idea. “You can easily do, on paper, computations that were infeasible even with a computer before.”

– Natalie Wolchover, A Jewel at the Heart of Quantum Physics

No Silver Bullet

In his classic essay, Fred Brooks explains that “there is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity.” He then lays out the two kinds of complexity – accidental and essential. The former is self-inflicted – like coding in assembly instead of a high level language, or using ed instead of an IDE. The latter refers to the underlying complexity of the software being developed – if there are N features that need to be written, there’s no way to get around the fact that N features need to be specified with exactitude.

I believe the essay deserves another look, and an update. The software we create today is so much more complicated than the projects of thirty years ago, and the tools and libraries we build upon have simplified the task in ways that Brooks either dismissed or didn’t envision. In no particular order:

  • Modern IDEs

When Brooks originally wrote his essay, the advantages he saw in IDEs were slight: “Language-specific smart editors are developments not yet widely used in practice, but the most they promise is freedom from syntactic errors and simple semantic errors.” Though there are, of course, those who still swear by emacs or vim, modern IDEs are critically important tools that dramatically improve productivity: the ability to click through to a class or object definition, to instantly find every usage, or to step through code line by line (sure, you could do this with gdb, but seriously, what a pain in the tuchus), plus code completion, visual GUI layout, and inline indications of errors and warnings. It’s hard to overstate the impact of a great IDE on developer productivity.

  • Interpreted languages / dynamic recompiling

Being able to re-run code without having to re-compile and re-link dramatically reduces the time between iterations (e.g., Python vs. C, or JSP/Velocity/PHP vs. Java). The same is true of products like JRebel, which allow you to recompile only the code you care about, without having to relaunch between compilations.

  • Source control

Amazingly, source control doesn’t even make it into the essay. This isn’t entirely surprising, as revision control was in its infancy at the time the essay was written. Still, it’s hard to imagine modern software development without it.

  • Static code analysis

The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
– Tom Cargill

If you don’t currently use static code analysis, please go read this article by John Carmack. I’ll wait. The point is that, far from the “program verification” Brooks discusses, static analysis tools provide a pragmatic, focused, and extremely useful way of improving stability and productivity. The earlier a bug is caught, the less it costs to fix; finding bugs as part of an automated process, whether by turning on warnings-as-errors or by running a static analyzer, catches them sooner, shortens the time spent fixing them, and reduces the overall time to completion of a project.
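As a small, concrete illustration (a sketch, not a claim about any particular team’s setup): the Go compiler happily accepts the program below, but go vet, which many CI pipelines and IDEs run automatically, flags the mismatched format verbs before the code ever runs.

    package main

    import "fmt"

    func main() {
        count := 3
        name := "widgets"
        // This compiles cleanly, but go vet's printf check reports that %d is
        // paired with a string and %s with an int, a bug that would otherwise
        // only surface as garbled output at runtime.
        fmt.Printf("found %d of %s\n", name, count)
    }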

  • Better programming primitives / FOSS

A very senior Microsoft developer who moved to Google told me that Google works and thinks at a higher level of abstraction than Microsoft. “Google uses Bayesian filtering the way Microsoft uses the if statement,” he said. That’s true. Google also uses full-text-search-of-the-entire-Internet the way Microsoft uses little tables that list what error IDs correspond to which help text. Look at how Google does spell checking: it’s not based on dictionaries; it’s based on word usage statistics of the entire Internet, which is why Google knows how to correct my name, misspelled, and Microsoft Word doesn’t.
– Joel Spolsky
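As a toy illustration of the statistical approach Spolsky is describing (not Google’s actual pipeline, and with invented corpus counts), a Norvig-style corrector simply generates candidate edits of a word and keeps whichever candidate the corpus has seen most often:

    package main

    import "fmt"

    // correct returns the candidate spelling with the highest corpus count,
    // the core idea behind frequency-based (rather than dictionary-based)
    // spelling correction.
    func correct(word string, counts map[string]int) string {
        best, bestCount := word, counts[word]
        for _, cand := range edits1(word) {
            if c := counts[cand]; c > bestCount {
                best, bestCount = cand, c
            }
        }
        return best
    }

    // edits1 generates every string one deletion, substitution, or insertion
    // away from word (transpositions are omitted to keep the sketch short).
    func edits1(word string) []string {
        const letters = "abcdefghijklmnopqrstuvwxyz"
        var out []string
        for i := 0; i <= len(word); i++ {
            if i < len(word) {
                out = append(out, word[:i]+word[i+1:]) // deletion
                for _, c := range letters {
                    out = append(out, word[:i]+string(c)+word[i+1:]) // substitution
                }
            }
            for _, c := range letters {
                out = append(out, word[:i]+string(c)+word[i:]) // insertion
            }
        }
        return out
    }

    func main() {
        // A real system derives these counts from word usage across a huge corpus.
        counts := map[string]int{"spelling": 9000, "correction": 7000, "statistics": 5000}
        fmt.Println(correct("speling", counts)) // prints "spelling"
    }

The candidates here are only one edit away; a real corrector would also handle transpositions, multiple edits, and weighting by error likelihood.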

Many of the most common problems have been solved, commoditized, and made available for free as open source: web servers, data analysis tools (R, Hadoop), RDBMSs, NoSQL databases, languages, IDEs, source control, build systems, code libraries, Unicode handling, and on and on. Brooks described this under “buy vs. build”, but the quantity (huge), quality (generally as good as or better than commercially available packages), cost (free), and ease of acquisition (immediate) have changed the equation completely. Coders are able to focus entirely on the novel part of their problem, and generally don’t have to go through lengthy requisition and/or negotiation processes to get the components they need to do the job. The same goes for hardware – twenty years ago, web servers were owned by universities, corporate research centers, and governments. Ten years ago, you could rent and set one up reasonably quickly. Now, you can rent and provision a VM instantly through AWS, SoftLayer, Heroku, etc.

Along the same lines, “magic” convention over configuration is another way of avoiding much of the boilerplate that shouldn’t really be necessary when starting a new project.
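One small, concrete instance of the principle, sketched in Go (the Add function and the adder package are hypothetical): the test below needs no registration, configuration file, or runner setup, because the toolchain discovers it purely by naming convention.

    // adder_test.go: `go test` finds and runs this function solely because the
    // file name ends in _test.go and the function is named TestXxx and takes
    // a *testing.T. No configuration is involved.
    package adder

    import "testing"

    func TestAdd(t *testing.T) {
        if got := Add(2, 3); got != 5 {
            t.Fatalf("Add(2, 3) = %d, want 5", got)
        }
    }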

  • Improved tools for concurrency

As transistor density approaches its physical limits, hardware gains now come mostly from additional cores. Old-style concurrent programming was an extraordinarily painful, error-prone exercise that required a very specialized skillset. Modern tools, whether primitives built into languages like Go and Erlang or well-designed libraries in Java, make concurrent programs dramatically easier to write without demanding that same level of expertise.
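A minimal sketch of what those primitives buy you, in Go: fanning work out to concurrent workers and collecting the results takes a dozen lines, with no explicit threads or locks.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        jobs := []int{1, 2, 3, 4, 5}
        results := make(chan int, len(jobs)) // buffered so workers never block

        var wg sync.WaitGroup
        for _, j := range jobs {
            wg.Add(1)
            go func(n int) { // one goroutine per job
                defer wg.Done()
                results <- n * n // stand-in for real work
            }(j)
        }
        wg.Wait()
        close(results)

        for r := range results {
            fmt.Println(r)
        }
    }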

  • Better compilers

Being able to write custom embedded assembly used to be a key skill for eking out top performance in critical sections of your code. While this may not have disappeared entirely, compilers have improved so much that hand-tuned assembly is now almost guaranteed to hurt performance rather than help it.

  • StackOverflow

Another blind spot in Brooks’s essay is the lack of any mention of community. The ability to get the answer to any common question immediately, and the answer to any more obscure question quickly, is a complete game-changer. We can reformulate Linus’s Law to state that “given enough eyeballs, all difficult, obscure, or intractable questions are answerable.”

  • Better Hardware

More memory and faster processors reduce the latency between thought and experiment, increasing the number of iterations possible per time unit, the efficiency of time spent in flow, and overall productivity.

Getting to 10x

The central question in how to improve the software art centers, as it always has, on people. … The differences are not minor–they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude.
– Fred Brooks

The only thing that hasn’t yet been commoditized is the actual software engineer. Offshoring to developing countries was one attempt to do so, and sites like TopCoder and Rentacoder exist to reduce the friction of hiring short-term contractors with very specific skills. While each of these has its uses, the software engineer is the one element in the picture that remains relatively unchanged. But it doesn’t have to be this way.

At the end of his essay, Brooks talks about “great designers,” which I believe is what we would now call “architects.” Regardless of whether we’re talking about architects or coders, though, the point is valid – developing and nurturing talent is going to be a critical part of any attempt to improve overall developer efficiency. Great developers don’t just code faster or with fewer bugs, they also make better choices when it comes to tools, architecture, etc., improving the efficiency of their teams.

But how do we nurture greatness? DeMarco and Lister provide an interesting, amusing, and mostly ignored body of research that points the way to some counterintuitive results. According to their research, great engineers are created not by some unknowable mystical process, but in large part by their workplace environment. They go into great detail (cf. Peopleware, chapters 7-13) describing environmental problems, their effects, and how to overcome them. It is, of course, more complicated than that (distrust the single-cause fanatic), but every environmental improvement that reduces the distance between idea and implementation, decreases the time between iterations, reduces frustration, increases flow, and keeps butts in seats will tend to create better, more productive engineers. All of the innovations above fit these criteria.

In Closing

When you strip away the specifics, the basics of what Brooks is saying still make sense – to build a project with N features, there’s no way around specifying those N features in code. The difference is that what he meant by “feature” and “project” is vastly different from what we mean today – although there is still software built completely from scratch, our projects are generally both more complicated and far simpler.

Consider: let’s say that someone back in 1987 described a project to set up a server on the DARPA Internet that would receive requests using a specialized protocol on top of TCP, then serve rich dynamic content using a data format that encoded meta-data, images, videos, etc. These requests would come from, and responses be sent to, users with specialized document viewers (which would also have to be developed, including support for audio, video, structured text, hyperlinks, etc.) on a variety of microcomputers from different manufacturers with different operating systems. The project would have been a massive undertaking, but today the entire infrastructure already exists, and only the last piece, the actual feature, needs to be developed.
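For a sense of scale, here is roughly what that “last piece” looks like today, sketched with Go’s standard library: a working dynamic web server in a dozen lines, with the protocol, sockets, encodings, and clients all provided by existing infrastructure.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Everything below the feature itself (TCP, HTTP, HTML, the browsers
        // on the other end) already exists as commodity infrastructure.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "<h1>Hello, world</h1>")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }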

This isn’t just true of the internet – the tools, hardware and software innovations, and community exist for all but the most obscure applications (nuclear reactor OSs?). In terms of pre-existing libraries, most domains – mobile apps, statistical packages, 3D engines, rendering software, physics engines, etc. – also have significant infrastructure support.

It’s as trite as it is true to talk about how much the world has changed in the last quarter century. The tools available for software engineering have made the field almost unrecognizable, and while perhaps no one innovation has improved “productivity, reliability, or simplicity” by an order of magnitude, the sedimentary layering of tool on top of standard on top of infrastructure enhancement on top of hardware improvement on top of community on top of <you get the point> has created a new paradigm almost under our noses. Where once we were individuals and tribes working alone in the desert, we have built our Library of Alexandria; created (as we have become) a global brain to enhance our own knowledge; and forged tools of Damascus steel. No one silver bullet, perhaps, but thousands of silver flechettes, far more important and consequential than any individual improvement.

One thought on “Silver Bullets”

  1. Tagged with simply-awesome, wish-I’d-written-it, and required-reading-for-engineering-managers. There is an ever-growing “digital divide” between companies that understand and exploit this magic combination of people, processes, and tools, and those that cannot or will not. It’s why in my 40s I left a senior position at a stable company; the slow side is a well of endless frustration.
