I almost misled myself into thinking AC is blaming virtualization and SaaS/IaaS (software and infrastructure as a service) for creating such inefficiencies. Rather, he skips past the obvious environmental benefits of server consolidation (improved resource utilization) and service centralization (economies of scale) and instead builds on the consequences: code inefficiencies become more obvious when you’re no longer massively overprovisioning hardware. This is an opportunity as much as a challenge. Bad code mattered before, but not to the extent foreseen here: we’re scaling web applications to a far larger degree than ever before. There are more users, and inefficiencies are multiplied.
So it is great news that virtualization and on-demand infrastructure will let us focus more on code efficiency since, as Alistair (incidentally a veteran of application monitoring) points out, they expose the more granular economics of computing. These technologies are paving the way to greater infrastructure efficiency and, by forcing better utilization of hardware, putting more focus on the efficiency of the code that cohabits the infrastructure.
Increasing code efficiency has generally been unimportant except in edge cases. Stability and function have been greater concerns, while Moore’s law and incomplete costing of infrastructure have more than compensated for poor performance. Rapid application development platforms have proliferated on the ability of modern hardware to crunch “affordably” through multiple layers of abstraction.
What, me worry?
There’s definite potential for code to have an environmental impact. We already have an ecological disaster on our hands with the cast-off personal computing hardware of enterprises and consumers alike. Almost all of that computing power was wasted idling, never used, existing just to load Microsoft Office applications quickly. How do we achieve more efficient code? As more people rely on computing, as its costing becomes more accurate and granular, and as the barriers to entry for developers drop, we should witness an evolutionary process at work battling inefficiency, assuming:
- a large population of users
- competing applications
- rapid generation spans with modification
- a market that exerts selective pressure
While I think these evolutionary forces are already at work, the selective pressures have been weak and the environment overly abundant, leading to a Cambrian explosion of inefficiencies that will eventually show up in costs the market can react to, assuming the market has the freedom to do so. This is where intellectual property issues and the “one platform to rule them all” attitude may present a bit of a speedbump, but only that.
Alistair’s most important point is that proper costing of computing is essential: if we want to minimize environmental impact, we need to measure the efficiency of the work performed by applications and the true cost of the resources they consume. My conjecture is that an evolutionary process of anthropogenic artificial selection, automated or not, should optimize resource utilization. This rests on the premise of a competitive market, which I believe we are just starting to see in the world of software.
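To make “measuring the efficiency of work performed” concrete, here is a minimal sketch in Python. The cost rate is a made-up placeholder, not any real provider’s price; real figures would come from metered billing. It simply times two implementations of the same task and converts CPU seconds into an estimated cost:

```python
import time

# Hypothetical rate: dollars per CPU-second. A real figure would come from
# a cloud provider's metered pricing, not this placeholder.
COST_PER_CPU_SECOND = 0.00005

def cost_of(work, *args):
    """Run `work`, measure CPU time consumed, and estimate its resource cost."""
    start = time.process_time()
    result = work(*args)
    cpu_seconds = time.process_time() - start
    return result, cpu_seconds * COST_PER_CPU_SECOND

def slow_sum_squares(n):
    # Naive loop: does the job, but burns CPU in proportion to n.
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum_squares(n):
    # Closed form for 0^2 + 1^2 + ... + (n-1)^2: constant work regardless of n.
    return (n - 1) * n * (2 * n - 1) // 6

r1, c1 = cost_of(slow_sum_squares, 1_000_000)
r2, c2 = cost_of(fast_sum_squares, 1_000_000)
assert r1 == r2  # identical work performed...
print(c1, c2)    # ...at very different estimated cost
```

Once the same unit of work carries a price tag, the “selective pressure” above has something to act on: the cheaper implementation wins on the bill, not just on a benchmark chart.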
For now I’m much more concerned with how poorly conceived code can compromise privacy, and how restrictive code can limit our freedom of communication and innovation. But those are stories for another post.