The cost of commoditization of computing: infrastructure and software
Discipline, a new view of system architecture, and rigorous automation procedures are required to take advantage of Amazon EC2, Google AppEngine, and other infrastructure-as-a-service providers. Last week a customer commented on the rapid pace of Amazon's innovation. Yesterday Amazon announced a new way to generate revenue from idle capacity by letting users bid on a spot market for unused EC2 instances.
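To make the spot-market idea concrete, here is a minimal sketch of what placing a bid looks like programmatically using the boto3 Python library; the bid price, AMI ID, and instance type are placeholder values for illustration, not anything from Amazon's announcement.

```python
# A minimal, hypothetical sketch of bidding on the EC2 spot market with boto3.
# The bid price, AMI ID, and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",           # maximum hourly price (USD) we are willing to pay
    InstanceCount=1,
    Type="one-time",            # "persistent" would keep re-submitting the request
    LaunchSpecification={
        "ImageId": "ami-12345678",   # placeholder AMI
        "InstanceType": "m1.small",
    },
)

# Each request starts out "open" and is fulfilled only when the current
# spot price drops below our bid.
for request in response["SpotInstanceRequests"]:
    print(request["SpotInstanceRequestId"], request["State"])
```

The interesting design point is that capacity is no longer guaranteed: an instance bought this way can be reclaimed when the spot price rises above the bid, which reinforces the need for the discipline and automation discussed below.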
Discipline

When you own your own server farm, even if it is just a few back room servers, you can spread applications over your servers in a haphazard way and usually get away with some sloppiness in your process architecture. When you are dealing with someone else's infrastructure, a more disciplined approach is just about mandatory.
New view of system architecture

Both Google and Amazon have published papers on dealing with very large scale, geographically dispersed systems composed of many components, some of which are guaranteed to fail. These companies have also shared how, for some applications, they trade off immediate data consistency (between distributed data writers and readers) for scalability.
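To make that trade-off concrete, here is a toy Python sketch (not any vendor's API) in which writes go to a primary copy and propagate to a read replica asynchronously; a read issued immediately after a write may return stale data, which is the price paid for letting reads scale out to replicas.

```python
# Toy model of eventual consistency: writes hit the primary and are
# replicated in the background, so reads from the replica may briefly lag.
import threading
import time

class EventuallyConsistentStore:
    def __init__(self, replication_delay=0.5):
        self.primary = {}
        self.replica = {}
        self.replication_delay = replication_delay

    def write(self, key, value):
        self.primary[key] = value
        # Replicate asynchronously instead of blocking the writer.
        threading.Timer(self.replication_delay, self._replicate,
                        args=(key, value)).start()

    def _replicate(self, key, value):
        self.replica[key] = value

    def read(self, key):
        # Reads are served from the replica for scalability; they may be stale.
        return self.replica.get(key)

store = EventuallyConsistentStore()
store.write("greeting", "hello")
print(store.read("greeting"))   # likely None: replication has not caught up
time.sleep(1)
print(store.read("greeting"))   # "hello" once the replica is up to date
```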
Rigorous automation procedures

Large distributed fault tolerant systems involve two types of software development: core application development, and writing automation code for managing servers (starting/stopping, DNS management) and for partitioning an application over the available servers.
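As an example of the second kind of development, here is a hedged sketch using the boto3 Python library that stops and restarts an EC2 instance and then points a DNS record at its new public IP via Route 53; the instance ID, hosted zone ID, and record name are placeholders.

```python
# Illustrative automation sketch: restart an EC2 instance and repoint DNS.
# Instance ID, hosted zone ID, and record name are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
route53 = boto3.client("route53")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
HOSTED_ZONE_ID = "Z0000000000000"     # placeholder
RECORD_NAME = "app.example.com"       # placeholder

def restart_and_repoint():
    # Stop and restart the instance, waiting for each state transition.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    # The public IP usually changes after a stop/start, so update DNS.
    desc = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
    new_ip = desc["Reservations"][0]["Instances"][0]["PublicIpAddress"]

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": new_ip}],
                },
            }]
        },
    )

if __name__ == "__main__":
    restart_and_repoint()
```

Scripts like this one are what turn ad hoc server wrangling into a repeatable procedure, which is exactly the discipline that commodity infrastructure demands.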
Software

The open source and free software movements have contributed hugely to the commoditization of computing. Even setting aside operating system and networking infrastructure, almost all of the code in the systems that we create is written by other people, and it is usually open source software. I would argue that in addition to classical skills like algorithms and raw coding ability, a modern developer must be well versed in the available frameworks and libraries, understand their strengths and weaknesses, and know how to modify and use them.
Commoditization comes with cost

This is something that I am in the process of learning about. For most of my career I had dedicated hardware for development and deployment (starting in the 1980s, developing worldwide distributed systems, I had expensive dedicated servers). In the last 10 years my "world" consisted of spending a lot of effort building systems on top of open source frameworks (J2EE, Rails, etc.) and deploying to a leased server, or to a VPS for projects with few users. In that old scenario, deployment and administration were a very small part of the effort of creating a new system. Designing, implementing, and deploying to Amazon AWS, Google AppEngine, etc. has changed that, because of the effort it takes to properly "live" in a commodity computing world.